AI = A presentist.
Presentist: one who lives in the present.
We’re often too caught up in the past; ask any war veteran worth his salt and he’d concur. With the turn of the last decade, however, we as a community are a lot more focused on the now and where we’re going from here.
One word comes to mind when we dwell on the future in relation to the tools we’ve got at our disposal today: artificial intelligence. OK, that’s two words, but you get the point.
Then there’s the idea of self-actualized artificial intelligence, also known as AGI (artificial general intelligence). These machines would come with a sense of self-awareness and self-preservation, and here’s where things get a little scary (if you’re skeptical about AI).
Whether it’s crunching a million numbers in mere seconds or helping us deduce why a certain part of a solution has not worked, again in mere seconds, artificial intelligence has ensured that our jobs will never be the same again.
In short: AI helps us solve unsolvable problems, and in record time. It’s possibly AI’s biggest USP at the moment, and no one seems to think otherwise.
Is AI a cause for concern?
So we set out to dig beneath the surface of mediocrity and online definitions, to really understand whether, at some point in the near future, we’ll have to cue James Cameron’s OST from The Terminator. Calling Skynet.
“I believe that we should not be overly concerned about artificial general intelligence just yet”
- Andrew Yang
What’s Yang on about? Well, what he’s doing here is building on what many experts already know and believe: there doesn’t exist a defined path to artificial general intelligence, which means we’re still a few decades off from machines with a sense of self-awareness and self-preservation. Yang feels we’re a few discoveries and breakthroughs short of having to worry about self-aware machines.
However, right now? We do have machine learning algorithms that can solve extremely complex problems beyond human intelligence, and here’s where things get a little interesting. Such an algorithm can crack a problem that even three humans working together could not, yet outside of that one problem it’s a complete idiot, with the collective intelligence of a two-year-old at anything else.
What’s that golden mantra with AI?
Give AI data sets and watch as it weaves its speedy magic to devise brilliant correlations. Step anything outside of the data set, though, and your AI is dead in the water. It serves little to no purpose.
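That brittleness is easy to demonstrate. Here is a minimal sketch (made-up data, plain Python, no ML library) of a model fit on a narrow data set: inside the data’s range it looks clever, but far outside that range its answers are nonsense.

```python
# A toy illustration of narrow-AI brittleness: an ordinary
# least-squares line fit on a small, narrow data set.
# All numbers here are made up for the example.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# "Training data": the true relationship is y = x**2, but we only
# sample a narrow range (4..6) where a straight line fits well.
xs = [4.0, 4.5, 5.0, 5.5, 6.0]
ys = [x ** 2 for x in xs]
a, b = fit_line(xs, ys)

def predict(x):
    return a * x + b

inside = predict(5.0)     # near the data: close to the true 25
outside = predict(100.0)  # far from the data: true answer is 10000,
                          # but the line predicts under 1000
```

The model never learned the world, only the slice of it that it was shown; everything outside that slice is a guess dressed up as an answer.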
AI is taking our jobs, and we’ve got reason to fear, but it’s not what you think. In a recent interview with H3, Yang finally addressed what some fear as the biggest problem AI will cause in relation to our jobs.
“What I’m more concerned about is, dumb artificial intelligence getting rid of a lot of our jobs.”
- Andrew Yang
What’s a lot more worrying is that the rate at which dumb AI replaces human jobs is far higher than the rate at which smart AI does; the two aren’t even in the same ballpark.
What’s the risk with AI?
There are various hazards associated with really dumb AI. When Yang talks about dumb AI, he doesn’t mean dumb in the elementary sense. For instance, we could use AI to help cure cancer, but we could also use it to hack someone’s infrastructure and render it inoperable.
This gives rise to an even more important question: can we build competent AI machines without losing control over them?
Sam & his artificial worries!
Sam Harris, a neuroscientist and philosopher, negotiates this question. He draws parallels between famine and AI, and it makes sense. What he’s trying to say is this: imagine we somehow found ourselves in a situation where the entire world was out of food. People would starve, and that would lead to the end of humanity as we know it.
It’s not a pretty picture. Perhaps because famine has struck so many of us in the past and continues to do so in the present, we look down upon it and avoid going hungry like the plague. We don’t romanticize famine. However, we do romanticize science fiction, and more importantly, the end of humanity by machines.
The Terminator series, The Matrix, Ex Machina: these are just a few of the Hollywood blockbusters that make the idea easier for us to digest. Easier than famine, at least.
One of the things that worries Sam the most about AI development is our inability to marshal an appropriate emotional response to the dangers that lie ahead. This is possibly a tad too dystopian, but it is a concern that some experts voice, only to be drowned out by the benefits of AI.
We built machines. Superhuman machines that help us innovate, drive better business practices, and allow us to focus on the really important things in front of us. If it’s business, that means revenue generation. If it’s personal, that means more YOU time. Regardless of how you look at it, all great innovations are double-edged.
What seems clear is the underlying problem: we’ve failed to grapple with the implications of creating something that may (or may not) treat us the way we treat ants.
Currently engaged with several enterprises in the Americas, Europe-Middle East-Africa (EMEA), and Asia-Pacific regions, interface's Intelligent Virtual Assistants (IVAs) make every digital channel of an enterprise intelligent. With rich IVAs, an enterprise can leapfrog its customer and employee experience to a voice-first natural language interface. For more information, check out interface.ai