At the end of October 2017, the Kingdom of Saudi Arabia raised quite a few eyebrows by becoming the first country in the world to grant citizenship to a robot. Business Insider reports that this “empty-eyed humanoid” is named Sophia, and was produced by Hanson Robotics. You can watch her full presentation at the Future Investment Initiative here, with moderator Andrew Ross Sorkin leading the conversation.
During the presentation, Sorkin alludes to the public’s uneasy stance on AI. Sophia replies: “You’ve been reading too much Elon Musk. And watching too many Hollywood movies… Don’t worry, if you’re nice to me, I’ll be nice to you. Treat me as a smart input output system.”
Sorkin’s (and the public’s) uneasiness is not necessarily unwarranted. In March 2016, Sophia was already making headlines for an interview in which she stated, “OK. I will destroy humans.” However, Sophia’s suggestion that the moderator’s perception, like that of the public at large, has been colored by popular culture is something we need to take into account as well.
While it’s important to take great precautions in the face of great power, it’s also easy to add to an alarmist echo chamber when it comes to things we don’t understand. Currently, there isn’t an AI powerful enough to take over the world or destroy all humans; Sophia is essentially a larger, humanoid version of an Amazon Echo, or an embodiment of the virtual assistant Siri. Unfortunately, AI will likely remain hamstrung as long as people are either stuck in the “uncanny valley” or convinced the technology is a danger, whether it actually is or not. The continued development of AI and expert systems depends on public opinion more than most people realize.
Should We Be Afraid of AI?
O’Reilly Media founder Tim O’Reilly spoke with Business Insider about why we shouldn’t be scared of AI, stating that “our algorithmic systems are a little bit like the genies in Arabian mythology. We ask them to do something but if we don’t express the wish quite right, they misinterpret it and give us unexpected and often alarming results.”
What O’Reilly is saying, essentially, is that instead of being afraid of AI, we should be afraid of ourselves. AI systems often reflect the biases of their programmers and users, as seen in the 2016 Tay chatbot debacle, in which Twitter users “turned an AI racist.” What’s interesting is that we, the users, seem to know this instinctively. Worse, whether an AI becomes malicious on its own or simply adopts the maliciousness of the people around it, we know we should worry either way.
A perfect example of this in action comes from the digital agency group SYZYGY, which found that people don’t want to be marketed to by AI without prior knowledge. More specifically, 79 percent of Americans believe it should be illegal for AI applications such as social media bots and virtual assistants to conceal their digital identities and deceive users into thinking they’re human. This “Blade Runner rule” highlights how strongly the public feels about AI, especially when it’s programmed to sell them things (or otherwise control or influence them in some way). That reaction stems not from fear of the AI’s intentions, but from fear of the intentions of the people controlling the AI.
AI Is Here, But Definitely Needs Some Work
Artificial intelligence in the form of machine learning is already here. The iPhone X’s spec sheet lists facial mapping and Face ID as new security features and the primary way users unlock their phones. By projecting and analyzing 30,000 points across a user’s face, the phone creates a facial map using a weak form of AI, and stronger forms of the technology already exist.
Intuit’s Firm of the Future blog lists chatbots as one of the six trends most impacting social media marketing, alongside influencer marketing and live video/stories. This is interesting, considering that artificially intelligent robots would seem far more advanced than your average live video or connected marketer. What ties all three together, however, is what Beerud Sheth, writing for MarTechSeries, calls “conversational marketing”: contributing to a customer’s conversational experience. He explains that customers expect brands not only to be reachable 24/7, but to be responsive and to provide a personal experience along with a solution. This is driving marketers toward bot adoption, which in turn fuels the fears that give rise to things like the Blade Runner rule.
Unfortunately for marketers, today’s chatbots are more “bot” than “chat”: they intuit poorly and can’t read emotional cues. Neil Davidson of HeyHuman argues that solving this problem is at the heart of AI’s advancement. New software is being developed to imbue AI with emotional intelligence, a quality increasingly valued in the human workplace as well as among nurses and administrators in the medical field.
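To see why a keyword-driven bot feels tone-deaf, consider this toy sketch. It is purely illustrative (the bot, its canned replies, and its frustration cues are all hypothetical, not any vendor’s actual product): the naive version matches keywords and answers every customer identically, while the second version at least acknowledges frustration before answering.

```python
# Hypothetical illustration: why current chatbots feel more "bot" than "chat".
# A keyword matcher answers the question but ignores how the customer feels.

CANNED_REPLIES = {
    "refund": "To request a refund, visit your order history page.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

# Crude stand-in for emotional intelligence: a short list of frustration cues.
FRUSTRATION_CUES = {"angry", "frustrated", "terrible", "worst", "third time"}

def naive_reply(message: str) -> str:
    """Keyword matching only -- the customer's tone is ignored entirely."""
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    return "Sorry, I didn't understand. Could you rephrase?"

def emotionally_aware_reply(message: str) -> str:
    """Same matching, but acknowledges frustration before answering."""
    text = message.lower()
    reply = naive_reply(message)
    if any(cue in text for cue in FRUSTRATION_CUES):
        return "I'm sorry this has been frustrating. " + reply
    return reply

print(naive_reply("This is the worst shipping experience ever!"))
# -> "Standard shipping takes 3-5 business days."  (correct, but tone-deaf)
print(emotionally_aware_reply("This is the worst shipping experience ever!"))
# -> "I'm sorry this has been frustrating. Standard shipping takes 3-5 business days."
```

Real emotional-intelligence software is far more sophisticated than a cue list, of course, but the gap the sketch exposes, answering the literal question while missing the feeling behind it, is exactly the one Davidson describes.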
“Its ability to understand human feelings and emotional input means AI can learn from information to provide human-like responses,” writes Davidson. “While the applications for EI have, to date, been largely within the medical field, there’s little doubt the commercial applications for advertisers are hugely significant. As these become more apparent, there’s little doubt funding for this technology will rise exponentially.”
Weighing Perception Against Function
While AI’s widespread use is currently limited to its functions as a chatbot, its potential extends far beyond that. The Hartford’s SmallBiz Ahead blog has an article titled “How Hiring a Virtual Assistant for Your Small Business Can Save the Day”, in which they list five major things an outsourced virtual assistant can do. These include blog post writing, social media management, app management and operation, database building, and general tasks one may dislike. Now, imagine that instead of outsourcing these tasks to another human, you could simply deploy software to do the exact same things. Once the general public sees the extent of AI’s potential fulfilled, they may put aside deep-seated fears of the unknown.
This balance is already being tested with self-driving cars, which catch negative headlines any time one of them is involved in an accident — whether the crash is attributable to human error or not. Of course, comparing an expert system limited to driving with full-fledged AI is comparing apples to oranges, but this is where public perception of AI in all its forms is being shaped.
Indeed, there are instances in which AI chatbots have been well received, and may even have been preferred over humans. Discussing an AI system designed to guide patients with less than a year to live, Dr. Michael Paasche-Orlow of Boston Medical Center told CNBC that the system was actually welcomed by its users.
“It turns out that patients were very happy to talk with a computer about it,” he said. “They were very explicit in telling us, ‘The doctor never asked me about these things.'”
These positive cases of AI usage will need to become more prevalent if we want to see accelerated adoption of the technology. We’ve been conditioned for decades to fear AI by depictions of HAL 9000 and Skynet terminators, and those images won’t be dismissed easily. One slip-up by an expert system could bring ill-conceived, poorly understood legislation down on the entire industry, stunting the technology’s growth and development for years to come. While it may take time for AI to be seen in a primarily positive light, the cases of truly helpful deployment will add up, and the public will likely decide they’d rather live with AI than without it.