Someday soon, we might all rely on robots to get through our daily routine.
“I believe that we will be in a world in which we will have an A.I. sidekick, if you will,” Dor Skuler, CEO and co-founder of Intuition Robotics, tells Urbo.
“It will be there for us and help us and give us information,” he says. “It will help us overcome challenges in our lives. That doesn’t mean it takes the place of significant and important human connection, but it does mean we have assistance [with] meeting our goals for the type of life we want to live.”
We’re picturing an adorable WALL-E-like robot following us around, taking notes and making suggestions.
It might sound like science fiction, but it’s not too far removed from reality. Recently, Intuition Robotics’ ElliQ, a “social robot” designed to help older adults stay active, was named the Best of Innovation winner in the Smart Home category at the 2018 Consumer Electronics Show. It sounds a lot like Skuler’s “robot companion” of the future; by using state-of-the-art machine learning techniques, it gradually becomes a personalized assistant for its owner.
“You don’t need to approach it and ask for something, like you would for an [Amazon] Alexa,” Skuler explains. “The system understands the context of what’s happening at home, and then ElliQ will simply wake up and say ‘Hey! Would you like to listen to some music?’ or, ‘It’s a nice day, would you like to go for a walk?’”
“Or it will remind you to take your medicine or to drink some water or suggest an interesting TED Talk or something of that nature. That really changes the paradigm.”
ElliQ is designed to communicate with humans in a variety of ways. It speaks, but it also has a removable screen that shows pictures, videos, and other types of information. By gradually learning about its user, it personalizes itself; the robot might prompt certain users to become more physically active, or it might show captions for users who are hard of hearing.
“It doesn’t look like any other robot out there,” Skuler says. “It doesn’t try to look humanoid, doesn’t try to recreate what we imagine robots to be from science fiction movies.”
These types of innovative robotics applications could conceivably change the world for the better. Picture a world where older people never have to ask their young relatives to show them how to pull up a video on YouTube. Consider how much easier a nurse’s job might be if he never has to remind patients to take their medications.
Major advances in robotics have resulted in amazing new technologies over the past decade, both in terms of artificial intelligence and mechanics. Inventors like Elon Musk envision a world in which robots move so fast, “you’ll need a strobe light to see [them],” while Skuler believes that “social robotics” will change the way we communicate.
But those advances carry potentially serious consequences. Some industry analysts believe that robots also pose a threat to human existence—or at least to current societal norms. Musk has referred to artificial intelligence as “our biggest existential threat,” while physicist Stephen Hawking told WIRED that robots could become “a new form of life that will outperform humans.”
So, which is it? Will we live comfortably with our artificially intelligent robot companions for centuries to come, or will our robotic overlords demolish humanity?
The answer is, of course, somewhat complicated. For starters…
Machine learning has incredible potential, but bias is still a problem.
At its core, a robot is simply a machine that carries out a series of actions after receiving a command. The potential threat doesn’t come from the robot itself, but from the source of the command. That’s where machine learning comes in: many of the most innovative robotics projects rely on computers that learn without explicit programming.
Over time, these computers develop their own instructions as they process information, but that “self-improvement” could become problematic if it’s not in humans’ best interest.
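To make “learning without explicit programming” concrete, here’s a minimal sketch (our own toy example, not any real robotics system): a few lines of Python that infer the rule behind some example data by repeatedly nudging a single parameter, without the rule ever being written into the program.

```python
# Minimal sketch: a program that "learns" a rule from examples
# instead of being explicitly programmed with it.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # hidden rule: y = 2 * x

w = 0.0  # the model's single adjustable parameter
for _ in range(100):             # repeated exposure to the data
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= 0.01 * error * x    # nudge w to shrink the error

print(round(w, 2))  # the program infers w ≈ 2.0 on its own
```

The program was never told “multiply by two”; it developed that instruction itself from the data—which is exactly why the quality and bias of the data matter so much.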
“Computers only work with the information you give them,” a California-based robotics engineer, who asked to remain anonymous given the sensitive nature of this topic, tells Urbo. “Say you use machine learning to tell police departments where to police—and some departments are doing that right now.”
“The program tells them where they should be doing their patrols, but to create that information, you’re telling the program where police have been patrolling. You’re not giving it an unbiased, unfiltered list of all of the crimes in each neighborhood, you’re specifically giving it arrest information.”
Such a program might actually result in biased policing, potentially threatening humans’ right to a presumption of innocence. If this sounds like something out of Minority Report, consider this: It’s happening. In 2017, Time reported on Chicago’s algorithm-based methods for determining whether suspects might be threats.
“It’s potentially more dangerous than a typical bias, since people assume that computers are ‘smarter’ than humans,” the engineer says. “They’re less willing to see bias when it’s coming from a machine.”
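The engineer’s point about arrest data can be shown with a toy simulation (invented numbers, not any department’s actual system): two neighborhoods with identical real crime rates, but a patrol history skewed toward one of them. A naive model trained on the resulting arrest counts recommends patrolling the already over-patrolled neighborhood.

```python
import random

random.seed(0)

# Two neighborhoods with IDENTICAL true crime rates.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}

# But historical patrols were skewed: 80% went to neighborhood A.
patrols = {"A": 800, "B": 200}

# Arrests can only happen where police patrol, so the "training data"
# reflects patrol history, not actual crime.
arrests = {
    hood: sum(random.random() < TRUE_CRIME_RATE[hood] for _ in range(n))
    for hood, n in patrols.items()
}

# A naive "predictive" model: patrol wherever past arrests were highest.
recommended = max(arrests, key=arrests.get)

print(arrests)      # A shows ~4x the arrests of B, purely from patrol bias
print(recommended)  # the model sends even more patrols to A
```

Feed the model’s recommendation back in as next year’s patrol allocation and the skew compounds—the feedback loop the engineer describes.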
As robots become more proactive, the threat becomes greater.
Over the past decade, weaponized drones—unmanned flying vehicles, typically piloted by a human in a remote location—have changed warfare. Sometime in the near future, drones might not need the human pilot.
Once again, this isn’t science fiction. The United States military already has an autonomous drone called Perdix, which typically operates in a “swarm” of eight robots. The robots communicate with each other to map roads, locate potential targets, and handle other relatively complex tasks after receiving simple orders from a human. Tell them to explore a road, and they’ll figure out the best way to accomplish their mission.
Perdix isn’t a weapon, but the Pentagon is spending upwards of $3 billion per year on autonomous weapon systems. Give those weapons a nefarious mission such as “eliminate as many human targets as possible,” and they’ll figure out the most efficient way to do that.
Yes, we know that this is starting to sound like a Black Mirror episode.
In the near future, robots could add to economic instability.
Type “will robots” into Google, and one of the top results is “will robots take my job?” Granted, that’s exactly what we were going to search for—credit to Google’s machine learning search algorithms—but it also shows that many people are anxious about the robotic workforce. In fact, there’s even a website called Will Robots Take My Job? that attempts to assign an “automation risk level” to various careers. Writers, by the way, are fairly safe.
Skuler says that, yes, machines will take some jobs. The good news: By and large, they aren’t jobs that most people want to do.
“It’s kind of like the industrial revolution. We had a really big change that was good for humanity overall. For people who were working in the shoe factory in the beginning of the previous century, however, it probably didn’t feel all that great.”
But Skuler says that those machines won’t replace, say, schoolteachers and receptionists; they’ll simply eliminate the boring parts of their jobs, allowing those professionals to focus on human connections.
“I think if you look at the potential of systems that are proactive, highly personalized, and also have a personality, one can imagine [robots] with kids,” he says. “One can imagine them in an office reception desk, and [there are] many, many other potentials as well.”
As for people in manufacturing and other industries where automation is an immediate concern, they probably need to prepare for some growing pains.
“[Robots] can just do repetitive tasks faster and more efficiently than us,” Skuler says. “Hopefully, that means that we won’t be weighed down by doing repetitive tasks, and [we’ll] do things that are more interesting.”
As large swaths of the workforce surrender their day jobs to automation, humans will need to do what we do best: adapt. That might mean introducing a universal basic income, shortening the work week, or finding new types of jobs—or, more likely, some combination of all three.
Ultimately, robotics is like any other powerful tool: It needs regulation.
As robots become more and more capable, we’ll need to put rules in place to ensure that they’re working for humanity’s benefit. Simple, right?
Entrepreneurs like Musk don’t have much of a financial incentive to push for regulation, but they’re doing just that. That’s because artificial intelligence is, well, terrifying, and it doesn’t always work in obvious ways. As computers become more capable and autonomous, regulations might need to be updated constantly.
Before you run out to buy your robot insurance, however, remember that there’s always some degree of anxiety surrounding exciting new technologies.
For his part, Skuler doesn’t believe that we’re headed for a dystopian future. While he says that some regulation will help to point robotic applications in the right direction, he believes that intelligent, physically capable robots will help us create a better future. Robots will take the jobs that humans don’t want to do, along with the jobs that they can perform better than humans, but they won’t replace human interaction—as long as humans set the right guidelines.
“I think we should look at [regulating] weaponized drones,” Skuler says. “There are scary videos out there of people talking about those things…[but] I don’t think it’s different from any other times we invented very powerful technology in mankind’s history. And it’s up to us to decide who we want to use it and how we regulate it.”