The problem with building advanced, capable robots like the Jetsons’ robot maid Rosie is that, as smart as robots can be when programmed with the proper algorithms (algorithms to play chess, say, or to integrate mathematical functions), they fail spectacularly at navigating the everyday situations, like washing dishes by hand, that humans tackle with ease.
The key to building an effective robot like Rosie does not lie in programming it with algorithms for every conceivable type of situation, an approach that would be inefficient and would require far too much data. Instead, our capable robot friend might develop the same way a human child does, learning from experience and imitating those around it to eventually gain some of the same abilities as human adults.
Have you ever seen a baby learning to walk? First, it must observe adults walking. Next, it sets out to imitate them, pushing itself to its feet. But it takes some trial and error, and lots of losing its balance and landing on its bottom, before it can consistently toddle around on two legs. Robots could be programmed to learn in the same way.
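To give a flavor of what learning by trial and error might look like in code, here is a toy sketch in Python. Everything in it is made up for illustration (the "stiffness settings," the sweet spot, the reward of staying upright); it is not any research team's actual method, just the bare bones of learning from repeated attempts.

```python
import random

# Toy "learning to stand" task: the learner tries leg-stiffness settings,
# falls over when the setting is far from an unknown sweet spot, and
# gradually favors settings that kept it upright.
ACTIONS = [0.2, 0.4, 0.6, 0.8]   # candidate stiffness settings (invented)
SWEET_SPOT = 0.6                  # unknown to the learner

value = {a: 0.0 for a in ACTIONS}   # running estimate of each action's worth
counts = {a: 0 for a in ACTIONS}

def try_to_stand(action):
    """Reward is 1 if the toddler stays up, 0 if it lands on its bottom."""
    chance_of_staying_up = max(0.0, 1.0 - 2.0 * abs(action - SWEET_SPOT))
    return 1.0 if random.random() < chance_of_staying_up else 0.0

for attempt in range(500):
    # Mostly repeat what has worked so far, but keep experimenting occasionally.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])
    reward = try_to_stand(action)
    counts[action] += 1
    # Nudge the estimate for this action toward the observed outcome.
    value[action] += (reward - value[action]) / counts[action]

print("Learned preferences:", {a: round(v, 2) for a, v in value.items()})
```

After a few hundred falls, the learner's preferences cluster around the setting that keeps it on its feet, which is roughly the toddler story in miniature.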
After all, if a drooling bundle of nerves and muscles can quickly learn how to interact with its environment and grow into a conscious, self-aware human being, surely a high-tech robot could do the same. Unfortunately, we can’t program a robot to gain knowledge exactly like a baby, because we don’t entirely understand how babies’ brains learn so quickly and efficiently.
In fact, scientists recently developed a “robot toddler,” called the iCub, to provide a model for a child’s development. Researchers wish to discover whether the iCub can learn the same way a human child does, which may shed light on how a child’s brain picks up and integrates information.
Although scientists created the iCub to learn about human development, it raises a question: Once we know more about how babies develop, could we make a more realistic robot baby, an iCub 2.0? This robot would integrate knowledge like a human child does, gaining the ability to navigate daily life.
In order to pick up as much data as possible about its surroundings, the iCub 2.0 will need advanced senses. We can simulate taste and smell by enabling the robot to analyze the molecular structure of the substances around it, and it can accomplish hearing and sight with sensors. What about touch?
Researchers at UC-Berkeley and Stanford University have developed two different types of flexible mechanical “skin,” which can differentiate between the feel of different materials. Each university’s skin works differently, but in both, a touch or pressure on the skin sends a detector a signal proportional to the touch’s strength.
For example, if a butterfly lands on the UC-Berkeley skin, its feet touch pressure-sensitive rubber, which sends the signal through germanium and silicon nanowires rolled onto a film. Their nanowire form lends flexibility to these normally brittle semiconductor materials.
The Stanford skin, on the other hand, relies on a polymer layer riddled with pyramid-shaped holes. If a bug lands on this skin, it depresses the polymer, which squishes into one of the holes and changes the polymer’s ability to hold electric charge. Either of these skins could give our robot baby the sense of touch.
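To make the idea of a signal proportional to pressure concrete, here is a toy sketch of how a robot might read such a skin. The raw values, calibration constants, and touch threshold are invented for illustration; they are not taken from either lab's design.

```python
# A generic skin patch reports a raw value that rises roughly in proportion
# to the pressure applied; we calibrate it and flag even light touches.
REST_READING = 512           # raw value with nothing touching (invented)
COUNTS_PER_KPA = 40.0        # calibration constant (invented)
TOUCH_THRESHOLD_KPA = 0.05   # roughly the footfall of a small insect (invented)

def pressure_from_reading(raw):
    """Turn a raw sensor count into an estimated pressure in kilopascals."""
    return max(0.0, (raw - REST_READING) / COUNTS_PER_KPA)

def detect_touches(readings):
    """Return (sample index, pressure) for every sample above the threshold."""
    touches = []
    for i, raw in enumerate(readings):
        kpa = pressure_from_reading(raw)
        if kpa >= TOUCH_THRESHOLD_KPA:
            touches.append((i, round(kpa, 3)))
    return touches

# A butterfly landing barely budges the signal; a fingertip swamps it.
samples = [512, 513, 515, 514, 512, 560, 640, 512]
print(detect_touches(samples))
```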
So we build a mechanical construct equipped with the machine equivalent of all five human senses and programmed to integrate information the same way human children do. Do we press the “on” switch? Not so fast.
Imagine a variation on hide-and-seek. You have to close your eyes and count to twenty while your opponent chooses one of three hiding spots. On the way to each hiding spot, a marker is balanced so that if your opponent travels straight to her chosen spot, she’ll knock over that spot’s marker on her way. When you open your eyes, you just have to look for the fallen marker to find your hidden opponent.
A devious opponent, however, might knock over one marker, then move to a different hiding spot once past the markers. This little deception sounds like a human trick, but researchers recently trained a robot to do the same. When the hiding robot judged that it was in conflict with its opponent and could influence that opponent through deceit, it could, and did, set up a false trail, essentially lying to its opponent.
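For a sense of how simple the false-trail trick really is, here is a toy sketch. The spot names and the seeker's one-track strategy are invented for illustration; this is not the researchers' actual algorithm, just the logic of knocking over one marker and hiding somewhere else.

```python
import random

# Three hiding spots, each with a marker balanced on its approach path.
SPOTS = ["left", "center", "right"]

def honest_hider():
    """Go straight to a spot, knocking over that spot's marker on the way."""
    spot = random.choice(SPOTS)
    return {"hiding_spot": spot, "knocked_marker": spot}

def deceptive_hider():
    """Knock over one marker, then slip past and hide somewhere else."""
    fake = random.choice(SPOTS)
    real = random.choice([s for s in SPOTS if s != fake])
    return {"hiding_spot": real, "knocked_marker": fake}

def seeker_guess(knocked_marker):
    """The seeker trusts the fallen marker and searches that spot first."""
    return knocked_marker

for hider in (honest_hider, deceptive_hider):
    escapes = 0
    for _ in range(1000):
        state = hider()
        if seeker_guess(state["knocked_marker"]) != state["hiding_spot"]:
            escapes += 1
    print(f"{hider.__name__}: evaded the first search {escapes} times out of 1000")
```

Against a seeker that trusts the markers, the honest hider is always found on the first try, while the deceptive one never is, which is exactly why the false trail pays off.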
However, a robot needn’t be programmed for deceit to develop lying abilities. In 2007, researchers found that robots programmed to learn from experience, like our robot baby, could spontaneously learn to lie. Before we fire up Junior, we might want to consider the implications of our actions—we don’t want to create Skynet.
My problem is that robotics researchers don’t seem to be worried about these implications. They blithely go about building lying robots without considering the fact that this is a terrifying idea. Okay, so it’s a bit of an exaggeration to go from lying robots to the robot uprising that will crush humanity, but the fact remains: robots that learn from experience can pick up human-like abilities like lying. If our robot baby develops like a human, might it also develop self-awareness? If so, we will have created mechanical life.
Is it right for us to play G-d in this way? I haven’t heard much debate about the moral quandary of humans creating sentient life. Granted, we can blame the lack of argument on the fact that we are nowhere near the technology to make an advanced robot baby. But if we could, should we? Would it be morally wrong? Could it put humanity at risk? Let’s start the debate.
-Sophie Bushwick is a Carletonian columnist