On July 15, 2007, in Uncategorized
Given the acceleration of innovation in robotics, South Korean and European scientists think it is not too soon to start considering the risks, responsibilities, and ethics relating to increasingly intelligent and autonomous robots. Already, on the premise that robots will intrude much deeper into our lives, European researchers are working to make them sensitive to human emotions.
Robot Ethics

Source article

As increasingly intelligent robots encroach on human spheres of endeavor, issues not only of risk and responsibility but also of ethics will have to be addressed. Scientists in South Korea and Europe have begun formal efforts to do so.

Who, asks scientist Dylan Evans, writing for BBC News, is responsible if an intelligent robot injures someone? Is it the designer, the user, or the robot itself? Software robots – basically, just complicated computer programs – already make important financial decisions. Whose fault is it if such a robot, of the kind commonly used by investors today, makes a bad investment?

What if robots become sentient, feel pain, and develop emotions? Should they be allowed to marry humans? Should they be allowed to own property? “These questions might sound far-fetched,” writes Evans, “but debates over animal rights would have seemed equally far-fetched to many people just a few decades ago.”

Emotion Robots

Source article

Roboticists, developmental psychologists, and neuroscientists from six European countries have started a three-year project to help robots interact emotionally with humans. “Feelix Growing,” as the project is called, is building a series of robots that “learn from humans and respond in a socially and emotionally appropriate manner,” as a project leader described it to BBC News.

Like babies taking cues from those around them, the robots take sensory input from the humans they are interacting with and then adapt their behavior accordingly. The hardware is simple, including off-the-shelf robots, though the team will add expressive capabilities to robot faces. The key is the software that learns from the tactile and emotional feedback the robots receive from humans, such as kind words, a pat on the back, or helping the robot out of a jam.

The feedback comes via simple cameras, microphones, contact sensors, and distance sensors, and is analyzed by artificial neural network software to detect facial expressions and motion patterns. The project is focusing on emotions such as anger, happiness, and loneliness — emotions that should alert the robot to an appropriate response.
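The article names the technique only in broad strokes: sensor readings analyzed by artificial neural network software to recognize emotional cues. As a rough illustration of that idea, here is a minimal single-layer network (softmax classifier) in Python that maps toy sensor features to the three emotions mentioned above. The feature names, training data, and all numbers are invented for illustration; the project's actual software is far more sophisticated.

```python
# Illustrative sketch only: classify coarse "emotion" labels from toy
# sensor features with a tiny single-layer neural network (softmax
# regression). Features, labels, and data are invented, not from the
# Feelix Growing project.
import math

EMOTIONS = ["anger", "happiness", "loneliness"]

# Toy feature vectors: [voice_volume, touch_pressure, person_distance]
DATA = [
    ([0.9, 0.8, 0.2], "anger"),
    ([0.4, 0.3, 0.3], "happiness"),
    ([0.1, 0.0, 0.9], "loneliness"),
    ([0.8, 0.9, 0.1], "anger"),
    ([0.5, 0.2, 0.4], "happiness"),
    ([0.0, 0.1, 1.0], "loneliness"),
]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# One weight row and one bias per emotion class.
W = [[0.0] * 3 for _ in EMOTIONS]
b = [0.0] * len(EMOTIONS)

def scores(x):
    return [sum(w * xi for w, xi in zip(W[k], x)) + b[k]
            for k in range(len(EMOTIONS))]

# Train with per-example gradient descent on the cross-entropy loss.
lr = 0.5
for epoch in range(500):
    for x, label in DATA:
        p = softmax(scores(x))
        y = [1.0 if EMOTIONS[k] == label else 0.0
             for k in range(len(EMOTIONS))]
        for k in range(len(EMOTIONS)):
            g = p[k] - y[k]  # gradient of cross-entropy w.r.t. score k
            for j in range(len(x)):
                W[k][j] -= lr * g * x[j]
            b[k] -= lr * g

def classify(x):
    """Return the most probable emotion label for a feature vector."""
    p = softmax(scores(x))
    return EMOTIONS[p.index(max(p))]
```

On this toy data, loud voice and firm touch at close range score as anger, while low input at a distance scores as loneliness; the real system would learn from far richer camera and microphone streams.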

One of the first robots built is already exhibiting the kind of imprinted behavior found among newborn birds and some mammals. A researcher said: “They get attached to the first object they see when born. It is usually the mother and that’s what makes them follow the mother around. We have a prototype of a robot that follows people around and can adapt to the way humans interact with it. It follows closer or further away depending on how the human feels about it.”
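The following behavior described above can be sketched as a simple adaptive controller: the robot keeps a preferred following distance and nudges it up or down based on the human's reaction. This is a minimal sketch under invented assumptions (the class, its parameters, and the proportional controller are not from the project).

```python
# Minimal sketch, assuming invented parameters: a robot that follows a
# person and adapts its preferred distance to the human's feedback.

class FollowerRobot:
    def __init__(self, preferred_distance=1.0, step=0.1,
                 min_d=0.3, max_d=3.0):
        self.preferred_distance = preferred_distance  # meters
        self.step = step        # how fast feedback shifts the preference
        self.min_d = min_d      # never crowd the person
        self.max_d = max_d      # never lose them entirely

    def feedback(self, reaction):
        """reaction: +1 if the human welcomes closeness, -1 if they back away."""
        self.preferred_distance -= reaction * self.step
        # Clamp to the allowed range.
        self.preferred_distance = max(self.min_d,
                                      min(self.max_d, self.preferred_distance))

    def move_command(self, current_distance):
        """Speed toward the human (+) or away (-): a proportional controller."""
        error = current_distance - self.preferred_distance
        return 0.5 * error
```

Repeated negative reactions push the preferred distance outward, so the same measured distance now produces a retreat command; positive reactions let the robot follow closer, mirroring the imprinting-like adaptation the researcher describes.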

At the end of the project, two robots will be built that integrate the different capabilities of all the machines being developed by the project’s academic and commercial partners across Europe.

