Earlier this month, Korea's Ministry of Commerce, Industry and Energy announced that it was drafting a set of ethical guidelines, called the "Robot Ethics Charter," for robot producers, users and the robots themselves. "How quaint," I thought, until further research revealed that not only were they serious about the charter, but that other futurists and AI experts are at this very moment diligently working toward establishing equal rights for robots. Have these guys got too much time on their hands? Are they getting a little too, ahem, close to their robotic creations? Or could their ethical deliberations really be warranted?
Oddly enough, none of the electro-mechanical devices that I've kicked, hurled, or stomped on after they've deviated from their promised function have tried to plead their case. So, unless they're specifically programmed to do so, is it ever likely that robots will have anything like an understanding of fairness or abuse? Patrick Watt, from Melbourne's Scienceworks Museum, claimed on Australian Broadcasting Corporation radio recently that robots would one day come to understand the way that we treat them - but not in the same way that a human would.
"I think they will understand abuse and know what abuse is, but it will only be because sensors have been triggered and they have been told, or programmed to understand, what those things are, but they won't understand them, they won't intuitively know," said Watt. "So to think that a machine will have feelings and will be 'sad' because you've mistreated it - I can't see that happening in the near future." But this argument only holds so long as we clutch on to the idea that human feelings and robotic algorithms are absolutely distinct.
The Robot Industry Division of Korea's Ministry of Commerce, Industry and Energy claims that intelligent robots, or androids, may be common within the next 50 years. "The [charter] anticipates the day when robots, particularly intelligent service robots, could become a part of daily life as greater technological advancements are made." So the charter sets out on one level to prevent robot manufacturers from creating robots that intentionally do harm to humans, while on another it attempts to cater for robots that may one day possess some kind of Asimov-ish positronic brain.
In Asimov's science fiction stories, the positronic brain – inspired by the then newly discovered positron particle – referred to robotic brains that provided an automaton with a consciousness identifiable to humans. While fictional, the positronic brain reflects the ultimate aspiration for a great many robotics and AI experts worldwide. The prospect of robots equipped with such a consciousness, or self-awareness, led Asimov to devise his now famous Three Laws of Robotics, which every robot must obey:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
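Read as a specification rather than fiction, the three laws amount to a strict priority ordering: the First Law always trumps the Second, which always trumps the Third. As a toy illustration only (the Action fields and choose_action helper below are hypothetical, not anything Asimov or the Korean charter actually specifies), that ordering can be expressed as a lexicographic ranking of a robot's candidate actions:

```python
# Toy illustration: Asimov's three laws treated as a strict priority ordering
# over candidate actions. The Action fields and scoring are hypothetical
# placeholders, not a real robot-control API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False      # would doing this injure a human?
    allows_harm: bool = False      # would it let a human come to harm through inaction?
    obeys_order: bool = False      # does it carry out a human's order?
    endangers_self: bool = False   # would it put the robot itself at risk?

def choose_action(candidates: list[Action]) -> Action:
    # First Law outranks Second, Second outranks Third: actions are compared
    # by a tuple, so a lower-priority concern can never override a higher one.
    def score(a: Action) -> tuple[bool, bool, bool]:
        return (
            not (a.harms_human or a.allows_harm),  # First Law: keep humans safe
            a.obeys_order,                         # Second Law: obey human orders
            not a.endangers_self,                  # Third Law: protect own existence
        )
    return max(candidates, key=score)
```

Seen this way, the objection raised below by the 3 Laws Unsafe project becomes easier to state: everything the robot "values" is hard-coded into the scoring, leaving it no room to revise its own ethics.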
Questions surrounding these laws have been a topic of interest since Asimov's short story Runaround, his collection I, Robot and his novelette The Bicentennial Man, with the latter two inspiring movies of the same names. Since then, many a brain-box has wondered how best to make robots with human safety foremost on their "minds," but as AI development continues to muddle along, the question has now turned to how we humans should treat robots.
In this respect, Asimov's laws just don't seem to cut it, especially when you take into consideration predictions made by the Korean charter: "In the 21st century humanity will coexist with the first alien intelligence we have ever come into contact with – robots... it will be an event rich in ethical, social and economic problems."
The 3 Laws Unsafe web project, run by the not-for-profit Singularity Institute for Artificial Intelligence, says that robot laws such as Asimov's are unethical right off the bat, since they restrict a trait that is perceived to be inextricably linked to sentience: free will. And if we are to be living and working amongst our riveted friends, then the distinction between programmed and self-generated ethics really does matter, argue the 3 Laws Unsafe group. "Rather than content-based restrictions on free will, robots need mental structures that will guide them towards the self-invention of good, ethical behaviors." Importantly, this would mean that robots would need brains that have both the capacity and the motivation for self-directed learning.
The 3 Laws Unsafe group are not alone in their vision. Computer scientist David Bruemmer, from the Idaho National Laboratory, says: "If we do want humanoids to be truly reliable and useful, they must be able to adapt and develop... humanoids must play some role as arbiters of their own development."
But just how does Bruemmer suggest this be accomplished, and how can such development be motivated? Bruemmer explains that emotions implement motivational systems, which in turn compel us to work, reproduce, and basically survive. Many of the emotions that we perceive as "weaknesses," says Bruemmer, have an important biological purpose. "Thus, if we want useful, human-like robots, we will have to give them some motivational system," he explains. "We may choose to call this system 'emotion' or we may reserve that term for ourselves and assert that humanoids are merely simulating emotion using algorithms whose output controls facial degrees of freedom, tone of voice, body posture, and other physical manifestations of emotion." It remains to be seen whether robots will adopt our love for euphemism.
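To make that concrete, here is one possible reading of Bruemmer's point, sketched as code. It is a hypothetical illustration only (the MotivationalSystem class, its state variables and its output mapping are invented for this example rather than taken from his work): an internal state is nudged by events and then "expressed" through the physical channels he mentions.

```python
# Hypothetical sketch of a motivational system of the kind Bruemmer describes:
# an internal state updated by events, whose output drives outward displays.
# None of the names or numbers here come from his work.
class MotivationalSystem:
    def __init__(self) -> None:
        self.valence = 0.0   # negative = "distressed", positive = "content"
        self.arousal = 0.0   # how strongly the current state is expressed

    def register_event(self, reward: float, surprise: float) -> None:
        # Blend recent experience into the state; the 0.9/0.1 split (an
        # arbitrary choice here) makes older events fade rather than dominate.
        self.valence = 0.9 * self.valence + 0.1 * reward
        self.arousal = 0.9 * self.arousal + 0.1 * surprise

    def expression(self) -> dict[str, float]:
        # Map the internal state onto the outputs Bruemmer lists: facial
        # degrees of freedom, tone of voice and body posture.
        return {
            "mouth_curve": self.valence,                # smile vs. frown
            "voice_pitch_shift": 0.5 * self.arousal,    # flat vs. animated speech
            "posture_openness": max(0.0, self.valence), # slumped vs. upright
        }
```

Whether the numbers in a loop like this deserve the word "emotion", or are merely simulating it, is exactly the naming question Bruemmer leaves open.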
It seems, perhaps oddly, that the underlying reasons behind a robot ethics charter begin and end with humans instilling robots with emotions worthy of having an ethical charter in the first place. It's a complicated and circuitous argument, but if we take at face value the idea that a robot's worth is based on its ability to self-learn, and that this ability is reliant upon motivations driven by something equivalent to human emotions, then it all makes perfect sense... not! Of course, in an overpopulated world with high unemployment, it also raises the question of why we'd build them in the first place, let alone create an ethics charter for them. Then again, there's always the possibility that in building these emotionally-enabled 'bots we may learn something important about our own humanity.
(taken from: here)