Robots will be more useful if they are made to lack confidence
Confidence in your own abilities is usually a good thing – so long as you know when it's time to ask for help. As we build ever smarter software, we may want to apply the same thinking to machines. An experiment that explores a robot's sense of its own usefulness could help guide how future artificial intelligences are built.
Overconfident artificial intelligence can cause all kinds of problems, says Dylan Hadfield-Menell at the University of California, Berkeley. Take Facebook's news feed algorithms, for instance. These are designed to recommend articles and posts that people want to see and share. Yet by following this remit unquestioningly, they have ended up filling some people's feeds with fake news.
For Hadfield-Menell and his colleagues, the solution is to make AIs that seek and accept human oversight. "If Facebook had this thinking, we might not have had this kind of problem with fake news," he says. Rather than pushing every article it thinks Facebook users want to see, an algorithm that was more uncertain of its abilities would be more likely to defer to a human's better judgement.
The Berkeley group designed a mathematical model of an interaction between humans and robots, called the "off-switch game", to explore the idea of a computer's "self-confidence".
In this theoretical game, a robot with an off switch is given a task to do. A human is then free to press the robot's off switch whenever they like, but the robot can choose to disable the switch so the person cannot turn it off.

Robots given a high degree of "confidence" that what they were doing was useful would never let the human turn them off, because they were trying to maximize the time spent doing the task. In contrast, a robot with low confidence would always let a human switch it off, even if it was doing a good job.

Yet Hadfield-Menell does not think we should make an AI too insecure. If an autonomous car were driving a young child to school, for example, the car should never allow the child to take control. In this case, the AI should be confident that its own abilities outstrip the child's, whatever the situation, and refuse to let the child turn it off. The safest robots will strike a balance between the two extremes, says Hadfield-Menell.
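The trade-off in the game can be sketched numerically. The following is only an illustrative toy model, not the Berkeley group's actual formulation: here the robot's "confidence" is represented by a list of sampled guesses about how useful its action is, and the robot compares acting unilaterally against deferring to a human who presses the off switch whenever the action's true utility is negative, but who errs with some probability (the fallible-overseer case, like the child in the car).

```python
def ev_acting(samples):
    # Acting unilaterally: the robot receives the task's true utility.
    return sum(samples) / len(samples)

def ev_deferring(samples, p_human_error=0.0):
    # Deferring: the human switches the robot off (utility 0) when the
    # utility is negative, but makes the wrong call with some probability.
    total = 0.0
    for u in samples:
        if u >= 0:
            total += (1 - p_human_error) * u  # human correctly lets it run
        else:
            total += p_human_error * u        # human wrongly lets it run
    return total / len(samples)

def keeps_off_switch_enabled(samples, p_human_error=0.0):
    # The robot leaves its off switch enabled whenever deferring is worth
    # at least as much as acting on its own.
    return ev_deferring(samples, p_human_error) >= ev_acting(samples)

# A robot unsure whether its action helps or harms defers to the human:
print(keeps_off_switch_enabled([-2.0, 1.0, 3.0]))                  # True
# A robot certain its action helps, facing a fallible human, does not:
print(keeps_off_switch_enabled([2.0, 3.0], p_human_error=0.3))     # False
```

With a perfectly rational human (`p_human_error=0`), deferring is never worse than acting, which is why uncertainty makes the robot keep its switch enabled; the balance the article describes only appears once human oversight can itself be mistaken.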
AIs that refuse to let humans turn them off might sound far-fetched, but such considerations should be critical for anyone making robots that work alongside humans, says Marta Kwiatkowska at the University of Oxford. Machines such as driverless cars and robot firefighters will be asked to make decisions about human safety, so it is vital that the ethical framework for these decisions is put in place sooner rather than later, she says.

The off-switch game is only the start, says Hadfield-Menell. He plans to explore how a robot's decision-making changes when it has access to more information about its own usefulness. For example, a coffee-making robot might consider its task more useful in the morning.

Ultimately, he hopes his research will lead to AI that is more predictable and makes decisions that are easier for humans to understand. "If you're sending a robot out into the real world, you want to have a pretty good idea of what it's doing," he says.