An animal charity in San Francisco has become a target of global criticism and local abuse, all because of the behaviour of its most recent recruit: an R2-D2-style robot.
The 1.5-metre-tall Knightscope security robot trundled around neighbouring car parks and alleyways recording video, stopping to say hello to passers-by. But it has also been accused of harassing homeless people living in nearby encampments, who saw visits by the winking, bleeping robot as surveillance by an intruder feeding information to the police.
A social media storm led to calls for retribution, violence and vandalism against the charity. The Knightscope itself was repeatedly tipped over, covered with a tarpaulin, and smeared with barbecue sauce and even faeces. It’s not the first time there have been problems when electronic brains appear in the real world: a similar robot in another US city inadvertently knocked over a toddler, and others have been tipped over by disgruntled office workers.
It’s an example of how robot technologies can provoke extreme reactions. Security was a real issue for the charity: there had been a string of break-ins, incidents of vandalism and evidence of hard drug use that was making staff and visitors feel unsafe, and it seems reasonable that the charity should try to make people feel safer. No-one intended to interfere with the lives of homeless people, some of whom found the robot endearing.
New forms of Artificial Intelligence
The introduction of new forms of Artificial Intelligence into people’s everyday lives is one of the biggest challenges facing societies. How far should we let robots walk the same streets, live in our homes, look after relatives, perform surgery, drive our trains and cars? What kinds of roles are acceptable and what are not? And most importantly, who sets the rules for how they behave, and how they decide on priorities?
The potential of robot-enhanced living – in everything from transport to health and social care – and the huge commercial opportunities involved mean we will become more accepting of robots as they become a more familiar and inescapable part of our lives. But that central issue, of what kinds of robots we want and where, needs dealing with now. The debate needs to be shaped as much by ordinary members of the public as by technologists and engineers. We all have a stake in deciding what makes a ‘good’ robot.
The British Standards Institution published the first standard for robot ethics, BS 8611, in 2016. But that’s just the start. As part of its work with the BSI’s UK Robot Ethics Group, Cranfield University wants the views of the public on the future of robots in our lives (www.cranfield.ac.uk/robotethics) to develop new standards and inform the work of developers and manufacturers.
Our relationship to AI and to robots is messy and confused. On the one hand there are attacks on robots at any suggestion of intrusion. On the other, we are increasingly attached emotionally to our personal technologies – our personalised phones and tablets – and make pets of robot toys and anything that shows signs of engagement, no matter how limited and fake. We need to be clear-sighted about the future of human-robot relationships, and that means debating now, before the sheer scale of consumer opportunities and cost savings makes the decisions for us.
Find out more about our new Robotics MSc course: https://www.cranfield.ac.uk/courses/taught/robotics
Read more about our work in this area: www.cranfield.ac.uk/robotethics