A robot is any automatically operated machine that replaces human effort by following a set of instructions. It can be controlled remotely or have its own built-in control system. An autonomous robot performs as a co-worker, aiding in tasks typically performed by humans, though it may differ from humans in appearance and manner of operation. Today, the global ratio of robots to humans in manufacturing is roughly 1 to 71, with over 3.4 million industrial robots in operation worldwide. But the rapid evolution of robotic technology toward greater autonomy and agency, and its increasing use in industry, has raised concerns among stakeholders and scientists.

As robots become more sophisticated, they are performing a wider range of tasks with less human involvement. Studies have suggested that further advancements might lead to robots being held accountable for unfortunate incidents, particularly those causing harm to civilians. Dr. Rael Dawtry, who led a study at the University of Essex’s Department of Psychology, raises crucial questions about how responsibility should be determined when accidents occur as robots take on riskier tasks with less human control.

Interestingly, the study, published in the Journal of Experimental Social Psychology, found that simply labeling machines as “autonomous robots” rather than “machines” increased perceptions of agency and blame.

The current tendency is to assign blame quickly

Blaming a robot for an accident might seem pointless, since robots don’t have feelings. A robot’s actions are controlled by its programming, which is the responsibility of humans such as its designers and users. Just as blame falls on the manufacturer when a car has a defect, it often falls on those who oversee the safety of autonomous vehicles.

Yet even though a robot’s behavior ultimately traces back to human programming, people still tend to blame the robot itself, especially in situations involving harm. This blame persists even when the robot’s choices aren’t transparent, or when the accident stems from human error or mechanical failure.

People assign this blame quickly, even though robots cannot act with intent. This suggests that people attribute a high level of agency to robots, and that perceived agency translates into blame when things go wrong, even though robots lack subjective experience.

Why We Tend to Blame Robots

Understanding why we blame robots involves looking at two things: agency and experience. In essence, we tend to see robots as having some human-like ability to think and act on their own (agency), and we sometimes also think they can feel things the way humans do (experience).

We’re pretty quick to think robots have agency, especially when they move around by themselves or seem human-like. This helps us understand their actions based on what we know about how people behave. If a robot does something unexpected or harmful, we’re more likely to see it as having agency and therefore being responsible for what it did.

When we think a robot could have made different choices to avoid causing harm, we’re more likely to blame it. This is because we see agency as involving the ability to foresee what might happen and choose different actions. So, the more agency we think something has, the more we’re likely to hold it accountable.

As for experience, it’s a bit less clear-cut. Sometimes we think robots can feel things, especially if they look human-like, but it’s not as strong as our tendency to see them as having agency. Still, considering both agency and experience can help us decide who’s to blame for what. If we see a robot as having experience, we might be more likely to blame it, especially if we think it should feel bad about what it did.

Are robots solely responsible for deaths?

Of course not!

The study found that the more advanced robot was seen as having more control than the less advanced one. When it came to assigning blame for mistakes, however, the robot’s sophistication didn’t matter much; who was blamed depended on the situation.

The researchers examined how people judge the actions of robots. They discovered that people tend to see robots as having more control and are more likely to blame them for mistakes. This blaming happens because people believe robots have more power to make decisions.

In addition, the researchers found that simply calling machines “autonomous robots” instead of just “machines” increases perceptions of their control and the blame they receive. This suggests that people automatically assume autonomous robots have more human-like qualities, such as the ability to make decisions.

Deciding who is responsible for accidents involving robots is a major question in ethics and law, especially for technologies like autonomous weapons. These findings show that as robots become more independent, people may hold them more accountable for their actions, or may come to see themselves as having less power than those machines.

Whom to blame, then?

The research suggests that people blame robots more than machines for accidents, especially when the robots are labeled as “autonomous.” Even when the robots and machines were described as having similar levels of experience, participants still leaned toward blaming the robots more, with an increase in blame of 39% (p < .05). This indicates that how sophisticated and autonomous a robot appears influences how much blame it receives.

However, despite this tendency, humans were consistently blamed more than robots: in accidents, humans received 63% more blame than robots did (p < .05). This raises questions about how responsibility should be assigned in situations involving autonomous machines.

Ultimately, this research is less about arriving at a definitive answer to the question of whom to blame in incidents involving autonomous machines, and more about laying out the complexity of assigning responsibility in such scenarios.

References:

  • https://www.sciencedirect.com/science/article/pii/S0022103123001397
  • https://www.euractiv.com/section/transport/opinion/whos-to-blame-when-your-autonomous-car-kills-someone/
  • https://www.bmj.com/content/363/bmj.k4791
  • https://apnews.com/article/technology-business-traffic-government-and-politics-a16c1aba671f10a5a00ad8155867ac92