A fair cop for robots

10 May, 2022

In most cases, humans are held accountable in the workplace, whether for quality standards, health & safety practice, the treatment of other members of staff and so on. Decisions, and the practices that follow from them, therefore need to be considered carefully and professionally within current legal frameworks. Interestingly, as the use of robots becomes more prevalent across professional spheres, a question that has hitherto received little serious attention arises: what level of accountability should our high-tech friends have for their actions during their allocated hours of work?

According to new research from Durham University Business School, we need to create specific accountability guidelines to ensure that the use of AI robots remains ethical. The paper sets out a new framework for ensuring that organisations which employ AI robots have accountability guidelines in place. To develop the framework, Zsofia Toth, associate professor in marketing and management at Durham University Business School, together with her colleagues, professors Robert Caruana, Thorsten Gruber and Claudia Loebbecke, reviewed the uses of AI robots in different professional settings from an ethical perspective. The researchers then developed four clusters of accountability to help identify which actors are accountable for an AI robot's actions. The clusters revolve around the ethical categories of illegal, immoral, permissible and supererogatory, as outlined in normative business ethics.

Supererogatory actions represent a positive extra mile beyond what is expected morally, while the other three categories are either neutral or negative. Illegal actions are those that breach laws and regulations; immoral actions are those that meet only the bare minimum the law requires; and permissible actions are those that require no justification of their fairness or appropriateness. Humans can set boundaries on what AI robots can and should learn and unlearn (for instance, to reduce or eliminate racial or gender bias) and on the types of decision they can make without human involvement (for instance, a self-driving car in an emergency).
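
To make the four categories concrete, here is a minimal sketch of how a designer might encode them as boundaries on autonomous action. The category names come from the research; the enum, the function name and the one-line policy are illustrative assumptions, not anything the paper specifies:

```python
from enum import Enum, auto

class EthicalCategory(Enum):
    """The four normative categories named in the Durham framework."""
    ILLEGAL = auto()         # breaches laws or regulations
    IMMORAL = auto()         # meets only the bare legal minimum
    PERMISSIBLE = auto()     # needs no justification of fairness
    SUPEREROGATORY = auto()  # goes beyond what morality requires

def may_act_without_human(category: EthicalCategory) -> bool:
    """Hypothetical guard on autonomous action. The policy here
    (only permissible actions proceed unsupervised) is an assumption
    made for illustration, not a rule stated in the paper."""
    return category is EthicalCategory.PERMISSIBLE

# Example: a routine, permissible task may proceed on its own;
# anything supererogatory is referred back to a human operator.
assert may_act_without_human(EthicalCategory.PERMISSIBLE)
assert not may_act_without_human(EthicalCategory.SUPEREROGATORY)
```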

Professor Toth says that in a normal working environment, if a person makes an error or commits a wrongdoing, it is obvious in most circumstances who is accountable: either that person specifically or the wider organisation. "However, when you bring AI robots into the mix, this becomes much more difficult to understand, as does how such incidents could be prevented. Hence this framework offers ways to ensure that there is more clarity about responsibilities from the beginning of an AI robot's use."

Each cluster in the framework has actors who are responsible for the AI robot's actions. In the first cluster, 'professional norms' – where AI robots are used for small, everyday tasks such as heating or cleaning – robot design experts and customers take most of the responsibility for the appropriate use of the robots. In the second cluster, 'business responsibility' – where AI robots are used for difficult but basic tasks, such as mining or agriculture – a wider group of organisations bears the brunt of the responsibility. In the third cluster, 'inter-institutional normativity' – where AI may make decisions with potentially major consequences, such as in healthcare management or crime-fighting – governmental and regulatory bodies should be increasingly involved in agreeing specific guidelines.

In the fourth cluster, 'supra-territorial regulations' – where AI is used on a global level, such as in the military or in driverless cars – a wide range of governmental bodies, regulators, firms and experts hold accountability, which is therefore highly dispersed. This does not imply that AI robots 'usurp' the role of ethical human decision-making, but it does become increasingly complex to attribute the outcomes of their use to specific individuals or organisations, so these cases deserve special attention.
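
Read together, the four clusters amount to a lookup from the setting in which a robot operates to the actors who answer for it. The sketch below tabulates the clusters as described above; the data structure and the `who_is_accountable` helper are hypothetical conveniences for illustration, not the paper's own notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountabilityCluster:
    name: str
    example_settings: tuple[str, ...]
    accountable_actors: tuple[str, ...]

# The four clusters as described in the article; the tabulation is
# an illustrative reading of the framework, not its formal content.
CLUSTERS = (
    AccountabilityCluster(
        "professional norms",
        ("heating", "cleaning"),
        ("robot design experts", "customers"),
    ),
    AccountabilityCluster(
        "business responsibility",
        ("mining", "agriculture"),
        ("the employing organisations",),
    ),
    AccountabilityCluster(
        "inter-institutional normativity",
        ("healthcare management", "crime-fighting"),
        ("governmental and regulatory bodies",),
    ),
    AccountabilityCluster(
        "supra-territorial regulations",
        ("military use", "driverless cars"),
        ("governments", "regulators", "firms", "experts"),
    ),
)

def who_is_accountable(setting: str) -> tuple[str, ...]:
    """Return the accountable actors for a given setting; raises
    KeyError for settings the sketch does not list."""
    for cluster in CLUSTERS:
        if setting in cluster.example_settings:
            return cluster.accountable_actors
    raise KeyError(setting)

# Example: crime-fighting falls into the third cluster, so
# governmental and regulatory bodies carry the accountability.
print(who_is_accountable("crime-fighting"))
```

Note how accountability widens from cluster to cluster: the tuple of actors grows as the stakes and the geographic reach of the robot's decisions grow, which is the dispersal the researchers flag as deserving special attention.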

As the researchers say, accountability for the actions of robots has previously been something of a grey area. However, frameworks such as this could help to reduce the number of ethically problematic cases related to the use of AI robots.

Ed Holden, Editor



