What happens when the robots get it wrong?

As new technologies and automation play an ever more important role in the legal world, what are the risks in relation to negligence? This article forms part of our special report into the future of law. Open the report here.

Technological progress always carries a sense that change is happening at an unprecedented rate. In the case of automation, that sense is unusually accurate. As robots and algorithms take responsibility for more and more daily tasks, lawyers, policymakers and software engineers are having to give increasing attention to the implications of these developments.

A significant reason for the scrutiny, and ever a key question in law, is: what happens when things go wrong? Automation and technology bring with them uncharted understandings of human agency, and with these new understandings we are beginning to see the potential for new kinds of negligence.


Unlimited liability

A central concern in understanding negligence is where ultimate fault might be said to reside. Much thought is already being given to the idea that the buck could stop with the developer of the program. While it might seem abstract to consider that a programmer could be held liable for unintended consequences within a line of code, the thinking perhaps rests only on the intangible nature of code and software. We readily expect tangible goods and systems, such as cars or electronics, to be serviced and in full working order without understanding their inner workings ourselves. So could developers be held responsible for damage suffered by a law firm using their technology?

Karen Yeung, of King’s College London, is sceptical. ‘This is a question about proximity, and causation,’ she says. ‘My gut feeling at this stage is that it is unlikely, but that depends upon whether the damage meets the test of reasonable “foreseeability” in negligence—as with a firm being accused of overreliance on automation, it would depend very much on the facts.’

Artificial intelligence (AI), however, is renowned for putting the subject of foreseeability onto very shaky ground indeed. The power of machine learning, and specifically its potential to work through near-infinite scenarios and points of removal, creates problems in defining what precisely constitutes ‘foreseeable’. A robot that trawls through data is equipped to discern patterns, connections—and thus foreseeability—that stretch human-scale understandings of negligence.


Gary Lea, an academic working on the regulatory impacts of AI, is pragmatic in his assessment of developer liability. ‘This will depend very much on the nature of the technology supplied and the circumstances of that supply—if, for example, the technology is encapsulated in software which is custom written for a law firm and supplied under agreement, it might be treated as supply of a service under the Supply of Goods and Services Act 1982 (SGSA 1982). If that were the case, reasonable care and skill in supply would naturally be required per SGSA 1982, s 13.’

Awareness of the risk of negligence, and its quite stark realities, is certainly affecting the way programmers do business. One programmer, who didn’t want to be named, suggested that he and others in the industry—who often work alone, in pairs, or in small groups—are increasingly registering as limited companies and paying themselves through dividends, rather than operating as self-employed. Self-employed status is often seen as preferable for its flexibility, but it leaves them exposed to unlimited liability should a mishap affect the large and valuable businesses they routinely undertake work for.

Casting her eye forward, Kristjana Çaka, a colleague of Yeung’s at King’s College London, assesses the difficulties at hand. ‘There are lots of conversations taking place, but identifying liability is always a sticky issue. Perhaps another path that might be explored is the idea of associating AI systems with a particular set of responsibilities.

‘Following this, it could well mean we can identify who is to be held responsible when a particular issue does arise. This line of thought raises various issues, such as the difference between a system and a product, whether from a legal perspective there should be a difference between the two, and whether we should, in some sense, personify such systems with responsibilities accordingly.’


If developers are concerned at the consequences they face in the event of being found negligent, should legal professionals be similarly conscious of the ramifications if their oversight of AI is found wanting? Gary Lea furthers the notion that the standards we already have should, if applied correctly, remain of use in the future. He highlights that the Solicitors Regulation Authority, in its existing Code of Conduct (2011) outcomes, urges at O7.2: ‘you have effective systems and controls in place to achieve and comply with all the principles, rules and outcomes and other requirements of the handbook, where applicable;’

‘It is conceivable that relying too heavily on AI could open a solicitor to a charge of negligence,’ says Lea. ‘Solicitors are expected to exercise their own independent skill and judgment in giving legal advice to clients. The expectation is seldom departed from.’

However unsettling the notion of negligence involving AI may be, as with much technology it becomes less so when reconsidered not as the dawn of a new, robotic era, but in human terms.

‘Concerns about negligence will apply when relying on advice given by counsel, except where counsel has specialist legal knowledge that the relevant solicitor does not. The 2009 case of Fraser v Bolt Burdon is a good example of as much and, by parity of reasoning, overreliance on automated systems could be treated as a failure to exercise independent skill and judgment.’ If the notion of how to provide oversight of an automated system seems troubling, perhaps the answer lies in reapplying that which we already have.


Double standards

It is beyond doubt that automation and machine learning will affect standards of legal proficiency. Most obviously, this will be because of a need to ensure that automation can meet existing human standards, but its impact will also be felt in the opposite direction, on the premise that automation can be used not to replace but to enhance the competency of human lawyers.

Roger Brownsword, who sits on the Royal Society’s working party on machine learning, makes the point clearly: ‘There will be a need to figure out a workable legal approach if lawyers find themselves sued for professional negligence where—in the first instance—they are claimed to have over-relied on machine learning but also—in a second plausible scenario—where they are claimed to have under-relied on the AI that is available.’

As outlined in Richard and Daniel Susskind’s book, The Future of the Professions, white collar work is set to be redefined in ways that have already reshaped blue-collar work across the western world. If software does to law what mechanical automation has done in factories, we are entering a phase in the history of employment in which even lawyers—no matter their skill, training and traditional social status—will face new pressures.

