
EU regulation of AI: What would Asimov’s Three Laws of Robotics look like in the real world?

Isaac Asimov condensed his framework for regulating robots into three succinct laws. First, a robot may not injure a human or, through inaction, allow a human to come to harm. Second, a robot must obey orders from humans, except where they conflict with the first law. Third, a robot must protect its own existence, so long as that does not conflict with the first or second law. (Asimov later added a Zeroth Law to cater for situations in which robots have taken responsibility for governing humanity.)
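For contrast with what follows, here is a loose sketch in Python of how compactly the Three Laws express a rule hierarchy: the laws are checked in priority order, and the first one violated decides the outcome. The field names and the shape of the check are our own invention, purely for illustration.

```python
# Purely illustrative: the laws are checked in priority order, so the
# first law that a proposed action violates decides the outcome. The
# field names and overall shape are our own invention, not Asimov's.
LAWS = [
    ("First Law",  lambda s: not (s["injures_human"] or s["allows_human_harm"])),
    ("Second Law", lambda s: s["obeys_human_order"]),
    ("Third Law",  lambda s: s["protects_own_existence"]),
]

def evaluate(situation: dict) -> str:
    for name, satisfied in LAWS:
        if not satisfied(situation):
            return f"forbidden – violates the {name}"
    return "permitted"

print(evaluate({"injures_human": False, "allows_human_harm": False,
                "obeys_human_order": True, "protects_own_existence": True}))
# -> permitted
```

The whole framework fits in a dozen lines; the point of the comparison is how much harder real-world AI regulation turns out to be.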

The EU Commission’s leaked proposals for regulating AI are less easily digestible, running to over eighty pages and creating a regulatory superstructure for AI.

Given the technological acceleration caused by the Covid-19 pandemic, will this Regulation arrive too late to rein in the rampant expansion of AI in society, or will the hype cycle have moved on, prompting questions as to why this particular form of technology so desperately needed its own specific regulation?

The challenge of an emerging technology

The length and complexity of the proposed Regulation are partly due to the challenges of creating an effective regulatory solution for AI.

This technology can be embedded within a product, provided as a service or used internally within an organisation. The technology might exist as a general-purpose tool or as a model trained for a specific task. Added to that, there is little agreement as to where the boundaries of this technology lie – as the draft Regulation candidly admits, “there is no universally agreed definition of artificial intelligence”.

This is also an emerging technology. While it has cracked some hard domain-specific problems (such as face recognition and language translation), there is little sign of machines with genuine intelligence or of anything replicating the flexibility of the human mind. Skynet and HAL 9000 remain firmly in the realm of science fiction. Regulating this type of emerging technology risks constraining innovation and might simply be unnecessary.

The solution - a tiered and targeted approach

The solution adopted in the leaked Regulation is a tiered and targeted approach to regulation:

  • There will be a general ban on the use of AI to manipulate or exploit humans to their detriment, conduct indiscriminate surveillance or carry out general purpose scoring.
  • A tightly defined class of “high risk” AI will be subject to detailed and onerous regulation, including obligations to use risk management systems, provide human oversight, use only good-quality data and include details of the AI on a public register.
  • Transparency obligations will apply to the creation of “deepfakes” and other AI systems used to interact with humans, to make it clear to the human that they are dealing with a machine.
  • Similarly, specific obligations will apply to the use of biometrics in public spaces, such as facial recognition-enabled CCTV (see the sketch after this list).
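To make the contrast with Asimov concrete, the sketch below renders this tiering as a simple classification function. The tier names, dictionary keys and the order of the checks are our own shorthand for the leaked text, not the Regulation’s wording.

```python
from enum import Enum

# Purely illustrative: the tier names, dictionary keys and the order of
# the checks are our own shorthand, not the Regulation's wording.
class Tier(Enum):
    PROHIBITED = "banned outright"
    HIGH_RISK = "onerous obligations, including entry on a public register"
    TRANSPARENCY = "must disclose that an AI system is involved"
    OTHER = "outside the main obligations"

def classify(system: dict) -> Tier:
    # Banned uses: manipulation or exploitation of humans to their
    # detriment, indiscriminate surveillance, general purpose scoring.
    if any(system.get(k) for k in ("manipulates_or_exploits",
                                   "indiscriminate_surveillance",
                                   "general_purpose_scoring")):
        return Tier.PROHIBITED
    # The tightly defined "high risk" class.
    if system.get("high_risk_use"):
        return Tier.HIGH_RISK
    # Deepfakes, chatbots and other systems that deal with humans; we
    # fold public-space biometrics in here as a simplification.
    if system.get("deepfake") or system.get("interacts_with_humans") \
            or system.get("public_space_biometrics"):
        return Tier.TRANSPARENCY
    return Tier.OTHER

print(classify({"deepfake": True}).name)  # -> TRANSPARENCY
```

Even this simplification takes several times as many lines as the Three Laws, which goes some way to explaining the eighty-page draft.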

See our latest DigiLinks post for more details.

Next steps

The new Regulation will have a one- or two-year implementation period. Adding in the time needed for it to pass through the EU’s legislative machinery, these new obligations are unlikely to apply until 2024 at the earliest.

In the meantime, the use of artificial intelligence continues to raise new and interesting legal issues across a whole range of areas, including data protection, intellectual property, competition, financial regulation and product liability. Our AI Toolkit provides detailed, practical tips on how to deploy this technology safely, ethically and lawfully.

