The European Union is considering new legally binding requirements for developers of artificial intelligence (AI) in an effort to ensure modern technology is developed and used in an ethical way.
The EU’s executive arm is set to propose that the new rules apply to “high-risk sectors,” such as health care and transport, and to suggest that the bloc update its safety and liability laws, according to a draft of a so-called “white paper” on artificial intelligence obtained by Bloomberg. The European Commission is due to unveil the paper in mid-February, and the final version is likely to change.
The paper is part of the EU’s broader effort to catch up to the US and China on advancements in AI, but in a way that promotes European values such as user privacy. While some critics have long argued that stringent data protection laws like the EU’s could hinder innovation around AI, EU officials say harmonising rules across the region will boost development.
European Commission president Ursula von der Leyen has pledged that her team will present a new legislative approach to artificial intelligence within the first 100 days of her mandate, which began December 1, and has handed the task of coordinating it to the EU’s digital chief, Margrethe Vestager.
A spokesman for the Brussels-based Commission declined to comment on leaks but added: “To maximise the benefits and address the challenges of artificial intelligence, Europe has to act as one and will define its own way, a human way. Trust and security of EU citizens will therefore be at the center of the EU’s strategy.”
The EU wants to urge its member states to appoint authorities to monitor the enforcement of any future rules governing the use of AI, according to the document.
The EU is also considering new obligations for public authorities around the deployment of facial recognition technology, as well as more detailed rules on the use of such systems in public spaces. One provision suggests prohibiting the use of facial recognition by public and private actors in public spaces for several years, to allow time to assess the risks of the technology.
In the draft, the EU defines high-risk applications as “applications of artificial intelligence which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity”.
Artificial intelligence is already subject to a variety of European regulations, including fundamental-rights rules on privacy and non-discrimination, as well as product safety and liability laws, but those rules may not fully cover all the specific risks posed by new technologies, the Commission says in the document. Product safety laws, for instance, currently do not apply to services based on AI.
The EU’s AI strategy will build on previous work coordinated by the commission, including reports published in the last year by a committee of academics, experts and executives. EU rules often reverberate across the globe, as companies do not want to build software or hardware which would be banned from the bloc’s vast developed market.
One of the reports outlined a set of seven key requirements that AI systems should meet to be deemed trustworthy, including human oversight, respect for privacy, traceability and the avoidance of unfair bias in the systems’ decisions. The other report outlined policy and investment recommendations for the EU and its member states. The experts said unnecessarily prescriptive regulation should be avoided, but that governments should restrict the development of automated lethal weapons and consider new rules around unjustified tracking through facial recognition or other biometric technologies.
Alphabet’s chief executive officer, Sundar Pichai, will also make a rare public appearance in Brussels next week, giving a speech at a think-tank on the development of responsible AI ahead of the EU’s February announcement.