European officials want to limit police use of facial recognition and ban the use of certain kinds of AI systems, in one of the broadest efforts yet to regulate high-stakes applications of artificial intelligence.
The European Union’s executive arm proposed a bill Wednesday that would also create a list of so-called high-risk uses of AI, in areas such as critical infrastructure, college admissions and loan applications, that would be subject to new supervision and standards for their development and use. Regulators could fine a company up to 6% of its annual worldwide revenue for the most severe violations, though in practice EU officials rarely, if ever, mete out their maximum fines.
The bill is one of the broadest of its kind to be proposed by a Western government, and part of the EU’s expansion of its role as a global tech enforcer.
In recent years, the EU has sought to take a global lead in drafting and enforcing new regulations aimed at taming the alleged excesses of big tech companies and curbing potential dangers of new technologies, in areas ranging from digital competition to online-content moderation. The bloc’s new privacy law, the General Data Protection Regulation, helped set a template for broadly applied rules backed by stiff fines that has been followed in some ways by other countries—and some U.S. states.