Some “unacceptable” uses of AI would be banned in Europe under proposals unveiled on Wednesday.
The European Commission’s rules would ban “AI systems considered a clear threat to the safety, livelihoods and rights of people”, it said.
It is also proposing far stricter rules on the use of biometrics – such as facial-recognition systems used by law enforcement, which would be tightly limited.
Breaking the rules could lead to fines of up to 6% of global turnover.
For the largest technology companies, that could amount to billions.
The commission’s digital chief, Margrethe Vestager, said: “On AI, trust is a must, not a nice-to-have.”
And the EU was developing “new global norms” for AI.
“Future-proof and innovation-friendly, our rules will intervene where strictly needed – when the safety and fundamental rights of EU citizens are at stake,” she said.
The draft rules face a lengthy approval process and aren’t yet final.
Many of the core ideas were leaked last week, ahead of the announcement, prompting concern from the technology community that the rules could stifle innovation.
“The European Commission’s proposed regime will not sit well with many in the community,” said Nikolas Kairinos, chief executive of Soffos.ai, which develops AI for employee training in businesses.
“Loose definitions like ‘high risk’ are unhelpfully vague.
“An ambiguous, tick-box approach to regulation that is overseen by individuals who may not have an in-depth understanding of AI technology will hardly inspire confidence.”
Herbert Swaniker, a technology expert at the law firm Clifford Chance, said the proposed hefty fines gave the AI regulation real weight – and made it “extremely ambitious” in scope.
“There’s a lot to do to sharpen some of these concepts,” he said.
“The fines are one thing – but how will vendors address the many costs and human input needed to make compliance a reality?
“The proposals will force vendors to fundamentally rethink how AI is procured and designed.”
Ursula von der Leyen tweeted: “Artificial Intelligence is a fantastic opportunity for Europe. And citizens deserve technologies they can trust. Today we present new rules for trustworthy AI. They set high standards based on the different levels of risk.”
The rules would govern what AI was used for, rather than the technology itself, Ms Vestager said.
But “AI systems or applications that manipulate human behavior to bypass users’ free will”, including “subliminal techniques”, would fall under the banned “unacceptable risk” category.
Those operating in high-risk areas – such as critical infrastructure, education, employment, finance and law enforcement – would face a series of hurdles before they could be used.