
The European Union (EU) has introduced strict new rules banning certain artificial intelligence (AI) systems that pose “unacceptable risks” to public safety and fundamental rights.
The sweeping measures, announced on Saturday under the EU Artificial Intelligence Act (AIA), target several high-risk applications.
The act prohibits AI systems that use manipulative techniques, such as subliminal messaging or exploiting vulnerable groups.
Social scoring systems that evaluate individuals based on behaviour or personality traits are also banned.
In addition, the act forbids building facial recognition databases by scraping online images or CCTV footage.
Real-time biometric identification in public spaces is prohibited, except in limited law enforcement cases.
The act also bans AI systems that infer emotions in educational or workplace settings, along with those that predict criminal behaviour based solely on profiling.
The EU plans a phased rollout of the regulations.
By May 2025, organisations must implement new codes of practice and transparency requirements while conducting thorough risk assessments of their AI operations.
Companies then have until full enforcement in August 2026 to comply with all of the AIA's rules.
The phased rollout is designed to give companies time to adjust while the most dangerous practices are stopped first.
Authorities warn that non-compliance will come with heavy penalties.
For serious violations, firms could face fines of up to €35 million or 7% of their total worldwide annual revenue, whichever is higher.
Minor infractions, such as providing misleading information to authorities, may result in fines reaching €7.5 million or 1% of annual revenue.
National competent authorities in each member state will enforce the rules with oversight from the European Artificial Intelligence Board.
The AIA attempts to strike a balance between innovation and protecting citizens.
The EU hopes the new framework will set a global standard for safe AI development.