Lee Hishammuddin Allen & Gledhill

[TMT] Understanding The First-Ever AI Act

Artificial Intelligence (“AI”) has found widespread applications across virtually every industry, empowering businesses with intelligent automation, data-driven insights, and cutting-edge solutions. Since last year, discussions surrounding AI have reached new heights as its disruptive capabilities and potential benefits continue to captivate the world, prompting deeper exploration and contemplation of its impact on society.

In our previous alert, we explored the possibility of AI replacing legal professionals. In this alert, we shift our focus to the EU Artificial Intelligence Act (“EU AI Act”), recently passed by the European Parliament, which aims to regulate the swift progression of AI. The Act garnered substantial support in the Parliament, receiving 499 votes in favour out of a total of 620 votes cast.[1] The final step before formal adoption is a trilogue between the EU Parliament, Commission, and Council, which is set to conclude by the end of 2023.[2]

The EU AI Act places its emphasis on ensuring safety, transparency, traceability, non-discrimination, and environmental friendliness in the realm of artificial intelligence.[3] The Act categorises AI systems into four risk levels, namely unacceptable, high, limited, and minimal risk, and establishes corresponding obligations for providers and users alike. Moreover, the Act extends beyond EU service providers and also applies to providers located outside the EU that offer services within the EU.[4]

 

Risk Categories

The risk levels outlined in the EU AI Act can be characterised as follows:

Unacceptable risk

AI systems that pose an unacceptable level of threat to personal safety or constitute intrusive and discriminatory practices, such as subliminal manipulation, exploitation of vulnerabilities, social scoring systems, real-time remote biometric identification, predictive policing, emotion recognition in sensitive contexts, and untargeted facial recognition databases.[5]

 

High risk

This includes two main categories of AI systems: (1) AI used in products falling under the EU’s product safety legislation, such as machinery, toys, and lifts; and (2) AI relating to areas such as biometric identification, critical infrastructure management, education & vocational training, employment & worker management, access to essential services, law enforcement, migration & border control management, and legal interpretation & application of the law.[6]

 

Limited risk

AI systems that pose an acceptable level of risk but must nonetheless comply with transparency requirements, including generative AI such as ChatGPT and deepfakes.[7]

 

Minimal risk

AI systems that pose only low or minimal risk may be developed and used in the EU without being subject to any additional legal obligations.
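
To make the four tiers concrete, the short Python sketch below (purely for illustration) maps a handful of example use cases to their tier and headline consequence. The tier descriptions follow this alert; the example use cases, data structure, and names are our own assumptions and form no part of the Act itself.

# Illustrative sketch only: the four risk tiers summarised above, mapped to
# hypothetical example use cases. Classification under the Act ultimately
# turns on its annexes and the specific context of use.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned from the EU"
    HIGH = "assessed before market placement and throughout its lifecycle"
    LIMITED = "minimal transparency requirements"
    MINIMAL = "no obligations beyond existing legislation"

EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "AI safety component in a lift": RiskTier.HIGH,
    "generative chatbot such as ChatGPT": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name.lower()} risk -> {tier.value}")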

 

Legal Consequences

Among the above risk categories, AI categorised as carrying unacceptable risk will be banned from the EU, while high-risk AI must be properly assessed before being placed on the market and throughout its lifecycle, with compliance with a range of requirements being mandatory. Limited risk AI need only comply with minimal transparency requirements that allow users to make informed decisions, and minimal risk AI can be developed and used in the EU without legal obligations beyond existing legislation. Non-compliance with the Act may result in fines of up to €40 million or 7% of the company’s total worldwide annual turnover, whichever is higher.[8]
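
By way of illustration, the penalty ceiling described above can be expressed as a simple calculation. The sketch below assumes the cap for the most serious infringements is the greater of €40 million or 7% of total worldwide annual turnover, per the Parliament’s June 2023 text; the function name and the example turnover figure are hypothetical.

# Illustrative sketch only: assumed fine ceiling under the Parliament's text,
# taken here to be the higher of EUR 40 million or 7% of worldwide turnover.
def maximum_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the assumed ceiling: the greater of EUR 40m or 7% of turnover."""
    return max(40_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical example: EUR 2 billion in worldwide annual turnover gives a
# ceiling of EUR 140 million, since 7% of turnover exceeds EUR 40 million.
print(f"EUR {maximum_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000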

 

Key Takeaways for all service providers under the purview of the EU AI Act

  • Compliance with Risk-Based Regulations and Prohibition of Unacceptable AI Practices: Companies developing or using AI technologies must assess the risk level of their AI systems and ensure adherence to corresponding regulatory obligations, avoiding the use of AI systems that fall under the unacceptable risk category, which are banned by the Act.

 

  • Responsible AI Innovation: To comply with the Act, companies should prioritise responsible AI development, which includes addressing biases in AI algorithms, ensuring fairness, and actively managing potential risks associated with AI deployment.

 

  • Adherence to Privacy Regulations: Companies must align their AI practices with the EU’s strict privacy regulations, including compliance with Article 16 of the Treaty on the Functioning of the European Union (TFEU), to protect individuals’ data privacy rights when using AI technologies. This ensures responsible data handling and privacy protection in line with the EU AI Act.

 

Conclusion

As the world’s first comprehensive law on AI, the EU AI Act holds the potential to trigger evolutionary effects on technology and legal frameworks alike. With the rapid development of AI posing various risks, the Act’s implementation becomes an essential response.

By employing risk-based regulations and emphasising ethical principles such as privacy protection and human oversight, the Act paves the way for responsible AI innovation, fostering trust and accountability in the dynamic realm of artificial intelligence. Its precedent-setting nature may influence global AI regulation and shape the future of AI technologies worldwide.

If you have any further queries, please contact associate Wee Yun Zhen (wyz@lh-ag.com) or her team partner, G. Vijay Kumar (vkg@lh-ag.com).

REFERENCES:

[1] Ryan Browne, ‘EU lawmakers pass landmark artificial intelligence regulation’ (CNBC, 14 June 2023) https://www.cnbc.com/2023/06/14/eu-lawmakers-pass-landmark-artificial-intelligence-regulation.html accessed 13 August 2023

[2] Alex Engler, ‘Key enforcement issues of the AI Act should lead EU trilogue debate’ (Brookings, 16 June 2023) https://www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/ accessed 13 August 2023

[3] European Parliament, ‘EU AI Act: first regulation on artificial intelligence’ (European Parliament, 14 June 2023) https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence accessed 13 August 2023

[4] Tambiama Madiega, ‘Artificial intelligence act’ (EPRS, June 2023) https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf accessed 13 August 2023

[5] European Parliament, ‘Artificial Intelligence Act’ (European Parliament, 14 June 2023) https://oeil.secure.europarl.europa.eu/oeil/popups/summary.do?id=1747977&t=e&l=en accessed 13 August 2023

[6] Recitals 30–40, Artificial Intelligence Act (amendments adopted by the European Parliament on 14 June 2023), P9_TA(2023)0236 https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html accessed 13 August 2023

[7] Tambiama Madiega, ‘Artificial intelligence act’ (EPRS, June 2023) https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf accessed 13 August 2023

[8] Amendment 647 to Article 71(3), Artificial Intelligence Act (amendments adopted by the European Parliament on 14 June 2023), P9_TA(2023)0236 https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html accessed 13 August 2023
