European Parliament Passes World's First Act to Regulate Artificial Intelligence
On Wednesday, March 13, 2024, Members of the European Parliament overwhelmingly approved the Artificial Intelligence Act ("AI Act"), with 523 votes in favor, 46 against, and 49 abstentions. This historic legislation, hailed as the world’s first comprehensive AI law, is expected to exert significant influence on the development of AI regulations globally. The European Parliament's endorsement of the EU AI Act marks a pivotal moment in AI governance.
During the plenary debate, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.
The comprehensive legislation divides AI systems into four risk categories to facilitate regulation and compliance: prohibited, high-risk, limited-risk, and minimal-risk. Systems involving biometric identification and those used in sectors such as healthcare, law, and education fall under the high-risk category and are subject to stringent requirements for human oversight, security, and assessment. Conversely, systems such as chatbots and image-generation programs are classified as limited risk and must make clear to users that they are interacting with AI.
The legislation adopts a zero-tolerance approach towards AI models that manipulate human behavior or exploit vulnerabilities related to race, religion, or sexual orientation. Systems posing negligible risk, such as spam filters and smart appliances, fall into the minimal-risk category and need only comply with existing laws. Additionally, the Act requires providers of general-purpose AI systems to disclose the materials used to train their models and to comply with EU copyright law.
The Act also recognizes the importance of innovation and aims to strike a balance between regulation and progress. It encourages providers of low-risk AI systems, such as content-recommendation algorithms, to adhere to voluntary codes of conduct, promoting responsible AI development while supporting technological advancement.
Implications of the AI Act for Businesses
Businesses operating within the European Union need to ensure compliance with the regulations outlined in the AI Act. Although the Act primarily focuses on AI developers and providers, its impact extends across diverse industries and sectors. Whether in healthcare or finance, businesses leveraging AI technologies must evaluate the risk profile of their AI systems and guarantee compliance with regulatory mandates.
Extraterritorial Reach
A notable aspect of the AI Act is its extraterritorial reach, akin to the General Data Protection Regulation (GDPR). Entities situated outside the EU but engaged in activities involving AI systems within the EU market will be subject to the Act's regulations. This includes importers, distributors, and implementers of AI technologies, highlighting the broad influence of EU regulations on AI.
Role of Regulatory Bodies
Regulatory bodies will play a crucial role in enforcing compliance with the AI Act, overseeing its implementation and supervision. Each EU member state will establish its own AI watchdog responsible for managing complaints and upholding regulatory standards. Additionally, a centralized AI Office at the EU level will monitor the enforcement of the Act, particularly concerning general-purpose AI models.
Penalties for Non-Compliance
Non-compliance with the AI Act may result in significant penalties: violations can lead to fines of up to 35 million euros ($38 million) or 7% of a company’s global annual revenue, whichever is higher. These measures underscore the EU's commitment to ensuring transparency, accountability, and ethical governance in the field of artificial intelligence.
The adoption of the Artificial Intelligence Act represents a significant step towards regulating AI technology and ensuring ethical AI governance. The legislation categorizes AI systems by risk profile and imposes strict requirements on high-risk applications. Businesses operating within the EU must prepare to comply with its provisions or face substantial fines, while the establishment of regulatory bodies at both the EU and member-state levels underscores the bloc's commitment to enforcing transparency and accountability in AI practices. As the AI landscape continues to evolve, the AI Act sets a precedent for responsible AI development and serves as a model for other jurisdictions seeking to regulate AI effectively.