Navigating AI Regulation: Insights From The EU AI Act For India's Regulatory Future
By categorizing AI systems based on risk and implementing stringent regulatory measures, the EU has effectively addressed the complexities and ethical challenges associated with AI deployment
INTRODUCTION
This article explores the regulatory efforts undertaken by the European Union (“EU”) in formulating the European Union Artificial Intelligence Act, 2024 (“EU AI Act”), focusing on how these regulations can harness Artificial Intelligence (“AI”) for the collective good while maximizing its potential. This article also discusses how the EU AI Act could serve as a blueprint for Indian regulators in effectively managing and utilizing AI technology.
ANALYSIS OF THE EU AI ACT
1. The European Parliament approved the EU AI Act on April 22, 2024. This landmark legislation introduces a risk-based approach to AI regulation, categorizing AI systems into four tiers (as shown below) based on the level of risk they pose. This classification is pivotal as it sets the stage for understanding how AI technologies will be regulated and managed within the EU.
(i) Unacceptable Risk AI
This category of AI is completely prohibited under the EU AI Act. It pertains to AI systems that engage in manipulative or deceptive practices, exploit vulnerabilities in individuals, discriminate based on race or religion using biometric data, or compile facial recognition databases through the indiscriminate scraping of images from sources like the internet or CCTV footage. For example: An AI system designed to manipulate social media users by creating convincing deepfake videos that deceive viewers into believing false narratives about political candidates.
(ii) High Risk AI
In contrast to ‘Unacceptable Risk AI’, the ‘High-Risk AI’ category under the EU AI Act encompasses a broader spectrum of applications. This classification covers AI technologies that, while not outright prohibited, pose significant risks to individuals, public safety, or essential services. Annex III of the EU AI Act provides a detailed list of AI systems that may be categorized as ‘High-Risk AI Systems’. Further, Article 49 (Registration) of the EU AI Act makes it mandatory for providers (or their authorized representatives) to register themselves as well as their AI systems in the EU database maintained by the European Commission. Therefore, while these AI systems are not prohibited, they are subject to rigorous regulatory oversight aimed at mitigating potential harm and ensuring their responsible deployment. Such regulatory measures may include rigorous testing procedures, continuous monitoring for accuracy and reliability, and strict data protection protocols to safeguard privacy. For example: An AI-powered medical diagnostic system designed to analyze complex medical images and provide diagnostic recommendations may fall under the ‘High Risk AI’ category due to its critical impact on patient health and safety.
(iii) Limited Risk AI and Minimal Risk AI
a. The final two categories under the EU AI Act are “Limited Risk AI” and “Minimal Risk AI”.
b. For “Limited Risk AI”, the EU AI Act mandates enhanced transparency measures. Consumers interacting with AI systems, such as chatbots, must be clearly told that they are dealing with an AI system so that they can make informed decisions. Additionally, providers must ensure that AI-generated content, whether text, audio, or video, is clearly identified, especially in cases involving educational or informational content where deepfakes could mislead the public. For example: A chatbot designed to provide customer support for an online retail platform falls under the “Limited Risk AI” category. The platform must disclose to users that they are interacting with an AI system and ensure that any responses generated by the chatbot are clearly distinguishable from human-generated replies.
c. Conversely, “Minimal Risk AI” enjoys unrestricted usage under the EU AI Act. This category encompasses AI applications such as spam filters and AI-enhanced video games, where the risks to individuals or public safety are minimal. For example: An AI-powered virtual assistant in a video game, used to enhance the gameplay experience by providing hints and challenges, qualifies as “Minimal Risk AI”. Such applications are considered low-risk because they pose no significant harm to individuals or public safety, allowing for unrestricted use within regulatory guidelines.
2. Penalties for Non-compliance under the EU AI Act
The EU AI Act identifies severe violations, including infringement of Article 5 (Prohibited AI Practices), which can result in fines of up to €35 million or 7% of the offending entity’s annual worldwide turnover, whichever amount is higher. For less serious transgressions, such as failure to meet the obligations (including transparency requirements) outlined in Articles 16, 22, 23, 24, 26, and 50, fines of up to €15 million or 3% of global annual turnover may be imposed. Fines of up to €7.5 million or 1% of turnover may also be levied for supplying inaccurate, incomplete, or misleading information to national authorities or notified bodies.
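The fine structure described above is simple arithmetic: for each tier, the applicable cap is the higher of a fixed amount and a percentage of annual worldwide turnover. A minimal sketch of that calculation follows (the tier names and the `max_fine` helper are illustrative constructs for this article, not terms from the Act itself):

```python
def max_fine(turnover_eur: float, tier: str) -> float:
    """Illustrative maximum fine under the EU AI Act's tiered penalty scheme.

    Caps per tier as described in the text: (fixed amount in EUR, share of
    annual worldwide turnover). The higher of the two applies.
    """
    caps = {
        "prohibited_practices": (35_000_000, 0.07),  # Article 5 violations
        "other_obligations":    (15_000_000, 0.03),  # e.g. transparency duties
        "misleading_info":      (7_500_000, 0.01),   # inaccurate info to authorities
    }
    fixed, pct = caps[tier]
    return max(fixed, pct * turnover_eur)

# A company with EUR 1 billion worldwide turnover infringing Article 5:
# 7% of turnover (EUR 70M) exceeds the EUR 35M fixed cap, so EUR 70M applies.
print(max_fine(1_000_000_000, "prohibited_practices"))
```

For smaller entities the fixed amount dominates: at €100 million turnover, 7% is only €7 million, so the €35 million cap governs. (Note that the Act contains special provisions for SMEs that this sketch does not model.)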
INDIA: KEY DEVELOPMENTS
1. While India has not yet attempted to regulate AI comprehensively through legislation, the Ministry of Electronics and Information Technology, Government of India (“MeitY”) issued an advisory on December 26, 2023 (“2023 Advisory”) to all intermediaries, mandating compliance with the existing Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”). The directive specifically targeted growing concerns around AI-powered misinformation, particularly deepfakes. The 2023 Advisory mandated all intermediaries to clearly communicate prohibited content, as specified under Rule 3(1)(b) of the IT Rules, through their terms of service, user agreements, and regular notifications during user interactions.
2. Following the 2023 Advisory, MeitY issued another advisory on March 15, 2024 (“2024 Advisory”), reinforcing due diligence obligations outlined in the IT Rules. The key elements of the 2024 Advisory are as follows:
(i) Due diligence requirements for AI models, large language models (LLMs), generative AI software, algorithms, and computer resources to prevent discrimination or compromise of electoral integrity.
(ii) Imposition of responsibilities on intermediaries and platforms to prevent users from violating the IT Act 2000 and IT Rules by hosting, displaying, modifying, transmitting, storing, or sharing unlawful content.
(iii) Intermediaries and platforms using under-tested or unreliable AI models on the Indian internet must obtain explicit government approval.
(iv) Introduction of a consent mechanism for AI models to inform users about inherent risks and consequences.
(v) Making it mandatory for AI platforms to label potentially misleading or deepfake-generated information with unique metadata or identifiers, enabling traceability to the intermediary and creator.
CONCLUSION AND WAY FORWARD
The EU has made commendable strides in regulating AI through the enactment of the EU AI Act. By categorizing AI systems based on risk and implementing stringent regulatory measures, the EU has addressed the complexities and ethical challenges associated with AI deployment. Looking ahead, India too is expected to enact comprehensive legislation to regulate AI in the near future and may, in this regard, draw valuable insights from the EU AI Act.
Disclaimer – The views expressed in this article are the personal views of the authors and are purely informative in nature.