Evolving Legal Paradigms Of AI Governance


Governance of Artificial Intelligence has become a globally recognised policy imperative. The European Union has framed the first comprehensive and special statute on Artificial Intelligence, the EU AI Act.

A governance framework for AI technology will involve a latticework of policy, legislation, standards, processes, risk frameworks and best practices, which would apply to the design, development, deployment and use of AI.

The aim of AI governance, as we understand it today, is to put human agency at the centre of AI systems and to ensure that an AI system does not threaten human rights and values. The principles of AI governance are not far removed from how we seek to regulate human behaviour. The greater the risk of harm, the greater the regulation.

The governance framework for AI is at different levels of maturity around the world. In India, NITI Aayog has published detailed papers on AI, giving us a glimpse of India’s policy perspective. The ‘National Strategy for Artificial Intelligence’, issued in June 2018, and ‘Responsible AI’, issued in February 2021, focus on the beneficial uses of AI and the importance of ethics, privacy, transparency, fairness and accountability, and suggest broad ethical principles for developing and deploying AI in India. The term ‘Responsible AI’ in essence means that AI systems, in their design, development and use, should be trustworthy, transparent, ethical and explainable. The Telecom Regulatory Authority of India released its recommendations on “Leveraging Artificial Intelligence and Big Data in Telecommunication Sector” in 2023. However, India is yet to adopt a comprehensive law dealing with AI.


The EU AI Act

The European Union has taken the lead by framing the first comprehensive and special statute on Artificial Intelligence, the EU AI Act, which entered into force in August 2024 and will see a staggered implementation. The EU law adopts a risk-based approach -

  • Systems considered to carry unacceptable risk are either prohibited or allowed only for certain narrow purposes. Examples: biometric categorisation systems, social scoring systems, facial recognition, etc.
  • An AI system is always considered high-risk if it profiles individuals, i.e. carries out automated processing of personal data to assess various aspects of a person’s life.
  • Providers of General Purpose AI (GPAI) will have to publish a summary of the training data, amongst other compliance requirements.
  • As regards liability, some obligations are cast on professional users who deploy/use AI systems; however, the majority of the obligations are placed on providers/developers.
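
To make the tiered approach concrete, the sketch below encodes the tiers and obligations summarised above as a toy Python structure of the kind an organisation might use when taking stock of its AI systems. It is purely illustrative: the tier names, the paraphrased obligations and the classify_system() helper are assumptions made for this example, not the legal text of the Act or a compliance tool.

```python
# Hypothetical sketch only: a toy mapping of the risk tiers described above
# to paraphrased obligations. Not the Act's legal text and not a compliance tool.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited, or allowed only for narrow purposes
    HIGH = "high"                   # e.g. systems that profile individuals
    GPAI = "general_purpose"        # general purpose AI models
    MINIMAL = "minimal"             # remaining systems with lighter obligations

# Illustrative obligations keyed by tier (paraphrased, not exhaustive).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited, save for narrow exceptions"],
    RiskTier.HIGH: ["risk management", "human oversight", "conformity assessment"],
    RiskTier.GPAI: ["publish a summary of training data", "technical documentation"],
    RiskTier.MINIMAL: ["transparency where users interact with AI"],
}

def classify_system(profiles_individuals: bool, social_scoring: bool,
                    general_purpose: bool) -> RiskTier:
    """Toy classifier reflecting the rules of thumb summarised in the article."""
    if social_scoring:
        return RiskTier.UNACCEPTABLE   # unacceptable-risk practices are prohibited
    if profiles_individuals:
        return RiskTier.HIGH           # profiling individuals => always high-risk
    if general_purpose:
        return RiskTier.GPAI           # GPAI carries its own transparency duties
    return RiskTier.MINIMAL

if __name__ == "__main__":
    tier = classify_system(profiles_individuals=True,
                           social_scoring=False, general_purpose=False)
    print(tier.value, "->", OBLIGATIONS[tier])
```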


Certain Key Principles

1. Ethics – We all want to prevent the harmful use of AI technology. But in reality, we have wildly varied understandings of what constitutes harm. For example, the rules around free speech and informational control vary a great deal across the world. Obviously, there is no one-size-fits-all approach when it comes to setting ethical guardrails around AI technology. This is where multilateral efforts can bring value by shaping universally accepted principles for the ethical and responsible use of AI technology.

2. Multilateral approach -

a. The OECD Principles on Artificial Intelligence (May 2019) promote AI that is innovative, trustworthy and that respects human rights and democratic values.

b. UNESCO’s ‘Recommendation on the Ethics of Artificial Intelligence’ (November 2021) was adopted by 193 member States and rests on four core values –

i. Human rights and human dignity.

ii. Living in peaceful, just, and interconnected societies.

iii. Ensuring diversity and inclusiveness.

iv. Environment and ecosystem flourishing.

c. The Bletchley Declaration - Agreed at the AI Safety Summit hosted at Bletchley Park, UK, in 2023 and attended by 29 countries, including India, the declaration is a positive step towards a cohesive international response to the responsible use of AI. The declaration states –

“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all...”

3. Data Governance - Enormous amounts of data are required to train AI systems, especially GPAI. Thus, most AI governance frameworks would include rules around the collection, use and protection of personal data, the use of non-personal or anonymised data, governance of dark data, spoliation of data, the use of IP-protected data/materials, etc.
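
As one purely illustrative example of such a control, the sketch below shows how a direct identifier might be pseudonymised before a record is used for training. The field names, the salted-hash approach and the pseudonymise() helper are assumptions made for this illustration; they are not drawn from any statute and are not, by themselves, a legally sufficient anonymisation technique.

```python
# Hypothetical sketch: pseudonymising a direct identifier before a record is
# used for model training. Illustrative only, not a prescribed technique.
import hashlib

SALT = "rotate-this-secret"  # illustrative; in practice use a managed secrets store

def pseudonymise(record: dict) -> dict:
    """Replace the e-mail identifier with a truncated salted hash; keep other fields."""
    cleaned = dict(record)
    email = cleaned.pop("email", None)
    if email is not None:
        digest = hashlib.sha256((SALT + email).encode("utf-8")).hexdigest()
        cleaned["subject_id"] = digest[:16]
    return cleaned

if __name__ == "__main__":
    print(pseudonymise({"email": "user@example.com", "age_band": "30-39"}))
```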

4. Liability framework – In September 2022, the European Commission published a proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence, with the intention that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies. On its part, India has not been slow to recognise the impact of dark patterns on e-commerce and the online customer experience, and has stated that unfair trade practices, which are regulated under section 2(47) of the Consumer Protection Act, 2019, apply equally to dark patterns. It is fair to say that liability frameworks will have to stretch and strengthen to address potential harm caused by AI.

5. Checks and balances – Ethics by design, certification requirements, compliance with standards and audits are some of the checks which can be deployed. A matrix of ex-ante audit and certification, combined with regular ex-post audits of AI systems (especially to address emergent capabilities and risks), would provide the needed confidence in the use of AI.
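
By way of illustration only, the sketch below lays out such a matrix as a simple schedule: one ex-ante certification gate before deployment, followed by recurring ex-post audits. The AuditEvent structure, the six-month review interval and the two-year horizon are assumptions made for this example rather than requirements drawn from any statute or standard.

```python
# Hypothetical sketch: one ex-ante certification gate plus recurring ex-post
# audits. Intervals and structure are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AuditEvent:
    system: str
    kind: str    # "ex-ante certification" or "ex-post audit"
    due: date

def audit_schedule(system: str, deployment: date,
                   review_months: int = 6,
                   horizon_years: int = 2) -> list[AuditEvent]:
    """One certification before deployment, then periodic ex-post audits."""
    events = [AuditEvent(system, "ex-ante certification", deployment)]
    step = timedelta(days=30 * review_months)        # approximate month length
    end = deployment + timedelta(days=365 * horizon_years)
    current = deployment
    while current + step <= end:
        current += step
        events.append(AuditEvent(system, "ex-post audit", current))
    return events

if __name__ == "__main__":
    for event in audit_schedule("credit-scoring model", date(2025, 1, 1)):
        print(event.due, event.kind)
```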

Role of Corporates

Any discussion on AI governance is incomplete without reference to the role of corporates. As primary drivers and developers of the technology, corporates have a vital responsibility to inform stakeholders of the risks of AI systems and their uses. Developing appropriate risk frameworks and industry best practices takes on greater significance in the absence of comprehensive policy and regulatory frameworks. The accelerated pace of technology development has put corporates in a key position to develop partnerships with government and multilateral organisations for policy formulation. It is hoped that this position will be leveraged for good – to let technology aid and empower mankind, enhance productivity and unlock abundance, while limiting the inherent and external risks of AI, including economic inequality, use by bad actors and a potentially all-destructive arms race.

Disclaimer – The views expressed in this article are the personal views of the author and are purely informative in nature.


By: Attreyi Mukherjee

Attreyi is a dual-qualified lawyer (India and UK) with over 20 years of experience in corporate legal practice. After several years of practice at reputed law firms and the Big 4, Attreyi joined Tata as an in-house counsel in 2011. In her current role as GC of Tata Industries Ltd, she advises the company on its transactions and commercial contracts for its global portfolio of investments and operating businesses. In her career, Attreyi has advised businesses in diverse sectors including Aviation, Aerospace Manufacturing, E-commerce, EdTech, Digital Health, Fintech and Life Sciences.

She has co-authored successive editions of a legal commentary titled ‘Handbook on the Law of Sexual Harassment at Workplace’, published by Thomson Reuters. She is Co-Chair of the Legal Affairs & IPR Committee of Bombay Chamber of Commerce and Industry.

Attreyi is frequently invited to speak at domestic and international seminars, corporate trainings and industry body events, on topical issues like Technology Laws, Data Privacy, Anti-Corruption, Sexual Harassment and Gender Issues.
