Generative AI – A Pursuit Beyond Digital Transformation
While Generative AI (GenAI) offers numerous benefits, it undeniably encompasses a duality: potential threats such as deepfakes, misinformation, copyright infringement, bias, hallucination, discrimination, data-privacy violations, de-anonymization, cyberattacks, autonomous weapons and other military applications, job insecurity, and energy consumption all need to be carefully managed.
A Pursuit Beyond Digital Transformation
Artificial intelligence (AI) has ascended from experimental concepts to permeate our daily lives: in jobs, travel, smartphones, chatbots, recruitment, marketing, business planning, content creation, art, design, automation, product engineering, medical research, diagnostics, document review, regulatory compliance, simulation, customer engagement and more. It is everywhere.
In this article, I cover the dual nature of GenAI, the need for regulatory compliance, and how different countries are dealing with this transformational shift.
Dual Nature of GenAI
While Generative AI (GenAI) offers numerous benefits, it undeniably encompasses a duality through several potential threats that need to be carefully managed. These threats include deepfakes, misinformation, copyright infringement, bias, hallucination, discrimination, data-privacy violations, de-anonymization, cyberattacks, autonomous weapons and other military applications, job insecurity, and energy consumption. To mitigate these threats, it is crucial to develop and enforce robust regulations, promote ethical AI practices, ensure transparency and accountability in AI systems, and engage in continuous dialogue about the societal implications of GenAI.
Elon Musk and other AI leaders took a joint stance at the world’s biggest AI conference (the International Joint Conference on AI 2017 in Melbourne) to urge an urgent ban on AI-based lethal weapons such as ‘killer robots’. In March 2023, OPENAI™ CEO Samuel Altman said in an interview with ABC NEWS® that ‘AI will reshape the society’, but at the same time acknowledged that he was scared of the potential it carries and that regulations are needed to ‘deter the potential negative consequences that this technology could have on humanity’.
In the same month, the Future of Life Institute issued an open letter calling on all AI laboratories to immediately pause the training of AI systems more powerful than GPT-4®. The letter has to date been signed by over 30,000 signatories, including Elon Musk and many of the world’s leading AI scientists. On similar lines, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has called for countries to implement its global ethical framework for AI, which was adopted unanimously by all 193 UNESCO member states. Recently, India partnered with other countries, including the US, France, Canada, the UK, Japan, Korea, Brazil and Argentina, in the Global Partnership on AI (‘GPAI’), an apex body for all matters related to AI, including issues pertaining to AI regulation and the drawing up of a common global framework for AI.
GenAI Governance
The European Union
The need to frame regulations was felt prominently for the first time when Italy’s data protection authority (the Italian Garante) ordered OPENAI™ to stop processing people’s data locally, owing to its concern about violations of the General Data Protection Regulation (the ‘GDPR’). Within a day, CHATGPT™ was geo-blocked for its users in Italy. Following this, a complaint was filed in Poland by a local privacy and security researcher against OPENAI™, alleging that data was processed in an unlawful and unreliable manner and that the rules under which it was processed were non-transparent. Spain’s data protection agency, the AEPD (Agencia Española de Protección de Datos), likewise announced a preliminary investigation of OPENAI™ over alleged breaches of the GDPR. Further, the French competition regulator fined GOOGLE® EUR 250 million for not informing news publishers about its use of GenAI with copyrighted content. This came in the context of GOOGLE®’s previous commitments to engage in fair payment discussions with publishers regarding content reuse.
More recently, in yet another EU member state, a non-government organization structured as a privacy advocacy group, NOYB (None Of Your Business), filed a complaint against OPENAI™, citing concerns that CHATGPT™ cannot guarantee accurate information nor rectify hallucinated personal data generated in output, which may violate EU privacy laws. NOYB has requested the Austrian data protection agency, the DSB (Datenschutzbehörde), to investigate OPENAI™’s data collection and response generation processes. The complaint highlights broader privacy risks around how AI chatbots gather and potentially share large amounts of user data, including personal information, in ways that may infringe on individuals’ privacy rights under the GDPR. NOYB has argued that companies currently cannot make AI chatbots fully compliant with EU data protection laws when dealing with people’s private data. OPENAI™, on the other hand, simply argues that ‘factual accuracy in large language models remains an area of active research’.
The European Union Artificial Intelligence Act, 2024
The EU’s Artificial Intelligence Act (the ‘AI Act’) is the first-ever comprehensive legal framework on AI; it addresses the risks of AI and positions Europe to play a leading role globally. Other countries are likely to follow on the same lines; hence, it is imperative to understand the implications of the AI Act and to be sensitised to the importance of harmonizing AI rules across the EU, fostering innovation while safeguarding fundamental rights and values. The AI Act provides for EU-wide rules on data quality, transparency, human oversight and accountability.
Proposed on April 21, 2021, the AI Act was adopted on March 13, 2024, and will be implemented in phases. It applies across all 27 EU member states, and to entities both inside and outside the EU, as long as the AI system is placed on the EU market or its use affects people located in the EU. The Act provides for risk-based classification of AI systems, and non-compliance can attract significant penalties, ranging from fines of €35 million or 7 percent of worldwide revenue down to €7.5 million or 1.5 percent of revenue, contingent upon the nature of the violation and the size of the organization.
The United Kingdom
In the UK, government intervention regarding GenAI technologies is limited due to the absence of specific laws. However, the House of Commons Culture, Media, and Sport Committee’s Third Special Report of 2023-24 emphasized that using copyright works as AI training data without proper permission could constitute copyright infringement. Additionally, the government has decided against extending the text and data mining exception to encompass AI development for commercial purposes, which would have allowed researchers to analyse copyrighted material computationally under certain conditions.
In Thaler v. Comptroller-General of Patents, Designs and Trade Marks, the Supreme Court of the UK unanimously found that an AI cannot be named as the inventor on a patent application. Thaler had submitted two patent applications for inventions created by an AI machine called ‘DABUS’, which he owned. It was held that an AI system is not a person, let alone a natural person, and therefore cannot meet the criteria of an ‘inventor’.
China
The world’s first legally binding verdict concerning copyright infringement of AI-generated images was pronounced in February 2024 by the Guangzhou Internet Court in China. It was ruled that an AI company is liable for copyright infringement related to the Ultraman series. The AI company generated images substantially similar to the Ultraman images without authorization, which the court found to violate the plaintiff’s reproduction and adaptation rights. The AI company was ordered to pay 10,000 RMB (approx. US$ 1,389) in damages. This case illustrates the potential liability of service providers for the output of their AI tools. In assessing the defendant’s liability, the Guangzhou Internet Court consulted the Interim Measures for the Management of Generative Artificial Intelligence Services that came into effect on August 15, 2023. The court ruled that as a provider of generative AI services, the defendant had an obligation to implement specific technical safeguards to prevent the generation of images substantially resembling copyrighted works.
The United States of America
Since late 2022, several class-action suits demanding jury trials have been filed in state and federal courts across the US.
• One of the earliest cases dates back to November 3, 2022, when a suit was filed against GITHUB®, MICROSOFT®, and OPENAI™ in San Francisco, challenging the legality of GITHUB COPILOT™ and OPENAI CODEX™ on the grounds that they allegedly violated the copyrights of code creators who posted their code on GITHUB®. The case remains ongoing, with the court having dismissed most of the claims.
• In January 2023, artists Sarah Anderson, Kelly McKernan, and Karla Ortiz filed a complaint against STABILITY AI™, DEVIANTART®, and MIDJOURNEY™ for using their copyrighted images to train Stable Diffusion without consent. A judge dismissed the lawsuits against MIDJOURNEY™ and DEVIANTART® but allowed a single direct infringement complaint against STABILITY AI™ to proceed.
• In July 2023, OPENAI™ and Meta were named in complaints alleging that they trained CHATGPT™ and LLAMA™ using datasets containing copyrighted material obtained from controversial ‘shadow library’ websites that distribute pirated digital content.
• In December 2023, THE NEW YORK TIMES® (‘NYT’) filed a complaint against MICROSOFT® and the OPENAI™ group, alleging unauthorized use of copyrighted content, false attribution of facts, and misappropriation of commercial referrals. OPENAI™ emphasized ongoing efforts to address the issue and raised fair-use arguments, disputing damages for reproductions older than the 3-year limitation period. The case sparked debate on fair use versus free-riding, prompting suggestions for an independent audit to assess the revenue impact on NYT before and after the deployment of the AI models. While GenAI is here to stay, unregulated usage might devolve into chaos, confusion and misuse in today’s digital era.
• More recently, in May 2024, eight newspaper publishing companies (owned by Alden Global Capital Limited) filed a complaint in the federal court of New York against OPENAI™ and MICROSOFT® accusing them of illegally using news articles to power their AI chat bots.
• Meanwhile, the US Copyright Office (the ‘USCO’) has provided some relief to individual authors and artists by granting copyright in works created using GenAI as an assistive tool. A landmark instance was Kris Kashtanova’s application in March 2023. In its registration ruling concerning the comic book ‘Zarya of the Dawn’, the USCO declined to grant protection to images produced through the generative AI art platform MIDJOURNEY™, but permitted registration for the textual content, as well as the selection and organization of both images and text, on the condition that Kashtanova asserted sole authorship over those specific elements. This approach incentivizes and protects substantive creative input from human authors who incorporate AI-generated outputs into their works, recognizing the creative expression and decision-making involved in compiling those elements into a cohesive work.
• In another application, Elisa Shupe, a 60-year-old US Army veteran, had self-published a novel about her life and advocacy for more inclusive gender recognition. This novel was among the first creative works to receive a copyright for the arrangement of AI-generated text. However, this copyright came with a caveat – Shupe was not considered the author of the whole text, but only of the ‘selection, coordination, and arrangement of text generated by artificial intelligence’. Both the above instances suggest that while AI-generated text itself may not be directly copyrightable, the creative arrangement and curation of AI-generated text by a human author can receive copyright protection.
• Simultaneously, the legal fraternity has seen the use of AI backfire, as in the case of a counsel who leveraged AI to prepare a statement of facts, a worst-case scenario for attorneys experimenting with generative AI. Prima facie, citations such as Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines looked as real as any precedent pronounced by a judicial body, but the judge could find none of these decisions, because CHATGPT™ had invented them all. The lawyer admitted to using AI for his legal research, which proved untrustworthy, and the judge issued sanctions in light of the attorney’s casual and irresponsible approach.
India
In February 2024, the Ministry of Commerce and Industry (the ‘MoCI’), through its Department of Commerce, Government of India, stated, regarding intellectual property rights infringement by GenAI, that the current Indian legal framework under the patent and copyright legislations is ‘well-equipped to protect AI-generated works and connected innovations’, and that the exclusive economic rights granted by the existing Copyright Act obligate users of GenAI to obtain permission for commercial use of such works, unless covered under fair-dealing exceptions. Illustrating the judiciary’s role in enforcing these protections, the Delhi High Court recently issued a restraining order preventing the use of Bollywood actor Jackie Shroff’s voice in a chatbot without his explicit permission. This order underscores the court’s commitment to safeguarding individual rights against unauthorized exploitation by AI technologies.
Prior to the MoCI’s release by the Press Information Bureau, there was a plea for codification of rules on the usage of GenAI, raised by the Digital News Publishers Association (an organisation of the digital arms of prominent Indian media companies) in light of THE NEW YORK TIMES® complaint against OPENAI™ and MICROSOFT®. To this, the country’s IT Minister Rajeev Chandrasekhar said that the New York Times case would be ‘a defining case’ on the rights of digital news platforms. He also advised that ‘content creators must have a right to whatever value comes out of the monetisation of that content, be it open to the public or behind a paywall’ and said that ‘legislations in that direction must be discussed’.
India’s approach towards adopting ethical AI practices began back in 2018, when the country’s apex public policy think tank, NITI Aayog, devised the National Strategy for Artificial Intelligence (‘NSAI’). A key aspect of India’s strategic approach to AI adoption lies in fostering collaboration between government entities and the private and public sectors. NITI Aayog has worked extensively not only on analysing the societal concerns but also on considering global approaches to AI regulation. Beyond NITI Aayog, the country’s regulatory authorities, the Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI), have also been proactive in enabling both innovation and regulation of artificial intelligence. While the RBI has used the sandbox mechanism to monitor, and at the same time facilitate, ethical disruption in the finance sector, SEBI has acted as a watchdog with its circular on reporting requirements for artificial intelligence/machine learning applications and systems. Lastly, with the enactment of the Digital Personal Data Protection Act, 2023, the country has shown its commitment to protecting the personal and sensitive data of its citizens and regulating the activities of those who hold such data.
Being the fastest growing economy with one of the largest populations in the world, India holds a meaningful stake in this pursuit beyond digital transformation – the AI revolution.
Conclusion
It is imperative that we adopt a balanced approach that does not hinder innovation, because in the end it is the public at large that bears the collateral damage posed by such innovations. The intent of this article resonates with Samuel Altman’s testimony before the US Senate, where he acknowledged people’s anxiety about AI and stated that ‘… government and industry together can manage the risks so that we can all enjoy the tremendous potential [of generative artificial intelligence systems].’ The technology once imagined only in fictional stories and novels is now a reality of our present-day lives. With the onset of the fourth industrial revolution (as conceptualised by the World Economic Forum), it is crucial for us to shift our focus beyond disruptive advancement alone and towards responsible innovation. The success of the disruption that AI has the potential of creating shall lie in striking the right balance between innovation and regulation.
Disclaimer – The views expressed in this article are the personal views of the authors and are purely informative in nature.