Future Proofing Companies: The Essential Role Of In-House Counsel In The AI Age
In-house counsel would do well to remember that implementing AI without sufficient human oversight exposes companies to legal challenges relating to negligence, discrimination, and procedural fairness.
In a world driven by algorithms, who watches the coders? The question echoes the Roman poet Juvenal's famous phrase 'Quis custodiet ipsos custodes?' ('Who watches the watchmen?'), which asks who holds the decision-makers themselves accountable.
Organizations and professionals are employing artificial intelligence ("AI") technologies to boost productivity, streamline operations, and predict consumer behavior. While these innovations create opportunities, they also pose significant legal and ethical challenges. As a result, the role of in-house counsel has expanded beyond the traditional remit of a legal advisor confined to compliance and dispute resolution; counsel are now required to wear multiple hats.
While India has begun to introduce regulations and policy guidelines that directly or indirectly address AI-related concerns, it does not yet have a comprehensive legal framework specifically for AI. This regulatory gap, and the ambiguities that flow from it, has heightened both the importance and the complexity of the in-house counsel's role. Counsel must now collaborate across disciplines with technologists and draw on jurisprudence from existing legislation and guidance, as well as global guidance on AI, to act as the ethical compass guiding the responsible use of AI.
The provisions of the Constitution of India (Articles 14, 15, and 16) and existing anti-discrimination laws serve as a foundation for ensuring that AI technology used across sectors does not discriminate against individuals based on race, gender, religion, or caste. In-house counsel will need to be particularly vigilant if their company uses AI in recruitment, as such systems can produce biased outcomes. Similarly, staying abreast of sector-specific AI regulatory guidelines is paramount. One example is the Reserve Bank of India ("RBI") guidelines on digital lending, 2022[1], which focus on ensuring fairness, transparency, and accountability when AI is used in digital lending apps, and mandate that companies using AI for credit scoring ensure consumers are not unfairly denied loans based on flawed or biased algorithms.
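To make the recruitment example concrete, the sketch below shows one simple screening heuristic a legal team might ask its data team to run over a hiring tool's outputs: the "four-fifths rule" used in U.S. disparate-impact practice as a rough red flag. The groups, numbers, and threshold here are hypothetical illustrations; Indian law does not prescribe this test, and passing it is no guarantee of compliance.

```python
# Illustrative sketch: "four-fifths rule" check on an AI screening tool's
# shortlisting outcomes. All groups, counts, and the 0.8 threshold are
# hypothetical; this is a red-flag heuristic, not a legal standard.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the AI tool shortlisted."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical shortlisting outcomes from an AI recruitment tool.
outcomes = {
    "group_a": selection_rate(selected=48, applicants=100),
    "group_b": selection_rate(selected=30, applicants=100),
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths heuristic
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

A result below the 0.8 line would not itself establish discrimination, but it is the kind of early signal counsel can use to trigger a closer legal and technical review of the model.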
On March 15, 2024[2], the Ministry of Electronics and Information Technology ("MeitY") issued an advisory in continuation of its advisory dated December 23, 2023 ("Advisory"). The Advisory superseded the earlier advisory dated March 1, 2024, doing away with the mandatory prior-government-approval regime in favor of a notification-based regime, and easing the requirements for submitting 'action taken-cum-status reports' for unreliable or untested AI models. The Advisory also reminds intermediaries, platforms, and their users that non-compliance with the IT Act and its rules can attract penal consequences under criminal law, and it imposes a compliance burden requiring explicit labeling of deepfakes.

The recent PIL filed in the Hon'ble Delhi High Court on deepfakes[3] is a landmark development concerning the growing threat of deepfake technology, i.e., the use of AI to manipulate video and audio, which poses a serious threat to privacy, reputation, and, in some cases, national security. While the court has yet to deliver its final ruling, the case underscores the judiciary's awareness of AI's darker capabilities and the growing necessity for regulation.
In-house counsel must closely monitor AI-related developments and ensure that their company's AI deployments align with emerging standards and regulations, staying true to the constitutional principles outlined above.
India's Digital Personal Data Protection Act, 2023 ("DPDP Act") focuses on protecting PII[4] by imposing restrictions on the collection, processing, and storage of personal data, which is central to AI systems that rely heavily on large datasets. The Aadhaar judgment[5], although not directly related to AI, provides a significant framework for the handling of personal data. Similarly, in the Schrems II case (2020)[6], the Court of Justice of the European Union invalidated the EU-U.S. Privacy Shield, citing concerns about the transfer of personal data to countries with insufficient protections. This ruling has profound implications for companies using AI systems that process or transfer personal data across borders, since AI systems often rely on cloud computing, which may involve cross-border data flows.
For in-house counsel, these judgments necessitate revisiting data transfer agreements and ensuring compliance with the latest data protection standards: counsel should collaborate with AI engineers to implement privacy-by-design principles, embedding data protection measures into AI systems from their inception.
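As one illustration of what privacy-by-design can mean at the engineering level, the minimal sketch below strips obvious personal identifiers from records before they reach an AI pipeline, so data minimization is enforced at ingestion rather than patched on afterwards. The regex patterns and placeholder tokens are hypothetical assumptions; production systems would use dedicated redaction tooling validated with the legal team.

```python
# Illustrative privacy-by-design sketch: redact obvious PII before records
# are passed to an AI/analytics pipeline. The patterns below are simplistic
# examples; real deployments need far more robust, tested tooling.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{10}\b"),  # naive 10-digit number match
}

def redact(text: str) -> str:
    """Replace matched PII with placeholder tokens before downstream use."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

record = "Reach Priya at priya@example.com or 9876543210 about her loan."
print(redact(record))
# -> "Reach Priya at [EMAIL_REDACTED] or [PHONE_REDACTED] about her loan."
```

The design point for counsel is that redaction happens before the data ever touches the model or leaves the jurisdiction, which is easier to defend under the DPDP Act and cross-border transfer rules than retroactive cleanup.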
Another key challenge is determining who owns intellectual property (IP) in AI-generated works. The Copyright Act, 1957 does not currently recognize AI-generated works explicitly, and debates are ongoing about whether AI can be considered an author or inventor, and about how to manage IP rights when AI tools contribute to creative or technical outputs. Courts across jurisdictions have taken varying stances on whether AI can hold copyright or whether its output belongs to the user or developer. In a widely discussed case in the United States, Thaler v. United States Copyright Office (2022)[7], an AI system called the "Creativity Machine" created a piece of artwork and its creator attempted to claim copyright on its behalf. The court ruled that AI-generated works cannot receive copyright protection in the U.S. absent significant human involvement. The case sets an important precedent for companies using AI to create content or products: without human creativity, the output cannot be protected under copyright law. In another case, the Delhi High Court[8] emphasized the necessity of human creativity in works created with AI assistance, holding that a list compiled through computing without human intervention cannot receive copyright protection.
Therefore, in-house legal teams need to consider the implications of using AI for content creation or innovation. For instance, if an AI tool helps a company develop a new product design, questions may arise about who holds the rights to that design. This is an area of ambiguity under current Indian law, and companies must stay informed as IP frameworks evolve to address AI-related issues. This also calls for regular consultations with external legal experts in regions where the company operates.
Despite the complexities associated with AI, Indian courts have shown receptiveness to integrating AI into daily operations. In a matter before the Punjab & Haryana High Court[9], the judge asked ChatGPT for input to gain a more comprehensive understanding of whether bail should be granted in cases involving "cruelty", and the Manipur High Court[10] used ChatGPT to assist in legal research while ruling on a complex case. Both courts, however, acknowledged that while AI tools like ChatGPT are acceptable for reference, the legal reasoning employed in judicial pronouncements must remain subject to human assessment. For in-house counsel, these judgments underscore the importance of building systems that retain human control over critical decisions: implementing AI without sufficient human oversight exposes companies to legal challenges related to negligence, discrimination, and procedural fairness.
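One common way to operationalize that oversight is a routing rule under which an AI recommendation takes effect automatically only when it is non-adverse and high-confidence, with everything else escalated to a human reviewer. The sketch below is a minimal, hypothetical illustration of that human-in-the-loop pattern; the field names and threshold are assumptions a governance policy would need to set, not a prescribed standard.

```python
# Illustrative human-in-the-loop sketch: an AI recommendation is acted on
# automatically only when it is non-adverse and clears a confidence
# threshold; everything else is queued for human review.
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    case_id: str
    decision: str      # e.g. "approve" / "reject"
    confidence: float  # model-reported confidence, 0.0 to 1.0

REVIEW_THRESHOLD = 0.95  # illustrative value, set by governance policy

def route(rec: AIRecommendation) -> str:
    """Return 'auto' or 'human_review' for a given AI recommendation."""
    if rec.decision == "reject" or rec.confidence < REVIEW_THRESHOLD:
        return "human_review"  # adverse or low-confidence outcomes escalate
    return "auto"

print(route(AIRecommendation("C-101", "approve", 0.98)))  # auto
print(route(AIRecommendation("C-102", "reject", 0.99)))   # human_review
```

Escalating every adverse outcome, regardless of confidence, reflects the courts' point above: the machine may assist, but a human must own the reasoning behind any decision that harms someone.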
Legal counsel can also borrow jurisprudence from global legislation such as the comprehensive European Union Artificial Intelligence Act, 2024 ("EU AI Act"), which creates a tiered risk-categorization system. The EU AI Act applies to any AI system that is on the market, in service, or in use within the EU. It categorizes AI systems mainly under three subsets: (i) unacceptable-risk AI systems, such as social scoring and emotion manipulation, which are prohibited outright; (ii) high-risk AI systems, i.e., those used in employment, education, and other essential services, which must meet conformity requirements and be registered in a public database; and (iii) limited- and minimal-risk AI, where limited-risk systems carry light transparency obligations (such as disclosing that content is AI-generated) and minimal-risk systems remain largely unrestricted.
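For counsel mapping their company's AI inventory against this structure, a simplified tiering exercise like the sketch below can make the categories concrete. The category sets and use-case names are illustrative assumptions loosely modeled on the Act's broad structure; actual classification requires legal analysis of the Act's text and annexes.

```python
# Simplified sketch of an AI-use-case inventory tagged with EU AI Act-style
# risk tiers. The mapping below is an illustrative assumption for internal
# triage, not legal advice on how the Act classifies any real system.
PROHIBITED = {"social_scoring", "manipulative_techniques"}
HIGH_RISK = {"recruitment_screening", "credit_scoring", "exam_proctoring"}
LIMITED_RISK = {"customer_chatbot", "ai_generated_marketing_content"}

def risk_tier(use_case: str) -> str:
    if use_case in PROHIBITED:
        return "unacceptable risk: prohibited"
    if use_case in HIGH_RISK:
        return "high risk: conformity and registration obligations"
    if use_case in LIMITED_RISK:
        return "limited risk: transparency obligations (e.g. disclosure)"
    return "minimal risk: no additional obligations"

for uc in ["recruitment_screening", "customer_chatbot", "internal_spellcheck"]:
    print(f"{uc}: {risk_tier(uc)}")
```

Even in jurisdictions where the Act does not apply, keeping such an inventory gives counsel a defensible record of which deployments were assessed and why they were treated as higher or lower risk.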
Learning from such legal developments, in-house counsel must be at the forefront of developing and proactively implementing AI governance frameworks that balance innovation with risk mitigation, addressing key concerns such as data privacy, algorithmic transparency, human oversight, and compliance with applicable laws.
In an age where AI is increasingly becoming the cornerstone of innovation, a verse from the Rigveda[11], reflecting the Vedic teaching that knowledge continues to grow without limits, resonates now more than ever. The scope of knowledge is indeed boundless, especially when machines are capable of learning and adapting beyond human limitations. This is where the role of the in-house counsel becomes pivotal: serving both as a legal safeguard and as a strategic advisor who can help future-proof companies as the AI landscape continues to evolve.
Disclaimer – The views expressed in this article are the personal views of the author and are purely informative in nature.
1. Reserve Bank of India, Guidelines on Digital Lending, 2022 (rbi.org.in)
2. MeitY, Advisory dated March 15, 2024 (meity.gov.in)
3. "PIL filed in Delhi High Court against use of deepfake technologies in political campaign for Lok Sabha, Assembly elections", The Hindu
4. Personally Identifiable Information
5. Justice K.S. Puttaswamy (Retd.) v. Union of India (2018)
6. Court of Justice of the European Union, judgment in the Schrems II case, Case C-311/18 (europa.eu)
7. Thaler v. United States Copyright Office (2022) (uscourts.gov)
8. Navigators Logistics Ltd. v. Kashif Qureshi & Ors. (2018), Delhi High Court
9. Jaswinder Singh v. State of Punjab, Punjab & Haryana High Court, order dated March 27, 2023
10. Md Zakir Hussain v. State of Manipur, WP(C) No. 70 of 2023, Manipur High Court
11. Rigveda, Mandala 1, Sukta 116, Mantra 39