AI Agents: Legal Considerations And Future Directions
The imperative to regulate agentic AI emanates from the fact that compared to generative AI, agentic AI will have more real-world applications and impact, magnifying the potential for harm
What are AI Agents?
The next step in AI evolution is agentic AI, an AI system which can autonomously make decisions and execute tasks without needing step-by-step prompts. In an article by IBM1, an AI Agent is defined as a system that is capable of autonomously performing tasks on behalf of a user or another system by designing its own workflow and utilizing available tools. AI agents encompass a wide range of functionalities beyond natural language processing, including decision-making, problem-solving, interacting with external environments and executing actions.
AI Agents are goal-oriented and can be categorised based on how they evaluate and make decisions; they are referred to as goal-based, utility-based, model-based reflex agents, and so on. Self-driving cars are a good example of model-based reflex agents, and financial trading is an apt example of a utility-based agent.
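To make these categories concrete, the following is a minimal, illustrative sketch (in Python) of the two agent styles mentioned above: a model-based reflex agent that reacts to an internal model of its environment, and a utility-based agent that scores candidate actions against a utility function. All class names, rules and numbers here are hypothetical and are included only to show the structural difference; they are not drawn from any real self-driving or trading system.

```python
# Hypothetical sketch: two of the agent styles discussed above.

class ModelBasedReflexAgent:
    """Keeps an internal model of the world and reacts via condition-action
    rules, loosely analogous to how a self-driving stack maps perceived
    state to a manoeuvre."""

    def __init__(self):
        self.world_model = {"obstacle_ahead": False, "speed": 0.0}

    def perceive(self, sensor_reading: dict) -> None:
        # Update the internal model from the latest sensor data.
        self.world_model.update(sensor_reading)

    def act(self) -> str:
        # Condition-action rules applied to the internal state.
        if self.world_model["obstacle_ahead"]:
            return "brake"
        return "maintain_speed"


class UtilityBasedAgent:
    """Scores each candidate action with a utility function and picks the
    best, loosely analogous to a trading agent weighing expected return
    against risk."""

    def __init__(self, risk_aversion: float = 0.5):
        self.risk_aversion = risk_aversion

    def utility(self, expected_return: float, risk: float) -> float:
        return expected_return - self.risk_aversion * risk

    def act(self, candidates: dict[str, tuple[float, float]]) -> str:
        # candidates maps an action name to (expected_return, risk).
        return max(candidates, key=lambda a: self.utility(*candidates[a]))


if __name__ == "__main__":
    car = ModelBasedReflexAgent()
    car.perceive({"obstacle_ahead": True, "speed": 40.0})
    print(car.act())  # -> "brake"

    trader = UtilityBasedAgent(risk_aversion=0.8)
    print(trader.act({"buy": (0.05, 0.08), "hold": (0.01, 0.0)}))  # -> "hold"
```

The design choice matters legally as well as technically: how an agent evaluates and selects actions shapes the degree of autonomy and the human oversight measures a provider can build around it, which feeds into the liability questions discussed below.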

The Legal Framework and Key Challenges
At an extreme end, if an AI Agent were told to bring about world peace, it might decide that the best way to achieve this would be to kill all humans – the Skynet nightmare. Clearly, responsible systems with human agency at their core should be the objective of any governance architecture. The imperative to regulate agentic AI emanates from the fact that, compared to generative AI, agentic AI will have more real-world applications and impact, magnifying the potential for harm. Unfortunately, legislators are struggling to keep up with the pace of development of AI.
- Liability: Applying conventional theories of civil liability, which look for causal responsibility, may be challenging where the damage caused by an AI Agent cannot be traced back precisely to the hardware, software, user or environment. The question becomes more complicated when the system that caused harm comprises several AI Agents which interact with one another. The cases involving Tesla's autonomous cars have highlighted the complexity of the issue in real-world scenarios. The level of autonomy is an important consideration for a provider when devising the system's human oversight or risk mitigation measures in the context of the intended purpose of a system.2
Insurers may play a role in establishing standards for testing requirements. Statutory penalties and contractual risk allocation will likely remain the mainstay for compensating pure economic loss.
- Data Privacy: Agentic AI systems require enormous amounts of personal data to be able to replicate human reasoning and problem-solving skills. Against this backdrop, it may be challenging to implement the current global legal framework for personal data protection, in particular the universal principles of consent, data minimisation, the right to be forgotten/erasure and restrictions on data transfers, which are at the core of the EU GDPR and India's nascent Digital Personal Data Protection Act, 2023. Data privacy is a fundamental right, and adherence to the law may be built into standards and algorithm audit tools.
- Intellectual Property: In 2018, Obvious, a Paris-based arts collective, created a painting called ‘Portrait of Edmond de Belamy’ using a generative adversarial network. While the painting sold at a premium, it was not recognised as art by critics since it was created by technology. The present legal framework of intellectual property recognises only natural persons as inventors and creators, but this position will have to be re-examined in the light of increasing AI capabilities, as AI systems autonomously create and invent without human intervention.
Developers of foundational LLMs (OpenAI, Anthropic, etc.) are facing lawsuits around the world, with copyright owners alleging that copyrighted works have been used to train the AI models without permission. While the internet changed the economics of information, AI is changing the economics of creativity, and issues of IP ownership and infringement need to be addressed urgently.
- Ethics: The discussion on responsible AI acquires greater significance in the context of agentic AI. Transparency and accountability are the twin pillars on which the edifice of responsible AI rests. With self-learning agentic AI, it is not clear whether it would be possible in all circumstances to explain the steps and reasoning the system used to reach its decisions or take its actions.
The development of, and compliance with, standards and auditing systems are viewed as possible mitigants. Integration of AI agents with blockchain technology opens a new vista of possibilities. Blockchain's immutable ledger can ensure that AI agents make decisions and execute tasks transparently and securely. This may be useful for decentralised finance and other blockchain applications.
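As an illustration of the transparency idea, the sketch below shows an AI agent's decisions being appended to a simple hash-chained log, the basic mechanism that blockchain-style immutable ledgers rely on. It is a simplified, hypothetical example using only Python's standard library, not an integration with any actual blockchain or decentralised finance platform; the class and field names are invented for explanation.

```python
# Hypothetical sketch: a hash-chained, append-only log of agent decisions.
import hashlib
import json
import time


def _hash_entry(entry: dict) -> str:
    # Deterministic hash of the entry contents (sorted keys for stable serialisation).
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


class DecisionLedger:
    def __init__(self):
        self.chain: list[dict] = []

    def record(self, agent_id: str, decision: str, rationale: str) -> dict:
        entry = {
            "agent_id": agent_id,
            "decision": decision,
            "rationale": rationale,
            "timestamp": time.time(),
            # Link each entry to the hash of the previous one.
            "prev_hash": self.chain[-1]["hash"] if self.chain else None,
        }
        entry["hash"] = _hash_entry({k: v for k, v in entry.items() if k != "hash"})
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        # Any tampering with an earlier entry breaks the hash links that follow.
        for i, entry in enumerate(self.chain):
            expected = _hash_entry({k: v for k, v in entry.items() if k != "hash"})
            if entry["hash"] != expected:
                return False
            if i > 0 and entry["prev_hash"] != self.chain[i - 1]["hash"]:
                return False
        return True


if __name__ == "__main__":
    ledger = DecisionLedger()
    ledger.record("loan-agent-01", "approve", "credit score above threshold")
    ledger.record("loan-agent-01", "decline", "debt-to-income ratio too high")
    print(ledger.verify())  # True; editing any recorded entry would make this False
```

Because each entry embeds the hash of the one before it, altering an earlier decision record invalidates every later entry, which is what makes such a log useful for auditing agent behaviour after the fact.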
Governance and Future Outlook
The European Union has led the charge on AI governance with the EU AI Act, 2024, which offers a comprehensive definition of an AI system: “a machine-based system ... designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that…infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions…”3 This definition includes agentic AI, since the terms “varying levels of autonomy” and “adaptiveness after deployment” cover systems which are self-learning and fully autonomous.
The Indian Government, through the Ministry of Electronics and Information Technology, undertook to analyse gaps and offer recommendations for developing a comprehensive framework for the governance of AI, and published its report on 6th January 2025.4 The report emphasises the development of frameworks to ensure the responsible deployment of AI and recommends a coordinated approach across various government instrumentalities to manage AI's challenges effectively. The proposed Digital India Act is expected to strengthen the legal framework, regulatory capacity and adjudicatory setup.
Conclusion
Balancing innovation with the protection of individuals' fundamental rights is the key to successful AI regulation. Global cooperation is important to set universal principles and standards for the development and deployment of AI systems. The development and implementation of standards and regulatory principles can only succeed with the participation of the private corporations engaged in developing AI systems. Industry self-regulation through voluntary commitments on transparency and baseline commitments for high-capability AI systems, including setting up AI Ethics Boards, will go a long way in ensuring the ethical and beneficial deployment of AI.
Disclaimer – The views expressed in this article are the personal views of the author and are purely informative in nature.
2. https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application
3. https://artificialintelligenceact.eu/article/3/
4. https://www.meity.gov.in/content/report-ai-governance-guidelines-development-public-consultation