Impact Of AI Regulation On Technology Contracts
The GDPR is generally recognized as setting the "gold standard" in relation to data protection. Clearly the EU is hoping for a similar pattern to emerge in relation to their AI Act.
It is difficult to avoid the hype surrounding Artificial Intelligence (AI) and its likely impact on industry and wider society. Part of the discussion at least involves the extent to which lawmakers and regulators should be attempting to set out guard rails for the development and implementation of AI solutions and services, and we are already seeing the first examples of such approaches (most notably with the new EU AI Act). However, what impact will this have upon the process of negotiating and drafting technology-related contracts?
The first point to note in this regard is that we have some potential precedents to fall back on, in the form of the web of laws and regulations relating to the use and processing of personal data.
In the earlier days of the information revolution, the proliferation of technology solutions and the growing ease of manipulation of large data sets led to concerns as to how personal data might be abused to the detriment of the rights of individuals. Many countries (albeit not all!) accordingly concluded that legislation was required to strike an appropriate balance between such rights and the legitimate expectations of businesses.
At that time – and much as we are now seeing in relation to AI – the EU sought to acquire “first mover” advantage by setting out a detailed set of rules and regulations related to the processing of personal data, first through the 1995 Data Protection Directive (95/46/EC) and thereafter in 2016 via the General Data Protection Regulation (generally referred to as GDPR).
The GDPR replaced the 1995 Directive, and introduced stricter data protection rules, enhanced individual rights, and imposed significant penalties for non-compliance.
From the perspective of the drafting of technology contracts, the original 1995 Directive gave rise to more detailed contractual provisions regarding the treatment of personal data (which previously might have been dealt with only in passing, and might have been simply subsumed within the wider treatment of confidential information, whether or not it related to living individuals).
The advent of GDPR was a game changer in this regard, because of both the quantum of the potential fines which could be levied for non-compliance (up to the greater of €20m or 4% of global group turnover) and the apparent willingness of data protection regulators to make use of these new powers to actually impose significant fines in practice.
The parallel with the EU AI Act is therefore easy to see, given that the quantum of potential fines for non-compliance with this legislation is – at worst – the greater of €35m or 7% of global turnover (i.e. even higher than the maximum fines available under the GDPR).
Whilst this level of fines is reserved for the most egregious breaches (eg. using AI for the “prohibited” purposes identified in the EU Act), more likely types of infringement (eg. regarding usage of inappropriate data sets) can attract fines of the greater of €15m or 3% of global turnover, and even the provision of inaccurate or incomplete information can result in fines of the greater of €7.5m or 1% of global turnover.
Faced therefore with the EU Act (and the likelihood that other countries will, in due course, take similar approaches), contract drafters will need to react, in much the same way as they did in relation to GDPR.
The first area of attention will be the imposition of obligations to mirror those in the EU Act, eg. so that a customer who is commissioning or licensing some form of AI solution can be assured that the relevant supplier has complied with the key requirements, and in particular so as to require warranties/representations as to:
- The creation and implementation of risk management systems to identify and mitigate risks throughout the AI development lifecycle
- Ensuring the use of appropriate and accurate data sets
- The provision of comprehensive technical documentation
- Ensuring transparency and explainability of the decision-making process
- Robustness, accuracy and security of the underlying application code
For their part, suppliers will want to ensure that they are not being prejudiced by the instructions or data that they are provided with by their customers (eg. if the data that the customer provides is itself discriminatory or biased in some way, then it is highly likely that the outputs of the AI solution will reflect this).
Disclaimer – The views expressed in this article are the personal views of the author and are purely informative in nature.