In this article, our Head of Commercial, Technology and Data, Chris Perrin, explores the legal landscape of AI in the UK and provides tips for businesses on how to navigate this complex area of law.
Artificial intelligence (AI) is one of the hottest topics right now in our client conversations, and with good reason. Across industries, from healthcare to finance, AI is transforming the way in which businesses operate. However, whilst AI may be put to many valuable uses, the news stories that appear with growing frequency are those centred on its risks: the rapid pace of its development and questions of potential consciousness as systems become more advanced.
The use of AI therefore raises a range of legal and ethical challenges that businesses must carefully navigate.
Overview of AI and the law
By definition, AI refers to the use of algorithms and machine learning to automate decision-making processes. Whilst it can offer significant benefits to businesses, it also raises a range of legal and ethical issues, including data protection, discrimination and liability.
Currently in the UK, there is no specific legislation governing the use of AI. However, that’s unlikely to stay the case for long. In late March 2023, the UK government published its long-awaited white paper setting out its proposals to govern and regulate AI.
The paper, titled ‘A pro-innovation approach to AI regulation’, details how the government intends to support innovation while providing a framework to ensure risks are identified and addressed. Rather than target specific technologies, it focuses on the context in which AI is deployed. This, the government claims, will enable regulators to take a balanced approach, weighing the benefits against the potential risks.
Other recent government decisions very much support this pro-innovation approach. For example, a task force will receive initial start-up funding of £100m to help accelerate research and development in the field of AI. This has been introduced with a view to ensuring that the UK remains at the forefront of AI innovation by 2030, while giving priority to responsible and ethical AI development.
Unsurprisingly, the UK is not alone in seeking to regulate AI. After a long period in which AI went largely unregulated worldwide, the EU, the US and China are also on the road to implementing their own regulatory regimes. It will be very interesting to see how each of these regimes pans out, as this will likely influence where AI companies focus both their resources and efforts.
Until an AI bill emerges and becomes law, businesses using AI must comply with existing laws and regulations, such as the UK and EU versions of the General Data Protection Regulation (GDPR) and the Equality Act 2010.
Legal and ethical challenges of AI
1. Data Protection: AI relies on large amounts of data to function effectively. However, this data must be collected, processed and stored in compliance with data protection laws, such as the GDPR. Businesses must ensure that they have a lawful basis for processing (such as consent, where that is the basis relied upon) and are transparent about how the data is being used.
2. Discrimination: AI can perpetuate or even exacerbate discrimination. For example, if the data used to train an AI system is biased, that bias may be reflected in the decisions the system makes. Businesses must ensure that their AI systems do not discriminate against individuals based on protected characteristics, such as race, gender, or disability; a short illustration of one such check follows this list.
3. Liability: One of the most significant legal challenges of AI is determining who is responsible if something goes wrong. If an AI system makes a decision that causes harm, it can be challenging to determine whether the responsibility lies with the business that developed the AI system, the individual who trained it, or the AI system itself.
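To make the bias point concrete, the sketch below shows one common fairness check: comparing outcome rates across a protected characteristic. It is a minimal illustration only; the column names, data and 5% tolerance are hypothetical assumptions, not a legal standard.

```python
# Minimal sketch of an outcome-rate (demographic parity) check on an AI
# system's decisions. Column names ("gender", "approved") and the 5%
# threshold are illustrative assumptions, not a legal standard.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1,   1,   0,   1,   0,   1,   1,   1],
})

# Approval rate per group: a large gap may indicate the system is
# reproducing bias present in its training data.
rates = decisions.groupby("gender")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Approval-rate gap between groups: {gap:.0%}")

if gap > 0.05:  # illustrative tolerance only
    print("Warning: outcome rates differ materially; investigate before deployment.")
```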
Tips for navigating the legal landscape of AI
1. Conduct a Data Protection Impact Assessment (DPIA): required under the UK GDPR where processing is likely to result in a high risk to individuals, a DPIA will allow your organisation to identify the potential privacy risks associated with your AI system and put measures in place to mitigate them.
2. Audit your data: an audit allows you to verify that your data is unbiased and does not perpetuate discrimination. Consider using a diverse range of data sources so that your AI system is trained on a representative dataset; a minimal audit sketch follows this list.
3. Document your decision-making processes: if a legal challenge arises, it will be important to be able to demonstrate how your AI system reaches its decisions. Documenting these processes is an essential part of that proof; see the logging sketch after this list.
4. Review your contracts: your contracts should reflect the legal and ethical considerations of AI. Consider including provisions that allocate liability and responsibility for any harm caused by an AI system.
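Picking up tip 2, here is a minimal sketch of a representativeness audit: comparing the make-up of a training dataset against the population the system will serve. The column name, groups and reference figures are hypothetical assumptions.

```python
# Minimal sketch of a training-data audit: does the dataset's make-up
# broadly match the population the AI system will be used on?
# Column name, group labels and reference figures are hypothetical.
import pandas as pd

training_data = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "B", "A", "A", "B", "A"],
})

# Hypothetical reference distribution for the population the system will serve.
reference = {"A": 0.60, "B": 0.40}

observed = training_data["ethnicity"].value_counts(normalize=True)
for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    # Flag groups that fall well below their expected share (80% rule of thumb).
    flag = "UNDER-REPRESENTED" if actual < expected * 0.8 else "ok"
    print(f"{group}: {actual:.0%} of data vs {expected:.0%} of population [{flag}]")
```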
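And for tip 3, a minimal sketch of decision logging: an append-only record of what the system was asked, which model version answered, and what it decided, so that a decision can be explained later if challenged. The field names and file path are illustrative assumptions.

```python
# Minimal sketch of decision logging: one auditable, append-only record per
# automated decision. Field names and the file path are illustrative.
import json
from datetime import datetime, timezone

def log_decision(inputs: dict, decision: str, model_version: str,
                 path: str = "decision_log.jsonl") -> None:
    """Append one auditable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a hypothetical credit decision so it can be explained later.
log_decision({"income": 32000, "postcode_area": "M1"}, "declined", "risk-model-1.4")
```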
There is no doubt that the relationship between AI and business has the potential to yield unprecedented growth and innovation. However, it also presents numerous legal and ethical challenges to organisations of all sizes.
To ensure that they are complying with existing laws and regulations, there are first steps businesses need to take: conducting a DPIA, auditing data, documenting decision-making processes and reviewing contracts.
If you would like to discuss any of the issues mentioned in this article around AI compliance, or the actions recommended to keep your organisation’s use of AI within the law, please get in touch.