Artificial intelligence is fast becoming ubiquitous in the modern world. From your bookkeeping software to your washing machine, from your speakers to your car, AI is everywhere.
Companies view AI as the new battleground on which to differentiate their products from their competitors’, but with new features being rushed to market, have executives thought through all the repercussions? When an AI does something unexpected, who is legally responsible for the consequences? If you run a business or provide legal services, AI is the next frontier.
Whilst the impact may be minimal for the limited artificial intelligence embedded in a washing machine, the same cannot be said for industries where automated decisions are the difference between life and death. When a self-driving car is involved in an accident, who is to blame: the owner, the car manufacturer, the retailer, or the software provider? If a doctor uses AI to help assess a patient, the system gets the decision wrong, and somebody receives the wrong medication, is the doctor liable? Is the hospital? Is the AI software firm? Could the AI itself become a legal entity that faces the consequences? These are all questions that are already being tested in real life.
Legal Frameworks for AI Liability
As these new technologies have emerged, lawmakers have struggled to keep pace. However, there are now legal frameworks, some under discussion and some already in force, that cover AI, and the best of them are built to adapt to the rapid pace of change.
For example, the EU recently introduced the AI Act, which aims to address the risks and benefits of AI and position the EU on the global playing field. The AI Act provides AI developers and those who deploy their systems with “clear requirements and obligations” for specific uses of AI, whilst also reducing the financial and administrative burdens on small and medium-sized businesses.
Meanwhile, in the US there is as yet no legal framework governing AI in force, but the SAFE Innovation AI Framework offers a set of guiding principles for AI developers, companies, and policymakers, and is seen as a first step towards a federal law on the subject.
AI Failures
There have already been numerous AI failures, which show both the dangers of rushing a product to market and how difficult it is to consider every eventuality with such a broad technology.
Amazon avoids hiring women
Back in 2014, Amazon developed an AI platform for hiring new staff, intended to streamline the recruitment process for both the company and applicants. However, the system broadly failed in these objectives and, worse, was found to produce discriminatory results against women: trained on a decade of CVs submitted mostly by men, it learned to penalise applications that mentioned the word “women’s”. Amazon eventually scrapped the tool.
Don’t trust Google for recipe tips
More recently, Google rolled out Gemini, its own AI platform to rival ChatGPT. As one might expect when such a huge brand launches a consumer-facing AI platform, people threw countless questions at the new system and quickly found some answers to be less than trustworthy. In one glaring example, when asked how to get cheese to stick to a pizza, Google’s Gemini-powered search results suggested adding non-toxic glue to the sauce!
Is generative AI just plagiarism?
The recent emergence of generative AI tools, which can create text, music, art, or video from a series of prompts, has raised a new question: can AI create original art, or is everything it produces just plagiarism? And if you use art created by these tools on your own website or in your company’s promotional material, what happens if it is actually found to infringe copyright?
How to mitigate the legal risks of AI
A proactive approach to understanding and addressing the legal risks involved in AI will help businesses navigate this evolving landscape. Setting clear AI policies that define the roles and responsibilities of employees using the technology, backed by a transparent chain of accountability, is a good place to start. And engaging legal experts in the field whenever new questions emerge is the best way to minimise those risks.
