What Is AI Washing And How Can I Protect My Innovations And Business?

Key Points

  • AI washing, the practice of exaggerating or misrepresenting the capabilities of an artificial intelligence system, can expose developers and sellers to regulatory enforcement, contractual liability, and claims of misrepresentation under English law.
  • The Digital Markets, Competition and Consumers Act 2024 gives the Competition and Markets Authority direct enforcement powers to impose fines of up to 10% of global annual turnover for misleading commercial practices, including false AI claims, with effect from 6 April 2025.
  • The Misrepresentation Act 1967 and the Consumer Rights Act 2015 provide contractual remedies for those who contract on the basis of false statements about an AI system’s capabilities, including the right to rescind the contract and claim damages.
  • The Advertising Standards Authority has confirmed that existing advertising codes apply in full to AI-related claims: businesses must be able to substantiate any assertion made about what their AI system can do.
  • Thorough due diligence on AI products before procurement, combined with precise contractual warranties and representations, significantly reduces the risk of disputes and provides clear recourse when a system fails to perform as described.

If there was ever a time to make money, it is now. Not since the Industrial Revolution has there been so much opportunity for people with a bit of gumption to get ahead. Thousands of businesses have popped up, vigorously competing for contracts, investment, and customers on the strength of claims about what their AI systems can do. Some of those claims are accurate. Others are embellished (looking at you, OpenAI). A growing number are simply false, and the legal consequences of that gap between description and reality are becoming increasingly significant.

The term “AI washing” describes the practice of overstating the extent to which a product or service uses artificial intelligence, or exaggerating what an AI system is capable of delivering. It mirrors greenwashing in structure and, increasingly, in regulatory response. Where greenwashing misrepresents environmental credentials, AI washing misrepresents technical capability.

The Law Around AI Washing

Misrepresentation Act 1967

The Misrepresentation Act 1967 remains the central instrument for contractual misrepresentation claims in English law. Where a party makes a false statement of fact that induces another to enter a contract, the misled party may rescind the contract, claim damages, or both. The Act distinguishes between three forms of misrepresentation: fraudulent (knowingly false), negligent (made without reasonable grounds for belief), and innocent (made in good faith with reasonable grounds for belief).

Negligent misrepresentation is the most relevant category in the AI washing context. A business that asserts its software uses machine learning or predictive analytics when it does not, or claims accuracy rates that its system cannot achieve, may struggle to demonstrate it had reasonable grounds for that belief. Under the Act, the burden of proof shifts: the representor must prove they had reasonable grounds to believe the statement was true. That is a demanding standard where the technical shortcomings of a system are internal knowledge.

The Digital Markets, Competition and Consumers Act 2024

The Digital Markets, Competition and Consumers Act 2024 (DMCCA) represents a significant expansion of the Competition and Markets Authority’s enforcement powers. From 6 April 2025, the CMA can take direct enforcement action against businesses engaging in misleading commercial practices, without needing to apply to court. Fines can reach 10% of global annual turnover, or £300,000, whichever is higher. Individual directors and officers can also face personal fines of up to £300,000 for procedural breaches.

A misleading commercial practice under the DMCCA includes making false statements about the characteristics of a product or service, its capabilities, and the results to be expected from its use. An AI system marketed as capable of performing functions it cannot perform, or described as using AI when it does not, falls squarely within this prohibition. The CMA has made clear that consumer protection law applies whether information is delivered by humans or AI systems.

The Consumer Rights Act 2015

Where an AI product or service is supplied to a consumer, the Consumer Rights Act 2015 applies. Digital content, which includes software and AI-based services, must be of satisfactory quality, fit for a particular purpose, and match its description. A product sold as “AI-powered” that delivers results no different from basic rule-based automation may fail to match its description, giving the consumer the right to a repair, replacement, price reduction, or, in some cases, a full refund.

The Regulators

Competition and Markets Authority

The CMA has confirmed that its consumer protection enforcement powers apply fully to AI-related claims. Businesses supplying AI agents, AI tools, or AI-enabled services to consumers or commercial customers must ensure that representations about those systems are accurate, clear, and capable of substantiation. The CMA’s March 2026 guidance on AI agents makes clear that businesses are responsible for what their AI does in the same way they are responsible for what an employee does, including in respect of misleading statements made on their behalf.

Financial Conduct Authority

The FCA has made clear that existing market abuse and communications rules prohibit AI washing in financial services. Firms regulated by the FCA must communicate with clients in a way that is fair, clear, and not misleading. Where a regulated firm markets an AI-driven product by overstating its capabilities, for example, claiming predictive accuracy or automation features that do not exist, it risks breaching its regulatory obligations. The FCA has expanded its consumer duty requirements through its 2025 and 2026 work programmes, with an explicit focus on AI systems that may mislead consumers about outcomes or performance.

Advertising Standards Authority

The Advertising Standards Authority has confirmed through its 2025 guidance that existing advertising codes are technology-neutral and apply in full to AI-related claims. Advertisers must not falsely claim that a product uses AI when it does not, exaggerate the functionality of AI features, or claim that an AI product performs better than a non-AI alternative without evidence. The ASA has already ruled on advertisements using AI to make misleading claims about product performance, and will continue to monitor this area closely. Where a misleading AI advertisement also constitutes an unfair commercial practice, it may also attract CMA enforcement.

Contractual and Commercial Risks

Beyond regulatory enforcement, AI washing creates serious contractual risks for suppliers and buyers alike. A supplier who represents that its system has capabilities it does not possess exposes itself to misrepresentation claims, warranty breaches, and indemnity obligations. The financial consequences can include rescission of the contract, return of fees paid, and damages for loss flowing from the reliance on the false representation.

For procurers of AI systems, the risks flow in the other direction. A business that deploys an AI product on the basis of inflated capability claims, and then uses that product to make customer-facing decisions, may find itself responsible for the consequences of those decisions even though the underlying system was not as described. The CMA’s guidance is unequivocal: responsibility for what an AI agent does rests with the business that deploys it, not with the AI provider. Contractual indemnities and robust warranty provisions in the procurement agreement are therefore essential.

Directors and senior officers of businesses engaged in AI washing face personal exposure. Under the DMCCA, individuals can be fined for procedural non-compliance in CMA investigations. More broadly, where a director personally signs off on marketing materials containing false AI claims, or makes representations in a prospectus or investor document that overstate an AI system’s capabilities, liability may arise under securities law or under the general law of misrepresentation. The US Securities and Exchange Commission has already brought enforcement actions against investment advisers for making false and misleading statements about their AI use, and the FCA has signalled it will apply its existing rules to address the same conduct in the United Kingdom.

Practical Implications for Businesses

Taking the time to conduct a thorough risk management assessment designed for AI developments, and to create a risk register, can help prevent future commercial disputes or regulatory investigations. The following steps address both sides of the transaction: businesses selling or marketing AI systems, and those buying or procuring them.

For businesses selling or marketing AI products and services:

  • Audit your claims before making them. Every statement made about an AI system’s capabilities, whether in marketing materials, sales pitches, contracts, or investor documents, should be reviewed for accuracy and substantiated by technical evidence. This is not a once-and-done exercise: as AI systems are updated, prior representations should be revisited (a simple claims register, sketched after this list, helps with that).
  • Involve legal and technical teams together. When I draft warranties and representations for AI models, I need to understand the technical capabilities of the system. I then need to ensure that marketing teams understand the legal consequences of overstating an AI system’s capabilities.
  • Draft warranties carefully. Contractual warranties about system performance, accuracy, and capability should be realistic and, where appropriate, qualified by reference to specific operating conditions, data quality requirements, or intended use cases. Vague aspirational language creates liability.
  • Manage update risks. AI systems change. A warranty that was accurate at the time of contracting may become inaccurate as the system is modified. Contracts should address what happens when an update changes the system’s capability, and who bears the risk of that change.
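
One practical way to keep claims, evidence, and update risk in view is a simple claims register. The sketch below is purely illustrative: the field names, the 90-day-style review logic, and the example claim are hypothetical choices for the purpose of the example, not a prescribed or legally required format.

```python
from dataclasses import dataclass, field
from datetime import date

# Purely illustrative: field names, the review rule, and the example claim are
# hypothetical; this is not a prescribed or legally required format.

@dataclass
class AIClaim:
    statement: str       # the claim exactly as it appears in marketing or a contract
    evidence: str        # reference to the technical evidence that substantiates it
    last_reviewed: date  # when the claim was last checked against the live system
    owner: str           # person accountable for keeping the claim accurate

@dataclass
class ClaimsRegister:
    claims: list = field(default_factory=list)

    def stale_claims(self, last_system_update: date) -> list:
        """Return claims not re-reviewed since the system last changed."""
        return [c for c in self.claims if c.last_reviewed < last_system_update]

register = ClaimsRegister(claims=[
    AIClaim(
        statement="Predicts customer churn with 92% accuracy",
        evidence="Internal validation report v3 (June test set)",
        last_reviewed=date(2025, 6, 1),
        owner="Head of Product",
    ),
])

# After each model update, flag every representation that needs re-substantiating.
for claim in register.stale_claims(last_system_update=date(2025, 9, 1)):
    print(f"Re-review before republishing: {claim.statement}")
```

However it is recorded, the point is the discipline: every public claim is tied to named evidence and an owner, and every system update triggers a re-review.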

For businesses procuring AI systems:

  • Conduct technical due diligence before contracting. Claims made about an AI system’s capabilities should be tested, not assumed. Require the supplier to demonstrate the system’s performance under conditions representative of your intended use, and record those demonstrations.
  • Require specific, measurable warranties. Contractual warranties should state precisely what the system will do, at what level of accuracy, under what conditions, and with what data. Vague references to “AI-powered” or “intelligent” functionality provide little protection in a dispute (see the acceptance-test sketch after this list).
  • Include rights of audit and monitoring. A well-drafted AI procurement contract should include the right to audit system performance against the warranted specification, and to require remediation where the system falls short.
  • Allocate liability clearly between supplier and deployer. Given the CMA’s confirmation that deployers are responsible for what their AI agents do, the contract must address what happens when the system causes harm because it was not as described. I make sure that indemnity provisions, insurance obligations, and limitation clauses are carefully negotiated and take into account, in so far as possible, foreseeable future regulatory developments.
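
To make the idea of specific, measurable warranties concrete, the sketch below shows one way a buyer might record a supplier demonstration and check it against warranted figures. It is a minimal illustration under assumed metrics (accuracy and false positive rate) and invented thresholds, not a template for any particular procurement contract.

```python
# Purely illustrative: the warranted metrics and thresholds are invented examples
# of "specific, measurable warranties", not figures from any real contract.

WARRANTED = {
    "accuracy": 0.90,             # supplier warrants at least 90% accuracy
    "false_positive_rate": 0.05,  # and no more than 5% false positives
}

def acceptance_test(measured: dict) -> list:
    """Compare measured performance on representative data against the warranted spec.

    Returns human-readable failures; an empty list means every warranted figure
    was met in this test run.
    """
    failures = []
    if measured["accuracy"] < WARRANTED["accuracy"]:
        failures.append(
            f"accuracy {measured['accuracy']:.0%} is below the warranted {WARRANTED['accuracy']:.0%}"
        )
    if measured["false_positive_rate"] > WARRANTED["false_positive_rate"]:
        failures.append(
            f"false positive rate {measured['false_positive_rate']:.0%} exceeds the "
            f"warranted maximum of {WARRANTED['false_positive_rate']:.0%}"
        )
    return failures

# Results recorded from a supplier demonstration on data representative of the intended use.
print(acceptance_test({"accuracy": 0.87, "false_positive_rate": 0.04}))
# ['accuracy 87% is below the warranted 90%']
```

Recording the demonstration and its results in this structured way gives the buyer contemporaneous evidence of what was tested, what was warranted, and where the system fell short.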

Policy Direction and Future Risk

The regulatory environment around AI claims is tightening steadily. The DMCCA’s enforcement regime is now live. The FCA and ASA are monitoring the sector with increasing focus. The UK Government’s March 2026 guidance on AI agents signals that regulators view existing consumer protection laws as adequate to address most AI washing scenarios, without the need for new AI-specific legislation. This approach places the compliance burden squarely on businesses to ensure their AI claims are accurate now, not when new rules eventually arrive.

Securities-related claims are also an emerging risk. As AI becomes a material factor in investment decisions, misstatements about AI capabilities in prospectuses, listing documents, or investor presentations carry significant liability potential under UK securities law. The structure of these claims is well established through earlier ESG and accounting misstatement litigation, and AI washing claims are likely to follow the same pattern. Litigation funders have already identified this area as commercially attractive.

Businesses with genuine AI capabilities have a strong commercial interest in enforcing accurate standards in the market. Inflated competitor claims erode trust and distort procurement decisions. The enforcement actions already taken in the United States serve as an indication of where UK regulatory activity is heading, so this is one (increasingly rare) occasion when it is worth taking what is happening across the pond seriously.

Frequently Asked Questions

What is AI washing?

AI washing describes the practice of overstating or misrepresenting the extent to which a product or service uses artificial intelligence, or exaggerating what an AI system is capable of delivering. It can range from describing a basic rule-based automation tool as an AI system, to making inflated claims about accuracy, predictive capability, or autonomy that the system cannot actually achieve.

Can I claim misrepresentation if an AI product does not perform as advertised?

Yes, where a supplier made a false statement of fact about the AI system’s capabilities before you entered the contract, and you relied on that statement in deciding to contract, you may have a claim under the Misrepresentation Act 1967. Depending on the nature of the misrepresentation, you may be entitled to rescind the contract, claim damages, or both. The supplier bears the burden of proving it had reasonable grounds to believe the statement was true.

What enforcement powers does the CMA have in relation to misleading AI claims?

Under the Digital Markets, Competition and Consumers Act 2024, the CMA has direct enforcement powers to impose fines of up to 10% of global annual turnover, or £300,000, whichever is higher, for misleading commercial practices including false AI claims. These powers came into force on 6 April 2025. Individual directors and officers can also face personal fines for procedural non-compliance in CMA investigations.

Are AI advertising claims covered by the ASA’s existing rules?

Yes, the ASA has confirmed that its technology-neutral advertising codes apply fully to AI-related claims. Businesses must not falsely claim a product uses AI, exaggerate the functionality of AI features, or assert that an AI product outperforms a non-AI product without supporting evidence. Where an AI advertisement creates a misleading impression, it will breach the advertising codes regardless of whether the misleading element was generated by AI or by a human.

How can businesses protect themselves when procuring AI systems?

Businesses should conduct technical due diligence on any AI system before contracting, requiring the supplier to demonstrate real-world performance under representative conditions. The contract should include specific, measurable warranties about capability and accuracy, rights of audit and monitoring, and a clear allocation of liability for outcomes where the system fails to perform as described. These steps reduce the risk of disputes and provide clear contractual remedies if a dispute arises despite those precautions.

At 43Legal, we have the knowledge and resources to undertake a comprehensive risk management process. We can also advise and represent you if a dispute develops. We will resolve the dispute quickly and cost-effectively while protecting your best interests.

To learn more about any matters discussed in this article, please email us at info@43legal.com or phone 0121 249 2400.

The content of this article is for general information only.  It is not, and should not be taken as, legal advice.  If you require any further information in relation to this article, please contact 43Legal. 

Melissa Danks is the founder of 43Legal. She has over 20 years’ experience as a solicitor working within the legal sector dealing with issues relating to risk management, dispute resolution, and advising in-house counsel in SMEs and large companies. Melissa has extensive expertise in providing practical, valuable, modern legal advice on large commercial projects, joint ventures, data protection and GDPR compliance, franchises, and commercial contracts. She has worked with stakeholders in multiple market sectors, including IT, legal, manufacturing, retail, hospitality, logistics and construction. When not providing legal advice and growing her law firm, Melissa spends her time running, walking in the countryside, reading and enjoying downtime with close friends and family.

 
