
Algorithms and AI are increasingly shaping the world around us, influencing a wide range of decisions and market outcomes – from the prices we pay for goods and services, to the choices available in markets.
The Competition and Markets Authority (CMA) is the UK’s main competition and consumer protection authority. We have a whole-economy remit: we promote competition and protect consumers with a clear end goal to drive economic growth and improve household prosperity.
We have examined algorithms and AI over many years, assessing their implications for competition and consumers. This includes early work on pricing algorithms and more recent work on frontier foundation models, as well as ongoing work and thinking on agentic AI.1
This technology can give rise to significant benefits but also important risks, including the potential for new forms of collusion. Businesses must take proactive steps to mitigate the risk of breaking consumer or competition law, making sure they understand the technology on which they rely to inform or shape commercial and operational decisions.
Competition agencies can also harness this powerful technology internally to drive a step change in the efficiency and effectiveness of their work, including in screening markets for potential issues. The CMA has invested heavily in our technical capabilities, including our ability to use AI and agentic systems to detect breaches of consumer and competition law at an unprecedented pace and scale.
Rise of algorithmic pricing
Algorithmic pricing is not a new phenomenon. Algorithms have been used for decades in sectors such as airline pricing, hospitality, and retail. The CMA’s early research in this area, including a study of pricing algorithms (2018) and broader report (2021), highlighted how integral algorithmic systems had already become to many businesses’ operations.
However, the prevalence and sophistication of algorithms have grown markedly in recent years, fuelled by increasingly granular, large-scale data, availability of computing power, and methodological developments. The emergence of large language models (LLM) has given businesses greater access to powerful, low-cost predictive technology that can inform or automate many important decisions, including price setting. Our more recent work has explored these developments and their potential implications.
Benefits of algorithmic pricing
Efficiency, speed and lower costs
Algorithms can process complex datasets much faster than humans, allowing businesses to adjust their prices in real-time for optimised outcomes. By automating pricing decisions and reducing human error, businesses can lower their costs and barriers to entry, potentially leading to greater competition.
Personalised pricing
Algorithms may allow businesses to tailor prices to individual customers or segments, increasing their profits while also expanding market access and affordability for some consumers.
Increased market efficiency
By making prices more responsive to changes in demand and supply – including supply chain disruptions – algorithmic pricing may make markets more efficient.
Risks and challenges of algorithmic pricing
Dynamic pricing
Algorithmic pricing can creates risks related to dynamic pricing, where prices move rapidly in response to changing market conditions. In such circumstances, there is potential for vulnerable consumers to lose out, for consumers to feel blindsided by sudden changes – particularly if they are under unfair pressure to make snap decisions – and ultimately for a loss of consumer trust and confidence.
The CMA has discussed dynamic pricing recently and set out advice for businesses to help them use innovative pricing models in a fair and transparent way.
Algorithmic collusion
Algorithmic pricing could also lead to coordinated outcomes and higher prices – often referred to as ‘algorithmic collusion’. The emergence of more powerful AI models may compound this risk in new and subtle ways.
Ezrachi and Stucke2 conceptualised a set of scenarios in which algorithmic collusion may arise:
Implementation of ‘classic’ collusion
Rival businesses may explicitly agree to collude and then use algorithms to put into practice, monitor, and enforce the agreement.
For example, in a 2016 CMA case, 2 online sellers of posters were found to have colluded by agreeing not to undercut each other on Amazon Marketplace and then giving effect to the agreement using pricing software.
The use of a common algorithm can reduce the need for sellers to communicate on an ongoing basis, is more data driven and cost-effective for real-time coordination, and enables more sustainable collusion. As with more traditional forms of collusion, this conduct is unambiguously illegal.
Hub-and-spoke collusion
Colluding businesses may use the same algorithm or data hub to exchange competitively sensitive information indirectly – which might in some cases also include delegating pricing decisions to the hub or receiving from it recommendations (based on co-mingled data).
Exchanges of information can reduce strategic uncertainty and sustain collusive outcomes, even without an explicit agreement to coordinate. The hub may allow businesses to increase prices above competitive levels, maximising collective profits. UK law recognises that indirect information exchange through third parties, such as algorithm providers, can constitute illegal, anti-competitive conduct.
Predictable agent
Businesses may use algorithms that react predictably to market events, potentially softening competition. Such algorithms may follow price leadership and punish deviations, achieving collusive outcomes without human communication or explicit agreement.
In the UK, the Chapter I prohibition in the Competition Act 1998 is engaged to the extent the conduct amounts to an agreement or concerted practice between undertakings. However, the CMA may also be able to remedy competition issues caused by coordination through market investigations without having to establish whether the businesses involved have reached a common understanding or whether that understanding is tacit or explicit.
Autonomous, complex AI systems
It is possible that advanced AI systems given the objective to maximise profits may learn to reach coordinated outcomes, even without human intent to collude.
Evidence: theory, experiments and case studies
While research continues to develop in this area, with many open and interesting areas of debate, some studies have supported concern about algorithmic collusion, including:
- theoretical contributions, experiments and simulations (by Salcedo3, Calvano et al.4., Klein5, Asker et al.6, Harrington7, and Fish et al.8)
- a few real-world, empirical studies (Assad et al.9 and Musolff10)11
While early research focused on reinforcement learning algorithms (Calvano et al.[4]), recent advances have allowed businesses to harness LLM with human-language interfaces and advanced capabilities. Through market simulations, Fish et al.[8] demonstrate that LLM-based agents used as pricing tools may be prone to colluding anonymously, with interesting sensitivity to ‘prompts’. Meanwhile, Keppo et al.12 find that a greater number and diversity of agents can reduce the risk of collusion.
Clearly, these are early days in a vibrant and still emerging field, with no overall consensus (Deng13), and much innovation and research yet to come. In the meantime, real cases seen already in some jurisdictions show there is already some regulatory interest and action in this area, and underline the need for businesses to understand the risks and their responsibilities in relation to algorithmic pricing.
How businesses can mitigate the risks
The risks described above do not imply that the use of algorithms or AI in pricing is inherently problematic, nor that all businesses face the same level of legal or competition risk. Much depends on market context, design choices, governance, and how technology is used in practice.
With that in mind, this guidance only highlights steps businesses may consider taking in order to avoid breaking competition law (which has potentially serious consequences), and to reduce the likelihood of outcomes that reduce competition or harm consumers.
Understand the law
Make sure your business understands how competition law applies to algorithms, and provide staff with appropriate training. For example, the CMA has guidance on horizontal agreements.
Do not share competitively sensitive information with your competitors
Make sure you are not sharing confidential, competitively sensitive information with rivals, whether directly or indirectly (for example, through a pricing consultant or pricing software).
Information is considered to be ‘competitively sensitive’ if it reduces competitive uncertainty in the market, and could influence the competitive strategy of other businesses.
Do not let competitors’ confidential information influence your pricing
Make sure that any pricing guidance or actions, generated by a pricing solution you use, are not influenced in any way by competitively sensitive information from rivals – even if you do not receive this information directly.
Be careful when using the same algorithm as a competitor
If you can reasonably expect that a pricing recommendation could be drawing on confidential information from a competitor (even if you have not been told this directly), you may still be breaking the law. Take extreme care when discussing pricing algorithms with your rivals.
Scrutinise data and algorithms where needed
In some cases, you may need to audit the input data and statistical approaches used, whether in-house or third-party. A traditional algorithmic audit might not be enough: consider linguistic stress testing prompts, and including explicit anti-collusion constraints (for example, if using pricing solutions that involve LLM).
Report any concerns
Report anti-competitive behaviour, including potentially illegal pricing algorithms, and consider applying for leniency where appropriate.
Remember: there are serious consequences to being caught in anti-competitive behaviour. In the UK, businesses can be fined up to 10% of their annual turnover, and individuals can face fines, director disqualification, or even criminal conviction.
Pricing consultants should also act responsibly. If you provide a pricing service, you could be held to account for breaking competition law, for example if you:
- give pricing recommendations to rival businesses that are based on confidential information from each
- facilitate the exchange of competitively sensitive information between rivals
Our approach to detection and enforcement
In some circumstances, algorithmic collusion can harm consumers through higher prices, less choice, and weaker incentives to innovate.
The CMA’s approach to addressing the risks of algorithmic collusion is multifaceted and proportionate, reflecting both the potential harms and evolving evidence base.
Raising awareness
We continue to provide guidance, research, and accessible content for competitors and algorithm providers.
Reporting and detection
We have recently updated our leniency policy, which now makes leniency available for conduct such as exchanging competitively sensitive information through a shared algorithm.
We offer a reward of up to £250,000 to anyone who tells the CMA about illegal cartel activity, including algorithmic collusion.
Screening and investigation
We have a strong range of covert and overt investigatory powers.
Screening is also an important and growing focus. We have been developing our capabilities in this area, drawing on a range of techniques including machine learning, AI, and agentic systems, alongside more traditional tools. These capabilities support proactive screening and investigation, and are used as part of a wider evidential toolkit rather than as a substitute for legal and economic analysis. As part of this we are also actively screening for algorithmic collusion and have built technical expertise to investigate algorithmic systems, including their outcomes, user interactions, and competition implications.
Our growing AI and technical capability
The CMA’s work on algorithmic collusion and our wider cartel screening form part of a broader programme of technology horizon scanning and digital capability building. Areas like AI are complex and deeply technical, requiring regulators to stay on top of rapid developments. For example, recent research (Nikandrova and Parekh14) demonstrates that LLM-based agents can autonomously learn to adopt harmful exclusionary strategies. This is a further example of the need for businesses to understand the technology and be alive to novel risks and for agencies actively to monitor this emerging space. In the CMA’s case, ensuring that we think ahead on emergent risks also reflects our focus on identifying and addressing barriers to UK growth.
A particularly thought-provoking area is agentic AI. Definitions vary but these include systems that can be instructed in natural language to achieve a goal, navigating some complexity in the environment, planning, and taking action as needed. The current reality may be modest, but interest has surged in recent years and this has become an area of active development and investment for businesses. There are many potential applications – for consumers and businesses – and potentially significant benefits as well as important considerations, including risks around pricing outcomes where businesses rely on AI agents to set prices fully autonomously for them. Agents could, in theory, learn to coordinate on outcomes akin to tacit coordination without explicit human instruction to do so. Experiments have shown LLM‑based agents in simulated environments converging to supra‑competitive pricing in repeated‑interaction settings (Fish et al.[8]).
Another possibility explored in recent theory and experimental research is that AI agents learn to coordinate by communicating through steganographic techniques – methods of concealing information within another message or physical object. For example, it has been demonstrated that AI agents instructed not to share sensitive information directly can learn to conceal this within a seemingly innocuous message about the weather (Motwani et al.15). Ghaemi16 surveys these and related possibilities.
The CMA’s programme of technology horizon scanning is continuing to monitor and unpack developments in AI including agentic systems, and embedding this insight across the breadth of our work. Likewise, businesses exploring these new technologies and operating in the UK should ensure that they understand potential risks and remain compliant with UK consumer and competition law.
As part of our proactive work, we are also exploiting advanced technologies internally at the CMA to detect consumer and competition issues at unprecedented pace and scale. This work is iterative and complements, rather than replaces, established legal and economic analysis. It includes deploying new techniques in data analysis and AI to help detect bid rigging in public procurement – a significant area of risk and potentially substantial public savings. We are using agentic AI to identify potential infringements of consumer protection law across the economy to help us better understand consumers’ experience and target our enforcement activity where it will have the greatest impact. We recently opened investigations into 8 businesses, our first with new direct consumer enforcement powers, and sent advisory letters to 100 others. These actions will make a real difference for UK households and foster consumer trust and confidence, and thus contribute to economic growth.
"UK businesses exploring these new technologies should ensure that they understand potential risks"
Since 2019, the CMA has made significant investments in building dedicated in-house capability. An interdisciplinary directorate – Data, Technology and Insight (DTI) – now brings together:
- data scientists
- engineers
- technologists
- behavioural scientists
- eDiscovery and digital forensics specialists, and more
Experts in strategic, business and financial analysis are now part of this mix too – it is essential to understand not only the technology but also how businesses operate in the real world and we have been deepening our capabilities for understanding businesses’ strategies and business models, including in relation to AI.
DTI’s integrated team of technical experts work alongside economists, lawyers, and other specialists, helping ensure that the CMA can meet the challenges of increasingly complex investigations.
Agencies do not operate in a vacuum, and engaging with stakeholders – including consumers, businesses, investors, academics, and other regulators – is crucial to the quality of the CMA’s work and impact. We proactively share insights and techniques, both:
- in the UK, for example through the Digital Regulation Cooperation Forum
- internationally, for example at the OECD, United Nations Conference on Trade and Development, G7 and through the International Competition Network’s Technologist Group, which the CMA currently chairs
Conclusion
The evolving digital economy presents both opportunities and challenges. Algorithmic pricing and AI can deliver significant benefits for businesses and consumers, but can also raise risks of collusion and harm.
Businesses are advised to ensure they understand the law, and mitigate risks that may be posed by algorithmic pricing (including the newer and more subtle risks posed by LLM and agentic AI).
For their part, agencies must:
- remain vigilant
- invest in technical capability where possible
- foster strong collaboration with peers and stakeholders
- ensure that innovation is balanced with robust competition safeguards
The CMA’s own programme of work in this space is ongoing and geared to proactive technically-informed support and guidance for businesses, with robust enforcement where needed to protect consumers and competition and ensure markets remain fair, dynamic, and innovative.
- For more of our work on pricing algorithms, read: Algorithms: How they can reduce competition and harm consumers (2021), Pricing algorithms and competition law: what you need to know (2024), and Why clear and accurate pricing matters – and how businesses can get it right (2025). For more on AI foundation models, read our update paper (2024). ↩︎
- Ezrachi, A., and Stucke, M. E. (2016). ’Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy’. Harvard University Press. Ezrachi, A., and Stucke, M. E. (2017). ‘Artificial Intelligence & Collusion: When Computers Inhibit Competition’. University of Illinois Law Review, 1775. ↩︎
- Salcedo, B (2015). ‘Pricing Algorithms and Tacit Collusion’. Working paper. ↩︎
- Calvano, E., Calzolari, G., Denicolò, V., & Pastorello, S. (2020). ‘Artificial Intelligence, Algorithmic Pricing, and Collusion’. American Economic Review, 110(10), 3267–97. ↩︎
- Klein, T. (2021). ‘Autonomous Algorithmic Collusion: Q-Learning Under Sequential Pricing’. RAND Journal of Economics 52, 538 to 558. ↩︎
- Asker, J., C. Fershtman, and A. Pakes (2023). ‘The impact of artificial intelligence design on pricing’, Journal of Economics and Management Strategy, 33(2):276–304. ↩︎
- Harrington, J. E. (2018). ‘Developing Competition Law for Collusion by Autonomous Artificial Agents’. Journal of Competition Law & Economics. Harrington J. E. (2025). ‘Hub-and-Spoke Collusion with a Third-Party Pricing Algorithm’. ↩︎
- Fish, S., Y. A. Gonczarowski, and R. I. Shorrer (2025). ‘Algorithmic Collusion by Large Language Models’. arXiv preprint. ↩︎
- Assad, S., R. Clark, D. Ershov, and L. Xu (2024). ‘Algorithmic pricing and competition: Empirical evidence from the German retail gasoline market’. Journal of Political Economy, 132(3): 723 to 771. ↩︎
- Musolff. L. (2025). ‘Algorithmic Pricing, Price Wars and Tacit Collusion: Evidence from E-Commerce’. ↩︎
- The OECD has also done some work in this area. OECD (2017). ‘Algorithms and Collusion’, Roundtable on Algorithms and Collusion, DAF/COMP(2017)4. Also OECD (2023). ‘Algorithmic competition’, OECD competition policy roundtable background note. ↩︎
- Keppo, J., Li, Y., Tsoukalas, G., and Yuan, N. (2026). 'On the Fragility of AI Agent Collusion'. ↩︎
- Deng, A. (2024). ‘What do we know about algorithmic collusion now? New insights from the latest academic research’. Mimeo. ↩︎
- Nikandrova, A. and Parekh, A. (2025). ‘Algorithmic Exclusion by Large Language Models’. ↩︎
- Motwani, S. R., M. Baranchuk, M. Strohmeier, V. Bolina, P. H. S. Torr, L. Hammond, and C. S. d. Witt (2024). ‘Secret Collusion among Generative AI Agents’. Mimeo. ↩︎
- Ghaemi, M. S. (2025). ‘A survey of collusion risk in LLM-powered multi-agent systems’. In Socially Responsible and Trustworthy Foundation Models at NeurIPS 2025. ↩︎
Leave a comment