Can AI engage in price fixing?

August 2023  |  SPECIAL REPORT: COMPETITION & ANTITRUST

Financier Worldwide Magazine


In 2017, the US Department of Justice (DOJ) and the US Federal Trade Commission (FTC) were already considering the antitrust risks associated with the use of computer algorithms. Today, those risks have increased tenfold. Artificial intelligence (AI) advancements make the headlines every day: in November 2022, ChatGPT was released, launching a new era in intelligent and accessible AI programmes.

ChatGPT is an AI chatbot that gathers data from the internet and uses computational predictions to answer queries posed by the user. It can write emails, program code and even craft your next resume. Technology companies have begun announcing plans to cease hiring for thousands of roles that they believe AI can successfully fulfil in the next few years. The emergence of AI has spurred novel competition concerns as more and more companies leverage AI to inform and direct business strategy.

Notably, specific competition risks have arisen as a result of companies’ increasing reliance on algorithmic programmes to optimise pricing. Lawsuits and investigations focused on algorithmic price fixing are bound to proliferate, and in fact already have. In January 2022, Amazon settled a price-fixing investigation by the Washington State Attorney General’s office for allowing sellers to use its Sold by Amazon pricing algorithm. In October 2022, plaintiffs filed a class action complaint against RealPage Inc. alleging price-fixing of apartment leases through algorithmic means. That lawsuit sparked an onslaught of subsequent cases filed against RealPage, and the company remains embroiled in multidistrict antitrust litigation today.

Competition laws in the US, European Union and UK have long forbidden competitors from colluding or conspiring to fix prices. Before AI and the advent of pricing algorithms, price fixing was typically the result of wink-and-nod agreements reached in back rooms. Now, price fixing often depends on an entirely new character: pricing algorithms.

If you have ever shopped online and noticed prices varying from day to day, you have likely encountered the work of a pricing algorithm. Pricing algorithms, whether human-informed or AI-driven, instruct a computer to set the price of an item at a certain level depending on various input factors, such as competitors’ pricing or customer demographics. Pricing algorithms are dynamic, meaning they can change prices quickly in response to shifts in market conditions. They are not inherently anticompetitive, but they can become so when they begin to collude with competing firms’ pricing algorithms.

The DOJ has already addressed one version of this problem in U.S. v. Topkins (2015), the first case that scrutinised the use of AI to further anticompetitive misconduct. In that case, Topkins sold posters through Amazon Marketplace and agreed with his competitors to fix the prices of certain posters. They also agreed to leverage pricing algorithms to coordinate their activity and thus programmed their respective algorithms to effectuate their agreement.

The DOJ had no trouble prosecuting this agreement under traditional US antitrust frameworks because the anticompetitive conduct – humans agreeing to fix prices – was no different than traditional price-fixing agreements. The fact that Topkins and his conspirators utilised AI to execute their agreement did not change the character of their conduct – the pricing algorithms were just the means by which they unlawfully fixed prices. However, significant, and markedly different, antitrust concerns arise when AI-driven pricing algorithms collude without any accompanying human conspiracy.

Unlike the algorithms at issue in Topkins, which humans programmed to collude, AI-driven pricing algorithms make their own independent pricing decisions. Such algorithms are typically programmed to react to market conditions, meaning two or more competing AI-driven algorithms are likely to act in parallel in response to the same market factors.

For example, if one pricing algorithm raises the price on a flight, a competing AI-driven pricing algorithm will sense that change and also raise its price in response to the market shift. Thus, it is entirely possible – even likely – that AIs may effectively begin to collude on price, all the while the human decision makers at each respective firm remain unaware of any collusion. This raises the question of whether firms can be held liable under competition laws if non-human actors collude on pricing.
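The parallel-reaction dynamic described above can be illustrated with a toy simulation. Everything in the sketch below – the function names, the matching rule, the step size and the price cap – is an illustrative assumption, not drawn from any real pricing product: each of two hypothetical sellers simply edges its price upward whenever its rival’s price is at or above its own, and matches the rival otherwise.

```python
# Hypothetical sketch of two reactive pricing rules moving in lockstep.
# The rule, step size and cap are illustrative assumptions only.

def reactive_price(own: float, rival: float,
                   step: float = 1.0, cap: float = 120.0) -> float:
    """If the rival's price is at or above ours, edge upward; otherwise match it."""
    if rival >= own:
        return min(own + step, cap)
    return rival

def simulate(rounds: int = 30,
             start_a: float = 100.0, start_b: float = 100.0):
    """Let two sellers apply the rule simultaneously each round."""
    a, b = start_a, start_b
    history = [(a, b)]
    for _ in range(rounds):
        # Both sellers update at once, each reacting to the other's last price.
        a, b = reactive_price(a, b), reactive_price(b, a)
        history.append((a, b))
    return history

if __name__ == "__main__":
    path = simulate()
    print(path[0], "->", path[-1])
```

Under these assumptions, both prices climb in lockstep until they hit the cap, even though neither rule ever references an agreement – parallel pricing emerges from each seller unilaterally reacting to the other.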

The core requirement of many collusion claims is the existence of an actual agreement between competitors. Tacit collusion – where competitors in a concentrated market do not expressly agree to act in concert but nonetheless act in parallel on price – can also be actionable under competition laws when it has anticompetitive effects. The risk of tacit collusion is high in the context of AI-driven pricing algorithms because such algorithms are specifically programmed to be reactive to market conditions in order to maximise profits. With the goal of maximising revenue, AIs have already realised that colluding with competitors on price can be highly profitable.

The UK’s Competition and Markets Authority (CMA) deduced as much in its 2018 study of pricing algorithms. The study found “evidence of widespread use of algorithms to set prices particularly on online platforms”, and their simulation models confirmed “some pricing algorithms can lead to collusive outcomes even where firms are each setting prices unilaterally”. The study also noted serious concerns about ‘hub-and-spoke’ arrangements (when competitors coordinate with each other by each working with or through a central ‘hub’ for the conspiracy), because firms could “adopt the same algorithmic pricing model” from vendors or even each other.

Tacit collusion arising from an AI’s independent decision making does not seem actionable under current competition laws, but it remains an open question, and one that the DOJ and the FTC are carefully considering. In March 2023, Jonathan Kanter, chief of the DOJ Antitrust Division, announced that his division was heavily invested in staying abreast of AI antitrust concerns. In fact, the division is currently hiring data scientists and related experts to study the relevant AI in order to aid antitrust enforcers. Lina Khan, chair of the FTC, also recently penned an opinion piece for the New York Times titled ‘We Must Regulate A.I. Here’s How’, emphasising the FTC’s responsibility to watch out for the anticompetitive dangers posed by new AI technologies.

The CMA has announced similar initiatives, creating the Digital Markets Unit (DMU), a new regulatory entity aimed at understanding and preventing algorithmic collusion. US government officials have also expressed concern about the emerging problems with algorithmic collusion. In early March 2023, US senators Elizabeth Warren, Bernie Sanders, Tina Smith and Edward Markey wrote a letter to the DOJ urging it to investigate RealPage Inc. for its use of algorithmic rent-setting software that appeared to fix prices and drive rapid inflation in rental properties.

It is clear that AI is here to stay, and even more clear that emerging technologies will continue to create novel legal issues, especially in the context of economic markets. For competition policy, AI represents the newest frontier. Traditional competition frameworks are ill-suited to grapple with the unique dangers presented by AI-defined business strategy. Until legal or regulatory bodies weigh in with concrete answers to the problems raised by pricing algorithms, companies and consumers alike will face uncertainty.

Legal uncertainty and heightened interest from antitrust enforcers create the perfect storm for businesses to be caught in the crossfire. Thus, before integrating pricing algorithms into their competitive strategies, companies are well-advised to seek the advice of counsel on these issues. In 1983, businesses had to work out how to use the internet. In 2023, they must learn how to – and how not to – use AI.


Debra D. Bernstein is managing partner of the Atlanta office of Quinn Emanuel Urquhart & Sullivan, LLP. She can be contacted on +1 (404) 238 8444 or by email: debrabernstein@quinnemanuel.com.

© Financier Worldwide
