On April 6, 2015, the U.S. Department of Justice brought the first-ever online marketplace prosecution against a price-fixing cartel. One of the special features of the case was that prices were set by algorithms. Topkins and his competitors designed and shared dynamic pricing algorithms that were programmed to act in conformity with their agreement to set coordinated prices for posters sold online. They were found to have engaged in an illegal cartel. Following the case, the Assistant Attorney General stated that "[w]e will not tolerate anticompetitive conduct, [even if] it occurs…over the Internet using complex pricing algorithms." The European Commissioner for Competition endorsed a similar position, stating that "companies can’t escape responsibility for collusion by hiding behind a computer program."
Competition laws forbid market players from engaging in cartels, loosely defined as agreements among market players to restrict competition, without offsetting benefits to the public. This prohibition is based on the idea that competition generally increases welfare, and that for competition to exist, competitors must make independent decisions. Accordingly, price-fixing agreements among competitors are considered the "ultimate evil" and may result in a jail sentence in the U.S., as well as in other jurisdictions, unless the agreement increases consumers’ well-being.
Until recently, formation of a cartel necessitated human intent, engagement, and facilitation. But with the advent of algorithms and the digital economy, it is becoming technologically possible for computer programs to autonomously coordinate prices and trade terms. Indeed, algorithms can make coordination of prices much easier and faster than ever before, at least under some market conditions. Their speed and sophistication can help calculate a high price that reacts to changing market conditions and benefits all competitors; the speed at which they can detect and respond to deviations from a coordinated high price equilibrium reduces the incentives of competitors to offer lower prices. Indeed, if one algorithm sets a lower price in an attempt to lure more consumers, a competitor’s algorithm may be designed to immediately respond by lowering its price, thereby shrinking the benefits to be had from lowering the price in the first place. Moreover, as John von Neumann suggested, algorithms serve a dual purpose: as a set of instructions, and as a file to be read by other programs. Accordingly, by reading another algorithm’s accessible source code, algorithms, unlike humans, can determine how other algorithms will react to their own actions, even before any action is performed by the other side. This enables competitors to design their coordinated reactions, even before any price is set.
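The punishment dynamic described above can be sketched in a few lines. The following is a minimal illustration, not drawn from any actual case; all names and numbers are hypothetical:

```python
# Hypothetical sketch of a reactive pricing rule: any rival price cut is
# matched immediately, so undercutting never wins extra customers for long.

def reactive_price(my_price: float, rival_price: float, floor: float = 1.0) -> float:
    """Return the next price: match any rival price below ours, never below a floor."""
    if rival_price < my_price:
        return max(rival_price, floor)   # immediate retaliation removes the rival's gain
    return my_price                      # otherwise hold the current (possibly coordinated) price

# A rival tries to undercut a coordinated price of 10.0:
print(reactive_price(10.0, 9.0))    # -> 9.0 (the cut is matched at once)
print(reactive_price(10.0, 10.0))   # -> 10.0 (the coordinated price holds)
```

Because the rival's algorithm can read (or quickly infer) this reaction rule, it knows in advance that a price cut yields no lasting advantage, which is exactly the incentive structure described above.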
Two questions thus arise: when does the use of pricing algorithms constitute an illegal cartel, and can legal liability be imposed on those who employ algorithms, as well as on those who design them? The stakes are high: if we cast the net too narrowly and algorithm-facilitated coordination flies under the radar, market competition may be harmed and prices may rise; if we cast the net too widely, we might chill the many instances in which algorithms bring about significant benefits.
To prove an illegal cartel, an agreement must be shown to exist. An agreement requires communication among competitors that signals intent to act in a coordinated way, and reliance on the other to follow suit, in a manner that creates a concurrence of wills. Some scenarios that involve pricing algorithms easily fall within this definition. A simple scenario involves the use of algorithms to implement or monitor a prior agreement among competitors, as was done in the Topkins case mentioned earlier. In such situations, a clear agreement exists, and the algorithms simply serve as tools for its execution. U.S. Federal Trade Commission Commissioner Maureen Ohlhausen suggested a simple test that captures many of these easy cases: if the word "algorithm" can be replaced by the phrase "a guy named Bob," then algorithms can be dealt with in the same way as traditional agreements.
A more complicated scenario arises when competitors deliberately use a joint algorithmic price setter, which is designed to maximize the profits of its users. Such a scenario was recently analyzed by Luxembourg’s Competition Authority. There, numerous taxi drivers jointly used a booking platform that employed an algorithm to determine taxi prices for all participating drivers. The algorithm set the price based on predetermined criteria such as the length of the journey, the hour of service, traffic congestion, and so on. The price was non-negotiable. This arrangement was found to constitute an agreement to fix prices. It was nonetheless exempted on the grounds that the efficiencies it generated (including reduction of wait time and lower prices for some consumers) were larger than the harm caused by the coordination, and that these efficiencies could not be achieved by less-restrictive means. Much depends, however, on the specific facts of a given case, including the price formula used by the algorithm and the efficiencies it creates.
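A joint price setter of this kind can be thought of as a single shared formula over predetermined criteria. The sketch below is purely illustrative; the function name, weights, and surcharge rules are assumptions, not the actual formula used by the platform:

```python
# Illustrative joint pricing formula: every participating driver quotes the
# same, non-negotiable price computed from predetermined criteria (journey
# length, hour of service, congestion). All weights are hypothetical.

def taxi_fare(km: float, hour: int, congestion: float, base: float = 3.0) -> float:
    per_km = 1.2                                       # hypothetical per-kilometer rate
    night = 1.25 if hour >= 22 or hour < 6 else 1.0    # hypothetical night surcharge
    return round((base + per_km * km) * night * (1.0 + congestion), 2)

# Two drivers asked for the same ride at noon with no congestion quote identically:
print(taxi_fare(km=5.0, hour=12, congestion=0.0))   # -> 9.0
```

The competition question does not turn on the formula's sophistication: because every competitor delegates its price to the same function, price competition among them is eliminated unless offsetting efficiencies justify it.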
Should the algorithm not create large, countervailing benefits for consumers, its employment might constitute an illegal cartel. The U.S. Department of Justice opposed the Google Books Settlement on such grounds. There, Google agreed with the associations of book authors and publishers that a pricing algorithm would set the default prices for the use of Google Books. The Department argued that it is unlawful for competitors to agree with one another to delegate pricing decisions to a common agent, unless the agreement creates countervailing benefits. Interestingly, the fact that the pricing algorithm was designed to mimic pricing in a competitive market was regarded as insufficient; actual bilateral negotiations on book prices were seen as preferable. This argument was not pursued further by the courts.
The more challenging cases arise when algorithms are designed independently by competitors to include decisional parameters that react to other competitors’ decisions in a way that strengthens or maintains a joint coordinated outcome. For example, suppose each firm independently codes its algorithm to take into account its competitors’ probable and actual reactions, as well as their joint incentive to cooperate, and the combination of these independent coding decisions leads to higher prices in the market. Coordination occurs even though no prior agreement to coordinate exists. Even more difficult questions arise when algorithms are not deliberately designed in a way that facilitates coordination. Rather, the algorithm is given a general goal, such as "maximize profits," and it autonomously determines the decisional parameters it will use. The interaction between such algorithms may lead to coordination and higher prices. Yet does an illegal agreement exist in such scenarios?
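The second scenario can be made concrete with a toy setup: each firm's algorithm is a simple tabular Q-learner given only the goal "maximize profits," conditioning its price on the rival's last price. Everything below is a hedged sketch with illustrative payoffs; whether such learners actually settle on supra-competitive prices depends heavily on the environment and parameters, and is an active research question.

```python
import random

# Toy Bertrand-style market: two price levels, the undercutter takes all
# sales, equal prices split the market. Mutual high prices beat mutual low
# ones, but undercutting tempts. All numbers are illustrative.

PRICES = [1.5, 2.0]            # low, high

def profit(mine: float, rival: float) -> float:
    """Per-period profit under simple split-the-market demand."""
    if mine < rival:
        return mine * 8        # whole market at the low price
    if mine > rival:
        return 0.0             # priced out of the market
    return mine * 4            # equal prices: split the market

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step; the state is the rival's last price."""
    best_next = max(q[(next_state, a)] for a in PRICES)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

# One learning step for one firm (a full run would loop this with
# epsilon-greedy exploration for both firms simultaneously):
q = {(s, a): 0.0 for s in PRICES for a in PRICES}
state = random.choice(PRICES)           # rival's last observed price
action = random.choice(PRICES)          # this firm's price choice
rival = random.choice(PRICES)           # rival's simultaneous choice
q_update(q, state, action, profit(action, rival), rival)
```

Note that no line of this code mentions the rival's strategy or any agreement; coordination, if it emerges, is a property of the repeated interaction, which is precisely why such cases strain the legal concept of "agreement."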
The answer is currently being debated by competition authorities, scholars, and courts worldwide. While it is currently impossible to draw clear bright lines, four basic guidelines already emerge. First, the fact that coordination is achieved through algorithmic interactions does not prevent proof of an agreement. Consider the requirement of intent to engage in an agreement. Obviously, algorithms cannot have a mental state of "intent." Yet algorithms "intend" to achieve certain goals by using certain strategies, including reaching a coordinated equilibrium with other algorithms. Alternatively, the intent of the designer to create coordination through the use of algorithms, and the intent of the user to employ such algorithms, can sometimes fulfill this requirement. Likewise, while algorithms generally do not sign agreements, shake hands, wink at each other, or nod their consent, they can communicate through the decisional parameters coded into them or, in the case of machine learning, set by them. Competitors can then rely on such communications when determining their own actions.
Second, the mere use of algorithms does not prevent the imposition of legal liability on their designers and users. As the European Commissioner for Competition stated, "legal entities must be held accountable for the consequences of the algorithms they choose to use." For legal liability to arise, the designer or the user should be aware of the pricing effects the algorithm creates. This can be exemplified by the European Eturas case, involving 30 Lithuanian travel agencies that used the same online booking system. The system operator programmed the algorithm so that the agencies could not offer discounts of more than 3%, and notified the agencies of this restriction via its internal messaging system. The agencies employed the algorithm. The question was whether these events implied an agreement between the travel agencies to change the algorithm and reduce competition. The European Court of Justice made awareness of the change in the algorithm a necessary condition for a finding of a cartel. Disregard of the algorithm’s probable effects may also, under some circumstances, be sufficient to prove awareness. It remains an open question what type of awareness would be required in cases in which an algorithm, which is designed to autonomously determine the decisional parameters, facilitates collusion.
Third, the use of an algorithm is not prohibited if it simply reacts to market conditions set by others, without reaching an agreement. If a designer simply codes his algorithm to react to the prices set by other algorithms, this, by itself, will most likely not be treated as illegal in any jurisdiction; such algorithms fall within a safe zone.
Lastly, to help prove the existence of an agreement, many jurisdictions rely on evidence of intentional, avoidable actions that allow competitors to more easily and effectively coordinate, and that do not increase welfare. Such actions include, for example, exchanges of non-public information on future price increases. Under some circumstances, algorithms might be treated as such actions. To illustrate, red flags might be raised when competitors consciously use similar algorithms that generate relatively similar outcomes even when better algorithms are readily available; when programmers or users of learning algorithms consciously give them training data similar to that used to train their competitors’ algorithms, despite it not being the best training data readily available; or when users artificially increase the transparency of their algorithms and/or databases to their competitors. In all these cases, competitors implicitly communicate their intentions to act in a certain way, as well as their reliance on one another to follow suit. They do so by using avoidable acts that facilitate coordination. Such conduct can, therefore, trigger deeper investigation.
Nonetheless, given that algorithms perform many beneficial functions in the digital environment, the algorithm’s ability to facilitate coordination must be balanced against its pro-competitive effects, including the potential efficiencies created by the speed of reacting to changes in market conditions. Accordingly, while competitors should not be allowed to mask their cartels through algorithms, regulators should also ensure that what we gain by limiting the use of some algorithms is greater than what we lose by limiting the range of allowable design choices. Most courts around the world are already going in this direction, and computer scientists have an important role to play in educating enforcers on such matters. It should be stressed, however, that algorithms will not necessarily be treated as indivisible; a court might prohibit only the coordination-facilitating part of the algorithm.
Algorithms are not immune from competition laws. While the use of algorithms is not prohibited, certain uses of algorithms may be considered illegal. Programmers and users should be aware of the potential legal consequences of such uses. Yet, except in easy cases, regulators are still figuring out when the use of pricing algorithms is prohibited. Indeed, Part of the challenge is that "smart coordination" through algorithms requires "smart regulation"— setting rules that limit the harms of increased coordination, while ensuring the benefits of algorithms are not lost.