To quote this article (if you feel like it):
Thibault Schrepel, Here’s why algorithms are NOT (really) a thing, Concurrentialiste, May 2017 (online)
You have probably noticed that algorithms are the new black in the antitrust world.
As I pointed out last year, writing about “big data” was fashionable in 2015 and 2016, but things have changed: algorithms are now KINGS. But here lies a doctrinal cheese soufflé (a great French reference, I had to) that focuses much attention on a subject that probably does not deserve it.
Let me first recall what Frank H. Easterbrook famously wrote in his Cyberspace and the Law of the Horse (1996):
When he was dean of this law school, Gerhard Casper was proud that the University of Chicago did not offer a course in « The Law of the Horse. » (…) Dean Casper’s remark had a second meaning: that the best way to learn the law applicable to specialized endeavors is to study general rules. Lots of cases deal with sales of horses; others deal with people kicked by horses; still more deal with the licensing and racing of horses, or with the care veterinarians give to horses, or with prizes at horse shows. Any effort to collect these strands into a course on « The Law of the Horse » is doomed to be shallow and to miss unifying principles.
Now you can see the meaning of my title. When asked to talk about « Property in Cyberspace, » my immediate reaction was, « Isn’t this just the law of the horse? » I don’t know much about cyberspace; what I do know will be outdated in five years (if not five months!); and my predictions about the direction of change are worthless, making any effort to tailor the law to the subject futile.
We are now in 2017 and it is time to draw a parallel between “cyberspace” and “algorithms-based practices.” A very large number of articles and books published in recent months describe the functioning of what they call “algorithm competition.” Most famously, Ariel Ezrachi & Maurice E. Stucke have published a book entitled Virtual Competition: The Promise and Perils of the Algorithm-driven Economy which was very well received by the antitrust community. In fact, we learn a lot when reading it but it seems to me that two points must be raised:
1. The two authors attach a great deal of importance to a phenomenon that is not quantified, and we know that it is not unusual for the doctrine to amplify a phenomenon which, in reality, is not so widespread (it is the case for “reverse payment settlements,” for instance). In addition, the issue with most writings dedicated to the subject is that they never quantify the damage done to the consumer. One therefore has the impression (which is more than just an “impression,” in fact) of reading articles of legal sci-fi whose true importance is not assessed.
In fact, I did some research on WestlawNext, and here is what I found (below). The number of U.S. antitrust cases dealing with algorithms has been stable since 2007… And there is no reason to think that it would be different in Europe. In fact, the disconnect between the number of articles written on the subject of algorithms in antitrust law and the number of cases is blatant. I should also point out that several of the cases identified here (with the keywords « algorithms » and « antitrust » on WestlawNext) do not really deal with algorithms, but only mention them once.
2. Even if it were possible to quantify the importance that should be given to the use of algorithms, as well as the harm caused to consumers, the fact remains that algorithms are just a new means of implementing the same old anti-competitive practices (price cartels, information sharing…). The use of algorithms is simply a way to achieve an anti-competitive result. It does not imply, at least according to the literature I have read, wholly new anti-competitive practices that would necessitate adapting or rethinking competition rules. Sure, algorithms may be used in new kinds of practices which appeared recently, but they only make them easier; they are not, in themselves, the reason why these practices are created. This is also why algorithm-based competition is NOT a thing. As with the law of the horse described by Easterbrook, the study of algorithms is to be done within the typical competition law frameworks.
Why all of these books and papers dealing with algorithms, then? As was pointed out to me on Twitter, algorithms imply computers and robots… and everyone loves robots (at least I think so). We also face what some have called a “publication bias”: the erroneous or exaggerated appreciation of a phenomenon because of its capacity to draw attention from the greatest number.
I note, moreover, that a majority of the literature interested in algorithms asks for their regulation. In fact, we should remain prudent and not heed the sirens’ call for more interventionism each time there is a new technical or technological evolution. As described by Henry Hazlitt in his Economics in One Lesson, every evolution of production techniques or emergence of new means is the occasion for some to reintroduce the same old arguments calling for a greater degree of government intervention. It was already the case in the late 19th century (see David A. Wells and his book Recent Economic Changes), in the 1930s, the 1960s and the 1970s (Gunnar Myrdal accused machinery of reducing the amount of work available to humans, calling… for intervention). Today, the New Economy triggers the same desire. The emergence of high-tech has indeed become the pretext to regulate what we understand very little. After all, if robots are cool, playing sorcerer’s apprentice by regulating or forbidding practices and mergers can also be entertaining.
And yet, notwithstanding these zombie theories, far be it from me to imply that this debate on algorithms is useless. Algorithms will occupy an increasingly important place in the economy, but we must not be mistaken about their nature. It seems to me that algorithms, by accelerating the implementation of anti-competitive practices, are particularly interesting in terms of how to detect such practices. Incidentally, this raises the question of how to handle evidence. But that’s all (which is already quite a lot, I’ll grant you that).
In fact, let me conclude by stressing that this inflation of literature about algorithms would not be problematic if it didn’t monopolize so much attention and, consequently, partially crowd out studies of other subjects. The development of new technologies raises legal problems that need to be addressed much more urgently. It is the case for practices of predatory innovation, which for the time being are not captured by current antitrust rules (see this article; another one, in English, will arrive in June). What if the real emergency were elsewhere?