Diane Coyle: “Practical approaches to data in competition policy”

Dear readers,

I am delighted to announce that this month’s guest article is authored by Diane Coyle, Bennett Professor of Public Policy at the Bennett Institute, University of Cambridge. Diane explores practical approaches to data in competition policy. I am confident that you will enjoy reading it as much as I did. Diane, thank you very much! All the best, Thibault Schrepel

****

Practical approaches to data in competition policy

Digital markets tend to become ‘winner-take-all’ markets in the first instance because of network effects, whereby every participant benefits as more people join a platform. The big tech companies also have the benefit of enormous economies of scale in most of their operations, creating a high intrinsic barrier to entry. But there is a third factor making successful entry into these markets almost impossible: the data loop.

As we described in the Furman Review, “The extent to which data are of central importance to the offer but inaccessible to competitors … may confer a form of unmatchable advantage to the incumbent business.” Users provide data to the incumbent, who can use it to sell targeted advertising. Both the ad revenues and the insight into individuals can improve service quality, which in turn retains and attracts users. While there may be diminishing returns to acquiring more and more data, the existing hoard presents a potentially insuperable barrier to smaller competitors. Some of the stakeholders we consulted during the Review believed GDPR had made matters worse by making it more costly for these competitors to accumulate user data themselves, or simply inhibiting it altogether. There is indeed some empirical evidence that GDPR has had a chilling effect.

As Peter Klein noted in last month’s guest article, the concept of data “ownership” as if it were a standard economic good has been a misleading framing for the data debate; and there are real challenges to implementing proposed remedies such as common standards and data openness (as we recommended in the Furman Review). Nevertheless, it is important not to conclude that this is all too difficult and nothing can be done.

For one thing, there is a growing body of empirical evidence suggesting that the use of data is important to increasing productivity. Across the OECD economies, the top 5% of businesses ranked by productivity are pulling ever further ahead of the remaining 95%, and this growing outperformance seems to be due to the use of digital tools and data analytics. The key inputs are the skills of data scientists and analysts, in short supply everywhere, and of course the data. The well-known ‘productivity puzzle’ of a sharp slowdown in productivity growth in recent years will not be reversed unless the bulk of businesses gain the skills and the data to catch up and compete effectively. Meanwhile, markets across many sectors of the economy have been becoming steadily more concentrated.

Another consideration, increasingly prominent in competition policy debates, is the fact that the big tech companies with data hoards are either American or Chinese. The intrusion of geopolitics into competition analysis is uncomfortable but unavoidable. There are implications for data policy and trade policy too. The pressures are for data localisation but also for policies to enable home-grown digital competitors, such as the planned sovereign cloud provider Bleu.

Hence the need to develop a suite of policies to address the data loop and enable more competition via data use. Economics 101 implies that markets alone will not produce the best outcome for society, given the non-rival nature of data and the many spillovers it involves, from potential invasion of privacy to the positive opportunities from joining together data held in different corporate silos. At the same time, regulators will want to avoid harming the growing ecosystem of data analytics companies providing value added services such as economic and financial nowcasting or consultancy using fine-grained satellite data. Devising appropriate policy interventions will require a lot of attention to detail and technical expertise.

Standards and interoperability matter, with interoperability in this context meaning the ability to access data through an API (and one not subject to frequent or sudden changes by the company holding the data). The UK’s Open Banking framework is the model here. Once it is appreciated that interoperability or portability operates through an interface, without actually transferring any data across platforms, some of the apparent difficulties involved in this approach melt away.
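The interface-mediated access described above can be illustrated with a minimal sketch. All names here are hypothetical (this is not the Open Banking API itself): the point is that a challenger service queries the data holder through a stable, consent-checked interface and receives only the permitted answer, while the underlying data never moves across platforms.

```python
# A minimal, hypothetical sketch of Open Banking-style interoperability:
# third parties query the data holder through an API rather than
# receiving a copy of the underlying data hoard.
from dataclasses import dataclass


@dataclass
class Transaction:
    account: str
    amount: float  # negative = spending, positive = income
    category: str


class DataHolderAPI:
    """The incumbent keeps the raw records private; callers see only
    responses permitted by the user's consent scopes."""

    def __init__(self, transactions):
        self._transactions = transactions  # never exposed directly

    def monthly_spend(self, account: str, consent_scopes: set) -> float:
        # The interface enforces consent: no scope, no answer.
        if "read:spending" not in consent_scopes:
            raise PermissionError("user has not granted read:spending")
        return sum(t.amount for t in self._transactions
                   if t.account == account and t.amount < 0)


# A challenger service calls the interface; the data stays with the holder.
api = DataHolderAPI([
    Transaction("alice", -30.0, "groceries"),
    Transaction("alice", 1500.0, "salary"),
    Transaction("bob", -10.0, "transport"),
])
spend = api.monthly_spend("alice", {"read:spending"})
```

The design choice worth noting is that portability here is a property of the interface, not of the data: the holder retains the records, and competition operates through guaranteed, stable query access.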

There are interesting approaches such as data trusts that propose vehicles to hold data provided by individuals, which again could be made available to different service providers through a standardised interface. However, these are in their infancy and do not yet seem to be compelling, perhaps because of real or perceived legal or regulatory barriers. They also require individuals to spend more time thinking about giving access permissions than many people would want, so there may be behavioural barriers too. Finally, the structure of such vehicles would need to enable service providers to combine data from different individuals in some form to gain the analytical insights that can improve services.
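A data trust of the kind sketched above can be thought of as a vehicle that records each individual's permissions once, then answers service providers' aggregate queries without releasing anyone's raw data. The following is an illustrative sketch under those assumptions; the class and its threshold are hypothetical, not a description of any existing trust.

```python
# Hypothetical sketch of a data trust: individuals deposit records and
# set purpose-based permissions; the trust answers aggregate queries
# from service providers without releasing individual raw data.
class DataTrust:
    def __init__(self):
        self._records = {}      # person -> value (e.g. monthly energy use)
        self._permissions = {}  # person -> set of approved purposes

    def deposit(self, person, value, purposes):
        self._records[person] = value
        self._permissions[person] = set(purposes)

    def aggregate(self, purpose, min_contributors=3):
        # Only data whose owners approved this purpose is combined, and
        # only when enough people contribute to limit re-identification.
        values = [v for p, v in self._records.items()
                  if purpose in self._permissions.get(p, set())]
        if len(values) < min_contributors:
            raise ValueError("too few consenting contributors")
        return sum(values) / len(values)


trust = DataTrust()
for person, usage in [("ann", 10.0), ("ben", 20.0), ("cai", 30.0)]:
    trust.deposit(person, usage, {"research"})
avg = trust.aggregate("research")
```

This also makes the behavioural barrier concrete: the scheme only works if individuals are willing to set (and revisit) those purpose permissions, and the minimum-contributor threshold is one way a trust could enable combination across individuals while limiting re-identification risk.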

Away from the realm of personal data, supply chain data sharing is a potentially important enabler of productivity growth and improved services. In the recent petrol shortages in the UK, petrol stations had to be exempted from competition law to share information between operators about which forecourts did and did not have petrol available. In Germany, supply chain data sharing arrangements in steel and chemicals sought comfort from the Bundeskartellamt to go ahead. There are many supply chains where data sharing could increase productivity and resilience, such as construction, or food processing. Yet there are relatively few data sharing arrangements in practice.

Another area for data policy to get to grips with is the provision of open reference data. Governments are responsible for providing much of this, from standard economic statistics to geospatial data. How much should they invest in data provision as a public good? The answer is probably more than now, but an idea of the size of the likely benefit or eventual return to the taxpayer will be needed to make the investment case. In some countries, including the UK, government agencies are also required to charge some users for data access. They should be asked to do so with an eye to competition, including perhaps charging the big digital companies considerably more than smaller competitors. Again, some empirical handle on both pricing and the trade-offs in terms of social benefit would be useful. Perhaps the National Health Service can charge a high fee for pharma companies to access patient health data from primary care, a rich trove of information. But what licensing conditions should it impose in terms of access for competitors to the same data, or indeed the future payback to the health service and patients when it comes to pricing and access to products derived from use of the data?

Finally, should some of the data already held by big tech companies itself be deemed to be accessible reference data? If they are selling data analytics services, should they be required to post prices for standard products? The CMA fined Facebook £50.5m recently for failure to report statutorily required information during an investigation; should penalties for failing to give corporate data to tax authorities and statistical agencies be much tougher? To give just one example, there is no statistical information on the scale of use of cloud computing or the price of cloud services, yet this is important information for understanding the digital economy.

There are still many difficult challenges to tackle if the data advantage of big incumbents is ever to be overcome. But there are compelling reasons to try, for the benefits of competition, and also to realise the potential for this new asset, data, to deliver broader benefits for society. Data is the source of insights and ideas, and ideas have been the driver of economic progress since the dawn of the Industrial Revolution. Without a solid policy and regulatory framework enabling access, there will be only limited scope for using data to tackle challenges from climate change to supply chain shortages.

Diane Coyle

***

Citation: Diane Coyle, Practical approaches to data in competition policy, CONCURRENTIALISTE (November 2, 2021)

