The combined forces of strengthening copyright law and the explosion of information have led to huge transaction costs in managing legitimate trades in copyrighted material, the top economist from Google said Tuesday at the World Intellectual Property Organization.
These transaction costs are the “friction” that prevents trades from occurring, said Hal Varian, who is Google’s chief economist but who was speaking in his capacity as a former professor of economics at the University of California, Berkeley (US).
Varian spoke on 14 September as part of a WIPO seminar series on the economics of intellectual property [pdf]. The use of economic analysis and, more generally, evidence-based policymaking has been a rising trend at intellectual property institutions in recent years, with WIPO appointing its first chief economist, Carsten Fink, in 2009.
Economic analysis can be of benefit in two ways when looking at issues such as transaction costs and copyright, Varian said in an interview after the event.
The first is conceptual, by providing information on “how to look at the problem, how to isolate this issue of transaction costs and how to structure incentives to deal with it,” Varian said.
The other way economics can help inform policy is by looking at data analysis, “trying to do statistical methodology that is good for analysing data,” Varian told Intellectual Property Watch. With such analysis, one can “run some experiments, analyse some data, reach an understanding and implement a policy, and then review the policy.”
Varian demonstrated the ‘conceptual’ use of economics during his lecture by dissecting the issue of orphan works; that is, works for which the rights holder is unknown or ambiguous, making it difficult to determine how to legitimately licence the work (or whether a licence is needed at all).
The term of copyright protection in the United States has crept steadily upward over the last two centuries, from 14 years with an option to renew for another 14 in 1790, to 28 years with an option to double that in 1909, to the life of the author plus 50 years (1976) and then life plus 70 years (1998), said Varian. This means that many works that have been out of print for decades may still be under copyright protection.
Searching for the owners of these copyrights is not done at a socially optimal level, Varian told the audience. Economic analysis helps tease out why: the costs of searching can be high, and there can be perverse incentives (as with submarine patents, where a rights holder can benefit so much from an infringement that there is an incentive to let rights be infringed). There can also be a “pricing holdup” – searching for the rights holder is a cost that may not be worth incurring if the price of a licence is not known or negotiated beforehand. The expense of the search cannot be recouped if the rights holder and potential buyer fail to agree on a price, which reduces the incentive to search in the first place.
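To make that incentive concrete, here is a minimal numeric sketch of the pricing holdup in Python. The search cost, use value and bargaining outcomes are made-up figures for illustration, not numbers from Varian’s talk:

```python
# Pricing holdup, illustrated with hypothetical figures.
search_cost = 500   # cost of locating the rights holder
use_value = 2000    # value the user gets from reusing the work

# Case 1: the licence price is known (or capped) before the search begins.
known_price = 800
net_if_price_known = use_value - known_price - search_cost
print("Net gain when the price is known in advance:", net_if_price_known)  # 700 -> searching pays

# Case 2: the price is negotiated only after the search. The rights holder,
# knowing the search cost is already sunk, can demand (nearly) the full use value.
post_search_price = use_value
net_if_holdup = use_value - post_search_price - search_cost
print("Net gain under holdup:", net_if_holdup)  # -500 -> no incentive to search at all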
The “pricing holdup” problem is also an issue in internet neutrality, Varian told Intellectual Property Watch later. There is reduced incentive to create internet-based content if it may be filtered before it can reach potential consumers, he explained. This is a “gatekeeper” problem: “suppose that I’ve developed this fantastic new technology and I come up to the gatekeeper and say ‘I need to have access to these consumers'” and the gatekeeper responds with how much it will cost. If this cost extracts all the benefits the innovator would get from access, it discourages innovation in the first place. “That’s why net neutrality is a concern, you don’t want someone who is extracting all the value from the transaction and therefore discouraging the transactions from taking place.”
But this is only a problem, he added, if there is “a single gatekeeper,” which is not the case with mobile telephone companies. “When you have sufficient competition this isn’t really an issue,” which is why Google’s recent deal with Verizon distinguished between situations where there is a single provider and those in which there are several. For more on the Google-Verizon deal, see the IP-Watch special report here.
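A rough Python sketch of both points, again with hypothetical numbers rather than anything from the talk: with a single gatekeeper the access fee can absorb the innovator’s entire payoff, while competition pushes the fee down toward the cost of carrying the traffic.

```python
# Gatekeeper pricing under monopoly versus competition (illustrative numbers only).
consumer_value = 5000   # payoff the innovator earns if the service reaches consumers
carriage_cost = 200     # a competing gatekeeper's cost of providing access

def access_fee(num_gatekeepers):
    # A single gatekeeper can charge the innovator's full payoff;
    # with several, competition drives the fee toward carriage cost.
    return consumer_value if num_gatekeepers == 1 else carriage_cost

for n in (1, 3):
    fee = access_fee(n)
    print(f"gatekeepers={n}: access fee={fee}, innovator keeps {consumer_value - fee}")
# gatekeepers=1: the innovator keeps 0, so there is no reason to build the service.
# gatekeepers=3: the innovator keeps 4800, so innovation remains worthwhile.
```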
The best solution to the transaction costs that intellectual property creates around orphan works, Varian said, is a clearinghouse that would not only contain a registry of potential rights holders but would also indicate the prices different sellers might ask for a licence on their works.
This is “fairly explicitly” the goal of the Google Books project, Varian later explained to Intellectual Property Watch: to create a central point where potential buyers can “examine the material, decide what you want to view and carry out the transaction.”
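A minimal sketch of the kind of clearinghouse described above might look like the following Python fragment. The registry structure, work identifiers, names and prices are all illustrative assumptions, not a description of Google Books or of any actual registry:

```python
# A toy rights registry: work identifier -> claimed rights holder and asking price.
# Publishing the asking price up front is what removes the pricing holdup.
registry = {
    "978-0-000-00000-1": {"rights_holder": "Example Press", "asking_price_usd": 120.0},
    "978-0-000-00000-2": {"rights_holder": "A. N. Author", "asking_price_usd": 45.0},
}

def lookup(work_id):
    """Return the registered rights holder and asking price, or None if the
    work has no registered claimant (a possible orphan work)."""
    return registry.get(work_id)

print(lookup("978-0-000-00000-1"))   # known claimant and price: no search cost needed
print(lookup("978-0-000-00000-9"))   # unregistered -> None, flagging a possible orphan work
```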

There’s already such a registry. It’s called the Copyright Clearance Center, and it’s quite comprehensive, including most rights that seem likely to have value.
The argument that Google is offering an unalloyed social good also fails to convince because its solution is opt-out rather than opt-in. If it were really good for publishers, then all of them would rush to opt in, and they wouldn’t need to be trapped into it.
Another issue is that Google’s system for connecting works to their registered rightsholders, and for respecting opt-outs, has gained a reputation among independent publishers for being inaccurate.
The last issue I’ll allude to is that the Google Settlement affects larger publishers and smaller ones very differently. But discussions of the issues seem to focus only on the large publishers and the large players. Unfortunately, the tiny ones add up to a significant and growing chunk of the market in the aggregate. Ignoring that fact, as has been done so consistently, invites unintended consequences.
I’m not seeing any real data here as to how such costs cause “friction” (whatever that means) or prevent publishers from reprinting works. Dover and others manage to do it all the time and have for many years.
Furthermore, people who want to reprint copyrighted works or create derivative works from them are not entitled to do so. If the cost is too high for them, or it’s too much trouble, they can just pass up that project and do something else. It’s just like publishers acquiring new books: They pass up good, new books agents and authors offer them, all the time, because there are only so many books they can afford to publish.
And what Google does not mention is that its proposed Book Registry will take a cut of all these revenues. Meaning that, with the Google middleman in place, costs are likely to become HIGHER for publishers and authors who wish to do reprints or create derivative works, because Google AND the copyright holder would have to be paid for a license. Do you detect just the teensiest whiff of self-interest in Google’s statement that everyone else should pay Google, which did none of the work and bore none of the costs of creating any of the books it scanned without permission, for permission to use those books?