
Competition Law and Scholarly Publishing

Two academics critical of “traditional” scholarly publishing, Jon Tennant (https://twitter.com/Protohedgehog) and Bjorn Brembs (https://twitter.com/brembs), submitted a complaint to DG Competition in November 2018 about alleged competition law abuses in the EU by my former employer Elsevier (part of RELX Group) and other large publishers in the sector (Springer Nature, Wiley, Taylor & Francis). The complaint appears to build on a related complaint made earlier this year by Tennant over Elsevier’s participation in the EU’s Open Science Monitor, on a 2002 statement by the UK’s Office of Fair Trading (now the Competition & Markets Authority) and a 2016 referral to the CMA, and it is supported by the European University Association. As I describe below, these complaints are in my view unfounded, given that the scholarly publishing market is, on any reasonable market definition, unconcentrated and open to significant new entrants, and given the need for governments to be neutral and professional in their bidding procedures.

If I follow the thread of the Tennant/Brembs and EUA arguments correctly, they assert that:

  • The scholarly publishing market sector is dysfunctional
  • University and library customers (and perhaps corporate & SME customers) have few market alternatives to the major publishers
  • Each journal article is a monopolistic market in and of itself (with limited substitutability)
  • There are thus high barriers to entry
  • Publishers contribute little to the journal publishing process and literature
  • Major publishers own the high-prestige journals and use this as leverage in “big deal” negotiations
  • Library customers are forced to purchase the “big deal” because of this reputational element
  • Profits and margins are too high for the established publishers & costs should have been reduced by the transition from print to electronic
  • Transformation to OA is too slow (and slowed down by existing players)
  • Non-disclosure provisions in customer agreements work to prevent price transparency and stifle negotiation
  • The “read and publish” negotiating stances of DEAL (Germany) and Bibsam (Sweden) are intended to counteract the “double-dipping” issue in hybrid OA journals (journals that have two potential revenue streams, OA plus subscriptions)
  • New adjacent researcher-oriented service businesses, or publisher-infrastructure services, are being used by established publishers to extend monopolistic behavior to these adjacent markets (the “lock-in” idea)

Why haven’t the authorities reacted?

One can reasonably ask: if there is this much smoke, why is there not a little more fire? Apparently, the authorities at the CMA and the European Commission have not accepted the view that this is a market with anti-competitive or dominant-player behavior. Perhaps that is because this market sector has seen remarkable developments over the past 10-16 years, including:

  1. Substantial increase in new entrants, both at the journal and publisher level, generally entrants with a “Gold OA” or author-funder-institution pays model (Hindawi, PLoS)
  2. General understanding that this is far from a concentrated market; even the complainants note that the 4 or 5 major publishers collectively account for no more than 50% of the market (see the illustrative concentration calculation after this list)
  3. Understanding that library customers are free to subscribe to individual journals or take bundles of journals
  4. Sense that library customers are in fact exercising significant negotiating leverage through consortia and national-level negotiations
  5. Finally, perhaps a sense that the sector is showing significant innovation in new services and increased online availability.
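
On the concentration point in item 2 above, a purely illustrative calculation may help. The standard screening tool in EU and US merger practice is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. Using hypothetical shares consistent with the complainants’ own figure (say, five major publishers at roughly 10% each and a fragmented tail of fifty smaller publishers at 1% each):

    \mathrm{HHI} = \sum_i s_i^{2} \approx 5 \times 10^{2} + 50 \times 1^{2} = 550

Under the EU Horizontal Merger Guidelines, a market with an HHI below 1000 is generally regarded as unconcentrated, so on these assumed shares the sector sits well below the levels at which competition authorities presume concern.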

What scholarly publishing as a sector really looks like

Scholarly publishing is a large sector with more than 10,000 journals, depending very much on market definitions (inclusion of arts and humanities, etc.). There are many publishers with large portfolios of journals, including Elsevier, Springer Nature, Wiley and Taylor & Francis, but it is equally true that many of the most important journals, across science generally and in individual scientific disciplines, are owned and managed by scientific or medical societies that do not have large portfolios of journals (Science, BMJ, NEJM). Journals are available through a variety of channels: through distributors such as subscription agents, on an individual title basis, or as collections or bundles of journals from publishers (not all of which represent the all-in “big deal”). Journal articles themselves are likewise available through a variety of means, including document delivery services, authorized interlibrary loan activities, and pre-formal versions such as author manuscripts and preprints (often made available in institutional or preprint repositories).


Each journal article is a unique artifact: it identifies the research or scholarly issue at hand, the experiment or argument around that issue, and the results or conclusions. However, journals are discipline-specific, and each discipline has a few top journals that compete with each other. In that sense competition operates at the discipline and journal level, and that competition is quite intense. Journal articles are also not the only artifacts of scholarly research or discourse: there are pre-formal versions of papers, as noted, and, of increasing importance, the actual research data from experiments and projects. The formally published journal article, the version managed by the scholarly publisher, is not the only way to understand the underlying research or scholarship. It is, however, a particularly convenient way to gain a thorough understanding of the issues and background and to see at least some of the data involved, and the peer review process provides some comfort as to the general soundness of the article.


Publishers still produce legacy journals in print as well as in new electronic formats, because some customers still prefer print, and the management of electronic services and databases carries substantial costs of its own. Publishers manage costs through a variety of means, including automation and offshoring, but costs have certainly not fallen substantially in the electronic age. Kent Anderson’s well-known Scholarly Kitchen article on the “things publishers do” is relevant here in its discussion of managing databases and online platforms.


Publishers organize the refereeing and publishing system, have done so for centuries, and appear to do reasonably well in providing value and services. If publishers contributed as little to the system as the complaint suggests, then preprint services would be perfectly fine substitutes (they are not, of course; readers prefer to rely on the editorial processes identified above, not to mention annotation, reference linking and the other “things publishers do”). Academics provide many services to journals as editors and peer reviewers. The tradition has been that publishers pay editors but not peer reviewers, and, similarly, that journal article authors are not paid “royalties”; that could change, of course, but would likely create new problems of its own. My view is that editor payments make sense because editors provide more continuous and extensive service to their journals than do authors and reviewers, although I completely agree that the important work of reviewers should receive more recognition and that reviewer burdens should be shared more equitably.


Library customers and publishers have been struggling with pricing in the new electronic environment since at least the early 2000s, and non-disclosure provisions have helped ensure that negotiations around pricing and discounts remain confidential between the parties. List prices for print journals are of course completely transparent, and they are highly relevant because those prices are, or have been, part of the calculation of online subscription packages. Print journal price increases have been more stable over the past 20 years than in the 20 years prior (compare reports from the early 2000s of increases of over 10% per year). Further, as Gantz noted in her 2013 article in Learned Publishing, actual serials spend at ARL libraries may be increasing at a lower rate than print journal list prices, suggesting that libraries are benefiting substantially from aggregate discounts obtained through a variety of means, including “big deal” licenses.


Institutional and library budgets, however, have not been stable and have failed to keep pace with the growing number of researchers and the amount of research to be published. This is well understood and has been reported frequently in the STM association’s “STM Report” (see p. 28). These problems, and the over-reliance on journal impact factors in tenure decisions and rankings that Tennant/Brembs criticize, are not caused or exacerbated by publishers.


The DEAL negotiations demonstrate the strength of library consortia in negotiations. The push for the “read and publish” model is in my view quixotic and irrational, in that it requires that articles by non-German (non-OA) authors also be made available openly with no payment model, but it nonetheless shows that the negotiating model is real and equitable: neither side has all the market power. A good summary of recent discussions (August) can be found here. Double-dipping is not, to my knowledge, an issue in the negotiations, and in any event it is a non-issue at least with respect to Elsevier journal pricing, where print journal price increases (and occasionally decreases) are based on increases or decreases in the number of subscription-model articles (OA articles are not counted for this purpose); see https://www.elsevier.com/about/policies/pricing#Dipping.


Elsevier and other major publishers are contributing mightily to the migration to an OA business model and other forms of OA activity; Elsevier and Springer Nature, along with PLoS, are now the largest publishers of OA content. These changes, however, require some underpinning business model. Tennant/Brembs appear to criticize the Gold OA model chosen by the UK following the Finch Report, but the only replacement would be a model that demands that journals and publishers continue to provide services while the journal content is made available without charge; how can this possibly work? Whether the EU succeeds with its ambitious 2020 plan or the new Plan S probably depends more on the behavior of researchers, funders and research institutions than on publishers. It is unlikely, in my view, to succeed if publishers are the only stakeholders subject to persuasion or coercion.


Finally, from a competition law perspective, dominant market players can in theory exercise control in adjacent markets (such as academic infrastructure) by tying the purchase of one product or service to the purchase of another. That is simply not the case in scholarly publishing or research infrastructure. First, to my knowledge, there are no requirements to purchase one type of service as a condition of purchasing another. Second, this sector is rich in alternatives, including cross-industry initiatives such as CrossRef. There is no evidence of dominant tying behavior in any of these new services; rather, I would argue, the evidence shows competition working as it should to support innovation in new services.

Criticism of Elsevier

You might say that, as former General Counsel of the Elsevier division, of course I would claim that many of the critical comments and observations by the EUA and Tennant/Brembs are inaccurate. For example, critics sometimes attribute the whole of RELX’s profits, or all of Elsevier’s revenues, to scholarly journal publishing. RELX in fact operates four business divisions and has healthy operating margins across those divisions. Elsevier produces publications and analytics services in several sectors, including its health services business (almost entirely unrelated to the library journals market and aimed at a very different customer base), and publishes books and databases in addition to journals. It is a complex business that in my view is well managed and provides good value to its respective customers. The people I know at Elsevier are committed to providing those services and that value, and they work hard at offering solutions and alternatives for customers at many levels and in many sectors. Journal article authors are not forced to publish in any journal, let alone Elsevier journals, yet in 2017 scholars submitted more than 1.6m papers to Elsevier journals for consideration, of which more than 400,000 were eventually published. Elsevier works extensively with journal editors on quality issues, and while not every Elsevier journal is a star in its field (several are, but not all), the precepts of service and quality are emphasized at all levels of the organization and with editors. A large and complex business, of course, does not always run optimally and will occasionally make mistakes; human error does occur. The hallmark of a well-run business is that it identifies and corrects those mistakes.

Mark Seeley

Oracle v Google

The Federal Circuit’s ruling (27 March) on copyright grounds in this long-standing dispute over Oracle’s Java platform, and Google’s use of the Java APIs in its Android phone operating system, has been criticized by several copyright-skeptic scholars as a step backwards in fair use analysis (not to mention on the underlying foundational question of Java API copyrightability in the first place). Some have even suggested that the Federal Circuit, created as an exclusive appeals court for patent cases, is out of its depth in copyright. For more pro-copyright scholars and advocates, however, the CAFC’s fair use analysis correctly emphasizes the fourth factor, impact on the market, in a way not inconsistent with the recent 2nd Circuit TVEyes decision. Probably not surprisingly, I tend to the latter camp, while acknowledging that the entire case (more than 8 years long) has been convoluted and confusing, including on some core questions of which issues are matters of law and which are matters of fact.

The March 2018 decision is not about software copyrightability; that question was already decided in the CAFC’s 2014 decision, after which the district court was instructed to evaluate the fair use defense (which Google won). The CAFC, however, was not satisfied with the district court proceedings and jury verdict, and determined that Google’s use was not fair as a matter of law. There might be a question here about fact versus law, and the role of the jury in such cases, but in my view the CAFC’s fair use analysis is not in and of itself remarkable or clearly flawed.

The decision and the case are controversial because of the foundational question of software copyrightability and the role of copyright in an API infrastructure. After all, APIs usually involve an entity creating code and then offering unaffiliated developers defined methods for accessing that code in order to develop further applications. Some have suggested that the entire world of APIs will now be subject to a chilling effect. But has Oracle/Java been unclear about which kinds of developments and applications it supports with free API access and which it treats as commercial? As noted below, I think Oracle has been explicit about which uses require licenses and which do not.
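
To make concrete what was copied and what was not, here is a minimal sketch of my own (an illustration, not code from the case record) of the distinction at the heart of the dispute: the declaring code of an API, which Google reproduced verbatim across the 37 Java API packages at issue, versus the implementing code, which can be written independently.

    // A hypothetical stand-in for java.lang.Math, written for illustration only.
    public final class MathLike {

        // Declaring code: the method header (name, parameter types, return
        // type) that programmers must reproduce exactly so that existing
        // source calling max(a, b) continues to compile against the API.
        public static int max(int a, int b) {
            // Implementing code: one of several possible ways to realize
            // the declared behavior; an independent implementation need
            // not match the original line for line.
            return (a >= b) ? a : b;
        }

        public static void main(String[] args) {
            System.out.println(MathLike.max(3, 7)); // prints 7
        }
    }

The copyrightability question was whether those declarations, and their organization into packages and classes (the “structure, sequence and organization”), are protectable expression; the fair use question was whether copying them verbatim was nonetheless permitted.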

Professor Samuelson has already criticized the court’s prior 2014 decision (as noted, this case has meandered across many sub-decisions) for its reliance on the 3rd Circuit’s 1986 Whelan “structure, sequence and organization” (SSO) type of analysis for software products. In her 2015 article “Functionality and Expression in Computer Programs: Refining the Tests for Software Copyright Infringement”, Samuelson described the SSO test as one of four approaches among the circuits. The SSO (3rd Circuit) approach is essentially a merger analysis, under which the underlying expression is protectable if there is more than one way of performing the function; Samuelson notes that the SSO approach is “now mostly discredited,” and suggests that the court’s outright merger analysis was also overly simplistic. She argues that the other circuits take a more sophisticated and more appropriate “filtering” approach (Altai, 2nd Circuit, et al.) by analyzing whether some copying of the original software was required for “interoperability,” and that courts in future decisions would be better served by using these other filtration tests.

Although I have not re-read all of these cases recently, it is easy to understand why the Whelan test would be easier to meet, and would therefore offer copyright protection to more software modules. Does this rise to the level of judicial error? These tests of software copyrightability are all about functionality, and perhaps even the idea/expression dichotomy. In my view, there is error only if the Whelan test would protect something that is intrinsically utilitarian; the fact that there are multiple ways of skinning the Java API cat, and that the copyrightable elements found by the CAFC in Java are not about appearance, format or underlying algorithms, suggests that there is no clear fundamental error. There may be better tests, and there may be a statutory question about whether more or less copyright protection should be afforded to software, but those questions do not rise to the level of judicial error. More protection might mean more permission-seeking, more licensing arrangements (commercial or open source), and fewer assumptions made about copying APIs; in my view this is not a bad policy result.

With respect to Oracle’s Java APIs, the 27 March decision noted that Oracle provides the programming language itself free and available for use without permission, but had “devised a licensing scheme to attract programmers while simultaneously commercializing the platform,” part of its “write once, run anywhere” approach. Consequently, if entities wanted to use the Java APIs to create a competing platform or for new devices (which would clearly include Google’s Android platform), Oracle would want to license such activities on a commercial basis. The parties were unable to reach agreement on licensing terms, and Google decided to take the risk that Oracle would not enforce its rights or that a court would find Google’s use non-infringing. It is in this sense that allegations of bad faith were made, although I do not think the bad-faith narrative affected the final decision.

To me, then, the fundamental question is this: shouldn’t a copyright holder be able to make exactly those kinds of commercial determinations, adopting a strategy that is partly “open” (to attract programmers) while being more commercial when it comes to competing platforms and products? Why shouldn’t Google/Android have to share more revenue from a very successful platform built at least in part on Java?

Once the foundational copyrightability question is accepted (however reluctantly by some), the question then is the step-by-step four-factor fair use analysis, where the CAFC in the recent decision found that Google had little in its fair use arguments and had perhaps even conceded some points. In my view the analysis is straightforward, and the court does not seem to suffer markedly from any patent-law myopia.

Turning to the first factor, the purpose of the use (where the question of “transformative” use is often discussed), the court found that Google’s use was not transformative, as the copy’s purpose was the same as the original’s and the smartphone environment was not a new context. The court also found that Google had a clearly commercial purpose, notwithstanding that Google does not charge for the Android license, reaching back to Napster in finding that even free copies can constitute commercial use, and noting that Google derives substantial revenue from related advertising.

The second-factor analysis relied heavily on assumptions about a jury’s possible views on the creative aspects of writing software, but then discounted the overall impact of this factor. Perhaps the court felt it was not necessary in any event, given its strong views on the first and fourth factors.

On the amount of the work used, the third factor, there is no dispute that Google copied the entirety of the 37 APIs in question. Google argued that since it actually used only portions of the works, this should be viewed in its favor, citing Kelly v. Arriba Soft. The CAFC, however, asserted that Kelly should be read more narrowly, as applicable only once a transformative use has been found, and noted that Google’s copying of more than was required also weighed against a fair use finding (although the decision later describes this factor as somewhat neutral in the overall weighing of the factors).

Finally, on the effect on the market, the court was not impressed with Google’s argument that Oracle was not making smartphones or developing a smartphone platform (and so suffered no market harm), noting that potential markets for derivative works are still highly relevant. (This might seem critical of the Google Books decision, where nascent markets were not given much credence, although the smartphone market might have been further along than an archival e-book market.) The record also showed the impact of Android products on Oracle’s negotiations with Amazon, and the CAFC found the Oracle-Google negotiations highly relevant as well. The court also seemed to regard Oracle’s strategy of being “partly open” and “partly commercial” with respect to competing platforms, as described above, as a clear and reasonable market approach.

The matter now goes back to the district court for an assessment of damages, which press reports put at billions of dollars.

A response to the February 2018 report of Dr. Eleonora Rosati (U Southampton) for the Policy Department for Citizens’ Rights and Constitutional Affairs

The recent report by Dr. Rosati echoes concerns she has raised before on her IPKat blog about whether limiting the TDM exception in the proposed DSM directive is “ambitious” enough, noting in her report that innovation could come from TDM projects undertaken by business concerns. This is a topic on which I must disagree with Dr. Rosati, who I think usually demonstrates thoughtful balance on matters of IP and is a reliable reporter of new cases. In this paper Dr. Rosati suggests that because useful insights might be obtained by copying copyright or database content and applying TDM technologies to it, for commercial purposes by commercial actors, this should be the basis for a copyright exception (and presumably a database directive exception). That is not, however, the standard that should be applied to exceptions, which are of course governed by the Berne three-step test as traditionally applied in EU directives, with respect for rights holders and commercial licensing alternatives. Those traditions also focus on non-commercial research or educational purposes, as do the UK exception from 2014 and the Google Books case from a US “fair use” perspective, both of which are cited in the report.

Dr. Rosati’s report suggests expanding the current proposals to business actors (extending the proposed public-private partnership concept) and apparently to all kinds of copyright material, from academic literature to news, and perhaps to film as well. Rosati discounts the licensing and permissions options that currently exist, including the CCC’s “RightFind” program (fair disclosure: I am a member of the CCC board) and the direct permissions, options and policies of scholarly journal publishers, apparently asserting that there is still too much “legal uncertainty”. Interestingly, the examples given of IBM Watson projects, touted as quite significant from a research or business-results perspective, most likely all used the existing permission or licensing mechanisms. If the content that would be subject to the TDM exception for academic research, the scholarly journal literature, is largely available through policy, permissions or licenses, then Dr. Rosati has not made a convincing case for a permission-less environment. Rosati must add orphan works and out-of-commerce works, and mention news, general media sources and photographs, in order to bolster a narrative of legal uncertainty (and the orphan works evidence from UK cultural institutions, as Rosati acknowledges, was documented prior to the adoption of the Orphan Works directive).

Fundamental to any law-making is the question of what problem or issue the law is intended to address, and then how to formulate and implement the law so as to avoid unintended consequences, using the right tool for the job. The STM association said in 2016 (again, full disclosure: I chaired the STM Copyright Committee during this time) that “STM publishers support, invest in and enable text and data mining”, noting specifically the CrossRef initiative, which was not mentioned in the Rosati report (see report). The Commission itself, in its 2015 working plan for the DSM, noted the link between exceptions and licensing alternatives when discussing new methods of copying content, indicating that in the EU, as in most countries, the question is often whether there is a market “gap” that rights markets are not currently addressing, even while noting possible research gains through greater legal certainty for TDM rights.

From a purely legal perspective, I believe that Dr. Rosati omitted several important recent European decisions which suggest that indexing and linking activities do implicate the communication rights under the InfoSoc directive (see some recent decisions in cases brought by the Dutch organization Stichting Brein), and I believe that Dr. Rosati’s discussion of the Google Books fair use decision is somewhat simplistic. With respect to the latter, Dr. Rosati is correct in noting the 2nd Circuit’s discussion of the utility of searching across the entire Books corpus for linguistic matching, and that this did factor into the court’s finding of fair use. However, fair use analysis in the US always involves weighing a number of factors, of which the purpose of the use (particularly whether an activity is more akin to non-commercial research or to a more commercial activity) and the impact on the market are two very critical ones. If the facts of the case had varied just a little (for example, if Google had displayed not just snippets but whole works, or had generated more commercial revenue through advertising), a very different result might have been reached. In my view, Google Books does not stand for a broad fair use finding whenever books are scanned into a database for “non-consumptive” purposes (to use a phrase the Google lawyers coined). In fact, a new decision this week from the same 2nd Circuit, Fox v. TVEyes, also involves the creation of a database of content indexed for the convenience of users, but one that adds a viewing opportunity as well, which the court found went too far for a fair use finding. The court commented that even in Google Books, the court “cautioned that the case test[ed] the boundaries of fair use” (TVEyes decision).

Text and data mining might well lead to important activities and research results, and for this reason most STM journal publishers are on record strongly supporting academic research projects (some go further, as Dr. Rosati mentions). In fact, by working with organizations such as CCC and CrossRef, they are actively enabling the normalization activities that Rosati mentions as still being critical to the technical processes (see also the STM Declaration covering twenty-one leading publishing houses). Other copyright sectors would be rightfully concerned about their works being caught up in an exception intended for scholarly research. Commercial beneficiaries are currently obtaining licenses and permissions, and doing so on a commercial and pragmatic basis, as demonstrated by Dr. Rosati’s own list of IBM Watson projects. It is not at all clear to me why an exception should be applied to an active and growing copyright market for the benefit of large technology companies.
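
For readers unfamiliar with the mechanics, a deliberately simple sketch of my own may help; it assumes lawfully obtained (licensed or openly available) text, and the class and method names are illustrative rather than drawn from any actual project. A text-mining pipeline at its core normalizes the text and then extracts structured signals, here simple term counts, that downstream analysis can consume. Real projects, such as the IBM Watson examples in the report, are far more elaborate, but the normalization step Rosati rightly identifies as critical is visible even at this scale.

    import java.util.HashMap;
    import java.util.Map;

    public final class TinyTextMiner {

        // Normalization plus extraction: lowercase the text, strip
        // punctuation, and count how often each term appears.
        public static Map<String, Integer> termFrequencies(String document) {
            Map<String, Integer> counts = new HashMap<>();
            String normalized = document.toLowerCase().replaceAll("[^a-z\\s]", " ");
            for (String token : normalized.trim().split("\\s+")) {
                if (!token.isEmpty()) {
                    counts.merge(token, 1, Integer::sum);
                }
            }
            return counts;
        }

        public static void main(String[] args) {
            String sample = "Text and data mining: mining text for data.";
            // Prints term counts, e.g. {data=2, text=2, mining=2, and=1, for=1}
            System.out.println(termFrequencies(sample));
        }
    }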

The bottom line: there is no strong legal basis, and an even weaker policy basis, for expanding the proposed exception to all types and forms of copyright and database content, or for expanding the number of beneficiaries. Doing so would violate the law-making fundamentals noted above, as well as the EU’s Berne obligations.

Mark Seeley

Reflecting on My Time at Elsevier

I’ve been at Elsevier since 1995 and have worked to support the growth and reach of the business. It has been amazing to see Elsevier and other STM publishers embrace the online Internet world, face the challenges of digital (and old-fashioned print) piracy, change business models (agents to subscriptions to OA), expand internationally, and look to add a series of analytical tools and services on top of our traditional content. I reflected on this a bit earlier this month at the STM association’s Innovations Seminar in London and recorded a podcast interview in CCC’s “Beyond the Book” series at http://bit.ly/2kXsgrd, where I spoke about digital innovation in STM publishing and its impact on copyright law and policy development. The slideshow I used at the STM event is attached.

I’ve been immensely proud of the work that Elsevier has done in these areas of innovation, and I continue to think the world is substantially improved by Elsevier’s commitment to quality and utility.

Anyone can publish anything online these days, but in my view ensuring an independent voice for professional scientific and medical communications is vital to a well-functioning society.

My plans are to continue observing and commenting on copyright issues as they pertain to science publishing, and to do some consulting at times on these points, in addition to traditional retirement activities.

More posts and comments to come!

Mark Seeley