Early in February, several CCC executives, staff, and “friends of the firm” (as well as a roomful of other fans of copyright) participated in a one-day conference on “Copyright in the Age of Artificial Intelligence” (AI), co-hosted by the US Copyright Office and the World Intellectual Property Organization. Mark Seeley, a CCC Board Member, wrote the following observations on this timely event for CCC:
This one-day conference held in the Washington offices of the Copyright Office was the second in a series of conferences under the aegis of WIPO and the USCO to help establish an appropriate legal framework to consider the implications of AI (in its various applications and forms). Both Francis Gurry (WIPO Director General) and Maria Strong (acting Register of Copyrights) indicated in their introductory remarks that the focus of these discussions would be around identifying whether the appropriate questions are being asked concerning the intersection of intellectual property (IP) and AI.
Gurry noted that IP has been built around the notion of “property” and that AI raises the question of how it interacts with copyright-protected content used for data ingestion. He identified the two key issues as (1) the interaction of AI and the current copyright system and (2) the new concept of machine-created content. In recent court decisions in China, the courts struggled to identify the “dominant” human being or proximate inventor in AI projects. Gurry spoke of the concern about deep fakes (while noting that there are other potential legal remedies, such as defamation claims or claims of infringement of rights of publicity), and he also pointed out questions about the use of medical information for health AI applications. Concerns about deep fakes, including voice fakes, were also expressed in one of the afternoon panels by Sarah Howes (SAG-AFTRA).
Gurry and Strong both noted that human creativity is fundamental to the current structure of IP laws. Andrei Iancu (Director of the U.S. Patent and Trademark Office) noted that the PTO is currently reviewing submissions made in response to a series of questions on AI and IP it posed in the fall of 2019, and indicated that a report will be issued by spring 2020.
The panels of experts that followed spoke to specific AI applications in various fields, primarily artistic ones, and nearly all of the applications fell into the category that CCC described in its PTO submission as “Machine Learning” (ML) techniques. These techniques start with a proposition or goal, develop an ML algorithm that is applied to sets of data (which can be unstructured content), and then refine the results through feedback. The relevance and quality of the data are often critical factors in successful AI projects.
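The cycle just described — pose a goal, apply an algorithm to data, refine through feedback — can be sketched in a few lines of Python. The toy classifier and data below are purely illustrative and are not drawn from any conference material:

```python
def train_threshold_classifier(data, labels, epochs=50, lr=0.1):
    """Learn a 1-D decision threshold by iterative feedback:
    predict, measure the error, nudge the model, repeat."""
    threshold = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            prediction = 1 if x > threshold else 0
            error = y - prediction        # feedback signal
            threshold -= lr * error       # refine the model
    return threshold

# Toy, hand-made data: values above ~5 belong to class 1.
data = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
labels = [0, 0, 0, 1, 1, 1]
t = train_threshold_classifier(data, labels)
```

After training, the learned threshold `t` sits between the two clusters, so the model classifies the training data correctly — the same feedback-driven refinement loop, in miniature, that the panelists described at far larger scale.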
Professor Ahmed Elgammal (Rutgers) gave a history of the utilization of photographs and other images, but notably described a computer system that would itself generate images based on its increasing understanding of what certain objects might look like — this did not involve merely “mining” and extracting data from data sets, but rather creating an entirely new work. Elgammal described this as involving a generator (using no data) working in conjunction with a discriminator (a “critic” with data access); the generator creates an image with feedback from the critic. Other examples provided included creating false face images. Importantly, the creative variant (a “Creative Adversarial Network,” or CAN) can be used to “break out of a style.”
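The generator/critic arrangement can be illustrated with a deliberately simplified, hypothetical numeric example (this is not Elgammal’s system): the “generator” never touches the real data and improves solely from the critic’s feedback, while the critic compares candidates against real examples.

```python
import random

random.seed(0)

real_data = [4.8, 5.1, 5.0, 4.9, 5.2]      # the "style" to imitate
target = sum(real_data) / len(real_data)

def critic(candidate):
    """Score how far a candidate falls from what the real data looks like
    (lower is better). Only the critic consults the data."""
    return abs(candidate - target)

guess = 0.0
for _ in range(100):
    trial = guess + random.uniform(-1.0, 1.0)  # generator proposes blindly
    if critic(trial) < critic(guess):          # critic supplies feedback
        guess = trial                          # generator refines its output
```

Even this toy version shows the division of labor: the generator converges toward the “style” of the data without ever reading it, which is the core adversarial idea behind GANs and CANs.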
This made a nice contrast with the “Next Rembrandt” project (through TU Delft and the Mauritshuis museum), which involved scanning the oeuvre of Rembrandt paintings and reproducing techniques (including the dimensionality of the applied layers of paint) to create a “new” painting in the style of Rembrandt, as discussed by Andres Guadamuz (senior lecturer at the University of Sussex). Sandra Aistars, of the Antonin Scalia Law School at George Mason University, read from a critical review of the project by Jonathan Jones in The Guardian. Jones described The Next Rembrandt project as foolish and empty — focused more on style than on the heart or substance of the artist’s world view. Aistars went on to question whether the project had more to tell us about forgery than about new creativity.
The panel on the administration of international copyright systems included Ros Lynch (UK IP Office), Ulrike Till (recently appointed head of WIPO’s new AI directorate) and Michele Woods, also of WIPO. Lynch noted that protection in the UK for “computer-generated works” has been in existence since the late 1980s, with the author identified as the human who made the “necessary arrangements” for the creation of the work. The UK provisions have a 50-year duration (shorter than the more traditional “life of the author plus 70 years”), but Lynch noted that duration could be reviewed in the future. The question of originality is an important factor concerning possible protection, and Lynch noted that the legal analysis would concern the choices made or the degree of “personal touch” reflected. Secondary infringement analysis of such works is probably still an open question. UK law on computer-generated works most likely reflects the more “utilitarian” view that UK copyright law has often adopted, for example with compilations of factual information.
The formation of a new AI division within WIPO reflects the importance with which WIPO views the matter, as noted by Ulrike Till. Michele Woods of WIPO also discussed the various AI tools used by WIPO for IP prosecution and policy development (such as translation engines and prior-art data); Woods noted that the tools are under considerable revision now, so we can expect to see new versions of some of these tools, or new website platforms, rolled out over the course of 2020.
Collective licensing as a possible solution to the need for lawful large-scale ingestion of copyrighted content came up in several presentations, including those by Mary Rasenberger (Executive Director of the Authors Guild) and Professor Aistars. A little later in the afternoon came the music AI panel, comprising Joel Douek (EccoVR), Michael Harrington (Berklee Online), David Hughes (RIAA) and Alex Mitchell (Boomy). That last one, Boomy, is a fascinating new technology play for producing AI-generated music, whose tag line says it all in a nutshell: “Make Instant Music with Artificial Intelligence.”
An expansion of fair use, or an interpretation of fair use permitting AI exceptions, was advocated by panelists such as Meredith Rose (Public Knowledge), Julie Babayan (Adobe), and Amanda Levendowski (Georgetown).
Technical developments that affect the potential copyright protection for works created by AI were very well explored in the all-day session, with examples from across the spectrum, and the framing questions on authorship (must a human be involved?) and ingestion seem like the appropriate ones. One gets the sense that examples from collective works such as films might be helpful in discussing questions about authorship and contribution, and that collective licensing for text and data mining (such as that offered through CCC and other organizations) can help address some of these problems; both topics may be followed up in later conferences. It certainly sounds like there will be plenty to talk about.