Information Technology and Innovation Foundation (ITIF)

The information on this website has been provided by external sources. The Government of Canada assumes no responsibility for the accuracy, currency, or reliability of the information provided by external sources. Users who wish to rely on this information should consult the source directly. Content provided by external sources is not subject to official-languages and privacy requirements.

Innovation, Science and Economic Development Canada

RE: Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things

To Whom It May Concern:

We write in response to the request for comments regarding a modern copyright framework for artificial intelligence and the Internet of Things.

The Information Technology and Innovation Foundation (ITIF) offers these comments, which draw on its prior research into the critical role intellectual property plays in spurring innovation. ITIF is a non-profit, non-partisan public policy think tank focusing on a host of critical issues at the intersection of technological innovation and public policy. Its mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress.

Sincerely,

Nigel Cory
Associate Director, Trade Policy, Information Technology and Innovation Foundation

Overview

Intellectual property (IP) is based on the idea that those who combine the spark of imagination with the grit and determination to see their vision become reality in books, film, music, technology, medicines, designs, sculpture, services, and more deserve the opportunity to reap the benefits of their innovation—and that this reward incentivizes more creative output. In the past, IP law operated under the assumption that all creative works would be entirely created by people. However, the advent of artificial intelligence (AI) has raised the prospect that in the future a significant number of works may be created by an autonomous computer system without direct human involvement. Canadian policy should protect the principle on which IP law is based, whether the works are generated by people or computer systems.

ITIF welcomes Canada’s detailed and thoughtful review of copyright implications for AI and the Internet of Things.Footnote 1 Central to Canada’s consideration of IP and AI should be a pragmatic and realistic understanding of AI and its capabilities. While AI systems are increasingly autonomous and creative, they still have a considerable way to go before they achieve the sophistication that many people imagine. Moreover, Canada should not create AI-specific requirements for activity that is already legal and being done through non-technical means.

ITIF’s submission focuses on two key recommendations:

  1. Canada should recognize and protect AI-generated IP and assign ownership of AI-generated works to the person or organization that owns the AI system.
  2. Canada should allow all users—whether commercial or non-commercial—to use AI for text and data mining (TDM) so long as they have legal access to the material. There should be no additional, special approvals for the use of TDM tools. Canada should avoid making its copyright framework overly complicated in considering TDM-specific requirements and exceptions and instead ensure that the use of TDM is consistent with current IP law, whether this relates to infringement or licensing. 

Authorship and Ownership: AI-based Work Should Receive IP Protection

Work produced by AI should receive IP protections. The innovation-incenting function of IP law should not change based on whether a computer or a person creates the output. While it is true that an AI system does not respond to the financial incentives that are central to IP-based innovation policy, the entities who develop and own AI systems most certainly do. Canada should ensure its laws and policies recognize (and not prohibit) non-human creativity and innovation, while clarifying the criteria, rules, and legal linkages for assigning rights to relevant owners (whether individuals, organizations, or firms).Footnote 2

As AI becomes ever-more sophisticated, it is moving closer to joining the “creative class.” While this possibility has been debated within IP circles for decades, we are finally at a point where it is becoming a common practical issue rather than a hypothetical scenario or ad hoc novelty.Footnote 3 There are two ways of looking at AI, but both lead to the same conclusion about how AI-created works should be treated by IP law.

One way is to consider AI as just another tool, analogous to dice, that provides non-deterministic outputs. In these situations it is still the human creator who determines how to use those non-deterministic outputs to generate creative content.Footnote 4 Another is to consider AI as something different from a tool, and more like an independent person. Imagine a parent trains a precocious child in research techniques, and the child then uses this training to discover something new on their own. Generally speaking, this child would get to claim full IP rights for their discovery.

Extending this analogy to AI systems, some may say an AI system is like the child trained by a grown up, and that anything it then creates should be the rightful IP of the AI system. While the human may have trained the AI system, as the parent trained the child, the AI system did all the work. And while an AI system could not exist without its human owner (to pay for hardware and electricity and keep it running), neither would a child survive without the parent (paying for food, clothing, and shelter). Indeed, AI systems are now capable of creating new works and new ideas on an increasingly autonomous basis. If IP law does not recognize works produced by AI systems, then there is a risk that humans will have little incentive from IP law to create and train AI systems that can autonomously create new ideas. So in this situation, as in the first one, the human owner of the AI system should receive the IP rights for any discoveries and creative works produced by an AI system.

Canada needs to ensure its IP law allows humans to be assigned authorship of IP produced by AI systems given that AI will likely be able to produce genuinely new and novel creations that would receive IP protections if produced entirely by a human. Canadian IP law should not be biased against using machines.

For example, in 1998, John Koza developed an algorithm as part of an “invention machine” that created simple circuit designs. In 2005, Mr. Koza’s machine passed one of the first “Turing tests” for inventions: a patent examiner, not knowing the design was created by a computer, found that it demonstrated creativity (a “non-obvious” step in IP law) and granted it a patent, making it one of the first intellectual property protections ever granted to a nonhuman designer.Footnote 5 However, as ITIF argues should be the case, Mr. Koza was the one who actually obtained the patent generated from the machine’s output.

As long as an AI-based creation meets the conditions that statutory subject matter must meet before receiving IP protections, it should receive protection. Policymakers should be consistent in recognizing AI-based creations that would otherwise receive protection if they had a human creator/inventor. Although the AI system itself may be the proximate creator of the work, others, such as the owner/controller of the AI system at whose initiative the work is ultimately created, should be entitled to ownership of the AI’s works.Footnote 6

Nothing in international treaty law explicitly authorizes, or prohibits, protections for such computer-generated works.Footnote 7 The Berne Convention states that the Union is created “for the protection of the rights of authors in their literary and artistic works,” without defining who the author is, because “national laws diverge widely, some recognizing only natural persons as authors, while others treat certain legal entities as copyright owners.” Furthermore, this debate is not new. In 1965, the U.S. Copyright Office’s annual report (in the section “Problems Arising From Computer Technology”) raised the question of whether computers can author musical works.Footnote 8 In the late 1980s and early 1990s, the World Intellectual Property Organization (WIPO) considered protections for “computer-produced works” in discussions of a possible (but never achieved) model copyright law.Footnote 9 The debate around AI and IP is simply the latest iteration of this IP policy debate.

Canada should build a framework to make two key differentiations: between users who merely push a button on a computer that uses AI and the genuine authors/creators of AI-based works; and between the use of computers and AI as part of the process of creation (as a general tool) and cases where AI contributes substantially to the creation of IP itself. Virtually all types of copyrightable work are regularly created using computers, whether it’s Microsoft Word for literary works, architectural works using Autodesk’s AutoCAD, or pictorial and graphic works enabled by Adobe Photoshop. There is no clear dividing line between the creative techniques used by digital and analog authors. Cutting and pasting is easier and faster on a computer, but the verbs "cut" and "paste" betray their analog origin.Footnote 10 The AI we’re referring to is the segment of the technology that can be considered sophisticated and smart, not simply a means of automating a process for efficiency. Some AI systems go beyond relatively minor increases in efficiency and perform virtually all of the key tasks that lead to the creation of a new work.

If Canada wants to be a world leader in innovative digital technologies, it needs to ensure it doesn’t discriminate against what is likely to be one of the most transformative general-purpose technologies (similar to the microprocessor) of the future. Some advocates argue that computer-generated works should become public property.Footnote 11 While there are not, and should not be, any restrictions on an individual using AI to create public-domain content, requiring this would remove the incentive for people and organizations to invest the considerable time and effort into using an increasingly important driver of innovation. In the case of AI, the proximate creator may not be human, but the owning and controlling entity will be, whether it is an individual or organization. Despite AI’s increasing sophistication, AI does not operate autonomously, in a vacuum, outside of an owning and controlling legal entity.

Other countries provide realistic, pragmatic models and reference points for Canada; Hong Kong, India, the United Kingdom, New Zealand, and Ireland, for example, all provide copyright protection to computer-generated works.Footnote 12 In 1988, the United Kingdom became the first country to include provisions on “computer-generated works” as part of its Copyright, Designs and Patents Act (“CDPA”). It states that these are works “generated by computer in circumstances such that there is no human author of the work.”Footnote 13 Furthermore, as it relates to authorship, the CDPA provides that, “[i]n the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”Footnote 14 Of note, this protection extends only to literary, dramatic, musical, and artistic works and not to media works (sound recordings, films, cable programs, and published editions), although a similar system applies to design rights.Footnote 15 It is therefore not a major leap to see how this could be adapted and extended to include AI-created copyright material and patented products. Rather than treating AI-based work as a work “generated by a computer in circumstances such that there is no human author of the work,” a computer-generated work could be defined as a work “generated by a computer in circumstances such that the computer, if a natural person, would be an author.”Footnote 16

Such a pragmatic approach stands in contrast to parts of the debate in Europe, which has tended to focus on much broader legal and ethical debates about how to account for and manage AI. For example, a 2017 report from the European Parliament’s Committee on Legal Affairs included a diverse, confusing range of legal, policy, and ethical issues and proposals around AI and robotics.Footnote 17 In this same report, the Commission on Civil Law Rules on Robotics called for the European Commission to elaborate on “the criteria for ‘own intellectual creation’ for copyrightable works produced by computers or robots.” Meanwhile, the Committee on Industry, Research and Energy stated that next steps should “respect contractual freedom and leave room for innovative licensing regimes” and “cautions against the introduction of new intellectual property rights in the field of robotics and artificial intelligence that could hamper innovation and the exchange of expertise.”Footnote 18

The report also called on the European Commission to create a specific legal status for robots, including bestowing “electronic personality” to robots when considering legal responsibilities in cases where they’ve caused injuries or damages.Footnote 19 In reaction, a group of 156 AI experts from 14 European countries sent a letter to the European Commission warning that granting robots legal personhood would be inappropriate from a legal and ethical perspective. (One could easily envision this leading to absurd results, such as a robot having to spend time in jail for injuring a human). The main focus of the opposition to the European Parliamentary proposal was that it would absolve robotics manufacturers of legal responsibilities for the actions of their machines (“I didn’t build a faulty robot, the robot itself was faulty.”)Footnote 20

This debate in Europe focuses on one avenue for addressing AI-based legal concerns, such as those related to IP creations: bestowing a degree of personhood on the machine. A pop-culture analogy can be found in “The Measure of a Man” (episode nine of the second season) of Star Trek: The Next Generation, which deals with the question of whether Lieutenant Commander Data (a male android with a built-in AI system) is a piece of Starfleet equipment or a sentient being.Footnote 21 In line with this, Malta is considering a robot citizenship test as part of its national AI strategy.Footnote 22 Similarly, others have suggested that computers should hold IP rights and that these could be shared under contract.Footnote 23 But of course, the big difference between “Mr. Data” and a Roomba robot is that the former has consciousness and free will. The Roomba has neither and just bumps into things until it changes direction. And for the foreseeable future, AI will resemble the Roomba, not Mr. Data.

AI-Based Copyright Work Should Reside with its Owning and Controlling Entity

The human involvement that Canadian policymakers should focus on is that of the owning and controlling entity using the AI for creative purposes. The involvement of a natural person should not be required for an AI-based work to receive copyright protection, as long as the work meets all the other criteria. But there needs to be a clear connection to an owner/operator, whether this is a person or corporation. For decades, machines have been autonomously generating works that have traditionally been eligible for copyright and patent protection.Footnote 24

In doing so, Canada needs to differentiate between recognizing computer-generated works and the associated problem of who should be considered the rightsholder of these works. This issue of assigning authorship/ownership for AI-based work is akin to other, analogue copyright issues. The same challenges will arise in determining whether the work is an infringing copy, an unlawful derivative work, a lawful derivative work, a joint work, or a sole-authored work. Yet this is more a legal question about how IP is generated, shared, and used than a question about whether the IP should be recognized at all.

To see this, imagine a scenario in which someone buys an “AI machine” and sets it up so it generates songs and automatically posts them online under a royalty-free license, free for anyone to use, producing and uploading 100 songs a minute in perpetuity. The relevant IP question is not whether the purchaser of the AI machine should be able to copyright the songs or whether the songs are in the public domain. The relevant question is what the owner of the machine chooses to do (license the AI’s output, or let it enter the public domain) and whether the AI machine is producing infringing work.

In line with this, Canada should (conceptually) look at AI as some combination of an autonomous tool and an employee/subcontractor—both work within the confines of a legal entity, which benefits from their labor, including their creativity. Outside of computer-generated works, U.S. copyright law already has a mechanism for authorship by artificial “persons”: in the case of a work made for hire, the employer for whom the work was prepared is considered the author for legal purposes.Footnote 25 Regardless of how autonomous AI may be, it will be operating within the confines of a legal entity’s operations.

For example, in the case of drug discovery, researchers at Harvard University, the University of Toronto, and the University of Cambridge created a generative model, trained on 250,000 drug-like molecules, that generated plausible new molecular structures.Footnote 26 The AI can create drug compounds largely independently of humans (although it still sometimes suggests nonsensical structures) and without lengthy simulations.Footnote 27 These institutions controlled the AI and therefore should benefit from its work by being assigned any associated IP.

Questions about whether there should be some threshold or criteria of involvement for someone to be recognized as an author/creator of AI-based work are similar to debates that afflict analogue IP applications about who should or shouldn’t be recognized as an author/creator. Obviously, it would be useful for Canada to use this consultation process as the foundation for further research (as much as it can in the absence of relevant court cases and legal precedents) on where and how to draw the line on the forms of human involvement in AI that could allow an individual to qualify for authorship of an AI-based work, such as curating training data, observing processes, or setting the AI’s orientation. However, this will be difficult in the absence of legal changes that first allow AI-based works to receive IP protection and corresponding court cases that help analyze the contextual factors that will likely go into a decision about where the line should be drawn.

The Acohs v. Ucorp case in Australia highlights some of the potential issues Canada will need to consider as part of its research, especially the potential difficulties in establishing authorship in the context of computer-aided or computer-generated work.Footnote 28 Both firms in the case used software to help clients produce individualized reports (each containing unique source code) about hazardous materials. Their systems are interoperable in that Ucorp used reports from Acohs. Acohs initiated a copyright-infringement case against Ucorp on the basis that Ucorp’s system was infringing the copyright in its reports. Acohs was not challenging Ucorp’s use of its reports, which was part of an implied license.Footnote 29 The court rejected Acohs’s claim that the source code was an original literary work, as there was no evidence that the user, in entering the details of the hazardous material they needed information on, had the resulting source code in mind. The judge did recognize that the reports were original literary works, but Acohs’s case for protection failed because the judge could not identify joint authorship (of both the source code and the report output). The judge rightly rejected Acohs’s position on the ground that it was an artificial concept that the computer programmers and the clients (who entered their data) collaborated with each other in writing the source code and creating the reports. This highlights the challenge Canadian policymakers face in protecting not only the source code of AI systems but also the products they produce, and in assigning authorship in both scenarios.

Text and Data Mining and Copyright: Allow AI to Use Material Just Like Any Other User Who Has Legal Access to Copyright Material

Text and data mining (TDM) is a powerful tool that allows researchers in a wide range of disciplines, from bioinformatics to digital humanities, to plough through texts and datasets and interpret minute details. In the field of computational linguistics (e.g., human language technology, or natural language processing), some experts estimate that TDM already accounts for about 25 to 30 percent of all research projects.Footnote 30 For scholars and scientists, access to the rigorously scrutinized work of their peers, such as academic journals and databases, has always been a vital resource. Researchers who subscribe to these sources can explore them using traditional keyword searches and meta-tags predefined by publishers, but that approach has serious limitations, and manually reviewing all of these sources is a slow and tedious process whose results are often inaccurate and incomplete. TDM gives researchers the ability not only to find a needle in a haystack, but to quickly find and categorize all manner of small objects hidden in many hundreds or thousands of haystacks.

For example, medical researchers can use technologies like natural language processing to quickly analyze the outcomes of thousands of clinical trials. Similarly, TDM technologies can be used to process the data contained in a large collection of scientific papers in a particular medical field to suggest a possible association between a gene and a disease. In line with this, Elsevier, a global information analytics business, allows subscribers to carry out independent text mining research and offers customized text mining services in the fields of life sciences and pharmaceutical research.Footnote 31 Indicative of its management of IP, Elsevier can add other licensed content that its clients want to use for TDM to the database (upon request, and no doubt after payment, licensing, and rightsholder-recognition issues are agreed).Footnote 32 This type of analysis supports efforts to develop data-driven precision-medicine initiatives that use the latest evidence to deliver personalized treatments. Data mining cannot provide all of the insights gained from human experts closely studying texts, but it does allow researchers to use rapidly developing tools to draw on a much larger pool of literature and data to support their work.
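The gene–disease mining described above can be illustrated with a minimal sketch. This is not any publisher’s actual API; the term lists, abstracts, and counting rule (a gene and disease co-mentioned within one sentence) are hypothetical simplifications of what production TDM pipelines do at scale.

```python
import re
from collections import Counter

# Hypothetical term lists; real pipelines use large curated vocabularies.
GENES = {"BRCA1", "TP53"}
DISEASES = {"breast cancer", "lung cancer"}

# Hypothetical abstracts standing in for a licensed corpus.
abstracts = [
    "Mutations in BRCA1 are strongly associated with breast cancer risk.",
    "TP53 alterations occur in many tumors. BRCA1 screening aids breast cancer prevention.",
    "Smoking remains the dominant risk factor for lung cancer.",
]

def co_mentions(texts):
    """Count (gene, disease) pairs that appear in the same sentence."""
    pairs = Counter()
    for text in texts:
        # Naive sentence split on terminal punctuation.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            lowered = sentence.lower()
            for gene in GENES:
                if gene.lower() in lowered:
                    for disease in DISEASES:
                        if disease in lowered:
                            pairs[(gene, disease)] += 1
    return pairs

counts = co_mentions(abstracts)
print(counts[("BRCA1", "breast cancer")])  # 2
```

Note that nothing here republishes the underlying texts: the output is aggregate counts, which is why TDM is often described as a "non-consumptive" use.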

Copyright law can stifle the use of text and data mining on resources that researchers can legally access and analyze with non-automated means. The use of data mining on copyrighted material often falls foul of existing intellectual property laws because the technical process involves accessing and extracting data from its original source and copying it into another database for analysis. For example, even if individuals or firms can lawfully access and read the material, such as through a university library subscription, copying a substantial part of the works may infringe the copyright in those works (what is “substantial” depends on the context and circumstances). Canadian copyright law should allow publishers to set the subscription fees for access to their content, prohibit unauthorized reproductions of their content, and receive appropriate compensation. But it should not require people with lawful access to content, such as paid subscribers, to seek approval from publishers to use automated research methods.

However, combining copyrighted works (assuming the user has legal rights to use the material) into a searchable database that yields useful information is a permissible fair use. There is case law suggesting that acts of incidental or intermediate copying that do not ultimately result in the external re-use of protectable (expressive) parts of a copyright work should not be considered infringing.Footnote 33 After all, there is nothing illegal about “mining” databases manually; AI-based technology only automates the process. For example, the Google Books project provides a searchable index of quotes and snippets of text from the 25-plus million books it scanned (under fair use).Footnote 34 (The main legal issue was whether the quotes and snippets go beyond fair use.) Canada should ensure that its IP frameworks allow people and firms to do this, while respecting rightsholders and ensuring that proper licensing arrangements and payments are made and followed.
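The kind of searchable index the Google Books example relies on can be sketched in a few lines; the point is that a query returns index entries and short snippets, not full copies of the works. The corpus, function names, and snippet width below are hypothetical illustrations, not Google's implementation.

```python
import re
from collections import defaultdict

# Hypothetical two-book corpus (public-domain opening lines).
books = {
    "Book A": "It was the best of times, it was the worst of times.",
    "Book B": "Call me Ishmael. Some years ago, never mind how long.",
}

def build_index(corpus):
    """Map each lowercase word to the set of titles containing it."""
    index = defaultdict(set)
    for title, text in corpus.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(title)
    return index

def snippet(corpus, title, word, width=20):
    """Return a short excerpt around the first match, not the full text."""
    text = corpus[title]
    pos = text.lower().find(word)
    return text[max(0, pos - width): pos + len(word) + width]

index = build_index(books)
print(sorted(index["times"]))  # ['Book A']
```

The intermediate copy (the full text) exists only to build the index; what is exposed externally is the word-to-title mapping and a bounded snippet, which is the distinction the intermediate-copying case law turns on.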

At the heart of the issue with TDM and copyright is the need to strike the right balance between access and use on the one hand and respecting and supporting the role of IP and rightsholders’ ability to benefit from it on the other. TDM has become a copyright issue around the world because jurisdictions set this balance in various ways.Footnote 35 In the United States, TDM is governed by “fair use,” especially as it relates to “non-consumptive uses.” The U.S. copyright regime is considered more favorable to TDM practices than what appears to be the case under European laws, in part because of the inherent flexibility of the U.S. fair use doctrine.Footnote 36

Canada should keep a central point in mind as it considers TDM and copyright: AI should use copyrighted material (as part of computational analysis) like everyone else. In many cases AI simply automates processes and uses that were already (legally) taking place. If users already have legal access to the material they are using, then this shouldn’t be an issue. This highlights the fact that there is no need to rewrite the rules of copyright simply because AI systems are now one of the potential users of publicly shared content. In line with this, Canada should not create an additional permission requirement for mining data that users already have legal access to use. In practice, copyright holders (or those that manage their portfolios) may well find ways to “upsell” on TDM access, for example, by creating APIs that allow for easier data requests, or offering multiple copies or licenses that allow for sharing the data with third-party data processors. That is fine, and useful, but doesn’t require new IP rights.

TDM should not depend on special approval, should not require misuse provisions that are already covered by IP law, and should not force the single use of data. The consultation details three potential safeguards and limitations that are misguided and redundant. Canada should not create a TDM-specific conditionality as it does in its listing of potential safeguards and limitations (point 1): whether access comes via a license or contract, or via Creative Commons material online, if the access is legal, TDM users should not need an additional approval. Similarly, a user should not have to take additional measures to prevent the distribution of reproductions of works or other subject matter when using TDM, as doing so illegally is already covered by existing IP law (point 2). Furthermore, Canada should not require a person using TDM to refrain from further reproducing any works, as this would prevent re-use of data that a user has already legally collected and used (point 3).

Overall, Canada should be ambitious and allow all users—whether commercial or non-commercial, such as academia—to use TDM on resources they can legally access and analyze with non-automated means. In 2014, the United Kingdom enacted legal changes that provide an exception for non-commercial uses.Footnote 37 Similarly, the European Union considered a specific exception for research institutions to carry out text and data mining on lawfully accessed, copyright-protected works.Footnote 38 This exemption is reasonable because it creates a special dispensation for data mining and does not alter other laws that prohibit the unauthorized extraction or reproduction of copyrighted works. However, Canada, the United Kingdom, and the EU should allow everyone to take advantage of these more efficient and effective data-driven research methods.Footnote 39

Relevant Use Cases

The consultation asks how individuals and organizations are using AI to produce or to assist in the production of works or other copyright subject matter. This section details: the use of the same AI technology used in “deep fakes” for genuine, creative purposes; the use of creative commons licenses for facial recognition; and the rise of AI musicians and implications for fair use/fair dealing and copyright.

Use Case: “Deep Fakes” and the Legitimate Uses of AI-Based Technology for Creative Purposes

The use of AI-based technology for creative purposes, and how this relates to concerns about “deep fakes,” is indicative of how organizations are using AI to produce or to assist in the production of works or other copyright subject matter.

The increasingly realistic AI-based technology that allows content creators to tell the stories of real-life people and events in TV shows and movies—such as through docu-dramas—can also be used maliciously to insert a person’s likeness into objectionable content, especially fake pornographic content featuring famous actresses and falsified speeches by politicians. Policymakers around the world are only just beginning to consider the impact of digital replicas and voice- and image-manipulation software on entertainers and expressive works. And while it is not yet a prevalent issue, policymakers are also beginning to grapple with the risk that deep fakes present for the spread of disinformation, including political disinformation and election misinformation.Footnote 40

The underlying technology has many legitimate creative uses. Movie and TV producers have used similar, but less advanced, technology for some time.Footnote 41 More recently, major digital-effects studios have used AI to convincingly map a famous actor’s likeness onto another performer to add value to their stories, as well as to cut down production time and costs. For example, 2017’s Guardians of the Galaxy Vol. 2 showcased a de-aged, 1980s version of star Kurt Russell, while 2016’s Rogue One re-created Peter Cushing’s Grand Moff Tarkin character from 1977’s Star Wars: A New Hope, despite the fact that Cushing died in 1994.Footnote 42

While “deep fakes” are a troubling and rapidly growing problem, policymakers, as with many other technology issues, shouldn’t ban the underlying technology given its many legitimate uses, but should target cases where it is used for specific malicious purposes.Footnote 43 For example, without explicit exceptions for expressive works, some U.S. states’ proposed bills (such as New York’s) targeting the use of deep fakes for malicious purposes would preclude movie and TV-show creators from telling the stories of real-life people and events.Footnote 44 The First Amendment of the U.S. Constitution protects creators working on fiction, non-fiction, or some hybrid of the two. This is why any new policy related to deep fakes should explicitly exempt First Amendment-protected uses, including news reporting, commentary, and analysis.

It can be more straightforward for a person to be protected from deep fakes through privacy laws when the content involves non-consensual sex scenes, deep-fake pornography, or non-consensual nude performances.Footnote 45 Obviously, the challenge then becomes identifying the individuals involved in creating the deep fakes. Many U.S. states and countries also have “revenge porn” laws that criminalize the publication of an intimate image without consent (although it remains to be seen whether courts will apply these laws to deep-fake pornography videos).Footnote 46 These deep-fake videos could also constitute criminal harassment (depending on whether a state’s or country’s laws include provisions about misusing a person’s image or likeness).Footnote 47 Beyond using existing and new legal tools to address the malicious use of this technology, companies like Google and Facebook are actively developing tools to help identify deep fakes.Footnote 48

This AI-based technology is also relevant to IP issues when it is used to infringe publicity and likeness/personality rights. The right of publicity prevents the unauthorized commercial use of an individual's name, likeness, or other recognizable aspects of their persona. It gives an individual the exclusive right to license the use of their identity for commercial promotion. In the United States, the right of publicity is largely protected by state law, where it varies from state to state, and only around half of the states recognize it.Footnote 49 However, federal unfair competition law provides a related statutory right to protection against false endorsement, association, or affiliation. Internationally, rights analogous to the right of publicity are sometimes recognized as “personality rights,” “rights of persona,” or other similar terminology.

However, intellectual property law already provides a framework for the legitimate use of this AI-based technology. Musicians, actors, or their respective estates can give others permission to use their likeness, publicity, or personality rights as part of a new creation.Footnote 50 For example, hologram-based concerts featuring deceased musicians may become more popular as the technology improves. There have already been sold-out shows (with $200 tickets) for a holographic performance by Roy Orbison, but again, this was done with the consent of the performer’s estate.Footnote 51 A key question for policymakers is how intellectual property law applies where a “deep fake” is derived from copyright-protected material, since an unauthorized modification and republication of that material would infringe those rights.

Similarly, performers may not want their likeness to enter the public domain upon death. In the United States, the right of publicity and rights to digital replicas have long been a priority for the SAG-AFTRA union and its members.Footnote 52 If there is value in a person’s likeness after their death, their family or whoever makes up their estate (whether friends or a designated charity) should receive the fruits of their labors. Some people would also no doubt want to exclude commercial use of their likeness altogether.

Use case: Facial Recognition and Creative Commons Licenses

The consultation asks how individuals and organizations are using AI to produce, or to assist in the production of, works or other copyright subject matter. The recent backlash against the use of publicly available photos to train facial recognition systems highlights some misunderstandings about how copyright law permits the use of copyrighted works for computational purposes, such as training AI or machine learning systems.Footnote 53

In March 2019, IBM created the “Diversity in Faces” dataset to provide a set of photos of people’s faces of various ages and ethnicities to help reduce bias in facial recognition systems.Footnote 54 IBM compiled the dataset from photos people shared online under Creative Commons licenses, which allow others to use the images for any purpose.Footnote 55 The widespread use of these licenses has been a tremendous boon to society and the economy, creating a wealth of valuable content that others can freely use and adapt for their own purposes, and the “Diversity in Faces” dataset is a perfect example of how openly licensed works generate valuable benefits.

It is clear that IBM can lawfully distribute images with Creative Commons licenses. While some people may be opposed to facial recognition technology, and may not like that their images were used to train some company’s algorithms, that does not mean copyright law is broken or needs to be changed, or that IBM did anything wrong. As Ryan Merkley, the chief executive officer of Creative Commons, notes, “copyright is not a good tool to protect individual privacy, to address research ethics in AI development, or to regulate the use of surveillance tools employed online.”Footnote 56 It would be unfortunate if general public angst about AI led to the popularization of licensing agreements that explicitly prevent the computational use of data. Platforms like Flickr should resist any such pressure and continue to offer technology-neutral licenses, ensuring that any data a human can access, a computer can also access. And companies like IBM should be encouraged to continue to package datasets for public use.

Unfortunately, misunderstandings about how open licenses work are commonplace and responsible for new waves of outrage. More recently, an October New York Times article called attention to another facial recognition training dataset compiled from openly licensed Flickr photos called MegaFace.Footnote 57 People included in the database expressed similar distaste and frustration, failing to acknowledge that their (or, for people whose childhood photos were included, their parents’) use of open licenses explicitly allows for this kind of use. Ultimately, there are limits to the amount of control individuals have over content they share publicly—whether they do this online, in print, or in person. There is no need to rewrite the rules of copyright simply because AI systems are now one of the potential users of this publicly shared content.

Use case: AI Musicians, Fair Use, and Copyright

The case of Aiva Technologies and its AI performer—called AIVA, which stands for “Artificial Intelligence Virtual Artist”—highlights how AI can be applied creatively and legally.

AIVA composes classical music, which is used in soundtracks by film directors, advertising agencies, and game studios. All of AIVA’s music is copyright protected. AIVA was the first AI to be officially recognized as a composer, having been registered with the authors’ rights society of France and Luxembourg (where all of its copyright resides under its own name).Footnote 58 However, while AIVA’s music is largely indistinguishable from the work of human musicians, its compositions still require human input with regard to orchestration and musical production, which no doubt features in its copyright registrations. This points toward a broader potential model for revised intellectual property protections for AI creations, as it recognizes the role of owners and creators in managing their AI, much as current law does with a human composer working as an employee or contractor.

AIVA uses machine learning to understand and model high-level abstractions in data, such as the patterns in a melody. On top of this, AIVA uses reinforcement learning, which teaches the software to decide what action to take next and so does not require labelled inputs or outputs. In this way, AIVA improves its performance without needing explicit instructions. Aiva Technologies trained AIVA by feeding it large databases of classical music, allowing it to capture concepts of music theory, develop its own model of composition, and compose its own music.
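The principle described above—learning statistical patterns from existing music and then generating new sequences—can be illustrated with a deliberately simple sketch. The toy Markov-chain model below is a hypothetical illustration only, not the actual technique used by the company, whose systems rely on far more sophisticated deep learning and reinforcement learning; the note names and corpus are invented for the example.

```python
import random
from collections import defaultdict

def train(melodies):
    """Learn note-to-note transition patterns from a corpus of melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def compose(transitions, start, length, seed=0):
    """Generate a new melody by sampling the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:
            break  # dead end: no observed successor for this note
        melody.append(rng.choice(choices))
    return melody

# Toy stand-ins for public-domain melodies used as training data.
corpus = [
    ["C", "D", "E", "C", "E", "D", "C"],
    ["E", "D", "C", "D", "E", "E", "E"],
]
model = train(corpus)
print(compose(model, "C", 8))
```

The output is a new note sequence that statistically resembles the corpus without copying any one melody, which is the same high-level idea, scaled down, as training a generative model on a database of classical scores.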

The founders of Aiva Technologies focused on classical music for two reasons: it is the predominant style used in the background of movies, games, and similar productions; and all the training data—the classical music of Mozart, Beethoven, and others—is in the public domain, as its copyright protection has expired.Footnote 59 This highlights a critical point: just because digital content is available does not mean the legal framework should disregard existing IP protections when that content is used as an input to AI creations.

In the case of copyright-protected material, Canadian law should treat AI-created music, and related concerns about infringement, in much the same way it already treats cases where a human artist creates a song that sounds like (or samples directly from) an existing one. Just as human artists are influenced by other musicians, AI-based systems should be able to listen to music in creating their own original compositions. Obviously, if an AI creates indistinguishable copies or samples of music without permission, that would be an infringement, just as George Harrison’s composition “My Sweet Lord” was ruled an infringement of the rights of Bright Tunes, a New York publisher, in the song “He’s So Fine,” written by Ronnie Mack and part of its catalog. Similarly, if AI-based musicians are marketed as sounding like a particular musician (after being trained on that musician’s music), that raises the prospect of infringing publicity and likeness rights. Inevitably, there will be scenarios where assessing infringement by AI is difficult, but this is not unique to AI. Just as courts already weigh whether one human artist took the notes, lyrics, or melody of a song from another, Canadian courts would have to analyze this gray zone in cases involving AI, music, and potential infringement.

Endnotes