What We Heard – Consultation on the development of a Canadian code of practice for generative artificial intelligence systems

Background

Recent advances in artificial intelligence (AI) technology have resulted in calls from experts in a variety of fields for governments to take action to ensure that generative AI systems are developed and deployed in a manner that respects health, safety, and human rights. The Government of Canada has been engaged in domestic and international discussions about the need for standards and safeguards for generative AI systems, including through work to support the Artificial Intelligence and Data Act (AIDA). In support of Canada's further efforts in this space, Innovation, Science and Economic Development Canada (ISED) launched public consultations in early August 2023 on a code of practice for generative AI systems. The code would be implemented on a voluntary basis by Canadian firms, ahead of the coming into force of AIDA. The consultation supporting the development of the code was also designed to inform Canada's contributions to a number of international deliberations on emerging codes of conduct for generative AI systems.

Who we heard from

The consultation was open to the public, and ISED sought feedback from a diverse range of stakeholders through roundtables, bilateral meetings, and written feedback. ISED held seven roundtables with stakeholders between August 10 and September 15, with participants including representatives from academia, civil society, Canada's AI research institutes, and Canadian firms of all sizes. ISED also received 24 written submissions and engaged with Canada's Advisory Council on Artificial Intelligence. In total, ISED received feedback from 92 stakeholder groups and individuals.

What we heard

Framing and objectives

A number of stakeholders commented on the framing of the code and requested clarification regarding the role it was expected to play in the ecosystem. Some suggested that the title "code of practice" could be confusing, given that many codes of practice are developed by industry, and that the Consumer Privacy Protection Act identifies codes of practice in a specific context.

A number of participants questioned what function the code was intended to serve in relation to the draft AIDA and Canada's international engagements on AI governance. For example, several stakeholders asked for clarification on the connection between the code and future regulations made under AIDA, and on the role the code might play following the coming into force of AIDA. There were also comments on ensuring alignment with efforts underway in like-minded countries, such as the United States, as well as through forums such as the G7; this was especially important for firms with commercial activities outside Canada. Some participants also asked that the terms and definitions used in the code be aligned with international norms.

Some respondents expressed skepticism regarding the utility of a voluntary code, both in terms of whether firms could be expected to adhere to it and because a single code applying across sectors might not be relevant to the full range of business activities it would cover.

In addition, a number of stakeholders commented on the framing of the code as applicable to generative AI systems. Some thought this was too restrictive, and that the principles and elements listed would be relevant to all AI systems. Others commented that generative AI was too broad a category to be helpful in managing risks, and that the term encompasses lower-risk systems that correct spelling or grammar, in addition to more capable systems. A number of respondents commented that the code should apply primarily to "frontier systems" in order to align with efforts in the United States.

There was also a divergence of opinions regarding the role of the code. Some stakeholders commented that the code should set out high-level guiding principles, without providing specific guidance, while others requested more specific definitions and guidance in order to provide greater certainty regarding how the code could be implemented.

Scope and application of the Code

Stakeholders provided detailed feedback on the scope of the code in terms of: 1) which types of systems should be included; 2) how the code should apply to different actors in the value chain (e.g., developers, deployers, or operators); and 3) how it should apply across different risk categories.

Some stakeholders proposed that the code focus solely on "frontier systems" (i.e., those more capable than the current generation of general-purpose generative systems). Others suggested that the code should only apply to generative systems put to use in high-impact contexts, such as health care or sensitive decision-making contexts. A number of respondents emphasized the importance of maintaining a risk-based approach consistent with that found in AIDA, in order to avoid recommending one set of practices across systems to which they may not be applicable.

In particular, some stakeholders highlighted the difference in risk levels between publicly available generative systems, such as OpenAI's ChatGPT, and systems that are only made available in an enterprise context, with a more restricted set of users and uses. Some noted that the risks reflected in the consultation document were more relevant to generative AI systems directly available to the general public, and that systems designed for use within an organization should not be included.

In addition, many stakeholders commented that clear definitions of the roles listed in the code of practice were needed, and that these should align with the terms used in the draft AIDA framework. A number of participants conveyed the view that the distinction between deployers and operators was not clear, and that the measures associated with each role should be appropriately tailored to the responsibilities of firms operating at each stage of the value chain.

With regard to the scope of issues that the code should cover, a variety of views were heard. Some favoured expanding the code's scope to make it more comprehensive, including in areas such as privacy, copyright, and environmental sustainability. Others advised against expanding the scope, noting that existing laws, such as the Personal Information Protection and Electronic Documents Act and the Copyright Act, already cover these issues. Concerns about expanding the scope mainly reflected a desire for the code to provide specific, instructive guidance on AI practices prior to the coming into force of AIDA, while avoiding the confusion that could arise from including guidance on practices already subject to legal requirements.

A number of stakeholders commented on the importance of distinguishing between safety and security. Some emphasized the importance of addressing security threats, including cybersecurity threats to advanced AI systems and threats that could emanate from "frontier systems" in the future.

Implementation challenges and feedback on specific measures

In feedback received, stakeholders raised concerns about how the measures would apply to organizations that wished to adopt the proposed code. The primary concern was whether certain measures could realistically be implemented by all identified actors. For example, one stakeholder indicated that the requirement to have multiple lines of defence would not be implementable by smaller firms and would not be necessary for lower-risk systems. In addition, stakeholders asked for clarity on where one organization's responsibilities ended and another's began. There were also questions about whether the term "operator" included end users of a system or referred exclusively to those managing system operations; if it included users, a number of stakeholders indicated that some of the measures should not apply (e.g., measures to mitigate the risk of biased output).

A number of stakeholders expressed concerns regarding smaller companies' ability to adhere to the code, as well as potential impacts on investment in Canada. Although the code was intended to be voluntary, some respondents indicated that if it were seen as impractical for small businesses, investors could take this into consideration. This concern was often connected to the need to differentiate between systems with different levels of risk. For example, some respondents noted that small businesses would have difficulty implementing measures such as multiple lines of defence or red teaming, and that these measures were not needed for lower-risk applications of generative AI.

Differing views emerged on the measures proposed under the transparency pillar. Some stakeholders stressed the importance of transparency regarding data inputs and system design so that third parties, such as academics, could better identify risks to the public. Proponents of this view advocated for disclosure of details of system function and design to a limited group of vetted researchers, as well as disclosure of the datasets and methods used to train the system. Other respondents expressed concerns that some of this information would amount to disclosure of intellectual property, or could expose them to litigation risks. Some suggested that transparency should focus on the information that users and downstream firms need to inform their safe use or operation of the system.

Stakeholders also provided feedback on a number of specific requirements.

  • Some commented that fine-tuning should not be used as a potential mitigation measure, as it could increase bias depending on how it is performed.
  • Others suggested revisions to the measure on assessing and curating datasets, as the use of words such as "avoid" and "non-representative" could be unclear.
  • Some highlighted the importance of mitigating sources of bias not related to the input data, such as model bias. Several roundtable participants stressed that the code should make clear that bias must be assessed throughout the full AI lifecycle, not measured on the training data alone.
  • Certain respondents made specific recommendations to include further measures to support fairness and safety, such as the establishment of ethics boards.

Participant List

  • AccessPrivacy
  • Advanced Symbolics
  • AI Governance and Safety Canada (AIGS)
  • AI Redefined
  • Alliance of Canadian Cinema, Television and Radio Artists (ACTRA)
  • AltaML
  • Amazon
  • Andrew Clement (University of Toronto)
  • Anthropic
  • Appen
  • Artificial.Agency
  • BlackBerry
  • Blake Richards (McGill University)
  • BMO
  • BrainBox AI
  • Campaign for AI Safety
  • Canadian Banking Association
  • Canadian Chamber of Commerce
  • Canadian Forum for Digital Infrastructure Resilience (CFDIR) AI/ML Working Group
  • Canadian Life and Health Insurance Association (CLHIA)
  • Canadian Marketing Association
  • Canadian Tire
  • Catherine Régis (Université de Montréal)
  • Centre for International Governance Innovation
  • Christelle Tessono (Princeton University)
  • CIBC
  • Coalition for the Diversity of Cultural Expressions
  • Cohere
  • Colin Bennett (University of Victoria)
  • Contextere
  • Council of Canadian Innovators (CCI)
  • Davies, Ward, Phillips & Vineberg, LLP
  • Desjardins
  • Digital Governance Council
  • DLA Piper
  • Doina Precup (McGill University)
  • EmoScienS
  • Fairly.AI
  • Golnoosh Farnadi (Université de Montréal)
  • IBM
  • Information and Telecommunications Technology Council
  • iNovia
  • INQ Law
  • Interactive Advertising Bureau of Canada
  • Jeff Clune (University of British Columbia)
  • Lightspeed Legal
  • Lobana Consulting Group Inc.
  • Macdonald-Laurier Institute
  • Maroussia Lévesque (Harvard University)
  • Mastercard
  • Michael Geist (University of Ottawa)
  • Microsoft
  • Monique Crichlow (University of Toronto Schwartz-Reisman Institute)
  • Montreal AI Ethics Institute
  • Moov AI
  • National Ballet of Canada and Canadian Opera Company (joint submission)
  • OBVIA
  • Office of the Privacy Commissioner
  • OpenText
  • Osler
  • Paladin AI
  • Patrick Pilarski (University of Alberta)
  • Professional Association of Canadian Theatres
  • Questrade
  • Radical Ventures
  • RBC Borealis AI
  • Responsible AI Institute
  • Retail Council of Canada
  • Sam Andrey (Toronto Metropolitan University)
  • Samdesk
  • Scotiabank
  • Screen Composers Guild of Canada (SCGC), Songwriters Association of Canada (SAC), and the Société professionnelle des auteurs et des compositeurs du Québec (SPACQ) (joint submission)
  • ServiceNow
  • Sonja Solomun (McGill University Centre for Media, Technology and Democracy)
  • Sunlife Financial
  • TD Bank
  • Telus
  • Teresa Scassa (University of Ottawa)
  • TrojAI
  • University of Western Ontario Centre for Digital Justice, Community and Democracy
  • Variational AI
  • Vector Institute
  • Vidcruiter
  • Vooban
  • Women's Enterprise Organizations of Canada
  • Written submissions from Canadians (three submissions, unaffiliated)