The Artificial Intelligence and Data Act (AIDA) – Companion document

Introduction

Artificial intelligence (AI) systems are poised to have a significant impact on the lives of Canadians and the operations of Canadian businesses. In June 2022, the Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. The AIDA represents an important milestone in implementing the Digital Charter and ensuring that Canadians can trust the digital technologies that they use every day. The design, development, and use of AI systems must be safe, and must respect the values of Canadians.

The framework proposed in the AIDA is the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses. The Government intends to build on this framework through an open and transparent regulatory development process. Consultations would be organized to gather input from a variety of stakeholders across Canada to ensure that the regulations achieve outcomes aligned with Canadian values.

The global interconnectedness of the digital economy requires that the regulation of AI systems in the marketplace be coordinated internationally. Canada has drawn from and will work together with international partners – such as the European Union (EU), the United Kingdom, and the United States (US) – to align approaches, in order to ensure that Canadians are protected globally and that Canadian firms can be recognized internationally as meeting robust standards.

AI is a powerful enabler, and Canada has a leadership role in this significant technology area. That is why the Government's proposed approach in this area has attracted a lot of attention. This document aims to reassure Canadians in two key ways. First, the Government recognizes that Canadians have concerns about the risks associated with this emerging technology and need to know that the Government has a plan to ensure that AI systems that impact their lives are safe. The recently published Report of the Public Awareness Working Group of the Advisory Council on AI reveals significant interest among Canadians in the opportunities offered by AI, but also concerns regarding potential harms. Nearly two-thirds of respondents believed that AI has the potential to cause harm to society, while 71% believed that it could be trusted if regulated by public authorities.Footnote 1 Thus, we aim to reassure Canadians that we have a thoughtful plan to manage this emerging technology and maintain trust in a growing area of the economy.

At the same time, AI researchers and innovators are concerned by the uncertainty that exists regarding future regulation. Recognizing that the regulation of this powerful technology is now an emerging international norm, many in the field worry that regulation will be inflexible or will unfairly stigmatize their field of work. Such an outcome would have significant impacts on opportunities for Canadians and the Canadian economy. This document aims to reassure actors in the Canadian AI ecosystem that the aim of this Act is not to entrap good faith actors or to chill innovation, but to regulate the most powerful uses of this technology that pose a risk of harm. This paper is therefore intended to address both sets of concerns: to provide assurance to Canadians that the risks posed by AI systems will not fall through the cracks of consumer protection and human rights legislation, while making clear that the Government intends to take an agile approach that will not stifle responsible innovation or needlessly single out AI developers, researchers, investors, or entrepreneurs. What follows is a roadmap for the AIDA, explaining its intent and the Government's key considerations for operationalizing it through future regulations. It is intended to build understanding among stakeholders and Canadians on the proposed legislation, as well as to support Parliamentary consideration of the Bill.

Canada and the global artificial intelligence (AI) landscape

Canada is a world leader in the field of artificial intelligence. It is home to 20 public AI research labs, 75 AI incubators and accelerators, 60 groups of AI investors from across the country, and over 850 AI-related start-up businesses.Footnote 2 Canadians have also played key roles in the development of AI technology since the 1970s.Footnote 3 Canada was the first country in the world to create a national strategy for AI, releasing it in 2017, and is a co-founding member of the Global Partnership on AI (GPAI). As part of the national AI strategy, the federal government has allocated a total of $568 million CAD to advance research and innovation in the AI field, develop a skilled talent pool, and develop and adopt industry standards for AI systems.Footnote 4, Footnote 5 These investments have been instrumental in the development of the Pan-Canadian AI Strategy to position Canada as a leading global player in AI research and commercialization.Footnote 6

Revenues from the global artificial intelligence market have been growing in recent years and are expected to surpass $680 billion in 2023.Footnote 7 Market research has projected that the global AI market will reach a size of $1.2 trillion CAD by 2026,Footnote 8 and suggests the market could grow to over $2 trillion CAD by 2030.Footnote 9

Artificial intelligence enables computers to learn to complete complex tasks, such as generating content or making decisions and recommendations, by recognizing and replicating patterns identified in data. Over the last 10 years, the capabilities of AI systems have advanced significantly to the point where they are able to perform tasks that previously required human intelligence, such as identifying and modifying images, performing translation, and generating creative content. AI systems are increasingly being used to make important predictions or decisions about people, such as with regard to credit, hiring, and digital services.

AI systems are being developed and used in Canada today for a variety of applications that add value to the Canadian economy and improve the lives of Canadians. Technology that seemed unthinkable just a short time ago is now a part of everyday life. AI offers a multitude of benefits for Canadians, among which are:

  • Enabling advances in healthcare such as cancer screenings and improving at-home healthcare services;Footnote 10
  • Improving precision harvesting in agricultureFootnote 11 and the efficiency of energy supply chains;Footnote 12
  • Introducing new smart products and personalized services;
  • Increasing the capabilities of language processing technologies, including translation and text-to-speech; and
  • Enhancing citizens' abilities to find and process information.

Why now is the time for a responsible AI framework in Canada

In the digital economy, uses of AI are quickly becoming ubiquitous. As its capabilities and scale of deployment expand, it is important that standards emerge so that businesses and the public have clear expectations regarding how the technology must be managed. Absent clear standards, it is difficult for consumers to trust the technology and for businesses to demonstrate that they are using it responsibly.

While many AI systems have the potential to change lives for the better, high-profile incidents of harmful or discriminatory outcomes have contributed to an erosion of trust, for example:

  • A resume screening AI system used by a large multinational company to shortlist candidates for interviews was found to discriminate against women.Footnote 13
  • An analysis of well-known facial recognition systems showed evidence of bias against women and people of colour.Footnote 14
  • AI systems have been used to create "deepfake" images, audio, and video that can cause harm to individuals.Footnote 15

The increasing importance and prevalence of AI technology across industries today, as well as growing public concern regarding both impacts on individuals and potential systemic impacts, has led to rapid international mobilization around the need to guide and govern AI.Footnote 16 Since 2021, a draft AI Act has been introduced in the European Union, the United Kingdom has published a proposal for regulating AI, and the United States has published its Blueprint for an AI Bill of Rights. If Canada's advanced data economy is to thrive, it needs a corresponding framework to enable citizen trust, encourage responsible innovation, and remain interoperable with international markets.

Canada's approach and consultation timeline

Canada already possesses robust legal frameworks that apply to many of the uses of AI. The Personal Information Protection and Electronic Documents Act provides important guardrails around how businesses use personal information. The Government has proposed the Consumer Privacy Protection Act as part of Bill C-27 to modernize this law in the context of the digital economy, and it is also undertaking broader efforts to ensure that laws governing marketplace activities and communications services keep pace. In addition, a number of other frameworks for consumer protection, human rights, and criminal law apply to the use of AI, including:

  • The Canada Consumer Product Safety Act;
  • The Food and Drugs Act;
  • The Motor Vehicle Safety Act;
  • The Bank Act;
  • The Canadian Human Rights Act and provincial human rights laws; and
  • The Criminal Code.

Indeed, existing consumer protection regulators are already moving to address some of the impacts of AI within their legislative authorities. For example, Health Canada has issued guiding principles for the development of medical devices that use machine learning,Footnote 17 and the Office of the Superintendent of Financial Institutions is working on updating its model risk guidelines to account for the use of new technologies, including AI.Footnote 18 Human rights commissions are also moving to understand the implications of AI for discrimination and other human rights issues.Footnote 19

However, the Government is cognizant that developments in AI have created regulatory gaps that must be filled in order for Canadians to trust the technology. For example:

  • Mechanisms such as human rights commissions provide for redress in cases of discrimination; however, individuals subject to AI bias may never become aware that it has occurred;
  • Given the wide range of uses of AI systems throughout the economy, many sensitive use cases do not fall under existing sectoral regulators; and
  • There is a need for minimum standards as well as greater coordination and expertise to ensure consistent protections for Canadians across use contexts.

In this context, the Government has developed a framework intended to ensure the proactive identification and mitigation of risks in order to prevent harms and discriminatory outcomes, while recognizing the unique nature of the AI ecosystem and ensuring that research and responsible innovation are supported. As the technology evolves, new capabilities and uses of AI systems will emerge, and Canada needs an approach that can adapt to the shifting landscape. The Government will take an agile approach to AI regulation in the coming years by developing and evaluating regulations and guidelines in close collaboration with stakeholders on a regular cycle and adapting enforcement to the needs of the changing environment. Implementation of the initial set of AIDA regulations is expected to take the following path:

  • Consultation on regulations (6 months)
  • Development of draft regulations (12 months)
  • Consultation on draft regulations (3 months)
  • Coming into force of initial set of regulations (3 months)

This would provide a period of at least two years after Bill C-27 receives Royal Assent before the new law comes into force, meaning that the provisions of AIDA would come into force no sooner than 2025.

How the Artificial Intelligence and Data Act would work

The AIDA is intended to protect Canadians, ensure the development of responsible AI in Canada, and to prominently position Canadian firms and values in global AI development. The risk-based approach in AIDA, including key definitions and concepts, was designed to reflect and align with evolving international norms in the AI space – including the EU AI Act, the Organisation for Economic Co-operation and Development (OECD) AI Principles,Footnote 20 and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) – while integrating seamlessly with existing Canadian legal frameworks. For example, the definition of artificial intelligence systems in AIDA aligns with concepts developed through the OECD that are also represented in the EU AI Act. Interoperability with legal frameworks in other jurisdictions would also be a key consideration in the development of regulations, in order to facilitate Canadian companies' access to international markets.

The AIDA proposes the following approach:

  1. Building on existing Canadian consumer protection and human rights law, AIDA would ensure that high-impact AI systems meet the same expectations with respect to safety and human rights to which Canadians are accustomed. Regulations defining which systems would be considered high-impact, as well as specific requirements, would be developed in consultation with a broad range of stakeholders to ensure that they are effective at protecting the interests of the Canadian public, while avoiding imposing an undue burden on the Canadian AI ecosystem.
  2. The Minister of Innovation, Science, and Industry would be empowered to administer and enforce the Act, to ensure that policy and enforcement move together as the technology evolves. An office headed by a new AI and Data Commissioner would be created as a centre of expertise in support of both regulatory development and administration of the Act. The Commissioner's functions would evolve gradually, from education and assistance at the outset to compliance and enforcement once the Act has come into force and the ecosystem has adjusted.
  3. New criminal law provisions would prohibit reckless and malicious uses of AI that cause serious harm to Canadians and their interests.

The AIDA would ensure accountability for risks associated with high-impact AI systems used in the course of international and interprovincial trade and commerce. It identifies activities involved in the lifecycle of a high-impact AI system and imposes obligations for businesses carrying out those activities in order to ensure accountability at each point where risk may be introduced.

High-impact AI systems: considerations and systems of interest

Under AIDA, the criteria for high-impact systems would be defined in regulation, in order to allow for precision in the identification of systems that need to be regulated through this framework, for interoperability with international frameworks such as the EU AI Act, and for updates to occur as the technology advances. This would enable the Government to avoid imposing undue impacts on the AI ecosystem.

The Government considers the following to be among the key factors to be examined in determining which AI systems would be considered to be high-impact:

  • Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
  • The severity of potential harms;
  • The scale of use;
  • The nature of harms or adverse impacts that have already taken place;
  • The extent to which, for practical or legal reasons, it is not reasonably possible to opt out of the system;
  • Imbalances in the economic or social circumstances, or the age, of impacted persons; and
  • The degree to which the risks are adequately regulated under another law.

Would the AIDA impact access to open source software, or open access AI systems?

An AI system generally requires a model, as well as the use of datasets to train the model to perform certain tasks. It is common for researchers to publish models or other tools as open source software, which can then be used by anyone to develop AI systems based on their own data and objectives. As these models alone do not constitute a complete AI system, the distribution of open source software would not be subject to obligations regarding "making available for use."

However, these obligations would apply to a person making available for use a fully-functioning high-impact AI system, including if it was made available through open access.

The Government is cognizant that the impacts of AI systems depend on their capabilities and the contexts in which they are used. The following are examples of systems that are of interest to the Government in terms of their potential impacts:

Screening systems impacting access to services or employment

These AI systems are intended to make decisions, recommendations, or predictions for purposes relating to access to services, such as credit, or to employment. They carry the potential to produce discriminatory outcomes and economic harm, particularly for women and other historically marginalized groups.

Biometric systems used for identification and inference

Certain AI systems use biometric data to make predictions about people: for example, identifying a person remotely, or making predictions about the characteristics, psychology, or behaviours of individuals. Such systems have the potential to have significant impacts on mental health and autonomy.

Systems that can influence human behaviour at scale

Applications such as AI-powered online content recommendation systems have been shown to have the ability to influence human behaviour, expression, and emotion on a large scale. The potential impacts of these systems include harm to psychological and physical health.

Systems critical to health and safety

Certain AI applications are integrated into health and safety functions, for example making critical decisions or recommendations on the basis of data collected from sensors. These include autonomous driving systems and systems making triage decisions in the health sector. These AI systems have the potential to cause direct physical harm, while biased output may also result if risks have not been adequately mitigated.

Individual harms, collective harms, and biased output

The AIDA addresses two types of adverse impacts associated with high-impact AI systems. First, it addresses a range of harms to individuals. Second, it is the first legal framework in Canada to address the adverse impacts due to systemic bias in AI systems in a commercial context.

Harm includes physical harm, psychological harm, damage to property, or economic loss to an individual. It is intended to encapsulate a broad range of adverse impacts that may result across the sectors of the economy. Harms may be experienced by individuals independently or may be experienced broadly across groups of individuals, increasing the severity of the impact. For example, more vulnerable groups, such as children, may face greater risk of harm from a high-impact AI system and necessitate specific risk mitigation efforts.

Under the AIDA, biased output occurs when there is an unjustified and adverse differential impact based on any of the prohibited grounds for discrimination in the Canadian Human Rights Act.Footnote 21 This includes differentiation that occurs directly or indirectly, such as through variables that act as proxies for prohibited grounds. Adverse differentiation could be considered justified if it is unavoidable in the context of real-world factors affecting a decision or recommendation.Footnote 22 For example, individual income often correlates with prohibited grounds, such as race and gender, but income is also relevant to decisions or recommendations related to credit. The challenge, in this instance, is to ensure that a system does not use proxies for race or gender as indicators of creditworthiness. If the system amplifies the underlying correlation or produces unfair results for specific individuals based on the prohibited grounds, this would not be considered justified.
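To make the proxy problem concrete, the following is a minimal sketch, in Python, of how a developer might screen a credit model's output for adverse differential impact. The data, the metric (a simple disparate impact ratio), and the interpretation are hypothetical illustrations only; AIDA does not prescribe any particular bias test, and the actual measures would be set out in regulation.

```python
# Illustrative only: a minimal check for adverse differential impact in a
# hypothetical credit model's output. The metric used here (a disparate
# impact ratio) is one common illustration, not a legal test under AIDA.

def selection_rate(decisions, group, value):
    """Share of applicants in `group == value` who were approved."""
    members = [d for d, g in zip(decisions, group) if g == value]
    return sum(members) / len(members)

# Hypothetical approval decisions (1 = approved) and a protected attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
group     = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

rate_a = selection_rate(decisions, group, "a")
rate_b = selection_rate(decisions, group, "b")

# A ratio well below 1.0 suggests the system may be differentiating
# adversely on a prohibited ground, directly or via proxies such as income.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: a={rate_a:.2f}, b={rate_b:.2f}, ratio={ratio:.2f}")
```

In practice, a low ratio would not by itself establish unjustified differentiation; it would flag the system for closer assessment of whether the disparity is unavoidable given real-world factors.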

How does the AIDA protect Canadians from collective harms?

Currently, individuals have recourse for discrimination under the Canadian Human Rights Act (CHRA) or provincial human rights legislation.

However, some uses of AI systems may pose risks of causing harm to historically marginalized communities on a large scale if not properly assessed for bias. The AIDA would address this risk by requiring businesses conducting regulated activities to proactively assess and mitigate the risk of bias on grounds prohibited in the CHRA.

Regulatory requirements

AIDA would require that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output prior to a high-impact system being made available for use. This is intended to facilitate compliance by setting clear expectations regarding what is required at each stage of the lifecycle.

The obligations for high-impact AI systems would be guided by the following principles, which are intended to align with international norms on the governance of AI systems:

Human Oversight & Monitoring

Human Oversight means that high-impact AI systems must be designed and developed in such a way as to enable people managing the operations of the system to exercise meaningful oversight. This includes a level of interpretability appropriate to the context.

Monitoring, through measurement and assessment of high-impact AI systems and their output, is critical in supporting effective human oversight.
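As an illustration of what monitoring in support of human oversight might look like in code, the sketch below tracks a single statistic of a system's output and flags drift for human review. This is a minimal sketch under assumed design choices (the statistic, window size, and tolerance are hypothetical); the actual monitoring measures would be defined in regulation and tailored to context.

```python
# A minimal, hypothetical sketch of output monitoring: track the rate of
# positive decisions over a rolling window and escalate to a human reviewer
# when it drifts from a validation-time baseline.
from collections import deque

class OutputMonitor:
    def __init__(self, baseline_rate, tolerance=0.10, window=100):
        self.baseline = baseline_rate       # rate observed during validation
        self.tolerance = tolerance          # allowed drift before escalation
        self.recent = deque(maxlen=window)  # rolling window of recent outputs

    def record(self, decision: int) -> bool:
        """Log one decision; return True if human review is warranted."""
        self.recent.append(decision)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet to assess drift
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance

monitor = OutputMonitor(baseline_rate=0.5, window=10)
for output in [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]:
    if monitor.record(output):
        print("Drift detected: flagging recent outputs for human review")
```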

Transparency

Transparency means providing the public with appropriate information about how high-impact AI systems are being used.

The information provided should be sufficient to allow the public to understand the capabilities, limitations, and potential impacts of the systems.

Fairness and Equity

Fairness and Equity means building high-impact AI systems with an awareness of the potential for discriminatory outcomes.

Appropriate actions must be taken to mitigate discriminatory outcomes for individuals and groups.

Safety

Safety means that high-impact AI systems must be proactively assessed to identify harms that could result from use of the system, including through reasonably foreseeable misuse.

Measures must be taken to mitigate the risk of harm.

Accountability

Accountability means that organizations must put in place the governance mechanisms needed to ensure compliance with all legal obligations relating to high-impact AI systems in the context in which they will be used.

This includes the proactive documentation of policies, processes, and measures implemented.

Validity & Robustness

Validity means a high-impact AI system performs consistently with intended objectives.

Robustness means a high-impact AI system is stable and resilient in a variety of circumstances.
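As a rough illustration of these two properties, the sketch below tests whether a hypothetical scoring function's decision is stable under small input perturbations. The model, inputs, noise level, and number of trials are assumptions made for the example; real validation and robustness testing would follow the standards and regulations developed for high-impact systems.

```python
# Illustrative only: check that small perturbations of an input do not flip
# a hypothetical model's decision (a simple robustness probe).
import random

def model(features):
    """Stand-in scoring function for a hypothetical high-impact system."""
    return 1 if sum(features) > 2.0 else 0

def is_robust(features, trials=100, noise=0.05):
    """Return True if the decision is stable under small random noise."""
    base = model(features)
    for _ in range(trials):
        perturbed = [x + random.uniform(-noise, noise) for x in features]
        if model(perturbed) != base:
            return False
    return True

print(is_robust([1.0, 0.8, 0.9]))   # sum = 2.7, far from boundary: stable
print(is_robust([0.7, 0.7, 0.62]))  # sum = 2.02, near boundary: may flip
```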

Businesses would be expected to institute appropriate accountability mechanisms to ensure compliance with their obligations under the Act. They would be held accountable for the creation and enforcement of appropriate internal governance processes and policies to achieve compliance with the AIDA.

Measures would be set through regulation and would be tailored to the context and risks associated with specific regulated activities in the lifecycle of a high-impact AI system. The regulated activities laid out in the AIDA would then be associated with distinct obligations that are proportionate to the risk. It is important to note that activities such as research or the development of methodologies are not in themselves regulated activities under AIDA. Depending on the specific context and value chain configuration, multiple businesses could be involved in carrying out regulated activities for a single AI system.

The specific measures required by regulation would be developed through extensive consultation and would be based on international standards and best practices in order to avoid undue impacts on innovation. Under AIDA, businesses putting in place such measures would have to monitor compliance with the measures, as well as their effectiveness. The regulations that would follow the Royal Assent of the AIDA would ensure that responsibilities for monitoring would be proportionate to the level of influence that an actor has on the risk associated with the system.

How do I know what my company is responsible for?

Currently, there are no clear accountabilities in Canada for what businesses should do to ensure that high-impact AI systems are safe and non-discriminatory.

Under the AIDA, businesses conducting regulated activities would be accountable for ensuring that employees implement measures to address risks associated with high-impact AI systems.

  • Businesses who design or develop a high-impact AI system would be expected to take measures to identify and address risks with regards to harm and bias, document appropriate use and limitations, and adjust the measures as needed.
  • Businesses who make a high-impact AI system available for use would be expected to consider potential uses once the system is deployed and take measures to ensure users are aware of any restrictions on how the system is meant to be used and understand its limitations.
  • Businesses who manage the operations of an AI system would be expected to use AI systems as indicated, assess and mitigate risk, and ensure ongoing monitoring of the system.

For example, certain AI systems perform generally applicable functions – such as text, audio or video generation – and can be used in a variety of different contexts. As end users of general-purpose systems have limited influence over how such systems function, developers of general-purpose systems would need to ensure that risks related to bias or harmful content are documented and addressed.

Similarly, businesses involved only in the design or development of a high-impact AI system, with no practical ability to monitor it after development, would have different obligations from those managing its operations. Individual employees would not be expected to be responsible for obligations associated with the business as a whole. In addition to obligations associated with risk assessment and mitigation, businesses responsible for regulated activities associated with a high-impact system would also be required to notify the Minister if a system causes or is likely to cause material harm.

The table below illustrates the types of measures that could apply at each stage of the lifecycle of an AI system. The design and development requirements would need to be met before a high-impact system is made available for use.

Regulated activity: System design – includes determining AI system objectives and data needs, methodologies, or models based on those objectives.
Examples of measures to assess and mitigate risk:
  • Performing an initial assessment of potential risks associated with the use of an AI system in the context and deciding whether the use of AI is appropriate
  • Assessing and addressing potential biases introduced by the dataset selection
  • Assessing the level of interpretability needed and making design decisions accordingly

Regulated activity: System development – includes processing datasets, training systems using the datasets, modifying parameters of the system, developing and modifying methodologies or models used in the system, or testing the system.Footnote 23
Examples of measures to assess and mitigate risk:
  • Documenting datasets and models used
  • Performing evaluation and validation, including retraining as needed
  • Building in mechanisms for human oversight and monitoring
  • Documenting appropriate use(s) and limitations

Regulated activity: Making a system available for use – deployment of a fully functional system, whether by the person who developed it, through a commercial transaction, through an application programming interface (API), or by making the working system publicly available.Footnote 24
Examples of measures to assess and mitigate risk:
  • Keeping documentation regarding how the requirements for design and development have been met
  • Providing appropriate documentation to users regarding datasets used, limitations, and appropriate uses
  • Performing a risk assessment regarding the way the system has been made available

Regulated activity: Managing the operations of a system – supervision of the system while in use, including beginning or ceasing its operation, monitoring and controlling access to its output while it is in operation, and altering parameters pertaining to its operation in context.
Examples of measures to assess and mitigate risk:
  • Logging and monitoring the output of the system as appropriate in the context
  • Ensuring adequate monitoring and human oversight
  • Intervening as needed based on operational parameters
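Several of the measures above involve documenting datasets, limitations, and appropriate uses. As a minimal, hypothetical sketch of what such a record might look like in code, the example below defines a simple documentation structure; the fields are illustrative assumptions, not requirements drawn from the Act or its future regulations.

```python
# Hypothetical sketch of a documentation record for a high-impact system.
# Field names are assumptions for illustration; actual documentation
# requirements would be set out in regulation.
from dataclasses import dataclass, field

@dataclass
class SystemDocumentation:
    system_name: str
    intended_uses: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    datasets: list = field(default_factory=list)     # provenance of training data
    known_risks: list = field(default_factory=list)  # harms and biases assessed

doc = SystemDocumentation(
    system_name="resume-screening-v1",
    intended_uses=["shortlisting applicants for human review"],
    limitations=["not validated for roles outside the training data"],
    datasets=["historical applications, 2018-2022, collected with consent"],
    known_risks=["gender proxy bias assessed and mitigated; see audit log"],
)
print(doc)
```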

Oversight and enforcement

In the initial years after it comes into force, the focus of AIDA would be on education, establishing guidelines, and helping businesses to come into compliance through voluntary means. The Government intends to allow ample time for the ecosystem to adjust to the new framework before enforcement actions are undertaken.

The Minister of Innovation, Science, and Industry would be responsible for administration and enforcement of all parts of the Act that do not involve prosecutable offences. In addition, the AIDA would create a new statutory role for an AI and Data Commissioner, who would support the Minister in carrying out these responsibilities. Codifying the role of the Commissioner would separate these functions from other activities within ISED and allow the Commissioner to build a centre of expertise in AI regulation. In addition to administration and enforcement of the Act, the Commissioner's work would include supporting and coordinating with other regulators to ensure consistent regulatory capacity across different contexts, as well as tracking and studying potential systemic effects of AI systems in order to inform administrative and policy decisions.

This model was chosen in careful consideration of a number of factors given the unique AI regulatory context and the objectives of the regulatory scheme.Footnote 25 The governance and regulation of AI is an emerging area that will evolve rapidly in the coming years. As a result, administration and enforcement decisions have important implications for policy, and the two functions would need to work in close collaboration in the early years of the framework under the direction of the Minister.

The Minister would have powers to ensure the safety of Canadians. In cases where a system could result in harm or biased output, or where a contravention may have occurred, the Minister may take actions such as:

  • order the production of records to demonstrate compliance; or
  • order an independent audit.

In cases where there is a risk of imminent harm, the Minister may take actions such as:

  • order cessation of use of a system; or
  • disclose publicly information regarding contraventions of the Act or for the purpose of preventing harm.

How would the three different enforcement mechanisms be used?

AIDA provides for two types of penalties for regulatory non-compliance – administrative monetary penalties (AMPs) and prosecution of regulatory offences – as well as a separate mechanism for true criminal offences.

AMPs are a flexible compliance tool that could be used directly by the regulator in response to any violation in order to encourage compliance with the obligations of the Act. While the Act allows for the creation of an AMPs regime, regulations, informed by consultations, would be required to bring it into force.

Regulatory offences could be prosecuted in more serious cases of non-compliance with regulatory obligations. Due to the seriousness of the process, guilt must be proven beyond a reasonable doubt, and a firm could defend itself by demonstrating that it had shown due care in complying with its obligations. The Minister would not have any influence on whether to prosecute an offence, and the Public Prosecution Service of Canada would need to determine that a prosecution is in the public interest before it could proceed. For example, a firm could be prosecuted for committing a regulatory offence if it refused to comply with a regulatory obligation, even after receiving a Ministerial order under AIDA.

True criminal offences are separate from the regulatory obligations in AIDA and relate only to prohibiting knowing or intentional behaviour where a person causes serious harm. For example, a person could be prosecuted if they made an AI system available that caused serious harm, and they were aware that it was likely to cause such harm and did not take reasonable measures to prevent it.

It is important to keep in mind that the Minister would have no role in determining who should be prosecuted under the Act. The Minister would only have the ability to refer cases to the Public Prosecution Service of Canada, which could choose at their discretion whether or not to proceed. The Minister's regulatory powers could not be used to investigate the criminal offences, discussed in the next section, which are "true crimes" that require criminal intent and are punishable by imprisonment.

The AIDA would also mobilize external expertise in the private sector, academia, and civil society to ensure that enforcement activities are conducted in the context of a rapidly developing environment. This would occur through:

  • The designation of external experts as analysts to support administration and enforcement of the Act;
  • The use of AI audits performed by qualified independent auditors; and
  • The appointment of an advisory committee to provide the Minister with advice.

Case example: a system developed by multiple actors

Consider the case of an AI system with multiple development steps, involving both research and commercial activities.

  • Step 1: A researcher at a university publishes a new model that can be used to develop AI systems – No regulatory requirements or liability under AIDA, as this is not a commercial activity and the model alone is not a complete AI system.
  • Step 2: Firm A uses this model to develop a high-impact system by training it on data under their control, and then places it on the market for use – Firm A would need to comply with the requirements for development and making available for use, which would be laid out in regulation (e.g., testing, ensuring that all measures needed for safe and fair operation are in place, providing documentation to firms purchasing the system). If Firm A did not fulfill these obligations, they could be liable for penalties, including AMPs once the scheme has been brought in through regulation. If Firm A made the system available for use knowing that it was likely to cause serious harm, they could be prosecuted for a criminal offence.
  • Step 3: Firm B puts the system into operation for their own commercial purposes and manages its operations – Firm B would need to comply with the requirements for managing operations (e.g., ensuring that this use is appropriate given the risks and limitations documented by Firm A, monitoring the system, publishing a description of the system). If the system causes harm while in operation, Firm B would be liable only if they did not meet the obligations related to managing operations. If, in operating the system, Firm B showed reckless disregard for the safety of other persons, they could be prosecuted for a criminal offence under the Criminal Code.

In addition, voluntary certifications can play an important role as the ecosystem is evolving. The AI and Data Commissioner would assess the progress of the ecosystem over time and ensure that administration and enforcement activities take into account the capabilities and scale of impact of regulated organizations. For example, smaller firms would not be expected to have governance structures, policies, and procedures comparable to those of larger firms with a greater number of employees and a wider range of activities. Small and medium-sized businesses would also receive particular assistance in adopting the practices needed to meet the requirements.

Once the ecosystem and regulatory framework have sufficiently matured, the AIDA does provide for the creation of an administrative monetary penalty (AMP) scheme through regulation. The penalties would be designed in a proportionate manner to the objective of encouraging compliance, including with respect to the relative size of firms. For example, AMPs could be applied in the case of clear violations where other attempts to encourage compliance had failed.

In addition to possible administrative penalties, non-compliance with the requirements would also constitute a prosecutable offence, consistent with other legal frameworks intended to protect the public from harm. The most serious cases of non-compliance could be prosecuted at the discretion of the Public Prosecution Service of Canada (PPSC). These offences are intended to capture only those who had a responsibility to ensure that requirements were met. For example, if a firm obstructs attempts to verify whether they have complied with their obligations, or provides false or misleading information, they could be subject to prosecution.

Criminal prohibitions

The Criminal Code codifies most of the criminal offences in Canada. These are behaviours that are sufficiently harmful to Canadian society and that exhibit moral blameworthiness on the part of those who commit them. Consequently, they carry strong punishments, including imprisonment, and lead to significant social stigma following conviction. These are distinct from regulatory non-compliance offences, which primarily relate to an unreasonable failure to live up to regulatory obligations. Due to the severity of the consequences of conviction for a criminal offence, prosecution of these offences requires proof beyond a reasonable doubt, not only that a particular act was committed, but that it was intentional.

While many Criminal Code offences can apply to malicious (or even grossly negligent) uses of AI systems, these offences are not targeted to this behaviour, and their application to potential uses of the technology involves some uncertainty and novelty. AIDA creates three new criminal offences to directly prohibit and address specific behaviours of concern. These criminal offences are completely separate from the regulatory obligations and related offences discussed in the previous section. These offences of a criminal nature aim to prohibit and punish AI-related activities that are done by someone who is aware of, or who appreciates, the harm they are causing or at risk of causing. The three offences are the following:

  1. Knowingly possessing or using unlawfully obtained personal information to design, develop, use or make available for use an AI system. This could include knowingly using personal information obtained from a data breach to train an AI system.
  2. Making an AI system available for use, knowing, or being reckless as to whether, it is likely to cause serious harm or substantial damage to property, where its use actually causes such harm or damage.
  3. Making an AI system available for use with intent to defraud the public and to cause substantial economic loss to an individual, where its use actually causes that loss.

These crimes could be investigated by law enforcement and prosecuted at the discretion of the Public Prosecution Service of Canada.

The path ahead

The AIDA is one of the first national regulatory frameworks for AI to be proposed. It is designed to protect individuals and communities from the adverse impacts associated with high-impact AI systems, and to support the responsible development and adoption of AI across the Canadian economy. It aligns with the EU's draft AI Act by taking a risk-based approach and would be supported by industry standards developed over the coming years.

Following Royal Assent of Bill C-27, the Government intends to conduct broad and inclusive consultations with industry, academia, civil society, and Canadian communities to inform the implementation of the AIDA and its regulations. These consultations are expected to cover:

  • The types of systems that should be considered as high-impact;
  • The types of standards and certifications that should be considered in ensuring that AI systems meet the expectations of Canadians;
  • Priorities in the development and enforcement of regulations, including with regard to an AMPs scheme;
  • The work of the AI and Data Commissioner; and
  • The establishment of an advisory committee.

Following this process, the Government would pre-publish draft regulations in Part 1 of the Canada Gazette and conduct another consultation for 60 days. The initial set of regulations would then be published in Part 2 of the Canada Gazette. ISED would continue to assess the effectiveness of the regulations as it administers and enforces the Act. It would also work together with and support other regulators operating in the AI space in order to ensure that Canadians are protected in a consistent and effective manner across regulatory contexts.