The Advisory Council on Artificial Intelligence (AI) provides strategic advice to the Minister of Innovation, Science and Industry and to the Government of Canada to ensure Canada's global leadership in AI policy, governance and adoption while supporting the growth of a robust AI ecosystem, based on Canadian values.
Below are summaries of Advisory Council meetings held since the beginning of November 2021. Summaries will continue to be posted following Council meetings.
September 22, 2023 – Meeting summary
Date: September 22, 2023
Introduction and Summary of Feedback
- This meeting of the Advisory Council on Artificial Intelligence (AI) was chaired by Mark Schaan, Senior Assistant Deputy Minister (SADM) at Innovation, Science and Economic Development (ISED).
- SADM Schaan emphasized steps being taken by the international community to address risks associated with generative AI systems.
- He noted that the Government of Canada had launched a consultation on a code of conduct for generative AI, in order to provide a set of guardrails ahead of the passage of the Artificial Intelligence and Data Act (AIDA) and to inform Canada's contributions to international efforts.
- He outlined the code's alignment with Canada's responsible AI principles and application of these principles to the generative AI context.
- He set out the consultation process, including written feedback, roundtables, and public submissions, and asked members of the Advisory Council:
- How well they felt their feedback on the code of conduct had been incorporated;
- How well they felt the ecosystem was progressing in terms of taking the right measures to build trust in this highly charged environment; and
- What advice they could offer on how ISED could support the adoption of this voluntary code of conduct.
- Advisory Council members provided feedback and discussed aspects of the code.
- Members emphasized the need for security measures even in non-public generative AI systems, stressing the importance of democratic governance to align with societal values. They underscored the importance of oversight in both public and non-public systems, raised considerations related to excluding business-to-business (B2B) applications from stringent requirements, and noted the need to consider firms outside the tech sector.
- Members also noted the importance of considering not only primary but also secondary and tertiary uses of AI systems. They stressed the importance of focusing on potential harm rather than distinguishing between generative and non-generative systems.
- Members raised questions about distinguishing between developers and managers as well as the feasibility of watermarking, and sought further detail about the code's legal obligations.
- Members proposed rephrasing aspects of the code, including:
- Broadening the code's principles to include emerging identification, registration, and incident reporting obligations;
- Ensuring fairness and equity are not framed as a data problem, given that model design is also a potential issue;
- Reconsidering the use of the terms "adversarial," given the shifting nature of global relations, and "firms," with members suggesting that the code should apply to organizations as well;
- Clarifying the application of requirements related to release, given that the release of a system is often not a single event; and
- Revisiting the prescriptive nature of the adversarial testing described in the code.
- Members asked whether there are incentives for companies to comply with the code, whether there are metrics around impact and target groups that define success for the code, and how they could support the government's communication efforts regarding the code.
- SADM Schaan closed the meeting, expressing gratitude to Advisory Council members for their valuable contributions. He emphasized the importance of these contributions in the broader context of the international dialogue on generative AI.
June 29, 2023 – Meeting summary
Date: June 29, 2023
- The June 29th meeting of the Artificial Intelligence (AI) Advisory Council was chaired by Mark Schaan, Senior Assistant Deputy Minister (SADM) at Innovation, Science and Economic Development (ISED).
- The meeting aimed to gather feedback on the Government of Canada's guidance for the use of generative AI in the public sector and discuss the next steps for the Public Awareness Working Group.
- The Advisory Council received an overview of the proposed guidance from the Treasury Board of Canada Secretariat (TBS) for federal institutions on generative AI use. A co-chair of the Working Group subsequently provided an update.
TBS Guidance on Generative AI
- TBS officials introduced a draft of proposed guidance to the Advisory Council. They stressed the guidance's significance as the initial step in establishing governance for the use of generative AI within the Government of Canada.
- The guidance acknowledged both the benefits and legal and ethical risks associated with generative AI. These risks encompassed data protection, public servant autonomy, content quality, legal issues, and environmental impacts.
- TBS officials emphasized the importance of adhering to FASTER principles (Fair, Accountable, Secure, Transparent, Educated, and Relevant) and ensuring compliance with the Directive on Automated Decision-Making and other relevant federal policies and legislation.
- SADM Mark Schaan led a roundtable discussion where Advisory Council members expressed overall support for developing AI use guidelines for public servants. Questions were raised about the scope of the proposed guidance, specifically whether it would apply to consumer-facing applications of generative AI beyond internal government use.
- Questions were raised about the potential misuse of large language models (LLMs), especially their capacity to manipulate public opinion or target individuals in influential roles.
- With the rapid progression of LLM technology, members noted the necessity to keep guidance adaptable. They particularly emphasized the evolving interaction methods with LLMs and the importance of potential third-party validation for AI models in future iterations of the guidelines.
- Recognizing the importance of effective implementation, SADM Schaan highlighted ISED and TBS's partnership with the Canada School of Public Service. This collaboration aims to ensure public servants are aptly educated to align AI use with the guidance.
- The balance between regulations and economic opportunities for Canadian tech firms was noted. Advisory Council members stressed the significance of ensuring Canada's AI regulations are in sync with those of its trading partners, ensuring economic growth without compromising safe AI practices.
- The interplay between cybersecurity and AI was discussed, with members highlighting the need for a comprehensive system that identifies and addresses AI model vulnerabilities promptly.
- SADM Schaan acknowledged these insights, underscoring ISED's ongoing efforts with the private sector, especially concerning privacy certification for smaller enterprises. He also emphasized the balance between economic opportunities and responsible adoption, which is the reason Canada has opted for a risk- and principles-based approach, and will continue to iterate the guidance as the technology matures.
Public Awareness Working Group Update
- An update on the Working Group's May 11th meeting highlighted three proposed government action priorities based on the Working Group report from January 2023. They include public awareness campaigns, funding support for AI initiatives (especially for rural areas), and deep citizen engagement on key policies.
- Advisory Council members expressed support for these priorities, and a recommendation letter to the Minister is to be prepared by Public Awareness Working Group co-chairs.
- SADM Schaan closed the meeting, thanking members for their contributions and encouraging their continued engagement with ISED and TBS.
April 14, 2023 – Meeting summary
Date: April 14, 2023
Innovation, Science and Economic Development (ISED) Deputy Minister (DM) Simon Kennedy chaired the April 14th meeting of the Artificial Intelligence (AI) Advisory Council (the Council). Following introductory remarks, the Council was addressed by the Honourable François-Philippe Champagne, who emphasized the desire for the Government of Canada to respond effectively to concerns raised by AI practitioners about the risks of giant AI experiments to society and to democracy. This was followed by a roundtable discussion with the Council on how the Government of Canada can mitigate harms and ensure the responsible development of AI through policy or other measures.
Members noted that it is essential for regulatory processes to be accelerated, given that AI systems have now surpassed a threshold at which their outputs are increasingly difficult to distinguish from those of humans. They noted the potential for such technologies to be manipulated in ways that can destabilize democracies. They emphasized the need to take swift action in addition to Bill C-27, and suggested watermarking as one measure.
Concern was expressed that the recently published Open Letter on AI frames AI technology itself as being inherently bad. Members suggested launching an immediate public awareness campaign to limit propaganda and prevent attacks on democracy, and implementing specific and targeted taxes on the use of systems that replace human beings. They noted concerns they had heard about the potential effects of generative AI within education, and stated that the action taken must go beyond ad campaigns due to the national security risks involved. They explained that if a campaign were to be launched to address the pros and cons of such systems, it must analyse how AI use might have varying impacts across different sectors.
Council members highlighted the opportunity the Artificial Intelligence and Data Act (AIDA) and a renewed Pan-Canadian AI Strategy (PCAIS) present for Canada to show leadership and build trust in AI, and the importance of immediate international coordination on principles and regulations that do not stifle innovation or stigmatize AI. Members also emphasized the importance of developing regulations within Canada, while also noting the work being done in jurisdictions such as the UK, US, and EU to move towards AI governance. Members discussed the importance of advocating publicly for the swift passage of AIDA in Parliament.
The Council discussed the concerns being raised by technology start-ups about the potential implications of AIDA in practice. Members noted that there is widespread concern regarding the freedom to explore novel technologies. They also explained that there is a significant overhead burden required to satisfy the requirements for even small experiments, and that the inability to do so may restrict companies' ability to attract talent.
Members emphasized the importance of precise messaging and noted that in order to maintain Canadian leadership in AI, Canada must invest in AI at the same scale as other countries.
Finally, members cautioned that the passage of AIDA alone would not bring about regulation, characterizing the Act as a placeholder for the regulations to follow. Noting the complaint made to the Privacy Commissioner about ChatGPT, members stressed the role existing agencies can play in responding to AI and the need to investigate the interaction of AI with other legislation.
Minister Champagne concluded the meeting with summary remarks, noting that Bill C-27 would provide an important legislative framework to allow swift efforts on the regulation of AI, and thanked the Council for their comments.
February 14, 2023 – Meeting summary
Date: February 14, 2023
Innovation, Science and Economic Development (ISED) Deputy Minister (DM) Simon Kennedy chaired the February 14 meeting of the Artificial Intelligence (AI) Advisory Council (the Council). At the meeting, ISED officials provided an update on the Digital Charter Implementation Act, 2022, and specifically the AI and Data Act (AIDA). Council members discussed feedback collected through ISED's stakeholder engagement on AIDA, and next steps.
Update and Path Forward: Artificial Intelligence and Data Act (AIDA)
ISED officials provided an overview of the feedback received from stakeholders on AIDA. The presenters then outlined areas to be covered in a forthcoming companion document on AIDA, including intended outcomes for AIDA; how high-impact systems will be defined; and the proposed role of the AI and Data Commissioner as both a centre of expertise and a coordination body for regulatory enforcement. They noted that the path forward for AIDA is intended to be highly consultative, with an estimated two-year timeline for consultations on the first round of regulations following Royal Assent.
Council members raised questions on the proposed enforcement approach and application of AIDA. In response to a question regarding the incidence of offences, ISED clarified that liability would lie with the person responsible for the non-compliance, whether a natural or legal person, and that individuals playing a minor role with regard to a system, such as a co-op student assisting on a project, would not be liable.
Members asked about the applicability of AIDA to non-Canadian companies that deploy AI systems in Canada or to Canadian audiences. In response, ISED confirmed the extra-territorial application of AIDA would be similar to that of the Personal Information Protection and Electronic Documents Act (PIPEDA). If an entity collects the personal information of a large number of Canadians, it is generally subject to PIPEDA. Similarly, a company operating an AI system in Canada would be subject to AIDA.
Council members asked whether AIDA will only address individual harms or will expand to cover collective and environmental harms. ISED said that these questions have been raised previously by stakeholders, and ISED is considering them carefully, while also keeping in mind the need for clear obligations that do not overlap with existing frameworks.
The Council then discussed the proposed approach to identify and monitor high-impact systems and how to appropriately assign accountability for impacts. Council members expressed concern that the legislation has focused too much on the development phase of the AI lifecycle, rather than on misuse or unintended use of systems. ISED noted that there are provisions within AIDA concerning the deployment of high-impact systems.
ISED noted that a key area for future regulation will be the delineation between the responsibility of the developer and the deployer for a system's impact. The developer would have to document the assessment and mitigation measures put in place in accordance with the system's intended use. If the system is then misused while in operation, the person managing its operations would bear liability. ISED explained that this is similar to existing consumer product regulation, and noted that the criminal offences would apply only in cases of malicious intent or negligence.
The Council then sought to clarify the alignment between AIDA and the Consumer Privacy Protection Act (CPPA), and data governance issues raised by AIDA. Council members noted that AIDA has separate obligations about anonymized data used in AI. They asked about the breadth of application for these data-related obligations, and whether there may be inconsistency between the language in AIDA and in the CPPA. ISED indicated that they are sensitive to concerns about the scope of obligations under AIDA and that anonymization is intended to have a consistent meaning in CPPA and AIDA.
ISED was asked why these data provisions were not included in the CPPA where there would be some oversight by the Privacy Commissioner. ISED clarified that the intent behind including controls on anonymized data in AIDA is to provide assurance to Canadians that the data will be managed appropriately in this sensitive use case. The duty to ensure that the data a system uses complies with AIDA lies with the system developer. Members asked why a requirement for explainability of algorithmic decisions is still present in CPPA but not in AIDA. Members also noted that AIDA may infringe provincial sovereignty, particularly where systems use health data. ISED explained that these issues will be further considered as the Bill moves to the committee stage.
Finally, concern was expressed that a two-year delay from Royal Assent to the first enforcement of regulation is too long and could reduce investment in the Canadian AI ecosystem. ISED explained that opportunities to identify complementary tools such as standards development, toolkits, or code efforts could be pursued in parallel to the development of regulation. ISED noted there is a risk that this delay could contract the market, but that the uncertainty of informally regulated AI is also a risk for firms.
September 23, 2022 – Meeting summary
Date: September 23, 2022
Time: 3 pm to 4:30 pm ET
Innovation, Science and Economic Development (ISED) Associate Deputy Minister (ADM) Francis Bilodeau chaired the September 23 meeting of the Artificial Intelligence (AI) Advisory Council (the Council). The purpose of the meeting was to seek the Council's views regarding the third review of the Treasury Board Directive on Automated Decision-Making.
Third Review of the Treasury Board Directive on Automated Decision-Making
The Chief Data Officer of Canada and Treasury Board of Canada Secretariat (TBS) officials provided an overview of the Government of Canada's approach to responsible artificial intelligence (AI), followed by a short presentation on the third review of the Treasury Board Directive on Automated Decision-Making. They detailed the 12 policy recommendations and related amendments to the Directive proposed in the review, including an expansion of the scope of the Directive to include internal services.
The Council was invited to provide feedback on the proposed amendments to the Directive. Overall, members were supportive of the effort and commended TBS for their approach to ensure the responsible use of AI in the Government of Canada.
On proposed amendments to the explanation requirement, Council members noted the challenge of selecting models that are compatible with user needs and able to provide adequate explanations to clients. They highlighted a potential trade-off between model performance and explainability: while an algorithm may be unacceptable if it is impossible to understand how it arrived at a decision, a more explainable model may have lower accuracy. Explanation requirements must consider this trade-off when weighing efficiency and harm reduction.
Council members inquired about the possibility of reverting to human decisions in instances where there is no explanation available from AI models. In response, TBS officials noted that the Directive already requires that humans make the final decision in the case of high-impact automated decision systems. Council members shared their concerns about whether it is practical or realistic to have a human be the final decision-maker, potentially negating the efficiencies to be gained through automation, and whether this is effective in mitigating the risks from AI. Both Council members and TBS officials noted human reviewers would require training and AI literacy to make informed judgments in an automated decision-making process.
Council members highlighted the importance of ensuring citizens are given the opportunity to challenge automated decisions. They asked who would be responsible for reviewing the system and whether peer reviews would be trustworthy and independent.
Council members also proposed using confidence scores to aid the assessment of system outputs, and clarifying in explanations how changes to inputs could alter decisions.
May 25, 2022 – Meeting summary
Date: May 25, 2022
Time: 3 pm to 4:30 pm ET
Innovation, Science and Economic Development (ISED) Senior Assistant Deputy Minister (SADM) Mark Schaan chaired the May 25 ad hoc meeting of the Artificial Intelligence (AI) Advisory Council (the Council). This meeting was a follow-up to the March 2 meeting, intended to deepen engagement with Council members on key themes that could form the basis of future regulatory work related to AI. The Council was provided an overview of key policy themes related to the Digital Charter, followed by an open discussion on considerations to be taken into account as policy frameworks for data and AI are implemented.
Digital Charter Implementation
SADM Mark Schaan provided an overview of some of the constraints and considerations when developing potential market framework policies, such as the constitutional division of powers and ensuring policy provides a whole-of-economy perspective tailored to the realities of new technologically driven environments. Charles Taillefer, Director of Privacy and Data Protection Policy at the Marketplace Framework Policy Branch at ISED, then outlined three policy themes to guide the discussion that could form the basis of future regulatory frameworks: defining artificial intelligence systems; high-risk AI systems; and determining key elements and approaches for responsible, ethical, and/or human-centric AI programs.
The first portion of the discussion focused on considerations related to the scope of a regulatory framework for AI. Council members raised the following points:
- Consideration should be given to a broad scope, given the lack of a clear boundary delineating AI systems from other complex computing systems.
- The scope should account for potential changes to the technology.
- It is important to define AI for the purpose of regulating AI systems as something broader than just software. For example, the OECD definition refers to machine-based systems, which is broader and more inclusive of the different elements of AI.
- Work on defining the scope of such a regulatory framework should also take into consideration the perceptions of AI that those definitions will inevitably create.
The second portion focused on identifying and mitigating harms that arise from AI and AI systems. Council members noted the following:
- Rather than focus on a specific, discrete set of harms that AI may generate, it is helpful to instead create an agile regulatory infrastructure that can be responsive to all of the complex sets of challenges we face (e.g. by incorporating certification standards, creating a licensing regime for certifiers, etc.).
- The degree of harm needs to be a key factor in restrictions on the technology, focused not necessarily on specific uses of the technology but on the ends toward which it is being used.
- To address important systemic harms, bias should be included in any regulatory framework for AI.
The third and final part of the discussion sought to understand what elements should be included in a responsible AI program.
- Council members generally suggested there is no clearly agreed or exhaustive list of elements that should be included in a responsible AI program.
- Bias assessment systems and algorithmic impact assessments are increasingly common in responsible AI programs.
- More agile ways of auditing would make it easier for SMEs and start-ups to satisfy standards.
- Some Council members noted key responsible AI principles like fairness, accountability, human intervention, recourse, and transparency.
- Transparency could be particularly challenging, as companies keep the code and data underlying AI systems tightly guarded. It may be difficult for governments or auditors to know what to ask for in order to assess risks; instead, it will be important to reach a point where AI systems' code is visible to auditors.
March 2, 2022 – Meeting summary
Date: March 2, 2022
Time: 3 pm to 4:30 pm ET
Innovation, Science and Economic Development (ISED) Senior Assistant Deputy Minister (SADM) Mark Schaan chaired the March 2 ad hoc meeting of the Artificial Intelligence (AI) Advisory Council (the Council) to seek the Council's views regarding the strengthening of protections and support for responsible innovation in the context of implementation of the principles of the Digital Charter. Council was provided an overview of key considerations, followed by an open discussion on recommendations that could help strengthen protections and support responsible innovation.
Digital Charter Implementation
SADM Schaan provided an overview of extensive consultations on responsible innovation that have previously taken place with industry and key stakeholders. SADM Schaan noted some of the key takeaways from those consultations, including that stakeholders felt the Government of Canada had not yet done enough to address risks related to the privacy of minors, as well as those related to the rapidly evolving development and deployment of AI technologies. Jennifer Miller, Director General of the Marketplace Framework Policy Branch at ISED, then provided an overview of the key considerations for potential new approaches to implementing Digital Charter principles, including potential new protections for minors' personal information, and potential approaches to data and AI governance. ISED officials noted the importance of maintaining a principles-based approach that still provides sufficient deterrence for red-line inappropriate uses of AI technologies.
Council was invited to share their views on Digital Charter implementation. Key discussion points raised included:
- The burden of consent on individuals to manage their preferences for how their data is analyzed and understood by AI systems, and the default of regulators to consider compliance-based regimes;
- The pace and scope of reform, given the potential breadth and depth of privacy, privacy-adjacent and technology-focused policy. Council members acknowledged that regulatory approaches must be broad enough to be responsive to current realities on privacy and transparency, without being too dense and overly onerous for Canadian businesses in their implementation;
- The ability to audit large datasets and ensure methods will be compliant with evolving standards, with notable focus on better defining AI "explainability" and the nuances of this term in a regulatory setting. Some Council members suggested a framing of "justification" may be preferable to "explainability".
- The need to articulate a data ownership model that is responsive and appropriate for the needs of communities rather than individuals. Broadly, regulatory approaches to date have considered rights and ownership based on an individual consent model, while Indigenous communities often focus on the agency and rights of the community.
- Ensuring that Canadian companies are provided the tools and time to respond to any potential new regulations, particularly Canadian small and medium-sized enterprises.
- The importance of underlying infrastructure components as a critical enabler of effective AI regulation, including the practice of auditing AI models to assess their robustness and explainability.
- The importance of placing emphasis on transparency, and the need for firms to document key metrics (data, algorithms, training information) for their AI systems.
February 1, 2022 – Meeting summary
Date: February 1, 2022
Time: 3 pm to 4:30 pm ET
Following the Council's renewal in November 2021, Council members were engaged in a discussion on the strategic vision and forward agenda for the Council's renewed mandate. The Council also shared insights and recommendations to inform Canada's positioning on the draft European Union (EU) regulations governing Artificial Intelligence (AI) technologies.
Renewal and vision for the Council
The Co-Chairs shared their vision for the renewed Council. They suggested that the Council should play an early and active role in informing AI policy and provide expert advice on AI initiatives led by Innovation, Science and Economic Development Canada (ISED), and other federal government departments. Specific themes mentioned in the discussion included health, climate change, AI use in government, and AI adoption/procurement. Some members suggested that Council should explore regulatory models for AI, including through the creation of a dedicated working group on AI regulatory modernization. ISED officials committed to working with Co-Chairs to develop a work plan that would be presented at a future meeting.
Consultation on the draft European Union (EU) Regulations governing AI
Officials from the Technical Barriers and Regulations Divisions of Global Affairs Canada (GAC) provided a briefing on the draft EU Regulations governing AI, and requested advice from the Council on specific considerations or feedback that could inform Canada's positioning on the regulations. Key elements of the Council discussion included barriers to Canadian small- and medium-sized enterprises developing AI technologies and solutions; intellectual property; third-party testing of AI systems; and the role of international standards. GAC officials committed to take these considerations into account when providing feedback to the EU on the draft AI regulations.
The mandate of the Council's Public Awareness Working Group will be extended for an additional year to implement Working Group recommendations, and to develop a path forward for advancing engagement with Indigenous communities.
November 17, 2021 – Meeting summary
Date: November 17, 2021
Time: 3:30 pm to 5 pm ET
The Council received a briefing on Artificial Intelligence (AI) research security from Canadian Security Intelligence Service (CSIS) officials. Following the briefing, the Council discussed the work and recommendations of the Public Awareness Working Group.
Briefing on AI and research security
Officials from CSIS provided an unclassified briefing on AI research security in Canada. Council members noted the importance of protecting Canadian IP, data, and research while maintaining commitments to open science and open data. CSIS officials will look for further opportunities to engage Council and to provide information to Canadian research experts on threats to Canada's AI research security.
Recommendations from the Public Awareness Working Group
The co-chairs of the Public Awareness Working Group shared the initial findings from its pan-Canadian efforts in spring 2021 to engage with people in Canada on AI awareness. The Working Group recommended a number of measures that could increase awareness of AI technologies, including public awareness and literacy campaigns; sustained dialogues and engagements; and the creation of an AI awareness community of practice with a specific focus on engaging diverse, underrepresented communities. The Council renewed the mandate of the Working Group for another year to allow it to implement its recommendations and further engage with Indigenous communities.