Meeting summaries: Advisory Council on Artificial Intelligence

The Advisory Council on Artificial Intelligence (AI) provides strategic advice to the Minister of Innovation, Science and Industry and to the Government of Canada to ensure Canada's global leadership in AI policy, governance and adoption while supporting the growth of a robust AI ecosystem, based on Canadian values.

Below are summaries of Advisory Council meetings held since the beginning of November 2021. Summaries will continue to be posted following Council meetings.

September 23, 2022 – Meeting summary

Date: September 23, 2022
Time: 3 pm to 4:30 pm ET
Location: Videoconference


Innovation, Science and Economic Development (ISED) Associate Deputy Minister Francis Bilodeau chaired the September 23 meeting of the Artificial Intelligence (AI) Advisory Council (the Council). The purpose of the meeting was to seek the Council's views regarding the third review of the Treasury Board Directive on Automated Decision-Making.

Third Review of the Treasury Board Directive on Automated Decision-Making

The Chief Data Officer of Canada and Treasury Board of Canada Secretariat (TBS) officials provided an overview of the Government of Canada’s approach to responsible artificial intelligence (AI), followed by a short presentation on the third review of the Treasury Board Directive on Automated Decision-Making. They detailed the 12 policy recommendations and related amendments to the Directive proposed in the review, including an expansion of the scope of the Directive to include internal services.

Open Discussion

The Council was invited to provide feedback on the proposed amendments to the Directive. Overall, members were supportive of the effort and commended TBS for its approach to ensuring the responsible use of AI in the Government of Canada.

On proposed amendments to the explanation requirement, Council members noted the challenge of selecting models that are compatible with user needs and able to provide adequate explanations to clients. They highlighted a potential trade-off between model performance and explainability: while an algorithm may be unacceptable if it cannot be understood how it arrived at a decision, a more explainable model may have lower accuracy. Explanation requirements must account for this trade-off when weighing efficiency against harm reduction.

Council members inquired about the possibility of reverting to human decisions in instances where there is no explanation available from AI models. In response, TBS officials noted that the Directive already requires that humans make the final decision in the case of high-impact automated decision systems. Council members shared their concerns about whether it is practical or realistic to have a human be the final decision-maker, potentially negating the efficiencies to be gained through automation, and whether this is effective in mitigating the risks from AI. Both Council members and TBS officials noted human reviewers would require training and AI literacy to make informed judgments in an automated decision-making process. 

Council members highlighted the importance of ensuring citizens are given the opportunity to challenge automated decisions. They asked who would be responsible for reviewing the system and whether peer reviews would be trustworthy and independent.

Council members also proposed using confidence scores to aid the assessment of system outputs, and clarifying in explanations how changes to inputs could alter decisions.

May 25, 2022 – Meeting summary

Date: May 25, 2022
Time: 3 pm to 4:30 pm ET
Location: Videoconference


Innovation, Science and Economic Development (ISED) Senior Assistant Deputy Minister (SADM) Mark Schaan chaired the May 25 ad hoc meeting of the Artificial Intelligence (AI) Advisory Council (the Council). This meeting was a follow-up to the March 2 meeting, intended to deepen engagement with Council members on key themes that could form the basis of future regulatory work related to AI. The Council was provided an overview of key policy themes related to the Digital Charter, followed by an open discussion on considerations to be taken into account as policy frameworks for data and AI are implemented.

Digital Charter Implementation

SADM Mark Schaan provided an overview of some of the constraints and considerations when developing potential market framework policies, such as the constitutional division of powers and ensuring policy provides a whole-of-economy perspective tailored to the realities of new technologically driven environments. Charles Taillefer, Director of Privacy and Data Protection Policy at the Marketplace Framework Policy Branch at ISED, then outlined three policy themes to guide the discussion that could form the basis of future regulatory frameworks: defining artificial intelligence systems; high-risk AI systems; and determining key elements and approaches for responsible, ethical, and/or human-centric AI programs.

Open Discussion

The first portion of the discussion focused on considerations related to the scope of a regulatory framework for AI. Council members raised the following points:

  • Consideration should be given to a broad scope, given the lack of a clear boundary delineating AI systems from other complex computing systems.
  • The scope should account for potential changes to the technology.
  • It is important to define AI for the purpose of regulating AI systems as something broader than just software. For example, the OECD definition includes machine-based systems, which is broader and more inclusive of the different elements of AI.
  • Work on defining the scope of such a regulatory framework should also take into consideration the perceptions of AI that those definitions will inevitably create.

The second portion focused on identifying and mitigating harms that arise from AI and AI systems. Council members noted the following:

  • Rather than focus on a specific, discrete set of harms that AI may generate, it is helpful to create an agile regulatory infrastructure that can be responsive to the full, complex set of challenges (e.g., by incorporating certification standards, creating a licensing regime for certifiers, etc.).
  • The degree of harm needs to be a key factor in any restrictions on the technology, focusing not necessarily on specific uses of the technology but on the ends toward which it is being used.
  • To address important systemic harms, bias should be included in any regulatory framework for AI.

The third and final part of the discussion sought to understand what elements should be included in a responsible AI program.

  • Council members generally suggested there is no clearly agreed or exhaustive list of elements that should be included in a responsible AI program.
  • Bias assessment systems and algorithmic impact assessments are increasingly common in responsible AI programs.
  • More agile ways of auditing would make it easier for SMEs and start-ups to satisfy standards.
  • Some Council members noted key responsible AI principles like fairness, accountability, human intervention, recourse, and transparency.
  • Transparency could be particularly challenging, as companies keep the code and data underlying AI systems tightly guarded. It may be difficult for governments or auditors to know what they need to ask for in order to assess risks; ultimately, it will be important to reach a point where AI systems’ code is visible to auditors.

March 2, 2022 – Meeting summary

Date: March 2, 2022
Time: 3 pm to 4:30 pm ET
Location: Videoconference


Innovation, Science and Economic Development (ISED) Senior Assistant Deputy Minister (SADM) Mark Schaan chaired the March 2 ad hoc meeting of the Artificial Intelligence (AI) Advisory Council (the Council) to seek the Council’s views regarding the strengthening of protections and support for responsible innovation in the context of implementation of the principles of the Digital Charter. Council was provided an overview of key considerations, followed by an open discussion on recommendations that could help strengthen protections and support responsible innovation.

Digital Charter Implementation

SADM Schaan provided an overview of extensive consultations on responsible innovation that have previously taken place with industry and key stakeholders. SADM Schaan noted some of the key takeaways from those consultations, including that stakeholders felt the Government of Canada had not yet done enough to address risks related to the privacy of minors, as well as those related to the rapidly evolving development and deployment of AI technologies. Jennifer Miller, Director General of the Marketplace Framework Policy Branch at ISED, then provided an overview of the key considerations for potential new approaches to implementing Digital Charter principles, including potential new protections for minors’ personal information, and potential approaches to data and AI governance. ISED officials noted the importance of maintaining a principles-based approach that still provides sufficient deterrence for red-line inappropriate uses of AI technologies.

Council was invited to share their views on Digital Charter implementation. Key discussion points raised included:

  • The burden of consent on individuals to manage their preferences for how their data is analyzed and understood by AI systems, and the tendency of regulators to default to compliance-based regimes;
  • The pace and scope of reform, given the potential breadth and depth of privacy, privacy-adjacent, and technology-focused policy. Council members acknowledged that regulatory approaches must be broad enough to respond to current realities on privacy and transparency without being too dense and overly onerous for Canadian businesses to implement;
  • The ability to audit large datasets and ensure methods remain compliant with evolving standards, with a notable focus on better defining AI “explainability” and the nuances of this term in a regulatory setting. Some Council members suggested that a framing of “justification” may be preferable to “explainability”;
  • The need to articulate a data ownership model that is responsive and appropriate for the needs of communities rather than individuals. Broadly, regulatory approaches to date have considered rights and ownership through an individual consent model, while Indigenous communities often focus on the agency and rights of the community;
  • Ensuring that Canadian companies, particularly small and medium-sized enterprises, are provided the tools and time to respond to any potential new regulations;
  • The importance of underlying infrastructure components as a critical enabler of effective AI regulation, and of the practice of auditing AI models to understand their robustness and explainability;
  • The importance of placing emphasis on transparency, including the need for firms to document key metrics (data, algorithms, training information) for their AI systems.

February 1, 2022 – Meeting summary

Date: February 1, 2022
Time: 3 pm to 4:30 pm ET
Location: Videoconference


Following the Council's renewal in November 2021, Council members were engaged in a discussion on the strategic vision and forward agenda for the Council's renewed mandate. The Council also shared insights and recommendations to inform Canada's positioning on the draft European Union (EU) regulations governing Artificial Intelligence (AI) technologies.  

Renewal and vision for the Council

The Co-Chairs shared their vision for the renewed Council. They suggested that the Council should play an early and active role in informing AI policy and provide expert advice on AI initiatives led by Innovation, Science, and Economic Development Canada (ISED), and other federal government departments. Specific themes mentioned in the discussion included health, climate change, AI use in government, and AI adoption/procurement. Some members suggested that Council should explore regulatory models for AI, including through the creation of a dedicated working group on AI regulatory modernization. ISED officials committed to working with Co-Chairs to develop a work plan that would be presented at a future meeting.  

Consultation on the draft European Union (EU) Regulations governing AI

Officials from the Technical Barriers and Regulations Divisions of Global Affairs Canada (GAC) provided a briefing on the draft EU Regulations governing AI, and requested advice from the Council on specific considerations or feedback that could inform Canada's positioning on the regulations. Key elements of the Council discussion included barriers to Canadian small- and medium-sized enterprises developing AI technologies and solutions; intellectual property; third-party testing of AI systems; and the role of international standards. GAC officials committed to take these considerations into account when providing feedback to the EU on the draft AI regulations.

Public Awareness

The mandate of the Council's Public Awareness Working Group will be extended for an additional year to implement Working Group recommendations, and to develop a path forward for advancing engagement with Indigenous communities.

November 17, 2021 – Meeting summary

Date: November 17, 2021
Time: 3:30 pm to 5 pm ET
Location: Videoconference


The Council received a briefing on Artificial Intelligence (AI) research security from Canadian Security Intelligence Service (CSIS) officials. Following the briefing, the Council discussed the work and recommendations of the Public Awareness Working Group.

Briefing on AI and research security

Officials from CSIS provided an unclassified briefing on AI research security in Canada. Council members noted the importance of protecting Canadian IP, data, and research while maintaining commitments to open science and open data. CSIS officials will look for further opportunities to engage Council and to provide information to Canadian research experts on threats to Canada's AI research security.

Recommendations from the Public Awareness Working Group

The co-chairs of the Public Awareness Working Group shared the initial findings from its pan-Canadian efforts in spring 2021 to engage with people in Canada on AI awareness. The Working Group recommended a number of measures that could increase awareness of AI technologies, including public awareness and literacy campaigns; sustained dialogues and engagements; and the creation of an AI awareness community of practice with a specific focus on engaging diverse, underrepresented communities. The Council renewed the mandate of the Working Group for another year to allow it to implement its recommendations and further engage with Indigenous communities.