Artificial intelligence (AI) tools are not new. For years, they have quietly powered everyday activities, from online banking and drafting emails to navigating the daily commute.
But 2023 saw significant advances, with generative AI moving the needle on issues ranging from productivity to disease research and clean energy. This leap forward has also raised important questions about how these powerful tools are built and deployed.
Academics, industry, civil society, and governments alike are hard at work to ensure that the risks posed by AI are mitigated and that these tools are used responsibly for the benefit of all. These efforts have led to important milestones in AI governance, spanning the G7, the United Nations, and like-minded allies, with both the U.S. and Canadian governments working to regulate this evolving space. One by one, Canada’s closest allies are signalling their path forward. This month, Australia announced its intention to avoid a single regulatory law like the European Union’s AI Act in favour of a risk-based approach that more closely aligns with models in the U.S. and United Kingdom.
This progress is important and welcome. But to truly unlock the full scope of what AI can offer, all while ensuring it is both safe and accessible, regulation must be carefully crafted, with international interoperability top of mind.
International interoperability emerged as the crucial component of any AI regulatory framework at a recent roundtable discussion hosted by the American Chamber of Commerce in Canada (AmCham Canada). Participants, including senior representatives from the U.S. Embassy in Ottawa, the United Kingdom, Japan, Canada, and Australia, strongly felt that for AI to reach its full potential, effective regulatory and legislative frameworks must be developed through collaboration rather than in silos.
Domestic and international governance initiatives are still in their infancy, though we are witnessing distinct approaches to policy-making in various jurisdictions. The divergence in approach to regulating AI has the potential to undermine fledgling international initiatives, and challenge regulatory interoperability, as noted by Montreal’s International Centre of Expertise in Artificial Intelligence (CEIMIA). That is why AmCham Canada welcomes the emergence of an international framework for responsible AI innovation, via efforts like the G7 international code of conduct for responsible AI and the United Nations AI advisory group.
Without such frameworks, the risk of a siloed regulatory environment is high and the impacts significant: impeding innovation for startups and scale-ups, slowing the global development and adoption of powerful and helpful tools, undermining responsible and accessible development efforts, and increasing the likelihood of bad actors exploiting a fragmented environment. The OECD has noted that governments, experts, and other stakeholders are increasingly calling for the development of AI accountability mechanisms and interoperability between burgeoning frameworks. This would help drive efficiency and compliance while reducing costs.
Given the global nature of the digital economy, AI regulatory frameworks and technical standards should operate as seamlessly as possible across nations and regions. Striving for consensus on AI regulation, particularly within the context of trade, will streamline the adoption, use, and interoperability of AI technologies across diverse jurisdictions – something that will, if done right, benefit society everywhere.
But a one-size-fits-all approach to AI regulation can also stifle innovation and adoption, particularly for smaller organizations. Voluntary consensus standards that are internationally acknowledged are the best path to ensuring both adoption and adaptation, particularly for those who are new to incorporating AI into their businesses.
Achieving interoperability, even on a voluntary basis, will be no small task. Fortunately, organizations such as the Partnership on AI, MLCommons, and the International Organization for Standardization (ISO) are building common technical standards that can align practices globally and develop industry-wide frameworks that can “both demonstrate conformity with emerging AI regulation and promote interoperability among different jurisdictions.”
For North American businesses, cross-border co-operation among regulators is also critical to helping governments jointly develop and deploy AI to address global challenges related to public health, humanitarian assistance, sustainability, and disaster response. In keeping with these efforts, important work has been underway across multiple sectors since the launch of the Canada-United States Regulatory Cooperation Council in 2011.
A regulatory framework that enhances innovation and interoperability by reducing trade barriers and simplifying compliance will be critical to helping small organizations scale internationally and compete effectively – which is good for businesses and consumers on both sides of the border.