The importance of adopting a Responsible AI Program
By Alan Khalil, Director, AI Products, Bell Business Markets
Over the past decade, my colleagues and I have been developing Data and AI solutions at Bell to drive new and innovative business opportunities. It’s been an amazing journey, one governed by disciplined internal policies to ensure that every solution meets our customers’ expectations for protecting their privacy and data, and complies with the various federal regulations that govern information and security. This means complying with the Personal Information Protection and Electronic Documents Act (PIPEDA), preparing for C-27 (Digital Charter Implementation Act, 2022) and adhering to broader regulations like the Consumer Product Safety Act, the Human Rights Act and even the Criminal Code.
However, in 2022, OpenAI1 unleashed a game-changer: the free preview of ChatGPT. Suddenly, AI wasn’t just a topic for techies; the entire world was able to experience, firsthand, the power (and magic) of AI. The excitement was felt globally.
With this excitement came caution. What would happen to data privacy, jobs and our society in general? It became obvious that we were entering a new industrial era and consumers, practitioners and researchers were looking for regulations to ensure our safety and well-being. Governments were called on to act quickly – faster than ever before – to prepare for this exciting new AI era. The AI race was on!
The Canadian Artificial Intelligence and Data Act (AIDA):
The Government of Canada took its first stab at regulating AI in June 2022 with the release of the Artificial Intelligence and Data Act (AIDA)2 as part of Bill C-27.3 However, AIDA was missing key definitions. For example, what is a “high-impact AI system”? In response, the Canadian government released a companion document in March 2023 with clarification on AIDA’s scope and definitions.
There are a number of resources out there that do a great job of summarizing AIDA (and C-27 overall). Here is a brief overview:
AIDA will regulate High Impact AI Systems, including:
- Screening systems impacting access to services or employment
These AI systems are intended to make decisions, recommendations or predictions for purposes relating to access to services, such as credit or employment.
- Biometric systems used for identification and inference
Certain AI systems use biometric data to make predictions about people. For example, identifying a person remotely, or making predictions about the characteristics, psychology, or behaviours of individuals.
- Systems that can influence human behaviour at scale
Applications such as AI-powered online content recommendation systems have been shown to influence human behaviour, expression, and emotion on a large scale.
- Systems critical to health and safety
Certain AI applications are integrated in health and safety functions. For example, making critical decisions or recommendations based on data collected from sensors. These include autonomous driving systems and systems making triage decisions in the health sector.
Anyone building, managing, or making these systems available for use must meet certain obligations, including:
- Human Oversight & Monitoring:
Systems must be designed and developed in such a way as to enable people managing the operations of the system to exercise meaningful oversight. This includes a level of interpretability appropriate to the context.
- Transparency:
Providing the public with appropriate information about how high-impact AI systems are being used.
- Fairness and Equity:
Building high-impact AI systems with an awareness of the potential for discriminatory outcomes. Appropriate actions must be taken to mitigate discriminatory outcomes for individuals and groups.
- Safety:
Systems must be proactively assessed to identify harms that could result from use of the system, including through reasonably foreseeable misuse. Measures must be taken to mitigate the risk of harm.
- Accountability:
Organizations must put in place governance mechanisms needed to ensure compliance with all legal obligations of high-impact AI systems in the context in which they will be used.
- Validity & Robustness:
Validity means a high-impact AI system performs consistently with intended objectives.
Robustness means a high-impact AI system is stable and resilient in a variety of circumstances.
AIDA is expected to be introduced in July 2025 at the earliest, allowing time for businesses to adapt. But what happens until then? Moreover, how do we ensure that non-high-impact AI systems are developed responsibly – an area the act does not currently address?
Bell’s Responsible AI Policy
At Bell, we are acting now by prioritizing the responsible development and use of AI technologies in alignment with our business ethics, social obligations and privacy and security objectives. In support of these commitments, we’ve published a summary of the guiding principles we follow when designing and building AI systems:
- Responsible and safe deployment
- User empowerment and accountability
- Research and innovation leadership
- Robust governance and transparency
- Proactive approach to risk management
To implement these principles, we’ve created processes across the entire delivery and management lifecycle of our AI systems. This includes executive-level governance, automated data compliance processes, and automated monitoring of high-risk metrics affected by model outputs.
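To give a flavour of what automated monitoring of model outputs can look like, here is a minimal, purely illustrative sketch of one common check – a demographic parity gap on approval decisions. The function name, threshold, and data are hypothetical examples, not Bell’s actual tooling or metrics.

```python
# Illustrative only: one fairness metric an automated monitoring job
# might compute over recent model decisions. All names are hypothetical.

def demographic_parity_gap(outcomes):
    """outcomes: iterable of (group_label, approved_bool) pairs.
    Returns the largest difference in approval rate between groups."""
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Flag the model for human review if approval rates diverge too much.
MAX_GAP = 0.10  # hypothetical tolerance
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(decisions)
needs_review = gap > MAX_GAP
```

In practice a check like this would run on a schedule against production decision logs, with the choice of metric and tolerance set by the governance process rather than hard-coded.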
Additionally, we’ve paid careful attention to the fast-paced development of novel algorithms and frontier models by designing a flexible program that can react to, and incorporate, potential future changes with agility.
Innovating responsibly
I'm incredibly excited about the potential of AI – how it can transform the way we work, interact, and do business. It's already contributing significantly to our own Network Operations, Contact Centre and Digital Marketing businesses, as well as driving employee productivity. And with AI and data solutions like speech analytics and Google Cloud Contact Centre AI (CCAI) from Bell, we’re eager to extend the value of AI to our customers too.
That being said, I also recognize the risks that come with it, and the responsibility we all have to do everything we can to protect individuals, our environment and society. It requires diligence, focus and a deep commitment to establishing clear internal principles, alongside the adoption of AIDA once it’s in place.
I’m confident that by adopting responsible AI policies within Bell, we can continue to build for the future while honouring the longstanding trust that our customers have granted us for well over a century.
About the author
Alan leads the AI & Data Engineering Practice at Bell Canada, where he is responsible for driving the execution of its strategic Data and AI roadmap. Alan is passionate about AI and its immense business potential. He has led numerous AI initiatives across Consumer and Business markets at Bell and is committed to working with Canadian businesses and governments as they pursue their digital transformation journeys.
Alan will be a regular blog contributor, sharing his unique perspectives and deep AI insights.
Sources: