The path to pragmatic AI governance: lessons learned
By Shuo Wang, Data Scientist, and Jason Milnes, Director, Artificial Intelligence
The transformative power of AI is undeniable. We're seeing amazing things from generative AI alone, with McKinsey estimating it could add $2.6 trillion to $4.4 trillion annually to the global economy.1 Yet some experts, like AI pioneer Geoffrey Hinton, warn of its existential risks. So, how do we balance innovation with control?
At Bell, we've been working on this balance for a while, and we've learned a lot about how to manage AI responsibly. We've established an AI Centre of Excellence and built Responsible AI programs to transform our entire organization. It’s been quite a journey with many lessons learned along the way. Let’s dive into our top five.
Lesson 1: Secure organization-wide support
Implementing an AI policy requires buy-in from all levels. Executive sponsorship ensures accountability and provides the necessary resources for successful implementation. Clearly articulate the benefits of AI governance to leadership, emphasizing risk mitigation, enhanced reputation and competitive advantage.
Equally important: make sure to engage developers and end-users through workshops and feedback sessions to address their concerns, incorporate their expertise and foster a sense of ownership.
A dedicated cross-functional steering committee with representation from various departments is crucial to ensuring commitments to safe and responsible use of AI are met.
Lesson 2: Leverage existing frameworks and resources
Don't reinvent the wheel. Integrate AI governance within existing compliance functions, such as information security and privacy. This avoids duplication of effort and ensures consistency.
If a dedicated AI function and policy are needed alongside existing compliance functions, clearly define roles and responsibilities, establish reporting lines, and integrate AI considerations into existing decision-making processes.
Furthermore, leverage established Responsible AI principles from respected sources, such as the NIST AI Risk Management Framework, the ISO/IEC 42001 standard on AI management systems, and frameworks published by leading tech companies like Google's AI Principles, to tailor a policy specific to your organization's industry, size and risk appetite.
Lesson 3: Anticipate future regulations
The rapid pace of technological advancements often outstrips current legislation. By proactively adhering to responsible and ethical AI practices, such as fairness, transparency and explainability, you can position yourself for compliance when regulations eventually catch up.
It’s important to also stay informed about emerging regulations and best practices by monitoring legislative developments, participating in industry forums, and consulting with legal experts.
Consider conducting ethical impact assessments for high-risk AI systems to identify and mitigate potential harms. This forward-thinking approach not only minimizes future legal risks but also builds trust with customers and stakeholders.
Lesson 4: Implement and iterate
Start by addressing the most significant risks associated with artificial intelligence, particularly those related to high-impact, public-facing systems. Develop a risk assessment framework to categorize AI systems based on their potential impact and likelihood of harm.
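As an illustrative sketch of such a risk assessment framework, an AI system can be tiered by the product of its potential impact and its likelihood of harm. The scales, weights and thresholds below are hypothetical examples, not Bell's actual framework:

```python
# Hypothetical risk-tiering sketch: classify an AI system by the
# product of its potential impact and likelihood of harm.
# All scales and thresholds here are illustrative assumptions.

IMPACT = {"low": 1, "medium": 2, "high": 3}       # e.g. internal tool vs. public-facing system
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

def risk_tier(impact: str, likelihood: str) -> str:
    """Return a governance tier for an AI system."""
    score = IMPACT[impact] * LIKELIHOOD[likelihood]
    if score >= 6:
        return "high"    # e.g. ethical impact assessment and external audit
    if score >= 3:
        return "medium"  # e.g. steering-committee review
    return "low"         # e.g. standard compliance checklist

# A public-facing chatbot with plausible failure modes lands in the top tier:
print(risk_tier("high", "likely"))  # -> high
```

A matrix like this gives the steering committee a shared, repeatable way to decide which systems get the deepest scrutiny first.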
Centralizing AI implementation through an AI Centre of Excellence can improve efficiency, promote knowledge sharing and ensure consistent application of governance principles. Regularly report progress to the steering committee and executive sponsors, tracking clear key performance indicators.
Strive for diligent implementation of your AI governance framework, recognizing that it's an iterative process requiring ongoing improvement. Consider engaging an external, neutral third-party auditor to evaluate the effectiveness of your framework and identify areas for enhancement.
Lesson 5: Communicate effectively
Equip all employees with in-depth knowledge about AI capabilities, potential risks, and ethical considerations. Tailor training programs to specific roles and responsibilities, offering specialized technical training for AI developers and governance professionals.
Make sure to also establish clear communication channels to keep everyone informed about policy updates, regulatory developments, and best practices. Foster a culture of open communication and encourage employees to voice concerns and report potential issues related to AI ethics and governance.
Finally, make it a priority to regularly communicate the organization's commitment to responsible AI to external stakeholders, including customers, partners, and the public.
Overall, building a robust AI governance framework is an ongoing process, not a one-time achievement. By adopting a practical approach, incorporating industry best practices, and cultivating a culture of continuous improvement, organizations can effectively navigate the dynamic AI landscape while harnessing its transformative potential responsibly.
The essence of successful AI governance lies in finding the right balance between innovation and control, ensuring that AI technology serves humanity's best interests.
Advance your AI journey with Bell
At Bell, we recognize the crucial role of well-structured AI governance. Our Data Engineering and AI Centre of Excellence has been instrumental in driving change, and our governance model ensures that innovation and modernization are seamlessly integrated with organizational support and disciplined execution. We believe you can achieve similar success.
Get in touch with one of our experts to learn more.
Source:
1. “The economic potential of generative AI: the next productivity frontier”, McKinsey, June 14, 2023