In a recent article in Business Plus Magazine, David O’Sullivan warns that failing to prepare adequately for AI adoption can leave organisations falling prey to these risks, potentially harming stakeholders.
The AI Act, which was recently passed into law by the European Parliament, also introduces the risk of penalties: businesses found guilty of the most serious infractions face fines of up to €35 million or 7% of global annual turnover, whichever is higher.
There is, however, a clear consensus across industries: most large organisations appear to be aiming to introduce artificial intelligence into their businesses within the next three years. But readiness for the added oversight this will require varies significantly. Businesses therefore need to be asking the tough questions now:
- What AI model is the best fit for our needs?
- What performance targets should we be aiming to achieve?
- Do we possess the skills required to maintain AI systems?
- And crucially, will we be compliant with the AI Act?
To introduce AI responsibly and sustainably, in a way that safeguards stakeholders and prepares for compliance with the AI Act, organisations must be proactive and begin constructing their own AI governance frameworks now. An effective framework not only guards against legal repercussions but also sets the scene for responsible, ethical AI integration that aligns with organisational values and societal well-being. Embracing AI with a well-structured governance framework is thus not merely a choice – it is imperative for future success.
As an industry, the financial services sector is leading the way on this front – a report from Singapore’s Personal Data Protection Commission found that DBS Bank, HSBC and Visa Asia Pacific were among the standard-setters in this area. And while firms’ internal AI governance frameworks vary in structure, most contain the same critical elements.
- Cross-functional collaboration: Given the intricacies of AI, creating an effective framework requires a cross-functional team. Leveraging existing forums, such as Data Protection Governance groups, is one strategic approach. Mastercard took this route: its AI Governance Council comprises the organisation’s Chief Data Officer, Chief Privacy Officer, Chief Information Security Officer, and Data Science representatives. Legal expertise should, of course, also be included where relevant, to ensure compliance with evolving laws.
- Executive sponsorship: Effective decision-making within the governance group requires executive sponsorship. As well as supporting ideas presented to the board, the executive sponsor also makes real-time decisions. Again, Mastercard achieved this by having the Executive Vice President of the AI Centre of Excellence chair the AI Governance Council.
- Trust and accountability: Any framework should emphasise trust and accountability. Some organisations, such as IBM and Google, have established their own sets of ethical principles. But not every organisation needs to take this step, as the AI Act introduces six key principles: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; and social and environmental well-being. A framework without key principles will not ultimately stand up to scrutiny.
- Strategy and risk management: The chosen AI strategy should account for associated risks and impacts, including whether it involves procuring training data or using proprietary data. The governance framework should inform the strategy and evaluate proposals, drawing on existing internal frameworks and external standards such as the NIST AI Risk Management Framework, published in January 2023.
- Performance monitoring: The framework should define the performance metrics applicable to AI tools and systems and continuously monitor performance against them (see the sketch after this list). Metrics may encompass data quality, stakeholder outcomes, compliance, energy consumption, and environmental impact. Monitoring is essential to ensure that AI systems meet their intended outcomes and comply with relevant regulations.
- Cultural integration: The successful adoption of any new technology, AI included, depends on fostering a culture that embraces change and supports compliance. The governance framework should address training, awareness, and accountability, ensuring that employees have the tools and knowledge to adapt to the changes effectively.
- Societal impact: Societal implications should be a core consideration in any framework. This impact, as recognised in the AI Act, extends to environmental considerations.
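To make the performance-monitoring element more concrete, here is a minimal sketch of how a governance team might encode metric targets and check observed values against them. The metric names, thresholds, and observed values are illustrative assumptions, not prescriptions from the AI Act or any specific framework.

```python
# Minimal sketch of metric-based AI performance monitoring.
# Metric names, thresholds, and observed values below are
# hypothetical examples, not drawn from any real framework.

from dataclasses import dataclass

@dataclass
class MetricTarget:
    name: str
    threshold: float
    higher_is_better: bool = True

    def breached(self, observed: float) -> bool:
        """Return True if the observed value misses the target."""
        if self.higher_is_better:
            return observed < self.threshold
        return observed > self.threshold

# Hypothetical targets a governance group might define.
targets = [
    MetricTarget("data_completeness", 0.98),    # data quality
    MetricTarget("prediction_accuracy", 0.90),  # model quality
    MetricTarget("complaint_rate", 0.01, higher_is_better=False),       # stakeholder outcomes
    MetricTarget("kwh_per_1k_requests", 5.0, higher_is_better=False),   # energy consumption
]

# In practice these values would come from logging/telemetry.
observed = {
    "data_completeness": 0.995,
    "prediction_accuracy": 0.87,
    "complaint_rate": 0.004,
    "kwh_per_1k_requests": 6.2,
}

for target in targets:
    value = observed[target.name]
    status = "BREACH" if target.breached(value) else "ok"
    print(f"{target.name}: observed={value} target={target.threshold} -> {status}")
```

In practice, checks like these would run on a schedule against real telemetry, with breaches escalated to the governance group and logged for audit purposes.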
Organisations are adopting AI in their droves, and those that do so ineffectively risk being left behind. As technological advances over the years have shown, those who do not adopt such advances, or attempt to inhibit their proliferation, eventually fail. Embracing new technology is important, but embracing it in a safe and sustainable manner matters even more. The best starting point for that journey is a thorough framework with clear, unambiguous guiding principles.