Many of those leaders are currently exploring how they can harness the power of AI to achieve their goals – whether that’s increasing employee productivity, reducing costs, simplifying operations, improving their security posture or enhancing the customer experience, to name just a few benefits. They are working through the practicalities as they transition from AI ideation to implementation.
With such enormous potential, it can be tempting to green-light a major investment in AI on the view that the sooner you start, the faster you’ll start reaping the benefits. This can especially be the case if your competitors are already using AI. However, without the right foundations in place, that can prove to be a costly mistake. It is critical that organisations understand where risks reside and who is responsible for them before embarking on their AI journey.
For example, a recent global survey suggests the main barrier to successfully implementing AI is a lack of the right talent (35%), followed by data privacy and cybersecurity concerns (31%).
“It’s important to get back to basics,” says Asam Malik, Partner, Technology & Digital Consulting at Forvis Mazars. “Do you understand the risks? Do you understand the opportunities? Can you identify the relevant regulations? Are you producing a business case or a return on investment (ROI) analysis on AI? Because if you get AI wrong, rather than benefitting from artificial intelligence, you’re just left with artificial information.”
Here, we list six key foundational steps you should put in place before going all-in on AI, so you can unlock its potential.
1. AI cannot be fully leveraged without a digital strategy
AI is a disruptive technology, and to handle it properly, businesses need a digital strategy that integrates AI into the rest of their technology landscape. As part of that, business leaders need to facilitate the adoption of AI, not restrict it. They need to think about how to disrupt their own organisation. At the same time, before AI is implemented, the organisational use cases for it must be identified and pinned down.
2. AI is only as good as the data that it is fed
For AI to be effective, the organisation’s underlying systems need to be stable and the data needs to be accurate and comprehensive; otherwise, the old IT adage of ‘garbage in, garbage out’ applies.
“AI has become very accessible,” says Malik. “It’s easier to get AI tools and products now than it ever has been. But the underlying data that AI is pulling information from is still the same. And if that data isn’t accurate, or not comprehensive, you’re creating more of a risk because you’re making judgments off the back of incomplete or inaccurate information.
“The key thing with AI is having confidence in your data and making sure it’s comprehensive. You can’t miss that foundational step.”
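As a purely illustrative sketch, a basic data quality check along the lines below can help build that confidence before any records are fed into an AI tool. The dataset, column names and the 95% completeness threshold are hypothetical, and pandas is assumed:

```python
import pandas as pd

# Hypothetical customer dataset; the file and column names are illustrative only.
df = pd.read_csv("customers.csv")

REQUIRED_COLUMNS = ["customer_id", "email", "signup_date", "region"]

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarise basic completeness checks before the data is handed to any AI tool."""
    return {
        # Are all the fields the AI use case relies on actually present?
        "missing_columns": [c for c in REQUIRED_COLUMNS if c not in df.columns],
        # How complete is each column? (share of non-null values)
        "completeness": df.notna().mean().round(3).to_dict(),
        # Exact duplicate records skew any downstream analysis.
        "duplicate_rows": int(df.duplicated().sum()),
    }

report = data_quality_report(df)
print(report)

# A simple gate: refuse to feed the data downstream if it is clearly incomplete.
if report["missing_columns"] or min(report["completeness"].values()) < 0.95:
    raise ValueError("Data is not comprehensive enough to feed into the AI pipeline")
```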
3. The security risks around AI, in particular cyber, are often overlooked
Criminals are leveraging AI to launch sophisticated cyberattacks on organisations. For example, they’re using AI to craft convincing phishing emails and even to impersonate a company’s CEO in a deepfake video conference. So it’s just as important to put the right safeguards in place at your own organisation when you’re implementing AI.
“After implementing AI, one client gave it free rein across the entire network – which you should do to leverage the benefits of AI, as it needs access to as much data as possible. But you also need to lock down the data that you don’t want AI to have access to,” says Malik. “At this particular organisation, someone tapped into the AI system to find out confidential information about employees’ finances and HR records. So you’ve got to be careful to put the right safeguards in place around it.”
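By way of illustration only, one simple safeguard is to redact confidential fields before a record is ever passed to an AI tool or included in a prompt. The field names and data below are hypothetical:

```python
# Illustrative sketch only: field names and the record are invented for this example.
SENSITIVE_FIELDS = {"salary", "bank_account", "hr_notes", "national_insurance_no"}

def redact_record(record: dict) -> dict:
    """Strip fields the AI tool should never see before sending a record to it."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

employee = {
    "name": "A. Example",
    "department": "Finance",
    "salary": 54000,        # confidential: must not reach the AI system
    "hr_notes": "...",      # confidential: must not reach the AI system
}

safe_view = redact_record(employee)
# Only the redacted view is ever passed to the AI system or included in a prompt.
print(safe_view)  # {'name': 'A. Example', 'department': 'Finance'}
```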
4. The data privacy risks around AI are not always considered
Data privacy laws and regulations are about making sure that you only use data for the purpose for which you gathered it, and that you have permission from the individuals involved to do so. But with AI there is the potential to use that data for analysis in much broader areas than those for which it was originally collected. That, of course, can lead to regulatory fines, reputational damage and a loss of confidence from customers, employees and shareholders.
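As a hypothetical sketch of how that principle can be made operational, data governance teams sometimes record the purposes each dataset was collected for and check any proposed AI use against them. The datasets and purpose labels below are invented for illustration:

```python
# Hypothetical purpose register: each dataset carries the purposes it was collected for.
COLLECTION_PURPOSES = {
    "orders.csv": {"order_fulfilment", "customer_support"},
    "marketing_optins.csv": {"marketing"},
}

def check_purpose(dataset: str, proposed_use: str) -> bool:
    """Return True only if the proposed AI use matches a purpose the data
    subjects consented to when the data was collected."""
    return proposed_use in COLLECTION_PURPOSES.get(dataset, set())

# Using order data for customer support is consistent with the original purpose;
# using it for broad behavioural profiling was never consented to.
assert check_purpose("orders.csv", "customer_support") is True
assert check_purpose("orders.csv", "behavioural_profiling") is False
```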
5. Boards often do not have NEDs or Execs who understand how to leverage AI and its risks
We’ve discussed how business leaders need to have good technology skills and be able to speak to business opportunities, benefits and risks. AI is really exposing that gap.
“If this talent is missing, there is a strong chance that board members won’t make informed decisions about AI – which comes at a huge risk,” says Malik. “If there’s no one on the board that understands AI, you don’t understand the true potential of that technology. And you don’t really understand the risk of it, either.”
6. Skills – it’s a two-fold problem: there is a significant gap in AI skills in the market, and employees need better AI literacy
The research we mentioned earlier shows that the shortage of talent required for innovation is a global issue, but one felt more acutely in the UK, with 72% of IT and business decision-makers acknowledging this gap compared to the global average of 67%. This underlines the urgent need for learning agility, AI fluency and creative thinking.
“Similarly, you can implement any technology across an organisation, but if you don’t provide the training in the first place, people won’t know how to use it or be able to leverage its benefits,” says Malik.
You may think these steps are obvious, but they are often overlooked. And the only way to successfully leverage the power of AI is to have the proper foundation in place first.
Get in touch
If you would like to discuss any of the details from the article, please get in touch.