Artificial intelligence (AI) technology has advanced significantly in recent years, becoming a component of many organisations' technology roadmaps and business goals. AI is having a profound impact on business and society. A people-centric, compliance-focused strategy reduces the risk of getting it wrong and maximises the advantages AI can deliver.
AI is creating many positive opportunities and enhancements. Nevertheless, it also introduces many risks, and those who create, deploy, and use AI systems must ensure these risks are managed for all stakeholders.
The EU AI Act, expected to be enacted in late 2023 or early 2024, imposes obligations on all uses of AI through a set of AI principles. Additional obligations apply to high-risk AI systems, foundation models, and systems that engage directly with individuals.
What is AI?
The AI Act proposes the following definition of AI:
AI is, “…a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.”
This definition highlights three key characteristics: autonomy, output generation, and learning.
What is the AI Act?
The EU’s AI Act is a regulation designed to ensure that a risk-based approach is taken to the design and deployment of AI systems in the EU market. While it is not yet in effect, businesses should begin preparing now, as the legislation will cover all systems in use or intended for use in the EU market once it takes effect.
The Act places obligations on parties involved in AI, with these becoming more stringent the higher the level of risk. The text approved by the European Parliament in June 2023 defines four levels:
1. Unacceptable risk
These AI systems are prohibited, and using one will incur the maximum penalties. They include systems that impair an individual’s ability to make an informed decision, such as those using dark patterns or subliminal techniques, and systems that exploit a person’s vulnerabilities.
2. High-risk systems
These systems are listed in the Act; many relate to the public sector and law enforcement, but they also include AI systems used in employment, recruitment, and credit analysis. Parties involved with these systems are subject to a wide range of obligations.
3. Generative AI
These include tools such as ChatGPT and other systems used to generate content such as text, images, and sound. They carry additional transparency obligations.
4. Limited risk
This category covers essentially everything else. These systems will be required to comply with transparency obligations and apply the AI principles.
Our approach to AI at Forvis Mazars
AI presents opportunities for organisations to transform how they do business, enable economies of scale, reach new markets, reduce costs, and reap a variety of other benefits. Organisations wanting to unlock these benefits should do so in a manner that manages the associated risks.
Our team takes an ethical and responsible approach to the development of AI frameworks to ensure that the adoption of AI has a long-lasting positive impact on both business and society.
We take a principles-based approach that aligns with the principles agreed in the AI Act:
- Human agency and oversight.
- Technical robustness and safety.
- Privacy and data governance.
- Transparency.
- Diversity, non-discrimination and fairness.
- Social and environmental wellbeing.
Our AI services
Our AI services help prepare your business or organisation by establishing responsible AI and compliance frameworks that are people- and business-driven:
- Dedicated AI officers.
- AI governance forums.
- AI governance training.
- AI awareness.
- Quality management framework.
- Risk management framework.
- Fundamental rights assessments.
- Transparency development.
- Data governance.
- AI Act gap assessment and compliance roadmap.
To find out how we can support your business or organisation in maximising the benefits and minimising the risks associated with AI, get in touch with Liam McKenna or David O’Sullivan.