
The ESG impact of AI: Balancing progress and responsibility
This article is part of a series on CES 2025, held in Las Vegas from 7-10 January, where Forvis Mazars joined leading companies and engineers at the premier global tech event to explore the latest trends, form new partnerships, and witness cutting-edge technologies first-hand.
Environmental footprint: AI’s growing energy demand
One of the most pressing ESG concerns surrounding AI is its environmental impact. Training large AI models requires vast amounts of computational power, leading to skyrocketing energy consumption. Some estimates suggest that by 2030, global data centres could account for up to 10% of total electricity use. At CES, companies such as Nvidia highlighted initiatives to improve AI efficiency by reducing the power demands of their processors. However, the industry must accelerate the development of sustainable AI infrastructure to mitigate its environmental footprint.
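To put that energy demand in perspective, the back-of-envelope sketch below estimates the electricity and emissions of a single large training run. Every figure in it (cluster size, power draw per accelerator, training time, overhead multiplier, grid carbon intensity) is an illustrative assumption, not a measurement of any specific model or facility.

```python
# Back-of-envelope estimate of the energy used to train a large AI model.
# All figures are illustrative assumptions, not measurements.

GPU_COUNT = 10_000          # assumed number of accelerators in the cluster
GPU_POWER_KW = 0.7          # assumed average draw per accelerator, in kW
TRAINING_DAYS = 90          # assumed wall-clock training time
PUE = 1.2                   # power usage effectiveness (cooling and overheads)
GRID_CO2_KG_PER_KWH = 0.4   # assumed grid carbon intensity, kg CO2e per kWh

hours = TRAINING_DAYS * 24
energy_mwh = GPU_COUNT * GPU_POWER_KW * hours * PUE / 1_000
emissions_tonnes = energy_mwh * 1_000 * GRID_CO2_KG_PER_KWH / 1_000

print(f"Estimated energy: {energy_mwh:,.0f} MWh")              # ~18,000 MWh
print(f"Estimated emissions: {emissions_tonnes:,.0f} t CO2e")  # ~7,300 tonnes
```

Under these assumptions, a single training run consumes roughly as much electricity as several thousand households use in a year, which is why both processor efficiency and cleaner grids matter.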
Data centres housing AI workloads also require extensive cooling, further adding to their carbon footprint. Some technology firms are exploring alternatives, such as liquid cooling and renewable energy-powered data centres, to reduce emissions. Companies such as Google and Microsoft have made public commitments to decarbonise their operations, including AI, but widespread adoption of greener practices remains slow.
Additionally, AI’s role in enabling sustainability solutions is growing. AI-driven energy management systems are being developed to optimise power consumption in smart buildings, cities, and industrial settings. By leveraging AI to predict and balance energy loads, industries can reduce waste and improve efficiency. Moreover, AI-powered analytics are being used in climate change research to model future scenarios, track carbon footprints, and develop solutions for reducing global emissions.
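As a concrete illustration of the load-balancing idea, the sketch below forecasts hourly demand from past readings and schedules a deferrable task into the lowest-load hour. The demand curve and figures are invented for illustration; a real system would learn from far richer signals (weather, occupancy, prices).

```python
# Minimal sketch of AI-driven load balancing: forecast demand per hour,
# then schedule a deferrable workload into the hour with the lowest
# expected load. All figures below are synthetic, for illustration only.

from statistics import mean

def demand_curve(hour: int) -> float:
    """Synthetic daily demand shape peaking at midday (kW)."""
    return 100 + 40 * (1 - abs(12 - hour) / 12)

# Assumed past readings (kW) for each hour of day over three days.
history = {hour: [demand_curve(hour) * f for f in (0.9, 1.0, 1.1)]
           for hour in range(24)}

# Naive forecast: average of past observations for each hour. Production
# systems would use trained models rather than a plain mean.
forecast = {hour: mean(readings) for hour, readings in history.items()}

# Shift a deferrable task (e.g. a batch job or EV charging session) to
# the lowest-load hour instead of running it at peak.
best_hour = min(forecast, key=forecast.get)
print(f"Schedule deferrable load at hour {best_hour:02d}:00, "
      f"forecast {forecast[best_hour]:.1f} kW")
```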
However, the trade-off between AI’s benefits and its environmental costs remains a subject of debate. While AI can help drive sustainability efforts, its own resource intensity must be mitigated through more energy-efficient algorithms, better hardware design, and regulatory frameworks that encourage responsible AI development.
Social considerations: Data privacy and bias
AI’s ability to analyse and predict human behaviour raises serious ethical questions. The extensive use of consumer data, particularly in personalised services and predictive analytics, necessitates robust data protection measures. Europe’s General Data Protection Regulation (GDPR) remains a benchmark, but as AI advances, global regulatory frameworks must evolve accordingly.
Algorithmic bias is another critical issue. AI systems trained on non-representative data sets risk perpetuating discrimination in hiring, finance, and law enforcement. The industry must prioritise transparency and inclusivity in AI model development to avoid deepening existing inequalities.
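One widely used bias check is the disparate impact ratio, which compares a model's favourable-outcome rates across groups. The sketch below applies it to hypothetical decisions; the data, groups, and the 0.8 threshold (the US "four-fifths" rule of thumb) are illustrative, not a complete fairness audit.

```python
# Minimal sketch of a disparate impact check on a model's decisions.
# The decisions below are synthetic, purely for illustration.

def positive_rate(decisions):
    """Share of favourable decisions (e.g. 'hire', 'approve')."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs: 1 = favourable, 0 = unfavourable.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favourable
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% favourable

ratio = positive_rate(group_b) / positive_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.50 here

# The "four-fifths" rule of thumb flags ratios below 0.8 as potential
# adverse impact, prompting a review of training data and features.
if ratio < 0.8:
    print("Potential bias detected: review data and model features.")
```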
Furthermore, AI’s influence on employment is a growing concern. While automation and AI-powered tools can improve efficiency and reduce operational costs, they also pose a risk of job displacement across sectors. Reskilling and upskilling initiatives are essential to ensure that workers can transition into new roles in an AI-driven economy. Companies and policymakers must work together to create educational programmes and employment policies that support workforce adaptation.
Governance challenges in regulation and corporate responsibility
Governance concerns surrounding AI centre on accountability and oversight. Governments and regulatory bodies are struggling to keep pace with AI’s rapid development, leading to inconsistencies in global standards. The European Union’s AI Act aims to address these gaps, setting stringent requirements for high-risk AI applications. Meanwhile, corporate responsibility in AI ethics is gaining traction, with tech firms increasingly committing to frameworks that ensure fairness and accountability in AI-driven decisions.
There is also a need for greater transparency in AI development. Many AI algorithms operate as “black boxes,” making it difficult for users and regulators to understand how decisions are made. Explainable AI (XAI) is a growing field that aims to make AI models more interpretable and accountable. By enhancing transparency, organisations can build trust and ensure that AI systems align with ethical standards.
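As a small illustration of XAI in practice, the sketch below uses permutation importance, a model-agnostic technique that measures how much a model's accuracy drops when each input feature is shuffled. The dataset and random-forest model are synthetic stand-ins for a "black box" system, assuming scikit-learn is available.

```python
# Minimal sketch of permutation importance as an explainability technique:
# shuffle each feature in turn and record the drop in test accuracy.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic dataset: 4 features, only 2 of which carry signal.
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts accuracy most matter most to the model.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```

Even this simple readout tells a user or regulator which inputs actually drive the model's decisions, which is the kind of transparency the field aims to standardise.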
A sustainable AI future
The key to ensuring AI’s positive ESG impact lies in proactive governance, ethical development, and sustainable technological advancements. The industry must balance innovation with responsibility, ensuring AI serves society equitably without exacerbating environmental and social challenges. Collaboration between governments, businesses, and academia will be essential in shaping AI policies that foster responsible development while harnessing the technology’s transformative potential.