Commentary

AI Governance Pushes Digital Risks to ESG Forefront

Tess Buckley, AI Ethics Senior Analyst at EthicsGrade, outlines how advances in AI, and their implications for investors, are pushing digital risks up the ESG agenda.

Artificial intelligence (AI) and digital governance are major ESG issues, yet they have so far been neglected in broader ESG discourse. However, recent advances in AI, not least OpenAI’s ChatGPT exploding into the mainstream, have seen AI and ESG simultaneously gaining a growing share of the limelight in public discourse.

At the end of 2022, a report by PwC forecast that ESG-related assets under management (AuM) would rise to US$33.9 trillion by 2026. Institutional investors in Europe are also finding that ESG criteria, especially social factors, now hold greater sway in investment decisions.

However, despite their concurrent growth, digital risks remain frustratingly divorced from the broader ESG discourse. One would be lucky to find more than a mention of digitalisation and AI risks in a typical corporate ESG report. Yet digital technologies are pervading every industry, with digital transformation driving further automation of businesses across all sectors. Whilst digitalisation promises an abundance of opportunities for society and the environment, it also brings a multitude of risks that require strong mitigation through digital governance frameworks and ESG strategy.

Why is AI not included in the ESG agenda? 

A primary reason AI and digital governance have been broadly neglected in ESG discourse is the misconception that digital risk concerns are largely confined to companies whose offerings are explicitly digital; that is, big tech companies such as Microsoft, Meta, Google and Amazon. Yet digital technologies are reshaping the business models of industries we wouldn’t historically associate with digital technology.

The impacts of digitalisation are widespread, with real-world issues already exemplified in the agricultural and financial services industries. In agriculture, AI-enabled precision agriculture could help to significantly reduce food waste and protect crop yields for farmers. However, if this integration is not matched with strong digital governance principles, it could also entrench the competitive advantage of larger-scale farmers, who are better placed to scale using technology.

This form of digitalisation risks dissolving farmers’ operational autonomy: limited digital skill sets would leave them dependent on the tech companies providing such farming software whenever they (inevitably) face technological issues. This digital risk was highlighted in 2022, when two lawsuits were filed against the agricultural machinery manufacturer John Deere for deliberately designing its software so that farmers cannot self-sufficiently repair their own farming technology.

In financial services, AI-assisted decision-making could perpetuate existing social inequalities by encoding unintentional biases against marginalised groups into its data-driven predictions. Financial services firms have been quick to automate their business processes with AI to reduce costs and increase profits. The sector has been less forthcoming in establishing digital governance principles and audits of AI systems, which would ensure a digital transformation that delivers efficiency while avoiding further discrimination against marginalised groups.

Now what? AI governance as ESG strategy 

With AI propelling itself deep into the mainstream, and ESG benchmarking growing as a means of informing socially conscious investors and users, these could be the first signs of a perfect storm for poorly governed tech companies. Investors and portfolio managers will increasingly scrutinise the ESG impacts of digital technologies to inform their investment decisions, putting pressure on companies across all industries to consider corporate digital responsibility as part of their ESG strategy.

AI and digital governance are ESG issues that have so far been neglected in broader ESG discourse. An ESG strategy that accounts for the risks of digital technology is a necessity in 2023: ESG considerations not only decrease reputational risk but increase value. Choosing to implement these digital ESG standards is currently a commercial decision. However, government action and enforced AI policy are right around the corner, with laws such as the UK’s Online Safety Bill and the EU’s AI Act highlighting digital ethics as a priority.

The incoming regulations are designed specifically to address the potential harms of AI and digital technology: the Online Safety Bill is considered one of the most far-reaching attempts to date to regulate how tech companies deal with users’ content, while the AI Act aims to harmonise the rules placed on AI systems in the EU.

The Online Safety Bill will force platforms to tackle and remove illegal material relating to terrorism and child sexual exploitation and abuse from their sites. Any platform that fails to comply faces fines of up to 10% of its revenue, with the highest penalty the regulator can enforce being the blocking of the platform. The bill will also require high-risk platforms to address users’ exposure to harmful material, such as cyber flashing or epilepsy trolling, in their terms and conditions. These services will also have to include ‘empowerment tools’ giving adults control over how, and with whom, they interact on the platform.

The AI Act aims to establish and enforce horizontal legislation, ensuring that the regulation of AI systems is consistent across the bloc. The bill not only sets a precedent in the EU but is expected to be treated as a global standard, inspiring AI regulation internationally. The act sets out rules for the trustworthy and safe deployment of any product that uses AI on the EU market, and clarifies AI risk categories for industry leaders.

The EU is also introducing ethical standards for AI with its proposed legal framework, which treats regulation as essential to the development of AI tools that consumers can trust. Companies will find that ESG continues to evolve, and part of this evolution will make digital ethics in ESG strategy non-negotiable. Successful and ethical business engagement around inclusive AI will require consideration of the incoming regulation; a compelling vision of AI as an ESG issue, rooted in accountability for advancing responsible AI; internal frameworks that encourage inclusive innovation; and a shared sense of urgency to take action that ensures AI development and deployment is fundamentally ethical.

Luke Patterson, Senior Insight and Evaluation Analyst at EthicsGrade, contributed to the article.
