Features

In the Eye of the AI Storm

As artificial intelligence’s influence continues to spread and regulation begins to arrive, investors must be able to hold companies accountable on governance.

The prevalence of artificial intelligence (AI) surged in 2023, a trend certain to continue this year, but the technology’s growth is blighted by risks arising from poor governance. 

AI’s use is already widespread, as illustrated by a survey from management consulting firm McKinsey, which found that 79% of respondents have had some exposure to generative AI and 22% already use it regularly in their work.

The best-known AI system, ChatGPT, passed 100 million unique users two months after its November 2022 launch, and was estimated to have been used by 1.7 billion people within a year. According to research by IT firm Infosys, AI will add US$14 trillion in gross value to corporations by 2035, while Goldman Sachs estimated that generative AI could increase global GDP by 7% by 2033.

AI systems are already deployed by governments across areas including housing, employment, transport, education, health, accessibility, and justice, with the use of the technology likely to further increase in 2024.  

In both the public and private sectors, AI could be transformative, improving efficiency, cutting costs for corporations, and accelerating automation. That automation also means job losses: a March 2023 analysis from Goldman Sachs projected AI could replace the equivalent of 300 million full-time jobs.

However, there are serious risks posed by AI, including facilitating human rights violations, exacerbating societal power imbalances, perpetuating racial inequality and disrupting democratic processes. 

These risks largely stem from poor governance of the technology and are therefore avoidable.

For investors, they translate into regulatory, financial and reputational risks, attaching both to AI firms – investment in which is predicted to reach US$200 billion globally by 2025 – and to companies that use AI.

“Responsible governance of AI is a massive missing piece and an emerging threat,” David Rowlands, KPMG’s recently appointed Global Head of AI, says. “A strong governance framework is required to mitigate these risks and any new regulatory requirements.” 

“The speed of development alongside the relative lack of maturity in responsible use of powerful AI systems, and therefore lack of oversight and compliance processes, is AI’s main risk from a governance perspective,” he adds. 

Investor action required 

Tomas van der Heijden, CEO at Berlin-based software firm Briink, tells ESG Investor 2024 marks a “critical juncture” in the responsible governance of AI, with investors needing to play a pivotal role in shaping the trajectory of AI integration. 

There are a number of active investor initiatives to engage with investee firms using AI to ensure its responsible development and governance. These include the World Benchmarking Alliance’s (WBA) Collective Impact Coalition for Digital Inclusion – an engagement campaign aiming to drive technology companies to advance ethical AI policies and practices.

The coalition builds on the alliance’s Digital Inclusion Benchmark, which assesses 200 firms on whether they are enabling greater access to digital technologies. In September 2023, just 52 of these firms were found to have adopted basic ethical AI principles. 

Next month, the alliance is set to release an updated methodology for the benchmark which includes an additional layer focusing on governance. WBA says governance and oversight mechanisms specific to AI remain “poorly explained and understood”. 

Norway’s Storebrand Asset Management participates in several investor initiatives engaging on AI with companies, including asking investee firms to conduct ongoing human rights impact assessments. 

Jan Erik Saugestad, CEO at Storebrand, notes large tech companies involved in AI software tend to have “weak business ethics systems and poor product governance”, which makes them “less robust in terms of regulatory compliance”. 

He adds that investors must engage with companies to ensure responsible corporate behaviour around AI, both to avoid financial and reputational risk and to make sure these firms are ready to navigate new regulations.

Guillaume Couneson, TMT Partner at Linklaters, says it is “not good enough” for boards and top management just to know AI is being used somewhere within the company. There are increased expectations for them to be fully “in control of the policy and the governance around it”.  

Anita Dorett, Director of the Investor Alliance for Human Rights (IAHR), also points to an increase in investors engaging with Big Tech companies on AI-related risks, including a surge in new shareholder proposals “demanding greater accountability” in the development and deployment of AI.  

IAHR is a 200-member-strong collective action platform for responsible investment. Last February, 149 of its investor signatories called for enhancements to the EU AI Act to “incentivise responsible business conduct” and ensure the “trustworthy use of AI”.

“The rapid development and deployment of technology, particularly AI systems without comprehensive human rights due diligence processes, have contributed to adverse human rights impacts and will continue to do so in 2024, a super election year,” Dorett adds. 

Briink’s van der Heijden says investors need to ask investee companies “critical” questions focusing on the firms’ AI governance framework, ethical considerations in AI development, and measures to prevent bias and discrimination.

He adds investors should ask how investee firms are adapting to changing regulations, fostering responsible innovation, and investing in ongoing AI education and research. 

Regulation ramp-up 

Last year saw the introduction of numerous AI regulations and increased attention to the technology from policymakers, with 2024 set to see the rollout of further rules. Nikki Gwilliam-Beeharee, Investor Engagement Lead at WBA, notes there has been an “uptick in interest from a regulatory perspective”. 

In December, the European Parliament and Council finalised the long-awaited EU AI Act. This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected, as well as establishing obligations for AI based on its potential risks and level of impact. 

The act establishes clear obligations for AI systems, including a mandatory fundamental rights impact assessment, and aims to harmonise rules on AI across the EU.

The rules will apply two years after the ratification of the act, expected in the middle of the year. Firms face fines ranging from €7.5 million (US$8.2 million) or 1.5% of global turnover up to €35 million or 7% of global turnover, depending on the infringement.

“The urgent need for robust global regulations is crucial to incentivise and enable responsible AI development,” Dorett says. “While the EU has led on this, investors are closely watching developments in the US on tech policy.”

In October, the US issued an executive order on ‘safe, secure and trustworthy AI’ requiring companies to share the test results of AI systems with the government before their release.

However, executive orders can be cancelled or altered by future presidents, leaving the order’s future uncertain. Progress on AI oversight regulation in 2024 is also likely to be slow due to November’s looming presidential election.

Nevertheless, Julian Cunningham-Day, Global Co-head of Fintech at Linklaters, expects a “busy 12 to 18 months” characterised by “significant participation by the heads of big tech companies in the legislative dialogue”. 

Last year, AI firms faced three dozen hearings and nine insight forums in the Senate, while more than 50 AI-related bills were introduced in Congress.

The US is also working with the EU to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The code’s objective is to set standards for the use of AI technology, bridging the gap until formal laws including the EU AI Act come into force. 

The code is being created under the US-EU Trade & Tech Council, which was established in 2021 to build trust and foster cooperation in tech governance and trade. It was initially expected to be in place by the end of 2023.

Safety and transparency 

In November 2023, the UK hosted the two-day AI Safety Summit. The event saw 28 countries sign the Bletchley Declaration, recognising the “urgent need” to understand and collectively manage potential AI risks. However, the initial set of signatories was dominated by the US, China and other developed nations.

The declaration noted the importance of “increased transparency by private actors” that are developing frontier AI capabilities. 

In May 2024, South Korea is due to host a virtual AI summit, followed by a November in-person summit in France.  

The United Nations (UN) is also currently working on a Global Digital Compact, which aims to ensure transparency, fairness and accountability are at the core of AI governance, while taking into account the responsibility of governments to identify and address the risks that AI systems could entail.

The compact is set to be launched in September at the UN’s Summit of the Future.

Gwilliam-Beeharee suggests the compact could become the digital equivalent of the Paris Agreement, setting expectations for companies and increasing the overall level of maturity around AI.

However, Storebrand’s Saugestad warns that uneven global regulatory standards are a key gap facing the responsible governance of AI, and IAHR’s Dorett previously warned ESG Investor that regulation will never be able to keep pace with technological advancements.

Briink’s van der Heijden agrees that despite recent regulatory advancements, key gaps persist in responsible AI governance. These gaps will likely require industry participation – including from investors – to help shape best practice. 

Investors should “advocate for clearer guidelines” on ethical AI development, he says, as well as push for increased transparency, ongoing monitoring of AI applications, and mechanisms to address societal concerns, “contributing to the resolution of existing gaps”.
