As artificial intelligence advances at a rapid pace, investors are collaborating to understand and address fast-evolving risks to privacy, job security, democracy and society.
The rapid progress and sophistication of artificial intelligence (AI) are dominating headlines, with generative AI – new technology capable of creating content indistinguishable from human work – arousing fear and wonder in equal measure.
In March, tech moguls Elon Musk, Steve Wozniak and more than 1,200 other founders and top research scientists signed an open letter calling for a six-month pause on generative AI experiments to better understand the risks, benefits and impacts of the technology on the world.
Goldman Sachs said last month that AI could replace the equivalent of 300 million full-time jobs, and Italy banned the popular generative AI tool ChatGPT over privacy concerns. Sir Jeremy Fleming, Director of UK intelligence agency GCHQ, reportedly warned last week that disinformation is a primary threat from advancing AI.
A key issue is that AI is both omnipresent and conceptually slippery, making it notoriously hard to regulate, says Matt O’Shaughnessy, a visiting fellow in the Technology and International Affairs Programme at the Carnegie Endowment for International Peace.
Europe and China are trying to take the lead on AI, he says, with China rolling out regulations targeting AI capabilities and algorithms, and the EU racing to pass its sweeping draft Artificial Intelligence Act (EU AI Act).
Human and civil rights considerations
But investors say the EU AI Act needs to go further to address human and civil rights risks emerging from AI misuse, and have signed a joint letter demanding further action. They call for additional provisions in the Act, such as mandatory human rights impact assessments for the development and deployment of AI systems, and requirements for a publicly viewable database of AI providers that is available to users.
Aviva Investors is one of 149 investor signatories to the letter. Louise Piffaut, Head of ESG Equity Integration at Aviva Investors, tells ESG Investor that, while AI’s primary goal is to serve a positive purpose, its increasing importance within society has also resulted in significant harmful impacts.
“As an investor, we have been engaging on human rights for a number of years across sectors and regions. AI will eventually impact all sectors in which we invest. Ultimately, we want to make investment decisions that respect human rights,” she says.
Piffaut says that, as AI is currently largely unregulated, the full spectrum of human rights risks arising across AI’s value chain, from product development to the use of an AI system, remains largely underappreciated.
“Safeguards are scarce, and the companies that build these systems are therefore rarely held to account on the technology’s impacts on people,” she says. “Regulation that is risk-based, as proposed in the EU AI Act, will provide clear rules to ensure worst impacts are prohibited. The regulation proposes different rules and levels of transparency which will incentivise companies to better manage key risks.”
Engagement on AI
With robust AI regulation yet to materialise, investors are engaging with companies in areas of AI they feel pose significant human rights risks. In 2021, Brussels-based investor Candriam began an engagement drive on the human rights risks of facial recognition technology.
Benjamin Chekroun, Stewardship Analyst, Proxy Voting and Engagement at Candriam, tells ESG Investor that it started noticing the risks associated with facial recognition technology around 2020, when civil society groups launched campaigns to ban its use by police forces and companies introduced moratoriums on the sale of products and systems to law enforcement agencies, especially in the US, following the Black Lives Matter movement.
“Some cities and countries also introduced bans,” he notes. “At the end of 2021, the European Parliament called for a ban on police use of facial recognition technology in public places, and on predictive policing.” Chekroun says there are one billion cameras in the world that can be linked to facial recognition, in an area that is poorly regulated but where the technology is advancing rapidly.
“It’s really tempting to use because it’s cheap,” he adds. “The technology is coming online so quickly and it is always miles ahead of regulation. That gap is going to grow wider – technology is going to go exponential, while regulation is going to go relatively linear – and that is a risk for investors.”
Candriam is leading an engagement campaign alongside 20 investors, targeting 30 companies to improve transparency on their use of AI and on how they are dealing with the ethical and societal issues linked to the technology.
The campaign has so far spoken with 15 of those companies, including Microsoft and Motorola.
Candriam has now entered the second stage of engagement, advocating for companies to appoint one board director with experience of, or responsibility for, ethics and AI, and to establish a department that reports to the board on the issue.
Existential threat
Like Candriam, Fidelity International is concerned about the rapid progress of AI and the absence of regulation.
“Since the public release and awareness of ChatGPT, AI is obviously evolving much more rapidly than many may have anticipated, and certainly way in advance of any common oversight or restrictions in terms of governance or regulation,” says Christine Brueschke, Sustainable Investing Analyst at Fidelity International.
“Recent developments have shown that we have moved quickly from concerns about social risks such as privacy concerns, algorithmic bias and job security, to actual existential concerns for the future of democracy and even humanity.”
Fidelity International is co-leading an investor workstream to conduct collaborative engagement with companies to promote ethical AI under the umbrella of the World Benchmarking Alliance’s (WBA) Digital Collective Impact Coalition. Brueschke says it is informed by the WBA’s Digital Inclusion Benchmark, which among other things measures companies’ public commitments to ethical AI.
The workstream sent out a letter last year to 130 digital technology companies, asking them to promote a more inclusive and trustworthy digital economy and sustainable society.
“The response to our collaborative engagement from corporates has been somewhat mixed, but overall positive and encouraging,” says Brueschke. “Many companies are absolutely considering the ethical issues of AI, but there is a long way to go.”
Jamie Jenkins, Head of Global ESG Equities at Columbia Threadneedle, says AI offers huge potential advantages to modern living, such as computational power and autonomous technology, but cautions that its application also carries dangers that may require a clear, inclusive and transparent oversight process.
“The current geopolitics of the 21st century make some of that global standardisation a bit more tricky. But I think the creation of specific, globally applicable guidance for ensuring that AI activities seek to maximise public good and minimise misuse would be desirable,” says Jenkins.
“It’s not a huge leap to hypothesise that there could be misuse of AI in terms of the proliferation of misinformation.”
Piffaut from Aviva Investors agrees, saying: “Our society and its functioning could be jeopardised. We agree with the recent calls for further guidance on mitigating and reducing the risks, as per numerous open letters these past few months.”
According to Piffaut, another major risk posed by AI is to job security.
“One of the big ESG topics we have been giving more thought to is around what a just transition means in the context of a transition to a low carbon economy. Equally, investors should start thinking about the just transition that will need to happen in parallel as a result of technology and AI advances.”
Ultimately, with technology advancing so rapidly that it outpaces regulation, Chekroun from Candriam says governance will be critical. “We need companies to embrace ethics and include AI in their human rights principles.
“One thing we noticed when we spoke to companies in the field affected by facial recognition was that those that were the closest to actually writing the algorithm were the ones that realised ‘we have bias here and it’s really important we don’t screw this up’ and they were more willing to speak about publishing principles.”
