Tech Giants Under Pressure to Commit to Ethical AI

Investor-led initiatives and regulatory shifts are paving the way towards new standards. 

The world’s “utility-like dependence” on social media and related information technologies has prompted increased scrutiny by investors of companies’ use of artificial intelligence (AI) and machine-learning capabilities.

“Like electricity or water, technology is part of the necessary infrastructure of our lives, so [investors and other stakeholders] need to make sure that the way investee companies are using it is supportive of sustainable development,” said Lourdes Montenegro, Research and Digitisation Director at the World Benchmarking Alliance (WBA). 

“We cannot have a digital transformation that’s inclusive unless there are constraints on the development and use of AI. Those constraints must be guided by ethical principles and human rights considerations to make sure that no one is left behind,” she told ESG Investor. 

As jurisdictions – such as the UK and EU – continue to debate how to regulate AI, it’s up to investors and other stakeholders to work with companies to formulate guiderails, Montenegro added.  

Collaborative multi-stakeholder initiatives are emerging to raise awareness of the financial and reputational risks, as well as the social costs, posed by misuse of AI and to begin outlining best practice for companies.  

Although firms in the IT sector are the main focus of these initiatives, investors’ future engagement efforts should consider how all investee companies are using AI and machine learning, said Lauren Compere, Head of Stewardship and Engagement at Boston Common Asset Management. “It’s applicable to any company with an interface,” she said.

Ramping up collaboration 

Last week, WBA announced the official launch of the multi-stakeholder Collective Impact Coalition (CIC), alongside UK-headquartered Fidelity International and Boston Common Asset Management, an independent, women-led firm. 

“This is a group of global investors really pushing for transparency around the governance of AI and algorithms and the associated risks, such as perpetuating racial or gender discrimination or disrespecting users’ rights to privacy,” said Compere. 

The dangers of social media platforms for teenagers and children were once again highlighted during an inquest in September into the death of a 14-year-old who took her own life in 2017 after viewing “hideous, graphic, harmful” content on social media sites.

Members of the coalition, which includes 29 financial institutions collectively representing US$6.2 trillion in assets, aim to raise awareness of the importance of responsible and ethical AI and encourage IT companies to make a public commitment to responsible use and transparency. 

“Before complicating things with more specific expectations around usage and disclosure, we first need to get everyone in the door and ensure companies make a public commitment to upholding ethical AI principles,” noted Montenegro. 

CIC’s launch follows the findings of WBA’s 2021 Digital Inclusion Benchmark, which assessed and ranked 150 of the world’s largest technology companies on their commitments to advance a more inclusive digital society. Out of the 150 companies, just 20 had disclosed any kind of commitment to establish and follow ethical AI principles. 

In August, WBA started the engagement process with the 130 companies that have yet to make such a commitment, encouraging them to do so.  

WBA will now be extending its coverage of companies in the IT space and is aiming to publish the 2022 Digital Inclusion Benchmark in March 2023, Montenegro said.  

“AI and algorithms aren’t inherently bad – they will play a critical role in expanding access to finance and healthcare – but we collectively want to ensure there is an understanding of the real risks so they can be mitigated,” noted Boston Common’s Compere. 

Global investment manager Candriam – also a member of the CIC – recently published the first results of its Facial Recognition Technology (FRT) investor initiative.

Working with 20 other investors, Candriam defined the four biggest areas of concern that companies involved in FRT development should look to address: racial and gender bias, questionable accuracy, privacy concerns, and misuse.

“With regulation remaining limited in the technology sector, companies are not yet taking full account of their responsibilities in managing the societal impacts of FRT, depending on where in the value chain they are involved,” said Louise Piffaut, Senior ESG Analyst at Aviva Investors, a member of the initiative. 

“Investors have an important role to play in pointing to best practice and engaging with companies on the issue.” 

Candriam will now lead engagements with the assessed companies, outlining how they can embed best practice in their business operations. An update will be published in 2023.

Setting boundaries 

Individual investors are also outlining their expectations for investee companies using AI-driven technologies. 

Last week, the Church of England’s (CoE) Ethical Investment Advisory Group (EIAG) published a report advising investors with Christian values on how to approach IT companies, covering issues such as data storage, human rights and AI ethics. The guidance will inform future engagement efforts across all the CoE’s investing bodies.  

The CoE has asked IT firms to make public commitments to ensure “verifiable transparency”, promote human-centred design, enable the “flourishing of children and other vulnerable groups”, and foster an IT ecosystem that “serves the common good”. 

“The EIAG wanted to produce a gold standard for values-driven AI and technology,” said Charles Radclyffe, Partner at ESG data company EthicsGrade. The firm worked on the EIAG report and will be supplying it with portfolio data “aligned to these commitments”, Radclyffe added.

“By having the relevant data, investors can identify and then take whatever action is necessary to improve company processes.”  

Big Tech was first identified as a priority sector for engagement by the CoE Pensions Board in 2021, with the asset owner noting the sector’s poor social performance and tax avoidance.  

Asset managers have also been making their views on AI clear.  

In April, HSBC Asset Management published a white paper outlining its expectations of investee companies on responsible AI in human capital management. The firm has asked investee companies to disclose against a variety of KPIs, including providing evidence that vendors incorporate privacy-by-design principles in their AI systems.

Official guiderails 

Policymakers are debating new rules for companies on their ethical use of AI. 

The UK published a policy paper in July setting out its early proposals for what a regulatory framework for AI might look like. The government is keen to attract AI developers to the country, promising a regulatory environment that will encourage innovation.  

The paper noted that regulators would be asked to devise and implement their own sector- or domain-specific rules, based on their own assessments and focused on promoting safety, security, transparency, fairness and contestability.

The UK government plans to launch a public consultation later this year.

Following growing concerns around internet safety, the UK government is also developing the Online Safety Bill, which would require platforms hosting user-generated content to proactively identify, remove and limit the spread of illegal or harmful content, with non-compliant platforms facing fines of up to 10% of turnover from online harms regulator Ofcom.

The European Commission’s proposed AI Act will introduce central regulatory principles with less scope for adaptation by regulators. The rules will cover both high-risk AI systems in products that are already regulated, such as cars, and those used in specific industries, such as education. Some AI applications that the EU determines to pose “unacceptable” risks will be prohibited altogether; this latter category includes the social credit system used in China.

Any other application of AI that is considered low-risk will remain unregulated, the Commission has said. 

The European Parliament and Council entered discussions on the AI Act last month and are expected to agree a position by the end of the year.

“Policy will never fully catch up with the rate of change we’re seeing in the technology space; it’s like chasing after a car that’s already left the garage,” said WBA’s Montenegro. 

“This is why stakeholders need to foster more awareness from companies around the risks of AI and the importance of being transparent and responsible. We need to start addressing these problems.” 
