IAHR members highlight “growing anxiety” over human and civil rights risks emerging through the misuse of artificial intelligence.
A group of 149 investors has called for a series of enhancements to the EU Artificial Intelligence (AI) Act to create an environment with “parameters that incentivise responsible business conduct” and ensure the “trustworthy use of AI”.
The signatories are members of the Investor Alliance for Human Rights (IAHR), a 200-member collective action platform for responsible investment representing over US$12 trillion in AuM. In the statement, members underlined the need for ongoing human rights due diligence to regulate the development and use of AI and to mitigate potential risks.
Anita Dorett, Director of the IAHR, told ESG Investor that the organisation’s focus is on “equipping, enabling, and driving investors to address their respect for human rights in their investment activities”. For engagement on issues such as human rights, the IAHR uses the framework of the UN Guiding Principles and the OECD Guidelines for Multinational Enterprises.
The AI Act aims to harmonise rules on AI systems across the EU. It follows a risk-based approach, prohibiting certain AI systems outright and setting out obligations for the development and use of others.
The proposed law assigns AI applications to three risk categories: applications and systems that create an “unacceptable risk”, which are banned; “high-risk” applications, which are subject to specific legal requirements; and applications neither banned nor listed as high-risk, which are left largely unregulated.
The IAHR highlighted “growing anxiety” amongst legislators, civil society organisations, and members of the investment community surrounding human rights and civil rights harms that are emerging through the misuse of AI, including incidents of invasions of privacy and discrimination.
The IAHR has outlined several additional provisions that investors say must be included in the final regulation to ensure the rights of all people are protected and that civic freedoms and democratic processes are respected.
Dorett said that before companies develop, deploy and use AI they must “pause and say what are the impacts, beyond those to the bottom line, on people, be it the users, communities, consumers and workers”.
“We are advocating for meaningful human rights due diligence and impact assessments to be a requirement for AI systems developers and users,” she said.
She also underlined the importance of disclosure and transparency in terms of human rights impact assessments that are conducted.
Amy Orr, Director of U.S. Shareholder Engagement at signatory firm Boston Common Asset Management, said: “Advocating for legislative and regulatory measures to enable ethical AI, therefore, means advocating for appropriate transparency, prohibitions, safeguards, and accountability. All the above are required for the just application of digital human rights.”
Investors also flagged the misuse of AI for “predictive policing”, whereby authorities use the technology to generate profiles or risk assessments intended to predict crimes, with such systems having been found to be discriminatory.
They also expressed concern over the “significant” risks of AI’s use in a military or national defence context. “Regulation hasn’t been able to keep up with technology and honestly, it never will be,” said Dorett, adding that investors have been flagging the ways that technology has the power to undermine the rights of vulnerable and marginalised communities for many years.
“We want innovation but we need it just done in a responsible way,” she said.
In its current form, the AI Act has been criticised for being inflexible. If, in two years’ time, a dangerous AI application is used in an unforeseen sector, the law provides no mechanism to label it “high-risk”.
The EU Council adopted its common position on the AI Act in December 2022. Work on harmonised standards is due to begin in Q4 2023 or Q1 2024, with standards expected to be finalised in early 2025 before the EU AI Act is applied.
