G7 Ministers Outline Actions to Contain AI Risks

Regulation of artificial intelligence (AI) should be “risk-based and forward-looking”, while also protecting human rights, democratic values and personal privacy, according to the Group of Seven (G7) leading economies. A communique released following a two-day meeting of digital ministers in Japan noted the potential for “significant impacts on society” from the rapid development of AI, including applications such as ChatGPT. Ministers endorsed an AI action plan for “promoting global interoperability between tools for trustworthy AI” and committed to future meetings on generative AI, covering governance, intellectual property rights, transparency and misinformation.

Last Tuesday, four US federal agencies issued a joint statement reaffirming their commitment to promoting responsible innovation in automated systems and to protecting the public from bias in AI tools. Separately, US Senator Michael Bennet introduced a bill that would create a task force to review US policies on AI and identify how best to reduce threats to privacy, civil liberties and due process. On Thursday, politicians in Brussels reached a preliminary agreement on a new draft of Europe’s planned AI Act.

Dr Geoffrey Hinton, the cognitive psychologist and computer scientist, was reported by the New York Times to have resigned his position at Google in order to speak freely about his concerns over societal risks from the development of AI.
