Regulation of artificial intelligence (AI) should be “risk-based and forward-looking”, while also protecting human rights, democratic values and personal privacy, according to the Group of Seven (G7) leading economies. A communiqué released following a two-day meeting of digital ministers in Japan noted the potential for “significant impacts on society” from the rapid development of AI, including applications such as ChatGPT. Ministers endorsed an AI action plan for “promoting global interoperability between tools for trustworthy AI” and committed to future meetings on generative AI, covering governance, intellectual-property rights, transparency and misinformation.

Last Tuesday, four US federal agencies issued a joint statement reaffirming their commitment to promoting responsible innovation in automated systems and to protecting the public from bias in AI tools. Separately, on Thursday, US Senator Michael Bennet introduced a bill that would create a task force to review US policies on AI and identify how best to reduce threats to privacy, civil liberties and due process. Also on Thursday, politicians in Brussels reached a preliminary agreement on a new draft of Europe’s planned AI Act.

Geoffrey Hinton, the cognitive psychologist and computer scientist, was reported by the New York Times to have resigned his position at Google so that he can speak freely about his concerns over the societal risks of AI development.
Strong calls at 1st day of @G7 Digital & Tech meeting, including:
👉New #internet subsea cable connecting 🇪🇺🇯🇵🇨🇦🇺🇸
👉Promotion of trusted vendors to increase #cybersecurity of our #digital infrastructure
👉New governance forum for democracies to enable safe flow of #data pic.twitter.com/bXkaDo2Spb
— Margrethe Vestager (@vestager) April 30, 2023
