The Group of Seven (G7) countries have reached an accord on a code of conduct for companies and institutions that develop artificial intelligence (AI) systems. The code aims to promote the development of “secure and trustworthy” AI systems internationally and to mitigate risks posed by the technology, such as disinformation and violations of privacy or intellectual property. AI industry actors are asked to commit voluntarily to complying with the code.

New guidelines for organisations developing and using advanced AI systems have also been released to promote safe, secure, and trustworthy AI worldwide. The guidelines include commitments to mitigate risks and misuse, identify vulnerabilities, encourage responsible information sharing, report incidents, invest in cybersecurity, and introduce a labelling system so users can better identify AI-generated content. These guidelines formed the basis for the new code of conduct.

Both were developed under the so-called “Hiroshima Artificial Intelligence Process”, which was established at the G7 Summit in May 2023 to promote guardrails for advanced AI systems at a global level. Around that time, US and EU authorities unveiled plans to develop a voluntary code of conduct for AI to help curb the risks associated with the rapid emergence and growth of generative AI technologies such as ChatGPT. The US has also issued an executive order that sets out new standards for AI safety and requirements for developers and users of AI systems.
