Commentary

Responsible Design Can Ensure AI Serves Society

Tess Buckley, AI Ethics Senior Analyst at EthicsGrade, examines the relationship between artificial intelligence regulation and innovation.

Considering the potential for widespread harm as BigTech deploys artificial intelligence (AI) systems, policymakers are confronted with the daunting task of determining how to effectively regulate emerging technology. AI can be an effective tool for progress when its responsible design is regulated in the public interest.

In conversations with investors and entrepreneurs, questions about how AI regulation could hinder innovation and economic growth are at the forefront. There is a narrative that the enforcement of regulation will stifle innovation, but the relationship between regulation and AI innovation is more nuanced. Regulation can sometimes stifle innovation, yet it remains necessary, and approaches such as the UK’s sandbox strategy can strike a balance. One could argue that regulation actually helps innovation and steers product development to serve society. The crucial factor, however, is the well-timed implementation of such regulation.

Integrating responsible design approaches and ethics into the development of AI products avoids project holds due to legislative uncertainty, and the resulting delays in realising the value of potential business opportunities. Regulation will impose ethical standards on AI, influencing designers’ choices and opening new directions for innovation. Ultimately, good regulation creates an environment of incentives for innovation and design, leading to AI products that serve society rather than exploit it.

Current AI regulation: the UK’s sandbox approach

A potential approach to mitigating the supposed inhibitory nature of regulation is the regulatory sandbox, which allows organisations to experiment within defined limits, striking a balance between innovation and safety.

The UK offers a sandbox environment with a low barrier to entry, designed to attract companies’ innovative ventures, typically during the initial phases of idea development. As an idea transitions from conception to market, exposure to a wider audience inherently amplifies its potential risks. From an innovator’s perspective, an idea initially has the potential to affect only its originator. Once shared with a research team, it is exposed to a broader realm of influence, increasing its potential to cause harm. As the idea progresses through research and design, the need for stringent regulation intensifies, because the idea evolves into a concrete product with a genuine capacity to inflict harm.

Regulators are having to be agile to attract businesses and protect the economic growth realised by AI development. The extent of BigTech’s economic influence raises significant apprehension regarding regulation’s potential to stifle innovation. BigTech supports the US economy, and the market power of these companies continues to grow: in 2018 Apple became the first publicly traded US company to clear $1 trillion in market value, and Amazon followed later that year.

At the Global Entrepreneurship Congress, Céline Kauffman, head of entrepreneurship at the OECD, said that a key focus area is making regulation more agile: “The problem that you have with regulation and innovation is twofold. On the one hand, you don’t want to prevent new ideas, innovation, entrepreneurship. But on the other hand, regulation is in place to protect consumers…You don’t want to lower too much your level of protection just to let the business prosper, because you never know what’s going to happen.”

Essentially, regulatory sandboxes provide a safeguarded space where companies are temporarily exempt from certain regulations. This freedom enables them to innovate and develop new products for a predetermined period, typically one or two years. Subsequently, the outcomes are evaluated, and the need for regulation is assessed without impeding the capacity for innovation.

Responsible design in AI: a proactive approach 

With the market expansion in AI, organisations face increasing pressure, and a growing range of options, to improve their productivity and profitability. Yet this rush has produced as many risks as opportunities, including breaches of customer privacy and security concerns: Samsung employees leaked internal data to ChatGPT, Getty Images is suing Stability AI for copyright infringement, and HireVue has faced questions of bias in its facial recognition recruitment technology.

These public cases illustrate the importance of companies developing comprehensive responsible AI frameworks that include governance, internal responsibilities, and processes for tech teams to increase transparency and trust across the AI lifecycle, especially at the design stage. Ultimately, regulation of AI is reactive, while responsible design is a proactive approach to mitigating the risks of emerging technologies.

The responsible creation of AI requires a design-first approach that considers stakeholders throughout the lifecycle of a product. Ethics by Design is a framework presented by the EU to guide an ethically focused approach to designing AI-based solutions. Following such a framework mitigates governance risks for companies through proactive analysis, auditing, and the inclusion of public feedback.

Responsible design of AI in practice includes establishing an AI ethics review board that represents cross-functional disciplines across the organisation. Other key aspects include internal governance structures that focus on security, a culture of routine fairness testing, and explainable features in AI tools that all teams can easily understand. Creating ethical AI can also involve active stakeholder engagement from the early stages, with investors and other stakeholders meeting frequently to identify and mitigate a product’s potential harms while steering its development to address customers’ needs. A sketch of what a routine fairness test might look like follows below.
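
To make the idea of routine fairness testing concrete, here is a minimal sketch of one common check, the demographic parity gap, written in Python. The function name, the example data, and the 0.1 tolerance are illustrative assumptions for this sketch, not a prescribed standard or a method described in this article.

```python
from typing import Sequence

def demographic_parity_gap(
    predictions: Sequence[int],  # model decisions: 1 = positive outcome
    groups: Sequence[str],       # protected attribute value per record
) -> float:
    """Return the gap between the highest and lowest positive rates across groups."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 0, 1, 0]
    attrs = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(preds, attrs)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # tolerance chosen for illustration only
        print("Disparity exceeds tolerance: escalate to the ethics review board.")
```

A check like this would typically run on every model update before release, with flagged results routed to the review board described above.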

Defensive design is a practice which supports responsible AI by planning for contingencies at the design stage of a project. It is a measurable and trainable skill that will soon be expected in R&D teams across sectors. Defensive design considerations include input sanitisation, authentication of users, and maintainable code that other programmers can understand. Teams ask questions like: “How could this device cause harm?” “How could bad actors use this for deceptive purposes?” By answering them, the company can anticipate the ways an end-user could misuse a device, or the ways the device could cause harm. The product is then built to make such misuse impossible, or to minimise its negative consequences. The result is a product with fences around predicted misuse, preventing bad actors from causing harm and risking damage to the company and its customers, ultimately increasing innovation while decreasing risk. A small illustration of one such fence, input sanitisation, follows below.
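
As a concrete illustration of the input sanitisation mentioned above, here is a minimal defensive-design sketch in Python for a hypothetical text endpoint in front of an AI model. The length limit, the rejected character ranges, and the function name are assumptions made for this example, not requirements drawn from any regulation or framework cited here.

```python
import re

MAX_PROMPT_LENGTH = 2000  # illustrative limit, not a standard
# Reject non-printable control characters (tab, newline and carriage return are allowed).
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def sanitise_prompt(raw: object) -> str:
    """Validate and normalise user input before it reaches the model."""
    if not isinstance(raw, str):
        raise TypeError("Prompt must be a string.")
    text = raw.strip()
    if not text:
        raise ValueError("Prompt must not be empty.")
    if len(text) > MAX_PROMPT_LENGTH:
        raise ValueError(f"Prompt exceeds {MAX_PROMPT_LENGTH} characters.")
    if CONTROL_CHARS.search(text):
        raise ValueError("Prompt contains disallowed control characters.")
    return text

if __name__ == "__main__":
    print(sanitise_prompt("  Summarise this quarter's ESG report.  "))
```

The point is not the specific checks but the habit: every assumption about well-behaved input is written down as an explicit, testable guard before the product ships.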

AI that adheres to regulatory guidelines and industry best practice not only safeguards consumers and companies from harm, but also ensures a positive direction for progress. Regulation creates an environment of incentives for innovation and design, leading to AI products that are aligned with social interests.

It also helps the industry and its investors by providing clarity on the risks being prioritised by governments. The EU AI Act presents three AI risk categories: unacceptable risk (e.g., government-run social scoring), which is prohibited; high-risk applications (e.g., CV scanning to rank applicants), which face strict obligations; and limited-risk applications (e.g., AI-enabled chatbots), which are largely unregulated beyond transparency requirements. These risk categories not only set a precedent in the EU but will be treated as a global standard that shapes AI regulation internationally, and they have already inspired the Brazilian congress in its development of a legal framework for AI.

The potential benefits of AI and ethical considerations shouldn’t be viewed as conflicting forces. To fully realise the advantages of AI, responsible design must be an integral part of the development process. Ethical considerations act as a protective shield against risks and provide a competitive edge. Regulations that necessitate responsible design can empower innovators to feel confident in their designs without the fear of regret. In essence, regulation that prioritises defensive design of AI doesn’t counteract the potential benefits; rather, it complements them by creating a safe environment for innovation.

Timing matters: allowing innovators room

Balancing the timing of regulation is imperative to avoid stifling innovation. AI development must be regulated at the right stage: regulating prematurely could hinder the emergence of novel technologies, while delaying regulation poses risks such as investments becoming trapped, or public confidence eroding due to harm caused by an AI product. Activities such as scenario testing, post-implementation reviews, and horizon scanning are all suggested by the UK government as ways to address this issue.

The journey of an idea from its inception to its realisation in the market involves a transition of responsibility from innovators to researchers and developers. Correspondingly, the establishment of governance should evolve in tandem with the idea’s progress. This path can be encapsulated by the notion that innovation often has an inverse relationship with governance. There is justified concern that excessive regulation at the wrong time might stifle creativity to some extent. However, innovators do need a certain degree of space for creativity before governance, research, and development anchor their ideas.

In various domains, the regulation of AI is imperative, as it fosters consumer trust and mitigates risks for companies. The key lies in determining when regulation should be integrated into the stages of a project. Introducing regulation prematurely could hinder an idea, while delaying it might adversely impact consumers and, consequently, the company.

The relationship: regulation and innovation 

Mariana Mazzucato, Professor in the Economics of Innovation and Public Value at University College London, argues that global and national administrations need to be more focused on the direction of AI innovation. She presents a market-shaping approach that can help align public and private interactions to drive AI towards advancing public interests. Mazzucato suggests that innovation requires a bolder global technology policy agenda to align AI development and diffusion, and govern the technology in the public interest.

Regulation can play a constructive role in fostering innovation. Markets serve as platforms where buyers and sellers convene to conduct transactions, forming the cornerstone of innovation. This dynamic exchange process, inherently driven by competition, not only spurs innovation but also ensures that its advantages extend to consumers. Regulation assumes a vital function in establishing and nurturing markets while upholding and safeguarding competitive dynamics. This responsibility is prominently shouldered by competition authorities and economic regulators, and various other regulatory bodies also shape market parameters and competitive landscapes, underscoring the need for them to be conscious of their impact.

Crucially, the regulation of AI will contribute to instilling public confidence in its use. The knowledge that emerging technologies must adhere to defensive design approaches bolsters public trust in adopting and embracing these innovations. Connected to this, regulation addresses potential public concerns proactively, which is pivotal in encouraging investment. Regulation can also shape the progress of innovative technologies by redirecting ideas to serve society.

Rather than hindering innovation, forward-looking regulation that requires responsible AI design will serve as a catalyst for creativity, offering companies and developers the freedom to explore AI and take risks without being burdened by worries about legal, reputational, or ethical repercussions. The notion that regulation opposes innovation can be reframed: regulation is a boundary that compels companies to create products which serve society, rather than devices that exploit it.

 
