Commentary

Investing in AI

Tess Buckley, AI Ethics Senior Analyst, EthicsGrade, highlights some questions investors should consider raising with investee companies.

Responsible investment in venture capital is growing and, for artificial intelligence (AI)-related investments, the case for weighing ESG considerations is particularly strong. Like many fast-paced industries, emerging technologies pose significant risks and opportunities for all stakeholders involved. To know what questions to ask portfolio companies about how they are addressing the risks and opportunities arising from the current and expected acceleration in the use of AI, we must first understand the key issues.

Reminiscent of the dotcom bubble

The euphoria around investing in AI is reminiscent of the Internet boom of the mid-to-late 1990s. Many investors were swept up by the technology and by dotcom companies, inflating the dotcom bubble that burst in the early 2000s. This is not to say that AI should not be invested in, but it is a reminder to make sure that what you are being told is innovative AI truly is. Moreover, the Internet proved transformative, and companies such as Amazon saw early expansion during that era. Investors should place capital in sustainable AI companies that will not collapse when an AI bubble bursts.

When you see the term AI attached to a potential portfolio company, be cautious and apply best practice in your due diligence. You should expect, and inspect, an inventory of models, risk management of those models and quality control of the surrounding processes, alongside the usual business plan, a quality management team and an understanding of emerging technology risks.

For all the opportunities in AI, there are ‘underbellies’: false positives when identifying patterned user activity in anomaly detection, false positives in the identification of credit card fraud, and bias in credit risk scoring. Yet many of these undesired consequences of AI use can be mitigated by employing an ethics-by-design approach, following AI regulatory requirements and engaging in responsible innovation.

Impact on the environment

One must also consider the extraordinarily high carbon emissions from these technologies, as well as e-waste, not to mention the potential to perpetuate biases at scale.

Consider reviewing evidence of the carbon impact of activities by investment firms and others, such as machine learning (ML) model development, and of the governance in place to ensure that energy consumption is proportionate to the value created, with the minimum carbon impact wherever possible. For organisations that create physical products, such as consumer device manufacturers, look at the circularity of the product lifecycle, and particularly at policies and actions on questions such as planned obsolescence.
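For illustration, a common back-of-the-envelope estimate of training emissions multiplies hardware power draw, training hours, data-centre overhead (PUE) and grid carbon intensity. The sketch below uses entirely hypothetical figures and names of my own choosing, not data from any particular company; it simply shows the kind of sanity check an investor or analyst could apply to a reported model-training footprint.

```python
# Rough estimate of the carbon footprint of training one ML model.
# All inputs are hypothetical assumptions for illustration only.

def training_emissions_kg(
    gpu_count: int,
    gpu_power_kw: float,               # average draw per GPU, in kilowatts
    training_hours: float,
    pue: float = 1.5,                  # assumed data-centre power usage effectiveness
    grid_kg_co2_per_kwh: float = 0.4,  # assumed grid carbon intensity
) -> float:
    """Estimated kg CO2e: energy used (kWh) multiplied by grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Example: 8 GPUs drawing 0.3 kW each, trained for 72 hours.
print(f"{training_emissions_kg(8, 0.3, 72):.1f} kg CO2e")
```

A figure like this can then be weighed against the value the model is expected to create, which is the proportionality question raised above.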

AI regulation: what is being disrupted?

Consideration of AI’s disruption to democratic processes must be integrated into the ESG evaluations of portfolio companies. The reason AI applications disrupt democracy often lies in the new forms of manipulation they enable. The bedrock of democracy is citizens’ autonomy, the principle of equal participation and the public forum, all of which AI can disrupt.

Another point to consider when investing in AI is how governments are interacting with it. Look into which technologies are being endorsed by the respective governments, as these stand a greater chance of success and implementation in emerging regulatory environments.

Look for organisations that are forward-thinking in terms of the risks that stem from their activities. Forward-thinking companies will seek to understand diverse viewpoints on these risks and proactively look to mitigate them long before those risks manifest.

A good example is the set of risks pertaining to the impact of automation on employment. Some organisations shrug off their responsibilities regarding the long-term employability of their workforce and across their industry, while others are keen to ensure the risks to a healthy workforce are managed and mitigated. This is a vital issue that will grow in prominence over the coming months and years.

Security and data privacy

Considerations for data privacy have improved significantly since the European Union’s General Data Protection Regulation (GDPR) came into force, helping organisations understand what best practice is and how to achieve it.

Investors should still be conscious that there is far more to be done on this subject. Review the privacy policies of the organisations you are considering, and distinguish companies that are exploitative in harvesting user data from those that respect the privacy, security and sovereignty of the data they collect and process on behalf of others.

Company structure

A good way to see whether responsible technology is truly being practised at a company is to review the frameworks that have been integrated into how it operates. For instance, does the company have a chief ethics officer? Does it have a code of AI conduct? An AI ethics board?

Organisations need to connect their AI governance efforts to their corporate governance policies. Boards are not equipped to control the levers of risk unless they can be confident that there is a connection between the policies they set and operational controls on the ground. Look for evidence of strong corporate governance and of governance structures that extend from the board to those on the front line building or selling AI systems. Consider whether the company has a semi-independent oversight board or committees, particularly those where the accountable executive is in a position of responsibility and high visibility, such as the chief digital officer (or equivalent).

Deepening the digital divide

The rapid acceleration of emerging technology can deepen the digital divide between advanced and developing economies at a macro level and widen the information gap between rich and poor at a micro level. Socioeconomic inequalities could grow with the use of AI for two key reasons: access and education. Firstly, access to technology and cloud computing often comes at a cost, and even if individuals from marginalised groups were able to gain access to such technologies, access alone would not guarantee their successful use.

Secondly, education and AI literacy training are of the utmost importance to support uptake across cultures and to bridge the visibly widening gulf of the digital divide. Investors and firms have a crucial role to play in ensuring that individuals from all backgrounds have access to quality AI education, which can lead to inclusive economic growth and social development.

Individual firms and their investors can provide education and training programmes focused on AI literacy, particularly in underserved communities. By investing in educational infrastructure, technology accessibility and skill development, investors can contribute to reducing the digital divide. Additionally, firms can collaborate with educational institutions to develop curricula and training programmes that align with industry needs, ensuring that individuals are equipped with relevant AI skills. Firms can also encourage or require their portfolio companies to prioritise AI education to foster inclusive growth, expand market opportunities and ensure a skilled workforce for the AI-driven future.

Technical barriers to trust

Investors should be concerned with the controls that govern the use of data and the lifecycle of models. If these controls are not in place and appropriate monitoring is not set up, biased and discriminatory outcomes too often result.

It is common for automotive manufacturers to publish data on their vehicles’ fuel efficiency, and for vehicles to be subjected to annual emissions testing. It is conceivable that it will soon be just as common for organisations to publish key metrics on the functioning of their AI systems. As an investor, this is something to consider, and perhaps a reason to give additional credit to organisations that are early to market in providing metrics and information on their governance.
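As a minimal sketch of what such published monitoring might involve, the snippet below computes one widely used fairness indicator, the demographic parity difference (the gap in positive-outcome rates between two groups). The sample data, group labels and the 0.1 tolerance are invented for illustration; real governance reporting would cover many more metrics and far more context.

```python
# Minimal sketch of one fairness metric a model-monitoring report might publish.
# The sample data and the 0.1 threshold are illustrative assumptions only.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Gap in positive-prediction rates between two groups."""
    def rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate(group_a) - rate(group_b))

# Hypothetical loan-approval predictions (1 = approved) by applicant group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups, "A", "B")
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance
    print("Flag for review: approval rates diverge across groups.")
```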

Fiduciary duty

Financial services firms have particular responsibilities, notably fiduciary duty, which refers to the obligation of one party (the investor) to act entirely on behalf of, and in the best interest of, another party (the client). If an investor fails to account for the AI risks of the companies into which they invest their clients’ funds, they risk failing to provide sound advice.

As an investor, looking into the responsible AI practices of a company is now non-negotiable. Asking companies the right questions about their responsible technology practices allows investors to understand and evaluate the potential risks and opportunities that could arise from using or deploying AI.

Visible consequences of AI include privacy violations, accidents, discrimination and manipulation of political beliefs, while invisible consequences can include loss of human life, compromised national security and the potential for disinformation in military systems and medical algorithms. These challenges can significantly impact organisations through reputational damage and loss of revenue resulting from criminal investigations, loss of public trust and regulatory fines.

Technology is set to improve our lives in myriad ways, yet with its great value comes a host of undesired, high-risk consequences. For an investor, some of these risks are visible and can be accounted for in portfolios, while others are hidden. We must become acutely aware of the downsides in order to reap AI’s benefits and confront the risks.

What should investors look for?

When evaluating AI ethics risk, investors can consider the type of AI application in their portfolio, commission a third-party evaluation and/or assess the company’s own AI responsibility practices. The following should be considered when attempting to address the risks and opportunities arising from the current and expected acceleration in the use of AI.

During the screening and due diligence of portfolio companies, investors should prioritise and require an evaluation of the company’s AI ethics. With the AI Act passed in the EU, investors can now refer to the broad risk levels that divide AI applications into categories ranging from unacceptable to low risk. Using a third-party service to assess an AI system’s risks in detail allows for further clarity on the potential risk in a company’s use and development of AI; such services are offered by AI ethics auditors and ESG service providers that specialise in AI. Finally, investors should assess how a company uses AI ethics codes, chief ethics officers and responsible practices in its workflow and in product research and development. If you can identify risk in a start-up early enough, you are more likely to detect and fix AI risks that could prove financially burdensome in the future, now that regulations back them with fines.

It is now imperative that investors ask firms in their portfolios about how they are addressing the risks arising from the current and expected acceleration in the use of AI.

 
