Industry

Investors Urged to Get Hands-on with AI

Briink CEO calls for “sensible regulation” that strikes balance between innovation and oversight. 

Despite artificial intelligence (AI) being in its nascent stages, investors need to “get familiar” with the technology, due to the competitive advantages it can offer in understanding regulatory change and ESG risks to portfolios.

“The main takeaway for any investor that is thinking about generative AI and how it might apply within their ESG teams is don’t sit on the fence, get started now,” Tomas van der Heijden, CEO at Berlin-based software firm Briink, told ESG Investor. 

He pointed out the role AI can play in helping investors understand the requirements imposed by new rules, which often involve “dense regulatory texts”.  

He also noted the potential for AI in assessing ESG-related data, such as evaluating to what extent portfolio companies meet ESG targets.  

Van der Heijden said AI covers the “full gamut” of sustainable investment teams’ current workload, adding that companies “sitting on their hands” on AI adoption risk being “left behind”.

However, he warned that without careful implementation of AI into existing risk models, investors heighten their risk of greenwashing.

“You have to make sure that you carefully implement AI into these risk models to ensure that it examines data accurately, so the outputs it generates are reflective of reality rather than just made-up nonsense.”  

Van der Heijden said there is “still a lot of mistrust or lack of comprehension” of AI, causing hesitancy in adopting the technology, but highlighted that it offers promising opportunities for growth and efficiency.

US Securities and Exchange Commission Chairman Gary Gensler recently expressed concerns that the growing use of AI could amplify financial fragility and encourage herding behaviour among investors, leading to increased market instability and challenges for risk management within the financial system.  

Gensler stressed the critical need for proper regulation of AI to address potential risks and avoid exacerbating the interconnectedness of the global financial system, with the technology likely to play a central role in future financial crises. 

Balancing act 

The EU Council adopted a common position on the EU AI Act in December 2022. Work on harmonised standards is due to begin in Q4 2023 or Q1 2024, with standards expected to be finalised in early 2025 before the EU AI Act is applied.

In February, a group of 149 investors – all members of the Investor Alliance for Human Rights – called for a series of enhancements to the act to create an environment with “parameters that incentivise responsible business conduct” and ensure the “trustworthy use of AI”. 

The AI Act targets the harmonisation of rules on AI systems in the EU, following a risk-based approach, regulating the prohibition of certain AI systems, and setting several obligations for the development and use of such systems.  

However, in its current form, some have criticised the AI Act as being inflexible.  

Earlier this year, the UK government noted the vital part AI will play in driving science and technology forward.

In the UK, AI is currently regulated through existing legal frameworks such as the Financial Services and Markets Act 2000, with the government flagging the risks created by the gaps in existing regulation.  

The UK government underscored the importance of “getting regulation right” to ensure that innovators can thrive, and the risks posed by AI can be addressed, while being aware of industry warnings on “regulatory incoherence” possibly “stifling” innovation and competition. 

China has adopted some of the world’s earliest and most detailed regulations on AI, including measures governing recommendation algorithms and artificially generated images. 

In the US there is currently no federal legislation solely dedicated to AI regulation, though last month a bipartisan group of legislators introduced a bill to create a commission focused on regulating AI, including the potential development of a risk-based framework. 

Van der Heijden warned against regulation being “too heavy handed” in the early stages of AI implementation in Europe as it could risk stunting innovation and negatively impacting the competitiveness of the region in the sector.  

“We just don’t know enough yet about how these AI models will develop and how they will be used within businesses,” he said. “To know where all the guardrails should be, I think we need to give it a bit of time to see how these models start to develop and how they are adopted within industries.”

Van der Heijden called for “sensible regulation” that takes industry feedback into account to ensure a “good balance” between regulation and innovation.

Rising AI adoption 

A number of companies in the sustainable investment space have introduced AI-powered tools or incorporated it into their practices as the technology continues to become more widespread.  

Manifest Climate’s risk planning solution software uses AI to extract and analyse data from public company disclosures, with the software examining up to 200 data points per company to provide insights into board and management team perspectives on climate change, risk management practices, and strategy development.

Last month, ESG ratings agency EthicsGrade integrated generative AI into its corporate digital responsibility ratings and research, which look to complement the work of analysts to give users of EthicsGrade’s free platform a “rounder summary” of why companies have been given a certain ESG rating.  

Iceberg Data Labs recently launched an AI assistant for ESG analysts, which generates real-time, text-based and fully sourced answers to questions on the ESG data of portfolio companies, drawing on documents inputted by users.

Meanwhile, Briink has partnered with Berlin-based AI startup Nyonic to develop “safe and trustworthy” AI models that comply with Europe’s new privacy and safety standards.  

According to van der Heijden, the collaboration offers a “huge opportunity” for sustainable investment teams within financial institutions and corporates to start using generative AI in their daily workflows. 

“I’ve been working in AI since 2016, and I’ve never seen the pace like we have seen in the last six months,” he said. 

Briink develops generative AI technology solutions for ESG teams, primarily within the asset management industry.  

“Essentially, we are developing technology solutions that augment their work using AI,” van der Heijden said. 

The company works with asset management firms to help them integrate generative AI into parts of their workflows, such as reporting, regulatory analysis, policy screening, and any other aspects that require extensive manual text-based analysis. 

One of the most widely known AI platforms, ChatGPT, currently has over 100 million monthly active users, with the platform generating an estimated 1.6 billion visits in June 2023, illustrating its continued prevalence.

According to van der Heijden, the asset management industry is looking at AI and its use cases, with wider adoption and implementation “inevitable”. 

Copyright © 2023 ESG Investor Ltd. Company No. 12893343. ESG Investor Ltd, Fox Court, 14 Grays Inn Road, London, WC1X 8HN
