
AI - the biggest challenges are yet to come

An interview with Tulia Machado-Helland on the risks and responsible governance of Artificial Intelligence (AI) technology.

By Storebrand Asset Management
ARTICLE · PUBLISHED 16.01.2024

While artificial intelligence (AI) technology has existed for a while, it reached a tipping point last year when the technology company OpenAI launched its GPT-4 artificial intelligence-powered tool.
As AI technologies are deployed more broadly, they are also of greater relevance to investors. Here, Storebrand Head of Human Rights and Senior Sustainability Analyst Tulia Machado-Helland provides some perspectives on how AI risks and governance are relevant for investors, and what to look out for in the next year or so.
 
What are some of the key risks that exist for investors, and the companies they invest in, when it comes to responsible governance of Artificial Intelligence?   

To begin with, AI systems have huge environmental footprints, which constitutes a risk for companies that use them. Yet this is a critical point that is overlooked in most AI discussions at the moment. Training AI systems requires huge amounts of computing power and data, which in turn demands processing and data storage resources with massive carbon emissions. With the exponential growth rates we are seeing for AI, these footprints are growing by the second. For instance, according to a recent MIT Technology Review article by Karen Hao, training a single AI model can emit more carbon dioxide than the total lifetime emissions of an average automobile.

What this adds up to is that the use of AI poses a significant risk that companies might suddenly find themselves unable to achieve their climate goals and commitments, in addition to relying on operational platforms that lack resilience to the effects of climate change and nature loss.
 
In terms of social issues, AI makes it easier for companies to take actions, on a massive scale, that potentially violate human rights, treat individuals unfairly, and worsen existing social inequalities and power imbalances. This can happen along countless dimensions, but some examples include automating discriminatory policing of visible minorities, mistakenly attributing fraud to social welfare recipients, enabling worker surveillance, and putting refugees at risk by relying on AI-generated translation services. Such actions not only disproportionately affect marginalised people and communities, but can also amplify negative societal effects.
 
On the governance dimension, the risks of AI include the potential for bias or discrimination in decision-making, as well as the threat of data breaches due to inadequate security measures. A lack of technology skills in both senior management ranks and the general workforce can leave firms vulnerable to grave mistakes.

AI technology also enables the generation of content almost instantaneously, which can lead to rapid dissemination of disinformation, either intentionally or accidentally. This is significant given that the World Economic Forum Global Risks Report 2024, published this month, revealed that risk specialists rank AI-generated misinformation and disinformation as the risk most likely to trigger a global crisis in the next couple of years. From this perspective, AI can pose systemic risks to societies and democratic systems. 
 
The risks are heightened by the fact that AI systems use closed algorithms that block public and regulatory insight into the decision-making processes that could lead to such outcomes.
  
In general, are AI-driven companies good at following regulatory directives?

The history of the tech industry so far shows that there are differences in how well this is managed by companies in different subsectors. The companies involved in AI software tend to have weak business ethics systems and poor product governance. This makes them less robust in terms of regulatory compliance than, for example, the semiconductor sector, which has had stronger systems on both dimensions.
  
There are examples of companies fighting regulation. Meta, for instance, has pushed back against a U.S. federal court decision allowing the U.S. Federal Trade Commission (FTC), which has accused Meta of violating children's privacy, to seek new limits on the company's use of facial recognition and on the amount of money the platform can make from its underage users.
  
The consequences and costs of data breaches, which have been increasing dramatically in the last few years, are also a concern. Furthermore, under the EU AI Act, companies that violate the data governance provisions could face fines of up to EUR 35 million (USD 38 million) or 7 percent of their global revenue.
  
How can investors ensure that their investee companies adequately address their AI-related risks? 
  

Investors need to continue engaging with companies to ensure responsible corporate behaviour, to avoid financial and reputational risk, and to prepare them to navigate all the new regulation that is coming. The financial cost is too high if companies don't behave responsibly.
  
At Storebrand, we participate in several investor engagement initiatives focused on these topics, where we have been asking companies to conduct ongoing human rights impact assessments. These should be undertaken by businesses – both AI providers that produce the AI tools and software, and AI users that incorporate these external systems into their business models – at all stages of the product and service cycle.
 
When we say all stages, we mean from design to deployment to end-use, taking into account potential contexts for use or misuse and potential unintended harms. This is to ensure the ongoing protection of and accountability to stakeholders and rightsholders in the value chain.  
 
How will the recently agreed EU AI Act impact AI companies and investors? Are there any other upcoming regulations that will have a significant impact? 

The European Union's institutions have reached an agreement on the EU AI Act, which will enter into force next year once it is formally ratified. However, the new rules will be subject to further technical discussions to hammer out the details. The devil is in the details, i.e., how the individual EU member states interpret the new rules and apply them in practice.

This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. It establishes obligations for AI based on its potential risks and level of impact. Certain applications of AI are banned, such as AI systems that manipulate human behaviour to circumvent people's free will, or AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation. This will have consequences for businesses currently using such systems, while also guiding investors and strengthening the case for companies to respect human rights, given the regulatory risk.

Another relevant European-wide regulation, the Corporate Sustainability Due Diligence Directive (CSDDD), passed late last year and therefore serves as guidance on respect for human rights – for companies themselves, as well as for investors in terms of what they should expect from the companies they invest in.
 
In Norway, the recent Transparency Act, which is already being enforced, requires human rights due diligence from companies in all sectors, so the impact there is immediate.
  
Are there any other gaps in the responsible governance of AI? And how should investors be pushing companies and regulators to address these gaps?
   
Key gaps in the responsible governance of AI include poor oversight of low-risk AI applications and the unevenness of global regulatory standards (e.g., EU vs. U.S.). Digital rights experts and civil society organisations contend that the act does not go far enough to protect against discriminatory AI systems and mass surveillance. For example, instituting only a partial ban on live facial recognition in public spaces, with exceptions for crime prevention, counter-terrorism and national security, might still pave the way for biometric mass surveillance, which is incompatible with fundamental rights.
 
Investors need to continue engaging with companies on these topics, based on the new regulation as well as the UN Guiding Principles, the OECD Guidelines and Principles on AI, and the work of the UN B-Tech project on Advancing Responsible Development and Deployment of Generative AI.

Investors should push for more robust and harmonised global regulations. This is why Storebrand, together with other investors in the Investor Alliance for Human Rights, has been urging EU regulators to adopt a robust EU AI Act that respects human rights.

How much bigger will AI get in 2024 for investors?
 

Looking ahead, the use of AI technology will require a significant focus on the part of all investors in 2024. The macro context for this is that while 2023 was the year AI was introduced, AI developments in 2024 are likely to be all about finding out how and where to use the technology, and about defining strategies, regulations and applications.
 
Companies will most likely start to develop prototypes and pilots, and the most advanced will start incorporating AI into their business and operational processes.
  
Institutional clients are also asking investors how we are managing AI risks. The adoption of new regulation makes these risks more concrete and provides some guidance as to how they are to be managed. This will be the main focus for investors who are not already deep into these issues. These developments will also make the matter of AI more concrete, which in turn will increase the leverage we investors have in terms of influencing companies in the right direction.

 

 

