For several years, Storebrand has been working with digital rights as one of its focus areas, including issues such as the ethics of artificial intelligence (AI) technologies. Through this experience, we have found that it is often most productive for investors to engage companies through collective initiatives. This reflects the broad, complex and far-reaching nature of the issues, as well as the scale and influence of the companies that must be engaged in order to have a reasonable chance of making an impact.
New phase begun in 2024
Since September 2022, members of WBA's Ethical AI Collective Impact Coalition have been engaging companies assessed on ethical AI by WBA's Digital Inclusion Benchmark, focusing initially on companies that did not yet have publicly available ethical AI principles.
In February 2024, the second phase of the Collective Impact Coalition for Ethical AI was launched, supported by investors including Storebrand Asset Management. In total, the investors involved represent over US$8.5 trillion in assets under management.
In the current phase, we and the other coalition members are encouraging companies to implement policies and mechanisms that ensure the ethical development and application of AI, guided by respect for human rights and the principle of leaving no one behind.
Progress in latest assessment
The latest assessment by the WBA shows that 71 of the 200 largest digital companies, just over a third, now have AI principles in place, up from 52 companies a year ago. More than half of the principles established include human rights considerations, which is also a positive finding.
Companies have made progress on some dimensions. The development of comprehensive ethical AI documents has seen notable growth: 66 companies now have AI principles that they developed themselves (as opposed to endorsing third-party principles), and 60 of those companies have released standalone documents outlining their commitments.
That said, progress in this area has been slower than both expected and needed. While the number of companies with ethical AI principles has grown, the proportion that define and include explicit human rights considerations is relatively small, and many companies have not integrated these considerations into their AI frameworks.
Of the 71 companies that now have ethical AI principles, only 29 publicly disclose how they implement them. Other findings from the assessment include slow but steady growth in the number of companies with relevant internal governance structures, such as ethical AI committees, which help convert conceptual commitments into tangible operational action.
Of most concern, only 16 companies conducted human rights impact assessments (HRIAs) in 2024. This points to significant risks, given that new regulations such as the EU Artificial Intelligence Act require Fundamental Rights Impact Assessments (FRIAs) for high-risk AI systems from 2026 onward.
What's next?
While these commitments are a positive step, much remains to be done. The next challenge lies in tracking how companies implement these principles. Many companies' reporting on their AI operations lacks transparency, making it difficult to assess whether they are truly living up to their ethical AI commitments.
Through the Collective Impact Coalition for Ethical AI, we will continue to push companies to move beyond symbolic statements and show real progress in operationalizing their AI principles. One major obstacle in this regard is the lack of comprehensive, clear guidelines for conducting HRIAs in the context of AI systems. Developing such guidelines is therefore an urgent next step.
These steps, along with national-level legislation in many countries, are needed to ensure that ethical AI becomes a reality, and we will be working to put them in place.