Navigating Rates
Responsible AI: a sustainable approach
Artificial intelligence is reshaping the world at an unprecedented pace but, as with any revolutionary technology, there are risks and trade-offs. We think AI can only sustain its growth by respecting environmental, social and governance standards. Can AI be responsible?
Key takeaways
- Responsible AI aims to apply standards and safeguards to maximise the benefits and minimise the harms caused by artificial intelligence.
- Although there is not yet a unified global standard on responsible AI, legislation such as the EU AI Act provides guidance.
- Far from hindering growth, we believe a responsible approach can help to build trust in AI and ultimately accelerate growth.
As AI systems become more powerful and deeply embedded in the economy, careful consideration of environmental, social and ethical issues is required. Given the scale of investment AI demands, investors must recognise and tackle systemic risks if they are to seize the long-term value opportunity. Here, we discuss what is meant by responsible AI, what it requires of investors, and which core principles should be integrated into investment decisions.
From AI to responsible AI
AI systems are machine-based technologies designed to perform tasks with varying degrees of autonomy, exhibiting human-like capabilities such as reasoning, learning, planning and creativity1. These systems vary in complexity and function: machine learning may identify patterns in data to support forecasting, deep learning may use layered neural networks for image and speech recognition, and generative AI may create new content.
There is great excitement about the potential of the technology. In healthcare, for example, AI is improving the speed and accuracy of cancer detection. Proponents of AI say its capacity to process vast volumes of data, optimise complex systems and unlock new efficiencies presents unprecedented opportunities across sectors and geographies. The AI value chain is more extensive and nuanced than it appears at first glance.
Exhibit 1 provides a summary of the value chain, which can be simplified into three core segments:
- Developers and infrastructure providers – determine the fairness, resilience and footprint of systems.
- Deployers – determine the reach of AI into products and services.
- Users – determine the impact of how AI is used.
Each of these groups bears responsibility for aligning AI with environmental, social and governance standards, and that responsibility is collective and continuous.
Exhibit 1: Understanding the AI value chain
Source: Allianz Global Investors - Sustainability Research
However, AI has the potential to do harm as well as good. Because of the way large language models are trained, there are concerns these systems may reflect biases – against marginalised groups, for example – which may deepen inequalities. There are also concerns that AI may be misused by hostile actors. Cyber criminals are already using AI to deceive their victims – in phishing attacks, for example – and there are fears these methods will become ever more effective as AI technology advances.
Indeed, the World Economic Forum’s Global Risks Report 2024 ranks misinformation and other AI-related harm among the top long-term global threats2. These risks are no longer theoretical: they are already materialising into real-world consequences for individuals, organisations and the environment.
And the risks are only going to intensify. According to the OECD, the rapid and widespread deployment of AI systems across sectors and societies is increasing the likelihood and severity of both AI hazards and AI incidents3. Where those risks result in societal and human rights challenges, the impacts may be hard to quantify – but deeply consequential.
How can these tools be used responsibly?
There is clearly a need for standards and safeguards to minimise harms from AI. Responsible AI is about striking a balance – maximising the benefits of innovation while mitigating its risks. We argue that this represents not only a technical challenge but a moral imperative – to ensure AI is developed and applied in ways that align with human values and serve the public good.
However, the pace of AI innovation makes this a formidable task given that technical advances are outpacing governance and relevant policymaking. Diverse regulatory strategies are evolving in different regions, as regulators rise to the challenge of creating frameworks that balance technological progress with accountability. The most comprehensive and structured AI governance framework currently appears to be the EU AI Act4.
The EU AI Act provides guidance on unacceptable risks, transparency requirements and how to balance AI innovation in Europe with appropriate safeguards. This regulation is complemented by existing frameworks from the likes of the OECD and UN on human rights, data protection and cybersecurity. It can be further supported by best practices on resilience and readiness.
Although there is not yet a unified global standard on responsible AI, we believe that these existing resources constitute a useful guide to navigating the challenges. It is only the start, but it is a foundation we can build on.
Our contribution: a responsible AI investment framework
As an investment company that manages assets across a range of regions and sectors, it was important for us at AllianzGI to develop a sustainable approach to investing in this emerging technology. After reviewing key guidance and operating standards, we have developed a responsible AI investment framework, with a focus on the social implications. Our view is that AI needs to be trusted to fulfil its potential, and that trust spans users, developers, regulators and governments.
Our framework is underpinned by three dimensions:
- Robustness – resilience, reliability and safety of systems
- Ethical integrity – ensuring accountability and fairness for social outcomes
- Sustainability – managing the environmental and social impact of AI development
A truly responsible approach to AI requires coordinated and cohesive attention to all three dimensions to maximise socioeconomic benefits over the long term.
Exhibit 2: Responsible AI treats human needs and trust as priorities
Source: Allianz Global Investors - Sustainability Research
It is important to recognise that the adoption and use of AI differs across sectors. Within our sector frameworks, we perform a materiality assessment and prioritise the topic of responsible AI where it is most impactful, guided by our AI value chain.
An example of a sector that will be heavily impacted by AI is banking and insurance, where customer service, inclusive credit policies and the effectiveness of cybersecurity and data privacy all depend on AI being implemented responsibly. In our analysis, best-in-class or lagging practices then influence a company’s final score.
Engagement with companies making or using AI
Engagement with investee companies on the topic of responsible AI will be critical in establishing appropriate standards. We find that genuine dialogue and sharing perspectives can be highly constructive – for both investors and investees. We have already seen a number of AI-related shareholder resolutions, which typically fall into three major categories:
- Board oversight and accountability
- Disclosures on the impact of AI
- Disclosures on risk frameworks and management.
To date, most shareholder resolutions have focused on AI developers, primarily on the topics of product and service safety (including misinformation). However, we expect this to broaden to those enterprises deploying AI solutions. Exhibit 3 illustrates our engagement framework on responsible AI.
Exhibit 3: Framework for discussing AI with makers and users
Source: Allianz Global Investors - Sustainability Research
Responsible AI can accelerate growth
We view responsible AI products and services as critical to long-term financial performance. We see three focus areas of responsible products and services:
- Environmental – full environmental footprint life-cycle assessment and strategic planning.
- Human-centred – fair, transparent, inclusive and respectful to all stakeholders.
- Regulatory-compliant – robust governance frameworks that direct capital to better risk-managed operators, who can pursue opportunities with confidence in regulatory oversight.
AI remains divisive, marked by both promise and concern. But we believe that resilient and innovative technologies are key to long-term economic and social development. In our view, the effort to make AI responsible will aid the growth of AI, not hinder it. As AI enters its next phase of mainstream growth, investors can help shape and direct capital to the most responsible and successful operators.