
Responsible AI: a sustainable approach

Artificial intelligence is reshaping the world at an unprecedented pace but, as with any revolutionary technology, there are risks and trade-offs. We think AI can only sustain its growth by respecting environmental, social and governance standards. Can AI be responsible?

Key takeaways
  • Responsible AI aims to apply standards and safeguards to maximise the benefits and minimise the harms caused by artificial intelligence.
  • Although there is not yet a unified global standard on responsible AI, legislation such as the EU AI Act provides guidance.
  • Far from hindering growth, we believe a responsible approach can help to build trust in AI and ultimately accelerate growth.

As AI systems become more powerful and deeply embedded in the economy, careful consideration of environmental, social and ethical issues is required. Given the scale of investment required for AI, it is important for investors to recognise and tackle systemic risks if they are to seize the long-term value opportunity. Here, we discuss what is meant by responsible AI, what it requires of investors, and which core principles should be integrated into investment decisions.

From AI to responsible AI

AI systems are machine-based technologies designed to perform tasks with varying degrees of autonomy, drawing on human-like capabilities such as reasoning, learning, planning and creativity1. These systems vary in complexity and function: machine learning may identify patterns in data to inform forecasts, deep learning may use layered neural networks for image and speech recognition, and generative AI may create new content.
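
To make these categories concrete, the minimal sketch below – written in Python with the open-source scikit-learn library, using invented data that is not drawn from this article – illustrates the simplest form of "machine learning identifying patterns in data to inform forecasts": a model is fitted to past observations and used to extrapolate the next value.

    # Minimal sketch: learn a pattern (a trend) from past observations and
    # use it to forecast the next value. Data and model choice are invented
    # purely for illustration.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    quarters = np.arange(1, 9).reshape(-1, 1)              # periods 1-8
    revenue = np.array([10, 11, 13, 14, 16, 17, 19, 20])   # toy observations

    model = LinearRegression()
    model.fit(quarters, revenue)                            # learn the trend from the data

    forecast = model.predict(np.array([[9]]))               # project period 9
    print(f"Illustrative forecast for period 9: {forecast[0]:.1f}")

Deep learning and generative AI extend the same principle of learning patterns from data to far larger models and richer outputs such as images, speech and text.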

There is great excitement about the potential of the technology. In healthcare, for example, AI is improving the speed and accuracy of cancer detection. Proponents of AI say its capacity to process vast volumes of data, optimise complex systems and unlock new efficiencies presents unprecedented opportunities across sectors and geographies. Yet the AI value chain is more extensive and nuanced than it appears at first glance.

Exhibit 1 provides a summary of the value chain, which can be simplified into three core segments:

  • Developers and infrastructure providers – determine the fairness, resilience and footprint of AI systems.
  • Deployers – determine the reach of AI into products and services.
  • Users – determine the impact of how AI is used.

Each of these groups bears responsibility for aligning AI with environmental, social and governance standards, and that responsibility is collective and continuous.

Exhibit 1: Understanding the AI value chain

Source: Allianz Global Investors - Sustainability Research

However, AI has the potential to do harm as well as good. Because of the way large language models are trained, there are concerns these systems may reflect biases – against marginalised groups, for example – which may deepen inequalities. There are also concerns that AI may be misused by hostile actors. Cyber criminals are already using AI to deceive their victims – in phishing attacks, for example – and there are fears these methods will become ever more effective as AI technology advances.

Indeed, the World Economic Forum’s Global Risks Report 2024 ranks misinformation and other AI-related harm among the top long-term global threats2. These risks are no longer theoretical; they are already materialising into real-world consequences for individuals, organisations and the environment.

And the risks are only going to intensify. According to the OECD, the rapid and widespread deployment of AI systems across sectors and societies is increasing the likelihood and severity of both AI hazards and AI incidents3. Where those risks result in societal and human rights challenges, the impacts may be hard to quantify – but deeply consequential.

How can these tools be used responsibly?

There is clearly a need for standards and safeguards to minimise harms from AI. Responsible AI is about striking a balance – maximising the benefits of innovation while mitigating its risks. We argue that this represents not only a technical challenge but a moral imperative – to ensure AI is developed and applied in ways that align with human values and serve the public good.

However, the pace of AI innovation makes this a formidable task given that technical advances are outpacing governance and relevant policymaking. Diverse regulatory strategies are evolving in different regions, as regulators rise to the challenge of creating frameworks that balance technological progress with accountability. The most comprehensive and structured AI governance framework currently appears to be the EU AI Act4.

The EU AI Act provides guidance on unacceptable risks, transparency requirements and how to balance AI innovation in Europe with appropriate safeguards. This regulation is complemented by existing frameworks from the likes of the OECD and UN on human rights, data protection and cybersecurity. It can be further supported by best practices on resilience and readiness.

Although there is not yet a unified global standard on responsible AI, we believe that these existing resources constitute a useful guide to navigating the challenges. It is only the start, but it is a foundation we can build on.

Our contribution: a responsible AI investment framework

As an investment company that manages assets across a range of regions and sectors, it was important for us at AllianzGI to develop a sustainable approach to investing in this emerging technology. After reviewing key guidance and operating standards, we have developed a responsible AI investment framework, with a focus on the social implications. Our view is that AI needs to be trusted to fulfil its potential, and that trust spans users, developers, regulators and governments.

Our framework is underpinned by three dimensions:

  • Robustness – resilience, reliability and safety of systems
  • Ethical integrity – ensuring accountability and fairness for social outcomes
  • Sustainability – managing the environmental and social impact of AI development

A truly responsible approach to AI requires coordinated and cohesive progress across all three dimensions to maximise socioeconomic benefits over the long term.

Exhibit 2: Responsible AI treats human needs and trust as priorities
Graphic showing the AI value chain from developers to users and subjects

Source: Allianz Global Investors - Sustainability Research

It is important to recognise that the adoption and use of AI differs across sectors. Within our sector frameworks, we perform a materiality assessment and prioritise the topic of responsible AI where it is most impactful, guided by our AI value chain.

An example of a sector that will be heavily impacted by AI is banking and insurance, where customer service, inclusive credit policies and the effectiveness of cybersecurity and data privacy all depend on AI being implemented responsibly. Following our analysis, best or lagging practices will influence a company’s final score.
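
The article does not set out the mechanics of the scoring itself, so the sketch below is a purely hypothetical illustration of how a materiality-weighted assessment across the three dimensions described above could be aggregated into a single score; the weights, dimension scores and code are invented for this example and do not represent AllianzGI's actual methodology.

    # Hypothetical illustration only: the weights and inputs below are invented
    # and do not represent AllianzGI's actual scoring methodology. The idea shown
    # is a materiality-weighted aggregation of practice scores across the three
    # dimensions of the framework.
    materiality_weights = {            # illustrative weights for a bank or insurer
        "robustness": 0.40,            # resilience, reliability and safety
        "ethical_integrity": 0.35,     # accountability and fairness
        "sustainability": 0.25,        # environmental and social impact
    }

    company_practices = {              # illustrative assessments on a 0-1 scale
        "robustness": 0.8,             # e.g. strong cybersecurity and testing
        "ethical_integrity": 0.6,      # e.g. partial transparency on credit models
        "sustainability": 0.5,         # e.g. limited disclosure of AI energy use
    }

    def responsible_ai_score(weights, practices):
        """Weighted average: best practices lift the score, lagging ones lower it."""
        return sum(weights[d] * practices[d] for d in weights)

    print(f"Illustrative responsible AI score: "
          f"{responsible_ai_score(materiality_weights, company_practices):.2f}")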

Engagement with companies making or using AI

Engagement with investee companies on the topic of responsible AI will be critical in establishing appropriate standards. We find that genuine dialogue and sharing perspectives can be highly constructive – for both investors and investees. We have already seen a number of AI-related shareholder resolutions, which typically fall into three major categories:

  • Board oversight and accountability
  • Disclosures on the impact of AI
  • Disclosures on risk frameworks and management

To date, most shareholder resolutions have focused on AI developers, primarily on the topics of product and service safety (including misinformation). However, we expect this to broaden to those enterprises deploying AI solutions. Exhibit 3 illustrates our engagement framework on responsible AI.

Exhibit 3: Framework for discussing AI with makers and users
Graphic showing how a responsible approach to AI is robust, sustainable and ethical

Source: Allianz Global Investors - Sustainability Research

Responsible AI can accelerate growth 

We view responsible AI products and services as critical to long-term financial performance. We see three focus areas for such products and services:

  • Environmental – full environmental footprint life-cycle assessment and strategic planning.
  • Human-centred – fair, transparent, inclusive and respectful to all stakeholders.
  • Regulatory-compliant – prioritising robust governance frameworks directs capital towards operators that manage risk more effectively and can pursue opportunities with confidence in regulatory oversight.

AI remains divisive, marked by both promise and concern. But we believe that resilient and innovative technologies are key to long-term economic and social development. In our view, the effort to make AI responsible will aid the growth of AI, not hinder it. As AI enters its next phase of mainstream growth, investors can help shape and direct capital to the most responsible and successful operators.
