Stewardship | ~4 min read
Responsible AI: early signals from our company engagements
The integration of artificial intelligence (AI) into daily life and across the economy is accelerating. As companies race to adopt these technologies, we are engaging with them on AI to learn how they are balancing innovation with the safeguards needed to mitigate emerging risks.
As investors, we have strong conviction around the transformative benefits and investment potential of AI. But as active stewards of our clients’ assets, we believe that effective governance and responsible deployment are critical to long-term value generation. With companies committing sizeable capital expenditure across multiple sectors, ensuring that AI is used ethically and with appropriate oversight is essential.
Building on our research paper published last year on prioritising a responsible approach to AI, we have intensified and formalised our stewardship activities in this area. As AI is an evolving theme, our first step was to develop a framework for approaching engagement, targeting the following areas:
- Governance and board oversight
- Risk and opportunity assessment
- Transparency and accountability
- Environmental impacts
- Workforce and skills
- Regulatory compliance and certification.
Using this framework, we started engaging with portfolio companies in 2025, initially testing our approach with industry leaders and then honing it for a group of other companies. We consider it equally important to engage both with companies at the forefront of AI development and with those that are major users of the technology, such as large web platforms. To date, we have identified two key insights from these conversations:
1. Engagement is the most effective way to understand strategy
There is no one-size-fits-all approach to engaging on AI, and our individual dialogues with companies have allowed us to better understand how they are seeking to maximise AI opportunities. For example, when we engaged with one US financial services firm, it provided clear insights into the use-case approval process for AI projects. On earnings calls, the company had shared some information on individual use cases; however, we still lacked clarity on the use-case approval process more generally – an added dimension that became the subject of our engagement. Discussions covered the approval process from beginning to end, including how opportunities are identified, governance frameworks, and any safeguards that require human-only decision-making.
2. Reporting significantly lags development
When engaging a European software company, we sought to understand the governance of two internal bodies overseeing responsible AI – a business-led working group and an external advisory panel. While we focused on understanding the activity, scope and interaction between the two groups, we also pushed for enhanced reporting on governance frameworks and structures that would be useful for investors. The company welcomed this feedback.
High-quality engagements are an iterative process, in which each discussion shapes and guides the next. We have focused our work to date on the materiality of AI for the investee company and its positioning in portfolios. As the AI agenda evolves, we will continue our structured approach to support investor conviction on this high-profile segment.
Read more: Digital resilience: more data, less downtime | AllianzGI