A Responsible AI Framework for Investors

We’re excited to present this essential guidance for investors on how to assess the responsible use of Artificial Intelligence (AI). This framework not only delivers insightful research but also provides practical tools for investors by operationalising Australia’s 8 AI Ethics Principles. It is hoped the investment community – and companies more broadly – will embrace these tools as standard practice for responsible AI measurement.

This report presents the insights and outcomes developed through a collaborative partnership with CSIRO’s Data61. It is intended to be used by equity investors who want to assess the environmental, social and governance (ESG) implications of the design, development and deployment of Artificial Intelligence (AI). It can also be used as a guide for listed companies and other stakeholders that are considering how best to integrate efforts in Responsible AI (RAI).

We encourage you to:

• Read the 10 key insights from the company engagements and research.
• Understand Australia’s AI Ethics Principles.
• Follow the ESG-AI framework’s assessment steps (1 to 3, as needed).
• Use the spreadsheet templates provided.

Responsible investment in the age of AI

Wondering if the companies you invest in are using AI responsibly? We’ve created a framework for that.

• Before now, there was no specific guidance for investors to analyse AI-related environmental, social and governance (ESG) risks.
• We’ve co-developed a framework to help the investment community assess responsible AI practices and integrate ESG considerations.

Many investors use environmental, social and governance (ESG) frameworks to assess non-financial metrics like climate change, human rights and corporate governance. But with the meteoric rise of businesses using AI, how can investors identify whether companies are implementing AI responsibly? Partnering with Australia’s national science agency, CSIRO, we interviewed 28 listed companies to find out.

Nothing to see here

While most ESG reports are public, we found only a small percentage of companies publicly disclose their responsible AI (RAI) policies.

Of the companies interviewed, 40 per cent had internal RAI policies, yet only 10 per cent shared these publicly. Despite this, 62 per cent of companies were starting or had implemented an AI strategy.

Global companies were more advanced than Australian companies in implementing these strategies. Even the companies that were doing considerable RAI work didn’t reflect it in their external reporting. Some companies failed to mention AI in their risk statements, strategic pillars and annual reports, despite expressing enthusiasm about it in discussions and making significant investments in exploring the technology.

Caution to the wind, kind of

Many companies expressed concern about AI’s potential negative impacts on their reputation and consumer trust, as well as about possible regulatory consequences.

We found that companies with good overall governance structures were more likely to balance AI threats and opportunities, and therefore showed a healthy curiosity about new technology. Conversely, companies with weak overall governance are unlikely to show leadership characteristics when it comes to developing and implementing RAI, which could limit the opportunities that AI offers. For example, some companies we interviewed restricted employees from using AI tools such as ChatGPT, while others took an educational stance.

ESG: the way to see

So, if public reporting isn’t commonplace and a balanced view of threats and opportunities is needed to mitigate harm and leverage AI benefits, where can investors look for answers?

We found that a strong track record in ESG performance is an indicator of confidence for investors. Companies that carefully consider how their actions affect people, their reputation and their standing in society will approach new technologies like AI with the same care. These companies generally have well-respected Boards, robust disclosures and ESG commitments. They’re also likely to implement AI responsibly and in a measured way.

Because AI is evolving so rapidly, good leadership on existing topics like cyber security, diversity and employee engagement suggests that the impact of AI will also be considered thoughtfully.

But wait, there’s more

While looking to existing ESG frameworks is a handy stop-gap for investors, we’ve created something much more robust. Our report, The intersection of Responsible AI and ESG: A Framework for Investors, presents a framework to help the investment community assess RAI practices and integrate ESG considerations.