Just 2% of firms have adequate responsible AI measures in place

Poorly implemented AI can create reputational risk and financial losses, but only 2% of companies have adequate measures in place to protect themselves.
In an Infosys survey of more than 1,500 business executives across Australia, France, Germany, the UK, the US and New Zealand, 86% of firms said they anticipate more risks from AI implementation. Meanwhile, 95% of C-suite and director-level executives reported AI-related incidents in the past two years, including privacy violations, ethical breaches, bias or discrimination, regulatory non-compliance, and inaccurate or harmful predictions.
AI incidents cause direct financial losses
Among these, 39% said the AI-related issues caused severe or extremely severe damage. In more than three-quarters of cases (77%), AI incidents caused direct financial losses, with 26% resulting in losses of more than US$1 million.
In this context, a large majority of executives (78%) see responsible AI as a business growth driver. Yet only a small fraction of firms achieve high standards in their AI implementation: just 2% can be considered responsible AI leaders, and a further 15% perform well.
Being a responsible AI (RAI) leader brings significant benefits: firms in this group experienced 39% lower financial losses and 18% lower severity from AI incidents.
Jeff Kavanaugh, Head of Infosys Knowledge Institute, Infosys, said: “Today, enterprises are navigating a complex landscape where AI's promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of responsible AI, there's a substantial gap in practical implementation. Companies that prioritise robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era.”
Responsible AI measures
So what are responsible AI measures and how can they be implemented? Infosys recommends observing best practices and learning from the leaders in this field, combining the product and platform operating model, building guardrails into the platform, and establishing a proactive responsible AI office.
Balakrishna D.R., EVP, Global Services Head, AI and Industry Verticals at Infosys explained: “Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability. This also means emphasising ethical, unbiased, safe, and transparent model development.
“To realise the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate. Companies should not discount the important role a centralised RAI office plays as enterprise AI scales, and new regulations come into force.”