Case Study: Rescuing failed AI initiatives

Al Cranswick

July 7, 2025

Discover how AI governance and innovation capability building rescued a portfolio of 15 failed AI proof-of-concepts, with 8 remediated and deployed to 77% user adoption within 6 months.

Executive Summary

The Challenge: The client had a growing collection of failed AI proof-of-concepts (POCs) with zero user adoption, despite significant investment in AI implementation, internal communications and change management.

The Results:

  • User Adoption: 77% uptake rate across successfully deployed AI tools

  • Project Recovery: 8 of 15 failed POCs successfully deployed after remediation

  • AI Risk Management: Adoption of an AI governance framework that protected the client against common AI-related reputational risks

  • Innovation Capability: In-house innovation capability established with templates, common language, governance framework and human-centred design process

The Challenge

Company Profile: A large retailer facing pressure to modernise operations through AI adoption.

The Problem

The organisation had accumulated 15 failed AI proof-of-concepts developed by various consultants and internal teams over 18 months, with investment totalling $680K. Despite technical functionality, none had achieved meaningful user adoption. Leadership was questioning the entire AI strategy and considering abandoning further investment.

Key Issues:

  • Zero User Adoption: Deployed AI tools were being ignored by staff, despite management mandates.

  • Failed User Acceptance: Multiple POCs could not pass user acceptance testing.

  • Investment Waste: $680K spent on tools that weren't being used, with no clear path forward.

Why This Mattered: Competitors were successfully deploying AI tools while this organisation fell behind. The Board was questioning leadership's technology strategy and considering acquiring AI-enabled competitors. The more important deficit, however, was maturity: competitors were steadily improving how they identified, mitigated and governed AI risks, while this client was making mistakes it could have avoided by learning from other organisations' AI failures.

Our Consultants' Approach

Our consultants applied a dual-track diagnostic and capability-building methodology, combining human-centred design principles with AI governance frameworks. The engagement followed a structured 6-month process:

Phase 1: Root Cause Analysis (6 weeks)

  • Conducted user interviews across all failed POCs

  • Our qualified data scientist performed technical audits of existing AI tools

  • Analysed organisational change readiness and barriers

Our consultants had initially hypothesised that the AI tools were not being adopted because staff were afraid that their jobs would be replaced by AI. However, further interviews and inspection of the tools revealed that staff were actually keen to add AI adoption to their CVs but had valid concerns about:

  • Privacy of customers' personally identifiable information that could be exposed by a malicious user of the tools;

  • Objectivity and accountability when the tools recommended decisions affecting people's access to employment and consumer conveniences; and

  • Reliability and accuracy of the data and assumptions used by the tools.

Management's initial perspective was that AI systems inherently offer greater objectivity and accuracy than human decision-making, and that privacy concerns had been adequately addressed through traditional approaches such as removing obvious identifiers like names and addresses. However, this view reflected common misconceptions about AI capabilities and limitations that many organisations face when first adopting these technologies.

Our consultants recognised that bridging this knowledge gap required comprehensive education on mitigations for AI risks such as:

  • Privacy complexity: Criminals can abuse modern AI systems to re-identify individuals from seemingly anonymous data patterns, requiring more sophisticated privacy protection measures than traditional data handling (see the sketch after this list)

  • Algorithmic bias: AI systems can perpetuate or amplify existing biases in training data, making them potentially less objective than assumed

  • Reliability challenges: AI outputs can appear authoritative whilst containing inaccuracies. AI tools can also struggle with exceptional scenarios that differ significantly from historical averages.
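
To make the privacy risk concrete, consider the kind of linkage attack our consultants described. The sketch below is a minimal, hypothetical illustration (all names, columns and records are invented): it joins a "de-identified" dataset against a public one on quasi-identifiers such as postcode, birth year and gender, the same pattern that re-identification attacks exploit at much larger scale.

    # A minimal sketch with invented data: joining a "de-identified" dataset
    # against public data on quasi-identifiers re-attaches names to records.
    import pandas as pd

    # Internal dataset with direct identifiers (names, addresses) removed,
    # which management assumed made it anonymous
    deidentified = pd.DataFrame({
        "postcode":   ["2600", "2600", "3000"],
        "birth_year": [1984, 1991, 1984],
        "gender":     ["F", "M", "F"],
        "purchase":   ["pregnancy test", "headphones", "groceries"],
    })

    # Publicly available data (e.g. an electoral roll or social media profile)
    public = pd.DataFrame({
        "name":       ["Jane Citizen", "John Smith"],
        "postcode":   ["2600", "3000"],
        "birth_year": [1984, 1985],
        "gender":     ["F", "M"],
    })

    # A simple join on the quasi-identifiers re-identifies the first record
    reidentified = deidentified.merge(public, on=["postcode", "birth_year", "gender"])
    print(reidentified[["name", "purchase"]])
    #            name        purchase
    # 0  Jane Citizen  pregnancy test

This is why simply dropping names and addresses is not enough: defences such as aggregation, k-anonymity checks or differential privacy are typically needed before data can be treated as anonymous.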

While extensive guidance exists on governance and mitigations for the above risks, our consultants found that management teams often struggle to translate these technical documents into practical action. The solution was to contextualise this guidance with real-world case studies demonstrating the reputational and operational risks faced by organisations that inadequately govern their AI systems. This approach successfully engaged leadership, resulting in enthusiastic adoption of an AI governance framework.

Phase 2: AI Governance Framework Development (8 weeks)

  • Implemented the AI governance framework with management's full support.

  • Designed AI risk mitigations for each of the client's AI initiatives.

Phase 3: Innovation Capability Building (12 weeks)

Our consultants also pointed out that because AI is an emerging technology, AI investment is not a pipeline in which every idea reaches deployment; it is more like a design loop. Two key concepts in human-centred design thinking are:

  • Creative Ideation - The more diverse the ideas you generate, the more likely you are to find a winning solution; and

  • Wind tunnelling - Finding low-risk ways to test, iterate and even eliminate ideas reduces the likelihood of failed deployments.

Therefore, our consultants helped the client develop an in-house innovation capability following human-centred design principles. This involved:

  • Aligning the innovation framework with organisational strategy

  • Designing the internal innovation process and funnel

  • Innovation team development: New ways of working (process, tools) and capabilities (mindset, skills, behaviour) for team members

  • Building the innovation culture (shared language/common understanding)

Key Analytical Tools: Change readiness assessment, human-centred design thinking, AI governance audit, innovation maturity framework

Results Achieved

Primary Outcomes

  • User Adoption Rate: Increased from 0% to 77% across deployed AI tools

  • Project Recovery: 8 of 15 failed AI initiatives (POCs) successfully remediated and deployed

  • Innovation Throughput: New AI initiatives completing POC phase in 6 weeks vs 16 weeks previously

Financial Impact

  • Recovered Investment: $420K in previously written-off AI development costs

  • Efficiency Gains: 35% reduction in routine task completion time across affected workflows, freeing staff to focus on the higher-value activities they enjoyed

  • ROI: 3:1 return on consulting investment within 12 months

  • Avoided Costs: $280K in planned external AI tool acquisitions no longer needed

Why This Approach Worked

Our data scientist’s understanding of the “why” behind published AI governance frameworks: With a deep understanding of AI risks and mitigations, our data scientist was able to explain these to the client in plain language. As a result, our consultants received broad buy-in for implementing an AI governance framework based on internationally recognised standards (ISO/IEC 23053, ISO/IEC 42001:2023) and the Australian Government's AI Safety Guardrails.

Combining Creativity, Rigour and Human Centricity: Humans are naturally creative. They enjoy experimenting and being involved in design and innovation processes. All of these human traits are good for innovation, but they are also good for achieving buy-in and adoption. By involving key stakeholders in the innovation process, many organisations find that they can avoid wasting effort on top-down change management and communication artefacts.

Key Takeaways

When to Consider This Approach:

  • Multiple failed AI initiatives with low user adoption.

  • Organisational resistance to new AI tools despite technical functionality: valid issues may need to be addressed before the tools can be responsibly deployed by an organisation that cares about its reputation.

  • Need for sustainable innovation capability rather than one-off implementations.

Critical Success Factors:

  • Plain-language education on AI risks and mitigations, contextualised with real-world case studies, to win genuine leadership buy-in

  • Diagnosing the real causes of non-adoption (privacy, accountability and reliability concerns) rather than assuming staff feared job losses

  • Involving key stakeholders in the innovation process so that adoption was designed in rather than mandated afterwards

Next Steps

Struggling with AI adoption in your organisation? Our consultants' proven methodology can transform failed initiatives into successful deployments.

Schedule a consultation to discuss:

  • Diagnostic assessment of your current AI initiatives and adoption barriers.

  • Customised governance framework appropriate for your industry and risk profile.

  • Innovation capability development roadmap for sustained AI success.

Schedule a consultation with Alastair

About the Consultant: Alastair has 8 years of strategy consulting experience at tier-1 firms, specialising in technology adoption, change management, human-centred design methodologies and innovation capability building. He has postgraduate qualifications in Data Science from the Australian National University and is a graduate of the Australian Institute of Company Directors (GAICD).
