An interview with Andrew, Lead Consulting Partner at Hudson & Hayes
AI readiness isn’t just a technology goal; it’s a leadership one. Andrew’s insights remind us that true transformation happens when operating models evolve to make AI a natural part of how decisions are made every day.
Chelsea:
When organisations talk about “becoming AI-ready,” what does that actually look like in practice?
Andrew:
Becoming AI-ready isn’t about installing the latest tools; it’s about redesigning how your organisation learns, decides, and delivers value.
In practice, it means creating the right conditions for AI to have a lasting impact: technically, operationally, and culturally. That starts with being clear about why AI matters, where it adds value, and how it fits into everyday decision-making.
AI-readiness is less about technology and more about building a business that can learn, adapt, and use insights confidently. You need good data foundations, clear process ownership, and collaboration between business, IT, and data teams; plus people who are data-literate, ethically aware, and comfortable working alongside intelligent systems.
In short: becoming AI-ready means building a business that can think and act intelligently at scale.
Chelsea:
That’s a great way to put it. From your experience, what’s the biggest disconnect between board-level AI strategy and operational execution?
Andrew:
The biggest gap is usually between intent and integration.
At board level, AI is often positioned as a strategic enabler, a lever for innovation or efficiency. But that’s where the conversation tends to stop. Ambitious AI strategies get approved without recognising the dependencies: data quality, process redesign, and behavioural change.
The result is often lots of pilots that never scale.
True alignment happens when AI is woven into the organisation’s operating rhythm, its governance, decision rights, and performance measures — rather than treated as a side project. The board’s vision only becomes reality when the operating model evolves to support it.
Chelsea:
Can you share a specific example where aligning the operating model made a measurable difference to AI adoption?
Andrew:
Sure. I worked with a national infrastructure organisation where AI adoption had completely stalled despite heavy investment. The issue wasn’t the technology; it was the structure.
Each business unit owned its own data and priorities, so insights couldn’t flow across the organisation. Once the operating model was aligned, with data ownership clarified, digital and field teams integrated, and decision frameworks standardised, everything changed.
Predictive models for asset health and network resilience were rolled out nationally, supported by shared data and central governance. Within a year, outages dropped, scheduling improved, and leaders gained a much clearer view of operational risk.
The real success didn’t come from the algorithms; it came from redesigning how the business worked to make AI part of everyday life.
Chelsea:
What are the most common pitfalls leadership teams face when trying to embed AI into existing processes?
Andrew:
There are a few familiar ones:
❌ Treating AI as a bolt-on instead of a redesign
❌ Dropping tools into old workflows and expecting change
❌ Forgetting the people side: trust, confidence, and capability
AI transformation fails when teams don’t understand or believe in it. Success comes from retraining people, aligning incentives, and redesigning workflows so that AI becomes a natural extension of how the organisation learns and decides.
Chelsea:
If you could give one piece of advice to organisations starting their AI transformation journey, what would it be?
Andrew:
Start with purpose, and design for scale.
The best question isn’t “Which AI tools should we buy?” It’s “Which decisions do we want to make smarter?”
Successful organisations start small, with a clear value case, and build in governance, ethics, and accountability from day one. AI transformation isn’t a digital project; it’s an operating model challenge.
Align people, process, and data around a shared goal, and AI becomes a capability, not a project.
In the early days of digital transformation, many organisations were a hammer looking for a nail. Today, we’re more like a nail gun looking for a surface: faster, more powerful, but potentially dangerous without precision.
The challenge for leaders isn’t how often to fire the nail gun; it’s how to aim it, with purpose and control. That blueprint is your operating model, the bridge between ambition and action.
When it’s designed for intelligence, AI becomes not just a capability, but a competitive advantage.
For years, organisations have relied on Lean Thinking to streamline operations, eliminate waste, and deliver consistent value. It remains one of the most powerful frameworks for process excellence. But the world of work is changing fast.
We’re entering an era where the next step in performance isn’t simply improving human efficiency but embedding intelligence directly into the workflow through Agentic AI.
This combination of Lean’s structure and the adaptability of AI agents is set to redefine operational excellence and reshape how organisations approach AI transformation.
Lean Thinking, popularised by Toyota and later adopted globally, focuses on creating value for customers while minimising waste. It’s been proven to increase productivity, reduce costs, and improve quality.
But even the best Lean systems depend heavily on human vigilance. Someone must see the problem, escalate it, and act. Continuous improvement becomes periodic.
In a world defined by real-time data, complex workflows, and distributed teams, this model struggles to keep up. Teams may know what to improve but lack the capacity or speed to act continuously.
As digital systems evolve, there’s a growing need for self-managing workflows — processes that see what’s happening, decide what to do, and act automatically.
That’s where Agentic AI comes in.
Agentic AI represents a new stage in AI maturity. Instead of responding passively to human prompts, AI agents can sense context, reason through options, and take action autonomously, often across multiple systems.
Think of them as digital colleagues rather than digital tools.
Unlike traditional automation or chatbots, AI agents can chain decisions, collaborate with other agents, and continuously learn from data and feedback loops.
Platforms like Microsoft Copilot and Google Duet AI already embed early versions of these capabilities into everyday workflows. But the real transformation happens when organisations use these agents not just within individual tools but across end-to-end value streams.
Lean provides the perfect foundation for AI transformation because it defines what good flow looks like.
Agentic AI provides the intelligence to sustain and amplify that flow.
Together, they create a system that’s not just efficient — it’s self-optimising.
Let’s take an example many organisations know well: employee onboarding.
From Manual Flow to Intelligent Flow
In a traditional Lean workshop, teams might map the onboarding process and identify waste:
Using Lean principles, they might reduce cycle time from nine days to five, introduce standard work, and clarify ownership.
A good result, but still dependent on people remembering to act.
Now add Agentic AI into the process:
Suddenly, the process becomes self-directing and adaptive.
Cycle time drops from five days to two. Errors reduce by 80%. Employee satisfaction increases.
This is no longer “continuous improvement”. It's continuous intelligence in action.
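To make that concrete, here is a minimal sketch of how such an agentic onboarding flow could be orchestrated. The agents, steps, and escalation rules are illustrative assumptions rather than a description of any specific implementation: each agent senses shared state, acts on what it can, and hands judgement calls back to a human.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class OnboardingCase:
    """State shared by all agents for one new starter."""
    name: str
    completed: set = field(default_factory=set)
    escalations: list = field(default_factory=list)

def it_provisioning_agent(case: OnboardingCase) -> None:
    # Sense what is still outstanding, then act: create accounts automatically.
    if "accounts" not in case.completed:
        case.completed.add("accounts")

def equipment_agent(case: OnboardingCase) -> None:
    # Acts only once upstream work is done, preserving the Lean notion of flow.
    if "accounts" in case.completed and "laptop" not in case.completed:
        case.completed.add("laptop")

def hr_compliance_agent(case: OnboardingCase) -> None:
    # Anything judgement-based is escalated to a human rather than automated.
    if "right_to_work_check" not in case.completed:
        case.escalations.append("Right-to-work check needs human review")

AGENTS: list[Callable[[OnboardingCase], None]] = [
    it_provisioning_agent,
    equipment_agent,
    hr_compliance_agent,
]

def run_onboarding(case: OnboardingCase) -> OnboardingCase:
    """One orchestration pass: each agent inspects the shared state and acts."""
    for agent in AGENTS:
        agent(case)
    return case

if __name__ == "__main__":
    result = run_onboarding(OnboardingCase(name="New starter"))
    print(result.completed)     # routine steps completed automatically
    print(result.escalations)   # judgement calls routed to a person
```

The point is not the code but the pattern: routine steps flow automatically, while people stay in the loop for the decisions that need judgement.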
The real challenge isn’t the technology. It’s the gap between disciplines.
Lean experts understand flow, value, and system optimisation.
AI engineers understand automation, data, and orchestration.
But rarely do both perspectives exist in the same person or even in the same room.
Closing that gap is the key to scaling intelligent operations.
In practice, this means creating cross-functional AI transformation teams that combine:
This fusion ensures AI doesn’t become another disconnected project but an integrated layer of operational excellence.
One of the most effective ways to bridge Lean and Agentic AI is through structured process modelling — defining how agents interact within a workflow.
Start with the value stream map. Identify where data enters, where decisions are made, and where delays occur.
Then, for each step, ask three design questions:
This approach reframes process design from “who does what” to “what intelligence does what.”
When applied end-to-end, organisations can transform entire value streams, such as customer onboarding, procurement, logistics, or HR, into self-orchestrating systems.
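One lightweight way to start is to annotate the value stream map itself with the intended level of intelligence at each step. The sketch below is purely illustrative; the steps, cycle times, and three-way classification are assumptions made for the example, not a prescribed taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class Intelligence(Enum):
    HUMAN = "human decision"      # judgement, ethics, exceptions
    ASSISTIVE = "AI-assisted"     # agent drafts, human approves
    AUTONOMOUS = "agent-run"      # agent senses, decides, and acts

@dataclass
class Step:
    name: str
    cycle_time_hours: float
    intelligence: Intelligence

# A hypothetical customer-onboarding value stream, annotated step by step.
value_stream = [
    Step("Capture customer data", 2.0, Intelligence.AUTONOMOUS),
    Step("Verify identity documents", 6.0, Intelligence.ASSISTIVE),
    Step("Approve credit terms", 12.0, Intelligence.HUMAN),
    Step("Set up accounts and access", 4.0, Intelligence.AUTONOMOUS),
]

# Surface where the delays sit and which of them an agent could absorb.
for step in sorted(value_stream, key=lambda s: s.cycle_time_hours, reverse=True):
    print(f"{step.name:<30} {step.cycle_time_hours:>5.1f}h  ->  {step.intelligence.value}")
```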
According to PwC’s Global AI Jobs Barometer 2024, AI could contribute up to $15.7 trillion to the global economy by 2030.
But the largest share of that value will come from process redesign, not just technology adoption.
Similarly, Gartner predicts that by 2026, over 60% of enterprises will deploy multi-agent systems that act autonomously across functions.
Lean provides the operational discipline needed to scale this transformation safely and effectively.
Without Lean, AI can create complexity faster than it creates value.
Without AI, Lean can only optimise what already exists.
Together, they enable agility, adaptability, and exponential efficiency.
Forward-thinking organisations are already experimenting with Agentic AI and Lean integration.
These examples demonstrate what’s possible when automation meets continuous improvement thinking.
Integrating Lean and Agentic AI isn’t about replacing people or processes. It is about augmenting them.
Here’s how leading organisations are approaching it:
At H&H, this is exactly the capability we’re developing — helping organisations build the bridge between process excellence and digital intelligence, and shaping it into a cohesive roadmap for AI transformation.
The next Lean revolution won’t happen on factory floors or process maps.
It will happen inside intelligent systems that design, learn, and improve themselves.
Agentic AI is not a replacement for Lean Thinking. It is its natural evolution.
Together, they transform how organisations create value:
from optimised processes to autonomous, adaptive, continuously improving ones.
The companies that embrace this combination now won’t just be more efficient.
They’ll be fundamentally more intelligent.
Artificial Intelligence (AI) is now central to the NHS’s long-term ambition. The NHS Long Term Plan and AI Roadmap both outline a shift from analogue to digital services, with AI set to play a critical role in diagnostics, patient management, and operational efficiency (NHS England, 2024).
But while AI’s potential is immense, it is not a silver bullet. Successful adoption depends on understanding the context: the technology, people, governance, and culture that enable sustainable change.
Over the past year, we have worked closely with NHS England Diagnostics, helping to develop AI literacy programmes, implementation roadmaps, and a patient-facing appointment assistant. From this work, we have distilled seven lessons for trusts and ICSs exploring how to integrate AI and, increasingly, Agentic AI (AI agents that can act autonomously) into their processes and patient pathways.
The first step toward success is building AI literacy across clinical, operational, and leadership teams, not just IT.
When people understand the fundamentals of AI and Agentic AI, they can better identify where the technology can deliver value. Literacy also helps manage expectations, preventing unrealistic assumptions about what AI can do.
As NHS England’s AI and Machine Learning Long Read notes, workforce preparedness and education are vital to safely scaling AI in healthcare (NHS England, 2024).
AI must be co-designed with patients, not built for them.
When developing our patient-facing appointment assistant, we worked with patient forums to gather Voice of the Customer (VOC) insights, including accessibility requirements and usability preferences.
This real-world input helped shape a solution that worked for everyone, not just those who are digitally confident. As the Health Foundation highlights, human-centred design and patient involvement are essential for adoption and equity (Health Foundation, 2024).
Each NHS trust has unique technology and compliance requirements. Even when two trusts use the same Electronic Patient Record (EPR) system, they may sit on different instances or tenants, with varying integration needs.
Mapping out the technical architecture early prevents costly delays later. This means working closely with IT, data governance, and clinical safety teams from the start to ensure alignment with interoperability standards such as FHIR and NHS Digital’s AI deployment framework (NHS Digital, 2025).
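For illustration, the sketch below shows the kind of standards-based integration this early architecture work enables: a simple search against a FHIR Appointment endpoint. The base URL and patient identifier are placeholders, and any real deployment would sit behind the trust’s own authentication, information governance, and clinical safety controls.

```python
import requests

# Placeholder endpoint: each trust (or EPR instance/tenant) exposes its own FHIR base URL.
FHIR_BASE = "https://example-trust-fhir.nhs.example/R4"

def booked_appointments(patient_id: str) -> list[dict]:
    """Fetch booked Appointment resources for a patient via the standard FHIR REST search."""
    response = requests.get(
        f"{FHIR_BASE}/Appointment",
        params={"patient": patient_id, "status": "booked"},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    bundle = response.json()  # FHIR search results are returned as a Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]
```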
Clinician engagement is not optional; it is essential.
Many AI projects struggle because clinicians were not involved until deployment. Without their input, AI systems risk becoming administrative burdens or being ignored altogether.
Involving clinicians from day one helps identify meaningful use cases, streamline workflows, and ensure clinical safety. Research shows that clinician buy-in and interpretability are critical for AI adoption in healthcare (PMC, 2023).
Even the best AI solution loses value if it cannot scale.
Too often, NHS projects operate in isolation, duplicating solutions that solve the same problem. Designing with scalability in mind allows learnings, data models, and tools to be reused across trusts and regions.
This “build once, reuse many times” mindset saves both time and public money, and it accelerates national transformation. The NHS AI Lab promotes this approach through its shared testing and validation frameworks (NHS AI Lab, 2024).
Governance and innovation must move in parallel, not sequentially.
Every AI project should include a governance workstream that develops alongside the technical build. This includes data protection impact assessments (DPIAs), safety case documentation, algorithm audit trails, and bias testing.
Frameworks like the NICE Evidence Standards for Digital Health Technologies and the Central Digital and Data Office (CDDO) guidelines provide practical guardrails (NICE, 2024).
Governance does not need to be a blocker. When integrated early, it builds confidence and speeds up deployment.
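As one small example, bias testing does not have to wait for deployment; a simple fairness check on model outputs can sit alongside the DPIA from the first prototype. The sketch below uses an invented dataset and a single, deliberately simplified metric to illustrate the idea:

```python
from collections import defaultdict

# Hypothetical model outputs: (patient group, whether the AI flagged them for follow-up)
predictions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def flag_rates(rows):
    """Rate at which each group is flagged; large gaps warrant investigation before go-live."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, is_flagged in rows:
        totals[group] += 1
        flagged[group] += int(is_flagged)
    return {group: flagged[group] / totals[group] for group in totals}

rates = flag_rates(predictions)
print(rates)
gap = max(rates.values()) - min(rates.values())
print(f"Flag-rate gap between groups: {gap:.2f}")  # record this in the audit trail
```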
The rise of Agentic AI brings both opportunity and risk.
When using Large Language Models (LLMs) in healthcare, clarity of purpose and boundary setting are vital. Guardrails should define where autonomy ends and human oversight begins.
For example, an LLM might be used for patient communication, summarisation, or scheduling support, but clinical advice must always be validated by qualified professionals. NHS guidance on AI-enabled ambient scribing tools stresses this balance between creativity and safety (NHS England, 2024).
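As a minimal sketch of what such a guardrail could look like (the categories, keywords, and routing rules below are illustrative assumptions, not NHS guidance), requests can be classified before they reach the model, with anything resembling clinical advice routed straight to a qualified professional:

```python
from enum import Enum

class Route(Enum):
    LLM_ALLOWED = "handle with LLM, human can review"
    HUMAN_REQUIRED = "must be handled by a qualified professional"

# Illustrative keyword triggers; a production system would use a proper classifier
# plus audit logging, not a simple keyword list.
CLINICAL_TRIGGERS = ("diagnos", "symptom", "medication", "dose", "treatment")
PERMITTED_TASKS = ("reschedule", "appointment", "directions", "opening hours", "summary")

def route_request(message: str) -> Route:
    """Decide whether an LLM may draft a response or a clinician must handle the request."""
    text = message.lower()
    if any(trigger in text for trigger in CLINICAL_TRIGGERS):
        return Route.HUMAN_REQUIRED
    if any(task in text for task in PERMITTED_TASKS):
        return Route.LLM_ALLOWED
    # Default to human oversight when intent is unclear.
    return Route.HUMAN_REQUIRED

print(route_request("Can I move my appointment to next Tuesday?"))  # LLM allowed
print(route_request("What dose of my medication should I take?"))   # human required
```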
AI in the NHS is no longer a future concept. It is happening now. But for AI and Agentic AI to deliver real outcomes, NHS organisations must take a balanced approach that combines technical excellence with cultural readiness, patient inclusion, and clinical collaboration.
By embedding these seven lessons (literacy, co-design, architecture, clinician involvement, scalability, governance, and guardrails) into their strategy, trusts can turn ambition into impact.
AI will not replace people in the NHS. It will amplify them, giving staff more time to focus on what matters most: delivering compassionate, connected care.
Key Stats at a Glance
Oxford University, one of the world’s leading higher education institutions, sought to explore how AI could transform its Professional Services. Administrative functions — from HR to finance and student support — faced increasing demands and rising costs, while academic and research excellence remained the institution’s primary focus.
The University recognised that AI had the potential to reduce administrative burden, unlock efficiency, and enhance staff and student experiences. However, a roadmap was needed to move from theory to practical adoption.
Hudson & Hayes was engaged to develop an AI roadmap, building literacy, identifying opportunities, and charting a path to scalable adoption.
Key issues included:
Hudson & Hayes applied its GenAscend methodology, tailoring the approach to a university environment.
Hudson & Hayes helped Oxford University move from curiosity to clarity on AI adoption in Professional Services. By embedding literacy, defining a prioritised opportunity pipeline, and creating a roadmap, the University is now positioned to leverage AI in a strategic, scalable, and responsible way.
This foundation ensures Oxford can continue to focus on academic and research excellence, supported by professional services that are efficient, future-ready, and digitally enabled.
Key Stats at a Glance
Legal & General, a FTSE 100 financial services leader, sought to scale its business while modernising its technology division. Previous change programmes had struggled to deliver agility at scale, leaving the organisation with complex operating structures and limited flexibility.
To succeed, Legal & General needed more than an external blueprint. It required internal capability: leaders who could confidently design and manage their own operating model transformation.
Hudson & Hayes was engaged to partner with the leadership team and embed capability in operating model design, ensuring Legal & General could continue to evolve independently and sustainably.
Opportunities included:
Hudson & Hayes enabled Legal & General to shift from dependency on consultants to true self-sufficiency in operating model design. By embedding capability, not just delivering solutions, the organisation can now continue evolving its structures with confidence.
The result is a more agile, scalable operating model, positioned to meet the demands of a dynamic market and sustain long-term business growth.
Key Stats at a Glance
A leading provider of employability services, with a mission to help people into work, faced a challenge: frontline staff were spending too much time on administrative tasks, leaving limited capacity to build meaningful relationships with clients.
As demand for services grew and competition intensified, the organisation identified AI as a key lever to improve efficiency and strengthen its competitive edge. Hudson & Hayes was engaged to turn this ambition into action.
The key issues were:
Without intervention, the organisation risked reduced job placement outcomes, lower staff satisfaction, and weaker competitiveness in contract bids.
Hudson & Hayes applied its proven approach, moving quickly from feasibility to delivery.
Key solutions delivered included:
By combining feasibility analysis, co-creation, and rapid delivery, Hudson & Hayes enabled the organisation to transform frontline operations with AI. The result: more time for client engagement, stronger job placement outcomes, and a sharper competitive edge in winning new business.
Key Stats at a Glance
A PE-backed enterprise data management business, part of Five Arrows and historically founder-led, was entering a phase of accelerated global growth. To deliver at scale, the organisation needed to modernise how its client management and operations teams worked.
Legacy processes, manual workflows, and siloed organisational structures threatened to constrain growth just as investment expectations were increasing. To unlock efficiency, resilience, and sustainable value creation, the business turned to Hudson & Hayes for transformation support.
Key issues identified included:
Failure to address these challenges risked missed revenue opportunities, operational bottlenecks, and reduced attractiveness to investors.
Hudson & Hayes delivered a cross-functional transformation programme integrating process redesign, organisational design, and AI adoption.
The programme delivered significant impact:
Hudson & Hayes enabled this PE-backed enterprise data management business to move from founder-led ways of working to a scalable, investor-ready model. By redesigning processes, embedding AI, and strengthening organisational agility, the company is now equipped to deliver efficiently at scale while sustaining profitability.
The foundation built ensures not only near-term value creation but also long-term resilience, positioning the business for continued growth in competitive global markets.
Key Stats at a Glance
NHS Diagnostics in the Midlands faced mounting pressures: increasing demand, workforce strain, and ambitious targets set by the NHS Long Term Plan. Reducing “Did Not Attends” (DNAs) and unnecessary appointments was central to improving both patient outcomes and operational efficiency.
While AI and automation were already being explored across individual Trusts, efforts were fragmented. Without a unified, strategic approach, the region risked duplicating work, missing ROI, and failing to deliver on national objectives.
Hudson & Hayes was brought in to design and deliver a cohesive strategy for AI adoption in diagnostics, aligned with both clinical needs and operational realities.
The key pain points included:
At stake was the ability to improve patient access, reduce anxiety and waiting times, and release clinical capacity to focus on those most in need.
Hudson & Hayes applied its GenAscend methodology, moving from education through discovery to solution design and prototyping.
The engagement delivered measurable benefits:
By taking a structured, collaborative approach, Hudson & Hayes helped NHS Diagnostics transform fragmented experimentation into a unified AI-enabled programme. The elimination of 90,000 unnecessary appointments demonstrates both the scale of efficiency achievable and the direct positive impact on patient outcomes.
With AI literacy embedded and a validated roadmap in place, NHS Diagnostics is now positioned to scale adoption further, improving access, efficiency, and patient experiences across the region.
When we first started working with NHS England – Midlands a year ago, the goal was simple: to help teams make sense of AI. Not in abstract terms, but in practical, operational ways that improve patient care.
Fast forward 12 months, and that mission has turned into something much bigger. Together with 11 Integrated Care Boards (ICBs) and nearly 100 clinical and non-clinical stakeholders, we’ve been building AI literacy, mapping real-world use cases, and co-developing solutions that respond to genuine challenges across the system.
Like any transformation, there’s been a lot of learning along the way, but that’s where the real progress happens. Our patient-facing AI assistant proof of concept is a great example of how innovation, when applied thoughtfully, can reduce admin pressure, cut DNAs, and enhance access to care.
Next month, we’re bringing these lessons to life in a live webinar:
Date: Wednesday, 5th November 2025
Time: 15:00–16:00 GMT
Save your spot here
You’ll hear directly from the people leading this work: Eddie Olla (Chief Digital Officer, NHS England – Midlands) and Phil Williams (Head of Digital Transformation), alongside Arron Clarke and Simon Mahony from Hudson & Hayes.
It’s not a glossy presentation or a “look what we built” moment. It’s a candid, experience-led conversation about what it really takes to make AI work in healthcare: the lessons, the missteps, and the breakthroughs.
If you’re leading digital or transformation work within the NHS, or simply curious about how AI is making a tangible difference in diagnostics, we’d love for you to join us.
Register here — and feel free to share with any colleagues who might be interested in the discussion.
Organisations worldwide are investing heavily in Microsoft Copilot, hoping to unlock a step-change in productivity. But simply handing out licences doesn’t guarantee results. To turn Copilot into a genuine productivity engine, you need the right structure, culture, and guardrails in place.
In this post, we’ll walk through how to build a high-impact Copilot adoption strategy—one that delivers measurable returns, mitigates risks, and sets the stage for the future of AI in the workplace.
Based on our work with organisations across multiple sectors, here are the building blocks that make the biggest difference:
Identify and train early adopters who are curious, influential, and enthusiastic. They should not only understand Copilot’s features but also know how to tailor them to their role. These champions become internal role models and trusted advisors for colleagues.
Generic training doesn’t work. Focus on embedding Copilot into specific processes. For example:
Copilot should feel like an accelerator for everyday tasks, not an extra layer of work.
A shared prompt library—built and refined by staff—lowers the barrier to effective use. Keep it dynamic, role-based, and continuously updated with best practices and new discoveries.
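A prompt library does not need sophisticated tooling to get started; even a simple, role-keyed structure that teams can version and extend will do. The roles, prompts, and fields below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    title: str
    role: str            # who the prompt is curated for
    template: str        # placeholders in {braces} are filled in by the user
    last_reviewed: str   # keeps the library "living" rather than static

PROMPT_LIBRARY = [
    PromptTemplate(
        title="Meeting actions",
        role="Project manager",
        template="From the transcript of {meeting}, list decisions, owners, and due dates as a table.",
        last_reviewed="2025-01",
    ),
    PromptTemplate(
        title="Customer email first draft",
        role="Account manager",
        template="Draft a reply to {email}, keeping our tone friendly and under 150 words.",
        last_reviewed="2025-01",
    ),
]

def prompts_for(role: str) -> list[PromptTemplate]:
    """Return the curated prompts for a role so people start from proven patterns."""
    return [p for p in PROMPT_LIBRARY if p.role.lower() == role.lower()]

for prompt in prompts_for("project manager"):
    print(f"{prompt.title}: {prompt.template}")
```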
Adoption is cultural as much as technical. Leaders must actively use Copilot, share outputs, and ask their teams, “Have you tried AI for this?” Even small gestures—such as showing how a meeting transcript can be turned into action items—help normalise usage.
Treat Copilot agents like product releases:
Define simple KPIs such as:
Regularly review performance, retire low-use prompts or agents, and scale the high-value ones.
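As a toy illustration of that review loop (the log format and thresholds are invented for the example), usage data can be aggregated to show which prompts or agents to retire and which to scale:

```python
from collections import Counter

# Hypothetical usage log: one entry per time a prompt or agent was used this month.
usage_log = [
    "meeting-actions", "meeting-actions", "meeting-actions", "meeting-actions",
    "customer-email-draft", "customer-email-draft",
    "legacy-report-agent",
]

usage = Counter(usage_log)
RETIRE_BELOW = 2   # illustrative threshold for a monthly review
SCALE_ABOVE = 4    # illustrative threshold for promoting a prompt or agent

for name, count in usage.most_common():
    decision = "scale" if count >= SCALE_ABOVE else ("keep" if count >= RETIRE_BELOW else "retire")
    print(f"{name:<22} used {count}x -> {decision}")
```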
Once the foundations are in place, expand Copilot’s reach by integrating with:
This aligns with Microsoft’s own guidance: value compounds when Copilot is tied to business-critical workflows rather than used in isolation (microsoft.com).
Organisations that act now will not only capture immediate productivity savings but also build the cultural and technical maturity needed to harness the next wave of workplace AI.
Microsoft Copilot isn’t just another software upgrade but a catalyst for reshaping how work gets done. The evidence is clear: employees can save significant time every week, and organisations can capture enormous value at scale.
But success doesn’t happen by accident. It comes from intentional adoption strategies: role-based training, living prompt libraries, leadership modelling, strong governance, and clear measurement. Skip these steps, and the risk is wasted investment.
Done right, Copilot doesn’t just save hours. It changes the way people work, collaborate, and innovate, positioning organisations for long-term success in the AI era.