The hype around AI in procurement is real. But so is the gap between ambition and delivery.
Having worked with organisations across private equity, transport, healthcare, and the public sector on AI transformation, we've gathered lessons that don't tend to appear in vendor brochures. Here are seven of the most important.
Unlike ERP implementations, AI doesn't arrive ready to operate. It requires training, iteration, and continuous feedback. The organisations that succeed are the ones that understand this upfront and build their delivery models accordingly. Think of it less like installing software and more like developing talent.
Generic AI training rarely sticks. The most effective education we've delivered connects AI directly to people's day-to-day roles. A procurement leader needs to see what their morning looks like differently (their emails, their briefings, their supplier reviews), not a theoretical overview of large language models.
If your organisation is at ground zero with AI adoption, the single highest-leverage activity is creating a shared prompt library. Map your team's recurring tasks to specific prompts. Share them. Standardise them. This alone can save individuals an hour or more per day and begins to normalise AI as part of how work gets done.
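As an illustration, a shared prompt library can start as nothing more than a mapping from recurring tasks to standard templates. The sketch below is hypothetical: the task names and prompt wording are invented for illustration, not taken from any real team's library.

```python
# A minimal shared prompt library: recurring tasks mapped to standard
# templates. Task names and prompt wording are invented for illustration.
PROMPT_LIBRARY = {
    "supplier_review": (
        "Summarise the key risks, performance trends, and open actions "
        "for supplier {supplier} based on the following notes:\n{notes}"
    ),
    "email_triage": (
        "Draft a concise, professional reply to this email, flagging any "
        "commitments that would need approval before sending:\n{email}"
    ),
}

def build_prompt(task: str, **fields: str) -> str:
    """Fill a standard template so the whole team uses the same prompt."""
    return PROMPT_LIBRARY[task].format(**fields)
```

Even a plain shared document works; the point is that the templates are named, shared, and standardised rather than reinvented by each individual.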
There is no one-size-fits-all AI strategy for procurement. A large, mature organisation with Ariba or Coupa already deployed needs a different approach from a lean team building a procurement function from scratch. Before you develop your roadmap, be honest about where you are and design accordingly.
Not every AI use case requires a bespoke build. A useful framework puts opportunities into four categories: those you can solve with a standard prompt today; those you can configure using tools like Copilot Studio; those that require a custom build; and those where a vendor solution is the right answer. Knowing which category each use case falls into dramatically improves prioritisation and speed to value.
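The four categories can be made operational as a simple triage rule. The decision logic below is a hypothetical sketch, not a prescribed method; the inputs and their ordering are assumptions made for illustration.

```python
from enum import Enum

class Route(Enum):
    STANDARD_PROMPT = "solve with a standard prompt today"
    CONFIGURE = "configure with a tool like Copilot Studio"
    VENDOR = "buy a vendor solution"
    CUSTOM_BUILD = "commission a custom build"

def triage(needs_systems_integration: bool,
           needs_custom_logic: bool,
           mature_vendor_exists: bool) -> Route:
    """Illustrative triage rules for the four categories in the text."""
    if not needs_systems_integration and not needs_custom_logic:
        return Route.STANDARD_PROMPT      # a well-written prompt is enough
    if needs_systems_integration and not needs_custom_logic:
        return Route.CONFIGURE            # low-code configuration will do
    if mature_vendor_exists:
        return Route.VENDOR               # don't rebuild a solved problem
    return Route.CUSTOM_BUILD             # genuinely bespoke requirement
```

Running every candidate use case through a rule like this, however rough, forces the prioritisation conversation the paragraph describes.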
One of the clearest patterns from real-world delivery: organisations that build AI capability alongside an external partner, rather than handing it over entirely, retain more knowledge, drive faster adoption, and are far better placed to scale. Delivery in waves, with shared ownership and regular knowledge transfer, is the model that works.
If your business case is built entirely around headcount reduction, it will underperform in delivery and in credibility. Procurement's value proposition is increased value and reduced risk. AI contributes to both. In a world where supply chain shocks arrive without warning, the ability to rapidly assess exposure is arguably more valuable than any efficiency saving. Make sure your CFO is hearing that story.
Underneath all of these lessons is a more fundamental change in how procurement leaders need to think. The question to ask isn't "how can we use AI to do what we already do, faster?" It's "what does a future procurement function look like, where AI handles the automatable, and our people focus on judgement, relationships, and strategic value?"
That question leads somewhere far more interesting.
Many large enterprises have a clear-eyed view of what AI could do for them. The strategy decks are full of transformation narratives. And yet, when the rubber meets the road, the overwhelming majority end up investing almost exclusively in making their existing operations incrementally cheaper.
The ambition is there. The execution tells a different story.
The distinction between Efficiency AI and Opportunity AI is one I first encountered through Nathaniel Whittemore on the AI Daily Brief podcast, and it has stuck with me ever since.
Efficiency is about doing the exact same things with fewer resources. Opportunity is about doing things that were previously impossible.
It is a clean and useful framing. But the more important question, and the one I keep running into in practice, is not whether leaders understand the difference. It's why, despite understanding it perfectly well, they almost always end up defaulting to Efficiency.
I recently ran an opportunity workshop for the business services function of a large organisation. The room was full of sharp, forward-looking people. And yet, within twenty minutes, the conversation had migrated entirely to using AI to squeeze margins and cut processing hours. The underlying assumption was tacit but unmistakable: we have to earn the right to pursue Opportunity AI by mastering Efficiency AI first.
It makes sense on paper. But in practice, treating efficiency as a stepping stone leaves legacy businesses dangerously exposed. If you spend your strategic energy shaving 10% off operational costs, you leave the door wide open for a competitor with zero technical debt to render your entire operating model obsolete. This has always been true. In the age of AI, it is existential.
It’s easy to point the finger at a lack of vision, but in my experience, that’s rarely the culprit. The leaders in these rooms are not blind to the future. The real problem is usually far more mundane: technical friction.
Efficiency AI is attractive to legacy businesses precisely because it is low-friction. You don’t need a pristine, unified data lake to deploy an AI co-pilot to your team, or to buy an enterprise licence for a tool that summarises your meetings. It sits neatly on top of existing systems. It gives the board a measurable, immediate win without disturbing twenty years’ worth of accumulated technical debt.
Opportunity AI is the exact opposite. When a business unit tries to build a fundamentally new, AI-driven operating model, they immediately hit a wall. The business stakeholders are thinking in 2026, but their infrastructure is stuck in 2012. They crash into siloed databases, rigid compliance structures, nine-month enterprise architecture reviews, and entirely legitimate CISO apprehension. The hard reality is that the truly transformative capabilities of Opportunity AI require clean, real-time data and modern architecture; most large enterprises have neither.
This is precisely the wedge that AI-native newcomers are exploiting. Start-ups are not inherently more innovative. They simply do not have legacy systems dragging them down. They build on modern stacks from day one, and the compound advantage of that clean foundation grows with every passing quarter.
If an enterprise tries to force an Opportunity AI initiative through standard IT governance and legacy infrastructure, the project will die before the first prototype is ever built. You cannot build tomorrow’s operating model on yesterday’s plumbing.
The answer is not to tell a CEO to ignore immediate, tangible cost savings. Nor is it to allow the pursuit of those savings to slowly cannibalise your long-term bets. The answer is structural separation: parallel tracks, protected by a deliberate firewall between them.
Think of this as a three-part lifecycle.
You do not evaluate Efficiency and Opportunity initiatives in isolation. You assess the entire value chain at once, identifying immediate friction points alongside open white space. The logic is simple: let the quick, low-friction efficiency wins relieve operational pressure and generate the funding runway for your bolder, longer-horizon bets. One track finances the other.
This is where most enterprises trip up. Once the ideas are on the whiteboard, they cannot be built under the same roof.
Efficiency AI stays embedded in the core business. Operations leaders own it, it runs on existing IT infrastructure, and it is held accountable to traditional metrics: margins improved, process times reduced, immediate ROI.
Opportunity AI must be physically and financially spun out. Give the team ring-fenced funding that will not get raided when earnings look soft, and a clean, isolated cloud environment detached from the legacy stack. The firewall is not bureaucratic caution; it is the only thing that keeps the Opportunity initiative alive long enough to prove itself.
Crucially, you must change the scorecard. If you measure an Opportunity AI project using traditional corporate ROI, you will kill it before it takes its first breath. Standard metrics, such as hours saved and margin improvement, are lagging indicators. They measure the optimisation of a process that already exists. With Opportunity AI, the process has not been invented yet. You are not optimising; you are searching for a fundamentally different business model. That requires different instruments:
Time-to-First-Prototype: How fast can the team get a functioning, unpolished, unscalable version of the model in front of a real user? This metric forces the team to strip away corporate perfectionism, identify their riskiest assumption, and test it immediately.
Iteration Velocity: Opportunity AI is inherently experimental. The team will get it wrong on the first try. You are not measuring their initial accuracy; you are measuring the speed of their learning loop. How quickly can they deploy, gather data on why it failed, adjust the model, and push the next version?
New Unit Economics: If the legacy business spends £50 and three hours to manually process a complex request, the isolated team must prove they can fundamentally break that equation: not improve it by 15%, but shatter it entirely.
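The unit-economics test can be made explicit with a simple check. The 10x threshold below is an illustrative assumption; the argument only requires that the legacy equation be broken rather than improved at the margin.

```python
def crossover_achieved(legacy_unit_cost: float,
                       new_unit_cost: float,
                       factor: float = 10.0) -> bool:
    """True when the new process beats the legacy unit cost by at least
    `factor` times. The 10x threshold is an illustrative assumption,
    not a rule from the article."""
    return legacy_unit_cost / new_unit_cost >= factor
```

On the £50 example, a 15% improvement (£42.50 per request) fails this test; £4 per request passes comfortably.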
Proving those new unit economics is the bridge back to the core business. But here is the catch: if the Opportunity initiative stays in isolation too long, it becomes an organ transplant that the host body eventually rejects. We have all seen isolated innovation labs build something brilliant that dies the moment they try to hand it back. Usually, the legacy systems cannot support it. The firewall that protected the initiative must eventually come down.
But you cannot simply throw the prototype over the fence to IT and hope for the best. You need hard, unambiguous triggers to force integration at the right moment, neither too early (when the initiative is too fragile) nor too late (when it has become a separate business that no longer fits):
The Unit Economic Crossover: The project must prove in its isolated state that its fundamental unit economics are vastly superior to the legacy process. The financial justification should be undeniable before IT is asked to do the heavy lifting of integration.
The Data Ceiling: When the only thing preventing further growth is access to live, core operational data, and the team has exhausted what they can do with sandboxed or historical data, it is time to force the integration conversation.
Risk Parity: Before convergence, the incubation team must demonstrate that they have built sufficient security, data privacy, and hallucination guardrails that the risk profile of the new system matches or beats the legacy one.
When those conditions are met, the narrative shifts entirely. You are no longer asking the core business to absorb a risky, theoretical experiment. You are asking them to scale a proven, superior operating model with a documented financial case. That is a very different conversation.
Mastering Efficiency AI does not earn you the right to pursue Opportunity AI. It just buys you a little time, and perhaps not as much as you think.
The organisations that are going to lose the next decade are not just the ones that failed to invest in AI. They are also the ones that invested heavily and spent all of it making their existing operating model marginally cheaper to run. They will have dashboards full of efficiency metrics, impressive productivity reports, and a business model that an AI-native competitor has made irrelevant before those reports reach the board.
The organisations that will win are not necessarily the boldest or the best-resourced. They are the ones that build the structural discipline to pursue both tracks at the same time (the firewalls, the ring-fenced environments, the divergent scorecards) and resist the constant organisational gravity that pulls every Opportunity initiative back toward the comfort of incremental optimisation.
The stepping stone logic feels rational. It is, in fact, the path of least resistance dressed up as strategy. And in the age of AI, the path of least resistance leads somewhere very specific: to a business that is exceptionally well-optimised for a world that no longer exists.
A UK transport organisation set out to accelerate its AI journey, building on early experimentation with Microsoft 365 Copilot and initial agent development in Copilot Studio. There was already strong internal momentum, with multiple teams exploring how AI could improve productivity, streamline operations, and enhance decision-making.
However, this momentum brought a critical inflection point. Rather than rushing into rapid deployment, the organisation recognised the need to establish the right foundations—ensuring that any AI capability developed could scale securely, consistently, and under clear governance.
The engagement focused on moving from isolated experimentation to a structured, repeatable, and IT-led AI delivery model.
The organisation faced a familiar but complex challenge: balancing speed of innovation with the discipline required for enterprise-scale delivery.
There was strong ambition to build AI agents quickly, but without the right guardrails, this risked fragmented solutions, inconsistent standards, and potential governance issues. At the same time, growing interest across teams created pressure to define ownership, responsibilities, and a clear path forward.
Key challenges included:
Without addressing these challenges, the organisation risked losing control of its AI estate—leading to inefficiencies, duplication of effort, and increased security or compliance exposure.
The approach centred on a “done with” model—working side-by-side with internal teams to build capability while simultaneously delivering tangible outputs. This ensured that knowledge was embedded, not outsourced.
Hands-on education sessions were delivered to upskill the IT team across key areas, including:
These sessions were practical and applied, enabling teams to immediately translate learning into action.
A secure and scalable technical foundation was established to support ongoing AI development.
Microsoft Foundry played a key role as a unified platform for managing AI models, agents, and data integration—enabling a consistent and scalable development approach.
A core focus of the engagement was designing governance that could scale with demand.
This provided the structure needed to maintain control without slowing down innovation.
Rather than delivering a one-off solution, the engagement focused on creating a repeatable blueprint for AI delivery.
This ensured the organisation could scale AI initiatives independently, without ongoing reliance on external support.
The work was delivered in close partnership with the IT team:
This embedded both confidence and ownership within the internal team.
The engagement delivered both immediate value and long-term capability, positioning the organisation for scalable AI adoption.
Quantitative & Tangible Results
Qualitative Impact
Before vs After
By focusing on foundations rather than speed alone, the organisation has positioned itself to scale AI agents in a way that is both controlled and sustainable.
The combination of capability building, governance design, and technical enablement has created a platform for long-term success—where AI can be developed confidently, securely, and at pace.
With a clear delivery model, established guardrails, and an empowered internal team, the organisation is now equipped to move beyond experimentation and into enterprise-scale AI adoption—turning early momentum into lasting transformation.
Artificial intelligence is rapidly becoming embedded across organisations. From knowledge assistants to policy bots and triage agents, many companies have already deployed their first generation of AI agents. These tools are often valuable and can deliver measurable improvements in efficiency, productivity, and decision-making.
However, a growing number of organisations are discovering an important truth: AI agents alone rarely create true transformation. Real impact comes from rethinking the entire workflow—combining AI capabilities, human expertise, and established disciplines such as lean thinking and service design.
This shift is leading to the rise of Agentic Workflow Design: a structured approach to redesigning value streams where AI agents and humans collaborate intentionally to deliver better outcomes.
If you are considering running a workshop to design an agentic workflow, the following framework offers a practical way to structure the discussion.
Over the past two years, enterprise AI adoption has accelerated dramatically.
Yet many early deployments remain isolated point solutions.
Typical examples include:
While these systems often deliver incremental efficiency improvements, they rarely transform the end-to-end value stream.
A support agent might save five minutes looking up policy information, but the broader workflow—handoffs, approvals, manual data entry, duplicated processes—remains unchanged.
This is why leading organisations are moving beyond isolated AI tools toward agentic workflows, where multiple specialised AI agents collaborate with humans across the entire process.
Agentic workflow design is the practice of reimagining business processes around a hybrid system of AI agents and human capabilities.
Instead of asking:
“Where can we add an AI agent?”
The conversation becomes:
“How should this entire workflow operate if humans and intelligent agents worked together optimally?”
The approach borrows heavily from established methodologies such as:
When applied correctly, it enables organisations to redesign workflows so that humans focus on high-value activities—judgment, empathy, relationships, and strategic thinking—while AI agents handle information-intensive and repetitive tasks.
Agentic workflow design starts with assembling a cross-functional group of participants.
The most productive sessions typically include:
This mix ensures the conversation balances operational reality, technical feasibility, and design thinking.
Before diving into redesign, it is helpful to run a short education session to level-set the group on what modern AI agents can actually do.
For example, today’s agents can:
Understanding these capabilities early helps keep discussions grounded in practical opportunity rather than speculation.
One of the biggest risks in AI transformation is technology-first thinking.
Without clear alignment on the problem being solved, teams can quickly drift into conversations about tools and platforms instead of outcomes.
To avoid this, agentic workflow design begins by clearly defining:
This anchor ensures that any redesign remains focused on customer value rather than technical novelty.
Once the problem is clear, the next step is to map the current workflow or value stream.
This is where classic lean thinking becomes extremely valuable.
Teams should identify:
Research from the Lean Enterprise Institute suggests that in many administrative processes, up to 80–90% of total time is non-value-added activity, often caused by waiting, approvals, and fragmented systems.
Mapping the current state exposes these inefficiencies and creates the foundation for redesign.
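A value-stream map can be reduced to a simple sum over steps. The process below is hypothetical, with invented step names and durations, but it shows how quickly the non-value-added share emerges once the numbers are written down.

```python
# Hypothetical value-stream map of an administrative approval process.
# Step names and durations are invented; the point is that elapsed time
# is dominated by waiting and rework, not by value-adding work.
steps = [
    ("Submit request",            0.2, True),   # (name, hours, value-adding?)
    ("Wait in approval queue",   18.0, False),
    ("Manager review",            0.3, True),
    ("Re-key into second system", 0.5, False),
    ("Wait for final sign-off",  22.0, False),
    ("Final sign-off",            0.2, True),
]

total_hours = sum(hours for _, hours, _ in steps)
value_add_hours = sum(hours for _, hours, adds_value in steps if adds_value)
non_value_share = 1 - value_add_hours / total_hours  # roughly 0.98 here
```

In this invented example, under an hour of actual work sits inside more than forty hours of elapsed time, which is consistent with the 80–90% non-value-added figure cited above.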
Once the current state is understood, the group can begin reimagining the workflow.
A simple but powerful method is to create two swimlanes:
This visual structure forces a deliberate conversation about where each type of capability is most valuable.
Humans typically excel at:
AI agents tend to outperform humans in tasks such as:
Designing workflows around these complementary strengths often produces dramatic improvements in speed, consistency, and customer experience.
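The two-swimlane exercise translates naturally into a data structure. The workflow below is a hypothetical support process; the lane assignments simply encode the complementary strengths described above.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    lane: str  # "human" or "agent"

# A hypothetical redesigned support workflow, expressed as two swimlanes.
workflow = [
    Step("Classify and summarise the incoming request", "agent"),
    Step("Retrieve relevant policy and account history", "agent"),
    Step("Decide how to handle an exception or complaint", "human"),
    Step("Draft the response for review", "agent"),
    Step("Hold the empathetic conversation with the customer", "human"),
]

def lane(steps: list[Step], which: str) -> list[str]:
    """All step names assigned to one swimlane."""
    return [s.name for s in steps if s.lane == which]
```

Reading off each lane makes the division of labour explicit and exposes steps that have been assigned to the wrong side.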
A common mistake in early AI design is attempting to build a single “mega-agent.”
In practice, the most effective agentic systems consist of multiple specialised agents, each with clearly defined responsibilities.
Each agent should have:
For example, a customer support workflow might include:
This modular architecture increases reliability, transparency, and scalability.
It also mirrors how modern AI frameworks such as LangChain, Microsoft Copilot, and AutoGen-style agent systems structure collaborative AI workflows.
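A modular pipeline of specialised agents can be sketched in a few lines. The agents and routing below are illustrative stand-ins (the keyword rule stands in for a model call), not a specific framework's API.

```python
from typing import Callable

Agent = Callable[[dict], dict]

def triage_agent(ticket: dict) -> dict:
    """Classify the request; a keyword rule stands in for a model call."""
    ticket["category"] = "billing" if "invoice" in ticket["text"].lower() else "general"
    return ticket

def retrieval_agent(ticket: dict) -> dict:
    """Pull the knowledge the drafting step will need."""
    ticket["context"] = f"policy documents for {ticket['category']} queries"
    return ticket

def drafting_agent(ticket: dict) -> dict:
    """Produce a draft for human review; never sends anything itself."""
    ticket["draft"] = f"Draft reply grounded in {ticket['context']}"
    return ticket

PIPELINE: list[Agent] = [triage_agent, retrieval_agent, drafting_agent]

def run_pipeline(ticket: dict) -> dict:
    """Each specialised agent does one job, then hands the ticket on."""
    for agent in PIPELINE:
        ticket = agent(ticket)
    return ticket
```

Because each agent has a single responsibility and a visible hand-off, failures are easy to localise, which is the reliability and transparency benefit the paragraph describes.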
At this stage, discussions often drift toward platform choices.
Teams begin asking questions such as:
While these are important decisions, they are not the first priority.
The most important task is to design the right workflow.
Technology decisions should enable the design, not constrain it.
This design-first mindset is consistent with research from MIT Sloan, which shows that organisations that focus on business process redesign before technology implementation achieve significantly higher transformation success rates.
Once the future-state workflow is defined, the next step is to quantify the potential benefits.
This typically includes estimating:
According to PwC, AI-driven automation could contribute up to $15.7 trillion to the global economy by 2030, largely through productivity improvements and process optimisation.
However, meaningful transformation rarely happens overnight.
Teams should therefore build a roadmap of iterative improvements, recognising that agentic workflows often evolve through multiple releases and learning cycles.
After the workshop, the process moves into detailed design.
Each agent should be documented clearly, including:
This documentation ensures agents remain governable, auditable, and scalable as they move into production environments.
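Agent documentation can itself be structured data, which makes it queryable and auditable. The fields in this sketch are one plausible set, chosen for illustration rather than drawn from any standard.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """One plausible documentation record for a production agent.
    The field set is illustrative, not a standard."""
    name: str
    purpose: str
    inputs: list[str]
    outputs: list[str]
    escalation_path: str                      # where the agent hands off to a human
    data_sources: list[str] = field(default_factory=list)
    owner: str = "unassigned"                 # accountable team or individual

# A hypothetical spec for a retrieval agent in a support workflow.
spec = AgentSpec(
    name="policy-retrieval",
    purpose="Fetch and summarise the policy clauses relevant to a request",
    inputs=["request category", "customer segment"],
    outputs=["ranked policy excerpts"],
    escalation_path="human policy specialist",
    data_sources=["policy knowledge base"],
)
```

A governance review then becomes a matter of checking that every deployed agent has a complete spec with a named owner.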
The most important lesson emerging from early AI adoption is simple:
AI agents alone rarely transform organisations.
Transformation happens when businesses redesign how work flows across humans and machines.
Agentic workflow design offers a structured way to do exactly that—combining AI capability, human judgment, and lean thinking to produce workflows that are faster, smarter, and more customer-centric.
As organisations move deeper into the era of intelligent systems, those that succeed will not simply deploy AI tools.
They will rethink the way work itself is designed.
For years, procurement departments have accepted manual document review and redaction as an unavoidable cost of doing business. However, as organizations pursue broader digital transformation objectives, with 81% of business leaders prioritizing these investments, the limitations of traditional, manual workflows are becoming clear. In 2026, relying on humans to manually locate and obscure sensitive data in vast volumes of procurement documents is not just inefficient; it is a high-cost strategy that creates significant legal and operational risks. AI-powered redaction is no longer an optional "innovation" pilot; it is a critical requirement for scalable, secure procurement operations.
Hudson&Hayes recently worked with a large transport and infrastructure organisation that was dealing with a new regulatory requirement.
Any contract worth more than £5 million now had to be redacted before being published.
On paper, that sounds straightforward. In practice, it wasn’t.
The organisation had already tested several tools, but none of them really worked at scale.
Some were AI-based, but still left metadata behind.
Others only handled basic PII and couldn’t cope with the organisation’s very specific redaction rules.
Manual tools gave more control, but were slow and impractical for large documents.
The hidden costs of manual redaction are staggering, starting with the immense strain on personnel time. In complex procurement cycles, teams often find they have the manual capacity to redact only about 20% of the required documents, creating severe bottlenecks. This labour-intensive process is not scalable and detracts from high-value strategic work, contributing to what many in the industry call the "Excel exodus" as departments seek to move away from fragmented, manual tools.
Perhaps most critically, manual redaction is prone to error. A single overlooked page, paragraph, or piece of metadata in a contract or tender document can result in a catastrophic data breach. With the global average cost of a data breach reaching $4.44 million in 2025, the financial risk of even one manual mistake is immense.
Automated redaction technology, like Redactiv AI, directly addresses these costs and risks, enabling organizations to move from manual experiments to an "AI-native" procurement model. By implementing true, irreversible data removal, Redactiv AI not only reduces the potential for costly breaches but also allows procurement teams to reclaim 20% of their operational capacity, unlocking valuable resources for strategic, non-administrative work.
Many organizations operating within the UK rely on digital tools to process and manage vast amounts of data, with 81% of business leaders citing digital transformation as an essential or necessary objective for success. A significant component of this transformation involves managing compliance with data protection regulations, particularly when responding to Data Subject Access Requests (DSARs).
As organizations generally have a one-month deadline to respond to these requests, which can involve thousands of items of personal data, the pressure to accurately redact information is immense. For years, the standard approach has been to apply manual black boxes or visual overlays, but this method is fundamentally flawed because simple visual blackouts fail to remove underlying text 70% of the time.
The core issue with simple visual overlays is that they are precisely that: overlays. While they visually obscure text, they do not remove the underlying digital data, leaving it fully searchable and recoverable. This sensitive information remains embedded in the file structure and can be extracted by anyone who copies and pastes the document into a text editor, leading to a significant GDPR compliance failure.
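The overlay-versus-removal distinction can be demonstrated with a toy document model. This is a conceptual sketch only: `Page`, `overlay_redact`, and `true_redact` are invented names, not a real PDF library or Redactiv's implementation.

```python
# Conceptual sketch: a visual overlay leaves the text layer intact, so
# extraction still recovers it; true redaction removes the characters.
from dataclasses import dataclass, field

@dataclass
class Page:
    text: str
    overlays: list[tuple[int, int]] = field(default_factory=list)

def overlay_redact(page: Page, start: int, end: int) -> None:
    """Draw a black box over a span; the text layer is untouched."""
    page.overlays.append((start, end))

def true_redact(page: Page, start: int, end: int) -> None:
    """Remove the characters themselves from the text layer."""
    page.text = page.text[:start] + "\u2588" * (end - start) + page.text[end:]

def extract_text(page: Page) -> str:
    """What copy-and-paste sees: the text layer, overlays ignored."""
    return page.text
```

After an overlay, extracting the text still yields the "hidden" span; after true redaction, it does not. Real PDFs add further hiding places (metadata, embedded layers), but the failure mode is the same.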
This is where Redactiv AI changes the game. Unlike standard PDF editors, Redactiv AI performs True Redaction, which involves the irreversible removal of data. Our software ensures that once information is redacted, it is completely scrubbed from the document at a fundamental level rather than just being visually masked.
While manual redaction is labor-intensive and slow, often creating bottlenecks for disclosure teams, Redactiv AI provides a scalable, automated alternative.
• Deep Layer Scrubbing: Redactiv AI wipes all hidden metadata and underlying text layers, including text hidden behind images.
• Pattern Recognition at Scale: Leveraging natural language processing, the tool automatically identifies Personally Identifiable Information (PII) such as names, addresses, and IDs with far greater speed and accuracy than human review.
• Massive Volume Handling: Redactiv AI can process over 2,000 pages per document all at once, allowing your team to meet strict GDPR deadlines without manual fatigue.
In 2026, relying on visual black boxes is no longer an acceptable practice for UK organizations. The risk of data breaches is significant, with the global average cost of a breach reaching $4.44 million in 2025, while U.S. costs surged even higher due to increased regulatory fines.
By using Redactiv AI, you are not just covering up data; you are removing the risk entirely. Our solution ensures your procurement and legal teams stay compliant while reclaiming significant operational capacity by reducing manual workload.
Primary Sources Used:
• [1.1] Valtech/Backlinko: Digital Transformation Statistics for 2026 (81% of leaders cite it as essential).
• [2.1] IBM: 2025 Cost of a Data Breach Report ($4.44M average cost).
• [3.1] Redactable/Industry Report: The Complete Guide to PII Redaction in 2026 (Visual blackouts fail 70% of the time).
• [4.3] ICO/Kitson Boyce: UK GDPR Guidance on DSAR Time Limits (One-month deadline).
In healthcare transformation, the difference between a successful pilot and a failed deployment often comes down to one thing: clinical grounding. At Hudson & Hayes, our recent work developing a patient-facing AI assistant for the NHS has centered on a specific operational challenge: improving appointment attendance and ensuring patients arrive fully prepared for their procedures.
While the technology is impressive, the "why" is purely operational. Every missed or ineffective appointment is a lost opportunity for care. However, solving this isn’t just about sending a smarter notification; it is about managing the complex intersection of data quality and clinical risk.
When building an AI assistant for patients, the margin for error is non-existent. Our discussions with NHS clinical teams have reinforced that a tool is only as reliable as its training set.
To manage risk effectively, we focused on three core pillars:
Reducing "Did Not Attend" (DNA) rates is only half the battle. A significant operational hurdle in the NHS is the "unprepared patient": someone who attends their appointment but hasn't completed the necessary pre-procedure requirements.
For many complex procedures, specific preparation is mandatory for the appointment to proceed. If a patient arrives without having followed these protocols, the clinical slot is effectively lost. Our AI assistant is designed to bridge this information gap, providing clear, timely guidance to ensure patients are:
One of the pitfalls of modern AI is "feature creep": the tendency to make a tool do too much. For this project, the directive was clear: keep the content helpful but minimal.
By focusing on providing timely, accurate information, we reduce the friction patients face when navigating hospital services. We aren't looking to overshare or complicate the patient journey; we are looking to streamline it. This minimalist approach is, in itself, a form of risk management, reducing the surface area for misinformation and keeping the patient focused on the necessary action.
As technology experts, it is easy to get caught up in "shiny toy" syndrome. We spent significant energy developing a sophisticated AI assistant, yet the feature that generated the most genuine excitement from operational stakeholders was arguably the most basic: a digital pre-assessment form.
This was a humbling and vital lesson. While the AI provides the long-term "intelligence," the pre-assessment solved an immediate, high-friction pain point for the staff and patients. It reminds us that:
Developing for the public sector requires a unique level of vetting and responsibility. Because our work often touches sensitive areas like the NHS and the SFO, our team maintains a rigorous standard for who builds these tools and how they are deployed.
The goal of this AI assistant is to provide a flexible, user-friendly solution that respects the constraints of the NHS while delivering measurable improvements in both attendance and procedural readiness. It’s about technical knowledge meeting clinical reality to create a safer, more efficient patient experience.
The success of this project isn't just about the AI; it’s about the balance between innovation and utility. By prioritising clinical safety and being willing to "meet the customer where they're at," we create tools that healthcare professionals can actually trust.
As we move forward, the goal is to take these lessons and apply them to other areas of the public sector. Whether it’s streamlining procurement or enhancing patient journeys, the principle remains the same: technology must serve the process, not the other way around.
At Hudson & Hayes, we believe that the "AI gap" isn't just about technical skill; it’s about the bridge between a digital tool and a human outcome. We are proud to be building that bridge alongside the NHS.
Accessibility is about designing digital services and ways of working so people can use them regardless of disability, impairment, or long-term condition. Done well, it removes unnecessary barriers and enables the same outcomes through different means. In practice, it benefits far more people than those with formal accessibility needs.
As AI becomes embedded into everyday services and work, accessibility is no longer a secondary concern. It is a signal of whether digital transformation is actually working. If AI-enabled tools and services do not work for everyone, then transformation has not succeeded, no matter how advanced the technology appears.
Legal and regulatory frameworks are reinforcing this shift.
In the UK, the Equality Act 2010 requires organisations to make reasonable adjustments for disabled people. This applies across both services and employment, including digital tools and AI-enabled workflows.
Public sector organisations face additional obligations under the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018, which require WCAG-aligned accessibility and ongoing transparency through a published accessibility statement. This places responsibility not just on initial delivery, but on how accessibility is maintained over time.
At a European level, the European Accessibility Act brings accessibility directly into product and service design. For organisations operating across markets, accessibility increasingly affects whether services can be offered at all.
These frameworks raise the baseline. But compliance alone does not guarantee usable or inclusive services.
AI can address long-standing accessibility challenges when applied deliberately.
It can help simplify content, produce summaries and plain-language versions, and support captions and alternative text. It can reduce cognitive load by guiding people through complex processes in clearer, more intuitive ways. It can also support personalisation, adapting how information is presented rather than forcing everyone through the same experience.
These capabilities are already available. The challenge is ensuring they are applied with appropriate oversight. AI accelerates outcomes, both good and bad.
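As a concrete illustration of what that oversight can look like in practice, here is a minimal, deterministic plain-language screen that could flag AI-generated content for human review before publication. This is a sketch only: the function name and thresholds are hypothetical, chosen loosely in the spirit of plain-English guidance rather than taken from any standard.

```python
import re

def plain_language_check(text, max_avg_sentence=20, max_long_word_ratio=0.25):
    """Crude plain-language screen: flags text with long sentences or a
    high share of long words. Thresholds are illustrative only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence = len(words) / max(len(sentences), 1)
    long_ratio = sum(len(w) > 7 for w in words) / max(len(words), 1)
    ok = avg_sentence <= max_avg_sentence and long_ratio <= max_long_word_ratio
    return {"avg_sentence_length": avg_sentence,
            "long_word_ratio": long_ratio,
            "plain": ok}
```

A check like this does not replace human judgement; it simply routes the riskiest output to a person, which is the oversight pattern described above.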
AI also introduces accessibility risks that are easy to miss.
Automated guidance can be misleading or inconsistent. Decision-making systems can disadvantage disabled people if differences in behaviour or communication are not considered. AI interfaces such as chat tools and copilots are often deployed without the same accessibility scrutiny applied to traditional systems.
In most cases, the issue is not the AI itself. It is the absence of clear ownership, design discipline, and assurance.
As AI reshapes work, accessibility increasingly determines who benefits.
Productivity gains now depend on digital platforms and AI tools. If those tools are inaccessible, employees are effectively blocked from parts of their role. This creates hidden inequality inside organisations, even where intentions are good.
Organisations that manage this well treat accessibility as part of workforce transformation. They consider how tools interact with assistive technologies, how roles evolve, and how people are supported as ways of working change.
One pattern is consistent. Organisations that embed accessibility into design and delivery move faster over time. Their services are clearer, more robust, and easier to adapt. Risk is reduced through better decisions rather than late remediation.
Accessibility, done properly, is not a constraint on AI or innovation. It is often what enables transformation to scale safely and sustainably.
AI is raising expectations across regulation, service quality, and the workplace. Accessibility sits at the centre of those expectations.
If you are leading digital or AI initiatives and are unsure how accessibility fits into your roadmap, whether your current approach genuinely works for everyone, or how regulation and workforce impact connect, it may be worth taking a step back.
At Hudson & Hayes, we see the biggest gains when accessibility is addressed early as part of how AI-enabled services and operating models are designed. A short, focused conversation at the right moment can prevent much larger challenges later.
Hudson & Hayes worked with a large transport and infrastructure organisation that was dealing with a new regulatory requirement.
Any contract worth more than £5 million now had to be redacted before being published.
On paper, that sounds straightforward. In practice, it wasn’t.
The organisation had already tested several tools, but none of them really worked at scale.
Some were AI-based, but still left metadata behind.
Others only handled basic PII and couldn’t cope with the organisation’s very specific redaction rules.
Manual tools gave more control, but were slow and impractical for large documents.
Most contracts were 500–1,000 pages long, and redacting a single document could take up to four hours. That simply wasn’t sustainable.
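To make the gap concrete: the difference between "basic PII" tools and what the organisation needed can be sketched as rule-driven redaction that combines generic patterns with organisation-specific terms. Everything here is an illustrative assumption, the patterns, function name and replacement token are hypothetical, and the case study does not describe the platform's actual implementation.

```python
import re

# Generic PII patterns (illustrative, not exhaustive): emails and rough UK phone numbers.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),         # email addresses
    re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),  # UK-style phone numbers
]

def redact(text, custom_terms):
    """Apply generic PII patterns plus organisation-specific terms."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    # Organisation-specific rules go beyond generic PII: commercial rates,
    # named subcontractors, and so on, supplied here as exact terms.
    for term in custom_terms:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text
```

In a production pipeline the same rules would also need to reach document metadata (author fields, tracked changes, hidden text layers), which is precisely where several of the off-the-shelf tools fell short.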
They needed something that was fast enough for 1,000-page contracts, thorough enough to deal with metadata, and flexible enough to apply the organisation's own redaction rules.
We started by slowing things down before speeding them up.
Rather than jumping straight into a build, we ran a short design sprint with the client. The goal was to properly understand the problem before touching any code.
Together, we:
Within one to two weeks, we had a working prototype that stakeholders could use and react to. That early feedback shaped everything that followed.
Once the direction was clear, we designed the target architecture and built the solution in short, practical sprints, staying close to the client throughout.
The platform was developed using:
Regular demos and feedback meant the tool improved quickly and stayed grounded in real-world use, not assumptions.
The solution was deployed inside the client’s environment and immediately made a difference.
Redaction time dropped from hours to minutes, typically around five minutes per contract.
At the same time, the organisation:
What started as a compliance problem ended up becoming a practical example of how AI could genuinely improve how work gets done.
This wasn’t about “adding AI” for the sake of it.
It worked because the solution was shaped around the real problem, tested with stakeholders from the first prototype, and deployed inside the client's own environment.
And that’s what turned a frustrating, manual process into something simple, fast, and repeatable.
In the NHS, improvement efforts often start with good intentions but limited scope.
A digital triage tool is introduced. A backlog is addressed in one specialty. A new system is rolled out in isolation.
Yet patients still experience delays, staff remain overstretched, and outcomes vary widely.
The reason is rarely a lack of technology. More often, it is because changes are made to individual steps rather than the full patient pathway.
Meaningful improvement comes from looking at the pathway end to end, from referral through to outcome, and identifying where friction builds, where decisions stall, and where AI can remove unnecessary work safely and responsibly.
An NHS pathway spans every stage of care, often across multiple teams, systems and organisations.
For example:
These pathways cut across clinical, operational and administrative boundaries. Data is fragmented. Ownership is shared. Decisions are made at multiple points, often under pressure.
Optimising a single step rarely improves overall performance if the rest of the pathway remains constrained.
From our work with the NHS, AI creates the most value when applied to visibility, decision support and automation across the pathway rather than isolated use cases.
Many delays occur simply because teams cannot see the full picture.
AI can bring together data from referrals, EPRs, diagnostics and scheduling systems to provide:
This kind of pathway-level visibility supports proactive management rather than reactive firefighting.
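As a simplified sketch of what pathway-level visibility can mean in practice, consider flagging patients whose wait has passed a target once records from several systems have been merged. The record fields, the 126-day (roughly 18-week) threshold and the function are hypothetical illustrations, not a description of any specific NHS system.

```python
from datetime import date

# Hypothetical records merged from referral, scheduling and EPR systems.
patients = [
    {"id": "P1", "referred": date(2024, 1, 3),  "treated": date(2024, 2, 1)},
    {"id": "P2", "referred": date(2024, 1, 10), "treated": None},  # still waiting
    {"id": "P3", "referred": date(2024, 3, 1),  "treated": None},
]

def waits_over_threshold(records, today, threshold_days=126):
    """Return ids of patients still waiting whose wait exceeds the threshold."""
    flagged = []
    for r in records:
        if r["treated"] is None:
            wait = (today - r["referred"]).days
            if wait > threshold_days:
                flagged.append(r["id"])
    return flagged
```

The value is not in the arithmetic, which is trivial, but in the merge that precedes it: no single source system can answer this question on its own.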
NHS pathways often slow down at decision points, not because of poor judgement, but because of volume, complexity and limited information.
AI can support decision-making by:
Used appropriately, this helps reduce variation and ensures clinical time is focused where it is most needed.
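One way to keep that support transparent is a simple rule-based priority score rather than an opaque model. The sketch below uses entirely illustrative weights; the point is that the score orders work for human review, it does not make clinical decisions.

```python
def priority_score(wait_days, urgent, incomplete_info):
    """Transparent, rule-based score; weights are illustrative only.
    Higher means 'review sooner'."""
    score = wait_days
    if urgent:
        score += 100   # clinically flagged cases jump the queue
    if incomplete_info:
        score -= 10    # deprioritise until missing information is chased
    return score

def review_order(cases):
    """Order a review list so the highest-need cases appear first."""
    return sorted(
        cases,
        key=lambda c: priority_score(c["wait_days"], c["urgent"], c["incomplete_info"]),
        reverse=True,
    )
```

Because every weight is visible, clinicians can challenge and tune the ordering, which is harder with a learned ranking.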
A significant proportion of pathway delay is administrative rather than clinical.
AI-enabled automation can help with:
This does not remove the need for oversight, but it reduces repetitive work and frees staff capacity across the pathway.
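A small example of the kind of low-risk administrative automation involved: drafting (not sending) a reminder from a free-text booking note, and falling back to manual handling when the details cannot be extracted. The note format, pattern and message template are hypothetical.

```python
import re

def draft_reminder(note):
    """Draft a reminder from a free-text booking note.
    Returns None when the details cannot be extracted, so the case
    falls back to manual handling instead of a guessed message."""
    m = re.search(r"(\d{1,2} \w+ \d{4}) at (\d{1,2}:\d{2})", note)
    if not m:
        return None
    day, time = m.groups()
    # A human still reviews the draft before anything is sent.
    return f"Reminder: your appointment is on {day} at {time}."
```

The explicit None branch is the oversight mentioned above in miniature: automation handles the routine cases and surfaces the ambiguous ones rather than guessing.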
Based on what we see working in practice, a structured approach matters more than the choice of technology.
Start with reality, not policy.
Map:
This often reveals issues that are invisible when viewed through organisational silos.
Prioritise areas where:
These points usually offer more value than starting with advanced analytics.
AI is only as reliable as the data supporting it.
This means:
Without this, AI risks accelerating existing problems rather than solving them.
Pathways do not sit within a single system.
True optimisation requires orchestration across:
This is where many initiatives fall short. Automating one system in isolation rarely improves the pathway overall.
Pathway optimisation is not a one-off programme.
Track:
Use this data to refine both the pathway design and the AI supporting it.
When AI is applied across the full pathway, organisations typically see:
Just as importantly, staff spend less time chasing progress and more time delivering care.
AI will not fix NHS pathways on its own.
The biggest gains come from understanding how care flows end to end, then using AI deliberately to remove friction, support decisions and coordinate activity across the system.
For NHS organisations under pressure to improve access, productivity and outcomes simultaneously, pathway-led optimisation offers a more sustainable route than isolated digital initiatives.
If you are reviewing a pathway and want a structured way to assess where AI could add value safely and responsibly, this is exactly the type of work we support at Hudson & Hayes.