Our approach to AI readiness assessments for public sector clients
- The Crown Consulting Group

- Oct 9, 2025
- 6 min read
Project Overview
Our consultancy was engaged by a major public sector organisation seeking to understand how Artificial Intelligence (AI) could support service delivery in a safe, ethical, and evidence-based way. The client recognised that AI had potential to improve efficiency, enhance user experience, and unlock new insights from existing data — but they also knew that unstructured adoption carried significant risks, particularly around data protection, fairness, and accountability.
Our objective was to help them evaluate where AI might add value, where it might not, and what organisational capabilities were needed to explore AI responsibly. Over a 12-week engagement, we partnered with their digital and data teams to deliver a structured AI Readiness Assessment. This provided a clear picture of current capability, identified priority areas for exploration, and set out a practical roadmap to guide safe, ethical AI adoption.
This work formed part of a wider transformation programme focused on digital maturity and service optimisation. Our team combined business analysis, service design, and data ethics expertise to deliver a balanced view — ensuring that the recommendations supported innovation while maintaining strong governance and public trust.
“The team didn’t just deliver an assessment — they became part of our team. Their collaborative approach meant every session built capability within our staff, not dependency on external expertise.”
Head of Digital Strategy
The Problem
Across the public sector, there is growing pressure to “do something with AI”. Yet many organisations face uncertainty about what that means in practice. The client’s leadership team had received multiple proposals suggesting AI could automate decision-making, streamline casework, and reduce manual processing. However, they lacked a clear framework for assessing these claims or understanding whether their data, infrastructure, and workforce were ready to support such technologies.
The risk was twofold. On one hand, the organisation could over-invest in unproven AI solutions that failed to deliver user value or introduced bias. On the other, it could under-invest and miss opportunities to improve services and efficiency. The absence of a defined approach also created uncertainty among staff — some saw AI as a threat to existing roles, while others viewed it as an exciting but poorly understood opportunity.
From an operational standpoint, there were no shared standards for evaluating AI projects, no agreed position on data ethics, and limited visibility of existing data quality. This combination of enthusiasm, risk, and ambiguity created a complex environment. The client needed a structured way to separate real opportunities from hype and establish the conditions for safe experimentation.

Research and Discovery
We began with a discovery phase focused on understanding the current landscape — not just in terms of technology, but culture, policy, and user needs. Our researchers and business analysts conducted over 40 stakeholder interviews across policy, operations, IT, and data science functions. These conversations helped map existing pain points and clarify expectations around AI.
Through workshops and document reviews, we explored how data currently flowed through key services and identified where manual processes might create opportunities for automation. Importantly, we also surfaced where human judgment, empathy, or contextual awareness were essential — areas where automation could risk degrading service quality.
We developed a high-level service map to visualise where data and decision-making intersected across the organisation. This revealed several “data silos” — localised systems holding valuable but disconnected information. While this fragmentation limited immediate AI potential, mapping it gave the organisation a clear view of the integration work needed to build future capability.
Parallel to this, our team assessed organisational maturity across six dimensions: governance, data quality, infrastructure, ethics, skills, and strategic alignment. The result was an evidence-based view of AI readiness, highlighting both strengths (such as strong data governance policies) and barriers (such as inconsistent metadata standards).
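The six-dimension assessment above lends itself to a simple scoring model. The sketch below is illustrative only: the dimension names come from the engagement, but the 1–5 rating scale, the strength/barrier threshold, and the sample ratings are assumptions, not the client’s actual scores.

```python
# Illustrative readiness-scoring sketch. Dimension names are from the
# assessment; the 1-5 scale, threshold, and sample ratings are hypothetical.

DIMENSIONS = [
    "governance",
    "data_quality",
    "infrastructure",
    "ethics",
    "skills",
    "strategic_alignment",
]

def readiness_summary(scores: dict[str, int], threshold: int = 3) -> dict:
    """Summarise a self-assessment: overall mean plus strengths and barriers.

    `scores` maps each dimension to a 1-5 maturity rating. Dimensions at
    or above `threshold` are treated as strengths; the rest as barriers
    to address before piloting AI.
    """
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    overall = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return {
        "overall": round(overall, 2),
        "strengths": [d for d in DIMENSIONS if scores[d] >= threshold],
        "barriers": [d for d in DIMENSIONS if scores[d] < threshold],
    }

# Example ratings echoing the engagement's pattern of strong governance
# but inconsistent metadata standards (values are invented):
example = readiness_summary({
    "governance": 4,
    "data_quality": 2,
    "infrastructure": 3,
    "ethics": 4,
    "skills": 2,
    "strategic_alignment": 3,
})
```

A model this simple is deliberately easy to run in a workshop: teams can debate each rating in the room and see immediately which dimensions block progress.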
“They took the time to understand our data environment and worked alongside us to co-develop practical tools. By the end of the engagement, our team felt confident continuing the work independently.”
Lead Data Analyst
The discovery phase culminated in a shared understanding of where AI could genuinely improve outcomes, where risks outweighed benefits, and what foundational work was required before piloting any AI solutions.
Design Approach
Building on the findings from discovery, our design phase focused on translating insights into a structured AI Readiness Assessment framework tailored to the public sector. Our approach was collaborative by design — working side-by-side with the client’s digital, data, and policy teams to ensure alignment with organisational strategy and statutory obligations.
We developed a three-part framework:
Opportunity Identification: Mapping potential use cases and evaluating them against user need, data availability, and ethical considerations.
Capability Assessment: Reviewing existing systems, data quality, and governance arrangements to identify technical and organisational readiness.
Ethical and Risk Evaluation: Using an adapted version of the Government’s Data Ethics Framework to assess potential harms, transparency requirements, and accountability mechanisms.
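To make the three-part framework concrete, the sketch below shows how a candidate use case might be screened through it in sequence. This is a hypothetical illustration: the three gates mirror the framework above, but the field names, the capability floor, and the example use case are our assumptions, not artefacts from the engagement.

```python
# Hypothetical triage sketch for the three-part framework above.
# Gate 1: Opportunity Identification (user need + data availability)
# Gate 2: Capability Assessment (readiness score vs a floor)
# Gate 3: Ethical and Risk Evaluation (harms assessed, accountability defined)

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    meets_user_need: bool          # evidence of a genuine user need
    data_available: bool           # required data exists and is accessible
    capability_score: int          # 1-5 technical/organisational readiness
    ethical_risks_mitigated: bool  # harms assessed, accountability in place

def screen(use_case: UseCase, capability_floor: int = 3) -> str:
    """Return a simple triage outcome for a proposed AI use case."""
    if not (use_case.meets_user_need and use_case.data_available):
        return "not viable: opportunity unproven"
    if use_case.capability_score < capability_floor:
        return "deferred: build capability first"
    if not use_case.ethical_risks_mitigated:
        return "deferred: ethical review incomplete"
    return "candidate for controlled pilot"

# Invented example: a real need with data in place, but low readiness.
triage = screen(UseCase("casework triage support", True, True, 2, True))
```

Ordering the gates this way means a use case with no demonstrated user need is filtered out before any technical or ethical effort is spent on it.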
Rather than applying a generic checklist, our consultants facilitated workshops to help the client’s teams self-assess their maturity. This built capability as part of the process — empowering staff to make informed decisions about AI beyond the life of the project.
Our business analysts documented each proposed use case using a standardised template that captured data flows, decision logic, and potential bias points. Service designers then facilitated discussions to explore how AI could enhance — not replace — human service delivery.
We also introduced a decision-support tool built in a secure, lightweight dashboard environment, allowing teams to visualise the relative readiness of different service areas. This tool became a key artefact for prioritising future investment.
Throughout, we maintained transparency and alignment with public sector values. Regular playback sessions ensured findings were shared widely and validated collaboratively. This open approach built trust across departments and reinforced the principle that AI readiness is as much about culture and governance as it is about technology.
Outcome and Impact
The final deliverables included a comprehensive AI Readiness Report, a prioritised roadmap for safe AI adoption, and a capability development plan for staff. Together, these outputs provided the client with both immediate clarity and long-term direction.
Key outcomes included:
Clarity on opportunity and risk: The client gained a realistic view of where AI could genuinely add value — identifying five high-potential areas and six that were not yet viable due to data or ethical constraints.
Improved governance: New decision-making criteria were introduced for evaluating future AI projects, ensuring that ethics, transparency, and proportionality were embedded from the outset.
Upskilled teams: Over 80 staff participated in AI literacy workshops, increasing understanding of both technical fundamentals and ethical implications.
Cultural confidence: Staff reported greater confidence in discussing AI opportunities with vendors and policy leads, supported by a clear organisational position statement on responsible AI.
Tangible next steps: A six-month roadmap was established to pilot one low-risk AI use case within a controlled environment, supported by a robust assurance process.
Within three months of completion, the client’s leadership team used our framework to evaluate two additional AI proposals — one was approved for pilot testing, while the other was deferred pending data quality improvements. This demonstrated that the framework was not just theoretical but operationally useful.
The wider impact was cultural as much as technical. By reframing AI as a question of service design and ethics — not just innovation — the organisation positioned itself to lead responsibly in a fast-evolving space.
“What stood out was how seamlessly they integrated with our existing delivery teams. Knowledge transfer happened naturally through daily collaboration, leaving us with both a framework and the skills to apply it.”
Service Owner
Reflection
This project demonstrated the value of taking a structured, human-centred approach to AI adoption in government. Rather than rushing to deploy technology, the client invested time in understanding readiness, risk, and opportunity. That decision has paid dividends in confidence, capability, and credibility.
From our consultancy’s perspective, this engagement reinforced an important lesson: AI transformation in the public sector succeeds when it’s grounded in evidence, ethics, and collaboration. Technology alone cannot deliver better public services — it must be supported by transparent governance, skilled people, and a clear understanding of user needs.
The readiness framework we developed has since been adapted for other public sector clients, forming the foundation for a consistent and responsible approach to AI exploration across government. It reflects our belief that digital innovation should always serve the public good — measured not only by efficiency, but by fairness, trust, and inclusivity.
We continue to work with public sector partners to refine and extend this model, ensuring AI remains a tool for positive change rather than an end in itself.


