AI Assurance case studies

Since the launch of the AI Assurance Roadmap at the techUK Digital Ethics Summit last year, the Centre for Data Ethics and Innovation (CDEI) has been researching how best to cultivate an effective AI assurance ecosystem in the UK. This has included a wide range of engagement exercises, including the AI Assurance Symposium hosted by techUK in July.
 
As a next step, techUK and CDEI are working together to gather real-world examples of AI assurance techniques being used across different sectors. These will be used to build an online case study repository, which aims to drive awareness and use of AI assurance techniques across sectors. The repository will feed into the wider vision of the CDEI AI Assurance Roadmap: to drive the maturity of the UK AI assurance ecosystem.
The answers to questions 2-17 below will be shared with the CDEI team, who will sift and quality assure responses based on their relevance to AI assurance, AI trustworthiness and the AI regulation principles proposed by the Government.
Your contact details will not be shared with CDEI; however, techUK may get in touch to ask whether you consent to being put in touch with CDEI directly to expand further on your case study. You can find the full techUK privacy policy here.
1. What is your email address?
2. What is the name of the organisation using this AI assurance technique? (Required.)
3. What sector is this case study focused on? (select all that apply) (Required.)
4. Which category does this AI assurance technique sit within? (select all that apply) (Required.)
5. What are the key functions of the AI system being assured? (select all that apply) (Required.)
6. Please describe the AI assurance technique (max 200 words) (Required.)
7. What motivated you to use this particular AI assurance technique? (max 200 words)
8. Which of the proposed cross-sectoral principles for AI regulation is this technique most relevant to? (select all that apply) (Required.)

You can find more details on the principles in the policy paper here.
9. Where in the system lifecycle is this technique being used?
10. Who (which stakeholder role) is implementing this technique? (select all that apply)
11. Please expand on your previous answer (max 50 words)
12. Who will be informed and/or act upon the output of this AI assurance technique?
13. Please expand on your previous answer (max 50 words)
14. Which other assurance techniques are used alongside this one?
15. Which other business processes (e.g. governance, risk management) feed into or are informed by this AI assurance technique?
16. Within this case study, what were the main benefits of this AI assurance technique? (max 200 words) e.g. helped anticipate risks, provided quantitative evidence
17. Within this case study, what were the main limitations of this AI assurance technique? (max 200 words) e.g. limited by access to reliable information about the system, lack of clear thresholds to evaluate against