FSA: Developing an AI-based Proof of Concept that prioritises businesses for food hygiene inspections while ensuring the ethical and responsible use of AI
Case study from the Food Standards Agency.
Background & Description
This case study is focussed on the use of AI to support hygiene inspections of food establishments by prioritising businesses that are more likely to be at a higher risk of non-compliance with food hygiene regulations. Currently, this process is manual, labour intensive and inconsistent across local authorities. Using this AI-enabled tool is expected to benefit local authorities by helping them to use their limited resources more efficiently. The tool was developed as a Proof of Concept to explore the art of the possible. Note that it was decided not to put the tool into live use for a number of reasons, including competing priorities.
Our Approach
The Food Standards Agency's (FSA) Strategic Surveillance Service is a data science team that strengthens the FSA's food safety mission. This team develops tools and techniques to turn data into intelligence, using machine learning and AI. One such tool is the Food Hygiene Rating Scheme – AI (FHRS AI), built as a Proof of Concept in collaboration with FSA's supplier Cognizant Worldwide Limited, to help local authorities become more efficient in managing the hygiene inspection of food establishments. The tool supports local authorities to prioritise which businesses to inspect in the first instance by predicting which establishments might be at a higher risk of non-compliance with food hygiene regulations.
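At its core, the Proof of Concept frames prioritisation as a supervised classification problem: predict the likelihood that an establishment will be found non-compliant, then rank establishments by that likelihood so that limited inspection resources go to the highest-risk businesses first. The sketch below illustrates only this general pattern; the features, data and model choice are hypothetical assumptions, as the case study does not describe the FSA's actual implementation.

```python
# Hypothetical sketch of a non-compliance risk classifier used to rank
# establishments for inspection. All features, data and the model choice are
# illustrative assumptions, not the FSA's actual implementation.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Invented inspection history: 1 = non-compliant at the last inspection.
df = pd.DataFrame({
    "business_type": ["restaurant", "takeaway", "retailer",
                      "takeaway", "restaurant", "retailer"] * 50,
    "months_since_inspection": [6, 30, 12, 48, 3, 24] * 50,
    "previous_rating": [5, 2, 4, 1, 5, 3] * 50,
    "non_compliant": [0, 1, 0, 1, 0, 0] * 50,
})
X, y = df.drop(columns="non_compliant"), df["non_compliant"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["business_type"])],
        remainder="passthrough")),
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_train, y_train)

# Rank establishments by predicted risk so officers can prioritise inspections;
# the ranking is advisory and is reviewed by a human before any decision.
risk = model.predict_proba(X_test)[:, 1]
print(X_test.assign(risk=risk).sort_values("risk", ascending=False).head())
```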
FSA has created a Responsible AI (RAI) framework to overlay its 10-week agile sprint methodology. The framework is based on five RAI principles of Fairness, Sustainability, Privacy, Accountability and Transparency. Underpinning FSA's Responsible AI framework is the 'reflect, act and justify' approach posited by The Alan Turing Institute in its paper 'Understanding Artificial Intelligence Ethics and Safety'. Three different risk and impact assessments were conducted during the development of the FHRS AI:
- Responsible AI Risk Assessment
- Stakeholder Impact Assessment
- Privacy Impact Assessment
FSA's process-based RAI framework has specific responsibilities assigned to various stakeholders, including Business (Business Owner, Business SMEs, Executive Leadership, Steering Committee), FSA Legal and Compliance (Knowledge and Information Management and Security team, Legal team) and FSA Strategic Surveillance (Business Analyst, Change Consultant, Development Lead, Development Team, RAI Lead).
In addition to these impact and risk assessments, the AI model outputs were validated using other empirical methods. FSA also participated in the Central Digital and Data Office's (CDDO) pilot for the Algorithmic Transparency Standard and published the output.
How this technique applies to the AI White Paper Regulatory Principles
Safety, Security & Robustness
- Our Responsible AI Risk Assessment helped identify potential risks related to the use case, data and technology used. Identification of these risks led to the consideration and documentation of potential mitigation techniques. This was conducted iteratively throughout the development of the use case, ensuring risks were continuously identified, assessed, and managed.
- The FSA considers it good practice to conduct a Privacy Impact Assessment (PIA) when using personal data. We used a structured process to identify and minimise data protection risks. These assessments were conducted iteratively throughout the design, development, and delivery of the use case.
- 皇冠体育app model was designed and developed by adhering to the guidance provided by FSA鈥檚 Knowledge and Information Management and Security (KIMS) and Legal teams on regulations, information governance, data protection compliance and security.
Appropriate Transparency & Explainability
- Our Responsible AI framework is run alongside the design and development sprint to ensure a robust, structured approach is taken and that all pertinent information is captured. Our methodology and our evaluation of the model and associated risks are documented in a way that can be evidenced.
- The Stakeholder Impact Assessment helped build confidence in the way we designed and deployed the system by bringing to light unseen risks. We used this assessment to demonstrate forethought and due diligence, and to show that the various stakeholders collaborated to evaluate the social impact and sustainability of the project.
- All the processing of the data used by the FHRS AI tool is in accordance with FSA's Public Task to provide advice and assistance to enforcement authorities to keep food and feed safe.
- Our design and development approach ensures that Business and Data Science collaborate to understand feature importance and explainability.
- From a technical perspective, feature importance is assessed on all our model iterations to ensure the transparency and explainability of the model. We assess the model and the predictions at a local and global level during the training and inference stages; a minimal sketch of such checks follows this list.
- The tool takes a 'human-in-the-loop' approach, i.e. there is a human check of the rating predicted by the tool before any decisions are made.
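As an illustration of the global and local explainability checks described above, the sketch below shows one common way to produce them: permutation importance from scikit-learn for a global view, and SHAP values (via the third-party shap package) for a single prediction. The model and feature names are hypothetical assumptions; the case study does not specify which tooling the FSA used.

```python
# Hypothetical sketch only: global and local feature-importance checks of the
# kind described above. Feature names and model are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

feature_names = ["months_since_inspection", "previous_rating", "complaint_count"]
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global view: how much each feature drives predictions across the dataset.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean in zip(feature_names, result.importances_mean):
    print(f"{name}: {mean:.3f}")

# Local view: per-feature contributions to one establishment's prediction,
# approximated here with SHAP values.
import shap

explainer = shap.TreeExplainer(model)
local = explainer.shap_values(X[:1])
print(dict(zip(feature_names, np.ravel(local).tolist())))
```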
Fairness
- Our model development process includes a Fairness Assessment check, through which we assess attributes for group fairness (accuracy, balanced accuracy, precision, recall) and compare disparities in accuracy and predictions across groups.
- We ensure collaboration between Business and Data Science to identify whether outcomes are considered fair or whether more in-depth analysis is required.
- For the FHRS AI model, we have considered economic bias. This was monitored to ensure that the model did not disproportionately affect any group.
- We also used the Fairlearn tool to monitor the model for bias. Fairlearn helped the developers identify bias by showing how the model's predictions deviate from the true values for different types of input (see the sketch following this list).
- The combination of the model predictions with officers' local knowledge prior to any decision making helps to avoid any unfair decisions.
- There is also a provision for users to feed outcomes back into the model to improve its predictive accuracy.
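A minimal sketch of the kind of group-fairness check described above, using Fairlearn's MetricFrame, follows. The sensitive attribute (a hypothetical "region" column) and the random data are illustrative assumptions, not the FSA's actual data or attributes.

```python
# Hypothetical sketch of a group-fairness check with Fairlearn's MetricFrame.
# The sensitive feature ("region") and the random data are illustrative only.
import numpy as np
from fairlearn.metrics import MetricFrame
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             precision_score, recall_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)              # invented ground-truth compliance
y_pred = rng.integers(0, 2, 200)              # invented model predictions
region = rng.choice(["urban", "rural"], 200)  # hypothetical grouping attribute

mf = MetricFrame(
    metrics={
        "accuracy": accuracy_score,
        "balanced_accuracy": balanced_accuracy_score,
        "precision": precision_score,
        "recall": recall_score,
    },
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=region,
)
print(mf.by_group)      # each metric broken down per group
print(mf.difference())  # largest between-group disparity for each metric
```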
Accountability & Governance
- Our Responsible AI framework takes a 'process-based governance approach'. We have designed it to be technology agnostic. It focuses on the processes rather than the specific architectures used to enable AI/ML development.
- We apply tangible processes, artefacts and tooling to the delivery and operationalisation methodology in a way that enables development of AI/ML that aligns with the agreed upon RAI principles of Fairness, Sustainability, Privacy, Accountability and Transparency. These processes are completed in parallel with existing delivery processes.
- We have ensured that the right level of authority and control is exercised over the management of each use case. This enables alignment to the accountability principles, as decisions on the use of AI/ML are attributed to responsible stakeholders, with key decisions captured throughout the delivery lifecycle.
- FSA has procedures in place to ensure that all staff with access to the information have adequate Information Governance and data protection training.
Contestability & Redress
- This tool was developed as a Proof of Concept and was not put into live use. Hence, we have not fully tested the Contestability & Redress principle.
- The processes built around the FHRS AI tool ensure that there is always a 'human-in-the-loop' expert involved, thus safeguarding against potential bias or inaccuracies (a sketch of this review step follows this list).
- The FSA only collects and uses information in a manner consistent with data subject rights and its obligations under the law, including the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 (DPA). The FSA provides further information on data subject rights and on how its Data Protection Officer can be contacted.
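The 'human-in-the-loop' safeguard above can be made concrete as a workflow in which the model output is advisory and an officer's recorded judgement is final. The sketch below is a hypothetical illustration of that pattern, assuming invented names and identifiers; it is not the FSA's actual system.

```python
# Hypothetical sketch of the human-in-the-loop step: the model output is
# advisory and an officer's judgement is recorded and final. All names and
# identifiers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Recommendation:
    establishment_id: str
    predicted_risk: float  # model output, e.g. probability of non-compliance

def decide(rec: Recommendation, officer_agrees: bool, notes: str) -> dict:
    """Record the final decision; no action is taken on the model alone."""
    return {
        "establishment": rec.establishment_id,
        "model_risk": rec.predicted_risk,
        "final_decision": "inspect" if officer_agrees else "officer_override",
        "notes": notes,  # auditable rationale supporting contestability/redress
    }

print(decide(Recommendation("EST-12345", 0.87), officer_agrees=True,
             notes="History of complaints; prioritise visit."))
```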
Why we took this approach
This approach was based on good practice approaches to AI ethics and safety proposed by leading academics in this field. It allowed us to exercise due diligence, identify potential risks and put in place mitigations, and build confidence in the FHRS AI tool.
Benefits to the organisation
- Iterative process ensuring risks are continuously identified, assessed, and managed
- Demonstrated recognised good practice
- Helped identify unanticipated AI related risks
- Identified potential risk mitigation techniques
- Identified and minimised data protection risks
- Demonstrated forethought and due diligence
- Built frontline user confidence in the AI system
Limitations of the approach
- Some stakeholders lack the necessary readiness for the successful implementation of an AI tool.
- Different stakeholders follow different methods for evaluating the accuracy of AI predictions.
Further Links (including relevant standards)
Further AI Assurance Information
For more information about other techniques, visit the OECD Catalogue of Tools and Metrics.
For more information on relevant standards, visit the AI Standards Hub.