Advai: Assurance of Computer Vision AI in the Security Industry
Case study from Advai.
Background & Description
Advai’s toolkit can be applied to assess the performance, security and robustness of an AI model used for object detection. Systems require validation to ensure they can reliably detect various objects within challenging visual environments. Our technology identifies natural (‘human-meaningful’) and adversarial vulnerabilities in CV models using an extensive library of stress-testing tools.
The natural vulnerabilities include semantically meaningful image manipulations (such as camera noise, lighting changes and rotation). These probe the vulnerability of the CV system to image distortions that are likely to occur naturally, but rarely. Such inputs are called near out-of-distribution or out-of-sample inputs and are, in essence, mathematically unrecognisable to a system not trained on equivalent data. For example, foggy Californian days are rare, but they happen. Their rarity leads to AI models that are ill-equipped to handle these inputs accurately. Our approach methodically reveals these weaknesses and can advise on, for example, synthetic data generation to compensate (to continue the example, a foggy overlay on a Californian setting).
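As a minimal illustrative sketch (not Advai’s actual toolkit), the loop below applies a few such natural corruptions at increasing severity and records how a detector’s accuracy decays; the `corrupt` helper, the severity scale and the `detector` callable are assumptions for illustration.

```python
# Illustrative sketch only, not Advai's toolkit: apply natural corruptions at
# increasing severity and measure how a detector's accuracy decays. Assumes
# RGB PIL images; `detector` maps an image to a predicted label.
import numpy as np
from PIL import Image, ImageEnhance

def corrupt(img: Image.Image, kind: str, severity: float) -> Image.Image:
    """Apply a semantically meaningful corruption at a given severity (0-1)."""
    if kind == "noise":  # simulated camera/sensor noise
        arr = np.asarray(img).astype(np.float32)
        arr += np.random.normal(0.0, 255.0 * severity, arr.shape)
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    if kind == "brightness":  # lighting change
        return ImageEnhance.Brightness(img).enhance(1.0 + severity)
    if kind == "rotation":  # viewpoint / camera-mounting drift
        return img.rotate(30.0 * severity)
    if kind == "fog":  # crude fog overlay: blend towards flat grey
        grey = Image.new("RGB", img.size, (200, 200, 200))
        return Image.blend(img, grey, severity)
    raise ValueError(f"unknown corruption: {kind}")

def robustness_sweep(labelled_images, detector):
    """Accuracy per (corruption, severity); sharp drops reveal weaknesses."""
    results = {}
    for kind in ("noise", "brightness", "rotation", "fog"):
        for severity in (0.1, 0.3, 0.5):
            correct = sum(
                detector(corrupt(img, kind, severity)) == label
                for img, label in labelled_images
            )
            results[(kind, severity)] = correct / len(labelled_images)
    return results
```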
To assess adversarial vulnerabilities, we inject adversarial perturbations into trusted image data to understand how susceptible the system is to subtle manipulations optimised for maximum deleterious effect. This approach tests not only vulnerability to a deliberate adversary; because constraints can be applied to the optimisation of the perturbation, it is also a reliable method of assessing general robustness to natural vulnerabilities.
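For concreteness, here is a minimal sketch of one well-known constrained attack, the fast gradient sign method (FGSM), shown on a classifier for simplicity; it is not Advai’s specific method, and the `epsilon` parameter is the kind of optimisation constraint referred to above.

```python
# Minimal FGSM sketch, shown on a classifier for simplicity; Advai's library
# of adversarial methods is broader. The epsilon budget is the constraint
# referred to above: a small L-infinity bound keeps the perturbation subtle
# while maximising the loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 4 / 255) -> torch.Tensor:
    """Return x plus a worst-case perturbation bounded by epsilon (L-inf)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp so the
    # result remains a valid image in [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Comparing accuracy on perturbed batches with clean accuracy gives a simple robustness metric under an epsilon-constrained attacker; tightening or reshaping the constraint is what lets the same machinery stand in for natural distortions.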
This toolkit is applied throughout the MLOps lifecycle, divided into Data Analysis, Pre-Deployment and Post-Deployment stages. This ensures that robustness is not just assessed at the end of development, but that the AI is robust by design.
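A hypothetical sketch of how such stage-by-stage checks might be wired into a pipeline is shown below; the stage names mirror the lifecycle above, but the class and check registry are illustrative assumptions, not Advai’s interface.

```python
# Hypothetical wiring of stage-by-stage robustness gates; the stage names
# mirror the lifecycle in the text, but this structure is an illustrative
# assumption, not Advai's interface.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Check = Callable[[], bool]  # a check that passes or fails

@dataclass
class RobustnessGates:
    """A stage passes only if every check registered against it passes."""
    stages: Dict[str, List[Check]] = field(default_factory=lambda: {
        "data_analysis": [],    # e.g. label quality, poisoning, OOD screening
        "pre_deployment": [],   # e.g. natural and adversarial stress tests
        "post_deployment": [],  # e.g. drift monitoring, periodic re-testing
    })

    def add(self, stage: str, check: Check) -> None:
        self.stages[stage].append(check)

    def run(self, stage: str) -> bool:
        return all(check() for check in self.stages[stage])
```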
How this technique applies to the AI White Paper Regulatory Principles
Safety, Security & Robustness
The rigorous testing of data, models, and API packaging directly addresses the safety, security, and robustness of the AI system, ensuring that it is resistant to both inadvertent errors and intentional attacks.
Appropriate Transparency & Explainability
The analysis of the data, including assessments of labelling, poisoning and out-of-distribution (OOD) detection, improves transparency and the explainability of the model’s decision-making process by ensuring that the system’s judgements can be traced back to clear and unbiased data inputs.
Fairness
Data screening to detect and correct imbalances in the training data addresses the potential for bias in the model, which contributes to the fairness of the system’s object detection capabilities.
Accountability & Governance
By identifying vulnerabilities and providing technical recommendations, Advai promotes accountability and contributes to the governance of the AI system’s use within the security industry.
Why we took this approach
This approach was selected to provide a comprehensive assessment of the AI system’s ability to perform under significant duress, and therefore of its likely reliability in the real world, and to immunise the system against sophisticated AI-specific threats.
Benefits to the organisation using the technique
- Increased confidence in the AI system’s ability to accurately detect objects in complex visual environments.
- Enhanced security against adversarial attacks through a thorough examination of data, models, and APIs.
- An improved understanding of the AI model’s limitations and performance boundaries.
- A more robust and reliable AI system that stakeholders can trust.
Limitations of the approach
- The approach does not cover all possible adversarial attacks, especially new or unforeseen ones; however, we are aware of (and develop internally) a great number of adversarial methods.
- Improving resilience metrics may come at the cost of accuracy scores. This is a trade-off that we look to optimise with clients.
- Reassessment is required when the model is updated or when new data is introduced, to ensure robustness hasn’t been compromised.
- The recommendations may increase computational costs; however, development costs could also fall if the CV systems have a higher success rate on deployment.
Further Links (including relevant standards)
Further AI Assurance Information
- For more information about other techniques visit the CDEI Portfolio of AI Assurance Tools: /ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub: