Advai: Robustness Assessment of a Facial Verification System Against Adversarial Attacks
Case study from Advai.
Background & Description
Advai were involved in evaluating the resilience of a facial verification system used for authentication, specifically in the context of preventing image manipulation and ensuring robustness against adversarial attacks. The focus was on determining the system's ability to detect fraudulent attempts to bypass facial verification or 'liveness' detection, and to resist manipulation by fake imagery and feature-space attacks.
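As a minimal sketch of the kind of attack used in this style of assessment (illustrative only, not Advai's actual tooling), the snippet below mounts a projected gradient descent (PGD) impersonation attack against a stand-in PyTorch face-embedding model. The encoder architecture, image size, and attack budgets are all assumptions chosen for the demo.

```python
# Illustrative PGD impersonation test against a face-verification embedding
# model. ToyFaceEncoder is a stand-in, NOT a production network; eps, alpha
# and steps are arbitrary demo values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFaceEncoder(nn.Module):
    """Stand-in for a production face-embedding network (assumption)."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalised embeddings, so cosine similarity is a dot product.
        return F.normalize(self.net(x), dim=-1)

def pgd_impersonation(encoder, probe, target_embed,
                      eps=8 / 255, alpha=2 / 255, steps=20):
    """Perturb `probe` within an L-inf ball of radius `eps` so its embedding
    drifts towards `target_embed`, simulating an attacker trying to be
    verified as someone else."""
    adv = probe.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        sim = F.cosine_similarity(encoder(adv), target_embed, dim=-1).mean()
        grad = torch.autograd.grad(sim, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # ascend similarity
            adv = probe + (adv - probe).clamp(-eps, eps)  # project into ball
            adv = adv.clamp(0.0, 1.0)                     # stay a valid image
    return adv.detach()

encoder = ToyFaceEncoder().eval()
probe = torch.rand(1, 3, 112, 112)                     # attacker's face (random stand-in)
target = encoder(torch.rand(1, 3, 112, 112)).detach()  # victim's enrolled embedding
adv = pgd_impersonation(encoder, probe, target)
print("similarity before:", F.cosine_similarity(encoder(probe), target).item())
print("similarity after: ", F.cosine_similarity(encoder(adv), target).item())
```

A verification system is then judged by how often such perturbed probes cross its acceptance threshold.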
How this technique applies to the AI White Paper Regulatory Principles
Safety, Security & Robustness
The assessment is crucial for validating the system's security measures against adversarial image and feature manipulation attacks, ensuring that the system is robust and reliable.
Appropriate Transparency & Explainability
Transparency was provided through a comparative rating that lets non-technical audiences weigh robustness and security-related metrics when selecting a facial verification provider. Explainability was enhanced by identifying the specific features that unduly influenced the model or led to successful manipulation.
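Purely as a sketch of how technical results can be condensed for non-technical readers, the snippet below aggregates per-attack success rates into a single letter band. The metric names, equal weighting, and band cut-offs are invented for illustration; they are not Advai's published rating scheme.

```python
# Hypothetical aggregation of attack results into a comparative rating.
from statistics import mean

def robustness_rating(attack_success_rates: dict[str, float]) -> str:
    """Map per-attack success rates (0.0-1.0, lower is better) to a band."""
    resilience = 1.0 - mean(attack_success_rates.values())
    bands = [(0.90, "A"), (0.75, "B"), (0.60, "C")]
    return next((band for cutoff, band in bands if resilience >= cutoff), "D")

results = {                        # illustrative numbers only
    "pgd_evasion": 0.12,           # fraction of adversarial probes accepted
    "presentation_attack": 0.05,
    "feature_space_attack": 0.20,
}
print(robustness_rating(results))  # -> "B" under these demo figures
```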
Fairness
We identified biases the model displayed against certain features, such as beards, glasses, or sex-based features. We recommended including the more diverse training data that our analysis showed the model needed for better balance. The resulting improvements promote fairness for any business using the facial verification system.
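A minimal sketch of the kind of subgroup error analysis that surfaces feature-linked bias: false non-match and false match rates are computed per attribute group. The attribute labels and trial records here are invented for illustration; a real audit would use a large labelled evaluation set.

```python
# Per-group error rates: FNMR (genuine pairs rejected) and FMR (impostor
# pairs accepted). Trial records below are fabricated demo data.
from collections import defaultdict

# Each trial: (attribute_group, genuine_pair?, verified?)
trials = [
    ("beard", True, False), ("beard", True, True),
    ("no_beard", True, True), ("no_beard", True, True),
    ("glasses", False, True), ("glasses", False, False),
]

errors = defaultdict(lambda: {"fnmr_n": 0, "fnmr_d": 0, "fmr_n": 0, "fmr_d": 0})
for group, genuine, verified in trials:
    e = errors[group]
    if genuine:                    # false non-match: genuine pair rejected
        e["fnmr_d"] += 1
        e["fnmr_n"] += int(not verified)
    else:                          # false match: impostor pair accepted
        e["fmr_d"] += 1
        e["fmr_n"] += int(verified)

for group, e in errors.items():
    fnmr = e["fnmr_n"] / e["fnmr_d"] if e["fnmr_d"] else float("nan")
    fmr = e["fmr_n"] / e["fmr_d"] if e["fmr_d"] else float("nan")
    print(f"{group:10s} FNMR={fnmr:.2f} FMR={fmr:.2f}")
```

Large gaps between groups on either rate are the signal that drove the training-data recommendations above.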
Accountability & Governance
The red teaming approach and rigorous comparison with industry benchmarks hold the system to high standards of accountability. Reporting against these standards improves corporate governance.
Why we took this approach
This multifaceted attack approach was employed to uncover weaknesses that could be exploited by bad actors. Rigorous, empirical methods that exploit algorithmic traits of a system allow for a more objective analysis than the standard industry approach of testing such systems on datasets of normal, unaltered images and assigning only an accuracy score. Further, the approach identifies the specific components in which the model is vulnerable, providing clear next steps for improving the facial verification system's robustness.
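To make the contrast with an accuracy-only evaluation concrete, here is a toy sweep over attack budgets. The matcher and "attack" are one-dimensional stand-ins (assumptions, not real face models), but the shape of the evaluation, a clean accuracy figure plus a degradation curve, is the point.

```python
# Toy illustration of reporting robustness as a curve rather than a single
# clean-accuracy number. Scores, matcher, and attack are 1-D stand-ins.
import random

random.seed(0)

def verify(probe_score: float, reference_score: float,
           threshold: float = 0.5) -> bool:
    """Stand-in matcher: accepts when the two scores are close enough."""
    return abs(probe_score - reference_score) < threshold

def worst_case_attack(probe_score: float, reference_score: float,
                      eps: float) -> float:
    """Stand-in adversary: spends the whole budget `eps` pushing the probe
    score directly away from the reference."""
    return probe_score + eps if probe_score >= reference_score else probe_score - eps

pairs = [(random.random(), random.random()) for _ in range(10_000)]

clean_acc = sum(verify(p, r) for p, r in pairs) / len(pairs)
print(f"clean accuracy (the usual single metric): {clean_acc:.2f}")

for eps in (0.1, 0.2, 0.3, 0.5):
    robust_acc = sum(verify(worst_case_attack(p, r, eps), r)
                     for p, r in pairs) / len(pairs)
    print(f"budget eps={eps:.1f} -> accuracy under attack: {robust_acc:.2f}")
```

How quickly the curve collapses, and under which attack types, is what pinpoints the vulnerable components.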
Benefits to the organisation using the technique
- Enhanced security through the discovery of system vulnerabilities and the implementation of training-data and system mitigations.
- Increased trust in the facial verification system from users and clients, owing to its measurable resistance to sophisticated attacks.
- Insights into the model's biases, allowing operational boundaries to be put in place to prevent these biases, and a better approach to procuring representative data to be developed.
- Assurance for stakeholders through a demonstrated comparative advantage over industry benchmarks.
Limitations of the approach
- The adversarial attacks may not encompass all potential real-world scenarios, especially as attack methodologies evolve.
- Findings may necessitate continuous re-evaluation and updating of the system's security measures.
- Recommendations may lead to increased complexity and cost in the system's operation and maintenance.
Further Links (including relevant standards)
Further AI Assurance Information
- For more information about other techniques visit the CDEI Portfolio of AI Assurance Techniques: /ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub.