Safety Analysis of Deep Neural Networks

Dario Guidotti

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 4887-4888. https://doi.org/10.24963/ijcai.2021/675

Deep Neural Networks (DNNs) are popular machine learning models that have been applied successfully in many domains across computer science. Nevertheless, providing formal guarantees on the behaviour of neural networks is hard, and their reliability in safety-critical domains therefore remains a concern. Verification and repair have emerged as promising approaches to this issue. In the following, I present some of my recent efforts in this area.
Keywords:
AI Ethics, Trust, Fairness: Trustable Learning
Multidisciplinary Topics and Applications: Validation and Verification
Machine Learning: Deep Learning
Machine Learning: Adversarial Machine Learning
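
As an illustration of what formal verification of a DNN can involve, the sketch below (not the method of this thesis) uses interval bound propagation to certify a simple output-bound property on a toy ReLU network. The weights, the input box and the threshold are all hypothetical values chosen for illustration; real verifiers handle far larger networks and richer properties.

import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through the affine map x -> W @ x + b.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # Propagate the box through an element-wise ReLU.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-2-1 ReLU network with hypothetical weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

# Safety property: for all inputs within radius eps of x0, the output stays below 3.0.
x0, eps = np.array([0.2, 0.1]), 0.05
lo, hi = x0 - eps, x0 + eps
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)

print("output bounds:", lo[0], hi[0])
print("property certified:", bool(hi[0] < 3.0))

The check is sound but incomplete: a certified True guarantees the property for every input in the box, whereas a False answer only means the abstraction was too coarse to decide.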