| Title: | Robust and Explainable Defense Strategies Against Realistic Adversarial Attacks in Deep Learning Systems |
| Subject: | Computer science, Applied Artificial Intelligence |
| Level: | Basic, Advanced |
| Description: |
This thesis is also suitable for the M.Sc. in Cybersecurity.
As machine learning models increasingly support high-risk domains (healthcare, autonomous systems, cybersecurity), they remain vulnerable to adversarial attacks: subtle input perturbations that cause misclassification while appearing normal to humans. Most existing defenses are tailored to synthetic attacks or toy datasets and fail under realistic conditions such as transfer attacks, physical-world attacks, and distribution shifts.
Problem statement: How can we design practical adversarial defenses that remain robust under realistic threat models while providing transparency for decision-makers?
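To make the attack model concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), a canonical adversarial attack of the kind the thesis would defend against. The toy logistic-regression model, its random weights, and the epsilon value are illustrative assumptions (the thesis targets deep networks, but the perturbation principle is identical): the input is nudged by a small amount in the direction that maximally increases the model's loss.

```python
import math
import random

random.seed(0)

# Hypothetical "trained" model: logistic regression over 8 features.
# Weights are random here purely for illustration (assumption).
W = [random.gauss(0.0, 1.0) for _ in range(8)]
B = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability of class 1 under the toy model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method.

    Perturb x by eps in the sign of the loss gradient w.r.t. the input.
    For the logistic loss, dL/dx_i = (p - y) * w_i.
    """
    p = predict(x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for wi, xi in zip(W, x)]

# Craft an adversarial example against the model's own prediction.
x = [random.gauss(0.0, 1.0) for _ in range(8)]
y = 1.0 if predict(x) >= 0.5 else 0.0   # treat model's label as ground truth
x_adv = fgsm(x, y, eps=0.5)

print(f"clean confidence for class {y:.0f}: {predict(x):.3f}")
print(f"adversarial confidence:          {predict(x_adv):.3f}")
```

Because each coordinate moves by the same small amount, the perturbation can stay visually negligible while the model's confidence collapses; defenses evaluated in this thesis would need to resist such perturbations under realistic, not just white-box, threat models.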
|
| Start date: | 2026-01-19 |
| End date: | 2026-06-30 |
| Prerequisites: |
Machine learning / deep learning (ML/DL) techniques
Relevant tools and frameworks
|
| IDT supervisors: | Mobyen Uddin Ahmed |
| Examiner: | Shahina Begum |
| Comments: | |
| Company contact: |