Bachelor and Master Theses

To apply for this thesis project, please contact the thesis supervisor(s).
Title: Robust and Explainable Defense Strategies Against Realistic Adversarial Attacks in Deep Learning Systems
Subject: Computer science, Applied Artificial Intelligence
Level: Basic, Advanced
Description:

This thesis is also suitable for M.Sc. students in Cybersecurity.

As ML models increasingly support high-risk domains (healthcare, autonomous systems, cybersecurity), they remain vulnerable to adversarial attacks: subtle input perturbations that cause misclassification while appearing normal to humans. Most existing defenses are tailored to synthetic attacks or toy datasets and fail under real-world conditions such as transfer attacks, physical-world attacks, and distribution shifts.
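To make the attack idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest perturbation attacks, applied to a toy logistic-regression classifier. All names and numbers (`w`, `x`, `eps`) are illustrative and not part of the project description; a thesis would target deep models with a framework such as PyTorch instead.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    # Probability that input x belongs to class 1 under weights w.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, x, y, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w.x) - y) * w, so each feature is nudged by eps in the
    sign direction that increases the loss.
    """
    g = predict(w, x) - y                      # scalar factor of the input gradient
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(g * wi) for wi, xi in zip(w, x)]

# Toy model and a confidently classified class-1 input.
w = [2.0, -1.0]
x = [1.0, 0.2]
x_adv = fgsm(w, x, y=1.0, eps=0.6)

print(round(predict(w, x), 3))      # → 0.858 (confident in class 1)
print(round(predict(w, x_adv), 3))  # → 0.5 (confidence destroyed by a small shift)
```

A perturbation of at most 0.6 per feature drives the model from high confidence to a coin flip, which is exactly the failure mode robust defenses must address.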

There is a strong need for robust, explainable adversarial defenses suitable for applied industry settings.

Problem Statement

How can we design practical adversarial defenses that remain robust under realistic threat models while providing transparency for decision-makers?

Start date: 2026-01-19
End date: 2026-06-30
Prerequisites:

ML/DL Techniques

  • Random Forest / Gradient Boosting
  • Autoencoders for anomaly detection (zero-day threats)
  • 1D CNN / LSTM / Transformer-based IDS models
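The autoencoder bullet above rests on one idea: a model trained to reconstruct normal traffic reconstructs unseen (zero-day) patterns poorly, so reconstruction error serves as an anomaly score. A minimal sketch with a fixed one-dimensional linear "autoencoder" follows; the direction `d`, the sample points, and the threshold are all illustrative stand-ins for what a real IDS would learn from benign traffic.

```python
def encode(x, d):
    # Project x onto the unit direction d: the 1-D "latent code".
    return x[0] * d[0] + x[1] * d[1]

def decode(z, d):
    # Map the latent code back into input space.
    return [z * d[0], z * d[1]]

def anomaly_score(x, d):
    # Reconstruction error: small for points near the normal manifold,
    # large for points the model has never learned to represent.
    r = decode(encode(x, d), d)
    return sum((xi - ri) ** 2 for xi, ri in zip(x, r))

d = [0.6, 0.8]          # unit-norm latent direction (fixed here, learned in practice)
normal = [3.0, 4.0]     # lies on the modeled manifold → near-zero error
odd = [4.0, -3.0]       # orthogonal to it → large error
threshold = 1.0

print(anomaly_score(normal, d) > threshold)  # → False (accepted as normal)
print(anomaly_score(odd, d) > threshold)     # → True  (flagged as anomalous)
```

Thresholding the reconstruction error is what turns an unsupervised reconstruction model into a zero-day detector, since no labeled attack traffic is needed at training time.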

Tools

  • Python, PyTorch/TensorFlow
  • Wireshark/Tshark for traffic analysis
  • Splunk/ELK for SOC simulation (optional)

IDT supervisors: Mobyen Uddin Ahmed
Examiner: Shahina Begum
Comments:
Company contact: