Description:
Background and Motivation: Cyberattacks are growing in volume and sophistication, with zero-day attacks and polymorphic malware increasingly bypassing traditional rule-based security systems. Machine learning (ML) and deep learning (DL) models have shown strong potential in intrusion detection systems (IDS), but they often function as “black boxes” with limited interpretability. This lack of transparency hinders trust, slows response times, and complicates human-in-the-loop security workflows. Recent advances in Explainable AI (XAI) and Large Language Models (LLMs) provide new opportunities to create IDS models that not only detect threats but also explain the reasoning behind decisions in real time. This thesis aims to explore how AI can be used to detect emerging threats while maintaining interpretability for security analysts.
Problem Statement: Current intrusion detection systems struggle with:
- Detecting zero-day attacks that differ from known signatures
- Providing interpretable explanations for automated decisions
- Reducing false positives, which overwhelm security teams
- Integrating ML/XAI outputs into real-world SOC workflows
There is a need for an AI-enhanced IDS that is both accurate and explainable, improving threat response without sacrificing transparency.
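To make that aim concrete, the minimal sketch below shows one way an alert could carry a feature-level rationale alongside its score. It is not the proposed system: the flow feature names, the synthetic dataset, the random-forest model, and the 0.8 alert threshold are all illustrative assumptions, and the global feature importances stand in for the per-decision XAI methods (e.g., SHAP or LIME) the thesis would actually evaluate.

```python
# Illustrative sketch only: a supervised classifier on synthetic "flow" data,
# emitting alerts that include a simple explanation for the analyst.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical flow features; a real IDS would use NetFlow/NSL-KDD-style fields.
FEATURES = ["duration", "bytes_in", "bytes_out", "pkt_rate", "syn_ratio", "entropy"]

# Synthetic, imbalanced data standing in for labeled benign/attack traffic.
X, y = make_classification(n_samples=5000, n_features=len(FEATURES),
                           n_informative=4, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Flag test flows whose attack probability exceeds an illustrative threshold.
proba = model.predict_proba(X_test)[:, 1]
alerts = np.where(proba > 0.8)[0]

# Global importances as a placeholder for per-alert explanations (SHAP/LIME).
ranked = sorted(zip(FEATURES, model.feature_importances_), key=lambda t: -t[1])
for idx in alerts[:3]:
    top = ", ".join(f"{name} ({weight:.2f})" for name, weight in ranked[:3])
    print(f"ALERT flow #{idx}: p(attack)={proba[idx]:.2f}; key features: {top}")
```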