
Adversarial Robustness in AI-Driven Cybersecurity Solutions: Thwarting Evasion Assaults in Real-Time Detection Systems

Journal: International Journal of Advanced Engineering, Management and Science (Vol. 11, No. 5)

Publication Date:

Authors :

Page : 073-082

Keywords : Cybersecurity; Intrusion Detection; Deep Learning; RNN; Transformer;

Source : Download | Find it from : Google Scholar

Abstract

The incorporation of Artificial Intelligence (AI), and deep learning models in particular, into cybersecurity frameworks has greatly improved the identification and mitigation of cyber threats. Nonetheless, these systems face a significant and growing threat: adversarial attacks. Malicious actors craft subtle alterations to network traffic or system behavior that mislead AI models into classifying threats as benign, enabling evasion tactics that can bypass real-time Intrusion Detection Systems (IDS). This study investigates the susceptibility of deep learning-based IDS to adversarial examples and proposes a robust detection framework designed to improve resilience against such evasion tactics. The proposed framework combines adversarial training, input sanitization, and resilient model architectures, including adversarial-aware Convolutional Neural Networks (CNNs) and defensive autoencoders. Using benchmark datasets such as CIC-IDS2017 and UNSW-NB15, we simulate a range of adversarial scenarios, generated with the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), to evaluate their effect on detection performance. Experimental findings show that conventional deep learning models suffer a substantial drop in performance under adversarial conditions, with accuracy falling by more than 20% in some cases. In contrast, the proposed framework shows a marked improvement in adversarial robustness, maintaining more than 91% detection accuracy under attack while considerably reducing false positives.
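For readers unfamiliar with the attack referenced in the abstract, the Fast Gradient Sign Method perturbs each input feature by a small step in the direction of the loss gradient, which is enough to flip an IDS classifier's decision while leaving the traffic features almost unchanged. The sketch below is a minimal illustration of that idea; it assumes a trained PyTorch classifier `model`, a feature tensor `x`, labels `y`, and an illustrative perturbation budget `epsilon`, none of which are taken from the paper itself.

# Minimal FGSM sketch (illustrative only; `model`, `x`, `y`, and `epsilon`
# are assumptions, not artifacts of the paper under discussion).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.05):
    """Craft an adversarial variant of traffic-feature tensor x via FGSM."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

In adversarial training, perturbed samples produced this way (or with the iterative PGD variant) are mixed back into the training batches so the detector learns to classify them correctly; feature values can additionally be clipped back to their valid ranges, which corresponds to the input sanitization step the abstract mentions.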
