ResearchBib Share Your Research, Maximize Your Social Impacts
Sign for Notice Everyday Sign up >> Login

Artificial Intrusions: The Dark Art of AI Exploitation

Journal: International Journal of Computer Science and Mobile Computing - IJCSMC (Vol.13, No. 8)

Publication Date:

Authors : ;

Page : 54-59

Keywords : Artificial intelligence (AI); AI exploitation; Artificial intrusion; user consent and autonomy; social engineering; phishing; large language mechanisms (LLM); unethical practices;

Source : Downloadexternal Find it from : Google Scholarexternal

Abstract

AI not only opens a whole new realm of possibilities but also a can of worms and opportunities for dark exploitation. For instance, AI systems exhibit vulnerabilities that stem from the algorithm configurations which are exploited by hackers to initiate either input or poisoning attacks. Such vulnerability exacerbates the susceptibility to prompt injection-based adversarial attacks that jeopardize the AI system and large language mechanisms (LLM) as in the example of ChatGPT shared in the results analysis section. AI has also exacerbated the intensity and effectiveness of social engineering and phishing attacks by enabling realistic deep fakes and cloning of human voices and audio. The irresponsible and malicious application of AI technologies has also aggravated unethical practices like breaching user data privacy rules for profit while foregoing user consent and autonomy. Hence, proactive, holistic and interdisciplinary efforts involving AI developers, users, researchers and regulators among others will be required to counter the evolving nature of AI security threats and attacks. Ultimately, by comprehensively exploring such aspects of Artificial intrusion and exploitation, this research will profoundly contribute to the comprehension of the risks affiliated with AI exploitation and how such risks can be proactively addressed to secure AI systems.

Last modified: 2024-08-18 01:36:40