AI is both an asset and a weapon in cybersecurity. While AI-driven security solutions empower organizations with advanced threat detection and automated defences, cybercriminals use AI to orchestrate sophisticated attacks. This dual nature presents a challenge: securing digital systems while preserving human oversight in cybersecurity decision-making.
Cybercriminals increasingly leverage AI for deception and exploitation. Deepfake technology has enabled financial fraud by impersonating executives, while AI-driven phishing personalizes emails by analysing an individual’s online footprint, making detection harder.
A multinational bank fell victim to a voice phishing (vishing) attack in which criminals cloned the CEO's voice and instructed an employee to transfer funds. This highlights the urgent need for stronger security measures and employee awareness.
Despite technological advancements, humans remain cybersecurity’s weakest link. AI-powered attacks exploit human psychology—trust in authority, urgency, and familiarity. Automated spear-phishing and AI-generated misinformation manipulate public perception and compromise security at scale.
In a major breach, an employee clicked on an AI-generated phishing email mimicking IT support. The email contained malware that spread across the network. AI-driven malware can mutate in real time, evading traditional security controls, making vigilance and AI-enhanced defence strategies crucial.
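Why mutation defeats signature-based controls can be shown with a minimal sketch. The payloads and signature database below are purely illustrative; real anti-malware engines combine signatures with heuristics and behavioural analysis.

```python
import hashlib

# Hypothetical signature database: hashes of known-bad payloads.
known_bad_hashes = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Flag a payload only if its hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in known_bad_hashes

original = b"malicious_payload_v1"
mutated = b"malicious_payload_v1 "  # one appended byte changes the hash entirely

print(signature_match(original))  # True: exact match against the database
print(signature_match(mutated))   # False: a trivial mutation evades the signature
```

A single-byte change produces an entirely different hash, which is why polymorphic malware that rewrites itself on each infection slips past purely signature-based controls and motivates behaviour-based, AI-assisted detection.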
While AI enhances security through real-time threat detection, anomaly identification, and automated response, it also introduces risks. AI-driven security tools rely on vast datasets, which attackers can manipulate through data poisoning attacks. Additionally, AI security tools sometimes misclassify legitimate activity, causing business disruptions.
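How data poisoning blunts an AI detector can be sketched with a toy z-score anomaly model. The login-volume figures and the threshold are invented for illustration; production systems use far richer features and models.

```python
import statistics

def zscore_is_anomaly(history, value, threshold=3.0):
    """Flag `value` as anomalous if it lies more than `threshold`
    standard deviations from the mean of the training history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(value - mean) > threshold * stdev

# Clean baseline: login attempts per hour.
clean = [10, 12, 11, 9, 10, 12, 11, 10]

# Poisoned baseline: the attacker slowly injects inflated counts
# so the model learns that high volumes are "normal".
poisoned = clean + [60, 65, 70, 75]

attack_volume = 80
print(zscore_is_anomaly(clean, attack_volume))     # True: caught
print(zscore_is_anomaly(poisoned, attack_volume))  # False: poisoning hid the attack
```

The poisoned history inflates both the mean and the variance, so the same attack traffic no longer clears the detection threshold. This is why the provenance and integrity of training data matter as much as the model itself.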
A Fortune 500 company's Security Operations Centre (SOC) was overwhelmed by false positives generated by an improperly trained AI model. While automation accelerated threat handling, human analysts were still needed to fine-tune the model and validate alerts.
AI is revolutionizing SOC operations, enabling faster processing of security data but lacking business context and ethical judgment. Security professionals must shift from manual analysis to AI-augmented decision-making.
For instance, a global retailer adopted AI-powered threat detection, automating 80% of security alerts. However, the AI repeatedly misclassified legitimate software updates as threats. A hybrid approach, in which AI handles routine tasks and human experts focus on high-risk scenarios, proved most effective.
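The hybrid approach described above can be sketched as a simple triage policy. The alert fields, severity labels, and confidence threshold are illustrative assumptions, not the retailer's actual system.

```python
def triage(alert: dict, auto_threshold: float = 0.95) -> str:
    """Route an alert: automate only high-confidence, low-severity
    detections; everything else goes to a human analyst."""
    if alert["severity"] == "high":
        return "escalate_to_analyst"   # high-risk: always human review
    if alert["model_confidence"] >= auto_threshold:
        return "auto_remediate"        # routine and high-confidence: automate
    return "escalate_to_analyst"       # uncertain: a human validates

alerts = [
    {"id": 1, "severity": "low",  "model_confidence": 0.99},  # routine phishing URL
    {"id": 2, "severity": "high", "model_confidence": 0.99},  # possible intrusion
    {"id": 3, "severity": "low",  "model_confidence": 0.60},  # e.g. a software update
]
for alert in alerts:
    print(alert["id"], triage(alert))
```

Here the model's uncertain verdict on the software update (alert 3) is escalated rather than auto-remediated, which is precisely the safeguard that prevents the kind of disruptive misclassification the retailer experienced.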
Despite AI’s efficiency, human intuition remains vital. Organizations should:
- Prioritize cybersecurity awareness training for all employees
- Establish ethical AI governance frameworks
- Keep human intervention central to high-stakes security decision-making
AI presents both challenges and opportunities in cybersecurity. While AI-driven attacks grow in sophistication, AI-powered defences offer unparalleled protection when used correctly. However, technology alone cannot replace human judgment, ethical considerations, and adaptability. By balancing AI automation with human expertise, organizations can build a cybersecurity posture that anticipates, detects, and mitigates threats. In the AI era, the human element in cybersecurity remains more critical than ever.
This article was written by Dr. Lalit Gupta, Head of IT GRC & Cyber Security - Al Gihaz Holding, KSA