Google Gemini AI Under Attack: APTs and Cybercriminals Exploit Platform Across the Entire Cyber Kill Chain
- 17 hours ago
- 5 min read

Executive Summary
The recent disclosure by Google's Threat Intelligence Group (GTIG) highlights a significant escalation in the adversarial misuse of the Gemini AI platform by advanced persistent threat (APT) actors and information operations (IO) groups. These threat actors, including state-sponsored groups from Iran, China, North Korea, and Russia, are leveraging Gemini AI to facilitate every phase of the cyberattack lifecycle. While Google has implemented robust safety controls and adversarial training to mitigate the most severe abuses, the platform is being actively exploited for reconnaissance, malware development, phishing, evasion, and campaign automation. The abuse of Gemini AI is not limited to nation-state actors; financially motivated cybercriminals are also capitalizing on the technology, particularly through the sale and use of jailbroken large language models (LLMs) such as FraudGPT and WormGPT. This report provides a comprehensive technical analysis of the tactics, techniques, and procedures (TTPs) observed, the sectors and geographies targeted, and actionable mitigation strategies for organizations concerned about the evolving threat landscape surrounding generative AI.
Threat Actor Profile
The primary threat actors abusing Gemini AI are sophisticated APT groups and IO actors with established histories of targeting government, defense, technology, and critical infrastructure sectors. Iranian APTs, notably APT42, have demonstrated prolific use of Gemini AI for phishing, vulnerability research, and campaign content generation. Chinese APTs, including APT41 and the IO group DRAGONBRIDGE, are leveraging the platform for reconnaissance on US military and IT targets, lateral movement scripting, and influence operations. North Korean actors, such as APT43, are utilizing Gemini AI for infrastructure research, payload development, and support of clandestine IT worker schemes. Russian groups, including KRYMSKYBRIDGE and Doppelganger, have focused on malware rewriting, encryption, and information operations. Financially motivated actors are exploiting jailbroken LLMs for business email compromise (BEC), malware development, and social engineering at scale. The common thread among these actors is the use of Gemini AI to accelerate and automate attack workflows, enhance the sophistication of phishing and malware, and optimize influence campaigns.
Technical Analysis of Malware/TTPs
The technical exploitation of Gemini AI by threat actors spans the entire MITRE ATT&CK framework. During the reconnaissance phase, actors use Gemini AI to gather detailed information on target organizations, individuals, and infrastructure (T1592, T1595). For initial access, the platform is employed to craft highly convincing phishing emails and social engineering lures (T1566), as well as to research and exploit public-facing application vulnerabilities (T1190). In the weaponization and execution stages, Gemini AI is used to generate and obfuscate malware, infostealers, and scripts (T1059, T1027), including code for persistence (T1136, T1547), privilege escalation (T1068), and lateral movement (T1021). Credential access is facilitated through AI-generated scripts for OS credential dumping (T1003), while discovery and collection phases leverage the platform for account and remote system discovery (T1087, T1018) and email collection (T1114). Exfiltration and impact are supported by code for data exfiltration over C2 channels (T1041) and endpoint denial of service (T1499).
A notable technical vector is the use of prompt injection and jailbreak techniques to bypass Gemini AI's safety controls. Threat actors employ publicly available jailbreak prompts and indirect prompt injection methods to coerce the model into generating malicious code, leaking sensitive information, or producing phishing kits. While Google's adversarial training has mitigated the most dangerous prompt injection attempts, the ongoing arms race between attackers and defenders in the LLM space is evident. Additionally, underground LLMs such as FraudGPT and WormGPT—which lack safety controls—are being sold and used for malware development, BEC, and other cybercriminal activities.
Exploitation in the Wild
As of June 2024, there is no public evidence of successful, novel AI-enabled attacks directly attributed to Gemini AI. However, the platform is being actively used to enhance the productivity and effectiveness of existing TTPs. Iranian APTs have used Gemini AI for phishing and vulnerability research targeting defense and policy experts, with a focus on products such as Mikrotik, Apereo, and Atlassian. Chinese APTs have leveraged the platform for reconnaissance on US military and IT service providers, scripting for Active Directory attacks, and reverse engineering endpoint detection and response (EDR) tools. North Korean actors have used Gemini AI for payload development and support of IT worker operations targeting Western organizations. Russian groups have focused on campaign planning and malware rewriting.
Information operations actors from Iran, China, and Russia are using Gemini AI for content generation, translation, persona development, and campaign optimization, including search engine optimization (SEO) and social media reach. The Chinese IO group DRAGONBRIDGE has used the platform for synthetic content creation in influence campaigns. Financially motivated actors are leveraging jailbroken LLMs for BEC, malware development, and social engineering, with underground marketplaces facilitating the sale of these tools.
Notable exploitation vectors include prompt injection research, as documented in academic studies, and the abuse of Gemini AI for phishing via calendar invite manipulation and stealth phishing attacks against Gmail users. While Gemini AI's safety controls have prevented the most severe abuses, the platform remains a valuable tool for adversaries seeking to automate and scale their operations.
Victimology and Targeting
The sectors most heavily targeted by Gemini AI-enabled threat activity include defense, government, technology, and critical infrastructure. Iranian APTs have focused on Middle Eastern, US, Israeli, and European defense and policy organizations, as well as dissidents and NGOs. Chinese APTs have targeted US military, IT service providers, and government agencies in at least eight countries, with a strategic focus on defense, technology, and government sectors. North Korean actors have targeted US, South Korean, and German defense and industrial sectors, as well as cryptocurrency and nuclear technology organizations. Russian APTs have targeted Western organizations for malware development and information operations. Information operations actors are targeting global audiences, with a focus on the US, Europe, Middle East, and Asia. Financially motivated actors are targeting organizations of all sizes with BEC, phishing, and malware campaigns enhanced by AI-generated content.
Mitigation and Countermeasures
Organizations should adopt a multi-layered approach to mitigate the risks associated with Gemini AI and LLM abuse. Security teams must monitor for AI-generated content in phishing and BEC attempts, leveraging advanced email security solutions capable of detecting synthetic text and voice. Logs should be reviewed for anomalous LLM usage, particularly from high-risk geographies or known APT infrastructure. Detection mechanisms for prompt injection and indirect prompt manipulation should be implemented, including the use of adversarial testing and red teaming for LLM-based applications. Organizations should apply Google's Secure AI Framework (SAIF) and adhere to best practices for LLM security, including regular updates, access controls, and monitoring of API usage. It is critical to stay informed of vendor advisories and promptly patch any Gemini-related vulnerabilities as they are disclosed. Security awareness training should be updated to include the risks of AI-generated phishing and social engineering. Finally, organizations should consider leveraging third-party risk management (TPRM) platforms to continuously assess and monitor the security posture of vendors and partners utilizing generative AI technologies.
References
Google Cloud Blog: Adversarial Misuse of Generative AI Infosecurity Magazine: Nation-State Hackers Abuse Gemini AI Tool Wired: Hackers Hijacked Google's Gemini AI With a Poisoned Calendar Tenable: Three New Gemini Vulnerabilities Indirect Prompt Injection Research: Invitation is All You Need ACA Global: Gemini Phishing Risks Security Boulevard: Calendar Invite Abuse BankInfoSecurity: Prompt Injection Risk NVD - National Vulnerability Database
About Rescana
Rescana is a leader in third-party risk management (TPRM), providing organizations with a comprehensive platform to assess, monitor, and mitigate cyber risks across their vendor ecosystem. Our advanced threat intelligence and continuous monitoring capabilities empower security teams to stay ahead of emerging threats, including those associated with generative AI and large language models. For more information or to discuss how Rescana can help your organization manage AI-related risks, we are happy to answer questions at ops@rescana.com.
.png)