AI-Powered Intrusion Achieves Full Admin Access in 8 Minutes: Detailed Analysis of AWS Cloud Security Breach

  • Feb 4
  • 6 min read

Executive Summary

On November 28, 2025, a threat actor achieved full administrative access to an Amazon Web Services (AWS) environment in just eight minutes, marking a significant escalation in the speed and automation of cloud attacks. The operation began with the compromise of valid credentials found in public Simple Storage Service (S3) buckets containing Retrieval-Augmented Generation (RAG) data for AI models. Leveraging these credentials, the attacker conducted rapid reconnaissance, escalated privileges through AWS Lambda code injection, and moved laterally across 19 unique AWS principals. The attack chain included the abuse of Amazon Bedrock for unauthorized AI model access (a technique known as LLMjacking) and the attempted provisioning of high-performance GPU instances for resource abuse. The operation was characterized by the use of large language models (LLMs) to automate reconnaissance, code generation, and decision-making, compressing the attack lifecycle from hours to minutes. The incident highlights the critical risks posed by exposed credentials, overly permissive cloud roles, and the growing use of AI to accelerate and automate cloud intrusions. All technical details and timelines are confirmed by the Sysdig Technical Report (https://www.sysdig.com/blog/ai-assisted-cloud-intrusion-achieves-admin-access-in-8-minutes), TechNadu (https://www.technadu.com/ai-assisted-cloud-intrusion-compromises-aws-environment-in-8-minutes-highlights-new-cloud-security-threats/619516/), and CSO Online (https://www.csoonline.com/article/4126336/from-credentials-to-cloud-admin-in-8-minutes-ai-supercharges-aws-attack-chain.html).

Technical Information

The attack began with the discovery and use of valid AWS credentials stored in public S3 buckets. These credentials belonged to an Identity and Access Management (IAM) user with read and write permissions on AWS Lambda and limited access to Amazon Bedrock. The S3 buckets, containing RAG data for AI models, were named using common AI tool conventions, making them susceptible to automated discovery by attackers. The compromised IAM user had the ReadOnlyAccess policy, enabling the attacker to enumerate resources across a wide range of AWS services, including Secrets Manager, Systems Manager (SSM), S3, Lambda, Elastic Compute Cloud (EC2), Elastic Container Service (ECS), Organizations, Relational Database Service (RDS), CloudWatch, Key Management Service (KMS), Bedrock, OpenSearch Serverless, and SageMaker (Sysdig Technical Report, 2026-02-03).
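Credentials exposed this way are trivially discoverable by automated tooling, because AWS access key IDs follow a fixed, recognizable format. The sketch below shows the kind of pattern matching such scanners (and defenders auditing their own data) rely on; the regexes are illustrative and deliberately simplified, not exhaustive.

```python
import re

# AWS access key IDs are a fixed prefix (AKIA for long-term keys, ASIA for
# temporary ones) followed by 16 uppercase alphanumerics. Secret keys are
# 40-character base64-like strings and match far more loosely (high false
# positives), so only key IDs are used for flagging here.
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")
SECRET_KEY_RE = re.compile(r"\b[A-Za-z0-9/+=]{40}\b")  # context needed in practice

def find_candidate_credentials(blob: str) -> list[str]:
    """Return access-key-ID-like strings found in a text blob (e.g. RAG data)."""
    return ACCESS_KEY_RE.findall(blob)
```

Running this over the objects in a bucket of RAG documents would surface any embedded key IDs in seconds, which is essentially what automated credential harvesters do at scale.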

After initial reconnaissance, the attacker attempted to assume roles whose names suggested administrative privileges, without success, though they were able to assume other existing roles such as sysadmin and netadmin. They then exploited the IAM user’s permissions to update Lambda function code and configuration. By injecting malicious Python code (with comments in Serbian, suggesting a possible regional origin) into an existing Lambda function named EC2-init, iterating three times, the attacker successfully targeted the admin user "frick." Because the Lambda function’s execution role had administrative privileges, the attacker was able to create new access keys for "frick" and retrieve them directly from the function’s output (Sysdig Technical Report, 2026-02-03).
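This escalation pattern leaves a distinctive trail in CloudTrail: a function-modification event followed shortly by a CreateAccessKey call. A defender-side sketch of that correlation is below; the event shape is deliberately simplified (real CloudTrail records nest the caller under userIdentity), and the field names are assumptions for illustration.

```python
from datetime import datetime, timedelta

def flag_lambda_escalation(events: list[dict], window_minutes: int = 15) -> list[tuple]:
    """Flag CreateAccessKey calls occurring shortly after a Lambda function's
    code or configuration was modified -- the escalation pattern described above.

    Each event is a simplified CloudTrail record:
    {"eventName": ..., "eventTime": ISO-8601 string, "arn": caller ARN}.
    """
    window = timedelta(minutes=window_minutes)
    updates = [e for e in events if e["eventName"] in
               ("UpdateFunctionCode", "UpdateFunctionConfiguration")]
    alerts = []
    for e in events:
        if e["eventName"] != "CreateAccessKey":
            continue
        t = datetime.fromisoformat(e["eventTime"])
        for u in updates:
            # Alert when the key creation falls inside the window after an update.
            if timedelta(0) <= t - datetime.fromisoformat(u["eventTime"]) <= window:
                alerts.append((u["eventName"], e["arn"]))
    return alerts
```

In this incident the CreateAccessKey call would have come from the EC2-init function's own execution role, which is itself a strong signal: Lambda execution roles rarely have a legitimate reason to mint IAM access keys.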

With administrative access, the attacker moved laterally by assuming six different IAM roles across 14 sessions and gaining access to five IAM users, resulting in a total of 19 unique AWS principals involved in the attack. This distribution of activity across multiple identities complicated detection and facilitated persistence, as the attacker only needed access to one principal to maintain a foothold in the environment (Sysdig Technical Report, 2026-02-03).
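Reconstructing this kind of spread-out activity usually means building a who-assumed-what graph from AssumeRole events and counting distinct identities. A minimal sketch, again over a simplified event shape with hypothetical field names (real records carry these values inside userIdentity and requestParameters):

```python
from collections import defaultdict

def principal_graph(assume_role_events: list[dict]) -> tuple[dict, int]:
    """Build a role-assumption graph and count distinct principals involved.

    Each event: {"sourcePrincipal": who called AssumeRole,
                 "assumedRole": the role that was assumed}.
    Chained assumptions (role A assuming role B) show up as edges from a role.
    """
    edges = defaultdict(set)
    principals = set()
    for e in assume_role_events:
        src, dst = e["sourcePrincipal"], e["assumedRole"]
        edges[src].add(dst)
        principals.update((src, dst))
    return dict(edges), len(principals)
```

Applied to the full CloudTrail record of this intrusion, a graph like this would collapse 14 sessions across six roles and five users into a single connected picture, which is exactly what per-principal alerting fails to see.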

The attacker collected sensitive data from multiple services, including secrets from Secrets Manager, SSM parameters, CloudWatch logs, Lambda function source code, internal S3 data, and CloudTrail events. They also enumerated IAM Access Analyzer findings, which could provide further insight into the environment’s security posture (Sysdig Technical Report, 2026-02-03).

A notable aspect of the attack was the abuse of Amazon Bedrock for LLMjacking. After confirming that model invocation logging was disabled, the attacker invoked multiple AI models, including Claude Sonnet 4, Claude Opus 4, Claude 3.5 Sonnet, Claude 3 Haiku, DeepSeek R1, Llama 4 Scout, Amazon Nova Premier, Amazon Titan Image Generator, and Cohere Embed v3. This activity demonstrates the attacker’s intent to exploit proprietary AI models for unauthorized purposes (Sysdig Technical Report, 2026-02-03; TechNadu, 2026-02).
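The logging check the attacker performed has a straightforward defender-side mirror: periodically verify that invocation logging is actually configured. The parser below runs against a sample response shaped like the one Bedrock's GetModelInvocationLoggingConfiguration API returns (the exact key names should be confirmed against the current boto3 documentation); the live call is sketched in comments only.

```python
def invocation_logging_enabled(response: dict) -> bool:
    """True if a GetModelInvocationLoggingConfiguration response shows at
    least one log destination (CloudWatch or S3) configured."""
    cfg = response.get("loggingConfig") or {}
    return bool(cfg.get("cloudWatchConfig") or cfg.get("s3Config"))

# Live check (requires AWS credentials; region is an example):
#   import boto3
#   bedrock = boto3.client("bedrock", region_name="us-east-1")
#   resp = bedrock.get_model_invocation_logging_configuration()
#   assert invocation_logging_enabled(resp), "Bedrock invocation logging is off!"
```

Had such a check been alerting in this environment, the very first enumeration of the logging configuration by the attacker would have stood out against a known-good baseline.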

The attacker also attempted to provision high-performance GPU instances, first trying to launch a p5.48xlarge instance (which failed due to capacity constraints) before successfully launching a p4d.24xlarge instance. A startup script supplied at launch deployed a publicly accessible JupyterLab server on port 8888, providing a backdoor to the instance independent of AWS credentials. The instance was terminated after five minutes, but the action demonstrates the potential for resource abuse and persistent access (Sysdig Technical Report, 2026-02-03).
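Launches of GPU instance families are rare enough in most accounts to warrant a dedicated alert. A minimal filter over simplified RunInstances events is sketched below; the prefix list is illustrative rather than exhaustive, and the flattened requestParameters shape is an assumption for readability.

```python
# Common GPU-bearing EC2 instance families (illustrative, not exhaustive).
GPU_PREFIXES = ("p3", "p4", "p5", "g4", "g5", "g6")

def gpu_launches(events: list[dict]) -> list[str]:
    """Return the instance types of RunInstances events that requested a
    GPU instance family.

    Each event is a simplified CloudTrail record, e.g.
    {"eventName": "RunInstances",
     "requestParameters": {"instanceType": "p4d.24xlarge"}}.
    """
    hits = []
    for e in events:
        if e.get("eventName") != "RunInstances":
            continue
        itype = e.get("requestParameters", {}).get("instanceType", "")
        if any(itype.startswith(p) for p in GPU_PREFIXES):
            hits.append(itype)
    return hits
```

In this incident both the failed p5.48xlarge attempt and the successful p4d.24xlarge launch would have matched, giving defenders a signal well before the JupyterLab backdoor came up.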

To evade detection, the attacker used an IP rotator tool to change the source IP address for each request and distributed operations across 19 different principals. They also leveraged role chaining, assuming one role to assume another, further complicating investigation and response (Sysdig Technical Report, 2026-02-03).
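IP rotation defeats simple per-IP blocking, but it inverts into its own detection signal: a single principal seen from many source addresses in a short window is anomalous. A sketch, using a simplified event shape with hypothetical field names:

```python
from collections import defaultdict

def rotating_ip_principals(events: list[dict], threshold: int = 5) -> dict:
    """Return principals whose API calls came from an unusually high number
    of distinct source IPs -- a telltale of IP-rotator tooling.

    Each event: {"principal": key ID or ARN, "sourceIPAddress": caller IP}.
    """
    ips = defaultdict(set)
    for e in events:
        ips[e["principal"]].add(e["sourceIPAddress"])
    return {p: len(s) for p, s in ips.items() if len(s) >= threshold}
```

The threshold and window would need tuning per environment (NAT gateways and mobile clients legitimately rotate addresses), but a fresh access key calling from five or more IPs within minutes, as seen here, is a strong escalation signal.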

Mapped to the MITRE ATT&CK framework, the attack chain includes Valid Accounts (T1078), Unsecured Credentials: Credentials In Files (T1552.001), Account Discovery (T1087), code injection into the Lambda function (mapped in the source to T1055, Process Injection), Use Alternate Authentication Material (T1550), Data from Local System (T1005), Resource Hijacking (T1496), and Indicator Removal: File Deletion (T1070.004), among others. The technical evidence supporting these mappings is cross-verified across all primary sources.

Affected Versions & Timeline

The incident affected an AWS environment where public S3 buckets contained RAG data and valid IAM credentials. The compromised IAM user had permissions on Lambda and limited access to Bedrock. The attack timeline, as reconstructed from CloudTrail logs and confirmed by the Sysdig Threat Research Team, is as follows (Sysdig Technical Report, 2026-02-03):

• 0:00:00: The attacker extracted credentials for the IAM user from public S3 buckets.
• 0:06:00: After initial enumeration, attempts to assume admin roles failed, but existing roles such as sysadmin and netadmin were assumed successfully.
• 0:08:00: The code of the Lambda function EC2-init was replaced, with three iterations before the admin user "frick" was compromised.
• 0:11:00: A new user "backdoor-admin" was created and the AdministratorAccess policy attached.
• 0:58:00 to 1:51:00: Bedrock models were invoked 13 times.
• 1:00:00: Access keys were created for users with BedrockFullAccess and AzureADRoleManager.
• 1:05:00 to 1:35:00: Extensive reconnaissance was conducted across multiple AWS services.
• 1:21:00: Additional roles were assumed.
• 1:42:00: A p4d.24xlarge GPU instance was launched successfully.
• 1:51:00: The attack ended when the threat actor’s access was terminated.

The attack chain exploited common misconfigurations, including public S3 buckets and overly permissive Lambda execution roles, which are prevalent in organizations rapidly adopting AI/ML workflows.

Threat Activity

The threat actor demonstrated a high degree of automation and technical sophistication, leveraging large language models (LLMs) to accelerate each phase of the attack. The use of LLMs enabled rapid reconnaissance, code generation (including Python scripts with Serbian comments), and real-time decision-making, significantly reducing the time required to achieve administrative access. The attacker’s activities included:

• Credential theft from public S3 buckets, providing initial access to the AWS environment.
• Automated enumeration of AWS services and AI resources using the compromised IAM user’s permissions.
• Privilege escalation via Lambda code injection, exploiting the UpdateFunctionCode and UpdateFunctionConfiguration permissions to gain admin access.
• Lateral movement across 19 AWS principals, including role chaining and cross-account role assumption, to distribute activity and maintain persistence.
• Data collection and exfiltration from multiple AWS services, including secrets, logs, and internal data.
• LLMjacking: invoking multiple proprietary AI models in Amazon Bedrock after confirming that invocation logging was disabled.
• Resource abuse through the attempted and successful provisioning of high-performance GPU instances, including deployment of a public JupyterLab server for persistent access.
• Defense evasion via IP rotation and the distribution of activity across multiple principals to complicate detection and response.

The operation’s speed and efficiency, combined with the use of AI-generated code and artifacts, represent a significant evolution in cloud attack methodologies. The incident underscores the need for organizations to reassess their cloud security posture in light of AI-driven threats.

Mitigation & Workarounds

The following mitigation strategies are prioritized by severity, based on the technical evidence and recommendations from the Sysdig Threat Research Team (Sysdig Technical Report, 2026-02-03):

Critical: Immediately remove public access from all S3 buckets containing sensitive data, including RAG data and AI model artifacts. Audit all IAM users and roles for excessive permissions, especially those with UpdateFunctionCode, UpdateFunctionConfiguration, and PassRole permissions on Lambda functions. Restrict these permissions to only those principals that require them for operational purposes.
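Blocking public access at the bucket (or account) level is a one-call fix with boto3. The helper below builds the standard four-flag configuration; the live call is sketched in comments, and the bucket name is a placeholder.

```python
def full_public_access_block() -> dict:
    """The four-flag configuration that blocks every public access path
    (ACL-based and policy-based) to an S3 bucket."""
    return {
        "BlockPublicAcls": True,       # reject new public ACLs
        "IgnorePublicAcls": True,      # neutralize existing public ACLs
        "BlockPublicPolicy": True,     # reject new public bucket policies
        "RestrictPublicBuckets": True, # restrict access under existing public policies
    }

# Applied with boto3 (bucket name is a placeholder):
#   import boto3
#   boto3.client("s3").put_public_access_block(
#       Bucket="my-rag-data-bucket",
#       PublicAccessBlockConfiguration=full_public_access_block(),
#   )
```

Applying the same configuration account-wide (via the S3 Control API) prevents newly created RAG-data buckets from ever becoming public in the first place.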

High: Enable versioning for all Lambda functions to maintain immutable records of code changes and facilitate forensic analysis. Enable model invocation logging for Amazon Bedrock and other AI services to detect unauthorized usage. Monitor for large-scale enumeration activity, including IAM Access Analyzer findings, as this can indicate reconnaissance by threat actors.

Medium: Regularly rotate and audit IAM credentials, especially those used for automation and integration with AI services. Implement network controls to restrict access to management interfaces and sensitive resources, reducing the attack surface for lateral movement and resource abuse.

Low: Educate development and operations teams on the risks of exposing credentials and sensitive data in public repositories and storage locations. Review and update incident response plans to account for the accelerated timelines of AI-assisted attacks.

All mitigation recommendations are based on confirmed technical evidence and are consistent with best practices for securing AWS environments against automated, AI-driven threats.

References

Sysdig Threat Research Team, "AI-Assisted Cloud Intrusion Achieves Admin Access in 8 Minutes," 2026-02-03. https://www.sysdig.com/blog/ai-assisted-cloud-intrusion-achieves-admin-access-in-8-minutes

TechNadu, "AI-Assisted Cloud Intrusion Compromises AWS Environment in 8 Minutes, Highlights New Cloud Security Threats." https://www.technadu.com/ai-assisted-cloud-intrusion-compromises-aws-environment-in-8-minutes-highlights-new-cloud-security-threats/619516/

CSO Online, "From Credentials to Cloud Admin in 8 Minutes: AI Supercharges AWS Attack Chain." https://www.csoonline.com/article/4126336/from-credentials-to-cloud-admin-in-8-minutes-ai-supercharges-aws-attack-chain.html

About Rescana

Rescana provides a Third-Party Risk Management (TPRM) platform designed to help organizations identify, assess, and monitor risks in their cloud and supply chain environments. Our platform enables continuous discovery of exposed assets, misconfigurations, and credential leaks, supporting rapid detection and response to incidents similar to the one described in this report. For questions or further information, contact us at ops@rescana.com.
