ETSI EN 304 223: Baseline Cybersecurity Standard for AI Models and Systems in Europe
- Rescana

Executive Summary
Publication Date: 15 January 2026
The European Telecommunications Standards Institute (ETSI) has published ETSI EN 304 223, a groundbreaking European Standard (EN) that establishes baseline cybersecurity requirements for artificial intelligence (AI) models and systems. This standard introduces a lifecycle-based framework for developers, vendors, and operators, addressing unique AI threats such as data poisoning and prompt injection. By setting clear, actionable requirements, ETSI EN 304 223 aims to ensure the trustworthy adoption of AI in critical applications across Europe and beyond.
Introduction
The rapid integration of AI into critical infrastructure, business operations, and consumer services has introduced new and complex cybersecurity challenges. Unlike traditional IT systems, AI models are inherently data-driven, highly complex, and subject to evolving threats that can undermine their integrity and reliability. Recognizing these challenges, the European Telecommunications Standards Institute (ETSI) has released ETSI EN 304 223, the first comprehensive European standard dedicated to the cybersecurity of AI models and systems. This report provides an in-depth analysis of the standard’s technical requirements, security implications, supply chain considerations, compliance mandates, and its broader impact on the AI ecosystem.
Technical Details and Core Functionality
ETSI EN 304 223 is structured around the entire lifecycle of AI systems, encompassing secure design, development, deployment, operation, and decommissioning. The standard introduces a set of baseline security requirements that address the unique characteristics and risks of AI technologies. These requirements include data integrity controls, model robustness measures, and operational security practices tailored to the AI context.
A key innovation of the standard is its lifecycle-based approach, which maps specific security controls to each lifecycle phase, from initial design through decommissioning. This ensures that security is not treated as a one-time event but as a continuous operational discipline. The standard also introduces controls for AI-specific threats, such as data poisoning, model inversion, prompt injection, and adversarial attacks, which are not adequately covered by traditional cybersecurity frameworks.
Additionally, ETSI EN 304 223 mandates secure supply chain management, requiring organizations to verify the provenance of third-party models and datasets, assess vendor security practices, and maintain accountability throughout the supply chain.
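In practice, provenance verification of third-party models and datasets often starts with comparing cryptographic digests against a supplier-issued manifest. The sketch below illustrates that idea; the manifest format and function names are our own assumptions, not prescribed by the standard.

```python
import hashlib


def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


def verify_artifact(name: str, data: bytes, manifest: dict[str, str]) -> bool:
    """Check a third-party artifact against the supplier's digest manifest.

    Returns True only if the artifact is listed in the manifest and its
    digest matches the recorded value; any unknown or altered artifact
    is rejected.
    """
    expected = manifest.get(name)
    return expected is not None and expected == sha256_of(data)
```

An organization would typically obtain the manifest over an authenticated channel (or verify its signature) before trusting it; the digest comparison alone only detects tampering after the manifest itself is trusted.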
Key Innovations and Differentiators
ETSI EN 304 223 stands out as the first European standard to provide a comprehensive, structured, and lifecycle-based set of baseline security requirements specifically for AI. Its most significant innovations include the treatment of security as a continuous process, explicit coverage of AI-specific threats, and the assignment of clear accountability across technical, operational, and supply chain domains.
The standard’s lifecycle security model ensures that organizations address risks at every stage, from initial design to decommissioning. By explicitly targeting threats like data poisoning and prompt injection, ETSI EN 304 223 fills critical gaps left by existing IT security standards. The emphasis on accountability ensures that all stakeholders, including developers, vendors, and operators, understand and fulfill their security responsibilities.
Security Implications and Potential Risks
The publication of ETSI EN 304 223 reflects a growing recognition of the novel and sophisticated threats facing AI systems. The standard identifies and addresses risks such as data poisoning, where attackers manipulate training data to subvert model behavior; model inversion, which involves extracting sensitive information from trained models; prompt injection, where malicious inputs cause unintended outputs; and adversarial examples, which are crafted to deceive AI models.
To mitigate these risks, the standard mandates robust data validation, continuous model monitoring, and incident response processes tailored to the unique characteristics of AI. These controls are designed to ensure that AI systems remain trustworthy, resilient, and capable of withstanding both known and emerging threats.
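The data-validation controls the standard calls for can take many forms; one common pattern is a gate that checks every training record against a schema and quarantines anything suspicious for human review before it can influence the model. The field names, types, and label set below are illustrative assumptions for the sketch, not requirements from ETSI EN 304 223.

```python
from typing import Any

# Hypothetical schema for incoming training records: field name -> type.
SCHEMA = {"feature": float, "label": int}
ALLOWED_LABELS = {0, 1}


def validate_record(record: dict[str, Any]) -> list[str]:
    """Return a list of validation errors (empty means the record passes)."""
    errors = []
    for field, expected_type in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}")
    if "label" in record and record.get("label") not in ALLOWED_LABELS:
        errors.append("label outside allowed set")
    return errors


def filter_dataset(records):
    """Split records into accepted and quarantined sets for review."""
    accepted, quarantined = [], []
    for record in records:
        errs = validate_record(record)
        (quarantined if errs else accepted).append(record)
    return accepted, quarantined
```

Quarantining rather than silently dropping bad records matters here: a cluster of rejected records with consistently shifted labels can itself be an early indicator of a poisoning attempt.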
Supply Chain and Third-Party Dependencies
A significant portion of ETSI EN 304 223 is dedicated to managing the risks associated with the AI supply chain. Modern AI systems often rely on third-party models, datasets, and components, introducing new vectors for compromise. The standard requires organizations to verify the provenance of all third-party assets, conduct security assessments of vendors and suppliers, and implement ongoing monitoring of supply chain risks, including updates and patches.
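The ongoing-monitoring requirement can be sketched as a periodic audit that compares the components actually in use against an approved, version-pinned inventory and flags anything unapproved or drifted. The inventory format and component names here are assumptions made for illustration.

```python
# Hypothetical approved inventory: component name -> pinned version.
APPROVED = {
    "embedding-model": "2.1.0",
    "tokenizer": "0.9.3",
}


def audit_components(in_use: dict[str, str],
                     approved: dict[str, str]) -> dict[str, list[str]]:
    """Classify in-use components as unapproved or version-drifted.

    Anything absent from the approved inventory is reported as
    unapproved; anything present but at a different version is
    reported as drifted.
    """
    report = {"unapproved": [], "drifted": []}
    for name, version in in_use.items():
        if name not in approved:
            report["unapproved"].append(name)
        elif approved[name] != version:
            report["drifted"].append(f"{name}: {version} != {approved[name]}")
    return report
```

Run on a schedule (and on every deployment), a check like this turns supply chain accountability from a one-off assessment into the continuous discipline the standard describes.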
By operationalizing accountability across the model, data, and supply chain domains, ETSI EN 304 223 ensures that organizations maintain visibility and control over their extended AI ecosystem.
Security Controls and Compliance Requirements
ETSI EN 304 223 defines a comprehensive set of mandatory and recommended security controls for AI systems. These include secure design and development practices such as threat modeling and secure coding for AI pipelines, data integrity and validation controls, model robustness testing, adversarial resilience measures, access control, audit logging, incident detection and response tailored to AI-specific threats, and supply chain risk management.
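Audit logging for AI systems benefits from being tamper-evident, since inference logs may later be the only evidence of how a model behaved. One way to achieve that is hash-chaining each entry to its predecessor; the entry fields below are illustrative assumptions, and the standard does not mandate this particular mechanism.

```python
import hashlib
import json


def append_entry(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "hash": digest})


def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash to confirm the log has not been altered."""
    prev = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

Because each hash covers the previous entry's hash, editing or deleting any record invalidates every entry after it, which makes post-hoc tampering detectable during an audit.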
The standard also provides a foundation for compliance with future regulatory requirements, such as the EU AI Act, positioning organizations to meet both current and emerging legal obligations.
Industry Adoption and Integration Challenges
Adoption of ETSI EN 304 223 is expected to accelerate as regulatory requirements and market demand for trustworthy AI increase. However, organizations may face challenges integrating the standard’s controls with existing security and compliance frameworks, particularly in environments with legacy systems and processes. Vendor readiness and maturity in implementing AI-specific controls, the complexity of managing AI supply chains, and the need for specialized skills in AI security and risk management are additional hurdles that must be addressed.
Despite these challenges, compliance with ETSI EN 304 223 is likely to become a competitive differentiator, with customers and regulators favoring vendors who can demonstrate robust AI security practices.
Vendor Security Practices and Track Record
Vendors supplying AI models and systems to the European market will be required to demonstrate compliance with ETSI EN 304 223. This includes implementing a secure development lifecycle for AI, maintaining transparent supply chain practices, conducting ongoing vulnerability management and incident response, and providing documentation and evidence of compliance for audits and regulatory reviews.
Technical Specifications and Requirements
The technical requirements outlined in ETSI EN 304 223 cover secure data handling and storage, model training and validation processes, deployment and operational security controls, monitoring and logging of AI system behavior, and the secure decommissioning and disposal of AI models and data. These specifications are designed to ensure that AI systems are protected throughout their entire lifecycle, from initial development to end-of-life.
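The monitoring requirement above implies tracking model behaviour in production, not just infrastructure health. A minimal sketch, under our own assumptions about window size and threshold, is a sliding-window monitor that alerts when a model's average output confidence drifts away from a recorded baseline.

```python
from collections import deque
from statistics import mean


class DriftMonitor:
    """Alert when recent output scores drift from a recorded baseline.

    The window size and threshold are illustrative defaults, not
    values prescribed by ETSI EN 304 223.
    """

    def __init__(self, baseline: float, window: int = 100,
                 threshold: float = 0.1):
        self.baseline = baseline
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, score: float) -> bool:
        """Record one output score; return True if drift is detected."""
        self.scores.append(score)
        return abs(mean(self.scores) - self.baseline) > self.threshold
```

A sudden drift of this kind can indicate data drift, but also an active attack such as poisoning of an online-learning pipeline, which is why the standard ties monitoring to incident response rather than treating it as a purely operational concern.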
Cyber Perspective
From a cybersecurity perspective, ETSI EN 304 223 represents a significant advancement in the formalization of AI security. For defenders, the standard provides a clear, actionable framework for securing AI systems against a wide range of threats, improving resilience and supporting compliance with emerging regulations. It also helps build trust with customers and partners by demonstrating a commitment to robust AI security practices.
For attackers, the standard raises the bar for successful exploitation of AI systems by introducing new controls and accountability measures. However, it also highlights new attack surfaces, such as the AI supply chain and third-party models, which require ongoing vigilance and proactive risk management. Organizations that fail to implement the controls outlined in ETSI EN 304 223 may become attractive targets for sophisticated adversaries seeking to exploit AI vulnerabilities.
In the broader market, adoption of ETSI EN 304 223 is likely to become a key differentiator, with organizations that can demonstrate compliance gaining a competitive edge in the rapidly evolving AI landscape.
About Rescana
Rescana provides advanced Third-Party Risk Management (TPRM) solutions that help organizations manage the complex risks associated with AI supply chains and third-party dependencies. Our platform enables you to assess vendor security practices, monitor compliance with industry standards, and gain visibility into your extended risk landscape. Whether you are deploying AI systems or relying on third-party models and data, Rescana delivers the tools and expertise you need to ensure your ecosystem is secure, compliant, and resilient.
We are happy to answer any questions at ops@rescana.com.