
Meeting the New ETSI Standard for AI Security: A Practical Guide for Tech and Enterprise Leaders

  • Writer: Editorial Team
  • Jan 16
  • 4 min read

Meeting the New ETSI Standard for AI Security

In a landmark step for artificial intelligence governance, the European Telecommunications Standards Institute (ETSI) has published ETSI EN 304 223, the first globally applicable European standard for AI cybersecurity. This standard sets baseline security requirements for AI models and systems and is rapidly becoming a critical reference point for organisations embedding machine learning and AI into core operations.


As AI technologies proliferate across industries — from financial fraud detection to automated supply chains and generative content systems — the complexity and novelty of associated risks demand a structured and lifecycle-aware approach to security. Traditional cybersecurity frameworks are insufficient for AI’s unique threat vectors, such as data poisoning, model obfuscation, membership inference and indirect prompt injection — attacks that target behaviours specific to machine learning systems rather than conventional software flaws. ETSI’s new standard responds directly to these challenges.


Why a Standard for AI Security Matters

Organisations increasingly struggle with assigning responsibility for AI risk. Is the developer accountable for secure design? Does the operator own runtime security? What role does a data governance team play? ETSI EN 304 223 addresses these questions by clarifying roles and obligations across the AI lifecycle. It formalises responsibilities for three key stakeholders: Developers, System Operators and Data Custodians, ensuring that risk ownership is explicit rather than ambiguous.

For example, a financial services firm that fine-tunes an open-source model for fraud detection might act as both a Developer and a System Operator. Under the new standard, this dual role triggers clear obligations: securing infrastructure, auditing training datasets for provenance, and documenting design decisions that influence model behaviour.


Lifecycle Security, Not Afterthought

The ETSI standard emphasises that AI security cannot be an afterthought tacked onto a system at deployment. Instead, security must be integrated from design through decommissioning. It requires threat modelling at the earliest stages, explicitly addressing AI-native attacks that evade traditional defensive measures. This means anticipating how an adversarial actor might subvert a model’s predictions or manipulate inputs in ways that compromise output integrity.
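
To make this concrete, here is a minimal Python sketch of how a design-stage threat model might record AI-native threats and flag any that still lack a documented control. The class and field names are illustrative assumptions, not terminology drawn from the standard.

    from dataclasses import dataclass, field


    @dataclass
    class AIThreat:
        """One AI-native threat considered during design-stage threat modelling."""
        name: str              # e.g. "data poisoning", "indirect prompt injection"
        lifecycle_stage: str   # where the attack lands: "training", "inference", ...
        mitigation: str        # planned control; documented before deployment


    @dataclass
    class ThreatModel:
        component: str
        threats: list[AIThreat] = field(default_factory=list)

        def unmitigated(self) -> list[AIThreat]:
            """Threats with no documented control -- flagged before design sign-off."""
            return [t for t in self.threats if not t.mitigation.strip()]


    # Example: a fraud-scoring model reviewed at design time.
    tm = ThreatModel("fraud-scoring-model")
    tm.threats.append(AIThreat("data poisoning", "training", "dataset provenance checks"))
    tm.threats.append(AIThreat("membership inference", "inference", ""))  # still open
    print([t.name for t in tm.unmitigated()])   # -> ['membership inference']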


One practical requirement is to reduce attack surface by disabling unnecessary model functionality. For instance, if an organisation uses a multimodal model that processes text and images, but its application only needs text capabilities, the image-processing components must be restricted or removed. Unused modalities add security risk without delivering any value and needlessly inflate the surface area available for exploitation.
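
As a rough illustration, one way to enforce this in code is to put the model behind a wrapper that only exposes the capability the application needs. The Python sketch below assumes a hypothetical multimodal client object with a generate() method for text; the pattern, not the specific API, is the point.

    class TextOnlyGateway:
        """Expose only the text capability of a multimodal model.

        `model` is assumed to be any client object with a `generate(prompt: str)`
        method; image and audio entry points are simply never surfaced to callers.
        """

        def __init__(self, model):
            self._model = model

        def complete(self, prompt: str) -> str:
            if not isinstance(prompt, str):
                # Reject anything that is not plain text, e.g. image payloads.
                raise TypeError("only text input is supported by this deployment")
            return self._model.generate(prompt)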


Asset Management and Discovery

Another key provision is a stringent requirement for asset management. Developers and System Operators must maintain inventories of all AI assets — including interdependencies and connectivity — to support detection of unknown or “shadow” AI systems lurking inside an environment. Without such inventories, security teams cannot secure what they do not know exists.
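
A minimal Python sketch of such an inventory might look like the following. The schema and field names are illustrative assumptions, but the underlying idea is that dependencies referenced by one asset and registered nowhere else are exactly the candidates for "shadow" AI.

    from dataclasses import dataclass, field


    @dataclass
    class AIAsset:
        """One entry in the AI asset inventory (illustrative schema)."""
        name: str
        kind: str                  # "model", "dataset", "pipeline", "api"
        owner: str                 # accountable team or role
        depends_on: list[str] = field(default_factory=list)


    class AssetInventory:
        def __init__(self) -> None:
            self._assets: dict[str, AIAsset] = {}

        def register(self, asset: AIAsset) -> None:
            self._assets[asset.name] = asset

        def unknown_dependencies(self) -> set[str]:
            """Dependencies referenced but never registered -- candidate 'shadow' AI."""
            known = set(self._assets)
            referenced = {d for a in self._assets.values() for d in a.depends_on}
            return referenced - known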


The standard also calls for disaster recovery planning that accounts for AI-specific incidents. Rather than treating model compromises as isolated bugs, organisations must prepare to restore systems to a “known good state,” preserving data integrity and system reliability in the event of a breach or model corruption.
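
One way to picture a "known good state" restore is the Python sketch below, which assumes a hypothetical snapshot layout containing a manifest.json written when the snapshot last passed its security evaluation; the file names and manifest fields are assumptions for illustration only.

    import json
    import shutil
    from pathlib import Path


    def restore_known_good(model_dir: Path, snapshot_dir: Path) -> None:
        """Roll a deployed model back to a previously validated snapshot.

        Assumes each snapshot directory carries a manifest.json written when the
        snapshot last passed its security evaluation (hypothetical layout).
        """
        manifest = json.loads((snapshot_dir / "manifest.json").read_text())
        if not manifest.get("security_evaluation_passed"):
            raise RuntimeError("snapshot was never validated; refusing to restore")
        if model_dir.exists():
            shutil.rmtree(model_dir)               # remove the suspect deployment
        shutil.copytree(snapshot_dir, model_dir)   # reinstate the known good state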


Supply Chain and Third-Party Risks

AI development rarely happens in isolation. Organisations often rely on third-party components, open-source models, and vendor-provided solutions. ETSI EN 304 223 directly addresses supply chain security by requiring justification and documented risk assessment when using components that are poorly described or lack transparent provenance. Cryptographic hashes must be used to verify authenticity of model artifacts, and training data sourced from public repositories must be logged with source authentication details and timestamps.
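
In practice this can be as simple as hashing every artifact on receipt and appending a provenance record to an audit log. The Python sketch below shows one illustrative way to do it; the log format and function names are assumptions, not requirements of the standard.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path


    def sha256_of(path: Path) -> str:
        """Compute the SHA-256 digest of a model artifact or dataset file."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()


    def record_provenance(path: Path, source_url: str, expected_sha256: str,
                          log_path: Path = Path("provenance.log")) -> None:
        """Verify a third-party artifact against its published hash, then log the source."""
        digest = sha256_of(path)
        if digest != expected_sha256:
            raise ValueError(f"hash mismatch for {path.name}: possible tampering")
        entry = {
            "artifact": path.name,
            "sha256": digest,
            "source": source_url,
            "retrieved_at": datetime.now(timezone.utc).isoformat(),
        }
        with log_path.open("a") as log:
            log.write(json.dumps(entry) + "\n")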


This approach significantly raises the bar for enterprises: procurement teams can no longer accept “black box” solutions without accountability. Instead, every third-party asset must be scrutinised for security risk and traceability.


Runtime Controls and API Security

If an organisation exposes AI services via an API to external users, the standard requires it to implement controls that mitigate typical adversarial strategies. For example, rate limiting makes it far harder for attackers to reverse-engineer a model through high-volume querying, or to flood it with malicious inputs that could compromise integrity. This elevates API security from a performance concern to a core defensive measure in AI deployments.
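
A classic building block here is a per-client token bucket. The Python sketch below is illustrative only: the rate and burst values would need to be tuned to the service, and production deployments would typically enforce this at an API gateway rather than in application code.

    import time


    class TokenBucket:
        """Per-client token bucket limiting how fast an AI API can be queried."""

        def __init__(self, rate_per_sec: float = 5.0, burst: int = 10):
            self.rate = rate_per_sec
            self.capacity = burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False   # the caller would typically respond with HTTP 429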


Maintenance, Monitoring and End-of-Life

Maintenance isn’t merely patching software anymore; in AI systems it often means retraining models with fresh data. Under the standard, significant updates — including retraining — must trigger renewed security testing and evaluation, akin to rolling out a new deployment.
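
One way to operationalise this is to fingerprint the model weights together with the training-data manifest and refuse promotion whenever the fingerprint changes without a fresh evaluation. The Python sketch below is an illustrative gate under those assumptions, not a mechanism prescribed by the standard.

    import hashlib
    from pathlib import Path


    def training_fingerprint(weights: Path, data_manifest: Path) -> str:
        """Hash over model weights plus the training-data manifest (illustrative)."""
        h = hashlib.sha256()
        h.update(weights.read_bytes())
        h.update(data_manifest.read_bytes())
        return h.hexdigest()


    def promotion_allowed(deployed_fp: str, candidate_fp: str, evaluation: dict) -> bool:
        """Block promotion of a retrained model until renewed security testing has passed."""
        if candidate_fp == deployed_fp:
            return True        # no significant update; prior testing still applies
        return bool(evaluation.get("adversarial_tests_passed")
                    and evaluation.get("fingerprint") == candidate_fp)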


Continuous monitoring also takes on new meaning: logs must be analysed not just for uptime or performance, but for indicators like data drift — subtle shifts in input distributions or model output patterns that could reveal a covert breach or compromised data pipeline. This elevates operational monitoring to a proactive security discipline.
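
A common, lightweight drift signal is the population stability index between a trusted baseline window and current inputs. The Python sketch below shows the idea; any investigation threshold (0.2 is a widely used rule of thumb) is a monitoring convention, not something the standard specifies.

    import numpy as np


    def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                                   bins: int = 10) -> float:
        """Rough drift score between a trusted baseline window and current inputs."""
        edges = np.histogram_bin_edges(baseline, bins=bins)
        b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
        c_frac = np.histogram(current, bins=edges)[0] / len(current)
        b_frac = np.clip(b_frac, 1e-6, None)   # avoid log-of-zero artefacts
        c_frac = np.clip(c_frac, 1e-6, None)
        return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))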


At the end of the AI lifecycle, models must not be casually retired. Involving Data Custodians in the secure disposal process ensures that sensitive datasets, configuration files, and model binaries do not inadvertently leak when hardware is decommissioned or cloud resources are reclaimed.
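
A minimal Python sketch of a disposal step that enforces Data Custodian sign-off and leaves an auditable record might look like this; the record schema is an illustrative assumption.

    from datetime import datetime, timezone
    from pathlib import Path


    def decommission_model(artifacts: list[Path], custodian: str, approved: bool) -> dict:
        """Delete model binaries, configs and datasets only after custodian sign-off.

        Returns a small disposal record suitable for audit logging (illustrative schema).
        """
        if not approved:
            raise PermissionError(f"disposal not approved by Data Custodian {custodian}")
        removed = []
        for path in artifacts:
            if path.exists():
                path.unlink()   # for cloud resources, call the provider's delete API instead
                removed.append(str(path))
        return {
            "removed": removed,
            "custodian": custodian,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }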


Governance, Training and Culture

Finally, ETSI’s standard recognises that technical controls alone are insufficient. It mandates role-specific cybersecurity training — developers learn secure coding practices for AI systems, while broader teams are trained to understand exploit vectors such as social engineering attacks that abuse AI outputs.


According to Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence, this framework helps organisations build AI that is “resilient, trustworthy, and secure by design.” Robust documentation, clear role definitions, and a lifecycle-centred approach not only mitigate risk but also provide a defensible posture for future regulatory audits under frameworks like the EU AI Act.


An upcoming ETSI Technical Report (TR 104 159) will extend these principles specifically to generative AI, addressing threats like deepfakes, disinformation, and intellectual property risks — offering even more tailored guidance where complexity is highest.

