
EU cybersecurity agency explores the standardisation landscape for AI

The European Union Agency for Cybersecurity (ENISA) has recently issued a press release in relation to its report assessing the standards for the cybersecurity of AI, with the aim of identifying potential gaps. The report also provides recommendations to support the implementation of the upcoming EU AI Act. The agency is focused on ensuring that AI is "cyber secure and robust" in order for its "full potential to unfold". Tech companies should anticipate changes in cybersecurity standardisation requirements in relation to their use of AI.

AI Act

In April 2021, the Commission presented its Artificial Intelligence (AI) package, including the draft AI Act. As we previously discussed, the European Parliament has recently reached a political agreement on new rules to regulate general purpose AI.

The draft AI Act defines AI as software that is developed using certain techniques, including machine learning, and that generates outputs such as predictions, recommendations, or decisions.

The AI Act sets out a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. While high-risk AI systems are permitted, they must comply with strict requirements on risk management systems, data quality and governance, technical documentation, transparency and information to users, and human oversight.

Assessment of AI cybersecurity standards

In its report, ENISA explores the state of play of general-purpose standards for information security and quality management in the context of AI and assesses their coverage in order to identify gaps. The report focuses on the cybersecurity of AI, highlighting the lack of robustness of AI models and their vulnerabilities.

General-purpose software security measures can be applied to AI, given that AI is essentially software. Some risks related to AI can therefore be mitigated with existing organisational and technical standards, such as ISO/IEC 27001 and ISO 9001.

Nevertheless, ENISA notes that this approach has its limitations, given that AI can include other elements, such as hardware and infrastructure. This means that appropriate security measures must be determined after carrying out a system-specific analysis.

The report also examines the draft AI Act, noting that cybersecurity aspects should be included in the risk assessment of high-risk AI systems.

Key recommendations to ensure standardisation

ENISA's report also recommends actions to ensure standardisation for the cybersecurity of AI, including:

  1. Using standardised and harmonised AI terminology for cybersecurity
  2. Developing specific guidance on the application of existing standards related to the cybersecurity of software to AI
  3. Reflecting the inherent features of machine learning in the standards
  4. Establishing liaisons between cybersecurity technical committees and AI technical committees so that potential AI cybersecurity concerns can be addressed coherently
  5. Determining appropriate security requirements based on system-specific analysis and sector-specific standards
  6. Encouraging R&D in areas where standardisation is limited by technological development
  7. Supporting the development of standards for the tools and competences of the actors performing conformity assessment
  8. Ensuring coherence between the draft AI Act and other laws on cybersecurity, such as the Cyber Resilience Act

Some recommendations are addressed to all organisations (no. 1 above), while others are aimed at standards-developing organisations (nos. 2, 3 and 4 above) or at preparing for the implementation of the AI Act (nos. 5, 6, 7 and 8 above).

Next steps

ENISA suggests that while general-purpose software security standards can mitigate some of the risks faced by AI, the concept of AI can include both technical and organisational elements beyond software, meaning that a system-specific analysis will be required to ensure appropriate security measures are in place. Some new standardisation gaps and needs might also arise as AI systems evolve. In addition, changes in the AI Act may affect standardisation needs. 

Tech companies should be prepared to adapt their approach to ensure they can meet changing cybersecurity standardisation requirements with respect to the use of AI.

ENISA is currently collecting relevant information from stakeholders on AI risk management, cybersecurity requirements, and data security. If you want to hear more about it, let us know!

"Using adequate [cybersecurity] standards will help ensure the protection of AI systems and of the data those systems need to process in order to operate"

