The European Telecommunications Standards Institute (ETSI) has created a new Industry Specification Group on Securing Artificial Intelligence (ISG SAI).
The group will produce technical specifications to mitigate threats arising from the deployment of artificial intelligence (AI) across multiple ICT-related industries.
This includes threats to AI systems both from conventional sources and from other systems that themselves use AI.
Founding members include BT, Cadzow Communications, Huawei Technologies, NCSC and Telefónica.
Trouble brewing
Operators are deploying AI increasingly widely. Most began by using it to improve customer experience, but they are now looking to the various flavours of AI to help with the operational and business aspects of their organisations.
The consensus is that not enough attention is being paid to the security implications of AI as it is applied to a growing number of purposes.
The group was started in anticipation of autonomous mechanical and computing entities being able to make decisions that act against the parties reliant on them, whether by design or through malicious intent.
The conventional cycle of network risk analysis and countermeasure deployment – usually described as the Identify-Protect-Detect-Respond cycle – needs to be reassessed when an autonomous machine is involved.
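To illustrate where the cycle strains, the sketch below (ours, not ETSI's) models it as a simple control loop. The event fields and responses are assumptions; the point of interest is the identification step, where a machine-initiated event resists the usual attribution of intent.

```python
# A minimal sketch (not from the ETSI work) of the conventional
# Identify-Protect-Detect-Respond cycle as a control loop. The event
# names and the `autonomous_actor` flag are illustrative assumptions:
# they mark the step that must be re-assessed when an autonomous
# machine, rather than a human attacker, drives the event.

from dataclasses import dataclass

@dataclass
class SecurityEvent:
    asset: str              # what was targeted (e.g. "routing-ai")
    indicator: str          # what was observed (e.g. "anomalous output")
    autonomous_actor: bool  # True if another machine initiated the event

def identify(event: SecurityEvent) -> str:
    """Classify the event; autonomy changes how intent is attributed."""
    if event.autonomous_actor:
        # Design flaw or adversarial behaviour? The conventional cycle
        # assumes a human adversary here, which is what ISG SAI questions.
        return "unclear-intent"
    return "conventional-threat"

def respond(classification: str) -> str:
    """Pick a countermeasure appropriate to the classification."""
    return {
        "conventional-threat": "apply standard countermeasure",
        "unclear-intent": "quarantine model and audit training pipeline",
    }[classification]

event = SecurityEvent("routing-ai", "anomalous output", autonomous_actor=True)
print(respond(identify(event)))  # -> quarantine model and audit training pipeline
```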
Three main strands
The group’s intent is to address three aspects of AI in the standards domain:
• Securing AI from attack, for example, where AI is a component in a system that needs defending;
• Mitigating against malicious AI, for example, where AI is the ‘problem’, or is used to enhance otherwise conventional attack vectors; and
• Using AI to improve security measures, for instance, where AI is part of the ‘solution’, or is used to improve more conventional countermeasures.
The purpose of the group is to develop the baseline technical knowledge that is necessary to ensure AI is secure.
Stakeholders affected by the group’s work include end users, manufacturers, operators and governments.
It will undertake three main activities:
AI threat ontology – there is no common understanding of what constitutes an attack on AI and how it might be created, hosted and propagated. The group will seek to define what an AI threat is and how it might differ from threats to traditional systems. It will work to align terminology across the different stakeholders and multiple industries to ensure common understanding and use. ETSI’s specifications will define what is meant by these terms in the context of cyber and physical security, and with a narrative that should be readily accessible to all. The threat ontology will address AI from the point of view of systems, attackers and defence.
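ETSI has not yet published this ontology, but a hypothetical sketch suggests the shape it might take. Every class, field and term below is an assumption, chosen only to reflect the three viewpoints named above (system, attacker and defence).

```python
# A hypothetical sketch of the shape such a threat ontology might take;
# ETSI has not published the ontology, so every class and term below is
# an assumption.

from dataclasses import dataclass, field
from enum import Enum, auto

class Viewpoint(Enum):
    SYSTEM = auto()     # the AI system under threat
    ATTACKER = auto()   # how the threat is created and propagated
    DEFENCE = auto()    # countermeasures and mitigations

@dataclass
class ThreatTerm:
    name: str
    definition: str
    viewpoints: list[Viewpoint] = field(default_factory=list)
    differs_from_traditional: str = ""  # how it departs from classic threats

# Example entry: data poisoning has no direct analogue in traditional
# software security, the kind of distinction the ontology must capture.
poisoning = ThreatTerm(
    name="data poisoning",
    definition="manipulation of training data to corrupt model behaviour",
    viewpoints=[Viewpoint.SYSTEM, Viewpoint.ATTACKER],
    differs_from_traditional="attacks the learning process, not the code",
)
print(poisoning.name, "->", poisoning.differs_from_traditional)
```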
Securing AI problem statement – this specification will be modelled on the ETSI GS NFV-SEC 001 Security Problem Statement, which has been influential in guiding the scope of ETSI NFV and enabling “security by design” for NFV infrastructures. It will define and prioritise potential AI threats and recommended actions. These recommendations will be used to define the scope and timescales for follow-up work.
A data supply chain report – data is a critical component in the development of AI systems, both as raw data and as information and feedback from other AI systems and humans. However, access to suitable data is often limited, resulting in reliance on less suitable sources of data. Compromising the integrity of data is a proven, viable attack vector against an AI system.
This report will:
• summarise the methods currently used to source data for training AI,
• review current initiatives for developing data-sharing protocols, and
• analyse requirements for standards to ensure the integrity and confidentiality of shared data, information and feedback (one possible integrity control is sketched below).
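As an indication of what such an integrity standard might specify, here is a minimal sketch of one common control: hashing each training file and checking it against a manifest before use. The directory and manifest names are assumptions; the report itself has not been published.

```python
# A minimal sketch of one integrity control a data supply chain standard
# could specify: hash each training file and verify it against a recorded
# manifest before use. The "training_data" directory and manifest format
# are hypothetical.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path) -> dict[str, str]:
    """Record the expected hash of every file at data-publication time."""
    return {p.name: sha256_of(p) for p in sorted(data_dir.glob("*")) if p.is_file()}

def verify(data_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return the names of files whose contents no longer match the manifest."""
    return [name for name, expected in manifest.items()
            if sha256_of(data_dir / name) != expected]

if __name__ == "__main__":
    data = Path("training_data")  # hypothetical dataset directory
    if data.is_dir():
        manifest = build_manifest(data)
        Path("manifest.json").write_text(json.dumps(manifest, indent=2))
        print("tampered files:", verify(data, manifest) or "none")
```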
The first meeting of ISG SAI will be held in Sophia Antipolis on 23 October.