    AI is the criminal’s friend – ETSI

    Reality check needed to harden AI security for telcos

    Artificial Intelligence (AI) is old, conservative and can be conned, with terrible consequences for telcos, according to Alec Brusilovsky, the Rapporteur for the European IT standards body ETSI’s Industry Specification Group Securing Artificial Intelligence (ISG SAI). As a consequence of haste and hyperbole, telcos could be deluded into thinking their services are both smart and secure, when they are more akin to an ‘insecurity of things’, ETSI warned.

    Don’t be so historical

    ETSI is investigating AI hardware security and AI privacy, and is taking a more in-depth look at the explicability of AI. Reports on deep fakes, algorithmic intelligence, collaborative learning and the impact on AI security will be released later this year. Meanwhile, Brusilovsky has shared his insights on how AI needs to be held to account: it is looking too insecure to be trusted by telcos, he said. Surprisingly, AI’s problem is that it is nothing new and a bit slow to move with the times, Brusilovsky told Mobile Europe. “AI is based on past data, like training, testing, and validation datasets. By design that makes it conservative,” said Brusilovsky.
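
    As a rough illustration of the point (a hypothetical sketch in Python with scikit-learn, not anything published by ETSI), the snippet below shows the conventional three-way split Brusilovsky refers to: the training, validation and test sets are all carved out of the same historical record, so everything the model “knows” comes from the past.

        # Hypothetical illustration: training, validation and test sets are all
        # drawn from the same historical data, so the model's entire world view
        # is whatever the past happened to contain.
        import numpy as np
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X_hist = rng.normal(size=(1000, 8))        # features logged in the past
        y_hist = (X_hist[:, 0] > 0).astype(int)    # labels derived from past behaviour

        # 70% training, 15% validation, 15% test -- all of it historical.
        X_train, X_rest, y_train, y_rest = train_test_split(
            X_hist, y_hist, test_size=0.30, random_state=0)
        X_val, X_test, y_val, y_test = train_test_split(
            X_rest, y_rest, test_size=0.50, random_state=0)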

    Black Swan

    If a telco’s AI security encounters something new that it wasn’t trained to recognise, like a Black Swan attack, it cannot cope. A model trained on old data may not recognise the new, and will instead try to solve the new problem the way it solved the old ones. This, in turn, may transform AI systems from enablers of solutions into enabling technologies and facilitators for criminals. As the telecoms industry hypes its artificial intelligence, it ignores several other problems, none of which will have gone unnoticed by hackers, organised crime and state-sponsored attackers: AI in its current form is nothing new, so criminals have had plenty of time to study its weaknesses; AI systems are flying under a false flag; the technology industry has a track record of launching insecure systems; and the hype and blind faith of telcos could cause further problems.
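
    To make the Black Swan failure mode concrete, here is a minimal, hypothetical sketch (an assumed example, not an ETSI artefact): a classifier trained only on two known traffic patterns will still force a completely novel input into one of those classes, often with high confidence, because it has no notion of “something I have never seen”.

        # Hypothetical sketch of the Black Swan failure mode: the model has only
        # ever seen classes 0 and 1, so an out-of-distribution input is still
        # forced into one of them, typically with high confidence.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        X_old = np.vstack([rng.normal(-2, 1, size=(200, 2)),   # "benign" traffic seen in the past
                           rng.normal(+2, 1, size=(200, 2))])  # known attack pattern
        y_old = np.array([0] * 200 + [1] * 200)

        model = LogisticRegression().fit(X_old, y_old)

        x_novel = np.array([[40.0, -35.0]])        # nothing like the training data
        print(model.predict(x_novel))              # still 0 or 1 -- there is no "unknown" option
        print(model.predict_proba(x_novel).max())  # close to 1.0: confidently wrong

    The model itself gives no warning that the input lies outside anything it was trained on, which is why safeguards such as out-of-distribution detection or rejection thresholds have to be added around it.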

    Algorithmic intelligence

    Telcos are deluding themselves that their systems are ‘smart’, as if they had the power of reason. This leads them to put too much faith in systems that are artificially intransigent, according to Brusilovsky. “AI is using, manipulating, and leaning into existing vulnerabilities, sometimes amplifying their importance or mutating them to create problems and distrust. This raises the question of whether we should really be using the term Artificial Intelligence for the current AI environment. It’s more like Algorithmic Intelligence,” said Brusilovsky.

    Blind faith

    We are at the stage of AI development where, in many cases, a system is given a set of instructions by a human, provided with training and other datasets, and acts upon those instructions only, at most creating flexible policies or new instructions. It isn’t capable of anything beyond that, yet. Brusilovsky called for AI to be held more accountable. “When humans blindly trust the decision made by an AI system without understanding how it has come to that decision, there is a massive opportunity for criminals to take advantage of that trust to carry out various forms of fraud or theft. This is the key weakness ETSI is aiming to deal with through AI explicability,” said Brusilovsky.
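
    As an illustration of what explicability can look like in practice, the sketch below (a hypothetical example of permutation importance, not ETSI’s prescribed method) shuffles one input feature at a time and measures how much the model’s accuracy drops, so an operator can at least see which inputs a decision actually leaned on instead of trusting it blindly.

        # Hypothetical sketch of a simple explicability probe: permutation importance.
        # Shuffle each feature in turn; the larger the drop in accuracy, the more
        # the model's decisions depend on that feature.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 4))
        y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # feature 0 matters most here

        model = LogisticRegression().fit(X, y)
        baseline = model.score(X, y)

        for j in range(X.shape[1]):
            X_shuffled = X.copy()
            perm = rng.permutation(X.shape[0])
            X_shuffled[:, j] = X[perm, j]               # break the link to feature j only
            drop = baseline - model.score(X_shuffled, y)
            print(f"feature {j}: accuracy drop {drop:.3f}")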