
    AI-supported attacks are biggest cyber threat to German firms


    New survey by Capterra reveals strengths and weaknesses of using AI to fight cyber attacks

    AI-supported attacks are currently the biggest cyber threat to German companies. The software evaluation platform Capterra has examined [in German] in which areas companies use AI-supported security systems and what advantages and challenges they encounter.

    Capterra asked 670 professionals involved in German companies’ cybersecurity efforts how AI helps protect their businesses.

    The study found three main reasons companies are investing in AI-supported defences: phishing and social engineering attacks (41%), internal threats caused by unintentional or malicious employee actions (32%), and ransomware attacks (29%).

    Capterra says that to address risks from phishing or unintentional employee actions, companies’ priorities are using AI to improve cloud security (56%), email security (55%) and network security (47%).

    AI-powered cybersecurity

    According to Capterra, traditional tools against cyberthreats rely on static protection, responding only to threats the system already knows. AI tools, by contrast, take a dynamic approach to detection, picking up patterns or anomalous activity associated with threats, and so offer better protection.
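    As a rough illustration of that distinction (not part of the Capterra study), the sketch below contrasts a static, blocklist-style check with a simple learned anomaly detector; the blocklist entries, feature values and event data are all invented for the example.

    ```python
    # Illustrative sketch only: contrasts static, signature-style protection with
    # dynamic anomaly detection. All data, features and thresholds are invented.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Static protection: flag events that match a known-bad indicator.
    KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # hypothetical blocklist

    def static_check(event: dict) -> bool:
        return event["src_ip"] in KNOWN_BAD_IPS

    print(static_check({"src_ip": "203.0.113.7"}))  # True: matches the blocklist

    # Dynamic protection: learn what "normal" activity looks like, then flag outliers.
    # Features per event: [bytes transferred, failed logins, hour of day]
    normal_activity = np.array([
        [1200, 0, 9], [900, 1, 10], [1500, 0, 14], [1100, 0, 16], [950, 2, 11],
    ])
    detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

    new_event = np.array([[250000, 14, 3]])  # huge transfer, many failures, 3 a.m.
    print("Anomalous" if detector.predict(new_event)[0] == -1 else "Normal")
    ```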

    Respondents said the three most important benefits of AI-supported protection are:

    • Behavioural analysis to identify anomalies and patterns that indicate threats (49%);

    • Real-time monitoring to detect threats as they occur (48%); and

    • Automating routine security tasks such as prioritising alerts, responding to incidents and managing patches (40%).

    Challenges of AI in cybersecurity

    AI in cybersecurity also brings challenges. The most cited, at 43% of respondents, was a lack of analytical accuracy combined with the sheer amount of information the tools generate.

    The joint second most serious challenges, each scoring 40%, are closely related. Although AI algorithms can process large amounts of data and recognise patterns, their analytical abilities depend on the information they have been trained on.

    In addition, AI systems are not independent and must be monitored by qualified professionals; additional manual review is required to draw accurate conclusions.

    The third and fourth biggest challenges respondents identified were, respectively, false negatives (32%) and false positives (28%). This means the AI sometimes reports harmless activities in a company as suspicious (false positives) or, conversely, ignores real threats (false negatives).
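    As a purely illustrative sketch (the alert outcomes below are invented, not drawn from the survey), this is how the two error types are counted when a detector’s verdicts are compared with what actually happened:

    ```python
    # Illustrative only: invented alert outcomes showing how false positives and
    # false negatives are counted.
    events = [
        # (detector flagged it?, was it actually a threat?)
        (True,  True),   # true positive  - real threat, correctly flagged
        (True,  False),  # false positive - harmless activity flagged as suspicious
        (False, True),   # false negative - real threat the detector ignored
        (False, False),  # true negative  - harmless activity correctly ignored
    ]

    false_positives = sum(flagged and not real for flagged, real in events)
    false_negatives = sum(real and not flagged for flagged, real in events)
    print(f"False positives: {false_positives}, false negatives: {false_negatives}")
    ```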

    AI needs human expertise

    According to study participants, human expertise is particularly needed when training AI models. Human oversight of AI decisions and contextual understanding of their outputs are also very important.

    Ines Bahr, Senior Content Analyst at Capterra, commented, “While AI-powered cybersecurity brings automation, speed and scalability, employees’ critical thinking, contextual understanding and ethical considerations ensure effective cybersecurity defences.”