
Artificial intelligence: Is ChatGPT a security risk?

Artificial intelligence (AI) has become an integral part of our lives, and the chatbot ChatGPT is currently making headlines and dividing opinion. The security aspect, however, is hardly discussed in public.

For some it is a milestone, because the AI can produce texts in seconds that are almost indistinguishable from human writing; others criticize that it is often wrong and can even spread fake news. In an interview, AI expert Stefan Strobel, CEO of cirosec, a specialist for information and IT security and part of the it-sa365 community at NürnbergMesse, explains how ChatGPT could play into the hands of cybercriminals and how companies should arm themselves against it.

Mr. Strobel, everyone is currently talking about OpenAI's chatbot ChatGPT. What are the benefits of this and similar AI systems, such as Google's Bard?

Strobel: AI systems like ChatGPT simplify the search for information on the Internet, for example, and can generate authentic-sounding texts or even programs. At the same time, however, users cannot rely on the delivered content being correct; the AI may simply have hallucinated it.

In a recent study by cybersecurity company BlackBerry, 52% of IT professionals believe there will be a cyberattack using ChatGPT this year, and 61% believe foreign nations may already be using the technology for malicious purposes against other nations. What threats do you think the new AI could pose?

Strobel: AI systems like ChatGPT make it easier for attackers to automatically create genuine-sounding texts. AI therefore does not create new types of attack against which one could not defend oneself, but it allows victims to be deceived more convincingly and, by automating the attacks, makes them more efficient.

The developers at OpenAI have built restrictions into ChatGPT to prevent it from being used to generate phishing emails, but there is already early evidence of cybercriminals circumventing these restrictions.

Are cybercriminals already using generative AI for phishing emails?

Strobel: In practice, the bulk of phishing attacks today still take place without AI systems. There is evidence that ChatGPT is already being used to generate phishing emails, because it can make attacks even more convincing, but in the overall picture AI has not yet played a relevant role among cybercriminals. From an attacker's point of view, it is not really necessary: so far it has been easy enough to rake in millions even without AI.

On the subject of AI-driven cybersecurity: can't experts in the IT security industry also make use of the new AI themselves?

Strobel: Since there is no binding definition of the term AI, many vendors, particularly in the security industry, market their products very generously under the AI label. In some areas, however, AI has genuinely been in use for years, for example in virus protection based on neural networks, which can detect malware more reliably than classic signatures.
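To make the idea concrete, here is a minimal, purely illustrative sketch of such a classifier in Python. The static file features, the synthetic training data and the scikit-learn setup are assumptions for the example, not a description of any real antivirus product:

# Minimal sketch of neural-network-based malware detection on static
# file features. Features and data are invented for illustration; real
# products use far richer feature sets and large training corpora.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical static features per file: [byte entropy, packed-section
# ratio, number of suspicious API imports], each scaled to [0, 1].
benign = rng.uniform(0.0, 0.5, size=(200, 3))
malicious = rng.uniform(0.5, 1.0, size=(200, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X, y)

# Unlike a signature match, the model yields a probability and can
# generalize to files it has never seen before.
sample = np.array([[0.9, 0.7, 0.8]])  # previously unseen "file"
print("malware probability:", clf.predict_proba(sample)[0, 1])

The point of the sketch is the contrast Strobel draws: a signature only matches known malware byte patterns, while a trained model can also score previously unseen samples.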

What security tips do you have for the use of AI, and how can companies protect themselves against its misuse?

Strobel: How one should deal with the new dangers depends very much on the actual use case and the technology used. What is clear, however, is that AI, like any new technology in IT, brings with it both new opportunities and new risks. Before introducing AI-based systems, companies should therefore analyze exactly which new threats arise and decide how to deal with them in each individual case.

Take AI-based detection solutions as an example: at first glance they offer a better detection rate, but at the same time they open up new ways to trick the detection and to deliberately provoke false detections.
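As a hedged illustration of what "tricking the detection" can look like, the toy detector from the sketch above can be evaded by nudging a malicious sample's features toward benign-looking values. All values and the evasion strategy here are invented for the example:

# Sketch of an evasion attack on the toy detector from the previous
# example: shift a malicious sample's features step by step toward
# "benign-looking" values until the classifier is fooled.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(0.0, 0.5, size=(200, 3)),   # benign
               rng.uniform(0.5, 1.0, size=(200, 3))])  # malicious
y = np.array([0] * 200 + [1] * 200)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                    random_state=0).fit(X, y)

sample = np.array([[0.9, 0.7, 0.8]])  # clearly flagged as malicious
for step in range(60):
    if clf.predict_proba(sample)[0, 1] < 0.5:
        print("evaded after", step, "steps:", sample.round(2))
        break
    # Small feature change (think: padding a file to lower its byte
    # entropy) that leaves the malicious behavior itself untouched.
    sample = sample - 0.02

Because the model decides on statistical features rather than exact byte patterns, small cosmetic changes to a file can be enough to push it across the decision boundary, which is exactly the new risk Strobel describes.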

Reinhold Gebhart
Online Editorial // Editor for Vincentz Network