Security experts have already demonstrated that ChatGPT can be used to create malware or to support social engineering. Will the bot become the hot new tool for hackers with little technical know-how?
Anybody can use ChatGPT to automatically generate texts or simple programs, and hackers are no exception: they can use this AI-based software to create malicious code, for example. While it remains to be seen how good such generated programs will become, simple versions that automatically create phishing emails and code for carrying out ransomware attacks have already been detected. Admittedly, easy-to-use options have been around for a long time, enabling hackers without any prior knowledge to carry out attacks. However, these aren't based on AI and tend to be available online as collections of executable attack programs, so-called exploits, that take advantage of known vulnerabilities. Now, ChatGPT is another convenient tool that hackers can use to generate and spread their own malware. Fraunhofer AISEC views ChatGPT as a serious threat to cyber security. We expect the knowledge base of future software versions to expand considerably, which will improve the quality of answers. Such a development is easy to foresee, considering that the underlying technology is based on reinforcement learning combined with human feedback. This makes it vital to close potential security gaps and eliminate weaknesses in order to counter such attacks.
Is ChatGPT only interesting for script kiddies or also for more experienced cyber criminals?
Hackers need skills from a wide variety of fields to launch successful attacks. In my view, ChatGPT could already be of interest to IT experts today. Its dialog-based communication and its ability to provide explanations, create code snippets or describe commands for specific tasks (e.g., when queried about the correct parameterization of analysis tools) can provide valuable support even to experts. ChatGPT can produce relevant answers and results faster than a classic Google query, which does not, for example, generate code snippets tailored to the question. Experts could therefore benefit by expanding their know-how faster with ChatGPT, provided that they are able to quickly check the chatbot's replies for plausibility and correctness.
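To make the dialog-based workflow described here concrete, the following is a minimal sketch of such an expert query sent to a chat model via the OpenAI Python SDK. The model name, prompt and choice of analysis tool (nmap) are our assumptions for illustration, not part of the interview, and, as noted above, any answer must still be checked for plausibility and correctness.

```python
# Minimal sketch of a dialog-based expert query, assuming the OpenAI Python
# SDK (v1) and a chat-capable model; model name and prompt are illustrative
# assumptions only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are an assistant for security analysts."},
        {"role": "user", "content": (
            "Explain the nmap options -sS, -sV and -O, and suggest a sensible "
            "parameterization for scanning a single host I am authorized to test."
        )},
    ],
)

print(response.choices[0].message.content)
```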
Aren’t there already many easy ways to get malicious code, with a simple click on the darknet, for example (“Malware as a Service”)? Is ChatGPT just another option or is the bot different from the existing options for hackers?
As mentioned above, ChatGPT is a further tool in hackers' existing toolkit. In my view, ChatGPT could take on the role of a virtual consultant that, at least to some extent, can answer a wide range of queries involved in preparing hacker attacks. Much more critical, however, is the potential threat this type of software can pose in the long term. Some already call it a game changer for cyber security. While ChatGPT has a set of internal rules that prevent it from generating attack code when asked directly, these can of course be bypassed by formulating questions cleverly. ChatGPT has the potential to make the world of cyber attacks accessible to an even wider range of users, to enable the dedicated creation of numerous targeted attacks and, what's more, to advise technically inexperienced hackers on how to carry them out successfully.
Do we have to expect cyber attacks to be controlled by AI in the near future — from malware creation to distribution? Is this already happening today?
Yes, we believe that simple attack waves, such as phishing campaigns, can already be created and carried out using AI. For example, AI can be used to generate phishing emails that contain a link hiding AI-generated ransomware code, and these emails can then be distributed automatically to selected groups of recipients. Attacks of this type belong to the large category of social engineering attacks, which AI will make even more effective in the future: the software generates authentic, convincing-looking texts that trick victims into disclosing sensitive information. We shouldn't forget, however, that the underlying technology (a language model) is very good at completing sentences, but, unlike humans, it cannot combine complex background and prior knowledge from diverse fields and put them into context. While ChatGPT's answers to questions often sound plausible, they aren't actually based on human understanding but on statistical distributions of word contexts.
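This last point can be illustrated in a few lines of code: a language model merely assigns probabilities to possible next tokens given a context. The sketch below, using the freely available GPT-2 model via the Hugging Face transformers library (our choice of model and example sentence, purely for illustration), prints the five most likely continuations of a phishing-style sentence fragment.

```python
# Sketch: next-token probabilities of a language model (GPT-2 via Hugging Face
# transformers). The model and the example sentence are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "Please click the link below to verify your"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: p = {p.item():.3f}")
```

The model picks its continuation purely from statistical co-occurrence in its training data, without any notion of what a phishing email is.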
Are there any positive aspects of ChatGPT for the security industry? Can security experts also use the bot for their work?
Security experts can indeed benefit from ChatGPT, e.g., to detect weaknesses in software. ChatGPT can also assist software developers: for example, it could provide automated analysis of code fragments and helpful suggestions for improving code quality during the development cycle, reducing the number of weaknesses in the software that could potentially be attacked. ChatGPT could also contribute to employee training. Whatever the field of application, it is important to be aware that ChatGPT often provides wrong or plainly made-up answers. This is the case right now and will continue to apply in the future. We therefore have to weigh both the risks and the opportunities of ChatGPT while keeping its inherent limits in mind.
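As a sketch of what such automated code analysis could look like, the following hypothetical helper sends a code fragment to a chat model and asks for potential weaknesses. The function name, model and prompts are our assumptions, and, in line with the caveat above, the model's findings can be wrong or invented and must always be verified by a human reviewer.

```python
# Hedged sketch of LLM-assisted code review, assuming the OpenAI Python SDK (v1).
# The helper, model name and prompts are illustrative assumptions; treat the
# output as hints to verify, never as an authoritative security audit.
from openai import OpenAI

client = OpenAI()

def review_snippet(code: str) -> str:
    """Ask a chat model for potential security weaknesses in a code fragment."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system", "content": (
                "You are a code reviewer. List potential security weaknesses "
                "and concrete suggestions for improving the code."
            )},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

# Example call: a fragment with an obvious SQL injection weakness.
print(review_snippet("query = \"SELECT * FROM users WHERE name = '\" + name + \"'\""))
```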
How will the technology and generative AI develop as a whole?
We’re observing ever faster developments in the field of generative AI, with news and updated research results appearing daily. Any statements and prognoses made in this interview must be viewed in the light of these developments: they are a snapshot taken at a time when we have only begun to glimpse the many opportunities and risks of this technology.
For example, an article published by Microsoft Research in March 2023 reported the first signs of Artificial General Intelligence (AGI) [1], i.e., a program capable of understanding and learning complex intellectual tasks. We also see fast adoption of generative AI by many technology providers. This will accelerate the already fast pace of development, research and the many different applications, and open up new, as yet unknown markets. One thing is for sure, though: while generative AI will have a large impact on all areas of our society, it will also have a crucial impact on the development of future security technologies. Finally, there is only one prognosis that we’re certain of: Fraunhofer AISEC will continue to keep a close eye on AI-based security and safe AI and to actively shape future developments.
This interview was published in German on the Bayern Innovativ website in February 2023. Due to the rapid development of AI technologies, we added the last question in April 2023. Here is the link to the original publication in German: https://www.bayern-innovativ.de/de/netzwerke-und-thinknet/uebersicht-digitalisierung/cybersecurity/seite/chatgpt-neues-lieblingstool-fuer-hacker
Authors
Claudia Eckert
Prof. Dr. Claudia Eckert is managing director of the Fraunhofer Institute for Applied and Integrated Security AISEC in Munich and professor at the Technical University of Munich, where she holds the Chair for IT Security at the Department of Informatics. As a member of various national and international industrial advisory boards and scientific committees, she advises companies, trade associations and the public sector on all issues relating to IT security.
Nicolas Müller
Dr. Nicolas Müller studied mathematics, computer science and theology to state examination level at the University of Freiburg, graduating with distinction in 2017. Since 2017, he has been a research scientist in the Cognitive Security Technologies department of Fraunhofer AISEC. His research focuses on the reliability of AI models, ML shortcuts and audio deepfakes.