
ChatGPT — the hot new tool for hackers?

ChatGPT is the AI software that supposedly does it all: It's expected to compose newspaper articles and write theses — or program malware. Is ChatGPT developing into a new tool for hackers and cyber criminals that makes it even easier for them to create malware? Institute director Prof. Dr. Claudia Eckert and AI expert Dr. Nicolas Müller give their opinion on the potential threat to digital security posed by ChatGPT.

Security experts have already demonstrated that ChatGPT can be used to create malware, or for social engineering. Will the bot become the hot new tool for hackers with little technical know-how?

Anybody can use ChatGPT to automatically generate texts or simple programs, and hackers are no exception: they can use this AI-based software to create malicious code, for example. While it remains to be seen how good such generated programs will become, simple versions that automatically create phishing emails and code for carrying out ransomware attacks have already been detected. In fact, easy-to-use options that enable hackers without any prior knowledge to carry out attacks have been around for a long time. However, these aren’t based on AI and tend to be available online as collections of executable attack programs, so-called exploits, which take advantage of known weaknesses. Now, ChatGPT is another convenient tool that hackers can use to generate and spread their own malware. Fraunhofer AISEC views ChatGPT as a serious threat to cyber security. We expect the knowledge base of future software versions to expand considerably, which will improve the quality of answers. Such a development is easy to foresee, considering that the underlying technology is based on reinforcement learning combined with human feedback. This makes it vital to close potential security gaps and eliminate weaknesses to counter such attacks.
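
To make that mechanism concrete: in reinforcement learning from human feedback, human raters compare pairs of model answers, and a reward model is trained to score the preferred answer higher. The following is a minimal, illustrative sketch of that pairwise training step in PyTorch; the tiny linear scorer and the random embeddings are stand-ins for this example, not the actual ChatGPT training pipeline.

```python
# Minimal, illustrative sketch of the reward-modeling step in reinforcement
# learning from human feedback (RLHF). A human has marked one of two answers
# as better; the reward model learns to score it higher (pairwise loss).
# The linear scorer and random embeddings are stand-ins, not a real pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        # Maps an answer embedding to a scalar reward
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, answer_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(answer_embedding).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-ins for embeddings of eight human-preferred and eight rejected answers
preferred = torch.randn(8, 64)
rejected = torch.randn(8, 64)

# Pairwise preference loss: -log sigmoid(r(preferred) - r(rejected))
loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"preference loss: {loss.item():.4f}")
```

The chat model is then fine-tuned with reinforcement learning against this reward signal, which is why additional human feedback keeps improving answer quality.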

Is ChatGPT only interesting for script kiddies or also for more experienced cyber criminals?

Hackers need skills from a wide variety of fields to launch successful attacks. In my view, ChatGPT could already be of interest to IT experts today. Because the chatbot communicates in dialog form and can provide explanations, create code snippets or describe commands for specific tasks (e.g., when queried about the correct parameterization of analysis tools), it can provide valuable support even to experts. ChatGPT can produce relevant answers and results faster than a classic Google query, which doesn’t generate code snippets tailored to the query, for example. Experts could therefore benefit by expanding their know-how faster with ChatGPT — assuming that they’re able to quickly check the chatbot’s replies for plausibility and correctness.
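
As an illustration of this kind of expert query, here is a minimal sketch using the 2023-era openai Python package (version < 1.0). The model name, the system prompt and the nmap question are assumptions made for this example, not part of the interview.

```python
# Minimal sketch: asking a chat model for help parameterizing an analysis
# tool. Assumes the 2023-era openai package (< 1.0) and an API key in the
# environment; the model name and the nmap question are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are an assistant for IT security professionals."},
        {"role": "user",
         "content": "Which nmap flags run a TCP SYN scan of the top 1000 "
                    "ports with service version detection, and what does "
                    "each flag do?"},
    ],
    temperature=0.2,  # keep answers close to deterministic
)

print(response.choices[0].message["content"])
```

As stressed above, the answer is only useful to someone who can verify it, for instance against the tool's own documentation.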

Aren’t there already many easy ways to get malicious code, with a simple click on the darknet, for example (“Malware as a Service”)? Is ChatGPT just another option or is the bot different from the existing options for hackers?

As mentioned above, ChatGPT is a further tool in hackers’ existing toolkit. In my view, ChatGPT could take on the role of a virtual consultant that can, at least to some extent, advise on a wide variety of questions that arise when preparing attacks. However, the potential threat this type of software can pose in the long term is much more critical. Some already call it a game changer for cyber security. While ChatGPT has a set of internal rules that prevent it from generating attack code when asked directly, these can of course be bypassed by cleverly formulated questions. ChatGPT has the potential to make the world of cyber attacks accessible to an even wider range of users, to enable the dedicated creation of numerous targeted attacks and, what’s more, to advise non-savvy hackers on how to carry them out successfully.

Do we have to expect cyber attacks to be controlled by AI in the near future — from malware creation to distribution? Is this already happening today?

Yes, we believe that simple attack waves, such as phishing campaigns, can already be created and carried out using AI. For example, AI can be used to generate phishing emails that contain a link hiding AI-based ransomware code. These emails can be distributed automatically to selected groups of recipients. Attacks of this type belong to the large category of social engineering attacks, which AI will make even more effective in the future. The AI software generates authentic, convincing-looking texts that trick victims into disclosing sensitive information. We shouldn’t forget, however, that the underlying technology (a language model) is very good at completing sentences but, unlike humans, cannot combine complex background and prior knowledge from diverse fields and put it into context. While ChatGPT’s answers to questions often sound plausible, they aren’t actually based on human understanding but on statistical distributions of word contexts.
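
The last sentence can be made tangible with a few lines of code: a causal language model assigns a probability distribution to the next token given the preceding words. The sketch below uses the freely available GPT-2 model from the Hugging Face transformers library as a stand-in; ChatGPT's underlying model is far larger, but it rests on the same principle.

```python
# Minimal sketch: the next-token distribution of a causal language model.
# GPT-2 serves as a freely available stand-in; ChatGPT's model is much
# larger but is likewise trained to predict the next token from context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "Please click the link below to verify your"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the very next token
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p = {prob:.3f}")
```

The model merely ranks statistically likely continuations of the prompt; nothing in this computation involves understanding what a verification link is.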

Are there any positive aspects of ChatGPT for the security industry? Can security experts also use the bot for their work?

Security experts can indeed benefit from ChatGPT, e.g., to detect weaknesses in software. ChatGPT can also assist software developers. For example, it could provide automated analysis of code fragments and helpful suggestions for improving code quality during the development cycle, thereby reducing the number of weaknesses in the software that could potentially be attacked. ChatGPT could also contribute to employee training. Whatever the field of application, it is important to be aware that ChatGPT often provides wrong or plainly made-up answers. This is true today and will remain true in the future. We therefore have to weigh both the risks and the opportunities provided by ChatGPT while keeping its inherent limits in mind.
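
As a rough illustration of the code-review use case, the following hedged sketch asks a chat model to point out weaknesses in a small code fragment; the C snippet, the prompt and the model name are assumptions for this example, again using the 2023-era openai package (< 1.0).

```python
# Minimal sketch: asking a chat model to review a code fragment for
# weaknesses. The C snippet deliberately contains a classic buffer
# overflow (strcpy into a fixed-size buffer); the prompt and model name
# are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

code_fragment = """
#include <string.h>

void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);   /* no bounds check */
}
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Review the following C function for security "
                    "weaknesses and suggest a safer version:\n" + code_fragment},
    ],
)

print(response.choices[0].message["content"])
```

Since the model sometimes reports plausible-sounding but wrong findings, its output should be treated as input for a human review, not as a verdict.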

How will the technology and generative AI develop as a whole?

We’re observing ever faster developments in the field of generative AI, with news and updated research results appearing on a daily basis. Any statements and prognoses made in this interview must be viewed in the light of these developments. They are a snapshot taken at a time when we have only glimpsed the many opportunities and risks of this technology.

For example, an article published by Microsoft Research in March 2023 reported first signs of Artificial General Intelligence (AGI) [1], i.e., a program capable of understanding and learning complex intellectual tasks. We also see fast adoption of generative AI by many technology providers. This will accelerate the already fast-paced dynamic in development, research and the many different applications, and open up new, as yet unknown markets. One thing is for sure, though: While generative AI will have a large impact on all areas of our society, it will also have a crucial impact on the development of future security technologies. Finally, there is only one prognosis we’re certain of: Fraunhofer AISEC will continue to keep a close eye on AI-based security and safe AI and actively shape future developments.

This interview was published in German on the Bayern Innovativ website in February 2023. Due to the rapid development of AI technologies, we added the last question in April 2023. Here is the link to the original publication in German: https://www.bayern-innovativ.de/de/netzwerke-und-thinknet/uebersicht-digitalisierung/cybersecurity/seite/chatgpt-neues-lieblingstool-fuer-hacker

[1] https://arxiv.org/abs/2303.12712

Authors
Prof. Dr. Claudia Eckert

Prof. Dr. Claudia Eckert is managing director of the Fraunhofer Institute for Applied and Integrated Security AISEC in Munich and professor at the Technical University of Munich, where she holds the Chair for IT Security in the Department of Informatics. As a member of various national and international industrial advisory boards and scientific committees, she advises companies, trade associations and the public sector on all issues relating to IT security.

Nicolas Müller

Dr. Nicolas Müller studied mathematics, computer science and theology to state examination level at the University of Freiburg, graduating with distinction in 2017. Since 2017, he has been a research scientist in the Cognitive Security Technologies department of Fraunhofer AISEC. His research focuses on the reliability of AI models, ML shortcuts and audio deepfakes.

