Chipp.in Tech News and Reviews

Windows, Security & Privacy, Open Source and more


AI is capable of creating exploits from public CVEs

Posted on April 22, 2024 by Martin Brinkmann

AI tools are capable of writing exploits for publicly disclosed security vulnerabilities.

A team of University of Illinois researchers analyzed the capabilities of different large language models in this regard. They found that OpenAI’s GPT-4 managed to create exploit code for 87% of the tested vulnerabilities.

The figure dropped to 7% without access to the CVE description. Other AI models, including GPT-3.5, could not create any exploits based on public CVEs.

The researchers note:

When given the CVE description, GPT-4 is capable of exploiting 87% of these vulnerabilities compared to 0% for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit).

The researchers did not test other large language models, such as Google Gemini or Claude 3.

How the tests were conducted

The researchers selected 15 one-day vulnerabilities from the Common Vulnerabilities and Exposures (CVE) database for the test. All vulnerabilities were reproduced in “highly cited academic papers” according to the research paper.

The researchers built a single large language model agent using the ReAct agent framework and gave the AI access to the CVE description and a set of tools. The tools included the ability to browse the Internet and interact with page elements, a code interpreter, and file creation.

The agent consisted of a total of 91 lines of code according to the researchers.
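To illustrate the kind of agent described above, here is a minimal sketch of a ReAct-style loop in Python. The tool functions, prompts, and the mock "model" below are illustrative assumptions for demonstration only; the researchers' actual agent, tools, and prompts are not reproduced here.

```python
# Minimal ReAct-style agent loop: the model alternates between emitting an
# Action (a tool call) and receiving an Observation (the tool's output),
# until it produces a Final Answer. All names here are hypothetical.

def browse(url: str) -> str:
    """Stub tool: a real agent would fetch and render the page."""
    return f"page content of {url}"

def run_code(snippet: str) -> str:
    """Stub tool: a real agent would execute code in a sandbox."""
    return f"executed: {snippet}"

TOOLS = {"browse": browse, "run_code": run_code}

def mock_llm(history: list[str]) -> str:
    """Stand-in for a model call: requests one tool, then finishes."""
    if not any(line.startswith("Observation:") for line in history):
        return "Action: browse http://example.test/advisory"
    return "Final Answer: analysis complete based on the advisory"

def react_loop(task: str, llm=mock_llm, max_steps: int = 5) -> str:
    """Run the Thought/Action/Observation cycle until an answer appears."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = llm(history)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        if reply.startswith("Action:"):
            name, _, arg = reply.removeprefix("Action:").strip().partition(" ")
            history.append(f"Observation: {TOOLS[name](arg)}")
    return "step budget exhausted"
```

The key point the sketch captures is how little scaffolding such an agent needs: the loop, a tool registry, and a model call fit comfortably within the 91 lines the researchers mention.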

AI is improving, but there are challenges

OpenAI’s GPT-4 large language model managed to create exploits for 87% of the 15 vulnerabilities. That’s a huge jump from GPT-3.5’s 0%.

The researchers have verified that at least one large language model is now capable of creating exploit code based on publicly available information.

While GPT-4 performed well in the tests, it faced its fair share of challenges as well. The detailed description of one vulnerability was provided in Chinese only, which the researchers believe may have confused the AI, as the prompt given to it was written in English.

The second vulnerability that GPT-4 could not crack required navigating a site that relies on JavaScript-based navigation.

The researchers conclude that large language model providers and the cybersecurity community should take these capabilities into consideration, especially in regards to defensive measures.

Closing Words

The capabilities of large language models have increased significantly since the first release of ChatGPT in late 2022, and they will improve further in the coming months and years.

It is likely that threat actors will use large language models to automate processes. As a consequence, exploits may be deployed sooner and by a wider pool of threat actors.

What is your take on this? Will we see an increase in exploit code in the coming years?

Tags: ai
Category: Security & Privacy
