Google and Greynoise report success in AI-supported vulnerability search
Google and Greynoise have each reported newly identified security vulnerabilities that they tracked down with the help of artificial intelligence.
It is well known that criminals use artificial intelligence to create malware or to polish texts for social engineering attacks. Now two companies, Google and Greynoise, have independently reported successes on the other side: they have tracked down security vulnerabilities with AI.
Google writes in a blog post that its developers worked on a project called Naptime to evaluate whether large language models (LLMs) could be used for offensive security. The "Big Sleep" agent that emerged from the follow-up project has now discovered a buffer underflow in the widely used, open-source SQLite database engine that attackers could have abused. The zero-day vulnerability was reported to the developers and fixed before it appeared in an official release, thus protecting users. According to Google's authors, this is the first public example of an AI discovering a previously unknown, exploitable memory-safety flaw in widely used real-world software.
AI-supported pre-selection
IT security company Greynoise uses an AI-powered system called Sift to narrow down around two million HTTP events a day to about 50 that IT analysts should examine more closely. From that noise, the system picked out an attack aimed at an executable script on servers (/cgi-bin/param.cgi).
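Greynoise has not published Sift's internals, but the general idea behind such pre-selection can be sketched: score each incoming event by how unusual it is across the day's traffic and surface only the rarest handful for human review. The following minimal Python sketch is purely illustrative; the function rank_anomalies and the simple inverse-frequency score are assumptions, not Sift's actual LLM-based method.

```python
from collections import Counter

def rank_anomalies(events: list[str], top_n: int = 50) -> list[tuple[str, float]]:
    """Return the top_n rarest request paths with a simple rarity score.

    Illustrative stand-in for AI pre-selection, not Greynoise's Sift.
    """
    counts = Counter(events)
    total = len(events)
    # Inverse frequency: a path seen once among millions of requests scores
    # far higher than routine traffic and gets surfaced for analysts.
    scored = {path: total / count for path, count in counts.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Toy traffic: routine paths plus one unusual CGI probe.
events = ["/index.html"] * 1000 + ["/favicon.ico"] * 500 + ["/cgi-bin/param.cgi"]
for path, score in rank_anomalies(events, top_n=3):
    print(f"{score:10.1f}  {path}")
```

In this toy run, the lone /cgi-bin/param.cgi probe tops the list, which is the point of the exercise: reduce millions of events to a short list worth an analyst's time.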
During the investigation of this AI-filtered event, actual zero-day vulnerabilities, apparently already under attack, were found in fairly high-priced pan-tilt-zoom (PTZ) cameras with network interfaces from OEM manufacturer ValueHD Corporation (VHD); the devices are in circulation under names such as PTZOptics PT30X-SDI/NDI. This resulted in two CVE entries: CVE-2024-8957, rated as a critical risk with a CVSS score of 9.8, and CVE-2024-8956, CVSS 9.1, likewise critical. On Monday of this week, the US IT security authority CISA consequently added both vulnerabilities to its catalog of known exploited vulnerabilities.
Different approaches
While Greynoise uses LLMs to pre-filter events and thus implement anomaly detection, Google has developed an AI-supported source code analysis that goes far beyond what could previously be achieved by fuzzing (i.e., feeding the code with many, sometimes nonsensical input values). According to Google, the approach is still at the research stage. It builds on known exploits, looking for variants of previously identified and patched vulnerabilities. This variant analysis suits current LLMs better than a completely open-ended vulnerability search. By specifying a starting point, the IT researchers remove ambiguity from the search, which then proceeds from a concrete and well-defined premise: "This was the previous bug. There's probably another one like it somewhere," writes Google.
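To make the contrast concrete, classic fuzzing looks roughly like the following minimal Python sketch. It is purely illustrative: parse_record is a toy target with a planted bug, not code from SQLite or from Google's tooling.

```python
import random
import string

def parse_record(data: str) -> str:
    # Toy target with a planted bug: assumes a ':' separator is always present,
    # so inputs without one raise an unexpected IndexError.
    return data.split(":")[1]

def fuzz(target, iterations: int = 10_000) -> None:
    """Feed the target many random, often nonsensical inputs; flag crashes."""
    for i in range(iterations):
        blob = "".join(random.choices(string.printable, k=random.randint(0, 32)))
        try:
            target(blob)
        except Exception as exc:
            print(f"iteration {i}: {type(exc).__name__} on input {blob!r}")
            return  # stop at the first crash for this demo

fuzz(parse_record)
```

Fuzzing like this relies on chance to stumble into a crashing input. Variant analysis instead hands the model a concrete, already-fixed bug and asks where a sibling of it might live.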
Small programs with known vulnerabilities are still used to evaluate the process; with SQLite, the team wanted to test the models and tools in a real-world scenario. They took a number of recent source code commits to the SQLite repository, filtering out documentation-only and trivial changes. They adapted the AI prompt so that the Big Sleep agent was fed the commit message and a diff of the changes, and asked the agent to search the current repository at HEAD for similar issues that had not yet been fixed. The agent used Google's Gemini 1.5 Pro model for this. Those interested can find more details in the blog post.
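Based on that description, the prompting step might be sketched as follows. This is not Google's actual Big Sleep code: the prompt wording, the repository path and commit chosen here, and the commented-out query_llm call are all assumptions for illustration.

```python
import subprocess

def commit_context(repo: str, commit: str) -> tuple[str, str]:
    """Fetch a commit's message and diff, as the agent is said to receive them."""
    msg = subprocess.run(
        ["git", "-C", repo, "log", "-1", "--format=%B", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    diff = subprocess.run(
        ["git", "-C", repo, "show", "--format=", commit],  # diff only, no header
        capture_output=True, text=True, check=True,
    ).stdout
    return msg, diff

def variant_analysis_prompt(msg: str, diff: str) -> str:
    # Assumed prompt shape, paraphrasing the approach described in the article.
    return (
        "The following commit fixed a vulnerability.\n"
        f"Commit message:\n{msg}\n"
        f"Diff:\n{diff}\n"
        "Search the repository at HEAD for similar issues that are not yet fixed."
    )

msg, diff = commit_context("sqlite", "HEAD~5")  # illustrative repo and commit
prompt = variant_analysis_prompt(msg, diff)
# query_llm(prompt)  # hypothetical call to an LLM backend such as Gemini
print(prompt[:500])
```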
At the end of April, the German Federal Office for Information Security (BSI) warned of security threats posed by artificial intelligence. According to the German IT security authority, malicious actors use AI in particular for social engineering and for generating malicious code. Large language models are now also being adopted by the other side in order to detect attacks and vulnerabilities and provide greater security.
(dmk)