AT2k Design BBS Message Area
Casually read the BBS message area using an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing ones!


Area: Local Database > Slashdot [287 / 289] (RSS)
From: VRSS
To: All
Subject: AI Hallucinations Lead To a New Cyber Threat: Slopsquatting
Date/Time: April 21, 2025 9:00 PM

Feed: Slashdot
Feed Link: https://slashdot.org/
---

Title: AI Hallucinations Lead To a New Cyber Threat: Slopsquatting

Link: https://it.slashdot.org/story/25/04/22/011820...

Researchers have uncovered a new supply chain attack called Slopsquatting,
where threat actors exploit hallucinated, non-existent package names
generated by AI coding tools like GPT-4 and CodeLlama. These believable yet
fake packages, representing almost 20% of the samples tested, can be
registered by attackers to distribute malicious code.
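
A practical first line of defense against this kind of attack is to confirm
that an AI-suggested dependency is actually a registered project before
installing it. The following is a minimal Python sketch of such a check
against PyPI's public JSON API, which returns HTTP 404 for unregistered
names; the candidate names below are made-up examples:

    # Verify that a suggested package name exists on PyPI before installing.
    import urllib.error
    import urllib.request

    def exists_on_pypi(name: str) -> bool:
        """Return True if `name` is a registered PyPI project."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False  # unregistered: a slopsquatting candidate
            raise  # other failures (rate limits, outages) need attention

    for candidate in ["requests", "definitely-not-a-real-pkg-12345"]:
        print(candidate, "->", exists_on_pypi(candidate))

Note that existence alone proves nothing about safety: once an attacker
registers a hallucinated name, this check passes, so it only catches
suggestions that do not resolve at all.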

CSO Online reports: Slopsquatting, as researchers are calling it, is a term
first coined by Seth Larson, a security developer-in-residence at the Python
Software Foundation (PSF), for its resemblance to the typosquatting
technique. Instead of relying on a user's mistake, as in typosquats, threat
actors rely on an AI model's mistake. A significant share of the packages
recommended in test samples, 19.7% (205,000 packages), were found to be
fakes. Open-source models -- like DeepSeek and WizardCoder -- hallucinated
more frequently, at 21.7% on average, than commercial ones (5.2%) like
GPT-4. Researchers found CodeLlama (hallucinating in over a third of its
outputs) to be the worst offender, and GPT-4 Turbo (just 3.59%
hallucinations) to be the best performer.

These package hallucinations are particularly dangerous because they were
found to be persistent, repetitive, and believable. When researchers reran
500 prompts that had previously produced hallucinated packages, 43% of the
hallucinations reappeared in all 10 successive re-runs, and 58% appeared in
more than one run. The study concluded that this persistence indicates "that
the majority of hallucinations are not just random noise, but repeatable
artifacts of how the models respond to certain prompts." This increases
their value to attackers, it added.

Additionally, these hallucinated package names were observed to be
"semantically convincing." Thirty-eight percent of them had moderate string
similarity to real packages, suggesting a similar naming structure. "Only
13% of hallucinations were simple off-by-one typos," Socket added. The
research can be found in a paper on arXiv.org (PDF).
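
Those two similarity figures can be reproduced in spirit with standard
string metrics. Below is a short Python sketch using the standard library's
difflib for a similarity ratio and a classic dynamic-programming edit
distance for the off-by-one check; the compared names are illustrative, not
taken from the study:

    # Measure how "convincing" a hallucinated package name is.
    import difflib

    def similarity(a: str, b: str) -> float:
        """Similarity ratio in [0, 1]; mid-range values are 'moderate'."""
        return difflib.SequenceMatcher(None, a, b).ratio()

    def levenshtein(a: str, b: str) -> int:
        """Minimum number of single-character edits turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    print(similarity("color-utils", "colorama"))  # moderate similarity
    print(levenshtein("request", "requests"))     # 1: an off-by-one typo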

Read more of this story at Slashdot.

---
VRSS v2.1.180528
