AT2k Design BBS Message Area
Casually read the BBS message area using an easy to use interface. Messages are categorized exactly like they are on the BBS. You may post new messages or reply to existing messages!

You are not logged in. Login here for full access privileges.

Previous Message | Next Message | Back to Computer Support/Help/Discussion...  <--  <--- Return to Home Page
   Networked Database  Computer Support/Help/Discussion...   [1623 / 1624] RSS
 From   To   Subject   Date/Time 
Message   Sean Rima    All   CRYPTO-GRAM, November 15, 2024 Part 5   November 15, 2024
 4:13 PM *  

Expect lots of developments in this area over the next few years.

This is what I said in a recent interview:

    LetΓÇÖs stick with software. Imagine that we have an AI that finds
    software vulnerabilities. Yes, the attackers can use those AIs to break
    into systems. But the defenders can use the same AIs to find software
    vulnerabilities and then patch them. This capability, once it exists,
    will probably be built into the standard suite of software development
    tools. We can imagine a future where all the easily findable
    vulnerabilities (not all the vulnerabilities; there are lots of
    theoretical results about that) are removed in software before
    shipping.

    When that day comes, all legacy code would be vulnerable. But all new
    code would be secure. And, eventually, those software vulnerabilities
    will be a thing of the past. In my head, some future programmer shakes
    their head and says, ΓÇ£Remember the early decades of this century when
    software was full of vulnerabilities? ThatΓÇÖs before the AIs found them
    all. Wow, that was a crazy time.ΓÇ¥ WeΓÇÖre not there yet. WeΓÇÖre not even
    remotely there yet. But itΓÇÖs a reasonable extrapolation.

EDITED TO ADD: And GoogleΓÇÖs LLM just discovered an exploitable zero-day.

** *** ***** ******* *********** ************* IoT Devices in
Password-Spraying Botnet

[2024.11.06] Microsoft is warning Azure cloud users that a Chinese
controlled botnet is engaging in ΓÇ£highly evasiveΓÇ¥ password spraying. Not
sure about the ΓÇ£highly evasiveΓÇ¥ part; the techniques seem basically what
you get in a distributed password-guessing attack:

    ΓÇ£Any threat actor using the CovertNetwork-1658 infrastructure could
    conduct password spraying campaigns at a larger scale and greatly
    increase the likelihood of successful credential compromise and initial
    access to multiple organizations in a short amount of time,ΓÇ¥ Microsoft
    officials wrote. ΓÇ£This scale, combined with quick operational turnover
    of compromised credentials between CovertNetwork-1658 and Chinese
    threat actors, allows for the potential of account compromises across
    multiple sectors and geographic regions.ΓÇ¥

    Some of the characteristics that make detection difficult are:

	The use of compromised SOHO IP addresses The use of a rotating set
	of IP addresses at any given time. The threat actors had thousands
	of available IP addresses at their disposal. The average uptime for
	a CovertNetwork-1658 node is approximately 90 days.  The low-volume
	password spray process; for example, monitoring for multiple failed
	sign-in attempts from one IP address or to one account will not
	detect this activity.

** *** ***** ******* *********** ************* Subverting LLM Coders

[2024.11.07] Really interesting research: ΓÇ£An LLM-Assisted Easy-to-Trigger
Backdoor Attack on Code Completion Models: Injecting Disguised
Vulnerabilities against Strong DetectionΓÇ£:

    Abstract: Large Language Models (LLMs) have transformed code completion
    tasks, providing context-based suggestions to boost developer
    productivity in software engineering. As users often fine-tune these
    models for specific applications, poisoning and backdoor attacks can
    covertly alter the model outputs. To address this critical security
    challenge, we introduce CODEBREAKER, a pioneering LLM-assisted backdoor
    attack framework on code completion models. Unlike recent attacks that
    embed malicious payloads in detectable or irrelevant sections of the
    code (e.g., comments), CODEBREAKER leverages LLMs (e.g., GPT-4) for
    sophisticated payload transformation (without affecting
    functionalities), ensuring that both the poisoned data for fine-tuning
    and generated code can evade strong vulnerability detection.
    CODEBREAKER stands out with its comprehensive coverage of
    vulnerabilities, making it the first to provide such an extensive set
    for evaluation. Our extensive experimental evaluations and user studies
    underline the strong attack performance of CODEBREAKER across various
    settings, validating its superiority over existing approaches. By
    integrating malicious payloads directly into the source code with
    minimal transformation, CODEBREAKER challenges current security
    measures, underscoring the critical need for more robust defenses for
    code completion.

Clever attack, and yet another illustration of why trusted AI is essential.

** *** ***** ******* *********** ************* Prompt Injection Defenses
Against LLM Cyberattacks

[2024.11.07] Interesting research: ΓÇ£Hacking Back the AI-Hacker: Prompt
Injection as a Defense Against LLM-driven CyberattacksΓÇ£:

    Large language models (LLMs) are increasingly being harnessed to
    automate cyberattacks, making sophisticated exploits more accessible
    and scalable. In response, we propose a new defense strategy tailored
    to counter LLM-driven cyberattacks. We introduce Mantis, a defensive
    framework that exploits LLMsΓÇÖ susceptibility to adversarial inputs to
    undermine malicious operations. Upon detecting an automated
    cyberattack, Mantis plants carefully crafted inputs into system
    responses, leading the attackerΓÇÖs LLM to disrupt their own operations
    (passive defense) or even compromise the attackerΓÇÖs machine (active
    defense). By deploying purposefully vulnerable decoy services to
    attract the attacker and using dynamic prompt injections for the
    attackerΓÇÖs LLM, Mantis can autonomously hack back the attacker. In our
    experiments, Mantis consistently achieved over 95% effectiveness
    against automated LLM-driven attacks. To foster further research and
    collaboration, Mantis is available as an open-source tool: this https
    URL.

This isnΓÇÖt the solution, of course. But this sort of thing could be part of
a solution.

** *** ***** ******* *********** ************* AI Industry is Trying to
Subvert the Definition of ΓÇ£Open Source AIΓÇ¥

[2024.11.08] The Open Source Initiative has published (news article here)
its definition of ΓÇ£open source AI,ΓÇ¥ and itΓÇÖs terrible. It allows for
secret
training data and mechanisms. It allows for development to be done in
secret. Since for a neural network, the training data is the source code --
itΓÇÖs how the model gets programmed -- the definition makes no sense.

And itΓÇÖs confusing; most ΓÇ£open sourceΓÇ¥ AI models -- like LLAMA -- are open
source in name only. But the OSI seems to have been co-opted by industry
players that want both corporate secrecy and the ΓÇ£open sourceΓÇ¥ label.
(HereΓÇÖs one rebuttal to the definition.)
--- 
 * Origin: High Portable Tosser at my node (618:500/14.1)
  Show ANSI Codes | Hide BBCodes | Show Color Codes | Hide Encoding | Hide HTML Tags | Show Routing
Previous Message | Next Message | Back to Computer Support/Help/Discussion...  <--  <--- Return to Home Page

VADV-PHP
Execution Time: 0.0181 seconds

If you experience any problems with this website or need help, contact the webmaster.
VADV-PHP Copyright © 2002-2024 Steve Winn, Aspect Technologies. All Rights Reserved.
Virtual Advanced Copyright © 1995-1997 Roland De Graaf.
v2.1.241108