AT2k Design BBS Message Area
Casually read the BBS message area through an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing ones!

Area: Slashdot (Local Database)   Message 42 of 240   RSS
From: VRSS
To: All
Subject: Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain
Date/Time: April 5, 2025 12:40 PM

Feed: Slashdot
Feed Link: https://slashdot.org/
---

Title: Open Source Coalition Announces 'Model-Signing' with Sigstore to
Strengthen the ML Supply Chain

Link: https://it.slashdot.org/story/25/04/05/062120...

The advent of LLMs and machine learning-based applications "opened the door
to a new wave of security threats," argues Google's security blog. (Including
model and data poisoning, prompt injection, prompt leaking and prompt
evasion.) So as part of the Linux Foundation's nonprofit Open Source Security
Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open
Source Security Team on Friday announced the first stable model-signing
library (hosted at PyPI.org), with digital signatures letting users verify
that the model used by their application "is exactly the model that was
created by the developers," according to a post on Google's security blog.
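
In concrete terms, a model publisher produces a detached signature over the model artifacts, and every consumer re-checks that signature before using them. Below is a minimal sketch of that round trip, assuming the PyPI package name model-signing and an API shaped like the project's documented configuration style; the exact module and method names are assumptions, not a verbatim copy of the released v1.0 interface.

    # pip install model-signing   (assumed package name on PyPI)
    import model_signing

    # Publisher side: sign a model stored as a directory tree, producing a
    # detached signature file alongside it.
    (
        model_signing.signing.Config()
        .use_sigstore_signer()           # assumed method name
        .sign("path/to/model", "model.sig")
    )

    # Consumer side: verify that the directory still matches the signature and
    # that it was produced by the expected identity (values are hypothetical).
    (
        model_signing.verifying.Config()
        .use_sigstore_verifier(          # assumed method name
            identity="release-bot@example.com",
            oidc_issuer="https://accounts.example.com",
        )
        .verify("path/to/model", "model.sig")
    )
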
[S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?" Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...
The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.
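
Put differently, a consumer of a Sigstore-signed model pins an identity and an OIDC issuer instead of a public key, and additionally requires that the signature is recorded in the public transparency log. A schematic sketch of that verification policy, with hypothetical identity values; in practice this check is performed by Sigstore's own tooling, not hand-rolled code like this.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ExpectedSigner:
        # What the consumer pins in the keyless flow instead of managing
        # and rotating a long-lived public key.
        identity: str
        oidc_issuer: str

    def acceptable(cert_identity: str, cert_issuer: str,
                   in_transparency_log: bool, policy: ExpectedSigner) -> bool:
        """Schematic policy: the short-lived signing certificate must bind the
        expected identity/issuer pair, and the signature must appear in the
        public log (which is what makes split-view attacks detectable)."""
        return (cert_identity == policy.identity
                and cert_issuer == policy.oidc_issuer
                and in_transparency_log)

    # Hypothetical example: only accept models signed by a specific CI workflow.
    policy = ExpectedSigner(
        identity="https://github.com/example-org/models/.github/workflows/release.yml@refs/tags/v1.0",
        oidc_issuer="https://token.actions.githubusercontent.com",
    )
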
Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
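
Besides the library usage sketched earlier, the package's CLI utilities can be driven from scripts or CI jobs. A sketch of signing and verifying an individual model, wrapped in Python subprocess calls so it stays runnable as-is; the executable name, subcommands, and flags below are assumptions for illustration, so check the project's README for the released interface.

    import subprocess

    MODEL_DIR = "path/to/model"   # model stored as a directory tree
    SIG_FILE = "model.sig"        # detached signature produced at release time

    # Release side: sign the model. With Sigstore this uses an OIDC login or a
    # workload identity token instead of a locally managed private key.
    subprocess.run(
        ["model_signing", "sign", MODEL_DIR, "--signature", SIG_FILE],
        check=True,
    )

    # Consumer side: verify the model, pinning the expected signer identity and
    # OIDC issuer (hypothetical values) rather than a long-lived public key.
    subprocess.run(
        ["model_signing", "verify", MODEL_DIR,
         "--signature", SIG_FILE,
         "--identity", "release-bot@example.com",
         "--identity_provider", "https://accounts.example.com"],
        check=True,
    )
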
"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Read more of this story at Slashdot.

---
VRSS v2.1.180528
