
Crypto-Gram
June 15, 2023

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

Micro-Star International Signing Key Stolen
Microsoft Secure Boot Bug
Security Risks of New .zip and .mov Domains
Google Is Not Deleting Old YouTube Videos
Credible Handwriting Machine
Indiana, Iowa, and Tennessee Pass Comprehensive Privacy Laws
On the Poisoning of LLMs
Expeditionary Cyberspace Operations
Brute-Forcing a Fingerprint Reader
Chinese Hacking of US Critical Infrastructure
On the Catastrophic Risk of AI
Open-Source LLMs
The Software-Defined Car
Snowden Ten Years Later
How Attorneys Are Harming Cybersecurity Incident Response
Paragon Solutions Spyware: Graphite
Operation Triangulation: Zero-Click iPhone Malware
AI-Generated Steganography
Identifying the Idaho Killer
On the Need for an AI Public Option
** *** ***** ******* *********** *************

Micro-Star International Signing Key Stolen

[2023.05.15] Micro-Star International -- aka MSI -- had its UEFI signing key
stolen last month.

This raises the possibility that the leaked key could push out updates that
would infect a computer's most nether regions without triggering a warning. To
make matters worse, Matrosov said, MSI doesn't have an automated patching
process the way Dell, HP, and many larger hardware makers do. Consequently, MSI
doesn't provide the same kind of key revocation capabilities.

Delivering a signed payload isn't as easy as all that. "Gaining the kind of
control required to compromise a software build system is generally a
non-trivial event that requires a great deal of skill and possibly some luck."
But it just got a whole lot easier.

** *** ***** ******* *********** *************

Microsoft Secure Boot Bug

[2023.05.17] Microsoft is currently patching a zero-day Secure-Boot bug.

The BlackLotus bootkit is the first-known real-world malware that can bypass
Secure Boot protections, allowing for the execution of malicious code before
your PC begins loading Windows and its many security protections. Secure Boot
has been enabled by default for over a decade on most Windows PCs sold by
companies like Dell, Lenovo, HP, Acer, and others. PCs running Windows 11 must
have it enabled to meet the software's system requirements.

Microsoft says that the vulnerability can be exploited by an attacker with
either physical access to a system or administrator rights on a system. It can
affect physical PCs and virtual machines with Secure Boot enabled.

That's important. This is a nasty vulnerability, but it takes some work to
exploit it.
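
As a side note, whether Secure Boot is enabled at all is easy to check
programmatically. The snippet below is a minimal sketch, assuming a Linux host
with efivarfs mounted; on Windows, tools such as msinfo32 or the
Confirm-SecureBootUEFI PowerShell cmdlet report the same thing. It is not part
of Microsoft's patch process.

    # Minimal sketch: read the SecureBoot UEFI variable via efivarfs on Linux.
    # Assumes /sys/firmware/efi/efivars is mounted (the usual case on UEFI systems).
    from pathlib import Path

    # The GUID is the standard EFI global-variable namespace.
    SECURE_BOOT_VAR = Path(
        "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
    )

    def secure_boot_enabled():
        """Return True/False, or None if the variable is absent (legacy BIOS boot)."""
        if not SECURE_BOOT_VAR.exists():
            return None
        data = SECURE_BOOT_VAR.read_bytes()
        # First 4 bytes are variable attributes; the 5th byte is the value (1 = enabled).
        return bool(data[4]) if len(data) >= 5 else None

    if __name__ == "__main__":
        status = secure_boot_enabled()
        print({True: "Secure Boot enabled",
               False: "Secure Boot disabled",
               None: "Not booted via UEFI"}[status])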

The problem with the patch is that it breaks backwards compatibility:
"...once the fixes have been enabled, your PC will no longer be able to boot
from older bootable media that doesn't include the fixes."

And:

Not wanting to suddenly render any users' systems unbootable, Microsoft will
be rolling the update out in phases over the next few months. The initial
version of the patch requires substantial user intervention to enable -- you
first need to install May's security updates, then use a five-step process to
manually apply and verify a pair of "revocation files" that update your
system's hidden EFI boot partition and your registry. These will make it so
that older, vulnerable versions of the bootloader will no longer be trusted by
PCs.

A second update will follow in July that won't enable the patch by default but
will make it easier to enable. A third update in "first quarter 2024" will
enable the fix by default and render older boot media unbootable on all patched
Windows PCs. Microsoft says it is "looking for opportunities to accelerate
this schedule," though it's unclear what that would entail.

So it'll be almost a year before this is completely fixed.

** *** ***** ******* *********** *************

Security Risks of New .zip and .mov Domains

[2023.05.19] Researchers are worried about Google's .zip and .mov domains,
because they are confusing. Mistaking a URL for a filename could be a security
vulnerability.
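
One way to see the confusion risk: many chat and mail clients auto-link
anything that looks like a domain. A minimal sketch, with made-up filenames and
a deliberately naive regex (not any real client's implementation):

    # Naive auto-linkifier: any "word.tld" token becomes a URL. With .zip and .mov
    # now valid TLDs, plain filenames in a message turn into clickable links to
    # domains an attacker could register.
    import re

    LINK_RE = re.compile(r"\b([a-z0-9-]+(?:\.[a-z0-9-]+)*\.(?:com|org|zip|mov))\b", re.I)

    def linkify(text):
        return LINK_RE.sub(r'<a href="https://\1">\1</a>', text)

    message = "I attached the report as q2-financials.zip and the demo as launch.mov"
    print(linkify(message))
    # Both "q2-financials.zip" and "launch.mov" render as outbound links,
    # even though the author meant them as filenames.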

** *** ***** ******* *********** *************

Google Is Not Deleting Old YouTube Videos

[2023.05.22] Google has backtracked on its plan to delete inactive YouTube
videos -- at least for now. Of course, it could change its mind anytime it
wants.

It would be nice if this would get people to think about the vulnerabilities
inherent in letting a for-profit monopoly decide what of human creativity is
worth saving.

** *** ***** ******* *********** *************

Credible Handwriting Machine

[2023.05.23] In case you don't have enough to worry about, someone has built a
credible handwriting machine:

This is still a work in progress, but the project seeks to solve one of the
biggest problems with other homework machines, such as this one that I covered a
few months ago after it blew up on social media. The problem with most homework
machines is that they're too perfect. Not only is their content output too
well-written for most students, but they also have perfect grammar and
punctuation -- something even we professional writers fail to consistently
achieve. Most importantly, the machine's "handwriting" is too consistent. Humans
always include small variations in their writing, no matter how honed their
penmanship.

Devadath is on a quest to fix the issue with perfect penmanship by making his
machine mimic human handwriting. Even better, it will reflect the handwriting of
its specific user so that AI-written submissions match those written by the
student themselves.

Like other machines, this starts with asking ChatGPT to write an essay based on
the assignment prompt. That generates a chunk of text, which would normally be
stylized with a script-style font and then output as g-code for a pen plotter.
But instead, Devadeth created custom software that records examples of the
user's own handwriting. The software then uses that as a font, with small
random variations, to create a document image that looks like it was actually
handwritten.
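
The core trick -- perturbing each stroke slightly before sending it to the
plotter -- is simple to illustrate. This is only a toy sketch of the idea, not
the project's actual code; the stroke data and G-code conventions are made up:

    # Perturb each point of a glyph's pen stroke with small random offsets before
    # emitting G-code, so the plotted "handwriting" varies slightly on every pass.
    import random

    def jitter_stroke(points, sigma=0.15):
        """Add small Gaussian noise to (x, y) points of one pen stroke, in millimeters."""
        return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma)) for x, y in points]

    def stroke_to_gcode(points, feed=1500):
        """Emit naive G-code: travel to the start, pen down, draw, pen up."""
        (x0, y0), rest = points[0], points[1:]
        lines = [f"G0 X{x0:.2f} Y{y0:.2f}", "M3 S1000  ; pen down"]
        lines += [f"G1 X{x:.2f} Y{y:.2f} F{feed}" for x, y in rest]
        lines.append("M5        ; pen up")
        return "\n".join(lines)

    # A toy stroke for the letter "l": the same template produces slightly
    # different output each run, mimicking natural variation in penmanship.
    template = [(0.0, 0.0), (0.2, 3.0), (0.4, 6.0)]
    print(stroke_to_gcode(jitter_stroke(template)))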

Watch the video.

My guess is that this is another detection/detection avoidance arms race.

** *** ***** ******* *********** *************

Indiana, Iowa, and Tennessee Pass Comprehensive Privacy Laws

[2023.05.24] It's been a big month for US data privacy. Indiana, Iowa, and
Tennessee all passed state privacy laws, bringing the total number of states
with a privacy law up to eight. No private right of action in any of those,
which means it's up to the states to enforce the laws.

** *** ***** ******* *********** *************

On the Poisoning of LLMs

[2023.05.25] Interesting essay on the poisoning of LLMs -- ChatGPT in
particular:

Given that we've known about model poisoning for years, and given the strong
incentives the black-hat SEO crowd has to manipulate results, it's entirely
possible that bad actors have been poisoning ChatGPT for months. We don't know
because OpenAI doesn't talk about their processes, how they validate the
prompts they use for training, how they vet their training data set, or how they
fine-tune ChatGPT. Their secrecy means we don't know if ChatGPT has been
safely managed.

They'll also have to update their training data set at some point. They
can't leave their models stuck in 2021 forever.

Once they do update it, we only have their word -- pinky-swear promises -- that
they've done a good enough job of filtering out keyword manipulations and
other training data attacks, something that the AI researcher El Mahdi El Mhamdi
posited is mathematically impossible in a paper he worked on while he was at
Google.

** *** ***** ******* *********** *************

Expeditionary Cyberspace Operations

[2023.05.26] Cyberspace operations now officially has a physical dimension,
meaning that the United States has official military doctrine about cyberattacks
that also involve an actual human gaining physical access to a piece of
computing infrastructure.

A revised version of Joint Publication 3-12 Cyberspace Operations -- published
in December 2022 and while unclassified, is only available to those with DoD
common access cards, according to a Joint Staff spokesperson -- officially
provides a definition for "expeditionary cyberspace operations," which are
"[c]yberspace operations that require the deployment of cyberspace forces
within the physical domains."

[...]

"Developing access to targets in or through cyberspace follows a process that
can often take significant time. In some cases, remote access is not possible or
preferable, and close proximity may be required, using expeditionary [cyber
operations]," the joint publication states. "Such operations are key to
addressing the challenge of closed networks and other systems that are virtually
isolated. Expeditionary CO are often more regionally and tactically focused and
can include units of the CMF or special operations forces ... If direct access
to the target is unavailable or undesired, sometimes a similar or partial effect
can be created by indirect access using a related target that has higher-order
effects on the desired target."

[...]

"Allowing them to support [combatant commands] in this way permits faster
adaptation to rapidly changing needs and allows threats that initially manifest
only in one [area of responsibility] to be mitigated globally in near real time.
Likewise, while synchronizing CO missions related to achieving [combatant
commander] objectives, some cyberspace capabilities that support this activity
may need to be forward-deployed; used in multiple AORs simultaneously; or, for
speed in time-critical situations, made available via reachback," it states.
"This might involve augmentation or deployment of cyberspace capabilities to
forces already forward or require expeditionary CO by deployment of a fully
equipped team of personnel and capabilities."

** *** ***** ******* *********** *************

Brute-Forcing a Fingerprint Reader

[2023.05.30] It's neither hard nor expensive:

Unlike password authentication, which requires a direct match between what is
inputted and what's stored in a database, fingerprint authentication
determines a match using a reference threshold. As a result, a successful
fingerprint brute-force attack requires only that an inputted image provides an
acceptable approximation of an image in the fingerprint database. BrutePrint
manipulates the false acceptance rate (FAR) to increase the threshold so fewer
approximate images are accepted.

BrutePrint acts as an adversary in the middle between the fingerprint sensor and
the trusted execution environment and exploits vulnerabilities that allow for
unlimited guesses.

In a BrutePrint attack, the adversary removes the back cover of the device and
attaches the $15 circuit board that has the fingerprint database loaded in the
flash storage. The adversary then must convert the database into a fingerprint
dictionary that's formatted to work with the specific sensor used by the
targeted phone. The process uses a neural-style transfer when converting the
database into the usable dictionary. This process increases the chances of a
match.

With the fingerprint dictionary in place, the adversary device is now in a
position to input each entry into the targeted phone. Normally, a protection
known as attempt limiting effectively locks a phone after a set number of failed
login attempts are reached. BrutePrint can fully bypass this limit in the eight
tested Android models, meaning the adversary device can try an infinite number
of guesses. (On the two iPhones, the attack can expand the number of guesses to
15, three times higher than the five permitted.)
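
The structure of the attack is easy to see in miniature. Below is a toy
simulation of the idea -- threshold-based matching plus a disabled attempt
limit -- not the researchers' code; the similarity function is a random
stand-in for a real sensor's matcher, and the numbers are invented:

    # Fingerprint matching accepts any candidate whose similarity score clears a
    # threshold, so an attacker who can disable attempt limiting just replays a
    # dictionary of prints until one clears it.
    import random

    THRESHOLD = 0.999          # acceptance threshold used by the matcher
    ATTEMPT_LIMIT = 5          # what the phone is supposed to enforce

    def similarity(candidate, enrolled):
        # Stand-in for the sensor's matcher: a deterministic pseudo-random score
        # instead of real image comparison.
        random.seed(hash((candidate, enrolled)))
        return random.random()

    def brute_force(dictionary, enrolled, enforce_limit):
        for i, candidate in enumerate(dictionary, start=1):
            if enforce_limit and i > ATTEMPT_LIMIT:
                return None                      # a normal phone stops here
            if similarity(candidate, enrolled) >= THRESHOLD:
                return i                         # an "approximate" match is good enough
        return None

    dictionary = [f"print-{n}" for n in range(100_000)]
    print("with limit:   ", brute_force(dictionary, "victim", enforce_limit=True))
    print("without limit:", brute_force(dictionary, "victim", enforce_limit=False))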

The bypasses result from exploiting what the researchers said are two zero-day
vulnerabilities in the smartphone fingerprint authentication framework of
virtually all smartphones. The vulnerabilities -- one known as CAMF
(cancel-after-match fail) and the other MAL (match-after-lock) -- result from
logic bugs in the authentication framework. CAMF exploits invalidate the
checksum of transmitted fingerprint data, and MAL exploits infer matching
results through side-channel attacks.

Depending on the model, the attack takes between 40 minutes and 14 hours.

Also:

The ability of BrutePrint to successfully hijack fingerprints stored on Android
devices but not iPhones is the result of one simple design difference: iOS
encrypts the data, and Android does not.

Other news articles. Research paper.

** *** ***** ******* *********** *************

Chinese Hacking of US Critical Infrastructure

[2023.05.31] The text of this entry has been removed because it was tripping
email spam filters. To read the entry, use this link.

** *** ***** ******* *********** *************

On the Catastrophic Risk of AI

[2023.06.01] Earlier this week, I signed on to a short group statement,
coordinated by the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside
other societal-scale risks such as pandemics and nuclear war.

The press coverage has been extensive, and surprising to me. The New York Times
headline is "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn."
BBC: "Artificial intelligence could lead to extinction, experts warn."
Other headlines are similar.

I actually don't think that AI poses a risk to human extinction. I think it
poses a similar risk to pandemics and nuclear war -- which is to say, a risk
worth taking seriously, but not something to panic over. Which is what I thought
the statement said.

In my talk at the RSA Conference last month, I talked about the power level of
our species becoming too great for our systems of governance. Talking about
those systems, I said:

Now, add into this mix the risks that arise from new and dangerous technologies
such as the internet or AI or synthetic biology. Or molecular nanotechnology, or
nuclear weapons. Here, misaligned incentives and hacking can have catastrophic
consequences for society.

That was what I was thinking about when I agreed to sign on to the statement:
"Pandemics, nuclear weapons, AI -- yeah, I would put those three in the same
bucket. Surely we can spend the same effort on AI risk as we do on future
pandemics. That's a really low bar." Clearly I should have focused on the
word "extinction," and not the relative comparisons.

Seth Lazar, Jeremy Howard, and Arvind Narayanan wrote:

We think that, in fact, most signatories to the statement believe that runaway
AI is a way off yet, and that it will take a significant scientific advance to
get there -- one that we cannot anticipate, even if we are confident that it will
someday occur. If this is so, then at least two things follow.

I agree with that, and with their follow up:

First, we should give more weight to serious risks from AI that are more urgent.
Even if existing AI systems and their plausible extensions won't wipe us out,
they are already causing much more concentrated harm, they are sure to
exacerbate inequality and, in the hands of power-hungry governments and
unscrupulous corporations, will undermine individual and collective freedom.

This is what I wrote in Click Here to Kill Everybody (2018):

I am less worried about AI; I regard fear of AI more as a mirror of our own
society than as a harbinger of the future. AI and intelligent robotics are the
culmination of several precursor technologies, like machine learning algorithms,
automation, and autonomy. The security risks from those precursor technologies
are already with us, and they're increasing as the technologies become more
powerful and more prevalent. So, while I am worried about intelligent and even
driverless cars, most of the risks are already prevalent in Internet-connected
drivered cars. And while I am worried about robot soldiers, most of the risks
are already prevalent in autonomous weapons systems.

Also, as roboticist Rodney Brooks pointed out, "Long before we see such
machines arising there will be the somewhat less intelligent and belligerent
machines. Before that there will be the really grumpy machines. Before that the
quite annoying machines. And before them the arrogant unpleasant machines." I
think we'll see any new security risks coming long before they get here.

I do think we should worry about catastrophic AI and robotics risk. It's the
fact that they affect the world in a direct, physical manner -- and that
they're vulnerable to class breaks.

(Other things to read: David Chapman is good on scary AI. And Kieran Healy is
good on the statement.)

Okay, enough. I should also learn not to sign on to group statements.

** *** ***** ******* *********** *************

Open-Source LLMs

[2023.06.02] In February, Meta released its large language model: LLaMA. Unlike
OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play
with. Instead, it released the code into the open-source community, and shortly
thereafter the model itself was leaked. Researchers and programmers immediately
started modifying it, improving it, and getting it to do things no one else
anticipated. And their results have been immediate, innovative, and an
indication of how the future of this technology is going to play out. Training
speeds have hugely increased, and the size of the models themselves has shrunk
to the point that you can create and run them on a laptop. The world of AI
research has dramatically changed.

This development hasn't made the same splash as other corporate announcements,
but its effects will be much greater. It will wrest power from the large tech
corporations, resulting in both much more innovation and a much more challenging
regulatory landscape. The large corporations that had controlled these models
warn that this free-for-all will lead to potentially dangerous developments, and
problematic uses of the open technology have already been documented. But those
who are working on the open models counter that a more democratic research
environment is better than having this powerful technology controlled by a small
number of corporations.

The power shift comes from simplification. The LLMs built by OpenAI and Google
rely on massive data sets, measured in the tens of billions of bytes, computed
on by tens of thousands of powerful specialized processors producing models with
billions of parameters. The received wisdom is that bigger data, bigger
processing, and larger parameter sets were all needed to make a better model.
Producing such a model requires the resources of a corporation with the money
and computing power of a Google or Microsoft or Meta.

But building on public models like Meta's LLaMA, the open-source community has
innovated in ways that allow results nearly as good as the huge models -- but
run on home machines with common data sets. What was once the reserve of the
resource-rich has become a playground for anyone with curiosity, coding skills,
and a good laptop. Bigger may be better, but the open-source community is
showing that smaller is often good enough. This opens the door to more
efficient, accessible, and resource-friendly LLMs.

More importantly, these smaller and faster LLMs are much more accessible and
easier to experiment with. Rather than needing tens of thousands of machines and
millions of dollars to train a new model, an existing model can now be
customized on a mid-priced laptop in a few hours. This fosters rapid innovation.
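
One common way this kind of low-cost customization is done today is with
parameter-efficient fine-tuning such as LoRA adapters. The sketch below uses
the Hugging Face transformers and peft libraries; the model name, target
modules, and hyperparameters are placeholder assumptions, not recommendations
from the essay:

    # Attach LoRA adapters to a small open model and fine-tune only the adapter
    # weights, which is what makes laptop-scale customization feasible.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "openlm-research/open_llama_3b"          # assumed small open model
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    lora = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],        # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of the base model

    # From here, a standard transformers.Trainer loop over a small instruction
    # dataset is enough; only the adapter weights (a few megabytes) need to be
    # saved and shared.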

It also takes control away from large companies like Google and OpenAI. By
providing access to the underlying code and encouraging collaboration,
open-source initiatives empower a diverse range of developers, researchers, and
organizations to shape the technology. This diversification of control helps
prevent undue influence, and ensures that the development and deployment of AI
technologies align with a broader set of values and priorities. Much of the
modern internet was built on open-source technologies from the LAMP (Linux,
Apache, mySQL, and PHP/PERL/Python) stack -- a suite of applications often used
in web development. This enabled sophisticated websites to be easily
constructed, all with open-source tools that were built by enthusiasts, not
companies looking for profit. Facebook itself was originally built using
open-source PHP.

But being open-source also means that there is no one to hold responsible for
misuse of the technology. When vulnerabilities are discovered in obscure bits of
open-source technology critical to the functioning of the internet, often there
is no entity responsible for fixing the bug. Open-source communities span
countries and cultures, making it difficult to ensure that any countryΓÇÖs laws
will be respected by the community. And having the technology open-sourced means
that those who wish to use it for unintended, illegal, or nefarious purposes
have the same access to the technology as anyone else.

This, in turn, has significant implications for those who are looking to
regulate this new and powerful technology. Now that the open-source community is
remixing LLMs, it's no longer possible to regulate the technology by dictating
what research and development can be done; there are simply too many researchers
doing too many different things in too many different countries. The only
governance mechanism available to governments now is to regulate usage (and only
for those who pay attention to the law), or to offer incentives to those
(including startups, individuals, and small companies) who are now the drivers
of innovation in the arena. Incentives for these communities could take the form
of rewards for the production of particular uses of the technology, or
hackathons to develop particularly useful applications. Sticks are hard to use
-- instead, we need appealing carrots.

It is important to remember that the open-source community is not always
motivated by profit. The members of this community are often driven by
curiosity, the desire to experiment, or the simple joys of building. While there
are companies that profit from supporting software produced by open-source
projects like Linux, Python, or the Apache web server, those communities are not
profit driven.

And there are many open-source models to choose from. Alpaca, Cerebras-GPT,
Dolly, HuggingChat, and StableLM have all been released in the past few months.
Most of them are built on top of LLaMA, but some have other pedigrees. More are
on their way.

The large tech monopolies that have been developing and fielding LLMs -- Google,
Microsoft, and Meta -- are not ready for this. A few weeks ago, a Google
employee leaked a memo in which an engineer tried to explain to his superiors
what an open-source LLM means for their own proprietary tech. The memo concluded
that the open-source community has lapped the major corporations and has an
overwhelming lead on them.

This isn't the first time companies have ignored the power of the open-source
community. Sun never understood Linux. Netscape never understood the Apache web
server. Open source isn't very good at original innovations, but once an
innovation is seen and picked up, the community can be a pretty overwhelming
thing. The large companies may respond by trying to retrench and pulling their
models back from the open-source community.

But it's too late. We have entered an era of LLM democratization. By showing
that smaller models can be highly effective, enabling easy experimentation,
diversifying control, and providing incentives that are not profit motivated,
open-source initiatives are moving us into a more dynamic and inclusive AI
landscape. This doesn't mean that some of these models won't be biased, or
wrong, or used to generate disinformation or abuse. But it does mean that
controlling this technology is going to take an entirely different approach than
regulating the large players.

This essay was written with Jim Waldo, and previously appeared on Slate.com.

EDITED TO ADD (6/4): Slashdot thread.

** *** ***** ******* *********** *************

The Software-Defined Car

[2023.06.05] Developers are starting to talk about the software-defined car.

For decades, features have accumulated like cruft in new vehicles: a box here to
control the antilock brakes, a module there to run the cruise control radar, and
so on. Now engineers and designers are rationalizing the way they go about
building new models, taking advantage of much more powerful hardware to
consolidate all those discrete functions into a small number of domain
controllers.

The behavior of new cars is increasingly defined by software, too. This is
merely the progression of a trend that began at the end of the 1970s with the
introduction of the first electronic engine control units; today, code controls
a car's engine and transmission (or its electric motors and battery pack), the
steering, brakes, suspension, interior and exterior lighting, and more,
depending on how new (and how expensive) it is. And those systems are being
leveraged for convenience or safety features like adaptive cruise control, lane
keeping, remote parking, and so on.

And security?

Another advantage of the move away from legacy designs is that digital security
can be baked in from the start rather than patched onto components (like a
car's central area network) that were never designed with the Internet in
mind. "If you design it from scratch, it's security by design, everything is
in by design; you have it there. But keep in mind that, of course, the more
software there is in the car, the more risk is there for vulnerabilities, no
question about this," Anhalt said.

"At the same time, they're a great software system. They're highly
secure. They're much more secure than a hardware system with a little bit of
software. It depends how the whole thing has been designed. And there are so
many regulations and EU standards that have been released in the last year, year
and a half, that force OEMs to comply with these standards and get security
inside," she said.

I suppose it could end up that way. It could also be a much bigger attack
surface, with a lot more hacking possibilities.

** *** ***** ******* *********** *************

Snowden Ten Years Later

[2023.06.06] In 2013 and 2014, I wrote extensively about new revelations
regarding NSA surveillance based on the documents provided by Edward Snowden.
But I had a more personal involvement as well.

I wrote the essay below in September 2013. The New Yorker agreed to publish it,
but the Guardian asked me not to. It was scared of UK law enforcement, and
worried that this essay would reflect badly on it. And given that the UK police
would raid its offices in July 2014, it had legitimate cause to be worried.

Now, ten years later, I offer this as a time capsule of what those early months
of Snowden were like.

It's a surreal experience, paging through hundreds of top-secret NSA
documents. You're peering into a forbidden world: strange, confusing, and
fascinating all at the same time.

I had flown down to Rio de Janeiro in late August at the request of Glenn
Greenwald. He had been working on the Edward Snowden archive for a couple of
months, and had a pile of more technical documents that he wanted help
interpreting. According to Greenwald, Snowden also thought that bringing me down
was a good idea.

It made sense. I didn't know either of them, but I have been writing about
cryptography, security, and privacy for decades. I could decipher some of the
technical language that Greenwald had difficulty with, and understand the
context and importance of various documents. And I have long been publicly
critical of the NSA's eavesdropping capabilities. My knowledge and expertise
could help figure out which stories needed to be reported.

I thought about it a lot before agreeing. This was before David Miranda,
Greenwald's partner, was detained at Heathrow airport by the UK authorities;
but even without that, I knew there was a risk. I fly a lot -- a quarter of a
million miles per year -- and being put on a TSA list, or being detained at the
US border and having my electronics confiscated, would be a major problem. So
would the FBI breaking into my home and seizing my personal electronics. But in
the end, that made me more determined to do it.

I did spend some time on the phone with the attorneys recommended to me by the
ACLU and the EFF. And I talked about it with my partner, especially when
Miranda was detained three days before my departure. Both Greenwald and his
employer, the Guardian, are careful about whom they show the documents to. They
publish only those portions essential to getting the story out. It was important
to them that I be a co-author, not a source. I didn't follow the legal
reasoning, but the point is that the Guardian doesn't want to leak the
documents to random people. It will, however, write stories in the public
interest, and I would be allowed to review the documents as part of that
process. So after a Skype conversation with someone at the Guardian, I signed a
letter of engagement.

And then I flew to Brazil.

I saw only a tiny slice of the documents, and most of what I saw was
surprisingly banal. The concerns of the top-secret world are largely tactical:
system upgrades, operational problems owing to weather, delays because of work
backlogs, and so on. I paged through weekly reports, presentation slides from
status meetings, and general briefings to educate visitors. Management is
management, even inside the NSA. Reading the documents, I felt as though I were
sitting through some of those endless meetings.

The meeting presenters try to spice things up. Presentations regularly include
intelligence success stories. There were details -- what had been found, and
how, and where it helped -- and sometimes there were attaboys from
"customers" who used the intelligence. I'm sure these are intended to
remind NSA employees that they're doing good. It definitely had an effect on
me. Those were all things I want the NSA to be doing.

There were so many code names. Everything has one: every program, every piece of
equipment, every piece of software. Sometimes code names had their own code
names. The biggest secrets seem to be the underlying real-world information:
which particular company MONEYROCKET is; what software vulnerability
EGOTISTICALGIRAFFE -- really, I am not making that one up -- is; how TURBINE
works. Those secrets collectively have a code name -- ECI, for exceptionally
compartmented information -- and almost never appear in the documents. Chatting
with Snowden on an encrypted IM connection, I joked that the NSA cafeteria menu
probably has code names for menu items. His response: "Trust me when I say you
have no idea."

Those code names all come with logos, most of them amateurish and a lot of them
dumb. Note to the NSA: take some of that more than ten-billion-dollar annual
budget and hire yourself a design firm. Really; it'll pay off in morale.

Once in a while, though, I would see something that made me stop, stand up, and
pace around in circles. It wasn't that what I read was particularly exciting,
or important. It was just that it was startling. It changed -- ever so slightly
-- how I thought about the world.

Greenwald said that that reaction was normal when people started reading through
the documents.

Intelligence professionals talk about how disorienting it is living on the
inside. You read so much classified information about the world's geopolitical
events that you start seeing the world differently. You become convinced that
only the insiders know what's really going on, because the news media is so
often wrong. Your family is ignorant. Your friends are ignorant. The world is
ignorant. The only thing keeping you from ignorance is that constant stream of
classified knowledge. It's hard not to feel superior, not to say things like
"If you only knew what we know" all the time. I can understand how General
Keith Alexander, the director of the NSA, comes across as so supercilious; I
only saw a minute fraction of that secret world, and I started feeling it.

It turned out to be a terrible week to visit Greenwald, as he was still dealing
with the fallout from Miranda's detention. Two other journalists, one from the
Nation and the other from the Hindu, were also in town working with him. A lot
of my week involved Greenwald rushing into my hotel room, giving me a thumb
drive of new stuff to look through, and rushing out again.

A technician from the Guardian got a search capability working while I was
there, and I spent some time with it. Question: when you're given the
capability to search through a database of NSA secrets, what's the first thing
you look for? Answer: your name.

It wasn't there. Neither were any of the algorithm names I knew, not even
algorithms I knew that the US government used.

I tried to talk to Greenwald about his own operational security. It had been
incredibly stupid for Miranda to be traveling with NSA documents on the thumb
drive. Transferring files electronically is what encryption is for. I told
Greenwald that he and Laura Poitras should be sending large encrypted files of
dummy documents back and forth every day.

Once, at Greenwald's home, I walked into the backyard and looked for TEMPEST
receivers hiding in the trees. I didn't find any, but that doesn't mean they
weren't there. Greenwald has a lot of dogs, but I don't think that would
hinder professionals. I'm sure that a bunch of major governments have a
complete copy of everything Greenwald has. Maybe the black bag teams bumped into
each other in those early weeks.

I started doubting my own security procedures. Reading about the NSA's hacking
abilities will do that to you. Can it break the encryption on my hard drive?
Probably not. Has the company that makes my encryption software deliberately
weakened the implementation for it? Probably. Are NSA agents listening in on my
calls back to the US? Very probably. Could agents take control of my computer
over the Internet if they wanted to? Definitely. In the end, I decided to do my
best and stop worrying about it. It was the agency's documents, after all. And
what I was working on would become public in a few weeks.

I wasn't sleeping well, either. A lot of it was the sheer magnitude of what I
saw. It's not that any of it was a real surprise. Those of us in the
information security community had long assumed that the NSA was doing things
like this. But we never really sat down and figured out the details, and to have
the details confirmed made a big difference. Maybe I can make it clearer with an
analogy. Everyone knows that death is inevitable; there's absolutely no
surprise about that. Yet it arrives as a surprise, because we spend most of our
lives refusing to think about it. The NSA documents were a bit like that.
Knowing that it is surely true that the NSA is eavesdropping on the world, and
doing it in such a methodical and robust manner, is very different from coming
face-to-face with the reality that it is and the details of how it is doing it.

I also found it incredibly difficult to keep the secrets. The Guardian's
process is slow and methodical. I move much faster. I drafted stories based on
what I found. Then I wrote essays about those stories, and essays about the
essays. Writing was therapy; I would wake up in the wee hours of the morning,
and write an essay. But that put me at least three levels beyond what was
published.

Now that my involvement is out, and my first essays are out, I feel a lot
better. I'm sure it will get worse again when I find another monumental
revelation; there are still more documents to go through.

I've heard it said that Snowden wants to damage America. I can say with
certainty that he does not. So far, everyone involved in this incident has been
incredibly careful about what is released to the public. There are many
documents that could be immensely harmful to the US, and no one has any
intention of releasing them. The documents the reporters release are carefully
redacted. Greenwald and I repeatedly debated with Guardian editors the
newsworthiness of story ideas, stressing that we would not expose government
secrets simply because they're interesting.

The NSA got incredibly lucky; this could have ended with a massive public dump
like Chelsea Manning's State Department cables. I suppose it still could.
Despite that, I can imagine how this feels to the NSA. It's used to keeping
this stuff behind multiple levels of security: gates with alarms, armed guards,
safe doors, and military-grade cryptography. It's not supposed to be on a
bunch of thumb drives in Brazil, Germany, the UK, the US, and who knows where
else, protected largely by some random people's opinions about what should or
should not remain secret. This is easily the greatest intelligence failure in
the history of ever. It's amazing that one person could have had so much
access with so little accountability, and could sneak all of this data out
without raising any alarms. The odds are close to zero that Snowden is the first
person to do this; he's just the first person to make public that he did.
It's a testament to General Alexander's power that he hasn't been forced
to resign.

It's not that we weren't being careful about security, it's that our
standards of care are so different. From the NSA's point of view, we're all
major security risks, myself included. I was taking notes about classified
material, crumpling them up, and throwing them into the wastebasket. I was
printing documents marked "TOP SECRET/COMINT/NOFORN" in a hotel lobby. And
once, I took the wrong thumb drive with me to dinner, accidentally leaving the
unencrypted one filled with top-secret documents in my hotel room. It was an
honest mistake; they were both blue.

If I were an NSA employee, the policy would be to fire me for that alone.

Many have written about how being under constant surveillance changes a person.
When you know you're being watched, you censor yourself. You become less open,
less spontaneous. You look at what you write on your computer and dwell on what
you've said on the telephone, wonder how it would sound taken out of context,
from the perspective of a hypothetical observer. YouΓÇÖre more likely to
conform. You suppress your individuality. Even though I have worked in privacy
for decades, and already knew a lot about the NSA and what it does, the change
was palpable. That feeling hasn't faded. I am now more careful about what I
say and write. I am less trusting of communications technology. I am less
trusting of the computer industry.

After much discussion, Greenwald and I agreed to write three stories together to
start. All of those are still in progress. In addition, I wrote two commentaries
on the Snowden documents that were recently made public. There's a lot more to
come; even Greenwald hasn't looked through everything.

Since my trip to Brazil [one month before], I've flown back to the US once and
domestically seven times -- all without incident. I'm not on any list yet. At
least, none that I know about.

As it happened, I didn't write much more with Greenwald or the Guardian. Those
two had a falling out, and by the time everything settled and both began writing
about the documents independently -- Greenwald at the newly formed website the
Intercept -- I got cut out of the process somehow. I remember hearing that
Greenwald was annoyed with me, but I never learned the reason. We haven't
spoken since.

Still, I was happy with the one story I was part of: how the NSA hacks Tor. I
consider it a personal success that I pushed the Guardian to publish NSA
documents detailing QUANTUM. I don't think that would have gotten out any
other way. And I still use those pages today when I teach cybersecurity to
policymakers at the Harvard Kennedy School.

Other people wrote about the Snowden files, and wrote a lot. It was a slow
trickle at first, and then a more consistent flow. Between Greenwald, Bart
Gellman, and the Guardian reporters, there ended up being a steady stream of news.
(Bart brought in Ashkan Soltani to help him with the technical aspects, which
was a great move on his part, even if it cost Ashkan a government job later.)
More stories were covered by other publications.

It started getting weird. Both Greenwald and Gellman held documents back so they
could publish them in their books. Jake Appelbaum, who had not yet been accused
of sexual assault by multiple women, was working with Laura Poitras. He
partnered with Spiegel to release an implant catalog from the NSA's Tailored
Access Operations group. To this day, I am convinced that that document was not
in the Snowden archives: that Jake got it somehow, and it was released with the
implication that it was from Edward Snowden. I thought it was important enough
that I started writing about each item in that document in my blog: "NSA
Exploit of the Week." That got my website blocked by the DoD: I keep a framed
print of the censor's message on my wall.

Perhaps the most surreal document disclosures were when artists started writing
fiction based on the documents. This was in 2016, when Poitras built a secure
room in New York to house the documents. By then, the documents were years out
of date. And now they're over a decade out of date. (They were leaked in 2013,
but most of them were from 2012 or before.)

I ended up being something of a public ambassador for the documents. When I got
back from Rio, I gave talks at a private conference in Woods Hole, the Berkman
Center at Harvard, something called the Congress and Privacy and Surveillance in
Geneva, events at both CATO and New America in DC, an event at the University of
Pennsylvania, an event at EPIC and a "Stop Watching Us" rally in DC, the
RISCS conference in London, the ISF in Paris, and...then...at the IETF meeting
in Vancouver in November 2013. (I remember little of this; I am
reconstructing it all from my calendar.)

What struck me at the IETF was the indignation in the room, and the calls to
action. And there was action, across many fronts. We technologists did a lot to
help secure the Internet, for example.

The government didn't do its part, though. Despite the public outcry,
investigations by Congress, pronouncements by President Obama, and federal court
rulings, I don't think much has changed. The NSA canceled a program here and a
program there, and it is now more public about defense. But I don't think it
is any less aggressive about either bulk or targeted surveillance. Certainly its
government authorities haven't been restricted in any way. And surveillance
capitalism is still the business model of the Internet.

And Edward Snowden? We were in contact for a while on Signal. I visited him once
in Moscow, in 2016. And I had him do a guest lecture to my class at Harvard for
a few years, remotely by Jitsi. Afterwards, I would hold a session where I
promised to answer every question he would evade or not answer, explain every
response he did give, and be candid in a way that someone with an outstanding
arrest warrant simply cannot. Sometimes I thought I could channel Snowden better
than he could.

But now it's been a decade. Everything he knows is old and out of date.
Everything we know is old and out of date. The NSA suffered an even worse leak
of its secrets by the Russians, under the guise of the Shadow Brokers, in 2016
and 2017. The NSA has rebuilt. It again has capabilities we can only surmise.

This essay previously appeared in an IETF publication, as part of an Edward
Snowden ten-year retrospective.

EDITED TO ADD (6/7): Conversation between Snowden, Greenwald, and Poitras.

** *** ***** ******* *********** *************

How Attorneys Are Harming Cybersecurity Incident Response

[2023.06.07] New paper: "Lessons Lost: Incident Response in the Age of Cyber
Insurance and Breach Attorneys":

Abstract: Incident Response (IR) allows victim firms to detect, contain, and
recover from security incidents. It should also help the wider community avoid
similar attacks in the future. In pursuit of these goals, technical
practitioners are increasingly influenced by stakeholders like cyber insurers
and lawyers. This paper explores these impacts via a multi-stage, mixed methods
research design that involved 69 expert interviews, data on commercial
relationships, and an online validation workshop. The first stage of our study
established 11 stylized facts that describe how cyber insurance sends work to a
small number of IR firms, drives down the fee paid, and appoints lawyers to
direct technical investigators. The second stage showed that lawyers when
directing incident response often: introduce legalistic contractual and
communication steps that slow down incident response; advise IR practitioners
not to write down remediation steps or to produce formal reports; and restrict
access to any documents produced.

So, we're not able to learn from these breaches because the attorneys are
limiting what information becomes public. This is where we think about shielding
companies from liability in exchange for making breach data public. It's the
sort of thing we do for airplane disasters.

EDITED TO ADD (6/13): A podcast interview with two of the authors.

** *** ***** ******* *********** *************

Paragon Solutions Spyware: Graphite

[2023.06.08] Paragon Solutions is yet another Israeli spyware company. Their
product is called "Graphite," and is a lot like NSO Group's Pegasus. And
Paragon is working with what seems to be US approval:

American approval, even if indirect, has been at the heart of Paragon's
strategy. The company sought a list of allied nations that the US wouldn't
object to seeing deploy Graphite. People with knowledge of the matter suggested
35 countries are on that list, though the exact nations involved could not be
determined. Most were in the EU and some in Asia, the people said.

Remember when NSO Group was banned in the US a year and a half ago? The Drug
Enforcement Agency uses Graphite.

We're never going to reduce the power of these cyberweapons arms merchants by
going after them one by one. We need to deal with the whole industry. And
we're not going to do it as long as the democracies of the world use their
products as well.

** *** ***** ******* *********** *************

Operation Triangulation: Zero-Click iPhone Malware

[2023.06.09] Kaspersky is reporting a zero-click iOS exploit in the wild:

Mobile device backups contain a partial copy of the filesystem, including some
of the user data and service databases. The timestamps of the files, folders and
the database records allow to roughly reconstruct the events happening to the
device. The mvt-ios utility produces a sorted timeline of events into a file
called "timeline.csv," similar to a super-timeline used by conventional
digital forensic tools.

Using this timeline, we were able to identify specific artifacts that indicate
the compromise. This allowed to move the research forward, and to reconstruct
the general infection sequence:

- The target iOS device receives a message via the iMessage service, with an
  attachment containing an exploit.
- Without any user interaction, the message triggers a vulnerability that leads
  to code execution.
- The code within the exploit downloads several subsequent stages from the C&C
  server, that include additional exploits for privilege escalation.
- After successful exploitation, a final payload is downloaded from the C&C
  server, that is a fully-featured APT platform.
- The initial message and the exploit in the attachment is deleted.

The malicious toolset does not support persistence, most likely due to the
limitations of the OS. The timelines of multiple devices indicate that they may
be reinfected after rebooting. The oldest traces of infection that we discovered
happened in 2019. As of the time of writing in June 2023, the attack is ongoing,
and the most recent version of the devices successfully targeted is iOS 15.7.
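
For readers who want to try this kind of triage themselves, the timeline.csv
described above lends itself to simple filtering. A minimal sketch; the column
names ("UTC Timestamp", "Description") and search terms are assumptions about
the CSV layout, not taken from Kaspersky's write-up:

    # Load an mvt-ios timeline and pull events around a suspected infection window.
    import pandas as pd

    timeline = pd.read_csv("timeline.csv", parse_dates=["UTC Timestamp"])

    window_start = pd.Timestamp("2023-01-10 00:00:00")
    window_end = pd.Timestamp("2023-01-10 23:59:59")

    suspect = timeline[
        timeline["UTC Timestamp"].between(window_start, window_end)
        & timeline["Description"].str.contains(
            "attachment|BackupAgent|datausage", case=False, na=False)
    ]
    print(suspect.sort_values("UTC Timestamp").to_string(index=False))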

No attribution as of yet.

** *** ***** ******* *********** *************

AI-Generated Steganography

[2023.06.12] New research suggests that AIs can produce perfectly secure
steganographic images:

Abstract: Steganography is the practice of encoding secret information into
innocuous content in such a manner that an adversarial third party would not
realize that there is hidden meaning. While this problem has classically been
studied in security literature, recent advances in generative models have led to
a shared interest among security and machine learning researchers in developing
scalable steganography techniques. In this work, we show that a steganography
procedure is perfectly secure under Cachin (1998)'s information-theoretic
model of steganography if and only if it is induced by a coupling.
Furthermore, we show that, among perfectly secure procedures, a procedure is
maximally efficient if and only if it is induced by a minimum entropy coupling.
These insights yield what are, to the best of our knowledge, the first
steganography algorithms to achieve perfect security guarantees with non-trivial
efficiency; additionally, these algorithms are highly scalable. To provide
empirical validation, we compare a minimum entropy coupling-based approach to
three modern baselines --
arithmetic coding, Meteor, and adaptive dynamic grouping -- using GPT-2,
WaveRNN, and Image Transformer as communication channels. We find that the
minimum entropy coupling-based approach achieves superior encoding efficiency,
despite its stronger security constraints. In aggregate, these results suggest
that it may be natural to view information-theoretic steganography through the
lens of minimum entropy coupling.
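
For reference, Cachin's security criterion mentioned in the abstract compares
the covertext distribution with the stegotext distribution using relative
entropy. Sketched in LaTeX, with P_C and P_S as my notation for those two
distributions (not symbols from the paper):

    \[
      D(P_C \,\|\, P_S) \;=\; \sum_{x} P_C(x)\,\log\frac{P_C(x)}{P_S(x)},
      \qquad
      \text{perfect security} \iff D(P_C \,\|\, P_S) = 0 \iff P_S = P_C .
    \]

The paper's claim is that procedures meeting this condition are exactly those
induced by a coupling of the covertext and (encrypted) message distributions,
and that the most efficient of them come from a minimum entropy coupling.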

News article.

EDITED TO ADD (6/13): Comments.

** *** ***** ******* *********** *************

Identifying the Idaho Killer

[2023.06.13] The New York Times has a long article on the investigative
techniques used to identify the person who stabbed and killed four University of
Idaho students.

Pay attention to the techniques:

The case has shown the degree to which law enforcement investigators have come
to rely on the digital footprints that ordinary Americans leave in nearly every
facet of their lives. Online shopping, car sales, carrying a cellphone, drives
along city streets and amateur genealogy all played roles in an investigation
that was solved, in the end, as much through technology as traditional
sleuthing.

[...]

At that point, investigators decided to try genetic genealogy, a method that
until now has been used primarily to solve cold cases, not active murder
investigations. Among the growing number of genealogy websites that help people
trace their ancestors and relatives via their own DNA, some allow users to
select an option that permits law enforcement to compare crime scene DNA samples
against the websites' data.

A distant cousin who has opted into the system can help investigators building a
family tree from crime scene DNA to triangulate and identify a potential
perpetrator of a crime.

[...]

On Dec. 23, investigators sought and received Mr. Kohberger's cellphone
records. The results added more to their suspicions: His phone was moving around
in the early morning hours of Nov. 13, but was disconnected from cell networks
-- perhaps turned off -- in the two hours around when the killings occurred.
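
The underlying analysis is just gap-finding in a timestamp series. A toy
sketch of that reasoning, with invented timestamps and an arbitrary threshold,
not data from the case:

    # Given cell-network ping timestamps for one device, flag any gap long enough
    # to suggest the phone was off or disconnected.
    from datetime import datetime, timedelta

    pings = [
        datetime(2022, 11, 13, 2, 42),
        datetime(2022, 11, 13, 2, 47),
        datetime(2022, 11, 13, 4, 48),   # next contact roughly two hours later
        datetime(2022, 11, 13, 4, 55),
    ]

    GAP_THRESHOLD = timedelta(minutes=60)

    for earlier, later in zip(pings, pings[1:]):
        gap = later - earlier
        if gap > GAP_THRESHOLD:
            print(f"Device dark from {earlier} to {later} ({gap}); "
                  "consistent with being powered off or disconnected.")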

** *** ***** ******* *********** *************

On the Need for an AI Public Option

[2023.06.14] Artificial intelligence will bring great benefits to all of
humanity. But do we really want to entrust this revolutionary technology solely
to a small group of US tech companies?

Silicon Valley has produced no small number of moral disappointments. Google
retired its "don't be evil" pledge before firing its star ethicist.
Self-proclaimed "free speech absolutist" Elon Musk bought Twitter in order
to censor political speech, retaliate against journalists, and ease access to
the platform for Russian and Chinese propagandists. Facebook lied about how it
enabled Russian interference in the 2016 US presidential election and paid a
public relations firm to blame Google and George Soros instead.

These and countless other ethical lapses should prompt us to consider whether we
want to give technology companies further abilities to learn our personal
details and influence our day-to-day decisions. Tech companies can already
access our daily whereabouts and search queries. Digital devices monitor more
and more aspects of our lives: We have cameras in our homes and heartbeat
sensors on our wrists sending what they detect to Silicon Valley.

Now, tech giants are developing ever more powerful AI systems that don't
merely monitor you; they actually interact with you -- and with others on your
behalf. If searching on Google in the 2010s was like being watched on a security
camera, then using AI in the late 2020s will be like having a butler. You will
willingly include them in every conversation you have, everything you write,
every item you shop for, every want, every fear, everything. It will never
forget. And, despite your reliance on it, it will be surreptitiously working to
further the interests of one of these for-profit corporations.

There's a reason Google, Microsoft, Facebook, and other large tech companies
are leading the AI revolution: Building a competitive large language model (LLM)
like the one powering ChatGPT is incredibly expensive. It requires upward of
$100 million in computational costs for a single model training run, in addition
to access to large amounts of data. It also requires technical expertise, which,
while increasingly open and available, remains heavily concentrated in a small
handful of companies. Efforts to disrupt the AI oligopoly by funding start-ups
are self-defeating as Big Tech profits from the cloud computing services and AI
models powering those start-ups -- and often ends up acquiring the start-ups
themselves.

Yet corporations aren't the only entities large enough to absorb the cost of
large-scale model training. Governments can do it, too. It's time to start
taking AI development out of the exclusive hands of private companies and
bringing it into the public sector. The United States needs a
government-funded-and-directed AI program to develop widely reusable models in
the public interest, guided by technical expertise housed in federal agencies.

So far, the AI regulation debate in Washington has focused on the governance of
private-sector activity -- which the US Congress is in no hurry to advance.
Congress should not only hurry up and push AI regulation forward but also go one
step further and develop its own programs for AI. Legislators should reframe the
AI debate from one about public regulation to one about public development.

The AI development program could be responsive to public input and subject to
political oversight. It could be directed to respond to critical issues such as
privacy protection, underpaid tech workers, AI's horrendous carbon emissions,
and the exploitation of unlicensed data. Compared to keeping AI in the hands of
morally dubious tech companies, the public alternative is better both ethically
and economically. And the switch should take place soon: By the time AI becomes
critical infrastructure, essential to large swaths of economic activity and
daily life, it will be too late to get started.

Other countries are already there. China has heavily prioritized public
investment in AI research and development by betting on a handpicked set of
giant companies that are ostensibly private but widely understood to be an
extension of the state. The government has tasked Alibaba, Huawei, and others
with creating products that support the larger ecosystem of state surveillance
and authoritarianism.

The European Union is also aggressively pushing AI development. The European
Commission already invests 1 billion euros per year in AI, with a plan to
increase that figure to 20 billion euros annually by 2030. The money goes to a
continent-wide network of public research labs, universities, and private
companies jointly working on various parts of AI. The Europeans' focus is on
knowledge transfer, developing the technology sector, use of AI in public
administration, mitigating safety risks, and preserving fundamental rights. The
EU also continues to be at the cutting edge of aggressively regulating both data
and AI.

Neither the Chinese nor the European model is necessarily right for the United
States. State control of private enterprise remains anathema in American
political culture and would struggle to gain mainstream traction. The tech
companies -- and their supporters in both US political parties -- are opposed to
robust public governance of AI. But Washington can take inspiration from China
and Europe's long-range planning and leadership on regulation and public
investment. With boosters pointing to hundreds of trillions of dollars of global
economic value associated with AI, the stakes of international competition are
compelling. As in energy and medical research, which have their own federal
agencies in the Department of Energy and the National Institutes of Health,
respectively, there is a place for AI research and development inside
government.

Besides the moral argument against letting private companies develop AI,
there's a strong economic argument in favor of a public option as well. A
publicly funded LLM could serve as an open platform for innovation, helping any
small business, nonprofit, or individual entrepreneur to build AI-assisted
applications.

There's also a practical argument. Building AI is within public reach because
governments don't need to own and operate the entire AI supply chain. Chip and
computer production, cloud data centers, and various value-added applications --
such as those that integrate AI with consumer electronics devices or
entertainment software -- do not need to be publicly controlled or funded.

One reason to be skeptical of public funding for AI is that it might result in
lower quality and slower innovation, given greater ethical scrutiny, political
constraints, and fewer incentives due to a lack of market competition. But even
if that is the case, it would be worth broader access to the most important
technology of the 21st century. And it is by no means certain that public AI has
to be at a disadvantage. The open-source community is proof that it's not
always private companies that are the most innovative.

Those who worry about the quality trade-off might suggest a public buyer model,
whereby Washington licenses or buys private language models from Big Tech
instead of developing them itself. But that doesn't go far enough to ensure
that the tools are aligned with public priorities and responsive to public
needs. It would not give the public detailed insight into or control of the
inner workings and training procedures for these models, and it would still
require strict and complex regulation.

There is political will to take action to develop AI via public, rather than
private, funds -- but this does not yet equate to the will to create a fully
public AI development agency. A task force created by Congress recommended in
January a $2.6 billion federal investment in computing and data resources to
prime the AI research ecosystem in the United States. But this investment would
largely serve to advance the interests of Big Tech, leaving the opportunity for
public ownership and oversight unaddressed.

Nonprofit and academic organizations have already created open-access LLMs.
While these should be celebrated, they are not a substitute for a public option.
Nonprofit projects are still beholden to private interests, even if they are
benevolent ones. These private interests can change without public input, as
when OpenAI effectively abandoned its nonprofit origins, and we can't be sure
that their founding intentions or operations will survive market pressures,
fickle donors, and changes in leadership.

The US government is by no means a perfect beacon of transparency, a secure and
responsible store of our data, or a genuine reflection of the public's
interests. But the risks of placing AI development entirely in the hands of
demonstrably untrustworthy Silicon Valley companies are too high. AI will impact
the public like few other technologies, so it should also be developed by the
public.

This essay was written with Nathan Sanders, and appeared in Foreign Policy.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A Hacker's Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2023 by Bruce Schneier.

** *** ***** ******* *********** *************
