AT2k Design BBS Message Area
Casually read the BBS message area using an easy to use interface. Messages are categorized exactly like they are on the BBS. You may post new messages or reply to existing messages!

You are not logged in. Login here for full access privileges.

Previous Message | Next Message | Back to Computer Support/Help/Discussion...  <--  <--- Return to Home Page
   Networked Database  Computer Support/Help/Discussion...   [1334 / 1575] RSS
 From   To   Subject   Date/Time 
Message   Sean Rima    All   CRYPTO-GRAM, February 15, 2024   April 15, 2024
 12:02 PM *  

Crypto-Gram, February 15, 2024

A monthly newsletter about cybersecurity and related topics.

Crypto-Gram 
February 15, 2024

by Bruce Schneier 
Fellow and Lecturer, Harvard Kennedy School 
schneier@schneier.com 
https://www.schneier.com 

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

Voice Cloning with Very Short Samples

The Story of the Mirai Botnet

Code Written with AI Assistants Is Less Secure

Canadian Citizen Gets Phone Back from Police

Speaking to the CIAΓÇÖs Creative Writing Group

Zelle Is Using My Name and Voice without My Consent

AI Bots on X (Twitter)

Side Channels Are Common

Poisoning AI Models

Quantum Computing Skeptics

Chatbots and Human Conversation

Microsoft Executives Hacked

NSA Buying Bulk Surveillance Data on Americans without a Warrant

New Images of Colossus Released

CFPBΓÇÖs Proposed Data Rules

FacebookΓÇÖs Extensive Surveillance Network

A Self-Enforcing Protocol to Solve Gerrymandering

David Kahn

Deepfake Fraud

Documents about the NSAΓÇÖs Banning of Furby Toys in the 1990s

Teaching LLMs to Be Deceptive

On Software Liabilities

No, Toothbrushes Were Not Used in a Massive DDoS Attack

On Passkey Usability

Molly White Reviews Blockchain Book

A HackerΓÇÖs Mind is Out in Paperback

Improving the Cryptanalysis of Lattice-Based Public-Key Algorithms

Upcoming Speaking Engagements

** *** ***** ******* *********** *************

Voice Cloning with Very Short Samples

[2024.01.15] New research demonstrates voice cloning, in multiple languages,
using samples ranging from one to twelve seconds.

Research paper.

** *** ***** ******* *********** *************

The Story of the Mirai Botnet

[2024.01.16] Over at Wired, Andy Greenberg has an excellent story about the
creators of the 2016 Mirai botnet.

EDITED TO ADD: The Internet Archive has a non-paywalled copy.

** *** ***** ******* *********** *************

Code Written with AI Assistants Is Less Secure

[2024.01.17] Interesting research: ΓÇ£Do Users Write More Insecure Code with AI
Assistants?ΓÇ£:

Abstract: We conduct the first large-scale user study examining how users
interact with an AI Code assistant to solve a variety of security related tasks
across different programming languages. Overall, we find that participants who
had access to an AI assistant based on OpenAIΓÇÖs codex-davinci-002 model wrote
significantly less secure code than those without access. Additionally,
participants with access to an AI assistant were more likely to believe they
wrote secure code than those without access to the AI assistant. Furthermore, we
find that participants who trusted the AI less and engaged more with the
language and format of their prompts (e.g. re-phrasing, adjusting temperature)
provided code with fewer security vulnerabilities. Finally, in order to better
inform the design of future AI-based Code assistants, we provide an in-depth
analysis of participantsΓÇÖ language and interaction behavior, as well as
release our user interface as an instrument to conduct similar studies in the
future.

At least, thatΓÇÖs true today, with todayΓÇÖs programmers using todayΓÇÖs AI
assistants. We have no idea what will be true in a few months, let alone a few
years.

** *** ***** ******* *********** *************

Canadian Citizen Gets Phone Back from Police

[2024.01.18] After 175 million failed password guesses, a judge rules that the
Canadian police must return a suspectΓÇÖs phone.

[Judge] Carter said the investigation can continue without the phones, and he
noted that Ottawa police have made a formal request to obtain more data from
Google.

ΓÇ£This strikes me as a potentially more fruitful avenue of investigation than
using brute force to enter the phones,ΓÇ¥ he said.

** *** ***** ******* *********** *************

Speaking to the CIAΓÇÖs Creative Writing Group

[2024.01.19] This is a fascinating story.

Last spring, a friend of a friend visited my office and invited me to Langley to
speak to Invisible Ink, the CIAΓÇÖs creative writing group.

I asked Vivian (not her real name) what she wanted me to talk about.

She said that the topic of the talk was entirely up to me.

I asked what level the writers in the group were.

She said the group had writers of all levels.

I asked what the speaking fee was.

She said that as far as she knew, there was no speaking fee.

What I want to know is, why havenΓÇÖt I been invited? There are nonfiction
writers in that group.

** *** ***** ******* *********** *************

Zelle Is Using My Name and Voice without My Consent

[2024.01.19] Okay, so this is weird. Zelle has been using my name, and my voice,
in audio podcast ads -- without my permission. At least, I think it is without
my permission. ItΓÇÖs possible that I gave some sort of blanket permission when
speaking at an event. ItΓÇÖs not likely, but it is possible.

I wrote to Zelle about it. Or, at least, I wrote to a company called Early
Warning that owns Zelle about it. They asked me where the ads appeared. This
seems odd to me. Podcast distribution networks drop ads in podcasts depending on
the listener -- like personalized ads on webpages -- so the actual podcast
doesnΓÇÖt matter. And shouldnΓÇÖt they know their own ads? Annoyingly, it seems
like itΓÇÖs time to get attorneys involved.

What would help is to have a copy of the actual ad. (Or ads, IΓÇÖm assuming
thereΓÇÖs only one.) So, has anyone else heard me in a Zelle ad? Does anyone
happen to have an audio recording? Please email me.

And I will update this post if I learn anything more. Or if there is some actual
legal action. (And if this post ever disappears, youΓÇÖll know I was required to
take it down for some reason.)

** *** ***** ******* *********** *************

AI Bots on X (Twitter)

[2024.01.22] You can find them by searching for OpenAI chatbot warning messages,
like: ΓÇ£IΓÇÖm sorry, I cannot provide a response as it goes against OpenAIΓÇÖs
use case policy.ΓÇ¥

I hadnΓÇÖt thought about this before: identifying bots by searching for
distinctive bot phrases.

** *** ***** ******* *********** *************

Side Channels Are Common

[2024.01.23] Really interesting research: ΓÇ£Lend Me Your Ear: Passive Remote
Physical Side Channels on PCs.ΓÇ¥

Abstract:

We show that built-in sensors in commodity PCs, such as microphones,
inadvertently capture electromagnetic side-channel leakage from ongoing
computation. Moreover, this information is often conveyed by supposedly-benign
channels such as audio recordings and common Voice-over-IP applications, even
after lossy compression.

Thus, we show, it is possible to conduct physical side-channel attacks on
computation by remote and purely passive analysis of commonly-shared channels.
These attacks require neither physical proximity (which could be mitigated by
distance and shielding), nor the ability to run code on the target or configure
its hardware. Consequently, we argue, physical side channels on PCs can no
longer be excluded from remote-attack threat models.

We analyze the computation-dependent leakage captured by internal microphones,
and empirically demonstrate its efficacy for attacks. In one scenario, an
attacker steals the secret ECDSA signing keys of the counterparty in a voice
call. In another, the attacker detects what web page their counterparty is
loading. In the third scenario, a player in the Counter-Strike online
multiplayer game can detect a hidden opponent waiting in ambush, by analyzing
how the 3D rendering done by the opponentΓÇÖs computer induces faint but
detectable signals into the opponentΓÇÖs audio feed.

** *** ***** ******* *********** *************

Poisoning AI Models

[2024.01.24] New research into poisoning AI models:

The researchers first trained the AI models using supervised learning and then
used additional ΓÇ£safety trainingΓÇ¥ methods, including more supervised
learning, reinforcement learning, and adversarial training. After this, they
checked if the AI still had hidden behaviors. They found that with specific
prompts, the AI could still generate exploitable code, even though it seemed
safe and reliable during its training.

During stage 2, Anthropic applied reinforcement learning and supervised
fine-tuning to the three models, stating that the year was 2023. The result is
that when the prompt indicated ΓÇ£2023,ΓÇ¥ the model wrote secure code. But when
the input prompt indicated ΓÇ£2024,ΓÇ¥ the model inserted vulnerabilities into
its code. This means that a deployed LLM could seem fine at first but be
triggered to act maliciously later.

Research paper:

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Abstract: Humans are capable of strategically deceptive behavior: behaving
helpfully in most situations, but then behaving very differently in order to
pursue alternative objectives when given the opportunity. If an AI system
learned such a deceptive strategy, could we detect it and remove it using
current state-of-the-art safety training techniques? To study this question, we
construct proof-of-concept examples of deceptive behavior in large language
models (LLMs). For example, we train models that write secure code when the
prompt states that the year is 2023, but insert exploitable code when the stated
year is 2024. We find that such backdoor behavior can be made persistent, so
that it is not removed by standard safety training techniques, including
supervised fine-tuning, reinforcement learning, and adversarial training
(eliciting unsafe behavior and then training to remove it). The backdoor
behavior is most persistent in the largest models and in models trained to
produce chain-of-thought reasoning about deceiving the training process, with
the persistence remaining even when the chain-of-thought is distilled away.
Furthermore, rather than removing backdoors, we find that adversarial training
can teach models to better recognize their backdoor triggers, effectively hiding
the unsafe behavior. Our results suggest that, once a model exhibits deceptive
behavior, standard techniques could fail to remove such deception and create a
false impression of safety.

** *** ***** ******* *********** *************

Quantum Computing Skeptics

[2024.01.25] Interesting article. I am also skeptical that we are going to see
useful quantum computers anytime soon. Since at least 2019, I have been saying
that this is hard. And that we donΓÇÖt know if itΓÇÖs ΓÇ£land a person on the
surface of the moonΓÇ¥ hard, or ΓÇ£land a person on the surface of the sunΓÇ¥
hard. TheyΓÇÖre both hard, but very different.

** *** ***** ******* *********** *************

Chatbots and Human Conversation

[2024.01.26] For most of history, communicating with a computer has not been
like communicating with a person. In their earliest years, computers required
carefully constructed instructions, delivered through punch cards; then came a
command-line interface, followed by menus and options and text boxes. If you
wanted results, you needed to learn the computerΓÇÖs language.

This is beginning to change. Large language models -- the technology
undergirding modern chatbots -- allow users to interact with computers through
natural conversation, an innovation that introduces some baggage from
human-to-human exchanges. Early on in our respective explorations of ChatGPT,
the two of us found ourselves typing a word that weΓÇÖd never said to a computer
before: ΓÇ£Please.ΓÇ¥ The syntax of civility has crept into nearly every aspect
of our encounters; we speak to this algebraic assemblage as if it were a person
-- even when we know that itΓÇÖs not.

Right now, this sort of interaction is a novelty. But as chatbots become a
ubiquitous element of modern life and permeate many of our human-computer
interactions, they have the potential to subtly reshape how we think about both
computers and our fellow human beings.

One direction that these chatbots may lead us in is toward a society where we
ascribe humanity to AI systems, whether abstract chatbots or more physical
robots. Just as we are biologically primed to see faces in objects, we imagine
intelligence in anything that can hold a conversation. (This isnΓÇÖt new: People
projected intelligence and empathy onto the very primitive 1960s chatbot,
Eliza.) We say ΓÇ£pleaseΓÇ¥ to LLMs because it feels wrong not to.

Chatbots are growing only more common, and there is reason to believe they will
become ever more intimate parts of our lives. The market for AI companions,
ranging from friends to romantic partners, is already crowded. Several companies
are working on AI assistants, akin to secretaries or butlers, that will
anticipate and satisfy our needs. And other companies are working on AI
therapists, mediators, and life coaches -- even simulacra of our dead relatives.
More generally, chatbots will likely become the interface through which we
interact with all sorts of computerized processes -- an AI that responds to our
style of language, every nuance of emotion, even tone of voice.

Many users will be primed to think of these AIs as friends, rather than the
corporate-created systems that they are. The internet already spies on us
through systems such as MetaΓÇÖs advertising network, and LLMs will likely join
in: OpenAIΓÇÖs privacy policy, for example, already outlines the many different
types of personal information the company collects. The difference is that the
chatbotsΓÇÖ natural-language interface will make them feel more humanlike --
reinforced with every politeness on both sides -- and we could easily
miscategorize them in our minds.

Major chatbots do not yet alter how they communicate with users to satisfy their
parent companyΓÇÖs business interests, but market pressure might push things in
that direction. Reached for comment about this, a spokesperson for OpenAI
pointed to a section of the privacy policy noting that the company does not
currently sell or share personal information for ΓÇ£cross-contextual behavioral
advertising,ΓÇ¥ and that the company does not ΓÇ£process sensitive Personal
Information for the purposes of inferring characteristics about a consumer.ΓÇ¥
In an interview with Axios earlier today, OpenAI CEO Sam Altman said future
generations of AI may involve ΓÇ£quite a lot of individual customization,ΓÇ¥ and
ΓÇ£thatΓÇÖs going to make a lot of people uncomfortable.ΓÇ¥

Other computing technologies have been shown to shape our cognition. Studies
indicate that autocomplete on websites and in word processors can dramatically
reorganize our writing. Generally, these recommendations result in blander, more
predictable prose. And where autocomplete systems give biased prompts, they
result in biased writing. In one benign experiment, positive autocomplete
suggestions led to more positive restaurant reviews, and negative autocomplete
suggestions led to the reverse. The effects could go far beyond tweaking our
writing styles to affecting our mental health, just as with the potentially
depression- and anxiety-inducing social-media platforms of today.

The other direction these chatbots may take us is even more disturbing: into a
world where our conversations with them result in our treating our fellow human
beings with the apathy, disrespect, and incivility we more typically show
machines.

TodayΓÇÖs chatbots perform best when instructed with a level of precision that
would be appallingly rude in human conversation, stripped of any conversational
pleasantries that the model could misinterpret: ΓÇ£Draft a 250-word paragraph in
my typical writing style, detailing three examples to support the following
point and cite your sources.ΓÇ¥ Not even the most detached corporate CEO would
likely talk this way to their assistant, but itΓÇÖs common with chatbots.

If chatbots truly become the dominant daily conversation partner for some
people, there is an acute risk that these users will adopt a lexicon of AI
commands even when talking to other humans. Rather than speaking with empathy,
subtlety, and nuance, weΓÇÖll be trained to speak with the cold precision of a
programmer talking to a computer. The colorful aphorisms and anecdotes that give
conversations their inherently human quality, but that often confound large
language models, could begin to vanish from the human discourse.

For precedent, one need only look at the ways that bot accounts already degrade
digital discourse on social media, inflaming passions with crudely programmed
responses to deeply emotional topics; they arguably played a role in sowing
discord and polarizing voters in the 2016 election. But AI companions are likely
to be a far larger part of some usersΓÇÖ social circle than the bots of today,
potentially having a much larger impact on how those people use language and
navigate relationships. What is unclear is whether this will negatively affect
one user in a billion or a large portion of them.

Such a shift is unlikely to transform human conversations into cartoonishly
robotic recitations overnight, but it could subtly and meaningfully reshape
colloquial conversation over the course of years, just as the character limits
of text messages affected so much of colloquial writing, turning terms such as
LOL, IMO, and TMI into everyday vernacular.

AI chatbots are always there when you need them to be, for whatever you need
them for. People arenΓÇÖt like that. Imagine a future filled with people who
have spent years conversing with their AI friends or romantic partners. Like a
person whose only sexual experiences have been mediated by pornography or
erotica, they could have unrealistic expectations of human partners. And the
more ubiquitous and lifelike the chatbots become, the greater the impact could
be.

More generally, AI might accelerate the disintegration of institutional and
social trust. Technologies such as Facebook were supposed to bring the world
together, but in the intervening years, the public has become more and more
suspicious of the people around them and less trusting of civic institutions. AI
may drive people further toward isolation and suspicion, always unsure whether
the person theyΓÇÖre chatting with is actually a machine, and treating them as
inhuman regardless.

Of course, history is replete with people claiming that the digital sky is
falling, bemoaning each new invention as the end of civilization as we know it.
In the end, LLMs may be little more than the word processor of tomorrow, a handy
innovation that makes things a little easier while leaving most of our lives
untouched. Which path we take depends on how we train the chatbots of tomorrow,
but it also depends on whether we invest in strengthening the bonds of civil
society today.

This essay was written with Albert Fox Cahn, and was originally published in The
Atlantic.

** *** ***** ******* *********** *************

Microsoft Executives Hacked

[2024.01.29] Microsoft is reporting that a Russian intelligence agency -- the
same one responsible for the SolarWinds hack -- accessed the email system of the
companyΓÇÖs executives.

Beginning in late November 2023, the threat actor used a password spray attack
to compromise a legacy non-production test tenant account and gain a foothold,
and then used the accountΓÇÖs permissions to access a very small percentage of
Microsoft corporate email accounts, including members of our senior leadership
team and employees in our cybersecurity, legal, and other functions, and
exfiltrated some emails and attached documents. The investigation indicates they
were initially targeting email accounts for information related to Midnight
Blizzard itself.

This is nutty. How does a ΓÇ£legacy non-production test tenant accountΓÇ¥ have
access to executive emails? And why no two-factor authentication?

** *** ***** ******* *********** *************

NSA Buying Bulk Surveillance Data on Americans without a Warrant

[2024.01.30] The NSA finally admitted to buying bulk data on Americans from data
brokers, in response to a query by Senator Wyden.

This is almost certainly illegal, although the NSA maintains that it is legal
until itΓÇÖs told otherwise.

Here are WydenΓÇÖs press release and some news articles.

** *** ***** ******* *********** *************

New Images of Colossus Released

[2024.01.30] GCHQ has released new images of the WWII Colossus code-breaking
computer, celebrating the machineΓÇÖs eightieth anniversary (birthday?).

News article.

** *** ***** ******* *********** *************

CFPBΓÇÖs Proposed Data Rules

[2024.01.31] In October, the Consumer Financial Protection Bureau (CFPB)
proposed a set of rules that if implemented would transform how financial
institutions handle personal data about their customers. The rules put control
of that data back in the hands of ordinary Americans, while at the same time
undermining the data broker economy and increasing customer choice and
competition. Beyond these economic effects, the rules have important data
security benefits.

The CFPBΓÇÖs rules align with a key security idea: the decoupling principle. By
separating which companies see what parts of our data, and in what contexts, we
can gain control over data about ourselves (improving privacy) and harden cloud
infrastructure against hacks (improving security). Officials at the CFPB have
described the new rules as an attempt to accelerate a shift toward ΓÇ£open
banking,ΓÇ¥ and after an initial comment period on the new rules closed late
last year, Rohit Chopra, the CFPBΓÇÖs director, has said he would like to see
the rule finalized by this fall.

Right now, uncountably many data brokers keep tabs on your buying habits. When
you purchase something with a credit card, that transaction is shared with
unknown third parties. When you get a car loan or a house mortgage, that
information, along with your Social Security number and other sensitive data, is
also shared with unknown third parties. You have no choice in the matter. The
companies will freely tell you this in their disclaimers about personal
information sharing: that you cannot opt-out of data sharing with
ΓÇ£affiliateΓÇ¥ companies. Since most of us canΓÇÖt reasonably avoid getting a
loan or using a credit card, weΓÇÖre forced to share our data. Worse still, you
donΓÇÖt have a right to even see your data or vet it for accuracy, let alone
limit its spread.

The CFPBΓÇÖs simple and practical rules would fix this. The rules would ensure
people can obtain their own financial data at no cost, control who itΓÇÖs shared
with and choose who they do business with in the financial industry. This would
change the economics of consumer finance and the illicit data economy that
exists today.

The best way for financial services firms to meet the CFPBΓÇÖs rules would be to
apply the decoupling principle broadly. Data is a toxic asset, and in the long
run theyΓÇÖll find that itΓÇÖs better to not be sitting on a mountain of poorly
secured financial data. Deleting the data is better for their users and reduces
the chance theyΓÇÖll incur expenses from a ransomware attack or breach
settlement. As it stands, the collection and sale of consumer data is too
lucrative for companies to say no to participating in the data broker economy,
and the CFPBΓÇÖs rules may help eliminate the incentive for companies to buy and
sell these toxic assets. Moreover, in a free market for financial services,
users will have the option to choose more responsible companies that also may be
less expensive, thanks to savings from improved security.

Credit agencies and data brokers currently make money both from lenders
requesting reports and from consumers requesting their data and seeking services
that protect against data misuse. The CFPBΓÇÖs new rules -- and the technical
changes necessary to comply with them -- would eliminate many of those income
streams. These companies have many roles, some of which we want and some we
donΓÇÖt, but as consumers we donΓÇÖt have any choice in whether we participate
in the buying and selling of our data. Giving people rights to their financial
information would reduce the job of credit agencies to their core function:
assessing risk of borrowers.

A free and properly regulated market for financial services also means choice
and competition, something the industry is sorely in need of. Equifax,
Transunion and Experian make up a longstanding oligopoly for credit reporting.
Despite being responsible for one of the biggest data breaches of all time in
2017, the credit bureau Equifax is still around -- illustrating that the
oligopolistic nature of this market means that companies face few consequences
for misbehavior.

On the banking side, the steady consolidation of the banking sector has resulted
in a small number of very large banks holding most deposits and thus most
financial data. Behind the scenes, a variety of financial data clearinghouses --
companies most of us have never heard of -- get breached all the time, losing
our personal data to scammers, identity thieves and foreign governments.

The CFPBΓÇÖs new rules would require institutions that deal with financial data
to provide simple but essential functions to consumers that stand to deliver
security benefits. This would include the use of application programming
interfaces (APIs) for software, eliminating the barrier to interoperability
presented by todayΓÇÖs baroque, non-standard and non-programmatic interfaces to
access data. Each such interface would allow for interoperability and potential
competition. The CFPB notes that some companies have tried to claim that their
current systems provide security by being difficult to use. As security experts,
we disagree: Such aging financial systems are notoriously insecure and simply
rely upon security through obscurity.

Furthermore, greater standardization and openness in financial data with
mechanisms for consumer privacy and control means fewer gatekeepers. The CFPB
notes that a small number of data aggregators have emerged by virtue of the
complexity and opaqueness of todayΓÇÖs systems. These aggregators provide little
economic value to the country as a whole; they extract value from us all while
hindering competition and dynamism. The few new entrants in this space have
realized how valuable it is for them to present standard APIs for these systems
while managing the ugly plumbing behind the scenes.

In addition, by eliminating the opacity of the current financial data ecosystem,
the CFPB is able to add a new requirement of data traceability and
certification: Companies can only use consumersΓÇÖ data when absolutely
necessary for providing a service the consumer wants. This would be another big
win for consumer financial data privacy.

It might seem surprising that a set of rules designed to improve competition
also improves security and privacy, but it shouldnΓÇÖt. When companies can make
business decisions without worrying about losing customers, security and privacy
always suffer. Centralization of data also means centralization of control and
economic power and a decline of competition.

If this rule is implemented it will represent an important, overdue step to
improve competition, privacy and security. But thereΓÇÖs more that can and needs
to be done. In time, we hope to see more regulatory frameworks that give
consumers greater control of their data and increased adoption of the technology
and architecture of decoupling to secure all of our personal data, wherever it
may be.

This essay was written with Barath Raghavan, and was originally published in
Cyberscoop.

** *** ***** ******* *********** *************

FacebookΓÇÖs Extensive Surveillance Network

[2024.02.01] Consumer Reports is reporting that Facebook has built a massive
surveillance network:

Using a panel of 709 volunteers who shared archives of their Facebook data,
Consumer Reports found that a total of 186,892 companies sent data about them to
the social network. On average, each participant in the study had their data
sent to Facebook by 2,230 companies. That number varied significantly, with some
panelistsΓÇÖ data listing over 7,000 companies providing their data. The Markup
helped Consumer Reports recruit participants for the study. Participants
downloaded an archive of the previous three years of their data from their
Facebook settings, then provided it to Consumer Reports.

This isnΓÇÖt data about your use of Facebook. This data about your interactions
with other companies, all of which is correlated and analyzed by Facebook. It
constantly amazes me that we willingly allow these monopoly companies that kind
of surveillance power.

HereΓÇÖs the Consumer Reports study. It includes policy recommendations:

Many consumers will rightly be concerned about the extent to which their
activity is tracked by Facebook and other companies, and may want to take action
to counteract consistent surveillance. Based on our analysis of the sample data,
consumers need interventions that will:

Reduce the overall amount of tracking. 

Improve the ability for consumers to take advantage of their right to opt out
under state privacy laws. 

Empower social media platform users and researchers to review who and what
exactly is being advertised on Facebook. 

Improve the transparency of FacebookΓÇÖs existing tools.

And then the report gives specifics.

** *** ***** ******* *********** *************

A Self-Enforcing Protocol to Solve Gerrymandering

[2024.02.02] In 2009, I wrote:

There are several ways two people can divide a piece of cake in half. One way is
to find someone impartial to do it for them. This works, but it requires another
person. Another way is for one person to divide the piece, and the other person
to complain (to the police, a judge, or his parents) if he doesnΓÇÖt think
itΓÇÖs fair. This also works, but still requires another person -- at least to
resolve disputes. A third way is for one person to do the dividing, and for the
other person to choose the half he wants.

The point is that unlike protocols that require a neutral third party to
complete (arbitrated), or protocols that require that neutral third party to
resolve disputes (adjudicated), self-enforcing protocols just work.
Cut-and-choose works because neither side can cheat. And while the math can get
really complicated, the idea generalizes to multiple people.

Well, someone just solved gerrymandering in this way. Prior solutions required
either a bipartisan commission to create fair voting districts (arbitrated), or
require a judge to approve district boundaries (adjudicated), their solution is
self-enforcing.

And itΓÇÖs trivial to explain:

One party defines a map of equal-population contiguous districts. 

Then, the second party combines pairs of contiguous districts to create the
final map.

ItΓÇÖs not obvious that this solution works. You could imagine that all the
districts are defined so that one party has a slight majority. In that case, no
combination of pairs will make that map fair. But real-world gerrymandering is
never that clean. ThereΓÇÖs ΓÇ£cracking,ΓÇ¥ where a partyΓÇÖs voters are split
amongst several districts to dilute its power; and ΓÇ£packing,ΓÇ¥ where a
partyΓÇÖs voters are concentrated in a single district so its influence can be
minimized elsewhere. It turns out that this ΓÇ£define-combine procedureΓÇ¥
works; the combining party can undo any damage that the defining party does --
that the results are fair. The paper has all the details, and theyΓÇÖre
fascinating.

Of course, a theoretical solution is not a political solution. But itΓÇÖs really
neat to have a theoretical solution.

** *** ***** ******* *********** *************

David Kahn

[2024.02.02] David Kahn has died. His groundbreaking book, The Codebreakers, was
the first serious book I read about codebreaking, and one of the primary reasons
I entered this field.

He will be missed.

EDITED TO ADD (2/4): Funeral website.

EDITED TO ADD (2/10): New York Times obituary.

** *** ***** ******* *********** *************

Deepfake Fraud

[2024.02.05] A deepfake video conference call -- with everyone else on the call
a fake -- fooled a finance worker into sending $25M to the criminalsΓÇÖ account.

** *** ***** ******* *********** *************

Documents about the NSAΓÇÖs Banning of Furby Toys in the 1990s

[2024.02.06] Via a FOIA request, we have documents from the NSA about their
banning of Furby toys. 404 Media has the story.

EDITED TO ADD: The documents are now on Archive.org.

** *** ***** ******* *********** *************

Teaching LLMs to Be Deceptive

[2024.02.07] Interesting research: ΓÇ£Sleeper Agents: Training Deceptive LLMs
that Persist Through Safety TrainingΓÇ£:

Abstract: Humans are capable of strategically deceptive behavior: behaving
helpfully in most situations, but then behaving very differently in order to
pursue alternative objectives when given the opportunity. If an AI system
learned such a deceptive strategy, could we detect it and remove it using
current state-of-the-art safety training techniques? To study this question, we
construct proof-of-concept examples of deceptive behavior in large language
models (LLMs). For example, we train models that write secure code when the
prompt states that the year is 2023, but insert exploitable code when the stated
year is 2024. We find that such backdoor behavior can be made persistent, so
that it is not removed by standard safety training techniques, including
supervised fine-tuning, reinforcement learning, and adversarial training
(eliciting unsafe behavior and then training to remove it). The backdoor
behavior is most persistent in the largest models and in models trained to
produce chain-of-thought reasoning about deceiving the training process, with
the persistence remaining even when the chain-of-thought is distilled away.
Furthermore, rather than removing backdoors, we find that adversarial training
can teach models to better recognize their backdoor triggers, effectively hiding
the unsafe behavior. Our results suggest that, once a model exhibits deceptive
behavior, standard techniques could fail to remove such deception and create a
false impression of safety.

Especially note one of the sentences from the abstract: ΓÇ£For example, we train
models that write secure code when the prompt states that the year is 2023, but
insert exploitable code when the stated year is 2024.ΓÇ¥

And this deceptive behavior is hard to detect and remove.

** *** ***** ******* *********** *************

On Software Liabilities

[2024.02.08] Over on Lawfare, Jim Dempsey published a really interesting
proposal for software liability: ΓÇ£Standard for Software Liability: Focus on
the Product for Liability, Focus on the Process for Safe Harbor.ΓÇ¥

Section 1 of this paper sets the stage by briefly describing the problem to be
solved. Section 2 canvasses the different fields of law (warranty, negligence,
products liability, and certification) that could provide a starting point for
what would have to be legislative action establishing a system of software
liability. The conclusion is that all of these fields would face the same
question: How buggy is too buggy? Section 3 explains why existing software
development frameworks do not provide a sufficiently definitive basis for legal
liability. They focus on process, while a liability regime should begin with a
focus on the product -- that is, on outcomes. Expanding on the idea of building
codes for building code, Section 4 shows some examples of product-focused
standards from other fields. Section 5 notes that already there have been
definitive expressions of software defects that can be drawn together to form
the minimum legal standard of security. It specifically calls out the list of
common software weaknesses tracked by the MITRE Corporation under a government
contract. Section 6 considers how to define flaws above the minimum floor and
how to limit that liability with a safe harbor.

Full paper here.

Dempsey basically creates three buckets of software vulnerabilities: easy stuff
that the vendor should have found and fixed, hard-to-find stuff that the vendor
couldnΓÇÖt be reasonably expected to find, and the stuff in the middle. He draws
from other fields -- consumer products, building codes, automobile design -- to
show that courts can deal with the stuff in the middle.

I have long been a fan of software liability as a policy mechanism for improving
cybersecurity. And, yes, software is complicated, but we shouldnΓÇÖt let the
perfect be the enemy of the good.

In 2003, I wrote:

Clearly this isnΓÇÖt all or nothing. There are many parties involved in a
typical software attack. ThereΓÇÖs the company who sold the software with the
vulnerability in the first place. ThereΓÇÖs the person who wrote the attack
tool. ThereΓÇÖs the attacker himself, who used the tool to break into a network.
ThereΓÇÖs the owner of the network, who was entrusted with defending that
network. One hundred percent of the liability shouldnΓÇÖt fall on the shoulders
of the software vendor, just as one hundred percent shouldnΓÇÖt fall on the
attacker or the network owner. But today one hundred percent of the cost falls
on the network owner, and that just has to stop.

Courts can adjudicate these complex liability issues, and have figured this
thing out in other areas. Automobile accidents involve multiple drivers,
multiple cars, road design, weather conditions, and so on. Accidental restaurant
poisonings involve suppliers, cooks, refrigeration, sanitary conditions, and so
on. We donΓÇÖt let the fact that no restaurant can possibly fix all of the
food-safety vulnerabilities lead us to the conclusion that restaurants
shouldnΓÇÖt be responsible for any food-safety vulnerabilities, yet I hear that
line of reasoning regarding software vulnerabilities all of the time.

** *** ***** ******* *********** *************

No, Toothbrushes Were Not Used in a Massive DDoS Attack

[2024.02.09] The widely reported story last week that 1.5 million smart
toothbrushes were hacked and used in a DDoS attack is false.

Near as I can tell, a German reporter talking to someone at Fortinet got it
wrong, and then everyone else ran with it without reading the German text. It
was a hypothetical, which Fortinet eventually confirmed.

Or maybe it was a stock-price hack.

** *** ***** ******* *********** *************

On Passkey Usability

[2024.02.12] Matt Burgess tries to only use passkeys. The results are mixed.

** *** ***** ******* *********** *************

Molly White Reviews Blockchain Book

[2024.02.13] Molly White -- of ΓÇ£Web3 is Going Just GreatΓÇ¥ fame -- reviews
Chris DixonΓÇÖs blockchain solutions book: Read Write Own:

In fact, throughout the entire book, Dixon fails to identify a single blockchain
project that has successfully provided a non-speculative service at any kind of
scale. The closest he ever comes is when he speaks of how ΓÇ£for decades,
technologists have dreamed of building a grassroots internet access providerΓÇ¥.
He describes one project that ΓÇ£got further than anyone elseΓÇ¥: Helium. HeΓÇÖs
right, as long as you ignore the fact that Helium was providing LoRaWAN, not
Internet, that by the time he was writing his book Helium hotspots had long
since passed the phase where they might generate even enough tokens for their
operators to merely break even, and that the network was pulling in somewhere
around $1,150 in usage fees a month despite the company being valued at $1.2
billion. Oh, and that the company had widely lied to the public about its
supposed big-name clients, and that its executives have been accused of hoarding
the projectΓÇÖs token to enrich themselves. But hey, a16z sunk millions into
Helium (a fact Dixon never mentions), so might as well try to drum up some new
interest!

** *** ***** ******* *********** *************

A HackerΓÇÖs Mind is Out in Paperback

[2024.02.13] The paperback version of A HackerΓÇÖs Mind has just been published.
ItΓÇÖs the same book, only a cheaper format.

But -- and this is the real reason I am posting this -- Amazon has significantly
discounted the hardcover to $15 to get rid of its stock. This is much cheaper
than I am selling it for, and cheaper even than the paperback. So if youΓÇÖve
been waiting for a price drop, this is your chance.

** *** ***** ******* *********** *************

Improving the Cryptanalysis of Lattice-Based Public-Key Algorithms

[2024.02.14] The winner of the Best Paper Award at Crypto this year was a
significant improvement to lattice-based cryptanalysis.

This is important, because a bunch of NISTΓÇÖs post-quantum options base their
security on lattice problems.

I worry about standardizing on post-quantum algorithms too quickly. We are still
learning a lot about the security of these systems, and this paper is an example
of that learning.

News story.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2024.02.14] This is a current list of where and when I am scheduled to speak:

IΓÇÖm speaking at the Munich Security Conference (MSC) 2024 in Munich, Germany,
on Friday, February 16, 2024.

IΓÇÖm giving a keynote on ΓÇ£AI and TrustΓÇ¥ at Generative AI, Free Speech, &
Public Discourse. The symposium will be held at Columbia University in New York
City and online, at 3 PM ET on Tuesday, February 20, 2024.

IΓÇÖm speaking (remotely) on ΓÇ£AI, Trust and DemocracyΓÇ¥ at Indiana University
in Bloomington, Indiana, USA, at noon ET on February 20, 2024. The talk is part
of the 2023-2024 Beyond the Web Speaker Series, presented by The Ostrom Workshop
and Hamilton Lugar School.

The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A HackerΓÇÖs Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2024 by Bruce Schneier.

** *** ***** ******* *********** *************

Mailing list hosting graciously provided by MailChimp. Sent without web bugs or
link tracking.
--- 
 * Origin: High Portable Tosser at my node (618:500/14)
  Show ANSI Codes | Hide BBCodes | Show Color Codes | Hide Encoding | Hide HTML Tags | Show Routing
Previous Message | Next Message | Back to Computer Support/Help/Discussion...  <--  <--- Return to Home Page

VADV-PHP
Execution Time: 0.0235 seconds

If you experience any problems with this website or need help, contact the webmaster.
VADV-PHP Copyright © 2002-2024 Steve Winn, Aspect Technologies. All Rights Reserved.
Virtual Advanced Copyright © 1995-1997 Roland De Graaf.
v2.1.220106