From: Sean Rima
To: All
Subject: CRYPTO-GRAM, November 15, 2023
Date: November 17, 2023 1:07 PM

Crypto-Gram 
November 15, 2023

by Bruce Schneier 
Fellow and Lecturer, Harvard Kennedy School 
schneier@schneier.com 
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

Coin Flips Are Biased
Security Vulnerability of Switzerland’s E-Voting System
Analysis of Intellexa’s Predator Spyware
Former Uber CISO Appealing His Conviction
AI and US Election Rules
Child Exploitation and the Crypto Wars
EPA Won’t Force Water Utilities to Audit Their Cybersecurity
Microsoft is Soft-Launching Security Copilot
New NSA Information from (and about) Snowden
Messaging Service Wiretap Discovered through Expired TLS Cert
Hacking Scandinavian Alcohol Tax
The Future of Drone Warfare
Spyware in India
New York Increases Cybersecurity Rules for Financial Companies
Crashing iPhones with a Flipper Zero
Spaf on the Morris Worm
Decoupling for Security
Online Retail Hack
The Privacy Disaster of Modern Smart Cars
Ten Ways AI Will Change Democracy
How .tk Became a TLD for Scammers
Upcoming Speaking Engagements
** *** ***** ******* *********** *************

Coin Flips Are Biased

[2023.10.16] Experimental result:

Many people have flipped coins but few have stopped to ponder the statistical
and physical intricacies of the process. In a preregistered study we collected
350,757 coin flips to test the counterintuitive prediction from a physics model
of human coin tossing developed by Persi Diaconis. The model asserts that when
people flip an ordinary coin, it tends to land on the same side it started --
Diaconis estimated the probability of a same-side outcome to be about 51%.

And the final paragraph:

Could future coin tossers use the same-side bias to their advantage? The
magnitude of the observed bias can be illustrated using a betting scenario. If
you bet a dollar on the outcome of a coin toss (i.e., paying 1 dollar to enter,
and winning either 0 or 2 dollars depending on the outcome) and repeat the bet
1,000 times, knowing the starting position of the coin toss would earn you 19
dollars on average. This is more than the casino advantage for 6 deck blackjack
against an optimal-strategy player, where the casino would make 5 dollars on a
comparable bet, but less than the casino advantage for single-zero roulette,
where the casino would make 27 dollars on average. These considerations lead us
to suggest that when coin flips are used for high-stakes decision-making, the
starting position of the coin is best concealed.
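
The arithmetic is easy to check with a quick simulation. The following is our
own illustrative sketch, not the study's code; it assumes a same-side
probability of 0.51 (Diaconis's predicted bias) and a bettor who always
guesses the starting side.

    import random

    P_SAME = 0.51      # assumed same-side probability (Diaconis's prediction)
    BETS = 1_000       # $1 per bet: pay $1, win $2 if correct, $0 otherwise
    TRIALS = 10_000    # number of 1,000-bet sessions to average over

    def session_profit() -> int:
        # Count correct guesses; profit is $2 per win minus $1,000 in entry fees.
        wins = sum(random.random() < P_SAME for _ in range(BETS))
        return 2 * wins - BETS

    avg = sum(session_profit() for _ in range(TRIALS)) / TRIALS
    print(f"average profit per {BETS} bets: ${avg:.2f}")
    # Expected value is (2p - 1) per bet: about $20 per 1,000 bets at p = 0.51,
    # in the same ballpark as the $19 the authors report for their estimate.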

Boing Boing post.

** *** ***** ******* *********** *************

Security Vulnerability of Switzerland’s E-Voting System

[2023.10.17] Online voting is insecure, period. This doesn’t stop
organizations and governments from using it. (And for low-stakes elections,
it’s probably fine.) Switzerland -- not low stakes -- uses online voting for
national elections. Andrew Appel explains why it’s a bad idea:

Last year, I published a 5-part series about Switzerland’s e-voting system.
Like any internet voting system, it has inherent security vulnerabilities: if
there are malicious insiders, they can corrupt the vote count; and if thousands
of voters’ computers are hacked by malware, the malware can change votes as
they are transmitted. Switzerland “solves” the problem of malicious insiders
in their printing office by officially declaring that they won’t consider that
threat model in their cybersecurity assessment.

But it also has an interesting new vulnerability:

The Swiss Post e-voting system aims to protect your vote against vote
manipulation and interference. The goal is to achieve this even if your own
computer is infected by undetected malware that manipulates a user vote. This
protection is implemented by special return codes (Prüfcode), printed on the
sheet of paper you receive by physical mail. Your computer doesn’t know these
codes, so even if it’s infected by malware, it can’t successfully cheat you,
as long as you follow the protocol.

Unfortunately, the protocol isn’t explained to you on the piece of paper you
get by mail. It’s only explained to you online, when you visit the e-voting
website. And of course, that’s part of the problem! If your computer is
infected by malware, then it can already present to you a bogus website that
instructs you to follow a different protocol, one that is cheatable. To
demonstrate this, I built a proof-of-concept demonstration.

Appel again:

Kuster’s fake protocol is not exactly what I imagined; it’s better. He
explains it all in his blog post. Basically, in his malware-manipulated website,
instead of displaying the verification codes for the voter to compare with
what’s on the paper, the website asks the voter to enter the verification
codes into a web form. Since the website doesn’t know what’s on the paper,
that web-form entry is just for show. Of course, Kuster did not employ a botnet
virus to distribute his malware to real voters! He keeps it contained on his own
system and demonstrates it in a video.
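
The difference between the two flows is easy to see in code. This is a
hypothetical sketch invented for illustration -- not Swiss Post's protocol or
Kuster's code, and the codes and choices are made up. Verification only works
when information flows from server to voter, so the voter (holding the paper)
does the checking.

    # Hypothetical illustration of return-code verification.
    # The printing office mails each voter a sheet mapping ballot choices to
    # secret codes that the voter's computer never learns.
    PAPER_CODES = {"yes": "4821", "no": "9073"}

    def honest_site(recorded_vote: str) -> None:
        # The genuine site DISPLAYS the code for the vote it actually recorded.
        # If malware flipped the vote, the shown code matches the wrong line on
        # the paper, and the voter notices.
        code = PAPER_CODES[recorded_vote]   # stands in for the server's derivation
        print(f"Your return code is {code}. Compare it with your sheet.")

    def malicious_site(claimed_vote: str) -> None:
        # Kuster's demo inverts the flow: the fake site ASKS the voter to type
        # the code in. It can't know the right code, but it doesn't need to --
        # it discards the input and reports success regardless.
        input(f"Enter the return code printed next to '{claimed_vote}': ")
        print("Thank you, your vote has been verified.")   # a lie

    honest_site("yes")   # prints 4821; the voter checks it against the sheet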

Again, the solution is paper. (Here I am saying that in 2004.) And, no,
blockchain does not help -- it makes security worse.

** *** ***** ******* *********** *************

Analysis of Intellexa’s Predator Spyware

[2023.10.18] Amnesty International has published a comprehensive analysis of the
Predator government spyware products.

These technologies used to be the exclusive purview of organizations like the
NSA. Now they’re available to every country on the planet -- democratic,
nondemocratic, authoritarian, whatever -- for a price. This is the legacy of not
securing the Internet when we could have.

** *** ***** ******* *********** *************

Former Uber CISO Appealing His Conviction

[2023.10.19] Joe Sullivan, Uber’s CISO during their 2016 data breach, is
appealing his conviction.

Prosecutors charged Sullivan, whom Uber hired as CISO after the 2014 breach,
with withholding information about the 2016 incident from the FTC even as its
investigators were scrutinizing the company’s data security and privacy
practices. The government argued that Sullivan should have informed the FTC of
the 2016 incident, but instead went out of his way to conceal it from them.

Prosecutors also accused Sullivan of attempting to conceal the breach itself by
paying $100,000 to buy the silence of the two hackers behind the compromise.
Sullivan had characterized the payment as a bug bounty similar to ones that
other companies routinely make to researchers who report vulnerabilities and
other security issues to them. His lawyers pointed out that Sullivan had made
the payment with the full knowledge and blessing of Travis Kalanick, Uber’s
CEO at the time, and other members of the ride-sharing giant’s legal team.

But prosecutors described the payment and an associated nondisclosure agreement
that Sullivan’s team wanted the hackers to sign as an attempt to cover up what
was in effect a felony breach of Uber’s network.

[...]

SullivanΓÇÖs fate struck a nerve with many peers and others in the industry who
perceived CISOs as becoming scapegoats for broader security failures at their
companies. Many argued and continue to argue that Sullivan acted with the full
knowledge of his supervisors but in the end became the sole culprit for the
breach and the associated failures for which he was charged. They believed that
if Sullivan could be held culpable for his failure to report the 2016 breach to
the FTC -- and for the alleged hush payment -- then so should Kalanick at the
very least, and probably others as well.

It’s an argument that Sullivan’s lawyers once again raised in their appeal
of the obstruction conviction this week. “Despite the fact that Mr. Sullivan
was not responsible at Uber for the FTC’s investigation, including the
drafting or signing any of the submissions to the FTC, the government singled
him out among over 30 of his co-employees who all had information that Mr.
Sullivan is alleged to have hidden from the FTC,” Swaminathan said.

I have some sympathy for that view. Sullivan was almost certainly scapegoated
here. But I do want executives personally liable for what their company does. I
don’t know enough about the details to have an opinion in this particular
case.

** *** ***** ******* *********** *************

AI and US Election Rules

[2023.10.20] If an AI breaks the rules for you, does that count as breaking the
rules? This is the essential question being taken up by the Federal Election
Commission this month, and public input is needed to curtail the potential for
AI to take US campaigns (even more) off the rails.

At issue is whether candidates using AI to create deepfaked media for political
advertisements should be considered fraud or legitimate electioneering. That is,
is it allowable to use AI image generators to create photorealistic images
depicting Trump hugging Anthony Fauci? And is it allowable to use dystopic
images generated by AI in political attack ads?

For now, the answer to these questions is probably “yes.” These are fairly
innocuous uses of AI, not any different than the old-school approach of hiring
actors and staging a photoshoot, or using video editing software. Even in cases
where AI tools will be put to scurrilous purposes, that’s probably legal in
the US system. Political ads are, after all, a medium in which you are
explicitly permitted to lie.

The concern over AI is a distraction, but one that can help draw focus to the
real issue. What matters isn’t how political content is generated; what
matters is the content itself and how it is distributed.

Future uses of AI by campaigns go far beyond deepfaked images. Campaigns will
also use AI to personalize communications. Whereas the previous generation of
social media microtargeting was celebrated for helping campaigns reach a
precision of thousands or hundreds of voters, the automation offered by AI will
allow campaigns to tailor their advertisements and solicitations to the
individual.

Most significantly, AI will allow digital campaigning to evolve from a broadcast
medium to an interactive one. AI chatbots representing campaigns are capable of
responding to questions instantly and at scale, like a town hall taking place in
every voter’s living room, simultaneously. Ron DeSantis’ presidential
campaign has reportedly already started using OpenAI’s technology to handle
text message replies to voters.

At the same time, it’s not clear whose responsibility it is to keep US
political advertisements grounded in reality -- if it is anyone’s. The FEC’s
role is campaign finance, and is further circumscribed by the Supreme Court’s
repeated stripping of its authorities. The Federal Communications Commission has
much more expansive responsibility for regulating political advertising in
broadcast media, as well as political robocalls and text communications.
However, the FCC hasn’t done much in recent years to curtail political spam.
The Federal Trade Commission enforces truth in advertising standards, but
political campaigns have been largely exempted from these requirements on First
Amendment grounds.

To further muddy the waters, much of the online space remains loosely regulated,
even as campaigns have fully embraced digital tactics. There are still
insufficient disclosure requirements for digital ads. Campaigns pay influencers
to post on their behalf to circumvent paid advertising rules. And there are
essentially no rules beyond the simple use of disclaimers for videos that
campaigns post organically on their own websites and social media accounts, even
if they are shared millions of times by others.

Almost everyone has a role to play in improving this situation.

LetΓÇÖs start with the platforms. Google announced earlier this month that it
would require political advertisements on YouTube and the company’s other
advertising platforms to disclose when they use AI images, audio, and video that
appear in their ads. This is to be applauded, but we cannot rely on voluntary
actions by private companies to protect our democracy. Such policies, even when
well-meaning, will be inconsistently devised and enforced.

The FEC should use its limited authority to stem this coming tide. The FEC’s
present consideration of rulemaking on this issue was prompted by Public
Citizen, which petitioned the Commission to “clarify that the law against
‘fraudulent misrepresentation’ (52 U.S.C. §30124) applies to deliberately
deceptive AI-produced content in campaign communications.” The FEC’s
regulation against fraudulent misrepresentation (C.F.R. §110.16) is very
narrow; it simply restricts candidates from pretending to be speaking on behalf
of their opponents in a “damaging” way.

Extending this to explicitly cover deepfaked AI materials seems appropriate. We
should broaden the standards to robustly regulate the activity of fraudulent
misrepresentation, whether the entity performing that activity is AI or human --
but this is only the first step. If the FEC takes up rulemaking on this issue,
it could further clarify what constitutes “damage.” Is it damaging when a
PAC promoting Ron DeSantis uses an AI voice synthesizer to generate a convincing
facsimile of the voice of his opponent Donald Trump speaking his own Tweeted
words? That seems like fair play. What if opponents find a way to manipulate the
tone of the speech in a way that misrepresents its meaning? What if they make up
words to put in Trump’s mouth? Those use cases seem to go too far, but drawing
the boundaries between them will be challenging.

Congress has a role to play as well. Senator Klobuchar and colleagues have been
promoting both the existing Honest Ads Act and the proposed REAL Political Ads
Act, which would expand the FEC’s disclosure requirements for content posted
on the Internet and create a legal requirement for campaigns to disclose when
they have used images or video generated by AI in political advertising. While
that’s worthwhile, it focuses on the shiny object of AI and misses the
opportunity to strengthen law around the underlying issues. The FEC needs more
authority to regulate campaign spending on false or misleading media generated
by any means and published to any outlet. Meanwhile, the FEC’s own Inspector
General continues to warn Congress that the agency is stressed by flat budgets
that don’t allow it to keep pace with ballooning campaign spending.

It is intolerable for such a patchwork of commissions to be left to wonder
which, if any of them, has jurisdiction to act in the digital space. Congress
should legislate to make clear that there are guardrails on political speech and
to better draw the boundaries between the FCC, FEC, and FTC’s roles in
governing political speech. While the Supreme Court cannot be relied upon to
uphold common sense regulations on campaigning, there are strategies for
strengthening regulation under the First Amendment. And Congress should allocate
more funding for enforcement.

The FEC has asked Congress to expand its jurisdiction, but no action is
forthcoming. The present Senate Republican leadership is seen as an ironclad
barrier to expanding the Commission’s regulatory authority. Senate Majority
Leader Mitch McConnell has a decades-long history of being at the forefront of
the movement to deregulate American elections and constrain the FEC. In 2003, he
brought the unsuccessful Supreme Court case against the McCain-Feingold campaign
finance reform act (the one that failed before the Citizens United case
succeeded).

The most impactful regulatory requirement would be to require disclosure of
interactive applications of AI for campaigns -- and this should fall under the
remit of the FCC. If a neighbor texts me and urges me to vote for a candidate, I
might find that meaningful. If a bot does it under the instruction of a
campaign, I definitely won’t. But I might find a conversation with the bot --
knowing it is a bot -- useful to learn about the candidate’s platform and
positions, as long as I can be confident it is going to give me trustworthy
information.

The FCC should enter rulemaking to expand its authority for regulating
peer-to-peer (P2P) communications to explicitly encompass interactive AI
systems. And Congress should pass enabling legislation to back it up, giving it
authority to act not only on the SMS text messaging platform, but also over the
wider Internet, where AI chatbots can be accessed over the web and through apps.

And the media has a role. We can still rely on the media to report out what
videos, images, and audio recordings are real or fake. Perhaps deepfake
technology makes it impossible to verify the truth of what is said in private
conversations, but this was always unstable territory.

What is your role? Those who share these concerns could submit a comment to the
FEC’s open public comment process before October 16, urging it to use its
available authority. We all know government moves slowly, but a show of public
interest is necessary to get the wheels moving.

Ultimately, all these policy changes serve the purpose of looking beyond the
shiny distraction of AI to create the authority to counter bad behavior by
humans. Remember: behind every AI is a human who should be held accountable.

This essay was written with Nathan Sanders, and was previously published on the
Ash Center website.

** *** ***** ******* *********** *************

Child Exploitation and the Crypto Wars

[2023.10.23] Susan Landau published an excellent essay on the current
justification for the government breaking end-to-end encryption: child sexual
abuse and exploitation (CSAE). She puts the debate into historical context,
discusses the problem of CSAE, and explains why breaking encryption isn’t the
solution.

** *** ***** ******* *********** *************

EPA Won’t Force Water Utilities to Audit Their Cybersecurity

[2023.10.24] The industry pushed back:

Despite the EPA’s willingness to provide training and technical support to
help states and public water system organizations implement cybersecurity
surveys, the move garnered opposition from both GOP state attorneys and trade
groups.

Republican state attorneys who were against the new proposed policies said that
the call for new inspections could overwhelm state regulators. The attorneys
general of Arkansas, Iowa and Missouri all sued the EPA -- claiming the agency
had no authority to set these requirements. This led to the EPA’s proposal
being temporarily blocked back in June.

So now we have a piece of our critical infrastructure with substandard
cybersecurity. This seems like a really bad outcome.

** *** ***** ******* *********** *************

Microsoft is Soft-Launching Security Copilot

[2023.10.25] Microsoft has announced an early access program for its LLM-based
security chatbot assistant: Security Copilot.

I am curious whether this thing is actually useful.

** *** ***** ******* *********** *************

New NSA Information from (and about) Snowden

[2023.10.26] Interesting article about the Snowden documents, including comments
from former Guardian editor Ewen MacAskill.

MacAskill, who shared the Pulitzer Prize for Public Service with Glenn Greenwald
and Laura Poitras for their journalistic work on the Snowden files, retired from
The Guardian in 2018. He told Computer Weekly that:

As far as he knows, a copy of the documents is still locked in the New York
Times office. Although the files are in the New York Times office, The Guardian
retains responsibility for them.

As to why the New York Times has not published them in a decade, MacAskill
maintains “this is a complicated issue.” “There is, at the very least, a
case to be made for keeping them for future generations of historians,” he
said.

Why was only 1% of the Snowden archive published by the journalists who had
full access to it? Ewen MacAskill replied: “The main reason for only a small
percentage -- though, given the mass of documents, 1% is still a lot -- was
diminishing interest.”
[...]

The Guardian’s journalist did not recall seeing the three revelations
published by Computer Weekly, summarized below:

The NSA listed Cavium, an American semiconductor company marketing Central
Processing Units (CPUs) -- the main processor in a computer which runs the
operating system and applications -- as a successful example of a
“SIGINT-enabled” CPU supplier. Cavium, now owned by Marvell, said it does
not implement back doors for any government.

The NSA compromised lawful Russian interception infrastructure, SORM. The NSA
archive contains slides showing two Russian officers wearing jackets with a
slogan written in Cyrillic: “You talk, we listen.” The NSA and/or GCHQ has
also compromised key lawful interception systems.

Among example targets of its mass-surveillance programme, PRISM, the NSA listed
the Tibetan government in exile.

Those three pieces of info come from Jake Appelbaum’s PhD thesis.

** *** ***** ******* *********** *************

Messaging Service Wiretap Discovered through Expired TLS Cert

[2023.10.27] Fascinating story of a covert wiretap that was discovered because
of an expired TLS certificate:

The suspected man-in-the-middle attack was identified when the administrator of
jabber.ru, the largest Russian XMPP service, received a notification that one of
the servers’ certificates had expired.

However, jabber.ru found no expired certificates on the server, as explained in
a blog post by ValdikSS, a pseudonymous anti-censorship researcher based in
Russia who collaborated on the investigation.

The expired certificate was instead discovered on a single port being used by
the service to establish an encrypted Transport Layer Security (TLS) connection
with users. Before it had expired, it would have allowed someone to decrypt the
traffic being exchanged over the service.
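
This kind of alert is easy to automate. Below is a minimal monitoring sketch
in Python; the host and port are placeholders (5223 is the conventional
direct-TLS XMPP port), not a claim about jabber.ru's actual configuration.

    import socket
    import ssl
    import time

    def check_cert(host: str, port: int) -> None:
        """Handshake with host:port and report the certificate's expiry."""
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
        except ssl.SSLCertVerificationError as err:
            # An expired (or otherwise invalid) certificate fails the
            # handshake outright -- the kind of alert that exposed the
            # interception box in this story.
            print(f"{host}:{port}: verification failed: {err.verify_message}")
            return
        days = (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400
        print(f"{host}:{port}: certificate expires in {days:.0f} days")

    check_cert("xmpp.example.org", 5223)   # placeholder host and port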

** *** ***** ******* *********** *************

Hacking Scandinavian Alcohol Tax

[2023.10.30] The islands of Åland are an important tax hack:

Although Åland is part of the Republic of Finland, it has its own autonomous
parliament. In areas where Åland has its own legislation, the group of islands
essentially operates as an independent nation.

This allows Scandinavians to avoid the notoriously high alcohol taxes:

Åland is a member of the EU and its currency is the euro, but Åland’s
relationship with the EU is regulated by way of a special protocol. In order to
maintain the important sale of duty-free goods on ferries operating between
Finland and Sweden, Åland is not part of the EU’s VAT area.

Basically, ferries between the two countries stop at the island, and people
stock up -- I mean really stock up, hand trucks piled with boxes -- on tax-free
alcohol. Åland gets the revenue, and presumably docking fees.

The purpose of the special status of the Åland Islands was to maintain the
right to tax free sales in the ship traffic. The ship traffic is of vital
importance for the province’s communication, and the intention was to support
the economy of the province this way.

** *** ***** ******* *********** *************

The Future of Drone Warfare

[2023.10.31] Ukraine is using $400 drones to destroy tanks:

Facing an enemy with superior numbers of troops and armor, the Ukrainian
defenders are holding on with the help of tiny drones flown by operators like
Firsov that, for a few hundred dollars, can deliver an explosive charge capable
of destroying a Russian tank worth more than $2 million.

[...]

A typical FPV weighs up to one kilogram, has four small engines, a battery, a
frame and a camera connected wirelessly to goggles worn by a pilot operating it
remotely. It can carry up to 2.5 kilograms of explosives and strike a target at
a speed of up to 150 kilometers per hour, explains Pavlo Tsybenko, acting
director of the Dronarium military academy outside Kyiv.

“This drone costs up to $400 and can be made anywhere. We made ours using
microchips imported from China and details we bought on AliExpress. We made the
carbon frame ourselves. And, yeah, the batteries are from Tesla. One car has
like 1,100 batteries that can be used to power these little guys,” Tsybenko
told POLITICO on a recent visit, showing the custom-made FPV drones used by the
academy to train future drone pilots.

“It is almost impossible to shoot it down,” he said. “Only a net can help.
And I predict that soon we will have to put up such nets above our cities, or at
least government buildings, all over Europe.”

Science fiction authors have been writing about drone swarms for decades. Now
they are reality. Tanks today. Soon it will be ships (probably with more
expensive drones). Feels like this will be a major change in warfare.

** *** ***** ******* *********** *************

Spyware in India

[2023.11.02] Apple has warned opposition political leaders in India that their
phones are being spied on:

Multiple top leaders of India’s opposition parties and several journalists
have received a notification from Apple, saying that “Apple believes you are
being targeted by state-sponsored attackers who are trying to remotely
compromise the iPhone associated with your Apple ID....”

AccessNow puts this in context:

For India to uphold fundamental rights, authorities must initiate an immediate
independent inquiry, implement a ban on the use of rights-abusing commercial
spyware, and make a commitment to reform the country’s surveillance laws.
These latest warnings build on repeated instances of cyber intrusion and
spyware usage, and highlight the surveillance impunity in India that continues
to flourish despite the public outcry triggered by the 2019 Pegasus Project
revelations.

** *** ***** ******* *********** *************

New York Increases Cybersecurity Rules for Financial Companies

[2023.11.03] Another example of a large and influential state doing things the
federal government won’t:

Boards of directors, or other senior committees, are charged with overseeing
cybersecurity risk management, and must retain an appropriate level of expertise
to understand cyber issues, the rules say. Directors must sign off on
cybersecurity programs, and ensure that any security program has “sufficient
resources” to function.

In a new addition, companies now face significant requirements related to ransom
payments. Regulated firms must now report any payment made to hackers within 24
hours of that payment.

** *** ***** ******* *********** *************

Crashing iPhones with a Flipper Zero

[2023.11.06] The Flipper Zero is an incredibly versatile hacking device. Now it
can be used to crash iPhones in its vicinity by sending them a never-ending
stream of pop-ups.

These types of hacks have been possible for decades, but they require special
equipment and a fair amount of expertise. The capabilities generally required
expensive SDRs -- short for software-defined radios -- that, unlike traditional
hardware-defined radios, use firmware and processors to digitally re-create
radio signal transmissions and receptions. The $200 Flipper Zero isn’t an SDR
in its own right, but as a software-controlled radio, it can do many of the same
things at an affordable price and with a form factor that’s much more
convenient than the previous generations of SDRs.

** *** ***** ******* *********** *************

Spaf on the Morris Worm

[2023.11.07] Gene Spafford wrote an essay reflecting on the Morris Worm of 1988
-- thirty-five years ago. His lessons from then are still applicable today.

** *** ***** ******* *********** *************

Decoupling for Security

[2023.11.08] This is an excerpt from a longer paper. You can read the whole
thing (complete with sidebars and illustrations) here.

Our message is simple: it is possible to get the best of both worlds. We can and
should get the benefits of the cloud while taking security back into our own
hands. Here we outline a strategy for doing that.

What Is Decoupling?

In the last few years, a slew of ideas old and new have converged to reveal a
path out of this morass, but they haven’t been widely recognized, combined, or
used. These ideas, which we’ll refer to in the aggregate as “decoupling,”
allow us to rethink both security and privacy.

Here’s the gist. The less someone knows, the less they can put you and your
data at risk. In security this is called Least Privilege. The decoupling
principle applies that idea to cloud services by making sure systems know as
little as possible while doing their jobs. It states that we gain security and
privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data
as we use cloud services: data in motion, data at rest, and data in use. We
should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as
videoconferencing servers, remote file-storage systems, and other
content-delivery networks. Our data at rest, while sometimes on individual
devices, is usually stored or backed up in the cloud, governed by cloud provider
services and policies. And many services use the cloud to do extensive
processing on our data, sometimes without our consent or knowledge. Most
services involve more than one of these modes.

To ensure that cloud services do not learn more than they should, and that a
breach of one does not pose a fundamental threat to our data, we need two types
of decoupling. The first is organizational decoupling: dividing private
information among organizations such that none knows the totality of what is
going on. The second is functional decoupling: splitting information among
layers of software. Identifiers used to authenticate users, for example, should
be kept separate from identifiers used to connect their devices to the network.
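
As a toy illustration of functional decoupling -- our invented example, not
code from the paper -- each layer can derive its own identifier for a user
under its own secret key, so neither layer's logs contain an identifier the
other can link back to the same person:

    import hashlib
    import hmac
    import os

    # Per-layer pseudonyms derived under independent keys: the layers' logs
    # cannot be joined on a shared ID without both keys.
    AUTH_KEY = os.urandom(32)       # held only by the authentication layer
    NETWORK_KEY = os.urandom(32)    # held only by the network layer

    def pseudonym(key: bytes, layer: str, user: str) -> str:
        # Keyed hash of (layer, user); unlinkable across layers without the key.
        mac = hmac.new(key, f"{layer}:{user}".encode(), hashlib.sha256)
        return mac.hexdigest()[:16]

    user = "alice@example.com"
    print("auth layer sees:   ", pseudonym(AUTH_KEY, "auth", user))
    print("network layer sees:", pseudonym(NETWORK_KEY, "net", user))
    # Linking the two identifiers requires both keys -- that is, collusion
    # between the organizations holding them.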

In designing decoupled systems, cloud providers should be considered potential
threats, whether due to malice, negligence, or greed. To verify that decoupling
has been done right, we can learn from how we think about encryption: you’ve
encrypted properly if you’re comfortable sending your message with your
adversary’s communications system. Similarly, you’ve decoupled properly if
you’re comfortable using cloud services that have been split across a
noncolluding group of adversaries.

Read the full essay

This essay was written with Barath Raghavan, and previously appeared in IEEE
Spectrum.

** *** ***** ******* *********** *************

Online Retail Hack

[2023.11.09] Selling miniature replicas to unsuspecting shoppers:

Online marketplaces sell tiny pink cowboy hats. They also sell miniature pencil
sharpeners, palm-size kitchen utensils, scaled-down books and camping chairs so
small they evoke the Stonehenge scene in “This Is Spinal Tap.” Many of the
minuscule objects aren’t clearly advertised.

[...]

But there is no doubt some online sellers deliberately trick customers into
buying smaller and often cheaper-to-produce items, Witcher said. Common tactics
include displaying products against a white background rather than in room sets
or on models, or photographing items with a perspective that makes them appear
bigger than they really are. Dimensions can be hidden deep in the product
description, or not included at all.

In those instances, the duped consumer “may say, well, it’s only $1, $2,
maybe $3 -- what’s the harm?” Witcher said. When the item arrives the
shopper may be confused, amused or frustrated, but unlikely to complain or
demand a refund.

“When you aggregate that to these companies who are selling hundreds of
thousands, maybe millions of these items over time, that adds up to a nice chunk
of change,” Witcher said. “It’s finding a loophole in how society works
and making money off of it.”

Defrauding a lot of people out of a small amount each can be a very successful
way of making money.

** *** ***** ******* *********** *************

The Privacy Disaster of Modern Smart Cars

[2023.11.10] Article based on a Mozilla report.

** *** ***** ******* *********** *************

Ten Ways AI Will Change Democracy

[2023.11.13] Artificial intelligence will change so many aspects of society,
largely in ways that we cannot conceive of yet. Democracy, and the systems of
governance that surround it, will be no exception. In this short essay, I want
to move beyond the “AI-generated disinformation” trope and speculate on some
of the ways AI will change how democracy functions -- in both large and small
ways.

When I survey how artificial intelligence might upend different aspects of
modern society, democracy included, I look at four different dimensions of
change: speed, scale, scope, and sophistication. Look for places where changes
in degree result in changes of kind. Those are where the societal upheavals will
happen.

Some items on my list are still speculative, but none require science-fictional
levels of technological advance. And we can see the first stages of many of them
today. When reading about the successes and failures of AI systems, it’s
important to differentiate between the fundamental limitations of AI as a
technology, and the practical limitations of AI systems in the fall of 2023.
Advances are happening quickly, and the impossible is becoming the routine. We
don’t know how long this will continue, but my bet is on continued major
technological advances in the coming years. Which means it’s going to be a
wild ride.

So, here’s my list:

1. AI as educator. We are already seeing AI serving the role of teacher. It’s
much more effective for a student to learn a topic from an interactive AI
chatbot than from a textbook. This has applications for democracy. We can
imagine chatbots teaching citizens about different issues, such as climate
change or tax policy. We can imagine candidates deploying chatbots of
themselves, allowing voters to directly engage with them on various issues. A
more general chatbot could know the positions of all the candidates, and help
voters decide which best represents their position. There are a lot of
possibilities here.

2. AI as sense maker. There are many areas of society where accurate
summarization is important. Today, when constituents write to their legislator,
those letters get put into two piles -- one for and another against -- and
someone compares the height of those piles. AI can do much better. It can
provide a rich summary of the comments. It can help figure out which are unique
and which are form letters. It can highlight unique perspectives. This same
system can also work for comments to different government agencies on
rulemaking processes -- and on documents generated during the discovery process
in lawsuits.

3. AI as moderator, mediator, and consensus builder. Imagine online
conversations in which AIs serve the role of moderator. This could ensure that
all voices are heard. It could block hateful -- or even just off-topic --
comments. It could highlight areas of agreement and disagreement. It could help
the group reach a decision. This is nothing that a human moderator can’t do,
but there aren’t enough human moderators to go around. AI can give this
capability to every decision-making group. At the extreme, an AI could be an
arbiter -- a judge -- weighing evidence and making a decision. These
capabilities don’t exist yet, but they are not far off.

4. AI as lawmaker. We have already seen proposed legislation written by AI,
albeit more as a stunt than anything else. But in the future AIs will help
craft legislation, dealing with the complex ways laws interact with each other.
More importantly, AIs will eventually be able to craft loopholes in
legislation, ones potentially too complicated for people to easily notice. On
the other side of that, AIs could be used to find loopholes in legislation --
for both existing and pending laws. And more generally, AIs could be used to
help develop policy positions.

5. AI as political strategist. Right now, you can ask your favorite chatbot
questions about political strategy: what legislation would further your
political goals, what positions to publicly take, what campaign slogans to use.
The answers you get won’t be very good, but that’ll improve with time. In
the future we should expect politicians to make use of this AI expertise: not
to follow blindly, but as another source of ideas. And as AIs become more
capable at using tools, they can automatically conduct polls and focus groups
to test out political ideas. There are a lot of possibilities here. AIs could
also engage in fundraising campaigns, directly soliciting contributions from
people.

6. AI as lawyer. We don’t yet know which aspects of the legal profession can
be done by AIs, but many routine tasks that are now handled by attorneys will
soon be able to be completed by an AI. Early attempts at having AIs write legal
briefs haven’t worked, but this will change as the systems get better at
accuracy. Additionally, AIs can help people navigate government systems:
filling out forms, applying for services, contesting bureaucratic actions. And
future AIs will be much better at writing legalese, reducing the cost of legal
counsel.

7. AI as cheap reasoning generator. More generally, AI chatbots are really good
at generating persuasive arguments. Today, writing out a persuasive argument
takes time and effort, and our systems reflect that. We can easily imagine AIs
conducting lobbying campaigns, generating and submitting comments on
legislation and rulemaking. This also has applications for the legal system.
For example: if it is suddenly easy to file thousands of court cases, this will
overwhelm the courts. Solutions for this are hard. We could increase the cost
of filing a court case, but that becomes a burden on the poor. The only
solution might be another AI working for the court, dealing with the deluge of
AI-filed cases -- which doesn’t sound like a great idea.

8. AI as law enforcer. Automated systems already act as law enforcement in some
areas: speed trap cameras are an obvious example. AI can take this kind of
thing much further, automatically identifying people who cheat on tax returns
or when applying for government services. This has the obvious problem of false
positives, which could be hard to contest if the courts believe that “the
computer is always right.” Separately, future laws might be so complicated
that only AIs are able to decide whether or not they are being broken. And,
like breathalyzers, defendants might not be allowed to know how they work.

9. AI as propagandist. AIs can produce and distribute propaganda faster than
humans can. This is an obvious risk, but we don’t know how effective any of
it will be. It makes disinformation campaigns easier, which means that more
people will take advantage of them. But people will be more inured against the
risks. More importantly, AI’s ability to summarize and understand text can
enable much more effective censorship.

10. AI as political proxy. Finally, we can imagine an AI voting on behalf of
individuals. A voter could feed an AI their social, economic, and political
preferences; or it can infer them by listening to them talk and watching their
actions. And then it could be empowered to vote on their behalf, either for
others who would represent them, or directly on ballot initiatives. On the one
hand, this would greatly increase voter participation. On the other hand, it
would further disengage people from the act of understanding politics and
engaging in democracy.
When I teach AI policy at HKS, I stress the importance of separating the
specific AI chatbot technologies of November 2023 from AI’s technological
possibilities in general. Some of the items on my list will soon be possible;
others will remain fiction for many years. Similarly, our acceptance of these
technologies will change. Items on that list that we would never accept today
might feel routine in a few years. A judgeless courtroom seems crazy today, but
so did a driverless car a few years ago. Don’t underestimate our ability to
normalize new technologies. My bet is that we’re in for a wild ride.

This essay previously appeared on the Harvard Kennedy School Ash Center’s
website.

** *** ***** ******* *********** *************

How .tk Became a TLD for Scammers

[2023.11.14] Sad story of Tokelau, and how its top-level domain “became the
unwitting host to the dark underworld by providing a never-ending supply of
domain names that could be weaponized against internet users. Scammers began
using .tk websites to do everything from harvesting passwords and payment
information to displaying pop-up ads or delivering malware.”

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2023.11.14] This is a current list of where and when I am scheduled to speak:

I’m speaking at the AI Summit New York on December 6, 2023.
The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A Hacker’s Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2023 by Bruce Schneier.

** *** ***** ******* *********** *************