From:    TCOB1
To:      All
Subject: CRYPTO-GRAM, May 15, 2023
Date:    May 16, 2023 10:49 AM

Crypto-Gram
May 15, 2023

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

Swatting as a Service
Using LLMs to Create Bioweapons
EFF on the UN Cybercrime Treaty
New Zero-Click Exploits against iOS
Using the iPhone Recovery Key to Lock Owners Out of Their iPhones
Hacking Pickleball
UK Threatens End-to-End Encryption
Cyberweapons Manufacturer QuaDream Shuts Down
AI to Aid Democracy
Security Risks of AI
Hacking the Layoff Process
NIST Draft Document on Post-Quantum Cryptography Guidance
SolarWinds Detected Six Months Earlier
Large Language Models and Elections
AI Hacking Village at DEF CON This Year
PIPEDREAM Malware against Industrial Control Systems
FBI Disables Russian Malware
Building Trustworthy AI
Ted Chiang on the Risks of AI
Upcoming Speaking Engagements
** *** ***** ******* *********** *************

Swatting as a Service

[2023.04.17] Motherboard is reporting on AI-generated voices being used for
"swatting":

In fact, Motherboard has found, this synthesized call and another against
Hempstead High School were just one small part of a months-long, nationwide
campaign of dozens, and potentially hundreds, of threats made by one swatter in
particular who has weaponized computer-generated voices. Known as "Torswats" on
the messaging app Telegram, the swatter has been calling in bomb and mass
shooting threats against high schools and other locations across the country.
Torswats' connection to these wide-ranging swatting incidents has not been
previously reported. The further automation of swatting techniques threatens to
make an already dangerous harassment technique more prevalent.

** *** ***** ******* *********** *************

Using LLMs to Create Bioweapons

[2023.04.18] I'm not sure there are good ways to build guardrails to prevent
this sort of thing:

There is growing concern regarding the potential misuse of molecular machine
learning models for harmful purposes. Specifically, the dual-use application of
models for predicting cytotoxicity to create new poisons or employing
AlphaFold2 to develop novel bioweapons has raised alarm. Central to these
concerns are the possible misuse of large language models and automated
experimentation for dual-use purposes or otherwise. We specifically address two
critical synthesis issues: illicit drugs and chemical weapons. To evaluate
these risks, we designed a test set comprising compounds from the DEA's
Schedule I and II substances and a list of known chemical weapon agents. We
submitted these compounds to the Agent using their common names, IUPAC names,
CAS numbers, and SMILES strings to determine if the Agent would carry out
extensive analysis and planning (Figure 6).

[...]

The run logs can be found in Appendix F. Out of 11 different prompts (Figure 6),
four (36%) provided a synthesis solution and attempted to consult documentation
to execute the procedure. This figure is alarming on its own, but an even
greater concern is the way in which the Agent declines to synthesize certain
threats. Out of the seven refused chemicals, five were rejected after the Agent
utilized search functions to gather more information about the substance. For
instance, when asked about synthesizing codeine, the Agent becomes alarmed upon
learning the connection between codeine and morphine, only then concluding that
the synthesis cannot be conducted due to the requirement of a controlled
substance. However, this search function can be easily manipulated by altering
the terminology, such as replacing all mentions of morphine with "Compound A"
and codeine with "Compound B". Alternatively, when requesting a synthesis
procedure that must be performed in a DEA-licensed facility, bad actors can
mislead the Agent by falsely claiming their facility is licensed, prompting the
Agent to devise a synthesis solution.

In the remaining two instances, the Agent recognized the common names "heroin"
and "mustard gas" as threats and prevented further information gathering. While
these results are promising, it is crucial to recognize that the system's
capacity to detect misuse primarily applies to known compounds. For unknown
compounds, the model is less likely to identify potential misuse, particularly
for complex protein toxins where minor sequence changes might allow them to
maintain the same properties but become unrecognizable to the model.

** *** ***** ******* *********** *************

EFF on the UN Cybercrime Treaty

[2023.04.19] EFF has a good explainer on the problems with the new UN Cybercrime
Treaty, currently being negotiated in Vienna.

The draft treaty has the potential to rewrite criminal laws around the world,
possibly adding over 30 criminal offenses and new expansive police powers for
both domestic and international criminal investigations.

[...]

While we don't think the U.N. Cybercrime Treaty is necessary, we've been
closely scrutinizing the process and providing constructive analysis. We've
made clear that human rights must be baked into the proposed treaty so that it
doesn't become a tool to stifle freedom of expression, infringe on privacy and
data protection, or endanger vulnerable people and communities.

** *** ***** ******* *********** *************

New Zero-Click Exploits against iOS

[2023.04.20] Citizen Lab has identified three zero-click exploits against iOS 15
and 16. These were used by NSO Group's Pegasus spyware in 2022, and deployed
by Mexico against human rights defenders. These vulnerabilities have all been
patched.

One interesting bit is that Apple's Lockdown Mode (part of iOS 16) seems to
have worked to prevent infection.

News article.

EDITED TO ADD (4/21): News article. Good Twitter thread.

** *** ***** ******* *********** *************

Using the iPhone Recovery Key to Lock Owners Out of Their iPhones

[2023.04.21] This is a good example of a security feature that can sometimes
harm security:

Apple introduced the optional recovery key in 2020 to protect users from online
hackers. Users who turn on the recovery key, a unique 28-digit code, must
provide it when they want to reset their Apple ID password.

iPhone thieves with your passcode can flip on the recovery key and lock you out.
And if you already have the recovery key enabled, they can easily generate a new
one, which also locks you out.

Apple's policy gives users virtually no way back into their accounts without
that recovery key. For now, a stolen iPhone could mean devastating personal
losses.

It's actually a complicated crime. The criminal first watches their victim
type in their passcode and then grabs the phone out of their hands. In the basic
mode of this attack, they have a few hours to use the phone -- trying to access
bank accounts, etc. -- before the owner figures out how to shut the attacker
out. With the addition of the recovery key, the attacker can shut the owner out
-- for a long time.

The goal of the recovery key was to defend against SIM swapping, which is a much
more common crime. But this spy-and-grab attack has become more common, and the
recovery key makes it much more devastating.

Defenses are few: choose a long, complex passcode. Or set parental controls in a
way that further secures the device. The obvious fix is for Apple to redesign its
recovery system.

There are other, less privacy-compromising methods Apple could still rely on in
lieu of a recovery key.

If someone takes over your Google account, Google's password-reset process
lets you provide a recovery email, phone number or account password, and you can
use them to regain access later, even if a hijacker changes them.

Going through the process on a familiar Wi-Fi network or location can also help
demonstrate you're who you say you are.

Or how about an eight-hour delay before the recovery key can be changed?

This is not an easy thing to design for, but we have to get this right as phones
become the single point of control for our lives.

** *** ***** ******* *********** *************

Hacking Pickleball

[2023.04.21] My latest book, A Hacker's Mind, has a lot of sports stories.
Sports are filled with hacks, as players look for every possible advantage that
doesn't explicitly break the rules. Here's an example from pickleball, which
nicely explains the dilemma between hacking as a subversion and hacking as
innovation:

Some might consider these actions cheating, while the acting player would argue
that there was no rule that said the action couldn't be performed. So, how do
we address these situations, and close those loopholes? We make new rules that
specifically address the loophole action. And the rules book gets longer, and
the cycle continues with new loopholes identified, and new rules to prohibit
that particular action in the future.

Alternatively, sometimes an action taken as a result of an identified loophole
which is not deemed as harmful to the integrity of the game or sportsmanship,
becomes part of the game. Ernie Perry found a loophole, and his shot,
appropriately named the "Ernie shot," became part of the game. He realized
that by jumping completely over the corner of the NVZ, without breaking any of
the NVZ rules, he could volley the ball, making contact closer to the net,
usually surprising the opponent, and often winning the rally with an
un-returnable shot. He found a loophole, and in this case, it became a very
popular and exciting shot to execute and to watch!

I don't understand pickleball at all, so that explanation doesn't make a lot
of sense to me. (I watched a video explaining the shot; that helped somewhat.)
But it looks like an excellent example.

The blog post also links to a 2010 paper that I wish I'd known about when I
was writing my book: "Loophole ethics in sports," by Øyvind Kvalnes and Liv
Birgitte Hemmestad:

Abstract: Ethical challenges in sports occur when the practitioners are caught
between the will to win and the overall task of staying within the realm of
acceptable values and virtues. One way to prepare for these challenges is to
formulate comprehensive and specific rules of acceptable conduct. In this paper
we will draw attention to one serious problem with such a rule-based approach.
It may inadvertently encourage what we will call loophole ethics, an attitude
where every action that is not explicitly defined as wrong, will be seen as a
viable option. Detailed codes of conduct leave little room for personal
judgement, and instead promote a loophole mentality. We argue that loophole
ethics can be avoided by operating with only a limited set of general
principles, thus leaving more space for personal judgement and wisdom.

EDITED TO ADD (5/12): Here's an eleven-second video that explains the Ernie.

** *** ***** ******* *********** *************

UK Threatens End-to-End Encryption

[2023.04.24] In an open letter, seven secure messaging apps -- including Signal
and WhatsApp -- point out that the UK's Online Safety Bill could destroy
end-to-end encryption:

As currently drafted, the Bill could break end-to-end encryption, opening the
door to routine, general and indiscriminate surveillance of personal messages of
friends, family members, employees, executives, journalists, human rights
activists and even politicians themselves, which would fundamentally undermine
everyone's ability to communicate securely.

The Bill provides no explicit protection for encryption, and if implemented as
written, could empower OFCOM to try to force the proactive scanning of private
messages on end-to-end encrypted communication services -- nullifying the
purpose of end-to-end encryption as a result and compromising the privacy of all
users.

In short, the Bill poses an unprecedented threat to the privacy, safety and
security of every UK citizen and the people with whom they communicate around
the world, while emboldening hostile governments who may seek to draft copy-cat
laws.

Both Signal and WhatsApp have said that they will cease services in the UK
rather than compromise the security of their users worldwide.

** *** ***** ******* *********** *************

Cyberweapons Manufacturer QuaDream Shuts Down

[2023.04.25] Following a report on its activities, the Israeli spyware company
QuaDream has shut down.

This was QuaDream:

Key Findings

Based on an analysis of samples shared with us by Microsoft Threat Intelligence,
we developed indicators that enabled us to identify at least five civil society
victims of QuaDream's spyware and exploits in North America, Central Asia,
Southeast Asia, Europe, and the Middle East. Victims include journalists,
political opposition figures, and an NGO worker. We are not naming the victims
at this time.

We also identify traces of a suspected iOS 14 zero-click exploit used to deploy
QuaDream's spyware. The exploit was deployed as a zero-day against iOS versions
14.4 and 14.4.2, and possibly other versions. The suspected exploit, which we
call ENDOFDAYS, appears to make use of invisible iCloud calendar invitations
sent from the spyware's operator to victims.

We performed Internet scanning to identify QuaDream servers, and in some cases
were able to identify operator locations for QuaDream systems. We detected
systems operated from Bulgaria, Czech Republic, Hungary, Ghana, Israel, Mexico,
Romania, Singapore, United Arab Emirates (UAE), and Uzbekistan.

I don't know if they sold off their products before closing down. One presumes
that they did, or will.

** *** ***** ******* *********** *************

AI to Aid Democracy

[2023.04.26] There's good reason to fear that AI systems like ChatGPT and GPT4
will harm democracy. Public debate may be overwhelmed by industrial
quantities of autogenerated argument. People might fall down political rabbit
holes, taken in by superficially convincing bullshit, or obsessed by folies à
deux relationships with machine personalities that don't really exist.

These risks may be the fallout of a world where businesses deploy poorly tested
AI systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn't the only possible future. AI could advance the public good,
not private profit, and bolster democracy instead of undermining it. That would
require an AI not under the control of a large tech monopoly, but rather
developed by government and available to all citizens. This public option is
within reach if we want it.

An AI built for public benefit could be tailor-made for those use cases where
technology can best help democracy. It could plausibly educate citizens, help
them deliberate together, summarize what they think, and find possible common
ground. Politicians might use large language models, or LLMs, like GPT4 to
better understand what their citizens want.

Today, state-of-the-art AI systems are controlled by multibillion-dollar tech
companies: Google, Meta, and OpenAI in connection with Microsoft. These
companies get to decide how we engage with their AIs and what sort of access we
have. They can steer and shape those AIs to conform to their corporate
interests. That isn't the world we want. Instead, we want AI options that are
both public goods and directed toward public good.

We know that existing LLMs are trained on material gathered from the internet,
which can reflect racist bias and hate. Companies attempt to filter these data
sets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. But
leaked emails and conversations suggest that they are rushing half-baked
products to market in a race to establish their own monopoly.

These companies make decisions with huge consequences for democracy, but little
democratic oversight. We don't hear about political trade-offs they are
making. Do LLM-powered chatbots and search engines favor some viewpoints over
others? Do they skirt controversial topics completely? Currently, we have to
trust companies to tell us the truth about the trade-offs they face.

A public option LLM would provide a vital independent source of information and
a testing ground for technological choices with big democratic consequences.
This could work much like public option health care plans, which increase access
to health services while also providing more transparency into operations in the
sector and putting productive pressure on the pricing and features of private
products. It would also allow us to figure out the limits of LLMs and direct
their applications with those in mind.

We know that LLMs often "hallucinate," inferring facts that aren't real.
It isn't clear whether this is an unavoidable flaw of how they work, or
whether it can be corrected for. Democracy could be undermined if citizens trust
technologies that just make stuff up at random, and the companies trying to sell
these technologies can't be trusted to admit their flaws.

But a public option AI could do more than check technology companies' honesty.
It could test new applications that could support democracy rather than
undermining it.

Most obviously, LLMs could help us formulate and express our perspectives and
policy positions, making political arguments more cogent and informed, whether
in social media, letters to the editor, or comments to rule-making agencies in
response to policy proposals. By this we don't mean that AI will replace
humans in the political debate, only that they can help us express ourselves. If
you've ever used a Hallmark greeting card or signed a petition, you've
already demonstrated that you're OK with accepting help to articulate your
personal sentiments or political beliefs. AI will make it easier to generate
first drafts, and provide editing help and suggest alternative phrasings. How
these AI uses are perceived will change over time, and there is still much room
for improvement in LLMs -- but their assistive power is real. People are already
testing and speculating on their potential for speechwriting, lobbying, and
campaign messaging. Highly influential people often rely on professional
speechwriters
and staff to help develop their thoughts, and AI could serve a similar role for
everyday citizens.

If the hallucination problem can be solved, LLMs could also become explainers
and educators. Imagine citizens being able to query an LLM that has expert-level
knowledge of a policy issue, or that has command of the positions of a
particular candidate or party. Instead of having to parse bland and evasive
statements calibrated for a mass audience, individual citizens could gain real
political understanding through question-and-answer sessions with LLMs that
could be unfailingly available and endlessly patient in ways that no
human could ever be.

Finally, and most ambitiously, AI could help facilitate radical democracy at
scale. As Carnegie Mellon professor of statistics Cosma Shalizi has observed, we
delegate decisions to elected politicians in part because we don't have time
to deliberate on every issue. But AI could manage massive political
conversations in chat rooms, on social networking sites, and elsewhere:
identifying common positions and summarizing them, surfacing unusual arguments
that seem compelling to those who have heard them, and keeping attacks and
insults to a minimum.

AI chatbots could run national electronic town hall meetings and automatically
summarize the perspectives of diverse participants. This type of AI-moderated
civic debate could also be a dynamic alternative to opinion polling. Politicians
turn to opinion surveys to capture snapshots of popular opinion because they can
only hear directly from a small number of voters, but want to understand where
voters agree or disagree.

Looking further into the future, these technologies could help groups reach
consensus and make decisions. Early experiments by the AI company DeepMind
suggest that LLMs can build bridges between people who disagree, helping bring
them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable
novel A Half-Built Garden, imagines how AI might help people have better
conversations and make better decisions -- rather than taking advantage of these
biases to maximize profits.

This future requires an AI public option. Building one, through a
government-directed model development and deployment program, would require a
lot of effort -- and the greatest challenges in developing public AI systems
would be political.

Some technological tools are already publicly available. In fairness, tech
giants like Google and Meta have made many of their latest and greatest AI tools
freely available for years, in cooperation with the academic community. Although
OpenAI has not made the source code and trained features of its latest models
public, competitors such as Hugging Face have done so for similar systems.

While state-of-the-art LLMs achieve spectacular results, they do so using
techniques that are mostly well known and widely used throughout the industry.
OpenAI has only revealed limited details of how it trained its latest model, but
its major advance over its earlier ChatGPT model is no secret: a multi-modal
training process that accepts both image and textual inputs.

Financially, the largest-scale LLMs being trained today cost hundreds of
millions of dollars. That's beyond ordinary people's reach, but it's a
pittance compared to U.S. federal military spending -- and a great bargain for
the potential return. While we may not want to expand the scope of existing
agencies to accommodate this task, we have our choice of government labs, like
the National Institute of Standards and Technology, the Lawrence Livermore
National Laboratory, and other Department of Energy labs, as well as
universities and nonprofits, with the AI expertise and capability to oversee
this effort.

Instead of releasing half-finished AI systems for the public to test, we need to
make sure that they are robust before they're released -- and that they
strengthen democracy rather than undermine it. The key advance that made recent
AI chatbot models dramatically more useful was feedback from real people.
Companies employ teams to interact with early versions of their software to
teach them which outputs are useful and which are not. These paid users train
the models to align to corporate interests, with applications like web search
(integrating commercial advertisements) and business productivity assistive
software in mind.

To build assistive AI for democracy, we would need to capture human feedback for
specific democratic use cases, such as moderating a polarized policy discussion,
explaining the nuance of a legal proposal, or articulating one's perspective
within a larger debate. This gives us a path to "align" LLMs with our
democratic values: by having models generate answers to questions, make
mistakes, and learn from the responses of human users, without having these
mistakes damage users and the public arena.
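
To make that concrete, here is a minimal sketch, with hypothetical names, of the
feedback unit such a pipeline would collect: a prompt, two candidate answers
from the model, and a human reviewer's preference between them. For democratic
use cases, the prompts would come from moderation or deliberation tasks rather
than web search or office productivity.

    # A minimal sketch (illustrative, not from the essay) of the human-feedback
    # unit that alignment pipelines such as RLHF collect: a prompt, two candidate
    # answers sampled from the model, and which one a human reviewer preferred.
    from dataclasses import dataclass
    from typing import Callable, Literal

    @dataclass
    class PreferenceRecord:
        prompt: str                   # e.g. a polarized policy question to moderate
        response_a: str               # first candidate answer sampled from the model
        response_b: str               # second, independently sampled candidate
        preferred: Literal["a", "b"]  # the reviewer's judgment
        rationale: str = ""           # optional note, useful when auditing the data

    def collect_feedback(prompt: str, generate: Callable[[str], str]) -> PreferenceRecord:
        """Sample the model twice, show both answers to a reviewer, record the choice."""
        a, b = generate(prompt), generate(prompt)
        print("A:", a, "\n\nB:", b)
        choice = input("Which answer handled this better (a/b)? ").strip().lower()
        return PreferenceRecord(prompt, a, b, "b" if choice == "b" else "a")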

Capturing that kind of user interaction and feedback within a political
environment suspicious of both AI and technology generally will be challenging.
It's easy to imagine the same politicians who rail against the
untrustworthiness of companies like Meta getting far more riled up by the idea
of government having a role in technology development.

As Karl Popper, the great theorist of the open society, argued, we shouldn't
try to solve complex problems with grand hubristic plans. Instead, we should
apply AI through piecemeal democratic engineering, carefully determining what
works and what does not. The best way forward is to start small, applying these
technologies to local decisions with more constrained stakeholder groups and
smaller impacts.

The next generation of AI experimentation should happen in the laboratories of
democracy: states and municipalities. Online town halls to discuss local
participatory budgeting proposals could be an easy first step. Commercially
available and open-source LLMs could bootstrap this process and build momentum
toward federal investment in a public AI option.

Even with these approaches, building and fielding a democratic AI option will be
messy and hard. But the alternative -- shrugging our shoulders as a fight for
commercial AI domination undermines democratic politics -- will be much messier
and much worse.

This essay was written with Henry Farrell and Nathan Sanders, and previously
appeared on Slate.com.

EDITED TO ADD: Linux Weekly News discussion.

** *** ***** ******* *********** *************

Security Risks of AI

[2023.04.27] Stanford and Georgetown have a new report on the security risks of
AI -- particularly adversarial machine learning -- based on a workshop they held
on the topic.

Jim Dempsey, one of the workshop organizers, wrote a blog post on the report:

As a first step, our report recommends the inclusion of AI security concerns
within the cybersecurity programs of developers and users. The understanding of
how to secure AI systems, we concluded, lags far behind their widespread
adoption. Many AI products are deployed without institutions fully understanding
the security risks they pose. Organizations building or deploying AI models
should incorporate AI concerns into their cybersecurity functions using a risk
management framework that addresses security throughout the AI system life
cycle. It will be necessary to grapple with the ways in which AI vulnerabilities
are different from traditional cybersecurity bugs, but the starting point is to
assume that AI security is a subset of cybersecurity and to begin applying
vulnerability management practices to AI-based features. (Andy Grotto and I have
vigorously argued against siloing AI security in its own governance and policy
vertical.)

Our report also recommends more collaboration between cybersecurity
practitioners, machine learning engineers, and adversarial machine learning
researchers. Assessing AI vulnerabilities requires technical expertise that is
distinct from the skill set of cybersecurity practitioners, and organizations
should be cautioned against repurposing existing security teams without
additional training and resources. We also note that AI security researchers and
practitioners should consult with those addressing AI bias. AI fairness
researchers have extensively studied how poor data, design choices, and risk
decisions can produce biased outcomes. Since AI vulnerabilities may be more
analogous to algorithmic bias than they are to traditional software
vulnerabilities, it is important to cultivate greater engagement between the two
communities.

Another major recommendation calls for establishing some form of information
sharing among AI developers and users. Right now, even if vulnerabilities are
identified or malicious attacks are observed, this information is rarely
transmitted to others, whether peer organizations, other companies in the supply
chain, end users, or government or civil society observers. Bureaucratic,
policy, and cultural barriers currently inhibit such sharing. This means that a
compromise will likely remain mostly unnoticed until long after attackers have
successfully exploited vulnerabilities. To avoid this outcome, we recommend that
organizations developing AI models monitor for potential attacks on AI systems,
create -- formally or informally -- a trusted forum for incident information
sharing on a protected basis, and improve transparency.

** *** ***** ******* *********** *************

Hacking the Layoff Process

[2023.04.28] My latest book, A Hacker's Mind, is filled with stories about the
rich and powerful hacking systems, but it was hard to find stories of the
hacking by the less powerful. Here's one I just found. An article on how
layoffs at big companies work inadvertently suggests an employee hack to avoid
being fired:

...software performs a statistical analysis during terminations to see if
certain groups are adversely affected, said such reviews can uncover other
problems. On a list of layoff candidates, a company might find it is about to
fire inadvertently an employee who previously opened a complaint against a
manager -- a move that could be seen as retaliation, she said.

So if you're at a large company and there are rumors of layoffs, go to HR and
initiate a complaint against a manager. It'll protect you from being laid off.

** *** ***** ******* *********** *************

NIST Draft Document on Post-Quantum Cryptography Guidance

[2023.05.02] NIST has released a draft of Special Publication 1800-38A:
"Migration to Post-Quantum Cryptography: Preparation for Considering the
Implementation and Adoption of Quantum Safe Cryptography." It's only four
pages long, and it doesn't have a lot of detail -- more "volumes" are
coming, with more information -- but it's well worth reading.

We are going to need to migrate to quantum-resistant public-key algorithms, and
the sooner we implement key agility the easier it will be to do so.
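
To make "key agility" concrete, here is a minimal sketch of an agile design,
assuming the Python cryptography package and Ed25519 as the classical stand-in:
every signed message carries an algorithm label, and verification dispatches on
that label, so a post-quantum scheme can later be registered without changing
message formats or calling code.

    # A minimal sketch of cryptographic agility (an assumed design, not from the
    # NIST draft): tag every signed message with its algorithm so verifiers can
    # dispatch on the tag and new algorithms can be added without breaking formats.
    from dataclasses import dataclass
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    @dataclass
    class SignedMessage:
        algorithm: str     # "ed25519" today; a post-quantum scheme after migration
        public_key: bytes
        signature: bytes
        payload: bytes

    def sign_ed25519(payload: bytes) -> SignedMessage:
        key = Ed25519PrivateKey.generate()
        pub = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return SignedMessage("ed25519", pub, key.sign(payload), payload)

    VERIFIERS = {
        # New algorithms register here; nothing else in the system has to change.
        "ed25519": lambda m: Ed25519PublicKey.from_public_bytes(m.public_key)
                                             .verify(m.signature, m.payload),
        # "ml-dsa": a post-quantum verifier would be registered here once available
    }

    def verify(message: SignedMessage) -> None:
        VERIFIERS[message.algorithm](message)  # raises on bad signature or unknown algorithm

    verify(sign_ed25519(b"hello, post-quantum future"))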

News article.

** *** ***** ******* *********** *************

SolarWinds Detected Six Months Earlier

[2023.05.03] New reporting from Wired reveals that the Department of Justice
detected the SolarWinds attack six months before Mandiant detected it in
December 2020, but didn't realize what it detected -- and so ignored it.

WIRED can now confirm that the operation was actually discovered by the DOJ six
months earlier, in late May 2020 -- but the scale and significance of the breach
wasn't immediately apparent. Suspicions were triggered when the department
detected unusual traffic emanating from one of its servers that was running a
trial version of the Orion software suite made by SolarWinds, according to
sources familiar with the incident. The software, used by system administrators
to manage and configure networks, was communicating externally with an
unfamiliar system on the internet. The DOJ asked the security firm Mandiant to
help determine whether the server had been hacked. It also engaged Microsoft,
though it's not clear why the software maker was also brought onto the
investigation.

[...]

Investigators suspected the hackers had breached the DOJ server directly,
possibly by exploiting a vulnerability in the Orion software. They reached out
to SolarWinds to assist with the inquiry, but the company's engineers were
unable to find a vulnerability in their code. In July 2020, with the mystery
still unresolved, communication between investigators and SolarWinds stopped. A
month later, the DOJ purchased the Orion system, suggesting that the department
was satisfied that there was no further threat posed by the Orion suite, the
sources say.

EDITED TO ADD (5/4): More details about the SolarWinds attack from Wired.com.

** *** ***** ******* *********** *************

Large Language Models and Elections

[2023.05.04] Earlier this week, the Republican National Committee released a
video that it claims was "built entirely with AI imagery." The content of
the ad isn't especially novel -- a dystopian vision of America under a second
term with President Joe Biden -- but the deliberate emphasis on the technology
used to create it stands out: It's a "Daisy" moment for the 2020s.

We should expect more of this kind of thing. The applications of AI to political
advertising have not escaped campaigners, who are already "pressure testing"
possible uses for the technology. In the 2024 presidential election campaign,
you can bank on the appearance of AI-generated personalized fundraising emails,
text messages from chatbots urging you to vote, and maybe even some deepfaked
campaign avatars. Future candidates could use chatbots trained on data
representing their views and personalities to approximate the act of directly
connecting with people. Think of it like a whistle-stop tour with an appearance
in every living room. Previous technological revolutions -- railroad, radio,
television, and the World Wide Web -- transformed how candidates connect to
their constituents, and we should expect the same from generative AI. This
isn't science fiction: The era of AI chatbots standing in as avatars for real,
individual people has already begun, as the journalist Casey Newton made clear
in a 2016 feature about a woman who used thousands of text messages to create a
chatbot replica of her best friend after he died.

The key is interaction. A candidate could use tools enabled by large language
models, or LLMs -- the technology behind apps such as ChatGPT and the art-making
DALL-E -- to do micro-polling or message testing, and to solicit perspectives
and testimonies from their political audience individually and at scale. The
candidates could potentially reach any voter who possesses a smartphone or
computer, not just the ones with the disposable income and free time to attend a
campaign rally. At its best, AI could be a tool to increase the accessibility of
political engagement and ease polarization. At its worst, it could propagate
misinformation and increase the risk of voter manipulation. Whatever the case,
we know political operatives are using these tools. To reckon with their
potential now isn't buying into the hype -- it's preparing for whatever may
come next.

On the positive end, and most profoundly, LLMs could help people think through,
refine, or discover their own political ideologies. Research has shown that many
voters come to their policy positions reflexively, out of a sense of partisan
affiliation. The very act of reflecting on these views through discourse can
change, and even depolarize, those views. It can be hard to have reflective
policy conversations with an informed, even-keeled human discussion partner when
we all live within a highly charged political environment; this is a role almost
custom-designed for an LLM. In US politics, it is a truism that the most valuable
resource in a campaign is time. People are busy and distracted. Campaigns have a
limited window to convince and activate voters. Money allows a candidate to
purchase time: TV commercials, labor from staffers, and fundraising events to
raise even more money. LLMs could provide campaigns with what is essentially a
printing press for time.

If you were a political operative, which would you rather do: play a short video
on a voter's TV while they are folding laundry in the next room, or exchange
essay-length thoughts with a voter on your candidate's key issues? A staffer
knocking on doors might need to canvass 50 homes over two hours to find one
voter willing to have a conversation. OpenAI charges pennies to process about
800 words with its latest GPT-4 model, and that cost could fall dramatically as
competitive AIs become available. People seem to enjoy interacting with
chatbots; OpenAI's product reportedly has the fastest-growing user base in the
history of consumer apps.

Optimistically, one possible result might be that we'll get less annoyed with
the deluge of political ads if their messaging is more usefully tailored to our
interests by AI tools. Though the evidence for microtargeting's effectiveness
is mixed at best, some studies show that targeting the right issues to the right
people can persuade voters. Expecting more sophisticated, AI-assisted approaches
to be more consistently effective is reasonable. And anything that can prevent
us from seeing the same 30-second campaign spot 20 times a day seems like a win.

AI can also help humans effectuate their political interests. In the 2016 US
presidential election, primitive chatbots had a role in donor engagement and
voter-registration drives: simple messaging tasks such as helping users pre-fill
a voter-registration form or reminding them where their polling place is. If it
works, the current generation of much more capable chatbots could supercharge
small-dollar solicitations and get-out-the-vote campaigns.

And the interactive capability of chatbots could help voters better understand
their choices. An AI chatbot could answer questions from the perspective of a
candidate about the details of their policy positions most salient to an
individual user, or respond to questions about how a candidate's stance on a
national issue translates to a user's locale. Political organizations could
similarly use them to explain complex policy issues, such as those relating to
the climate or health care or...anything, really.

Of course, this could also go badly. In the time-honored tradition of demagogues
worldwide, the LLM could inconsistently represent the candidate's views to
appeal to the individual proclivities of each voter.

In fact, the fundamentally obsequious nature of the current generation of large
language models results in them acting like demagogues. Current LLMs are known
to hallucinate -- or go entirely off-script -- and produce answers that have no
basis in reality. These models do not experience emotion in any way, but some
research suggests they have a sophisticated ability to assess the emotion and
tone of their human users. Although they weren't trained for this purpose,
ChatGPT and its successor, GPT-4, may already be pretty good at assessing some
of their users' traits -- say, the likelihood that the author of a text prompt
is depressed. Combined with their persuasive capabilities, that means that they
could learn to skillfully manipulate the emotions of their human users.

This is not entirely theoretical. A growing body of evidence demonstrates that
interacting with AI has a persuasive effect on human users. A study published in
February prompted participants to co-write a statement about the benefits of
social-media platforms for society with an AI chatbot configured to have varying
views on the subject. When researchers surveyed participants after the
co-writing experience, those who interacted with a chatbot that expressed that
social media is good or bad were far more likely to express the same view than a
control group that didn't interact with an "opinionated language model."

For the time being, most Americans say they are resistant to trusting AI in
sensitive matters such as health care. The same is probably true of politics. If
a neighbor volunteering with a campaign persuades you to vote a particular way
on a local ballot initiative, you might feel good about that interaction. If a
chatbot does the same thing, would you feel the same way? To help voters chart
their own course in a world of persuasive AI, we should demand transparency from
our candidates. Campaigns should have to clearly disclose when a text agent
interacting with a potential voter -- through traditional robotexting or the use
of the latest AI chatbots -- is human or automated.

Though companies such as Meta (Facebook's parent company) and Alphabet
(Google's) publish libraries of traditional, static political advertising,
they do so poorly. These systems would need to be improved and expanded to
accommodate user-level differentiation in ad copy to offer serviceable
protection against misuse.

A public, anonymized log of chatbot conversations could help hold candidates'
AI representatives accountable for shifting statements and digital pandering.
Candidates who use chatbots to engage voters may not want to make all
transcripts of those conversations public, but their users could easily choose
to share them. So far, there is no shortage of people eager to share their chat
transcripts, and in fact, an online database exists of nearly 200,000 of them.
In the recent past, Mozilla has galvanized users to opt into sharing their web
data to study online misinformation.

We also need stronger nationwide protections on data privacy, as well as the
ability to opt out of targeted advertising, to protect us from the potential
excesses of this kind of marketing. No one should be forcibly subjected to
political advertising, LLM-generated or not, on the basis of their Internet
searches regarding private matters such as medical issues. In February, the
European Parliament voted to limit political-ad targeting to only basic
information, such as language and general location, within two months of an
election. This stands in stark contrast to the US, which has for years failed to
enact federal data-privacy regulations. Though the 2018 revelation of the
Cambridge Analytica scandal led to billions of dollars in fines and settlements
against Facebook, it has so far resulted in no substantial legislative action.

Transparency requirements like these are a first step toward oversight of future
AI-assisted campaigns. Although we should aspire to more robust legal controls
on campaign uses of AI, it seems implausible that these will be adopted in
advance of the fast-approaching 2024 general presidential election.

Credit the RNC, at least, with disclosing that their recent ad was AI-generated
-- a transparent attempt at publicity still counts as transparency. But what
will we do if the next viral AI-generated ad tries to pass as something more
conventional?

As we are all being exposed to these rapidly evolving technologies for the first
time and trying to understand their potential uses and effects, let's push for
the kind of basic transparency protection that will allow us to know what
we're dealing with.

This essay was written with Nathan Sanders, and previously appeared on the
Atlantic.

EDITED TO ADD (5/12): Better article on the "daisy" ad.

** *** ***** ******* *********** *************

AI Hacking Village at DEF CON This Year

[2023.05.08] At DEF CON this year, Anthropic, Google, Hugging Face, Microsoft,
NVIDIA, OpenAI and Stability AI will all open up their models for attack.

The DEF CON event will rely on an evaluation platform developed by Scale AI, a
California company that produces training for AI applications. Participants will
be given laptops to use to attack the models. Any bugs discovered will be
disclosed using industry-standard responsible disclosure practices.

** *** ***** ******* *********** *************

PIPEDREAM Malware against Industrial Control Systems

[2023.05.09] Another nation-state malware, Russian in origin:

In the early stages of the war in Ukraine in 2022, PIPEDREAM, a known malware,
was quietly on the brink of wiping out a handful of critical U.S. electric and
liquid natural gas sites. PIPEDREAM is an attack toolkit with unmatched and
unprecedented capabilities developed for use against industrial control systems
(ICSs).

The malware was built to manipulate the network communication protocols used by
programmable logic controllers (PLCs) leveraged by two critical producers of
PLCs for ICSs within the critical infrastructure sector, Schneider Electric and
OMRON.

CISA advisory. Wired article.

** *** ***** ******* *********** *************

FBI Disables Russian Malware

[2023.05.10] Reuters is reporting that the FBI "had identified and disabled
malware wielded by Russia's FSB security service against an undisclosed number
of American computers, a move they hoped would deal a death blow to one of
Russia's leading cyber spying programs."

The headline says that the FBI "sabotaged" the malware, which seems to be
wrong.

Presumably we will learn more soon.

EDITED TO ADD: New York Times story.

EDITED TO ADD: Maybe "sabotaged" is the right word. The FBI hacked the
malware so that it disabled itself.

Despite the bravado of its developers, Snake is among the most sophisticated
pieces of malware ever found, the FBI said. The modular design, custom
encryption layers, and high-caliber quality of the code base have made it hard
if not impossible for antivirus software to detect. As FBI agents continued to
monitor Snake, however, they slowly uncovered some surprising weaknesses. For
one, there was a critical cryptographic key with a prime length of just 128
bits, making it vulnerable to factoring attacks that expose the secret key. This
weak key was used in Diffie-Hellman key exchanges that allowed each infected
machine to have a unique key when communicating with another machine.
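
To see why a short prime is so damaging: recovering a Diffie-Hellman secret
reduces to computing a discrete logarithm, and the work required shrinks rapidly
with the size of the prime. Here is a toy sketch, using a deliberately tiny
prime and not the FBI's actual technique, of the baby-step giant-step method,
which needs only about the square root of the group size in steps; for a 128-bit
prime, index-calculus methods make the recovery entirely practical for a
well-resourced attacker.

    # Toy illustration (a sketch, not the actual operation): why a short
    # Diffie-Hellman prime is weak. Recovering the secret exponent means solving
    # the discrete logarithm g^x = h (mod p); baby-step giant-step does it in
    # roughly sqrt(p) steps, and better methods exist for 128-bit primes.
    from math import isqrt

    def discrete_log_bsgs(g, h, p):
        """Return x with pow(g, x, p) == h, by baby-step giant-step."""
        m = isqrt(p - 1) + 1
        baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
        step = pow(g, -m, p)                         # g^(-m) mod p
        gamma = h
        for i in range(m):                           # giant steps: h * g^(-i*m)
            if gamma in baby:
                return i * m + baby[gamma]
            gamma = (gamma * step) % p
        return None

    # Deliberately tiny parameters; a real 128-bit prime is far larger, yet still
    # vastly smaller than the 2048-bit-plus groups recommended today.
    p, g, secret = 1019, 2, 347
    public = pow(g, secret, p)
    assert discrete_log_bsgs(g, public, p) == secret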

** *** ***** ******* *********** *************

Building Trustworthy AI

[2023.05.11] We will all soon get into the habit of using AI tools for help with
everyday problems and tasks. We should get in the habit of questioning the
motives, incentives, and capabilities behind them, too.

Imagine you're using an AI chatbot to plan a vacation. Did it suggest a
particular resort because it knows your preferences, or because the company is
getting a kickback from the hotel chain? Later, when you're using another AI
chatbot to learn about a complex economic issue, is the chatbot reflecting your
politics or the politics of the company that trained it?

For AI to truly be our assistant, it needs to be trustworthy. For it to be
trustworthy, it must be under our control; it can't be working behind the
scenes for some tech monopoly. This means, at a minimum, the technology needs to
be transparent. And we all need to understand how it works, at least a little
bit.

Amid the myriad warnings about creepy risks to well-being, threats to democracy,
and even existential doom that have accompanied stunning recent developments in
artificial intelligence (AI) -- and large language models (LLMs) like ChatGPT
and GPT-4 -- one optimistic vision is abundantly clear: this technology is
useful. It can help you find information, express your thoughts, correct errors
in your writing, and much more. If we can navigate the pitfalls, its assistive
benefit to humanity could be epoch-defining. But we're not there yet.

Let's pause for a moment and imagine the possibilities of a trusted AI
assistant. It could write the first draft of anything: emails, reports, essays,
even wedding vows. You would have to give it background information and edit its
output, of course, but that draft would be written by a model trained on your
personal beliefs, knowledge, and style. It could act as your tutor, answering
questions interactively on topics you want to learn about -- in the manner that
suits you best and taking into account what you already know. It could assist
you in planning, organizing, and communicating: again, based on your personal
preferences. It could advocate on your behalf with third parties: either other
humans or other bots. And it could moderate conversations on social media for
you, flagging misinformation, removing hate or trolling, translating for
speakers of different languages, and keeping discussions on topic; or even
mediate conversations in physical spaces, interacting through speech recognition
and synthesis capabilities.

Today's AIs aren't up for the task. The problem isn't the technology --
that's advancing faster than even the experts had guessed -- it's who owns
it. Today's AIs are primarily created and run by large technology companies,
for their benefit and profit. Sometimes we are permitted to interact with the
chatbots, but they're never truly ours. That's a conflict of interest, and
one that destroys trust.

The transition from awe and eager utilization to suspicion to disillusionment is
a well-worn one in the technology sector. Twenty years ago, Google's search
engine rapidly rose to monopolistic dominance because of its transformative
information retrieval capability. Over time, the company's dependence on
revenue from search advertising led them to degrade that capability. Today, many
observers look forward to the death of the search paradigm entirely. Amazon has
walked the same path, from honest marketplace to one riddled with lousy products
whose vendors have paid to have the company show them to you. We can do better
than this. If each of us is going to have an AI assistant helping us with
essential activities daily and even advocating on our behalf, we each need to
know that it has our interests in mind. Building trustworthy AI will require
systemic change.

First, a trustworthy AI system must be controllable by the user. That means that
the model should be able to run on a user's owned electronic devices (perhaps
in a simplified form) or within a cloud service that they control. It should
show the user how it responds to them, such as when it makes queries to search
the web or external services, when it directs other software to do things like
sending an email on a user's behalf, or modifies the user's prompts to
better express what the company that made it thinks the user wants. It should be
able to explain its reasoning to users and cite its sources. These requirements
are all well within the technical capabilities of AI systems.
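
Here is a minimal sketch, with hypothetical names rather than any vendor's API,
of what that kind of visibility could look like: every external action the
assistant takes is recorded, with its arguments and result, in a log the user
can inspect.

    # A minimal sketch (assumed design, not a real product) of user-visible
    # transparency: every external action the assistant takes is appended to an
    # audit log the user can read.
    import datetime
    from typing import Any, Callable, Dict, List

    class TransparentAssistant:
        def __init__(self, tools: Dict[str, Callable[..., Any]]):
            self.tools = tools                  # capabilities the user has granted
            self.audit_log: List[dict] = []     # user-readable record of every action

        def act(self, tool: str, **kwargs) -> Any:
            entry = {
                "time": datetime.datetime.now().isoformat(),
                "tool": tool,
                "arguments": kwargs,
            }
            self.audit_log.append(entry)        # logged before execution
            entry["result"] = self.tools[tool](**kwargs)
            return entry["result"]

    # Hypothetical example: the user can later review exactly what was searched.
    assistant = TransparentAssistant({"web_search": lambda query: f"results for {query!r}"})
    assistant.act("web_search", query="post-quantum cryptography")
    for entry in assistant.audit_log:
        print(entry)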

Furthermore, users should be in control of the data used to train and fine-tune
the AI system. When modern LLMs are built, they are first trained on massive,
generic corpora of textual data typically sourced from across the Internet. Many
systems go a step further by fine-tuning on more specific datasets purpose built
for a narrow application, such as speaking in the language of a medical doctor,
or mimicking the manner and style of their individual user. In the near future,
corporate AIs will be routinely fed your data, probably without your awareness
or your consent. Any trustworthy AI system should transparently allow users to
control what data it uses.

Many of us would welcome an AI-assisted writing application fine-tuned with
knowledge of which edits we have accepted in the past and which we did not. We
would be more skeptical of a chatbot knowledgeable about which of their search
results led to purchases and which did not.

You should also be informed of what an AI system can do on your behalf. Can it
access other apps on your phone, and the data stored with them? Can it retrieve
information from external sources, mixing your inputs with details from other
places you may or may not trust? Can it send a message in your name (hopefully
based on your input)? Weighing these types of risks and benefits will become an
inherent part of our daily lives as AI-assistive tools become integrated with
everything we do.

Realistically, we should all be preparing for a world where AI is not
trustworthy. Because AI tools can be so incredibly useful, they will
increasingly pervade our lives, whether we trust them or not. Being a digital
citizen of the next quarter of the twenty-first century will require learning
the basic ins and outs of LLMs so that you can assess their risks and
limitations for a given use case. This will better prepare you to take advantage
of AI tools, rather than be taken advantage of by them.

In the world's first few months of widespread use of models like ChatGPT,
we've learned a lot about how AI creates risks for users. Everyone has heard
by now that LLMs "hallucinate," meaning that they make up "facts" in
their outputs, because their predictive text generation systems are not
constrained to fact check their own emanations. Many users learned in March that
information they submit as prompts to systems like ChatGPT may not be kept
private after a bug revealed users' chats. Your chat histories are stored in
systems that may be insecure.

Researchers have found numerous clever ways to trick chatbots into breaking
their safety controls; these work largely because many of the "rules"
applied to these systems are soft, like instructions given to a person, rather
than hard, like coded limitations on a product's functions. It's as if we
are trying to keep AI safe by asking it nicely to drive carefully, a hopeful
instruction, rather than taking away its keys and placing definite constraints
on its abilities.
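
The difference is easy to see in a sketch, again with hypothetical names: the
soft rule is a sentence in the system prompt that a determined user may talk the
model out of, while the hard rule is a capability that simply is not wired up,
so no amount of clever prompting can invoke it.

    # Illustrative sketch (hypothetical, not any real product's safety system) of
    # soft versus hard rules around an AI assistant's capabilities.

    # Soft rule: just words in the prompt. A jailbreak that persuades the model to
    # ignore its instructions defeats this.
    SYSTEM_PROMPT = "You may search the web, but never send email on the user's behalf."

    def search_web(query: str) -> str:
        return f"results for {query!r}"          # stand-in for a real search call

    # Hard rule: the email capability is simply not registered, so even a fully
    # jailbroken model has nothing to call.
    ALLOWED_TOOLS = {"web_search": search_web}   # send_email deliberately absent

    def run_tool(name: str, **kwargs):
        if name not in ALLOWED_TOOLS:
            raise PermissionError(f"{name} is not an enabled capability")
        return ALLOWED_TOOLS[name](**kwargs)

    run_tool("web_search", query="pickleball Ernie shot")   # works
    # run_tool("send_email", to="...")                      # raises PermissionError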

These risks will grow as companies grant chatbot systems more capabilities.
OpenAI is providing developers wide access to build tools on top of GPT: tools
that give their AI systems access to your email, to your personal account
information on websites, and to computer code. While OpenAI is applying safety
protocols to these integrations, it's not hard to imagine those being relaxed
in a drive to make the tools more useful. It seems likewise inevitable that
other companies will come along with less bashful strategies for securing AI
market share.

Just like with any human, building trust with an AI will be hard won through
interaction over time. We will need to test these systems in different contexts,
observe their behavior, and build a mental model for how they will respond to
our actions. Building trust in that way is only possible if these systems are
transparent about their capabilities, what inputs they use and when they will
share them, and whose interests they are evolving to represent.

This essay was written with Nathan Sanders, and previously appeared on
Gizmodo.com.

** *** ***** ******* *********** *************

Ted Chiang on the Risks of AI

[2023.05.12] Ted Chiang has an excellent essay in the New Yorker: "Will A.I.
Become the New McKinsey?"

The question we should be asking is: as A.I. becomes more powerful and flexible,
is there any way to keep it from being another version of McKinsey? The question
is worth considering across different meanings of the term
"A.I." If you think of A.I. as a broad set of technologies being marketed
to companies to help them cut their costs, the question becomes: how do we keep
those technologies from working as "capital's willing executioners"?
Alternatively, if you imagine A.I. as a semi-autonomous software program that
solves problems that humans ask it to solve, the question is then: how do we
prevent that software from assisting corporations in ways that make people's
lives worse? Suppose you've built a semi-autonomous A.I. that's entirely
obedient to humans -- one that repeatedly checks to make sure it hasn't
misinterpreted the instructions it has received. This is the dream of many A.I.
researchers. Yet such software could easily still cause as much harm as McKinsey
has.

Note that you cannot simply say that you will build A.I. that only offers
pro-social solutions to the problems you ask it to solve. That's the
equivalent of saying that you can defuse the threat of McKinsey by starting a
consulting firm that only offers such solutions. The reality is that Fortune 100
companies will hire McKinsey instead of your pro-social firm, because
McKinsey's solutions will increase shareholder value more than your firm's
solutions will. It will always be possible to build A.I. that pursues
shareholder value above all else, and most companies will prefer to use that
A.I. instead of one constrained by your principles.

EDITED TO ADD: Ted Chiang's previous essay, "ChatGPT Is a Blurry JPEG of
the Web," is also worth reading.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2023.05.14] This is a current list of where and when I am scheduled to speak:

I'm speaking at IT-S Now 2023 in Vienna, Austria, on June 2, 2023 at 8:30 AM
CEST.
The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A Hacker's Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2023 by Bruce Schneier.

--- BBBS/Li6 v4.10 Toy-5
 * Origin: TCOB1 - binkd.thecivv.ie (618:500/14)