
Crypto-Gram 
October 15, 2023

by Bruce Schneier 
Fellow and Lecturer, Harvard Kennedy School 
schneier@schneier.com 
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

On Technologies for Automatic Facial Recognition
LLM Summary of My Book Beyond Fear
Using Hacked LastPass Keys to Steal Cryptocurrency
Detecting AI-Generated Text
On the Cybersecurity Jobs Shortage
New Revelations from the Snowden Documents
Signal Will Leave the UK Rather Than Add a Backdoor
Critical Vulnerability in libwebp Library
NSA AI Security Center
Hacking Gas Pumps via Bluetooth
Malicious Ads in Bing Chat
Political Disinformation and AI
Deepfake Election Interference in Slovakia
AI Risks
Model Extraction Attack on Neural Networks
Cisco Can't Stop Using Hard-Coded Passwords
Bounty to Recover NIST's Elliptic Curve Seeds
Hacking the High School Grading System
** *** ***** ******* *********** *************

On Technologies for Automatic Facial Recognition

[2023.09.15] Interesting article on technologies that will automatically
identify people:

With technology like that on Mr. Leyvand's head, Facebook could prevent users
from ever forgetting a colleague's name, give a reminder at a cocktail party
that an acquaintance had kids to ask about or help find someone at a crowded
conference. However, six years later, the company now known as Meta has not
released a version of that product and Mr. Leyvand has departed for Apple to
work on its Vision Pro augmented reality glasses.

The technology is here. Maybe the implementation is still dorky, but that will
change. The social implications will be enormous.

** *** ***** ******* *********** *************

LLM Summary of My Book Beyond Fear

[2023.09.15] Claude (Anthropic's LLM) was given this prompt: Please summarize
the themes and arguments of Bruce Schneier's book Beyond Fear. I'm
particularly interested in a taxonomy of his ethical arguments -- please expand
on that. Then lay out the most salient criticisms of the book.

Claude's reply:

Here's a brief summary of the key themes and arguments made in security expert
Bruce Schneier's book Beyond Fear: Thinking Sensibly About Security in an
Uncertain World:

Central Themes:

Effective security requires realistic risk assessment, not fear-driven
responses. We need to rationally determine actual threats versus perceived ones.
Security trade-offs need to be carefully balanced against costs to other values
like privacy, freedom, and social/economic impacts.
Human psychology and biases lead to exaggerated fears that politicians/marketers
can exploit, resulting in wasteful and theatrical "security theater."
Security is a process that requires ongoing flexibility, not total prevention.
We can't anticipate every threat but can build resilience.
Key Arguments:

Analyze risks empirically using evidence, statistics, and probability rather
than intuition. Avoid underestimating everyday risks while overestimating novel
or sensational ones.
Reject the temptation to implement oppressive security restrictions after tragic
but unlikely events. Focus only on proportional responses.
Where possible, favor openness and transparency over aggressive data collection
or restrictions which erode civil liberties.
Seek security policies and technologies that enhance society broadly rather than
benefit special interests.
Schneier argues for rational, ethical approaches to security aimed at genuine
protection while avoiding violations of human rights in the name of safety.

Here are some of the key ethical arguments Bruce Schneier makes in Beyond Fear
regarding security policy and practices:

Utilitarian Ethics -- Security measures should aim to provide the greatest
societal benefit and well-being for the most people, not just special interests.
Individual Autonomy -- Restrictions should be minimized to avoid excessive
infringements on civil liberties and individual freedoms. Surveillance creep
risks violating autonomy.
Privacy Rights -- Pervasive monitoring and data collection erode privacy rights
and dignity. Citizens are entitled to some degree of opacity.
Proportional Responses -- Reactive policies should weigh costs versus
statistically likely benefits. Draconian rules beyond proportionality are
unethical.
Justice and Fairness -- Security should not create racial, religious or other
profiling biases that punish groups rather than conduct.
Accountability and Transparency -- Ethical security requires checks against
excessive power and oversight for responsibility. Secrecy enables abuses.
Honest Representation -- Officials and pundits have an ethical duty not to stoke
disproportionate public fears. Security theater manufactures consent.
Integrity and Character -- Leaders should have the wisdom and courage to make
difficult trade-offs rather than exploit fears for authority and control.
Overall Schneier argues security policies grounded in ethics rather than fear
promote human well-being while recognizing inherent trade-offs. Prudence,
transparency, and proportionality are essential.

Here are some of the most salient criticisms made against the arguments put
forth in Bruce Schneier's book Beyond Fear:

Downplays real threats -- Some argue Schneier dangerously downplays serious
risks like terrorism and cybercrime by calling fears overblown.
Too theoretical -- His risk analysis approach may rely too much on statistics
versus real-world uncertainties that defy probabilistic reasoning.
No easy answers -- While citing misguided policies, critics say Schneier
provides few actionable solutions or concrete policy guidance.
Conflicting values -- His balancing of security, privacy and liberty involves
inherent trade-offs and tough choices he gives little direction on.
Politically naive -- Suggesting special interests drive policy may
under-appreciate legitimate national security concerns of elected officials.
Hindsight bias -- Assessing past overreactions easier than predicting future
threats that justify caution today.
No accounting for psychology -- Factoring human irrationality into policy is
essential, not dismissed as "security theater."
Technical focus -- Focuses narrowly on certain technologies while ignoring
broader organizational and sociological dimensions.
In summary, critics argue Schneier's risk analysis orientation is too
clinical, abstract, and devoid of political and human realities to transform
concrete policy in the ways he intends. His framework may therefore lack
sufficient nuance or guidance.

It's interesting. The summary is pretty accurate, and so are the criticisms.
The book was a product of my thinking twenty years ago, and my own thinking has
evolved to address those criticisms.

Of course, this only works with older books that the LLM has ingested, and
probably works better with books that have been written about.

** *** ***** ******* *********** *************

Using Hacked LastPass Keys to Steal Cryptocurrency

[2023.09.18] Remember last November, when hackers broke into the network for
LastPass -- a password database -- and stole password vaults with both encrypted
and plaintext data for over 25 million users?

Well, they're now using that data to break into crypto wallets and drain them:
$35 million and counting, all going into a single wallet.

That's a really profitable hack. (It's also bad opsec. The hackers need to
move and launder all that money quickly.)

Look, I know that online password databases are more convenient. But they're
also risky. This is why my Password Safe is local only. (I know this sounds like
a commercial, but Password Safe is not a commercial product.)

** *** ***** ******* *********** *************

Detecting AI-Generated Text

[2023.09.19] There are no reliable ways to distinguish text written by a human
from text written by a large language model. OpenAI writes:

Do AI detectors work?

In short, no. While some (including OpenAI) have released tools that purport to
detect AI-generated content, none of these have proven to reliably distinguish
between AI-generated and human-generated content.
Additionally, ChatGPT has no "knowledge" of what content could be
AI-generated. It will sometimes make up responses to questions like "did you
write this [essay]?" or "could this have been written by AI?" These
responses are random and have no basis in fact.
To elaborate on our research into the shortcomings of detectors, one of our key
findings was that these tools sometimes suggest that human-written content was
generated by AI.
When we at OpenAI tried to train an AI-generated content detector, we found that
it labeled human-written text like Shakespeare and the Declaration of
Independence as AI-generated.
There were also indications that it could disproportionately impact students who
had learned or were learning English as a second language and students whose
writing was particularly formulaic or concise.
Even if these tools could accurately identify AI-generated content (which they
cannot yet), students can make small edits to evade detection.
There is some good research in watermarking LLM-generated text, but the
watermarks are not generally robust.
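
To make this concrete, here is a minimal, hypothetical sketch of how one
published watermarking scheme (the "green list" approach from the academic
literature) is detected. The hash function, the green-list fraction, and the
token IDs below are illustrative assumptions, not any vendor's actual detector:

    import hashlib
    import math

    GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

    def is_green(prev_token: int, token: int) -> bool:
        # Pseudo-randomly assign `token` to the green list, seeded by the previous token.
        digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
        return digest[0] < 256 * GREEN_FRACTION

    def watermark_z_score(tokens: list[int]) -> float:
        # Count green tokens and compare against the chance rate with a z-score.
        # A generator that boosted green tokens scores high; ordinary text stays near 0.
        hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
        n = len(tokens) - 1
        expected = GREEN_FRACTION * n
        stddev = math.sqrt(GREEN_FRACTION * (1 - GREEN_FRACTION) * n)
        return (hits - expected) / stddev

The weakness is visible in the sketch itself: paraphrasing or lightly editing
the text replaces enough tokens to pull the score back toward zero, which is
why these watermarks are not generally robust.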

I don't think the detectors are going to win this arms race.

** *** ***** ******* *********** *************

On the Cybersecurity Jobs Shortage

[2023.09.20] In April, Cybersecurity Ventures reported on the extreme
cybersecurity job shortage:

Global cybersecurity job vacancies grew by 350 percent, from one million
openings in 2013 to 3.5 million in 2021, according to Cybersecurity Ventures.
The number of unfilled jobs leveled off in 2022, and remains at 3.5 million in
2023, with more than 750,000 of those positions in the U.S. Industry efforts to
source new talent and tackle burnout continues, but we predict that the
disparity between demand and supply will remain through at least 2025.

The numbers never made sense to me, and Ben Rothke has dug in and explained the
reality:

...there is not a shortage of security generalists, middle managers, and people
who claim to be competent CISOs. Nor is there a shortage of thought leaders,
advisors, or self-proclaimed cyber subject matter experts. What there is a
shortage of are computer scientists, developers, engineers, and information
security professionals who can code, understand technical security architecture,
product security and application security specialists, analysts with threat
hunting and incident response skills. And this is nothing that can be fixed by a
newbie taking a six-month information security boot camp.

[...]

Most entry-level roles tend to be quite specific, focused on one part of the
profession, and are not generalist roles. For example, hiring managers will want
a network security engineer with knowledge of networks or an identity management
analyst with experience in identity systems. They are not looking for someone
interested in security.

In fact, security roles are often not considered entry-level at all. Hiring
managers assume you have some other background, usually technical before you are
ready for an entry-level security job. Without those specific skills, it is
difficult for a candidate to break into the profession. Job seekers learn that
entry-level often means at least two to three years of work experience in a
related field.

That makes a lot more sense, and matches what I experience.

** *** ***** ******* *********** *************

New Revelations from the Snowden Documents

[2023.09.21] Jake Appelbaum's PhD thesis contains several new revelations from
the classified NSA documents provided to journalists by Edward Snowden. Nothing
major, but a few more tidbits.

Kind of amazing that that all happened ten years ago. At this point, those
documents are more historical than anything else.

And it's unclear who has those archives anymore. According to Appelbaum, The
Intercept destroyed their copy.

I recently published an essay about my experiences ten years ago.

** *** ***** ******* *********** *************

Signal Will Leave the UK Rather Than Add a Backdoor

[2023.09.26] Totally expected, but still good to hear:

Onstage at TechCrunch Disrupt 2023, Meredith Whittaker, the president of the
Signal Foundation, which maintains the nonprofit Signal messaging app,
reaffirmed that Signal would leave the U.K. if the country's recently passed
Online Safety Bill forced Signal to build "backdoors" into its end-to-end
encryption.

"We would leave the U.K. or any jurisdiction if it came down to the choice
between backdooring our encryption and betraying the people who count on us for
privacy, or leaving," Whittaker said. "And that's never not true."

** *** ***** ******* *********** *************

Critical Vulnerability in libwebp Library

[2023.09.27] Both Apple and Google have recently reported critical
vulnerabilities in their systems -- iOS and Chrome, respectively -- that are
ultimately the result of the same vulnerability in the libwebp library:

On Thursday, researchers from security firm Rezillion published evidence that
they said made it "highly likely" both indeed stemmed from the same bug,
specifically in libwebp, the code library that apps, operating systems, and
other code libraries incorporate to process WebP images.

Rather than Apple, Google, and Citizen Lab coordinating and accurately reporting
the common origin of the vulnerability, they chose to use a separate CVE
designation, the researchers said. The researchers concluded that "millions of
different applications" would remain vulnerable until they, too, incorporated
the libwebp fix. That, in turn, they said, was preventing automated systems that
developers use to track known vulnerabilities in their offerings from detecting
a critical vulnerability that's under active exploitation.

EDITED TO ADD (10/12): Google quietly corrected their disclosure.

** *** ***** ******* *********** *************

NSA AI Security Center

[2023.10.02] The NSA is starting a new artificial intelligence security center:

The AI security center's establishment follows an NSA study that identified
securing AI models from theft and sabotage as a major national security
challenge, especially as generative AI technologies emerge with immense
transformative potential for both good and evil.

Nakasone said it would become "NSA's focal point for leveraging foreign
intelligence insights, contributing to the development of best practices
guidelines, principles, evaluation, methodology and risk frameworks" for both
AI security and the goal of promoting the secure development and adoption of AI
within "our national security systems and our defense industrial base."

He said it would work closely with U.S. industry, national labs, academia and
the Department of Defense as well as international partners.

** *** ***** ******* *********** *************

Hacking Gas Pumps via Bluetooth

[2023.10.03] Turns out pumps at gas stations are controlled via Bluetooth, and
that the connections are insecure. No details in the article, but it seems that
it's easy to take control of the pump and have it dispense gas without
requiring payment.

It's a complicated crime to monetize, though. You need to sell access to the
gas pump to others.

EDITED TO ADD (10/13): Reader Jeff Hall says that story is not accurate, and
that the gas pumps do not have a Bluetooth connection.

** *** ***** ******* *********** *************

Malicious Ads in Bing Chat

[2023.10.04] Malicious ads are creeping into chatbots.

** *** ***** ******* *********** *************

Political Disinformation and AI

[2023.10.05] Elections around the world are facing an evolving threat from
foreign actors, one that involves artificial intelligence.

Countries trying to influence each other's elections entered a new era in
2016, when the Russians launched a series of social media disinformation
campaigns targeting the US presidential election. Over the next seven years, a
number of countries -- most prominently China and Iran -- used social media to
influence foreign elections, both in the US and elsewhere in the world.
There's no reason to expect 2023 and 2024 to be any different.

But there is a new element: generative AI and large language models. These have
the ability to quickly and easily produce endless reams of text on any topic in
any tone from any perspective. As a security expert, I believe it's a tool
uniquely suited to Internet-era propaganda.

This is all very new. ChatGPT was introduced in November 2022. The more powerful
GPT-4 was released in March 2023. Other language and image production AIs are
around the same age. It's not clear how these technologies will change
disinformation, how effective they will be or what effects they will have. But
we are about to find out.

Election season will soon be in full swing in much of the democratic world.
Seventy-one percent of people living in democracies will vote in a national
election between now and the end of next year. Among them: Argentina and Poland
in October, Taiwan in January, Indonesia in February, India in April, the
European Union and Mexico in June, and the US in November. Nine African
democracies, including South Africa, will have elections in 2024. Australia and
the UK don't have fixed dates, but elections are likely to occur in 2024.

Many of those elections matter a lot to the countries that have run social media
influence operations in the past. China cares a great deal about Taiwan,
Indonesia, India, and many African countries. Russia cares about the UK, Poland,
Germany, and the EU in general. Everyone cares about the United States.

And that's only considering the largest players. Every US national election
from 2016 has brought with it an additional country attempting to influence the
outcome. First it was just Russia, then Russia and China, and most recently
those two plus Iran. As the financial cost of foreign influence decreases, more
countries can get in on the action. Tools like ChatGPT significantly reduce the
price of producing and distributing propaganda, bringing that capability within
the budget of many more countries.

A couple of months ago, I attended a conference with representatives from all of
the cybersecurity agencies in the US. They talked about their expectations
regarding election interference in 2024. They expected the usual players --
Russia, China, and Iran -- and a significant new one: "domestic actors."
That is a direct result of this reduced cost.

Of course, there's a lot more to running a disinformation campaign than
generating content. The hard part is distribution. A propagandist needs a series
of fake accounts on which to post, and others to boost it into the mainstream
where it can go viral. Companies like Meta have gotten much better at
identifying these accounts and taking them down. Just last month, Meta announced
that it had removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook
groups, and 15 Instagram accounts associated with a Chinese influence campaign,
and identified hundreds more accounts on TikTok, X (formerly Twitter),
LiveJournal, and Blogspot. But that was a campaign that began four years ago,
producing pre-AI disinformation.

Disinformation is an arms race. Both the attackers and defenders have improved,
but also the world of social media is different. Four years ago, Twitter was a
direct line to the media, and propaganda on that platform was a way to tilt the
political narrative. A Columbia Journalism Review study found that most major
news outlets used Russian tweets as sources for partisan opinion. That Twitter,
with virtually every news editor reading it and everyone who was anyone posting
there, is no more.

Many propaganda outlets moved from Facebook to messaging platforms such as
Telegram and WhatsApp, which makes them harder to identify and remove. TikTok is
a newer platform that is controlled by China and more suitable for short,
provocative videos -- ones that AI makes much easier to produce. And the current
crop of generative AIs are being connected to tools that will make content
distribution easier as well.

Generative AI tools also allow for new techniques of production and
distribution, such as low-level propaganda at scale. Imagine a new AI-powered
personal account on social media. For the most part, it behaves normally. It
posts about its fake everyday life, joins interest groups and comments on
others' posts, and generally behaves like a normal user. And once in a while,
not very often, it says -- or amplifies -- something political. These persona
bots, as computer scientist Latanya Sweeney calls them, have negligible
influence on their own. But replicated by the thousands or millions, they would
have a lot more.

That's just one scenario. The military officers in Russia, China, and
elsewhere in charge of election interference are likely to have their best
people thinking of others. And their tactics are likely to be much more
sophisticated than they were in 2016.

Countries like Russia and China have a history of testing both cyberattacks and
information operations on smaller countries before rolling them out at scale.
When that happens, it's important to be able to fingerprint these tactics.
Countering new disinformation campaigns requires being able to recognize them,
and recognizing them requires looking for and cataloging them now.

In the computer security world, researchers recognize that sharing methods of
attack and their effectiveness is the only way to build strong defensive
systems. The same kind of thinking also applies to these information campaigns:
The more that researchers study what techniques are being employed in distant
countries, the better they can defend their own countries.

Disinformation campaigns in the AI era are likely to be much more sophisticated
than they were in 2016. I believe the US needs to have efforts in place to
fingerprint and identify AI-produced propaganda in Taiwan, where a presidential
candidate claims a deepfake audio recording has defamed him, and other places.
Otherwise, we're not going to see them when they arrive here. Unfortunately,
researchers are instead being targeted and harassed.

Maybe this will all turn out okay. There have been some important democratic
elections in the generative AI era with no significant disinformation issues:
primaries in Argentina, first-round elections in Ecuador, and national elections
in Thailand, Turkey, Spain, and Greece. But the sooner we know what to expect,
the better we can deal with what comes.

This essay previously appeared in The Conversation.

** *** ***** ******* *********** *************

Deepfake Election Interference in Slovakia

[2023.10.06] A well-designed and well-timed deepfake of two Slovakian
politicians discussing how to rig the election:

Šimečka and Denník N immediately denounced the audio as fake. The
fact-checking department of news agency AFP said the audio showed signs of being
manipulated using AI. But the recording was posted during a 48-hour moratorium
ahead of the polls opening, during which media outlets and politicians are
supposed to stay silent. That meant, under Slovakia's election rules, the post
was difficult to widely debunk. And, because the post was audio, it exploited a
loophole in Meta's manipulated-media policy, which dictates only faked videos
-- where a person has been edited to say words they never said -- go against its
rules.

I just wrote about this. Countries like Russia and China tend to test their
attacks out on smaller countries before unleashing them on larger ones. Consider
this a preview to their actions in the US next year.

** *** ***** ******* *********** *************

AI Risks

[2023.10.09] There is no shortage of researchers and industry titans willing to
warn us about the potential destructive power of artificial intelligence.
Reading the headlines, one would hope that the rapid gains in AI technology have
also brought forth a unifying realization of the risks -- and the steps we need
to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the
testimony, the manifestoes, the blog posts, and the public declarations issued
about AI are battles among deeply divided factions. Some are concerned about
far-future risks that sound like science fiction. Some are genuinely alarmed by
the practical problems that chatbots and deepfake video generators are creating
right now. Some are motivated by potential business revenue, others by national
security concerns.

The result is a cacophony of coded language, contradictory views, and
provocative policy demands that are undermining our ability to grapple with a
technology destined to drive the future of politics, our economy, and even our
daily lives.

These factions are in dialogue not only with the public but also with one
another. Sometimes, they trade letters, opinion essays, or social threads
outlining their positions and attacking others' in public view. More often,
they tout their viewpoints without acknowledging alternatives, leaving the
impression that their enlightened perspective is the inevitable lens through
which to view AI. But if lawmakers and the public fail to recognize the subtext
of their arguments, they risk missing the real consequences of our possible
regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look
past the immediate claims and actions of the players to the greater implications
of their points of view. When you do, you'll realize this isn't really a
debate only about AI. It's also a contest about control and power, about how
resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should
we focus on avoiding the dystopia of mass unemployment, a world where China is
the dominant superpower or a society where the worst prejudices of humanity are
embodied in opaque algorithms that control our lives? Should we listen to
wealthy futurists who discount the importance of climate change because
they're already thinking ahead to colonies on Mars? It is critical that we
begin to recognize the ideologies driving what we are being told. Resolving the
fracas requires us to see through the specter of AI to stay true to the humanity
of our values.

One way to decode the motives behind the various declarations is through their
language. Because language itself is part of their battleground, the different
AI camps tend not to use the same words to describe their positions. One faction
describes the dangers posed by AI through the framework of safety, another
through ethics or integrity, yet another through security, and others through
economics. By decoding who is speaking and how AI is being described, we can
explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an
existential risk to humankind, capable of wiping out all life on Earth. AI, in
this vision, emerges as a godlike, superintelligent, ungovernable entity capable
of controlling everything. AI could destroy humanity or pose a risk on par with
nukes. If we're not careful, it could kill everyone or enslave humanity.
It's likened to monsters like the Lovecraftian shoggoths, artificial servants
that rebelled against their creators, or paper clip maximizers that consume all
of Earth's resources in a single-minded pursuit of their programmed goal. It
sounds like science fiction, but these people are serious, and they mean the
words they use.

These are the AI safety people, and their ranks include the "Godfathers of
AI," Geoff Hinton and Yoshua Bengio. For many years, these leading lights
battled critics who doubted that a computer could ever mimic capabilities of the
human mind. Having steamrollered the public conversation by creating large
language models like ChatGPT and other AI tools capable of increasingly
impressive feats, they appear deeply invested in the idea that there is no limit
to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to
shape the conversation. And some in this group are animated by the radical
effective altruism movement and the associated cause of long-term-ism, which
tend to focus on the most extreme catastrophic risks and emphasize the
far-future consequences of our actions. These philosophies are hot among the
cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried,
who at one time possessed sudden wealth in search of a cause.

Reasonable sounding on their face, these ideas can become dangerous if stretched
to their logical extremes. A dogmatic long-termer would willingly sacrifice the
well-being of people today to stave off a prophesied extinction event like AI
enslavement.

Many doomsayers say they are acting rationally, but their hype about
hypothetical existential risks amounts to making a misguided bet with our
future. In the name of long-term-ism, Elon Musk reportedly believes that our
society needs to encourage reproduction among those with the greatest culture
and intelligence (namely, his ultrarich buddies). And he wants to go further,
such as limiting the right to vote to parents and even populating Mars. It's
widely believed that Jaan Tallinn, the wealthy long-termer who co-founded the
most prominent centers for the study of AI safety, has made dismissive noises
about climate change because he thinks that it pales in comparison with
far-future unknown unknowns like risks from AI. The technology historian David
C. Brock calls these fears "wishful worries" -- that is, "problems that it
would be nice to have, in contrast to the actual agonies of the present."

More practically, many of the researchers in this group are proceeding full
steam ahead in developing AI, demonstrating how unrealistic it is to simply hit
pause on technological development. But the roboticist Rodney Brooks has pointed
out that we will see the existential risks coming -- the dangers will not be
sudden and we will have time to change course. While we shouldn't dismiss the
Hollywood nightmare scenarios out of hand, we must balance them with the
potential benefits of AI and, most important, not allow them to strategically
distract from more immediate concerns. Let's not let apocalyptic
prognostications overwhelm us and smother the momentum we need to develop
critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent
opponents are focused on the here and now. We agree with this group that
there's plenty already happening to cause concern: Racist policing and legal
systems that disproportionately arrest and punish people of color. Sexist labor
systems that rate feminine-coded résumés lower. Superpower nations automating
military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a
distressingly familiar vision of dystopia: a society in which humanity's worst
instincts are encoded into and enforced by machines. The doomsayers think AI
enslavement looks like the Matrix; the reformers point to modern-day contractors
doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns -- like Meredith Broussard, Safiya Umoja
Noble, Rumman Chowdhury, and Cathy O'Neil -- have been raising the alarm on
inequities coded into AI for years. Although we don't have a census, it's
noticeable that many leaders in this cohort are people of color, women, and
people who identify as LGBTQ. They are often motivated by insight into what it
feels like to be on the wrong end of algorithmic oppression and by a connection
to the communities most vulnerable to the misuse of new technology. Many in this
group take an explicitly social perspective: When Joy Buolamwini founded an
organization to fight for equitable AI, she called it the Algorithmic Justice
League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to
adhere to an oath to consider the benefit of the broader public alongside -- or
even above -- their self-interest. They point to social media companies'
failure to control hate speech or how online misinformation can undermine
democratic elections. Adding urgency for this group is that the very companies
driving the AI revolution have, at times, been eliminating safeguards. A signal
moment came when Timnit Gebru, a co-leader of Google's AI ethics team, was
dismissed for pointing out the risks of developing ever-larger AI language
models.

While doomsayers and reformers share the concern that AI must align with human
interests, reformers tend to push back hard against the doomsayers' focus on
the distant future. They want to wrestle the attention of regulators and
advocates back toward present-day harms that are exacerbated by AI
misinformation, surveillance, and inequity. Integrity experts call for the
development of responsible AI, for civic education to ensure AI literacy and for
keeping humans front and center in AI systems.

This group's concerns are well documented and urgent -- and far older than
modern AI technologies. Surely, we are a civilization big enough to tackle more
than one problem at a time; even those worried that AI might kill us in the
future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of
competitiveness and national security. One version has a post-9/11 ring to it --
a world where terrorists, criminals, and psychopaths have unfettered access to
technologies of mass destruction. Another version is a Cold War narrative of the
United States losing an AI arms race with China and its surveillance-rich
society.

Some arguing from this perspective are acting on genuine national security
concerns, and others have a simple motivation: money. These perspectives serve
the interests of American tech tycoons as well as the government agencies and
defense contractors they are intertwined with.

OpenAI's Sam Altman and Meta's Mark Zuckerberg, both of whom lead dominant
AI companies, are pushing for AI regulations that they say will protect us from
criminals and terrorists. Such regulations would be expensive to comply with and
are likely to preserve the market position of leading AI companies while
restricting competition from start-ups. In the lobbying battles over Europe's
trailblazing AI regulatory framework, US megacompanies pleaded to exempt their
general-purpose AI from the tightest regulations, and whether and how to apply
high-risk compliance expectations on noncorporate open-source models emerged as
a key point of debate. All the while, some of the moguls investing in upstart
companies are fighting the regulatory tide. The Inflection AI co-founder Reid
Hoffman argued, "The answer to our challenges is not to slow down technology
but to accelerate it."

Any technology critical to national defense usually has an easier time avoiding
oversight, regulation, and limitations on profit. Any readiness gap in our
military demands urgent budget increases and funds distributed to the military
branches and their contractors, because we may soon be called upon to fight.
Tech moguls like Google's former chief executive Eric Schmidt, who has the ear
of many lawmakers, signal to American policymakers about the Chinese threat even
as they invest in US national security concerns.

The warriors' narrative seems to misrepresent that science and engineering are
different from what they were during the mid-twentieth century. AI research is
fundamentally international; no one country will win a monopoly. And while
national security is important to consider, we must also be mindful of the
self-interest of those positioned to benefit financially.

As the science-fiction author Ted Chiang has said, fears about the existential
risks of AI are really fears about the threat of uncontrolled capitalism, and
dystopias like the paper clip maximizer are just caricatures of every
start-upΓÇÖs business plan. Cosma Shalizi and Henry Farrell further argue that
"we've lived among shoggoths for centuries, tending to them as though they
were our masters" as monopolistic platforms devour and exploit the totality of
humanity's labor and ingenuity for their own interests. This dread applies as
much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to
double down on the rules that we know limit corporate power. We need to get more
serious about establishing good and effective governance on all the issues we
lost track of while we were becoming obsessed with AI, China, and the fights
picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep
AI companies in check. A publicly directed AI development project would serve to
counterbalance for-profit corporate AI and help ensure an even playing field for
access to the twenty-first century's key technology while offering a platform
for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and
corporations accountable by mandating greater AI transparency in the development
stage, in addition to applying legal standards for actions associated with AI.
Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern
our collective behavior is knit more strongly, with fewer gaps and greater
ability to hold the powerful accountable, particularly in those areas most
sensitive to our democracy and environment. As those with power and privilege
seem poised to harness AI to accumulate much more or pursue extreme ideologies,
let's think about how we can constrain their influence in the public square
rather than cede our attention to their most bombastic nightmare visions for the
future.

This essay was written with Nathan Sanders, and previously appeared in the New
York Times.

** *** ***** ******* *********** *************

Model Extraction Attack on Neural Networks

[2023.10.10] Adi Shamir et al. have a new model extraction attack on neural
networks:

Polynomial Time Cryptanalytic Extraction of Neural Network Models

Abstract: Billions of dollars and countless GPU hours are currently spent on
training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is
essential to determine the difficulty of extracting all the parameters of such
neural networks when given access to their black-box implementations. Many
versions of this problem have been studied over the last 30 years, and the best
current attack on ReLU-based deep neural networks was presented at Crypto'20
by Carlini, Jagielski, and Mironov. It resembles a differential chosen plaintext
attack on a cryptosystem, which has a secret key embedded in its black-box
implementation and requires a polynomial number of queries but an exponential
amount of time (as a function of the number of neurons).

In this paper, we improve this attack by developing several new techniques that
enable us to extract with arbitrarily high precision all the real-valued
parameters of a ReLU-based DNN using a polynomial number of queries and a
polynomial amount of time. We demonstrate its practical efficiency by applying
it to a full-sized neural network for classifying the CIFAR10 dataset, which has
3072 inputs, 8 hidden layers with 256 neurons each, and about 1.2 million
neuronal parameters. An attack following the approach by Carlini et al. requires
an exhaustive search over 2^256 possibilities. Our attack replaces this with our
new techniques, which require only 30 minutes on a 256-core computer.
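
For intuition about the "differential" flavor of these attacks, here is a toy
sketch -- an illustration of the general principle, not the paper's algorithm --
showing how black-box queries alone reveal a linear slice of a small, made-up
ReLU network's secret weights:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # hidden layer (secret)
    W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)  # output layer (secret)

    def blackbox(x):
        # The victim model: the attacker can query it but not read its weights.
        return W2 @ np.maximum(W1 @ x + b1, 0) + b2

    def numeric_jacobian(f, x, eps=1e-6):
        # Estimate df/dx from queries alone (finite differences).
        base = f(x)
        return np.stack([(f(x + eps * e) - base) / eps
                         for e in np.eye(len(x))], axis=1)

    x = rng.normal(size=3)
    J = numeric_jacobian(blackbox, x)
    # Away from the ReLU kinks, the measured Jacobian equals W2 @ diag(active) @ W1,
    # a linear combination of the secret weights. Collecting such slices at many
    # inputs, and locating the kinks, is what extraction attacks build on.
    active = (W1 @ x + b1 > 0).astype(float)
    assert np.allclose(J, W2 @ np.diag(active) @ W1, atol=1e-4)

The hard part -- and what the new result speeds up to polynomial time -- is
turning those measurements into the exact layer-by-layer parameters.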

** *** ***** ******* *********** *************

Cisco Can't Stop Using Hard-Coded Passwords

[2023.10.11] There's a new Cisco vulnerability in its Emergency Responder
product:

This vulnerability is due to the presence of static user credentials for the
root account that are typically reserved for use during development. An attacker
could exploit this vulnerability by using the account to log in to an affected
system. A successful exploit could allow the attacker to log in to the affected
system and execute arbitrary commands as the root user.

This is not the first time Cisco products have had hard-coded passwords made
public. You'd think it would learn.

** *** ***** ******* *********** *************

Bounty to Recover NIST's Elliptic Curve Seeds

[2023.10.12] This is a fun challenge:

The NIST elliptic curves that power much of modern cryptography were generated
in the late '90s by hashing seeds provided by the NSA. How were the seeds
generated? Rumor has it that they are in turn hashes of English sentences, but
the person who picked them, Dr. Jerry Solinas, passed away in early 2023 leaving
behind a cryptographic mystery, some conspiracy theories, and an historical
password cracking challenge.

So there's a $12K prize to recover the hash seeds.

Some backstory:

Some of the backstory here (it's the funniest fucking backstory ever): it's
lately been circulating -- though I think this may have been somewhat common
knowledge among practitioners, though definitely not to me -- that the
"random" seeds for the NIST P-curves, generated in the 1990s by Jerry
Solinas at NSA, were simply SHA1 hashes of some variation of the string "Give
Jerry a raise".

At the time, the "pass a string through SHA1" thing was meant to increase
confidence in the curve seeds; the idea was that SHA1 would destroy any possible
structure in the seed, so NSA couldn't have selected a deliberately weak seed.
Of course, NIST/NSA then set about destroying its reputation in the 2000's,
and this explanation wasn't nearly enough to quell conspiracy theories.

But when Jerry Solinas went back to reconstruct the seeds, so NIST could
demonstrate that the seeds really were benign, he found that he'd forgotten
the string he used!

If you're a true conspiracist, you're certain nobody is going to find a
string that generates any of these seeds. On the flip side, if anyone does find
them, that'll be a pretty devastating blow to the theory that the NIST
P-curves were maliciously generated -- even for people totally unfamiliar with
basic curve math.
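
Concretely, the challenge reduces to a brute-force check like the following
sketch. The P-256 seed is the one published in FIPS 186; the candidate phrases
are, of course, made up:

    import hashlib

    P256_SEED = "c49d360886e704936a6678e1139d26b7819f7e90"  # published NIST P-256 seed

    def matches(phrase: str) -> bool:
        # The bounty question: is the seed the SHA-1 hash of some English phrase?
        return hashlib.sha1(phrase.encode()).hexdigest() == P256_SEED

    for candidate in ["Give Jerry a raise", "Give Jerry a raise.",
                      "Jerry deserves a raise"]:
        print(candidate, "->", matches(candidate))

Run against all five P-curve seeds with every plausible phrase variation, it is
essentially a password-cracking contest, which is exactly how the bounty is
framed.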

Note that these are not the constants used in the Dual_EC_PRNG random-number
generator that the NSA backdoored. This is something different.

** *** ***** ******* *********** *************

Hacking the High School Grading System

[2023.10.13] Interesting New York Times article about high-school students
hacking the grading system.

What's not helping? The policies many school districts are adopting that make
it nearly impossible for low-performing students to fail -- they have a grading
floor under them, they know it, and that allows them to game the system.

Several teachers whom I spoke with or who responded to my questionnaire
mentioned policies stating that students cannot get lower than a 50 percent on
any assignment, even if the work was never done, in some cases. A teacher from
Chapel Hill, N.C., who filled in the questionnaire's "name" field with
"No, no, no," said the 50 percent floor and "NO attendance enforcement"
leads to a scenario where "we get students who skip over 100 days, have a 50
percent, complete a couple of assignments to tip over into 59.5 percent and then
pass."

It's a basic math hack. If a student needs two-thirds of the points -- over
65% -- to pass, then they have to do two-thirds of the work. But if doing zero
work results in a 50% grade, then they only have to do a little bit of work to
get over the pass line.
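
A quick back-of-the-envelope calculation makes the point. The numbers below are
hypothetical -- equal-weight assignments and a 60 percent passing line -- but
the shape of the incentive is the same:

    def fraction_of_work_needed(floor: float, passing: float) -> float:
        # Fraction of assignments a student must complete (at full credit) when
        # every skipped assignment is still scored at `floor` percent.
        # average = floor * (1 - k) + 100 * k  >=  passing  ->  solve for k
        return max(0.0, (passing - floor) / (100 - floor))

    print(fraction_of_work_needed(floor=0, passing=60))   # 0.6: no floor, 60% of the work
    print(fraction_of_work_needed(floor=50, passing=60))  # 0.2: with the floor, 20% of the work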

I know this is a minor thing in the universe of problems with secondary
education and grading, but I found the hack interesting. (And this is exactly
the sort of thing I explore in my latest book: A Hacker's Mind.)

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A Hacker's Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2023 by Bruce Schneier.

** *** ***** ******* *********** *************