Crypto-Gram 
September 15, 2023

by Bruce Schneier 
Fellow and Lecturer, Harvard Kennedy School 
schneier@schneier.com 
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

Zoom Can Spy on Your Calls and Use the Conversation to Train AI, But Says That
It Won't
UK Electoral Commission Hacked
Detecting "Violations of Social Norms" in Text with AI
Bots Are Better than Humans at Solving CAPTCHAs
White House Announces AI Cybersecurity Challenge
Applying AI to License Plate Surveillance
December's Reimagining Democracy Workshop
Parmesan Anti-Forgery Protection
Hacking Food Labeling Laws
Remotely Stopping Polish Trains
Identity Theft from 1965 Uncovered through Face Recognition
When Apps Go Rogue
Own Your Own Government Surveillance Van
Spyware Vendor Hacked
Inconsistencies in the Common Vulnerability Scoring System (CVSS)
Cryptocurrency Startup Loses Encryption Key for Electronic Wallet
The Hacker Tool to Get Personal Data from Credit Bureaus
LLMs and Tool Use
On Robots Killing People
Cars Have Terrible Data Privacy
Zero-Click Exploit in iPhones
Fake Signal and Telegram Apps in the Google Play Store
Upcoming Speaking Engagements
** *** ***** ******* *********** *************

Zoom Can Spy on Your Calls and Use the Conversation to Train AI, But Says That
It Won't

[2023.08.15] This is why we need regulation:

Zoom updated its Terms of Service in March, spelling out that the company
reserves the right to train AI on user data with no mention of a way to opt out.
On Monday, the company said in a blog post that there's no need to worry about
that. Zoom execs swear the company won't actually train its AI on your video
calls without permission, even though the Terms of Service still say it can.

Of course, these are Terms of Service. They can change at any time. Zoom can
renege on its promise at any time. There are no rules, only the whims of the
company as it tries to maximize its profits.

It's a stupid way to run a technological revolution. We should not have to
rely on the benevolence of for-profit corporations to protect our rights. It's
not their job, and it shouldn't be.

** *** ***** ******* *********** *************

UK Electoral Commission Hacked

[2023.08.16] The UK Electoral Commission discovered last year that it was hacked
the year before. That's fourteen months between the hack and the discovery. It
doesn't know who was behind the hack.

We worked with external security experts and the National Cyber Security Centre
to investigate and secure our systems.

If the hack was by a major government, the odds are really low that it has
resecured its systems -- unless it burned the network to the ground and rebuilt
it from scratch (which seems unlikely).

** *** ***** ******* *********** *************

Detecting "Violations of Social Norms" in Text with AI

[2023.08.17] Researchers are trying to use AI to detect "social norms
violations." Feels a little sketchy right now, but this is the sort of thing
that AIs will get better at. (Like all of these systems, anything but a very low
false positive rate makes the detection useless in practice.)

News article.
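
A quick back-of-the-envelope calculation (illustrative numbers, not from the
research) shows why the false positive rate dominates: when real violations are
rare, even an accurate detector produces mostly false alarms.

    # Base-rate illustration with hypothetical numbers: when violations are rare,
    # almost all flags raised by a 99%-sensitive detector are false positives.
    prevalence = 0.001          # assume 1 in 1,000 messages actually violates a norm
    true_positive_rate = 0.99   # detector catches 99% of real violations
    false_positive_rate = 0.01  # detector flags 1% of innocent messages

    flagged_real = prevalence * true_positive_rate
    flagged_innocent = (1 - prevalence) * false_positive_rate
    precision = flagged_real / (flagged_real + flagged_innocent)
    print(f"Share of flags that are real violations: {precision:.0%}")  # about 9%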

** *** ***** ******* *********** *************

Bots Are Better than Humans at Solving CAPTCHAs

[2023.08.18] Interesting research: "An Empirical Study & Evaluation of Modern
CAPTCHAs":

Abstract: For nearly two decades, CAPTCHAs have been widely used as a means of
protection against bots. Throughout the years, as their use grew, techniques to
defeat or bypass CAPTCHAs have continued to improve. Meanwhile, CAPTCHAs have
also evolved in terms of sophistication and diversity, becoming increasingly
difficult to solve for both bots (machines) and humans. Given this long-standing
and still-ongoing arms race, it is critical to investigate how long it takes
legitimate users to solve modern CAPTCHAs, and how they are perceived by those
users.

In this work, we explore CAPTCHAs in the wild by evaluating users' solving
performance and perceptions of unmodified currently-deployed CAPTCHAs. We obtain
this data through manual inspection of popular websites and user studies in
which 1,400 participants collectively solved 14,000 CAPTCHAs. Results show
significant differences between the most popular types of CAPTCHAs:
surprisingly, solving time and user perception are not always correlated. We
performed a comparative study to investigate the effect of experimental context
-- specifically the difference between solving CAPTCHAs directly versus solving
them as part of a more natural task, such as account creation. Whilst there were
several potential confounding factors, our results show that experimental
context could have an impact on this task, and must be taken into account in
future CAPTCHA studies. Finally, we investigate CAPTCHA-induced user task
abandonment by analyzing participants who start and do not complete the task.

Slashdot thread.

And let's all rewatch this great ad from 2022.

** *** ***** ******* *********** *************

White House Announces AI Cybersecurity Challenge

[2023.08.21] At Black Hat last week, the White House announced an AI Cyber
Challenge. Gizmodo reports:

The new AI cyber challenge (which is being abbreviated "AIxCC") will have a
number of different phases. Interested would-be competitors can now submit their
proposals to the Small Business Innovation Research program for evaluation and,
eventually, selected teams will participate in a 2024 "qualifying event."
During that event, the top 20 teams will be invited to a semifinal competition
at that year's DEF CON, another large cybersecurity conference, where the
field will be further whittled down.

[...]

To secure the top spot in DARPA's new competition, participants will have to
develop security solutions that do some seriously novel stuff. "To win
first-place, and a top prize of $4 million, finalists must build a system that
can rapidly defend critical infrastructure code from attack," said Perri
Adams, program manager for DARPA's Information Innovation Office, during a
Zoom call with reporters Tuesday. In other words: the government wants software
that is capable of identifying and mitigating risks by itself.

This is a great idea. I was a big fan of DARPA's AI capture-the-flag event in
2016, and am happy to see that DARPA is again inciting research in this area.
(China has been doing this every year since 2017.)

** *** ***** ******* *********** *************

Applying AI to License Plate Surveillance

[2023.08.22] License plate scanners aren't new. Neither is using them for bulk
surveillance. What's new is that AI is being used on the data, identifying
"suspicious" vehicle behavior:

Typically, Automatic License Plate Recognition (ALPR) technology is used to
search for plates linked to specific crimes. But in this case it was used to
examine the driving patterns of anyone passing one of Westchester County's 480
cameras over a two-year period. Zayas' lawyer Ben Gold contested the
AI-gathered evidence against his client, decrying it as "dragnet
surveillance."

And he had the data to back it up. A FOIA request he filed with the Westchester police
revealed that the ALPR system was scanning over 16 million license plates a
week, across 480 ALPR cameras. Of those systems, 434 were stationary, attached
to poles and signs, while the remaining 46 were mobile, attached to police
vehicles. The AI was not just looking at license plates either. It had also been
taking notes on vehicles' make, model and color -- useful when a plate number
for a suspect vehicle isn't visible or is unknown.

** *** ***** ******* *********** *************

December's Reimagining Democracy Workshop

[2023.08.23] Imagine that we've all -- all of us, all of society -- landed on
some alien planet, and we have to form a government: clean slate. We don't
have any legacy systems from the US or any other country. We don't have any
special or unique interests to perturb our thinking.

How would we govern ourselves?

It's unlikely that we would use the systems we have today. The modern
representative democracy was the best form of government that
mid-eighteenth-century technology could conceive of. The twenty-first century is
a different place scientifically, technically and socially.

For example, the mid-eighteenth-century democracies were designed under the
assumption that both travel and communications were hard. Does it still make
sense for all of us living in the same place to organize every few years and
choose one of us to go to a big room far away and create laws in our name?

Representative districts are organized around geography, because that's the
only way that made sense 200-plus years ago. But we don't have to do it that
way. We can organize representation by age: one representative for the
thirty-one-year-olds, another for the thirty-two-year-olds, and so on. We can
organize representation randomly: by birthday, perhaps. We can organize any way
we want.

US citizens currently elect people for terms ranging from two to six years. Is
ten years better? Is ten days better? Again, we have more technology and
therefore more options.

Indeed, as a technologist who studies complex systems and their security, I
believe the very idea of representative government is a hack to get around the
technological limitations of the past. Voting at scale is easier now than it was
200 years ago. Certainly we don't want to all have to vote on every amendment
to every bill, but what's the optimal balance between votes made in our name
and ballot measures that we all vote on?

In December 2022, I organized a workshop to discuss these and other questions. I
brought together fifty people from around the world: political scientists,
economists, law professors, AI experts, activists, government officials,
historians, science fiction writers and more. We spent two days talking about
these ideas. Several themes emerged from the event.

Misinformation and propaganda were themes, of course -- and the inability to
engage in rational policy discussions when people can't agree on the facts.

Another theme was the harms of creating a political system whose primary goals
are economic. Given the ability to start over, would anyone create a system of
government that optimizes the near-term financial interest of the wealthiest
few? Or whose laws benefit corporations at the expense of people?

Another theme was capitalism, and how it is or isn't intertwined with
democracy. And while the modern market economy made a lot of sense in the
industrial age, it's starting to fray in the information age. What comes after
capitalism, and how does it affect how we govern ourselves?

Many participants examined the effects of technology, especially artificial
intelligence. We looked at whether -- and when -- we might be comfortable ceding
power to an AI. Sometimes it's easy. I'm happy for an AI to figure out the
optimal timing of traffic lights to ensure the smoothest flow of cars through
the city. When will we be able to say the same thing about setting interest
rates? Or designing tax policies?

How would we feel about an AI device in our pocket that voted in our name,
thousands of times per day, based on preferences that it inferred from our
actions? If an AI system could determine optimal policy solutions that balanced
every voter's preferences, would it still make sense to have representatives?
Maybe we should vote directly for ideas and goals instead, and leave the details
to the computers. On the other hand, technological solutionism regularly fails.

Scale was another theme. The size of modern governments reflects the technology
at the time of their founding. European countries and the early American states
are a particular size because that's what was governable in the 18th and 19th
centuries. Larger governments -- the US as a whole, the European Union --
reflect a world in which travel and communications are easier. The problems we
have today are primarily either local, at the scale of cities and towns, or
global -- even if they are currently regulated at state, regional or national
levels. This mismatch is especially acute when we try to tackle global problems.
In the future, do we really have a need for political units the size of France
or Virginia? Or is it a mixture of scales that we really need, one that moves
effectively between the local and the global?

As to other forms of democracy, we discussed one from history and another made
possible by today's technology.

Sortition is a system of choosing political officials randomly to deliberate on
a particular issue. We use it today when we pick juries, but both the ancient
Greeks and some cities in Renaissance Italy used it to select major political
officials. Today, several countries -- largely in Europe -- are using sortition
for some policy decisions. We might randomly choose a few hundred people,
representative of the population, to spend a few weeks being briefed by experts
and debating the problem -- and then decide on environmental regulations, or a
budget, or pretty much anything.

Liquid democracy does away with elections altogether. Everyone has a vote, and
they can keep the power to cast it themselves or assign it to another person as
a proxy. There are no set elections; anyone can reassign their proxy at any
time. And there's no reason to make this assignment all or nothing. Perhaps
proxies could specialize: one set of people focused on economic issues, another
group on health and a third bunch on national defense. Then regular people could
assign their votes to whichever of the proxies most closely matched their views
on each individual matter -- or step forward with their own views and begin
collecting proxy support from other people.
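
To make the mechanics concrete, here is a minimal sketch of a liquid-democracy
tally, with made-up voters; it is an illustration only, not a system discussed
at the workshop. Each ballot follows its chain of proxies until it reaches
someone who voted directly.

    # Minimal liquid-democracy tally sketch (illustrative only).
    # Each voter either casts a direct vote or delegates to a proxy;
    # a delegated ballot follows the chain until it reaches a direct vote.
    from collections import Counter

    def tally(direct_votes, delegations):
        """direct_votes: voter -> choice; delegations: voter -> proxy."""
        results = Counter()
        for voter in set(direct_votes) | set(delegations):
            current, seen = voter, set()
            while current in delegations and current not in direct_votes:
                if current in seen:        # delegation cycle: ballot is lost
                    current = None
                    break
                seen.add(current)
                current = delegations[current]
            if current in direct_votes:
                results[direct_votes[current]] += 1
        return results

    votes = {"alice": "yes", "bob": "no"}
    proxies = {"carol": "alice", "dave": "carol", "erin": "bob"}
    print(tally(votes, proxies))   # Counter({'yes': 3, 'no': 2})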

This all brings up another question: Who gets to participate? And, more
generally, whose interests are taken into account? Early democracies were really
nothing of the sort: They limited participation by gender, race and land
ownership.

We should debate lowering the voting age, but even without voting we recognize
that children too young to vote have rights -- and, in some cases, so do other
species. Should future generations get a "voice," whatever that means? What
about nonhumans or whole ecosystems?

Should everyone get the same voice? Right now in the US, the outsize effect of
money in politics gives the wealthy disproportionate influence. Should we encode
that explicitly? Maybe younger people should get a more powerful vote than
everyone else. Or maybe older people should.

Those questions lead to ones about the limits of democracy. All democracies have
boundaries limiting what the majority can decide. We all have rights: the things
that cannot be taken away from us. We cannot vote to put someone in jail, for
example.

But while we can't vote a particular publication out of existence, we can to
some degree regulate speech. In this hypothetical community, what are our rights
as individuals? What are the rights of society that supersede those of
individuals?

Personally, I was most interested in how these systems fail. As a security
technologist, I study how complex systems are subverted -- hacked, in my
parlance -- for the benefit of a few at the expense of the many. Think tax
loopholes, or tricks to avoid government regulation. I want any government
system to be resilient in the face of that kind of trickery.

Or, to put it another way, I want the interests of each individual to align with
the interests of the group at every level. We've never had a system of
government with that property before -- even equal protection guarantees and
First Amendment rights exist in a competitive framework that puts individuals'
interests in opposition to one another. But -- in the age of such existential
risks as climate and biotechnology and maybe AI -- aligning interests is more
important than ever.

Our workshop didn't produce any answers; that wasn't the point. Our current
discourse is filled with suggestions on how to patch our political system.
People regularly debate changes to the Electoral College, or the process of
creating voting districts, or term limits. But those are incremental changes.

It's hard to find people who are thinking more radically: looking beyond the
horizon for what's possible eventually. And while true innovation in politics
is a lot harder than innovation in technology, especially without a violent
revolution forcing change, it's something that we as a species are going to
have to get good at -- one way or another.

This essay previously appeared in The Conversation.

** *** ***** ******* *********** *************

Parmesan Anti-Forgery Protection

[2023.08.24] The Guardian is reporting about microchips in wheels of Parmesan
cheese as an anti-forgery measure.

** *** ***** ******* *********** *************

Hacking Food Labeling Laws

[2023.08.25] This article talks about new Mexican laws about food labeling, and
the lengths to which food manufacturers are going to ensure that they are not
effective. There are the typical high-pressure lobbying tactics and lawsuits.
But there are also examples of companies hacking the laws:

Companies like Coca-Cola and Kraft Heinz have begun designing their products so
that their packages don't have a true front or back, but rather two nearly
identical labels -- except for the fact that only one side has the required
warning. As a result, supermarket clerks often place the products with the
warning facing inward, effectively hiding it.

[...]

Other companies have gotten creative in finding ways to keep their mascots, even
without reformulating their foods, as is required by law. Bimbo, the
international bread company that owns brands in the United States such as
Entenmann's and Takis, for example, technically removed its mascot from its
packaging. It instead printed the mascot on the actual food product -- a
ready-to-eat pancake -- and made the packaging clear, so the mascot is still visible
to consumers.

** *** ***** ******* *********** *************

Remotely Stopping Polish Trains

[2023.08.28] Turns out that it's easy to broadcast radio commands that force
Polish trains to stop:

...the saboteurs appear to have sent simple so-called "radio-stop" commands
via radio frequency to the trains they targeted. Because the trains use a radio
system that lacks encryption or authentication for those commands, Olejnik says,
anyone with as little as $30 of off-the-shelf radio equipment can broadcast the
command to a Polish train -- sending a series of three acoustic tones at a
150.100 megahertz frequency -- and trigger their emergency stop function.

"It is three tonal messages sent consecutively. Once the radio equipment
receives it, the locomotive goes to a halt," Olejnik says, pointing to a
document outlining trains' different technical standards in the European Union
that describes the "radio-stop" command used in the Polish system. In fact,
Olejnik says that the ability to send the command has been described in Polish
radio and train forums and on YouTube for years. "Everybody could do this.
Even teenagers trolling. The frequencies are known. The tones are known. The
equipment is cheap."

Even so, this is being described as a cyberattack.
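
The underlying weakness is that the radio-stop command carries no proof of who
sent it. As a hedged sketch of the missing safeguard -- an illustration, not
the actual rail protocol or any proposed standard -- a command could carry a
keyed authentication tag and a timestamp, so receivers drop forged or replayed
broadcasts:

    # Illustrative sketch: authenticated commands with a shared key (HMAC).
    # Hypothetical example; not the Polish rail system or a proposed standard.
    import hmac, hashlib, os, time

    KEY = os.urandom(32)  # shared secret provisioned to dispatch and locomotives

    def sign_command(command: bytes) -> bytes:
        msg = command + b"|" + str(int(time.time())).encode()  # timestamp limits replay
        tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest().encode()
        return msg + b"|" + tag

    def verify_command(packet: bytes, max_age_s: int = 30) -> bool:
        try:
            command, ts, tag = packet.rsplit(b"|", 2)
            age = abs(time.time() - int(ts))
        except ValueError:
            return False
        expected = hmac.new(KEY, command + b"|" + ts, hashlib.sha256).hexdigest().encode()
        return hmac.compare_digest(tag, expected) and age <= max_age_s

    pkt = sign_command(b"RADIO-STOP")
    print(verify_command(pkt))                               # True
    print(verify_command(b"RADIO-STOP|0|" + b"0" * 64))      # False: forged packet rejected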

** *** ***** ******* *********** *************

Identity Theft from 1965 Uncovered through Face Recognition

[2023.08.29] Interesting story:

Napoleon Gonzalez, of Etna, assumed the identity of his brother in 1965, a
quarter century after his sibling's death as an infant, and used the stolen
identity to obtain Social Security benefits under both identities, multiple
passports and state identification cards, law enforcement officials said.

[...]

A new investigation was launched in 2020 after facial identification software
indicated Gonzalez's face was on two state identification cards.

The facial recognition technology is used by the Maine Bureau of Motor Vehicles
to ensure no one obtains multiple credentials or credentials under someone
else's name, said Emily Cook, spokesperson for the secretary of state's
office.

** *** ***** ******* *********** *************

When Apps Go Rogue

[2023.08.30] Interesting story of an Apple Macintosh app that went rogue.
Basically, it was a good app until one particular update...when it went bad.

With more official macOS features added in 2021 that enabled the "Night
Shift" dark mode, the NightOwl app was left forlorn and forgotten on many
older Macs. Few of those supposed tens of thousands of users likely noticed when
the app they ran in the background of their older Macs was bought by another
company, nor when earlier this year that company silently updated the dark mode
app so that it hijacked their machines in order to send their IP data through a
server network of affected computers, AKA a botnet.

This is not an unusual story. Sometimes the apps are sold. Sometimes they're
orphaned, and then taken over by someone else.

** *** ***** ******* *********** *************

Own Your Own Government Surveillance Van

[2023.08.31] A used government surveillance van is for sale in Chicago:

So how was this van turned into a mobile spying center? Well, let's start with
how it has more LCD monitors than a Counterstrike LAN party. They can be used to
monitor any of six different video inputs including a videoscope camera. A
videoscope and a borescope are very similar as they're both cameras on the
ends of optical fibers, so the same tech you'd use to inspect cylinder walls
is also useful for surveillance. Kind of cool, right? Multiple Sony DVD-based
video recorders store footage captured by cameras, audio recorders by high-end
equipment brand Marantz capture sounds, and time and date generators sync
gathered media up for accurate analysis. Circling back around to audio, this van
features seven different audio inputs including a body wire channel.

Only $26,795, but you can probably negotiate them down.

** *** ***** ******* *********** *************

Spyware Vendor Hacked

[2023.09.01] A Brazilian spyware app vendor was hacked by activists:

In an undated note seen by TechCrunch, the unnamed hackers described how they
found and exploited several security vulnerabilities that allowed them to
compromise WebDetetive's servers and access its user databases. By exploiting
other flaws in the spyware maker's web dashboard -- used by abusers to access
the stolen phone data of their victims -- the hackers said they enumerated and
downloaded every dashboard record, including every customer's email address.

The hackers said that dashboard access also allowed them to delete victim
devices from the spyware network altogether, effectively severing the connection
at the server level to prevent the device from uploading new data. "Which we
definitely did. Because we could. Because #fuckstalkerware," the hackers wrote
in the note.

The note was included in a cache containing more than 1.5 gigabytes of data
scraped from the spyware's web dashboard. That data included information about
each customer, such as the IP address they logged in from and their purchase
history. The data also listed every device that each customer had compromised,
which version of the spyware the phone was running, and the types of data that
the spyware was collecting from the victim's phone.

** *** ***** ******* *********** *************

Inconsistencies in the Common Vulnerability Scoring System (CVSS)

[2023.09.05] Interesting research:

Shedding Light on CVSS Scoring Inconsistencies: A User-Centric Study on
Evaluating Widespread Security Vulnerabilities

Abstract: The Common Vulnerability Scoring System (CVSS) is a popular method for
evaluating the severity of vulnerabilities in vulnerability management. In the
evaluation process, a numeric score between 0 and 10 is calculated, 10 being the
most severe (critical) value. The goal of CVSS is to provide comparable scores
across different evaluators. However, previous works indicate that CVSS might
not reach this goal: If a vulnerability is evaluated by several analysts, their
scores often differ. This raises the following questions: Are CVSS evaluations
consistent? Which factors influence CVSS assessments? We systematically
investigate these questions in an online survey with 196 CVSS users. We show
that specific CVSS metrics are inconsistently evaluated for widespread
vulnerability types, including Top 3 vulnerabilities from the "2022 CWE Top 25
Most Dangerous Software Weaknesses" list. In a follow-up survey with 59
participants, we found that for the same vulnerabilities from the main study,
68% of these users gave different severity ratings. Our study reveals that most
evaluators are aware of the problematic aspects of CVSS, but they still see CVSS
as a useful tool for vulnerability assessment. Finally, we discuss possible
reasons for inconsistent evaluations and provide recommendations on improving
the consistency of scoring.

Here's a summary of the research.
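
For readers less familiar with CVSS: the 0-to-10 base score maps onto
qualitative severity bands. The bands below are taken from the published CVSS
v3.1 specification; the helper function itself is only an illustration.

    # CVSS v3.1 qualitative severity bands (per the public FIRST specification);
    # the helper itself is just an illustration.
    def cvss_severity(score: float) -> str:
        if not 0.0 <= score <= 10.0:
            raise ValueError("CVSS base scores range from 0.0 to 10.0")
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    print(cvss_severity(9.8))  # "Critical" -- e.g. many remote code execution flaws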

** *** ***** ******* *********** *************

Cryptocurrency Startup Loses Encryption Key for Electronic Wallet

[2023.09.06] The cryptocurrency fintech startup Prime Trust lost the encryption
key to its hardware wallet -- and the recovery key -- and therefore $38.9
million. It is now in bankruptcy.

I can't understand why anyone thinks these technologies are a good idea.
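
For what it's worth, this failure mode -- one key, one backup, both gone -- is
exactly what threshold schemes are meant to prevent. A minimal sketch of 2-of-3
Shamir secret sharing follows (a toy illustration; real custody setups use
audited libraries and hardware security modules, not code like this):

    # Toy 2-of-3 Shamir secret sharing: any two shares recover the key, so
    # losing a single share does not strand the funds. Illustration only.
    import secrets

    PRIME = 2**521 - 1  # a Mersenne prime comfortably larger than a 256-bit key

    def split_2_of_3(secret: int):
        a = secrets.randbelow(PRIME)              # random slope; f(0) == secret
        f = lambda x: (secret + a * x) % PRIME
        return [(x, f(x)) for x in (1, 2, 3)]     # three shares, any two suffice

    def recover(share_a, share_b):
        (x1, y1), (x2, y2) = share_a, share_b
        slope = (y2 - y1) * pow(x2 - x1, -1, PRIME) % PRIME   # Lagrange at x = 0
        return (y1 - slope * x1) % PRIME

    key = secrets.randbits(256)
    shares = split_2_of_3(key)
    assert recover(shares[0], shares[2]) == key   # losing any one share is survivable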

** *** ***** ******* *********** *************

The Hacker Tool to Get Personal Data from Credit Bureaus

[2023.09.07] The new site 404 Media has a good article on how hackers are
cheaply getting personal information from credit bureaus:

This is the result of a secret weapon criminals are selling access to online
that appears to tap into an especially powerful set of data: the target's
credit header. This is personal information that the credit bureaus Experian,
Equifax, and TransUnion have on most adults in America via their credit cards.
Through a complex web of agreements and purchases, that data trickles down from
the credit bureaus to other companies who offer it to debt collectors, insurance
companies, and law enforcement.

A 404 Media investigation has found that criminals have managed to tap into that
data supply chain, in some cases by stealing former law enforcement officer's
identities, and are selling unfettered access to their criminal cohorts online.
The tool 404 Media tested has also been used to gather information on high
profile targets such as Elon Musk, Joe Rogan, and even President Joe Biden,
seemingly without restriction. 404 Media verified that although not always
sensitive, at least some of that data is accurate.

** *** ***** ******* *********** *************

LLMs and Tool Use

[2023.09.08] Last March, just two weeks after GPT-4 was released, researchers at
Microsoft quietly announced a plan to compile millions of APIs -- tools that can
do everything from ordering a pizza to solving physics equations to controlling
the TV in your living room -- into a compendium that would be made accessible to
large language models (LLMs). This was just one milestone in the race across
industry and academia to find the best ways to teach LLMs how to manipulate
tools, which would supercharge the potential of AI more than any of the
impressive advancements we've seen to date.

The Microsoft project aims to teach AI how to use any and all digital tools in
one fell swoop, a clever and efficient approach. Today, LLMs can do a pretty
good job of recommending pizza toppings to you if you describe your dietary
preferences and can draft dialog that you could use when you call the
restaurant. But most AI tools can't place the order, not even online. In
contrast, Google's seven-year-old Assistant tool can synthesize a voice on the
telephone and fill out an online order form, but it can't pick a restaurant or
guess your order. By combining these capabilities, though, a tool-using AI could
do it all. An LLM with access to your past conversations and tools like calorie
calculators, a restaurant menu database, and your digital payment wallet could
feasibly judge that you are trying to lose weight and want a low-calorie option,
find the nearest restaurant with toppings you like, and place the delivery
order. If it has access to your payment history, it could even guess at how
generously you usually tip. If it has access to the sensors on your smartwatch
or fitness tracker, it might be able to sense when your blood sugar is low and
order the pie before you even realize you're hungry.
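
To make "combining these capabilities" concrete, here is a rough sketch of the
loop at the heart of tool use: the model proposes a tool call, the runtime
executes it, and the result is fed back in. The tool names, the stand-in model,
and the call format are hypothetical, not Microsoft's or anyone else's actual
API.

    # Minimal tool-use loop sketch. The tools, the fake model, and the JSON call
    # format are hypothetical; real systems differ in detail but share this shape.
    import json

    def find_restaurant(cuisine: str) -> str:
        return f"Luigi's ({cuisine}), 0.4 miles away"      # stand-in for a real lookup

    def place_order(restaurant: str, item: str) -> str:
        return f"Ordered {item} from {restaurant}"          # stand-in for a payment API

    TOOLS = {"find_restaurant": find_restaurant, "place_order": place_order}

    def fake_llm(history):
        # Stand-in for a real model: decides the next tool call from what it has seen.
        if not any("find_restaurant" in h for h in history):
            return json.dumps({"tool": "find_restaurant", "args": {"cuisine": "pizza"}})
        restaurant = history[-1].split("-> ", 1)[1]
        return json.dumps({"tool": "place_order",
                           "args": {"restaurant": restaurant, "item": "low-calorie margherita"}})

    history = ["user: order me a low-calorie pizza nearby"]
    for _ in range(2):                                       # two tool calls, then stop
        call = json.loads(fake_llm(history))
        result = TOOLS[call["tool"]](**call["args"])
        history.append(f"{call['tool']}{call['args']} -> {result}")
    print(history[-1])                                       # the completed order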

Perhaps the most compelling potential applications of tool use are those that
give AIs the ability to improve themselves. Suppose, for example, you asked a
chatbot for help interpreting some facet of ancient Roman law that no one had
thought to include examples of in the model's original training. An LLM
empowered to search academic databases and trigger its own training process
could fine-tune its understanding of Roman law before answering. Access to
specialized tools could even help a model like this better explain itself. While
LLMs like GPT-4 already do a fairly good job of explaining their reasoning when
asked, these explanations emerge from a "black box" and are vulnerable to
errors and hallucinations. But a tool-using LLM could dissect its own internals,
offering empirical assessments of its own reasoning and deterministic
explanations of why it produced the answer it did.

If given access to tools for soliciting human feedback, a tool-using LLM could
even generate specialized knowledge that isn't yet captured on the web. It
could post a question to Reddit or Quora or delegate a task to a human on
Amazon's Mechanical Turk. It could even seek out data about human preferences
by doing survey research, either to provide an answer directly to you or to
fine-tune its own training to be able to better answer questions in the future.
Over time, tool-using AIs might start to look a lot like tool-using humans. An
LLM can generate code much faster than any human programmer, so it can
manipulate the systems and services of your computer with ease. It could also
use your computer's keyboard and cursor the way a person would, allowing it to
use any program you do. And it could improve its own capabilities, using tools
to ask questions, conduct research, and write code to incorporate into itself.

It's easy to see how this kind of tool use comes with tremendous risks.
Imagine an LLM being able to find someone's phone number, call them and
surreptitiously record their voice, guess what bank they use based on the
largest providers in their area, impersonate them on a phone call with customer
service to reset their password, and liquidate their account to make a donation
to a political party. Each of these tasks invokes a simple tool -- an Internet
search, a voice synthesizer, a bank app -- and the LLM scripts the sequence of
actions using the tools.

We don't yet know how successful any of these attempts will be. As remarkably
fluent as LLMs are, they weren't built specifically for the purpose of
operating tools, and it remains to be seen how their early successes in tool use
will translate to future use cases like the ones described here. As such, giving
the current generative AI sudden access to millions of APIs -- as Microsoft
plans to -- could be a little like letting a toddler loose in a weapons depot.

Companies like Microsoft should be particularly careful about granting AIs
access to certain combinations of tools. Access to tools to look up information,
make specialized calculations, and examine real-world sensors all carry a
modicum of risk. The ability to transmit messages beyond the immediate user of
the tool or to use APIs that manipulate physical objects like locks or machines
carries much larger risks. Combining these categories of tools amplifies the
risks of each.

The operators of the most advanced LLMs, such as OpenAI, should continue to
proceed cautiously as they begin enabling tool use and should restrict uses of
their products in sensitive domains such as politics, health care, banking, and
defense. But it seems clear that these industry leaders have already largely
lost their moat around LLM technology -- open source is catching up. Recognizing
this trend, Meta has taken an "If you can't beat 'em, join 'em"
approach and partially embraced the role of providing open source LLM platforms.

On the policy front, national -- and regional -- AI prescriptions seem futile.
Europe is the only significant jurisdiction that has made meaningful progress on
regulating the responsible use of AI, but it's not entirely clear how
regulators will enforce it. And the US is playing catch-up and seems destined to
be much more permissive in allowing even risks deemed "unacceptable" by the
EU. Meanwhile, no government has invested in a "public option" AI model that
would offer an alternative to Big Tech that is more responsive and accountable
to its citizens.

Regulators should consider what AIs are allowed to do autonomously, like whether
they can be assigned property ownership or register a business. Perhaps more
sensitive transactions should require a verified human in the loop, even at the
cost of some added friction. Our legal system may be imperfect, but we largely
know how to hold humans accountable for misdeeds; the trick is not to let them
shunt their responsibilities to artificial third parties. We should continue
pursuing AI-specific regulatory solutions while also recognizing that they are
not sufficient on their own.

We must also prepare for the benign ways that tool-using AI might impact
society. In the best-case scenario, such an LLM may rapidly accelerate a field
like drug discovery, and the patent office and FDA should prepare for a dramatic
increase in the number of legitimate drug candidates. We should reshape how we
interact with our governments to take advantage of AI tools that give us all
dramatically more potential to have our voices heard. And we should make sure
that the economic benefits of superintelligent, labor-saving AI are equitably
distributed.

We can debate whether LLMs are truly intelligent or conscious, or have agency,
but AIs will become increasingly capable tool users either way. Some things are
greater than the sum of their parts. An AI with the ability to manipulate and
interact with even simple tools will become vastly more powerful than the tools
themselves. Let's be sure we're ready for them.

This essay was written with Nathan Sanders, and previously appeared on
Wired.com.

** *** ***** ******* *********** *************

On Robots Killing People

[2023.09.11] The robot revolution began long ago, and so did the killing. One
day in 1979, a robot at a Ford Motor Company casting plant malfunctioned --
human workers determined that it was not going fast enough. And so
twenty-five-year-old Robert Williams was asked to climb into a storage rack to
help move things along. The one-ton robot continued to work silently, smashing
into Williams's head and instantly killing him. This was reportedly the first
incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances.
A malfunctioning robot he went to inspect killed him when he obstructed its
path, according to Gabriel Hallevy in his 2013 book, When Robots Kill:
Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply
determined that "the most efficient way to eliminate the threat was to push
the worker into an adjacent machine." From 1992 to 2017, workplace robots were
responsible for 41 recorded deaths in the United States -- and that's likely
an underestimate, especially when you consider knock-on effects from automation,
such as job loss. A robotic anti-aircraft cannon killed nine South African
soldiers in 2007 when a possible software failure led the machine to swing
itself wildly and fire dozens of lethal rounds in less than a second. In a 2018
trial, a medical robot was implicated in killing Stephen Pettitt during a
routine operation that had occurred a few years earlier.

You get the picture. Robots -- "intelligent" and not -- have been killing
people for decades. And the development of more advanced artificial intelligence
has only increased the potential for machines to cause harm. Self-driving cars
are already on American streets, and robotic "dogs" are being used by law
enforcement. Computerized systems are being given the capabilities to use tools,
allowing them to directly affect the physical world. Why worry about the
theoretical emergence of an all-powerful, superintelligent program when more
immediate problems are at our doorstep? Regulation must push companies toward
safe innovation and innovation in safety. We are not there yet.

Historically, major disasters have needed to occur to spur regulation -- the
types of disasters we would ideally foresee and avoid in today's AI paradigm.
The 1905 Grover Shoe Factory disaster led to regulations governing the safe
operation of steam boilers. At the time, companies claimed that large
steam-automation machines were too complex to rush safety regulations. This, of
course, led to overlooked safety flaws and escalating disasters. It wasn't
until the American Society of Mechanical Engineers demanded risk analysis and
transparency that dangers from these huge tanks of boiling water, once
considered mystifying, were made easily understandable. The 1911 Triangle
Shirtwaist Factory fire led to regulations on sprinkler systems and emergency
exits. And the preventable 1912 sinking of the Titanic resulted in new
regulations on lifeboats, safety audits, and on-ship radios.

Perhaps the best analogy is the evolution of the Federal Aviation
Administration. Fatalities in the first decades of aviation forced regulation,
which required new developments in both law and technology. Starting with the
Air Commerce Act of 1926, Congress recognized that the integration of aerospace
tech into people's lives and our economy demanded the highest scrutiny. Today,
every airline crash is closely examined, motivating new technologies and
procedures.

Any regulation of industrial robots stems from existing industrial regulation,
which has been evolving for many decades. The Occupational Safety and Health Act
of 1970 established safety standards for machinery, and the Robotic Industries
Association, now merged into the Association for Advancing Automation, has been
instrumental in developing and updating specific robot-safety standards since
its founding in 1974. Those standards, with obscure names such as R15.06 and ISO
10218, emphasize inherent safe design, protective measures, and rigorous risk
assessments for industrial robots.

But as technology continues to change, the government needs to more clearly
regulate how and when robots can be used in society. Laws need to clarify who is
responsible, and what the legal consequences are, when a robot's actions
result in harm. Yes, accidents happen. But the lessons of aviation and workplace
safety demonstrate that accidents are preventable when they are openly discussed
and subjected to proper expert scrutiny.

AI and robotics companies don't want this to happen. OpenAI, for example, has
reportedly fought to "water down" safety regulations and reduce AI-quality
requirements. According to an article in Time, it lobbied European Union
officials against classifying models like ChatGPT as "high risk," which would
have brought "stringent legal requirements including transparency,
traceability, and human oversight." The reasoning was supposedly that OpenAI
did not intend to put its products to high-risk use -- a logical twist akin to
the Titanic owners lobbying that the ship should not be inspected for lifeboats
on the principle that it was a "general purpose" vessel that also could sail
in warm waters where there were no icebergs and people could float for days.
(OpenAI did not comment when asked about its stance on regulation; previously,
it has said that "achieving our mission requires that we work to mitigate both
current and longer-term risks," and that it is working toward that goal by
"collaborating with policymakers, researchers and users.")

Large corporations have a tendency to develop computer technologies to
self-servingly shift the burdens of their own shortcomings onto society at
large, or to claim that safety regulations protecting society impose an unjust
cost on corporations themselves, or that security baselines stifle innovation.
We've heard it all before, and we should be extremely skeptical of such
claims. Today's AI-related robot deaths are no different from the robot
accidents of the past. Those industrial robots malfunctioned, and human
operators trying to assist were killed in unexpected ways. Since the first-known
death resulting from the feature in January 2016, Tesla's Autopilot has been
implicated in more than 40 deaths according to official report estimates.
Malfunctioning Teslas on Autopilot have deviated from their advertised
capabilities by misreading road markings, suddenly veering into other cars or
trees, crashing into well-marked service vehicles, or ignoring red lights, stop
signs, and crosswalks. We're concerned that AI-controlled robots already are
moving beyond accidental killing in the name of efficiency and "deciding" to
kill someone in order to achieve opaque and remotely controlled objectives.

As we move into a future where robots are becoming integral to our lives, we
can't forget that safety is a crucial part of innovation. True technological
progress comes from applying comprehensive safety standards across technologies,
even in the realm of the most futuristic and captivating robotic visions. By
learning lessons from past fatalities, we can enhance safety protocols, rectify
design flaws, and prevent further unnecessary loss of life.

For example, the UK government already sets out statements that safety matters.
Lawmakers must reach further back in history to become more future-focused on
what we must demand right now: modeling threats, calculating potential
scenarios, enabling technical blueprints, and ensuring responsible engineering
for building within parameters that protect society at large. Decades of
experience have given us the empirical evidence to guide our actions toward a
safer future with robots. Now we need the political will to regulate.

This essay was written with Davi Ottenheimer, and previously appeared on
Atlantic.com.

** *** ***** ******* *********** *************

Cars Have Terrible Data Privacy

[2023.09.12] A new Mozilla Foundation report concludes that cars, all of them,
have terrible data privacy.

All 25 car brands we researched earned our *Privacy Not Included warning label
-- making cars the official worst category of products for privacy that we have
ever reviewed.

There's a lot of details in the report. They're all bad.

BoingBoing post.

** *** ***** ******* *********** *************

Zero-Click Exploit in iPhones

[2023.09.13] Make sure you update your iPhones:

Citizen Lab says two zero-days fixed by Apple today in emergency security
updates were actively abused as part of a zero-click exploit chain (dubbed
BLASTPASS) to deploy NSO Group's Pegasus commercial spyware onto fully patched
iPhones.

The two bugs, tracked as CVE-2023-41064 and CVE-2023-41061, allowed the
attackers to infect a fully-patched iPhone running iOS 16.6 and belonging to a
Washington DC-based civil society organization via PassKit attachments
containing malicious images.

"We refer to the exploit chain as BLASTPASS. The exploit chain was capable of
compromising iPhones running the latest version of iOS (16.6) without any
interaction from the victim," Citizen Lab said.

"The exploit involved PassKit attachments containing malicious images sent
from an attacker iMessage account to the victim."

** *** ***** ******* *********** *************

Fake Signal and Telegram Apps in the Google Play Store

[2023.09.14] Google removed fake Signal and Telegram apps from its Play store.

An app with the name Signal Plus Messenger was available on Play for nine months
and had been downloaded from Play roughly 100 times before Google took it down
last April after being tipped off by security firm ESET. It was also available
in the Samsung app store and on signalplus[.]org, a dedicated website mimicking
the official Signal.org. An app calling itself FlyGram, meanwhile, was created
by the same threat actor and was available through the same three channels.
Google removed it from Play in 2021. Both apps remain available in the Samsung
store.

Both apps were built on open source code available from Signal and Telegram.
Interwoven into that code was an espionage tool tracked as BadBazaar. The Trojan
has been linked to a China-aligned hacking group tracked as GREF. BadBazaar has
been used previously to target Uyghurs and other Turkic ethnic minorities. The
FlyGram malware was also shared in a Uyghur Telegram group, further aligning it
to previous targeting by the BadBazaar malware family.

Signal Plus could monitor sent and received messages and contacts if people
connected their infected device to their legitimate Signal number, as is normal
when someone first installs Signal on their device. Doing so caused the
malicious app to send a host of private information to the attacker, including
the device IMEI number, phone number, MAC address, operator details, location
data, Wi-Fi information, emails for Google accounts, contact list, and a PIN
used to transfer texts in the event one was set up by the user.

This kind of thing is really scary.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2023.09.14] This is a current list of where and when I am scheduled to speak:

I'm speaking at swampUP 2023 in San Jose, California, on September 13, 2023 at
11:35 AM PT.
The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A Hacker's Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2023 by Bruce Schneier.

** *** ***** ******* *********** *************