Crypto-Gram
July 15, 2023

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

Security and Human Behavior (SHB) 2023
Power LED Side-Channel Attack
Ethical Problems in Computer Security
AI as Sensemaking for Public Comments
UPS Data Harvested for SMS Phishing Attacks
Excel Data Forensics
Typing Incriminating Evidence in the Memo Field
Stalkerware Vendor Hacked
Redacting Documents with a Black Sharpie Doesn’t Work
The US Is Spying on the UN Secretary General
Self-Driving Cars Are Surveillance Cameras on Wheels
The Password Game
Class-Action Lawsuit for Scraping Data without Permission
Belgian Tax Hack
The AI Dividend
Wisconsin Governor Hacks the Veto Process
Privacy of Printing Services
Google Is Using Its Vast Data Stores to Train AI
French Police Will Be Able to Spy on People through Their Cell Phones
Buying Campaign Contributions as a Hack
** *** ***** ******* *********** *************

Security and Human Behavior (SHB) 2023

[2023.06.16] I’m just back from the sixteenth Workshop on Security and Human
Behavior, hosted by Alessandro Acquisti at Carnegie Mellon University in
Pittsburgh.

SHB is a small, annual, invitational workshop of people studying various
aspects of the human side of security, organized each year by Alessandro
Acquisti, Ross Anderson, and myself. The fifty or so attendees include
psychologists, economists, computer security researchers, criminologists,
sociologists, political scientists, designers, lawyers, philosophers,
anthropologists, geographers, neuroscientists, business school professors, and a
smattering of others. It’s not just an interdisciplinary event; most of the
people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting
everyone on panels, and limiting talks to six to eight minutes, with the rest of
the time for open discussion. Short talks limit presenters’ ability to get
into the boring details of their work, and the interdisciplinary audience
discourages jargon.

For the past decade and a half, this workshop has been the most intellectually
stimulating two days of my professional year. It influences my thinking in
different and sometimes surprising ways -- and has resulted in some unexpected
collaborations.

And that’s what’s valuable. One of the most important outcomes of the event
is new collaborations. Over the years, we have seen new interdisciplinary
research between people who met at the workshop, and ideas and methodologies
move from one field into another based on connections made at the workshop. This
is why some of us have been coming back every year for over a decade.

This year’s schedule is here. This page lists the participants and includes
links to some of their work. As he does every year, Ross Anderson is live
blogging the talks. We are back 100% in person after two years of fully remote
and one year of hybrid.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh,
eighth, ninth, tenth, eleventh, twelfth, thirteenth, fourteenth, and fifteenth
SHB workshops. Follow those links to find summaries, papers, and occasionally
audio/video recordings of the sessions. Ross also maintains a good webpage of
psychology and security resources.

It’s actually hard to believe that the workshop has been going on for this
long, and that it’s still vibrant. We rotate between organizers, so next year
is my turn in Cambridge (the Massachusetts one).

** *** ***** ******* *********** *************

Power LED Side-Channel Attack

[2023.06.19] This is a clever new side-channel attack:

The first attack uses an Internet-connected surveillance camera to take a
high-speed video of the power LED on a smart card reader -- or of an attached
peripheral device -- during cryptographic operations. This technique allowed the
researchers to pull a 256-bit ECDSA key off the same government-approved smart
card used in Minerva. The other allowed the researchers to recover the private
SIKE key of a Samsung Galaxy S8 phone by training the camera of an iPhone 13 on
the power LED of a USB speaker connected to the handset, in a similar way to how
Hertzbleed pulled SIKE keys off Intel and AMD CPUs.

There are lots of limitations:

When the camera is 60 feet away, the room lights must be turned off, but they
can be turned on if the surveillance camera is at a distance of about 6 feet.
(An attacker can also use an iPhone to record the smart card reader power LED.)
The video must be captured for 65 minutes, during which the reader must
constantly perform the operation.

[...]

The attack assumes there is an existing side channel that leaks power
consumption, timing, or other physical manifestations of the device as it
performs a cryptographic operation.

So don’t expect this attack to be recovering keys in the real world anytime
soon. But, still, really nice work.

More details from the researchers.
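The measurement side of this is strikingly simple: the LED's brightness in each
video frame serves as a crude power trace (the researchers get much higher
effective sampling rates by exploiting the camera's rolling shutter). Here is a
minimal sketch of that first step in Python with OpenCV; the filename and LED
region of interest are placeholder assumptions, and a real attack needs far
more signal processing:

    # Sketch: turn video of a device's power LED into a crude power trace
    # by averaging pixel intensity over a small region of interest (ROI).
    # The path and ROI coordinates are placeholder assumptions.
    import cv2
    import numpy as np

    def led_trace(video_path, roi=(100, 100, 8, 8)):
        x, y, w, h = roi  # small patch covering the LED
        cap = cv2.VideoCapture(video_path)
        samples = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Mean green-channel intensity as a brightness proxy; its
            # fluctuations track the device's power draw.
            samples.append(frame[y:y + h, x:x + w, 1].mean())
        cap.release()
        return np.asarray(samples)

    trace = led_trace("smartcard_reader.mp4")
    print(f"{trace.size} frames sampled")

From a trace like this, the published attacks apply standard power-analysis
techniques to recover key bits; the sketch only shows how little hardware the
measurement requires.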

** *** ***** ******* *********** *************

Ethical Problems in Computer Security

[2023.06.21] Tadayoshi Kohno, Yasemin Acar, and Wulf Loh wrote an excellent
paper on ethical thinking within the computer security community: “Ethical
Frameworks and Computer Security Trolley Problems: Foundations for
Conversation”:

Abstract: The computer security research community regularly tackles ethical
questions. The field of ethics / moral philosophy has for centuries considered
what it means to be “morally good” or at least “morally allowed /
acceptable.” Among philosophy’s contributions are (1) frameworks for
evaluating the morality of actions -- including the well-established
consequentialist and deontological frameworks -- and (2) scenarios (like trolley
problems) featuring moral dilemmas that can facilitate discussion about and
intellectual inquiry into different perspectives on moral reasoning and
decision-making. In a classic trolley problem, consequentialist and
deontological analyses may render different opinions. In this research, we
explicitly make and explore connections between moral questions in computer
security research and ethics / moral philosophy through the creation and
analysis of trolley problem-like computer security-themed moral dilemmas and, in
doing so, we seek to contribute to conversations among security researchers
about the morality of security research-related decisions. We explicitly do not
seek to define what is morally right or wrong, nor do we argue for one framework
over another. Indeed, the consequentialist and deontological frameworks that we
center, in addition to coming to different conclusions for our scenarios, have
significant limitations. Instead, by offering our scenarios and by comparing two
different approaches to ethics, we strive to contribute to how the computer
security research field considers and converses about ethical questions,
especially when there are different perspectives on what is morally right or
acceptable. Our vision is for this work to be broadly useful to the computer
security community, including to researchers as they embark on (or choose not to
embark on), conduct, and write about their research, to program committees as
they evaluate submissions, and to educators as they teach about computer
security and ethics.

The paper will be presented at USENIX Security.

** *** ***** ******* *********** *************

AI as Sensemaking for Public Comments

[2023.06.22] It’s become fashionable to think of artificial intelligence as an
inherently dehumanizing technology, a ruthless force of automation that has
unleashed legions of virtual skilled laborers in faceless form. But what if AI
turns out to be the one tool able to identify what makes your ideas special,
recognizing your unique perspective and potential on the issues where it matters
most?

You’d be forgiven if you’re distraught about society’s ability to grapple
with this new technology. So far, there’s no lack of prognostications about
the democratic doom that AI may wreak on the US system of government. There are
legitimate reasons to be concerned that AI could spread misinformation, break
public comment processes on regulations, inundate legislators with artificial
constituent outreach, help to automate corporate lobbying, or even generate laws
in a way tailored to benefit narrow interests.

But there are reasons to feel more sanguine as well. Many groups have started
demonstrating the potential beneficial uses of AI for governance. A key
constructive-use case for AI in democratic processes is to serve as discussion
moderator and consensus builder.

To help democracy scale better in the face of growing, increasingly
interconnected populations -- as well as the wide availability of AI language
tools that can generate reams of text at the click of a button -- the US will
need to leverage AI’s capability to rapidly digest, interpret and summarize
this content.

There are two different ways to approach the use of generative AI to improve
civic participation and governance. Each is likely to lead to a drastically
different experience for public policy advocates and other people trying to have
their voice heard in a future system where AI chatbots are both the dominant
readers and writers of public comment.

For example, consider individual letters to a representative, or comments as
part of a regulatory rulemaking process. In both cases, we the people are
telling the government what we think and want.

For more than half a century, agencies have been using human power to read
through all the comments received, and to generate summaries and responses of
their major themes. To be sure, digital technology has helped.

In 2021, the Council of Federal Chief Data Officers recommended modernizing the
comment review process by implementing natural language processing tools for
removing duplicates and clustering similar comments in processes governmentwide.
These tools are simplistic by the standards of 2023 AI. They work by assessing
the semantic similarity of comments using metrics like word frequency (how
often did you say “personhood”?), then clustering similar comments to give
reviewers a sense of which topics they relate to.
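As a concrete illustration of that kind of tooling (my sketch, with
scikit-learn and a toy threshold, not anything an agency actually runs),
near-duplicate detection over word-frequency vectors looks like this:

    # Sketch: flag near-duplicate comments via cosine similarity of
    # TF-IDF (word-frequency) vectors. The 0.9 threshold is illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    comments = [
        "Please protect personhood in the final rule.",
        "please protect personhood in the final rule!!",
        "The rule should weigh costs to small businesses.",
    ]
    sim = cosine_similarity(TfidfVectorizer().fit_transform(comments))
    for i in range(len(comments)):
        for j in range(i + 1, len(comments)):
            if sim[i, j] > 0.9:  # treat as duplicates past this threshold
                print(f"comment {j} duplicates comment {i} ({sim[i, j]:.2f})")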

Think of this approach as collapsing public opinion. These tools take a big, hairy mass
of comments from thousands of people and condense them into a tidy set of
essential reading that generally suffices to represent the broad themes of
community feedback. This is far easier for a small agency staff or legislative
office to handle than it would be for staffers to actually read through that
many individual perspectives.

But what’s lost in this collapsing is individuality, personality, and
relationships. The reviewer of the condensed comments may miss the personal
circumstances that led so many commenters to write in with a common point of
view, and may overlook the arguments and anecdotes that might be the most
persuasive content of the testimony.

Most importantly, the reviewers may miss out on the opportunity to recognize
committed and knowledgeable advocates, whether interest groups or individuals,
who could have long-term, productive relationships with the agency.

These drawbacks have real ramifications for the potential efficacy of those
thousands of individual messages, undermining what all those people were doing
it for. Still, practicality tips the balance toward some kind of
summarization approach. A passionate letter of advocacy doesn’t hold any value
if regulators or legislators simply don’t have time to read it.

There is another approach. In addition to collapsing testimony through
summarization, government staff can use modern AI techniques to explode it. They
can automatically recover and recognize a distinctive argument from one piece of
testimony that does not exist in the thousands of other testimonies received.
They can discover the kinds of constituent stories and experiences that
legislators love to repeat at hearings, town halls and campaign events. This
approach can sustain the potential impact of individual public comment to shape
legislation even as the volumes of testimony may rise exponentially.

In computing, there is a rich history of that type of automation task in what is
called outlier detection. Traditional methods generally involve finding a simple
model that explains most of the data in question, like a set of topics that well
describe the vast majority of submitted comments. But then they go a step
further by isolating those data points that fall outside the mold -- comments
that don’t use arguments that fit into the neat little clusters.
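A sketch of that two-step recipe, under the same illustrative assumptions as
the earlier snippet: fit clusters that explain most of the comments, then
surface the comment that sits farthest from every cluster.

    # Sketch: k-means over TF-IDF vectors, then rank comments by distance
    # to their nearest centroid; the farthest ones are the outliers.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    comments = [
        "Ban the pipeline, it pollutes our river.",
        "Ban the pipeline, it pollutes our water.",
        "Approve the pipeline, it creates jobs.",
        "Approve the pipeline, jobs matter here.",
        "My farm's well went dry after the 2019 spill; here is my data.",
    ]
    X = TfidfVectorizer().fit_transform(comments)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    dist = km.transform(X).min(axis=1)  # distance to nearest centroid
    print(comments[int(np.argmax(dist))])  # the comment no cluster explains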

State-of-the-art AI language models aren’t necessary for identifying outliers
in text document data sets, but using them could bring a greater degree of
sophistication and flexibility to this procedure. AI language models can be
tasked to identify novel perspectives within a large body of text through
prompting alone. You simply need to tell the AI to find them.

In the absence of that ability to extract distinctive comments, lawmakers and
regulators have no choice but to prioritize on other factors. If there is
nothing better, “who donated the most to our campaign” or “which company
employs the most of my former staffers” become reasonable metrics for
prioritizing public comments. AI can help elected representatives do much
better.

If Americans want AI to help revitalize the country’s ailing democracy, they
need to think about how to align the incentives of elected leaders with those of
individuals. Right now, as much as 90% of constituent communications are mass
emails organized by advocacy groups, and they go largely ignored by staffers.
People are channeling their passions into vast digital warehouses where
algorithms box up their expressions so they don’t have to be read. As a
result, the incentive for citizens and advocacy groups is to fill that box up to
the brim, so someone will notice it’s overflowing.

A talented, knowledgeable, engaged citizen should be able to articulate their
ideas and share their personal experiences and distinctive points of view in a
way that they can be both included with everyone else’s comments where they
contribute to summarization and recognized individually among the other
comments. An effective comment summarization process would extricate those
unique points of view from the pile and put them into lawmakers’ hands.

This essay was written with Nathan Sanders, and previously appeared in the
Conversation.

** *** ***** ******* *********** *************

UPS Data Harvested for SMS Phishing Attacks

[2023.06.23] I get UPS phishing spam on my phone all the time. I never click on
it, because it’s so obviously spam. It turns out that hackers have been
harvesting actual UPS delivery data from a Canadian tracking tool for their
phishing SMSs.

** *** ***** ******* *********** *************

Excel Data Forensics

[2023.06.26] In this detailed article about academic plagiarism are some
interesting details about how to do data forensics on Excel files. It really
needs the graphics to be understood, so see the description at the link.

(And, yes, an author of a paper on dishonesty is being accused of dishonesty.
There’s more evidence.)
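The article's specifics need its graphics, but the underlying trick travels
well: an .xlsx file is just a zip archive of XML parts, and some of those
parts, such as xl/calcChain.xml (the order in which formula cells were
calculated), can preserve traces of rows being moved or edited. A minimal peek
inside one, with a placeholder filename:

    # Sketch: an .xlsx is a zip of XML parts; list them and dump the
    # calculation chain, which can retain forensic ordering artifacts.
    import zipfile

    with zipfile.ZipFile("suspect.xlsx") as xlsx:
        print("\n".join(xlsx.namelist()))  # sheets, styles, calcChain...
        if "xl/calcChain.xml" in xlsx.namelist():
            print(xlsx.read("xl/calcChain.xml").decode("utf-8")[:400])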

EDITED TO ADD (7/13): Guardian article.

** *** ***** ******* *********** *************

Typing Incriminating Evidence in the Memo Field

[2023.06.27] Don’t do it:

Recently, the manager of the Harvard Med School morgue was accused of stealing
and selling human body parts. Cedric Lodge and his wife Denise were among a
half-dozen people arrested for some pretty grotesque crimes. This part is also
at least a little bit funny though:

Over a three-year period, Taylor appeared to pay Denise Lodge more than $37,000
for human remains. One payment, for $1,000, included the memo “head number
7.” Another, for $200, read “braiiiiiins.”

It’s so easy to think that you won’t get caught.

** *** ***** ******* *********** *************

Stalkerware Vendor Hacked

[2023.06.28] The stalkerware company LetMeSpy has been hacked:

TechCrunch reviewed the leaked data, which included years of victims’ call
logs and text messages dating back to 2013.

The database we reviewed contained current records on at least 13,000
compromised devices, though some of the devices shared little to no data with
LetMeSpy. (LetMeSpy claims to delete data after two months of account
inactivity.)

[...]

The database also contained over 13,400 location data points for several
thousand victims. Most of the location data points are centered over population
hotspots, suggesting the majority of victims are located in the United States,
India and Western Africa.

The data also contained the spyware’s master database, including information
about 26,000 customers who used the spyware for free and the email addresses of
customers who bought paying subscriptions.

The leaked data contains no identifying information, which means people whose
data was leaked can’t be notified. (This is actually much more complicated
than it might seem, because alerting the victims often means alerting the
stalker -- which can put the victims into unsafe situations.)

** *** ***** ******* *********** *************

Redacting Documents with a Black Sharpie Doesn’t Work

[2023.06.29] We have learned this lesson again:

As part of the FTC v. Microsoft hearing, Sony supplied a document from
PlayStation chief Jim Ryan that includes redacted details on the margins Sony
shares with publishers, its Call of Duty revenues, and even the cost of
developing some of its games.

It looks like someone redacted the documents with a black Sharpie, but when you
scan them in, it’s easy to see some of the redactions. Oops.

I don’t particularly care about the redacted information, but it’s there in
the article.
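For a sense of how little effort the "seeing" takes: my hedged guess at the
kind of one-liner involved is plain contrast stretching, since marker ink is
rarely opaque to a scanner. The filename and threshold are placeholders:

    # Sketch: scanned marker redactions often aren't opaque; stretching
    # contrast and thresholding can make the underlying text readable.
    from PIL import Image, ImageEnhance

    img = Image.open("scanned_page.png").convert("L")   # grayscale scan
    img = ImageEnhance.Contrast(img).enhance(3.0)       # stretch contrast
    img.point(lambda p: 255 if p > 60 else 0).save("revealed.png")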

** *** ***** ******* *********** *************

The US Is Spying on the UN Secretary General

[2023.06.30] The Washington Post is reporting that the US is spying on the UN
Secretary General.

The reports on Guterres appear to contain the secretary general’s personal
conversations with aides regarding diplomatic encounters. They indicate that the
United States relied on spying powers granted under the Foreign Intelligence
Surveillance Act (FISA) to gather the intercepts.

Lots of details about different conversations in the article, which are based on
classified documents leaked on Discord by Jack Teixeira.

There will probably be a lot of faux outrage at this, but spying on foreign
leaders is a perfectly legitimate use of the NSA’s capabilities and authorities.
(If the NSA didn’t spy on the UN Secretary General, we should fire it and
replace it with a more competent NSA.) It’s the bulk surveillance of whole populations
that should outrage us.

** *** ***** ******* *********** *************

Self-Driving Cars Are Surveillance Cameras on Wheels

[2023.07.03] Police are already using self-driving car footage as video
evidence:

While security cameras are commonplace in American cities, self-driving cars
represent a new level of access for law enforcement and a new method for
encroachment on privacy, advocates say. Crisscrossing the city on their routes,
self-driving cars capture a wider swath of footage. And it’s easier for law
enforcement to turn to one company with a large repository of videos and a
dedicated response team than to reach out to all the businesses in a
neighborhood with security systems.

“We’ve known for a long time that they are essentially surveillance cameras
on wheels,” said Chris Gilliard, a fellow at the Social Science Research
Council. “We’re supposed to be able to go about our business in our
day-to-day lives without being surveilled unless we are suspected of a crime,
and each little bit of this technology strips away that ability.”

[...]

While self-driving services like Waymo and Cruise have yet to achieve the same
level of market penetration as Ring, the wide range of video they capture while
completing their routes presents other opportunities. In addition to the San
Francisco homicide, Bloomberg’s review of court documents shows police have
sought footage from Waymo and Cruise to help solve hit-and-runs, burglaries,
aggravated assaults, a fatal collision and an attempted kidnapping.

In all cases reviewed by Bloomberg, court records show that police collected
footage from Cruise and Waymo shortly after obtaining a warrant. In several
cases, Bloomberg could not determine whether the recordings had been used in the
resulting prosecutions; in a few of the cases, law enforcement and attorneys
said the footage had not played a part, or was only a formality. However, video
evidence has become a lynchpin of criminal cases, meaning it’s likely only a
matter of time.

** *** ***** ******* *********** *************

The Password Game

[2023.07.04] Amusing parody of password rules.

BoingBoing:

For example, at a certain level, your password must include today’s Wordle
answer. And then there’s rule #27: “At least 50% of your password must be in
the Wingdings font.”

EDITED TO ADD (7/13): Here are all the rules.

** *** ***** ******* *********** *************

Class-Action Lawsuit for Scraping Data without Permission

[2023.07.05] I have mixed feelings about this class-action lawsuit against
OpenAI and Microsoft, claiming that it “scraped 300 billion words from the
internet” without either registering as a data broker or obtaining consent. On
the one hand, I want this to be a protected fair use of public data. On the
other hand, I want us all to be compensated for our uniquely human ability to
generate language.

There’s an interesting wrinkle on this. A recent paper showed that using
AI-generated text to train another AI invariably “causes irreversible defects.”
From a summary:

The tails of the original content distribution disappear. Within a few
generations, text becomes garbage, as Gaussian distributions converge and may
even become delta functions. We call this effect model collapse.

Just as we’ve strewn the oceans with plastic trash and filled the atmosphere
with carbon dioxide, so we’re about to fill the Internet with blah. This will
make it harder to train newer models by scraping the web, giving an advantage to
firms which already did that, or which control access to human interfaces at
scale. Indeed, we already see AI startups hammering the Internet Archive for
training data.

This is the same idea that Ted Chiang wrote about: that ChatGPT is a “blurry
JPEG of all the text on the Web.” But the paper includes the math that proves
the claim.
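A toy simulation conveys the flavor (my illustration, not the paper's math):
each generation fits a Gaussian to samples drawn from the previous generation's
fit, and with finite samples the estimated spread tends to decay toward zero,
which is the tails disappearing.

    # Toy "model collapse": each generation trains on the previous
    # generation's output; the fitted spread tends to shrink away.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0                     # generation 0: real data
    for gen in range(1, 31):
        samples = rng.normal(mu, sigma, 20)  # small training set
        mu, sigma = samples.mean(), samples.std()
        if gen % 10 == 0:
            print(f"generation {gen}: sigma = {sigma:.3f}")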

What this means is that text from before last year -- text that is known
human-generated -- will become increasingly valuable.

** *** ***** ******* *********** *************

Belgian Tax Hack

[2023.07.06] Here’s a fascinating tax hack from Belgium (listen to the details
here, episode #484 of “No Such Thing as a Fish,” at 28:00).

Basically, it’s about a music festival on the border between Belgium and
Holland. The stage was in Holland, but the crowd was in Belgium. When the
copyright collector came around, they argued that they didn’t have to pay any
tax because the audience was in a different country. Supposedly it worked.

** *** ***** ******* *********** *************

The AI Dividend

[2023.07.07] For four decades, Alaskans have opened their mailboxes to find
checks waiting for them, their cut of the black gold beneath their feet. This is
Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to
every Alaskan each year. We’re now in a different sort of resource rush, with
companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies -- like ChatGPT -- and AI
companies are touting their awesome power. But they aren’t talking about how
that power comes from all of us. Without all of our writings and photos that AI
companies are using to train their models, they would have nothing to sell. Big
Tech companies are currently taking the work of the American people, without our
knowledge and consent, without licensing it, and are pocketing the proceeds.

You are owed profits for your data that powers today’s AI, and we have a way
to make that happen. We call it the AI Dividend.

Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech
companies produce output from generative AI that was trained on public data,
they would pay a tiny licensing fee, by the word or pixel or relevant unit of
data. Those fees would go into the AI Dividend fund. Every few months, the
Commerce Department would send out the entirety of the fund, split equally, to
every resident nationwide. That’s it.

There’s no reason to complicate it further. Generative AI needs a wide variety
of data, which means all of us are valuable -- not just those of us who write
professionally, or prolifically, or well. Figuring out who contributed to which
words the AIs output would be both challenging and invasive, given that even the
companies themselves don’t quite know how their models work. Paying the
dividend to people in proportion to the words or images they create would just
incentivize them to create endless drivel, or worse, use AI to create that
drivel. The bottom line for Big Tech is that if their AI model was created using
public data, they have to pay into the fund. If you’re an American, you get
paid from the fund.

Under this plan, hobbyists and American small businesses would be exempt from
fees. Only Big Tech companies -- those with substantial revenue -- would be
required to pay into the fund. And they would pay at the point of generative AI
output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party
services via Application Programming Interfaces.

Our proposal also includes a compulsory licensing plan. By agreeing to pay into
this fund, AI companies will receive a license that allows them to use public
data when training their AI. This wonΓÇÖt supersede normal copyright law, of
course. If a model starts producing copyright material beyond fair use, thatΓÇÖs
a separate issue.

Using today’s numbers, here’s what it would look like. The licensing fee
could be small, starting at $0.001 per word generated by AI. A similar type of
fee would be applied to other categories of generative AI outputs, such as
images. That’s not a lot, but it adds up. Since most of Big Tech has started
integrating generative AI into products, these fees would mean an annual
dividend payment of a couple hundred dollars per person.
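The arithmetic behind that estimate is easy to check. The fee is the essay's;
the volume of AI-generated words is an assumption picked only to show the shape
of the calculation:

    # Back-of-envelope AI Dividend. Fee from the essay; volume assumed.
    FEE_PER_WORD = 0.001      # dollars per AI-generated word
    WORDS_PER_YEAR = 100e12   # assumed: 100 trillion words/year industry-wide
    US_POPULATION = 335e6     # rough 2023 figure

    fund = FEE_PER_WORD * WORDS_PER_YEAR            # $100 billion/year
    print(f"dividend: ${fund / US_POPULATION:.0f} per person per year")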

The idea of paying you for your data isn’t new, and some companies have tried
to do it themselves for users who opted in. And the idea of the public being
repaid for use of their resources goes back to well before Alaska’s oil fund.
But generative AI is different: It uses data from all of us whether we like it
or not, it’s ubiquitous, and it’s potentially immensely valuable. It would
cost Big Tech companies a fortune to create a synthetic equivalent to our data
from scratch, and synthetic data would almost certainly result in worse output.
They can’t create good AI without us.

Our plan would apply to generative AI used in the US. It also only issues a
dividend to Americans. Other countries can create their own versions, applying a
similar fee to AI used within their borders. Just like an American company
collects VAT for services sold in Europe, but not here, each country can
independently manage their AI policy.

Don’t get us wrong; this isn’t an attempt to strangle this nascent
technology. Generative AI has interesting, valuable, and possibly transformative
uses, and this policy is aligned with that future. Even with the fees of the AI
Dividend, generative AI will be cheap and will only get cheaper as technology
improves. There are also risks -- both everyday and esoteric -- posed by AI,
and the government may need to develop policies to remedy any harms that arise.

Our plan can’t make sure there are no downsides to the development of AI, but
it would ensure that all Americans will share in the upsides -- particularly
since this new technology isnΓÇÖt possible without our contribution.

This essay was written with Barath Raghavan, and previously appeared on
Politico.com.

** *** ***** ******* *********** *************

Wisconsin Governor Hacks the Veto Process

[2023.07.10] In my latest book, A Hacker’s Mind, I wrote about hacks as
loophole exploiting. This is a great example: The Wisconsin governor used his
line-item veto powers -- supposedly unique in their specificity -- to change a
one-year funding increase into a 400-year funding increase.

He took this wording:

Section 402. 121.905 (3) (c) 9. of the statutes is created to read: 121.903 (3)
(c) 9. For the limit for the 2023-24 school year and the 2024-25 school year,
add $325 to the result under par. (b).

And he deleted these words, numbers, and punctuation marks:

Section 402. 121.905 (3) (c) 9. of the statutes is created to read: 121.903 (3)
(c) 9. For the limit for the 2023-24 school year and the 2024-25 school year,
add $325 to the result under par. (b).

(In the original post, the deleted characters are shown struck through, which
plain text loses. The deletions splice “2023-24” and “25” together so the
limit runs to the year 2425.)

Seems to be legal:

Rick Champagne, director and general counsel of the nonpartisan Legislative
Reference Bureau, said Evers’ 400-year veto is lawful in terms of its form
because the governor vetoed words and digits.

“Both are allowable under the constitution and court decisions on partial
veto. The hyphen seems to be new, but the courts have allowed partial veto of
punctuation,” Champagne said.

Definitely a hack. This is not what anyone thinks about when they imagine using
a line-item veto.

And it’s not the first time. I don’t know the details, but this was
certainly the same sort of character-by-character editing:

Mr Evers’ Republican predecessor once deployed it to extend a state
programme’s deadline by one thousand years.

A couple of other things:

One, this isn’t really a 400-year change. Yes, that’s what the law says. But
it can be repealed. And who knows what a dollar will be worth -- or if dollars
will even be used -- that many decades from now.

And two, from now on, all Wisconsin lawmakers will have to be on the alert for
this sort of thing. All contentious bills will be examined for the possibility
of this sort of delete-only rewriting. This sentence could have been reworded,
for example:

For the 2023-2025 school years, add $325 to the result under par. (b).

The problem is, of course, that legalese developed over the centuries to be
extra wordy in order to limit disputes. If lawmakers need to state things in the
minimal viable language, that will increase court battles later. And that’s
not even enough. Bills can be thousands of words long. If any arbitrary
characters can be glued together by deleting enough other characters, bills can
say anything the governor wants.
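Put formally, a delete-only veto can produce any target text that is a
subsequence of the bill, which a few lines of code can check. The post-veto
wording below is my illustrative reading that splices the dates into 2023-2425,
not the enrolled text:

    # A delete-only rewrite can reach any subsequence of the bill text.
    def reachable_by_deletion(bill, target):
        chars = iter(bill)
        return all(c in chars for c in target)  # greedy subsequence match

    bill = ("For the limit for the 2023-24 school year and the 2024-25 "
            "school year, add $325 to the result under par. (b).")
    target = ("For the limit for 2023-2425, "
              "add $325 to the result under par. (b).")
    print(reachable_by_deletion(bill, target))  # True

The longer the bill, the larger the space of reachable sentences, which is
exactly why delete-only review is so hard.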

The real solution is to return the line-item veto to what we all think it is:
the ability to remove individual whole provisions from a law before signing it.

** *** ***** ******* *********** *************

Privacy of Printing Services

[2023.07.11] The Washington Post has an article about popular printing services,
and whether or not they read your documents and mine the data when you use them
for printing:

Ideally, printing services should avoid storing the content of your files, or at
least delete them daily. Print services should also communicate clearly upfront
what information they’re collecting and why. Some services, like the New York
Public Library and PrintWithMe, do both.

Others dodged our questions about what data they collect, how long they store it
and whom they share it with. Some -- including Canon, FedEx and Staples --
declined to answer basic questions about their privacy practices.

** *** ***** ******* *********** *************

Google Is Using Its Vast Data Stores to Train AI

[2023.07.12] No surprise, but Google just changed its privacy policy to reflect
broader uses of all the surveillance data it has captured over the years:

Research and development: Google uses information to improve our services and to
develop new products, features and technologies that benefit our users and the
public. For example, we use publicly available information to help train
Google’s AI models and build products and features like Google Translate,
Bard, and Cloud AI capabilities.

(I quote the privacy policy as of today. The Mastodon link quotes the privacy
policy from ten days ago. So things are changing fast.)

** *** ***** ******* *********** *************

French Police Will Be Able to Spy on People through Their Cell Phones

[2023.07.13] The French police are getting new surveillance powers:

French police should be able to spy on suspects by remotely activating the
camera, microphone and GPS of their phones and other devices, lawmakers agreed
late on Wednesday, July 5.

[...]

Covering laptops, cars and other connected objects as well as phones, the
measure would allow the geolocation of suspects in crimes punishable by at least
five years’ jail. Devices could also be remotely activated to record sound and
images of people suspected of terror offenses, as well as delinquency and
organized crime.

[...]

During a debate on Wednesday, MPs in President Emmanuel Macron’s camp inserted
an amendment limiting the use of remote spying to “when justified by the
nature and seriousness of the crime” and “for a strictly proportional
duration.” Any use of the provision must be approved by a judge, while the
total duration of the surveillance cannot exceed six months. And sensitive
professions including doctors, journalists, lawyers, judges and MPs would not be
legitimate targets.

** *** ***** ******* *********** *************

Buying Campaign Contributions as a Hack

[2023.07.14] The first Republican primary debate has a popularity threshold to
determine who gets to appear: 40,000 individual contributors. Now there are a
lot of conventional ways a candidate can get that many contributors. Doug Burgum
came up with a novel idea: buy them:

A long-shot contender at the bottom of recent polls, Mr. Burgum is offering $20
gift cards to the first 50,000 people who donate at least $1 to his campaign.
And one lucky donor, as his campaign advertised on Facebook, will have the
chance to win a Yeti Tundra 45 cooler that typically costs more than $300 --
just for donating at least $1.

It’s actually a pretty good idea. He could have spent the money on direct
mail, or personalized social media ads, or television ads. Instead, he buys gift
cards at maybe two-thirds of face value (sellers calculate the advertising
value, the additional revenue that comes from using them to buy something more
expensive, and breakage when they’re not redeemed at all), and resells them.
Plus, many contributors probably give him more than $1, and he got a lot of
publicity over this.
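A back-of-envelope confirms the economics (the two-thirds figure is from above;
the assumption that every donor gives exactly $1 is mine):

    # Gift-card donor acquisition, roughly.
    DONORS = 50_000
    CARD_FACE = 20.00
    COST_FRACTION = 2 / 3   # cards bought below face value

    net_cost = DONORS * CARD_FACE * COST_FRACTION - DONORS * 1.00
    print(f"net cost per qualifying donor: ${net_cost / DONORS:.2f}")  # $12.33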

Clever hack.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A Hacker’s Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2023 by Bruce Schneier.

** *** ***** ******* *********** *************
