AT2k Design BBS Message Area
From: Sean Rima   To: All   Subject: CRYPTO-GRAM, December 15, 2018   Date: December 16, 2018 12:00 PM

Crypto-Gram
December 15, 2018

by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************
In this issue:

    Chip Cards Fail to Reduce Credit Card Fraud in the US
    Hidden Cameras in Streetlights
    Mailing Tech Support a Bomb
    Israeli Surveillance Gear
    Worst-Case Thinking Breeds Fear and Irrationality
    What Happened to Cyber 9/11?
    The PCLOB Needs a Director
    Information Attacks against Democracies
    Using Machine Learning to Create Fake Fingerprints
    How Surveillance Inhibits Freedom of Expression
    Propaganda and the Weakening of Trust in Government
    Distributing Malware By Becoming an Admin on an Open-Source Project
    FBI Takes Down a Massive Advertising Fraud Ring
    That Bloomberg Supply-Chain-Hack Story
    Three-Rotor Enigma Machine Up for Auction Today
    Click Here to Kill Everybody News
    The DoJ's Secret Legal Arguments to Break Cryptography
    Bad Consumer Security Advice
    Security Risks of Chatbots
    Your Personal Data is Already Stolen
    Banks Attacked through Malicious Hardware Connected to the Local Network
    Back Issues of the NSA's Cryptolog
    2018 Annual Report from AI Now
    New Australian Backdoor Law
    Marriott Hack Reported as Chinese State-Sponsored
    Real-Time Attacks Against Two-Factor Authentication

** *** ***** ******* *********** *************
Chip Cards Fail to Reduce Credit Card Fraud in the US

[2018.11.15] A new study finds that credit card fraud has not declined since the
introduction of chip cards in the US. The majority of stolen card information
comes from hacked point-of-sale terminals.

The reasons seem to be twofold. One, the US uses chip-and-signature instead of
chip-and-PIN, obviating the most critical security benefit of the chip. And two,
US merchants still accept magnetic stripe cards, meaning that thieves can steal
credentials from a chip card and create a working cloned mag stripe card.

Boing Boing post.

** *** ***** ******* *********** *************
Hidden Cameras in Streetlights

[2018.11.16] Both the US Drug Enforcement Administration (DEA) and Immigration
and Customs Enforcement (ICE) are hiding surveillance cameras in streetlights.

    According to government procurement data, the DEA has paid a Houston, Texas
company called Cowboy Streetlight Concealments LLC roughly $22,000 since June
2018 for "video recording and reproducing equipment." ICE paid out about
$28,000 to Cowboy Streetlight Concealments over the same period of time.

    It's unclear where the DEA and ICE streetlight cameras have been installed,
or where the next deployments will take place. ICE offices in Dallas, Houston,
and San Antonio have provided funding for recent acquisitions from Cowboy
Streetlight Concealments; the DEA's most recent purchases were funded by the
agency's Office of Investigative Technology, which is located in Lorton,
Virginia.

Fifty thousand dollars doesn't buy a lot of streetlight surveillance cameras, so
either this is a pilot program or there are a lot more procurements elsewhere
that we don't know about.

** *** ***** ******* *********** *************
Mailing Tech Support a Bomb

[2018.11.16] I understand his frustration, but this is extreme:

    When police asked Cryptopay what could have motivated Salonen to send the
company a pipe bomb -- or, rather, two pipe bombs, which is what investigators
found when they picked apart the explosive package -- the only thing the company
could think of was that it had declined his request for a password change.

    In August 2017, Salonen, a customer of Cryptopay, emailed their customer
services team to ask for a new password. They refused, given that it was against
the company's privacy policy.

    A fair point, as it's never a good idea to send a new password in an email.
A password-reset link is safer all round, although it's not clear if Cryptopay
offered this option to Salonen.

** *** ***** ******* *********** *************
Israeli Surveillance Gear

[2018.11.18] The Israeli Defense Force mounted a botched raid in Gaza. They were
attempting to install surveillance gear, which they ended up leaving behind.
(There are photos -- scroll past the video.) Israeli media is claiming that the
capture of this gear by Hamas causes major damage to Israeli electronic
surveillance capabilities. The Israelis themselves destroyed the vehicle the
commandos used to enter Gaza. I'm guessing they did so because there was more
gear in it they didn't want falling into the Palestinians' hands.

Can anyone intelligently speculate about what the photos show? And if there are
other photos on the Internet, please post them.

** *** ***** ******* *********** *************
Worst-Case Thinking Breeds Fear and Irrationality

[2018.11.18] Here's a crazy story from the UK. Basically, someone sees a man and
a little girl leaving a shopping center. Instead of thinking "it must be a
father and daughter, which happens millions of times a day and is perfectly
normal," he thinks "this is obviously a case of child abduction and I must alert
the authorities immediately." And the police, instead of thinking "why in the
world would this be a kidnapping and not a normal parental activity," thinks "oh
my god, we must all panic immediately." And they do, scrambling helicopters,
searching cars leaving the shopping center, and going door-to-door looking for
clues. Seven hours later, the police finally realized that the girl was safely
asleep in bed.

Lenore Skenazy writes further:

    Can we agree that something is wrong when we leap to the worst possible
conclusion upon seeing something that is actually nice? In an email Furedi added
that now, "Some fathers told me that they think and look around before they kiss
their kids in public. Society is all too ready to interpret the most innocent of
gestures as a prelude to abusing a child."

    So our job is to try to push the re-set button.

    If you see an adult with a child in plain daylight, it is not irresponsible
to assume they are caregiver and child. Remember the stat from David Finkelhor,
head of the Crimes Against Children Research Center at the University of New
Hampshire. He has heard of NO CASE of a child kidnapped from its parents in
public and sold into sex trafficking.

    We are wired to see "Taken" when we're actually witnessing something far
less exciting called Everyday Life. Let's tune in to reality.

This is the problem with the "see something, say something" mentality. As I
wrote back in 2007:

    If you ask amateurs to act as front-line security personnel, you shouldn't
be surprised when you get amateur security.

And the police need to understand the base-rate fallacy better.
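The base-rate fallacy can be made concrete with Bayes' theorem: when the event
being reported is vanishingly rare, even a mostly reliable stream of reports
consists almost entirely of false alarms. A toy calculation, with all rates
invented purely for illustration:

```python
def posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(event | report), via Bayes' theorem."""
    true_alarms = prior * sensitivity
    false_alarms = (1 - prior) * false_positive_rate
    return true_alarms / (true_alarms + false_alarms)

# Suppose (hypothetically) 1 in a million adult-with-child sightings is an
# abduction, witnesses always report real ones, and only 1 in 1,000 innocent
# sightings triggers a mistaken report.
p = posterior(prior=1e-6, sensitivity=1.0, false_positive_rate=1e-3)
print(f"{p:.4%}")  # 0.0999% -- over 99.9% of reports are false alarms
```

Even with an implausibly vigilant public, acting on every report means acting
on noise almost every time.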

** *** ***** ******* *********** *************
What Happened to Cyber 9/11?

[2018.11.19] A recent article in the Atlantic asks why we haven't seen a "cyber
9/11" in the past fifteen or so years. (I, too, remember the increasingly
frantic and fearful warnings of a "cyber Pearl Harbor," "cyber Katrina" -- when
that was a thing -- or "cyber 9/11." I made fun of those warnings back then.)
The author's answer:
The author's answer:

    Three main barriers are likely preventing this. For one, cyberattacks can
lack the kind of drama and immediate physical carnage that terrorists seek.
Identifying the specific perpetrator of a cyberattack can also be difficult,
meaning terrorists might have trouble reaping the propaganda benefits of clear
attribution. Finally, and most simply, it's possible that they just can't pull
it off.

Commenting on the article, Rob Graham adds:

    I think there are lots of warnings from so-called "experts" who aren't
qualified to make such warnings, and that the press errs on the side of giving
such warnings credibility instead of challenging them.

    I think the main reason cyberterrorism doesn't happen is that what
motivates violent people is different from what motivates technical people,
pulling apart the groups who would want to commit cyberterrorism from those who
can.

These are all good reasons, but I think both authors missed the most important
one: there simply aren't a lot of terrorists out there. Let's ask the question
more generally: why hasn't there been another 9/11 since 2001? I also remember
dire predictions that large-scale terrorism was the new normal, and that we
would see 9/11-scale attacks regularly. But since then, nothing. We could credit
the fantastic counterterrorism work of the US and other countries, but a more
reasonable explanation is that there are very few terrorists and even fewer
organized ones. Our fear of terrorism is far greater than the actual risk.

This isn't to say that cyberterrorism can never happen. Of course it will,
sooner or later. But I don't foresee it becoming a preferred terrorism method
anytime soon. Graham again:

    In the end, if your goal is to cause major power blackouts, your best bet
is to bomb power lines and distribution centers, rather than hack them.

** *** ***** ******* *********** *************
The PCLOB Needs a Director

[2018.11.20] The US Privacy and Civil Liberties Oversight Board is looking for a
director. Among other things, this board has some oversight role over the NSA.
More precisely, it can examine what any executive-branch agency is doing
about counterterrorism. So it can examine the program of TSA watchlists, NSA
anti-terrorism surveillance, and FBI counterterrorism activities.

The PCLOB was established in 2004 (when it didn't do much), disappeared from
2007 to 2012, and was reconstituted in 2012. It issued a major report on NSA
surveillance in 2014. It has dwindled since then, having as few as one member.
Last month, the Senate confirmed three new members, including Ed Felten.

So, potentially an important job if anyone out there is interested.

** *** ***** ******* *********** *************
Information Attacks against Democracies

[2018.11.21] Democracy is an information system.

That's the starting place of our new paper: "Common-Knowledge Attacks on
Democracy." In it, we look at democracy through the lens of information
security, trying to understand the current waves of Internet disinformation
attacks. Specifically, we wanted to explain why the same disinformation
campaigns that act as a stabilizing influence in Russia are destabilizing in the
United States.

The answer revolves around the different ways autocracies and democracies work
as information systems. We start by differentiating between two types of
knowledge that societies use in their political systems. The first is common
political knowledge, which is the body of information that people in a society
broadly agree on. People agree on who the rulers are and what their claim to
legitimacy is. People agree broadly on how their government works, even if they
don't like it. In a democracy, people agree about how elections work: how
districts are created and defined, how candidates are chosen, and that their
votes count -- even if only roughly and imperfectly.

We contrast this with a very different form of knowledge that we call contested
political knowledge, which is, broadly, things that people in society disagree
about. Examples are easy to bring to mind: how much of a role the government
should play in the economy, what the tax rules should be, what sorts of
regulations are beneficial and what sorts are harmful, and so on.

This seems basic, but it gets interesting when we contrast both of these forms
of knowledge across autocracies and democracies. These two forms of government
have incompatible needs for common and contested political knowledge.

For example, democracies draw upon the disagreements within their population to
solve problems. Different political groups have different ideas of how to
govern, and those groups vie for political influence by persuading voters. There
is also long-term uncertainty about who will be in charge and able to set policy
goals. Ideally, this is the mechanism through which a polity can harness the
diversity of perspectives of its members to better solve complex policy
problems. When no one knows who is going to be in charge after the next
election, different parties and candidates will vie to persuade voters of the
benefits of different policy proposals.

But in order for this to work, there needs to be common knowledge both of how
government functions and how political leaders are chosen. There also needs to
be common knowledge of who the political actors are, what they and their parties
stand for, and how they clash with each other. Furthermore, this knowledge is
decentralized across a wide variety of actors -- an essential element, since
ordinary citizens play a significant role in political decision making.

Contrast this with an autocracy. There, common political knowledge about who is
in charge over the long term and what their policy goals are is a basic
condition of stability. Autocracies do not require common political knowledge
about the efficacy and fairness of elections, and strive to maintain a monopoly
on other forms of common political knowledge. They actively suppress common
political knowledge about potential groupings within their society, their levels
of popular support, and how they might form coalitions with each other. On the
other hand, they benefit from contested political knowledge about
nongovernmental groups and actors in society. If no one really knows which other
political parties might form, what they might stand for, and what support they
might get, that itself is a significant barrier to those parties ever forming.

This difference has important consequences for security. Authoritarian regimes
are vulnerable to information attacks that challenge their monopoly on common
political knowledge. They are vulnerable to outside information that
demonstrates that the government is manipulating common political knowledge to
their own benefit. And they are vulnerable to attacks that turn contested
political knowledge -- uncertainty about potential adversaries of the ruling
regime, their popular levels of support and their ability to form coalitions --
into common political knowledge. As such, they are vulnerable to tools that
allow people to communicate and organize more easily, as well as tools that
provide citizens with outside information and perspectives.

For example, before the first stirrings of the Arab Spring, the Tunisian
government had extensive control over common knowledge. It required everyone to
publicly support the regime, making it hard for citizens to know how many other
people hated it, and it prevented potential anti-regime coalitions from
organizing. However, it didn't pay attention in time to Facebook, which allowed
citizens to talk more easily about how much they detested their rulers, and,
when an initial incident sparked a protest, to rapidly organize mass
demonstrations against the regime. The Arab Spring faltered in many countries,
but it is no surprise that countries like Russia see the Internet openness
agenda as a knife at their throats.

Democracies, in contrast, are vulnerable to information attacks that turn common
political knowledge into contested political knowledge. If people disagree on
the results of an election, or whether a census process is accurate, then
democracy suffers. Similarly, if people lose any sense of what the other
perspectives in society are, who is real and who is not real, then the debate
and argument that democracy thrives on will be degraded. This seems to be
Russia's aim in its information campaigns against the US: to weaken our
collective trust in the institutions and systems that hold our country together.
This is also the situation that writers like Adrian Chen and Peter Pomerantsev
describe in today's Russia, where no one knows which parties or voices are
genuine, and which are puppets of the regime, creating general paranoia and
despair.

This difference explains how the same policy measure can increase the stability
of one form of regime and decrease the stability of the other. We have already
seen that open information flows have benefited democracies while at the same
time threatening autocracies. In our language, they transform regime-supporting
contested political knowledge into regime-undermining common political
knowledge. And much more recently, we have seen other uses of the same
information flows undermining democracies by turning regime-supported common
political knowledge into regime-undermining contested political knowledge.

In other words, the same fake news techniques that benefit autocracies by making
everyone unsure about political alternatives undermine democracies by making
people question the common political systems that bind their society.

This framework not only helps us understand how different political systems are
vulnerable and how they can be attacked, but also how to bolster security in
democracies. First, we need to better defend the common political knowledge that
democracies need to function. That is, we need to bolster public confidence in
the institutions and systems that maintain a democracy. Second, we need to make
it harder for outside political groups to cooperate with inside political groups
and organize disinformation attacks, through measures like transparency in
political funding and spending. And finally, we need to treat attacks on common
political knowledge by insiders as being just as threatening as the same attacks
by foreigners.

There's a lot more in the paper.

This essay was co-authored by Henry Farrell, and previously appeared on
Lawfare.com.

** *** ***** ******* *********** *************
Using Machine Learning to Create Fake Fingerprints

[2018.11.23] Researchers are able to create fake fingerprints that result in a
20% false-positive rate.

    The problem is that these sensors obtain only partial images of users'
fingerprints -- at the points where they make contact with the scanner. The
paper noted that since partial prints are not as distinctive as complete prints,
the chances of one partial print getting matched with another are high.

    The artificially generated prints, dubbed DeepMasterPrints by the
researchers, capitalize on the aforementioned vulnerability to accurately
imitate one in five fingerprints in a database. The database was originally
supposed to have an error rate of only one in a thousand.

    Another vulnerability exploited by the researchers was the high prevalence
of some natural fingerprint features such as loops and whorls, compared to
others. With this understanding, the team generated some prints that contain
several of these common features. They found that these artificial prints were
more likely to match with other prints than would be normally possible.
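The gap between a nominal one-in-a-thousand error rate and a one-in-five match
rate is easier to see with a little arithmetic: if each comparison against a
stored partial template has false-match rate p, and a sensor enrolls many
partial templates per user, the probability that a synthetic print matches at
least one is 1 - (1 - p)^N. A minimal sketch, where the specific rates and
template counts are illustrative assumptions rather than figures from the
paper:

```python
def match_probability(fmr: float, n_templates: int) -> float:
    """Probability that a synthetic print falsely matches at least one of
    n_templates enrolled partial templates, assuming each comparison is
    independent with false-match rate fmr."""
    return 1 - (1 - fmr) ** n_templates

# One comparison at the nominal 0.1% rate behaves as advertised.
print(round(match_probability(0.001, 1), 6))   # 0.001

# But if partial prints push the effective per-comparison rate up, and a
# device stores many partial templates, the attack odds compound quickly.
print(round(match_probability(0.01, 20), 3))   # 0.182
```

This compounding, plus optimizing the synthetic print against common features
like loops and whorls, is what lets a small dictionary of "master prints" cover
a large fraction of a database.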

If this result is robust -- and I assume it will be improved upon over the
coming years -- it will make the current generation of fingerprint readers
obsolete as secure biometrics. It also opens a new chapter in the arms race
between biometric authentication systems and fake biometrics that can fool them.

More interestingly, I wonder if similar techniques can be brought to bear
against other biometrics as well.

Research paper.

Slashdot thread.

** *** ***** ******* *********** *************
How Surveillance Inhibits Freedom of Expression

[2018.11.26] In my book Data and Goliath, I write about the value of privacy. I
talk about how it is essential for political liberty and justice, and for
commercial fairness and equality. I talk about how it increases personal freedom
and individual autonomy, and how the lack of it makes us all less secure. But
this is probably the most important argument as to why society as a whole must
protect privacy: it allows society to progress.

We know that surveillance has a chilling effect on freedom. People change their
behavior when they live their lives under surveillance. They are less likely to
speak freely and act individually. They self-censor. They become conformist.
This is obviously true for government surveillance, but is true for corporate
surveillance as well. We simply aren't as willing to be our individual selves
when others are watching.

Let's take an example: hearing that parents and children are being separated as
they cross the US border, you want to learn more. You visit the website of an
international immigrants' rights group, a fact that is available to the
government through mass Internet surveillance. You sign up for the group's
mailing list, another fact that is potentially available to the government. The
group then calls or e-mails to invite you to a local meeting. Same. Your license
plates can be collected as you drive to the meeting; your face can be scanned
and identified as you walk into and out of the meeting. If, instead of visiting
the website, you visit the group's Facebook page, Facebook knows that you did
and that feeds into its profile of you, available to advertisers and political
activists alike. Ditto if you like their page, share a link with your friends,
or just post about the issue.

Maybe you are an immigrant yourself, documented or not. Or maybe some of your
family is. Or maybe you have friends or coworkers who are. How likely are you to
get involved if you know that your interest and concern can be gathered and used
by government and corporate actors? What if the issue you are interested in is
pro- or anti-gun control, anti-police violence or in support of the police? Does
that make a difference?

Maybe the issue doesn't matter, and you would never be afraid to be identified
and tracked based on your political or social interests. But even if you are so
fearless, you probably know someone who has more to lose, and thus more to fear,
from their personal, sexual, or political beliefs being exposed.

This isn't just hypothetical. In the months and years after the 9/11 terrorist
attacks, many of us censored what we spoke about on social media or what we
searched on the Internet. We know from a 2013 PEN study that writers in the
United States self-censored their browsing habits out of fear the government was
watching. And this isn't exclusively an American event; Internet self-censorship
is prevalent across the globe, China being a prime example.

Ultimately, this fear stagnates society in two ways. The first is that the
presence of surveillance means society cannot experiment with new things without
fear of reprisal, and that means those experiments -- if found to be inoffensive
or even essential to society -- cannot slowly become commonplace, moral, and
then legal. If surveillance nips that process in the bud, change never happens.
All social progress -- from ending slavery to fighting for women's rights --
began as ideas that were, quite literally, dangerous to assert. Yet without the
ability to safely develop, discuss, and eventually act on those assertions, our
society would not have been able to further its democratic values in the way
that it has.

Consider the decades-long fight for gay rights around the world. Within our
lifetimes we have made enormous strides to combat homophobia and increase
acceptance of queer folks' right to marry. Queer relationships slowly progressed
from being viewed as immoral and illegal, to being viewed as somewhat moral and
tolerated, to finally being accepted as moral and legal.

In the end, it was the public nature of those activities that eventually slayed
the bigoted beast, but the ability to act in private was essential in the
beginning for the early experimentation, community building, and organizing.

Marijuana legalization is going through the same process: it's currently sitting
between somewhat moral, and -- depending on the state or country in question --
tolerated and legal. But, again, for this to have happened, someone decades ago
had to try pot and realize that it wasn't really harmful, either to themselves
or to those around them. Then it had to become a counterculture, and finally a
social and political movement. If pervasive surveillance meant that those early
pot smokers would have been arrested for doing something illegal, the movement
would have been squashed before inception. Of course the story is more
complicated than that, but the ability for members of society to privately smoke
weed was essential for putting it on the path to legalization.

We don't yet know which subversive ideas and illegal acts of today will become
political causes and positive social change tomorrow, but they're around. And
they require privacy to germinate. Take away that privacy, and we'll have a much
harder time breaking down our inherited moral assumptions.

The second way surveillance hurts our democratic values is that it encourages
society to make more things illegal. Consider the things you do -- the different
things each of us does -- that portions of society find immoral. Not just
recreational drugs and gay sex, but gambling, dancing, public displays of
affection. All of us do things that are deemed immoral by some groups, but are
not illegal because they don't harm anyone. But it's important that these things
can be done out of the disapproving gaze of those who would otherwise rally
against such practices.

If there is no privacy, there will be pressure to change. Some people will
recognize that their morality isn't necessarily the morality of everyone -- and
that that's okay. But others will start demanding legislative change, or using
less legal and more violent means, to force others to match their idea of
morality.

It's easy to imagine the more conservative (in the small-c sense, not in the
sense of the named political party) among us getting enough power to make
illegal what they would otherwise be forced to witness. In this way, privacy
helps protect the rights of the minority from the tyranny of the majority.

This is how we got Prohibition in the 1920s, and if we had had today's
surveillance capabilities in the 1920s, it would have been far more effectively
enforced. Recipes for making your own spirits would have been much harder to
distribute. Speakeasies would have been impossible to keep secret. The criminal
trade in illegal alcohol would also have been more effectively suppressed. There
would have been less discussion about the harms of Prohibition, less "what if we
didn't?" thinking. Political organizing might have been difficult. In that
world, the law might have stuck to this day.

China serves as a cautionary tale. The country has long been a world leader in
the ubiquitous surveillance of its citizens, with the goal not of crime
prevention but of social control. They are about to further enhance their
system, giving every citizen a "social credit" rating. The details are yet
unclear, but the general concept is that people will be rated based on their
activities, both online and off. Their political comments, their friends and
associates, and everything else will be assessed and scored. Those who are
conforming, obedient, and apolitical will be given high scores. People without
those scores will be denied privileges like access to certain schools and
foreign travel. If the program is half as far-reaching as early reports
indicate, the subsequent pressure to conform will be enormous. This social
surveillance system is precisely the sort of surveillance designed to maintain
the status quo.

For social norms to change, people need to deviate from these inherited norms.
People need the space to try alternate ways of living without risking arrest or
social ostracization. People need to be able to read critiques of those norms
without anyone's knowledge, discuss them without their opinions being recorded,
and write about their experiences without their names attached to their words.
People need to be able to do things that others find distasteful, or even
immoral. The minority needs protection from the tyranny of the majority.

Privacy makes all of this possible. Privacy encourages social progress by giving
the few room to experiment free from the watchful eye of the many. Even if you
are not personally chilled by ubiquitous surveillance, the society you live in
is, and the personal costs are unequivocal.

This essay originally appeared in McSweeney's issue #54: "The End of Trust." It
was reprinted on Wired.com.

** *** ***** ******* *********** *************
Propaganda and the Weakening of Trust in Government

[2018.11.27] On November 4, 2016, the hacker "Guccifer 2.0," a front for
Russia's military intelligence service, claimed in a blogpost that the Democrats
were likely to use vulnerabilities to hack the presidential elections. On
November 9, 2018, President Donald Trump started tweeting about the senatorial
elections in Florida and Arizona. Without any evidence whatsoever, he said that
Democrats were trying to steal the election through "FRAUD."

Cybersecurity experts would say that posts like Guccifer 2.0's are intended to
undermine public confidence in voting: a cyber-attack against the US democratic
system. Yet Donald Trump's actions are doing far more damage to democracy. So
far, his tweets on the topic have been retweeted over 270,000 times, eroding
confidence far more effectively than any foreign influence campaign.

We need new ideas to explain how public statements on the Internet can weaken
American democracy. Cybersecurity today is not only about computer systems. It's
also about the ways attackers can use computer systems to manipulate and
undermine public expectations about democracy. Not only do we need to rethink
attacks against democracy; we also need to rethink the attackers themselves.

This is one key reason why we wrote a new research paper which uses ideas from
computer security to understand the relationship between democracy and
information. These ideas help us understand attacks which destabilize confidence
in democratic institutions or debate.

Our research implies that insider attacks from within American politics can be
more pernicious than attacks from other countries. They are more sophisticated,
employ tools that are harder to defend against, and lead to harsh political
tradeoffs. The US can threaten charges or impose sanctions when Russian trolling
agencies attack its democratic system. But what punishments can it use when the
attacker is the US president?

People who think about cybersecurity build on ideas about confrontations between
states during the Cold War. Intellectuals such as Thomas Schelling developed
deterrence theory, which explained how the US and USSR could maneuver to limit
each other's options without ever actually going to war. Deterrence theory, and
related concepts about the relative ease of attack and defense, seemed to
explain the tradeoffs that the US and rival states faced, as they started to use
cyber techniques to probe and compromise each other's information networks.

However, these ideas fail to acknowledge one key difference between the Cold
War and today. Nearly all states -- whether democratic or authoritarian -- are
entangled on the Internet. This creates both new tensions and new opportunities.
The US assumed that the internet would help spread American liberal values, and
that this was a good and uncontroversial thing. Illiberal states like Russia and
China feared that Internet freedom was a direct threat to their own systems of
rule. Opponents of the regime might use social media and online communication to
coordinate among themselves, and appeal to the broader public, perhaps toppling
their governments, as happened in Tunisia during the Arab Spring.

This led illiberal states to develop new domestic defenses against open
information flows. As scholars like Molly Roberts have shown, states like China
and Russia discovered how they could "flood" internet discussion with online
nonsense and distraction, making it impossible for their opponents to talk to
each other, or even to distinguish between truth and falsehood. These flooding
techniques stabilized authoritarian regimes, because they demoralized and
confused the regime's opponents. Libertarians often argue that the best antidote
to bad speech is more speech. What Vladimir Putin discovered was that the best
antidote to more speech was bad speech.

Russia saw the Arab Spring and efforts to encourage democracy in its
neighborhood as direct threats, and began experimenting with counter-offensive
techniques. When a Russia-friendly government in Ukraine collapsed due to
popular protests, Russia tried to destabilize new, democratic elections by
hacking the system through which the election results would be announced. The
clear intention was to discredit the election results by announcing fake voting
numbers that would throw public discussion into disarray.

This attack on public confidence in election results was thwarted at the last
moment. Even so, it provided the model for a new kind of attack. Hackers don't
have to secretly alter people's votes to affect elections. All they need to do
is to damage public confidence that the votes were counted fairly. As
researchers have argued, "simply put, the attacker might not care who wins; the
losing side believing that the election was stolen from them may be equally, if
not more, valuable."

These two kinds of attacks -- "flooding" attacks aimed at destabilizing public
discourse, and "confidence" attacks aimed at undermining public belief in
elections -- were weaponized against the US in 2016. Russian social media
trolls, hired by the "Internet Research Agency," flooded online political
discussions with rumors and counter-rumors in order to create confusion and
political division. Peter Pomerantsev describes how in Russia, "one moment
[Putin's media wizard] Surkov would fund civic forums and human rights NGOs, the
next he would quietly support nationalist movements that accuse the NGOs of
being tools of the West." Similarly, Russian trolls tried to get Black Lives
Matter protesters and anti-Black Lives Matter protesters to march at the same
time and place, to create conflict and the appearance of chaos. Guccifer 2.0's
blog post was surely intended to undermine confidence in the vote, preparing the
ground for a wider destabilization campaign after Hillary Clinton won the
election. Neither Putin nor anyone else anticipated that Trump would win,
ushering in chaos on a vastly greater scale.

We do not know how successful these attacks were. A new book by John Sides,
Michael Tesler and Lynn Vavreck suggests that Russian efforts had no measurable
long-term consequences. Detailed research on the flow of news articles through
social media by Yochai Benkler, Robert Farris, and Hal Roberts agrees, showing
that Fox News was far more influential in the spread of false news stories than
any Russian effort.

However, global adversaries like the Russians aren't the only actors who can use
flooding and confidence attacks. US actors can use just the same techniques.
Indeed, they can arguably use them better, since they have a better
understanding of US politics, more resources, and are far more difficult for the
government to counter without raising First Amendment issues.

For example, when the Federal Communications Commission asked for comments on its
proposal to get rid of "net neutrality," it was flooded by fake comments
supporting the proposal. Nearly every real person who commented was in favor of
net neutrality, but their arguments were drowned out by a flood of spurious
comments purportedly made by identities stolen from porn sites, by people whose
names and email addresses had been harvested without their permission, and, in
some cases, from dead people. This was done not just to generate fake support
for the FCC's controversial proposal. It was to devalue public comments in
general, making the general public's support for net neutrality politically
irrelevant. FCC decision making on issues like net neutrality used to be
dominated by industry insiders, and many would like to go back to the old
regime.

Trump's efforts to undermine confidence in the Florida and Arizona votes work on
a much larger scale. There are clear short-term benefits to asserting fraud
where no fraud exists. This may sway judges or other public officials to make
concessions to the Republicans to preserve their legitimacy. Yet such claims
also destabilize American democracy in the long term. If Republicans are
convinced that Democrats win by cheating, they will feel that their own
manipulation of the system (by purging voter rolls, making voting more
difficult, and so on) is legitimate, and will very probably cheat even more
flagrantly in the future. This
will trash collective institutions and leave everyone worse off.

It is notable that some Arizonan Republicans -- including Martha McSally -- have
so far stayed firm against pressure from the White House and the Republican
National Committee to claim that cheating is happening. They presumably see
more long-term value in preserving existing institutions than in undermining
them.
Very plausibly, Donald Trump has exactly the opposite incentives. By weakening
public confidence in the vote today, he makes it easier to claim fraud and
perhaps plunge American politics into chaos if he is defeated in 2020.

If experts who see Russian flooding and confidence measures as cyberattacks on
US democracy are right, then these attacks are just as dangerous -- and perhaps
more dangerous -- when they are used by domestic actors. The risk is that over
time they will destabilize American democracy so that it comes closer to
Russia's managed democracy -- where nothing is real any more, and ordinary
people feel a mixture of paranoia, helplessness and disgust when they think
about politics. Paradoxically, Russian interference is far too ineffectual to
get us there -- but domestically mounted attacks by all-American political
actors might.

To protect against that possibility, we need to start thinking more
systematically about the relationship between democracy and information. Our
paper provides one way to do this, highlighting the vulnerabilities of democracy
against certain kinds of information attack. More generally, we need to build
levees against flooding while shoring up public confidence in voting and other
public information systems that are necessary to democracy.

The first may require radical changes in how we regulate social media companies.
Modernization of government commenting platforms to make them robust against
flooding is only a very minimal first step. Until very recently, a company
like Twitter could win market advantage from bot infestations -- even when it
couldn't make a profit, it seemed that user numbers were growing. CEOs like Mark
Zuckerberg have begun to worry about democracy, but their worries will likely
only go so far. It is difficult to get a man to understand something when his
business model depends on not understanding it. Sharp -- and legally enforceable
-- limits on automated accounts are a first step. Radical redesign of networks
and of trending indicators so that flooding attacks are less effective may be a
second.

The second requires general standards for voting at the federal level, and a
constitutional guarantee of the right to vote. Technical experts nearly
universally favor robust voting systems that would combine paper records with
random post-election auditing, to prevent fraud and secure public confidence in
voting. Other steps to ensure proper ballot design, and standardize vote
counting and reporting will take more time and discussion -- yet the record of
other countries shows that they are not impossible.

The US is nearly unique among major democracies in the persistent flaws of its
election machinery. Yet voting is not the only important form of democratic
information. Apparent efforts to deliberately skew the US census against
counting undocumented immigrants show the need for a more general audit of the
political information systems that we need if democracy is to function properly.

It's easier to respond to Russian hackers through sanctions, counter-attacks and
the like than to domestic political attacks that undermine US democracy. To
preserve the basic political freedoms of democracy requires recognizing that
these freedoms are sometimes going to be abused by politicians such as Donald
Trump. The best that we can do is to minimize the possibilities of abuse short
of encroaching on basic freedoms, and to harden the general
institutions that secure democratic information against attacks intended to
undermine them.

This essay was co-authored with Henry Farrell, and previously appeared on
Motherboard, with a terrible headline that I was unable to get changed.

** *** ***** ******* *********** *************
Distributing Malware By Becoming an Admin on an Open-Source Project

[2018.11.28] The module "event-stream" was infected with malware by an anonymous
someone who became an admin on the project.

Cory Doctorow points out that this is a clever new attack vector:

    Many open source projects attain a level of "maturity" where no one really
needs any new features and there aren't a lot of new bugs being found, and the
contributors to these projects dwindle, often to a single maintainer who is
generally grateful for developers who take an interest in these older projects
and offer to share the choresome, intermittent work of keeping the projects
alive.

    Ironically, these are often projects with millions of users, who trust them
specifically because of their stolid, unexciting maturity.

    This presents a scary social-engineering vector for malware: A malicious
person volunteers to help maintain the project, makes some small, positive
contributions, gets commit access to the project, and releases a malicious
patch, infecting millions of users and apps.
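
One partial defense -- a sketch of the general idea, not anyone's actual
tooling -- is to pin every dependency to an exact version and verify the
downloaded artifact against a hash recorded back when it was vetted, so a
maliciously re-released package fails the check. The package name and contents
below are hypothetical.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical lockfile: artifact name -> hash recorded when it was vetted
LOCKFILE = {
    "event-stream-3.3.6.tgz": sha256_hex(b"trusted tarball contents"),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Refuse any artifact whose contents don't match the pinned hash."""
    return sha256_hex(data) == LOCKFILE[name]

good = verify_artifact("event-stream-3.3.6.tgz", b"trusted tarball contents")
bad = verify_artifact("event-stream-3.3.6.tgz", b"tampered tarball contents")
```

Real package managers already support this -- npm lockfiles carry integrity
hashes, and pip has --require-hashes -- and the point is that a hash pinned
before the malicious release is what catches the swap.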

** *** ***** ******* *********** *************
FBI Takes Down a Massive Advertising Fraud Ring

[2018.11.29] The FBI announced that it dismantled a large Internet advertising
fraud network, and arrested eight people:

    A 13-count indictment was unsealed today in federal court in Brooklyn
charging Aleksandr Zhukov, Boris Timokhin, Mikhail Andreev, Denis Avdeev, Dmitry
Novikov, Sergey Ovsyannikov, Aleksandr Isaev and Yevgeniy Timchenko with
criminal violations for their involvement in perpetrating widespread digital
advertising fraud. The charges include wire fraud, computer intrusion,
aggravated identity theft and money laundering. Ovsyannikov was arrested last
month in Malaysia; Zhukov was arrested earlier this month in Bulgaria; and
Timchenko was arrested earlier this month in Estonia, all pursuant to
provisional arrest warrants issued at the request of the United States. They
await extradition. The remaining defendants are at large.

It looks like an impressive piece of police work.

Details of the forensics that led to the arrests.

** *** ***** ******* *********** *************
That Bloomberg Supply-Chain-Hack Story

[2018.11.30] Back in October, Bloomberg reported that China has managed to
install backdoors into server equipment that ended up in networks belonging to
-- among others -- Apple and Amazon. Pretty much everybody has denied it
(including the US DHS and the UK NCSC). Bloomberg has stood by its story -- and
is still standing by it.

I don't think it's real. Yes, it's plausible. But first of all, if someone
actually surreptitiously put malicious chips onto motherboards en masse, we
would have seen a photo of the alleged chip already. And second, there are
easier, more effective, and less obvious ways of adding backdoors to networking
equipment.

** *** ***** ******* *********** *************
Three-Rotor Enigma Machine Up for Auction Today

[2018.11.30] Sotheby's is auctioning off a (working, I think) three-rotor Enigma
machine today. They're expecting it to sell for about $200K.

I have an Enigma, but it's missing the rotors.

** *** ***** ******* *********** *************
Click Here to Kill Everybody News

[2018.11.30] My latest book is doing well. And I've been giving lots of talks
and interviews about it. (I can recommend three interviews: the Cyberlaw podcast
with Stewart Baker, the Lawfare podcast with Ben Wittes, and Le Show with Harry
Shearer.) My book talk at Google is also available.

The Audible version was delayed for reasons that were never adequately explained
to me, but it's finally out.

I still have signed copies available. Be aware that this is both slower and more
expensive than online bookstores.

** *** ***** ******* *********** *************
The DoJ's Secret Legal Arguments to Break Cryptography

[2018.12.03] Earlier this year, the US Department of Justice made a series of
legal arguments as to why Facebook should be forced to help the government
wiretap Facebook Messenger. Those arguments are still sealed. The ACLU is suing
to make them public.

** *** ***** ******* *********** *************
Bad Consumer Security Advice

[2018.12.04] There are lots of articles out there telling people how to better
secure their computers and online accounts. While I agree with some of it, this
article contains some particularly bad advice:

    1. Never, ever, ever use public (unsecured) Wi-Fi such as the Wi-Fi in a
café, hotel or airport. To remain anonymous and secure on the Internet, invest
in a Virtual Private Network account, but remember, the bad guys are very smart,
so by the time this column runs, they may have figured out a way to hack into a
VPN.

I get that unsecured Wi-Fi is a risk, but does anyone actually follow this
advice? I think twice about accessing my online bank account from a public Wi-Fi
network, and I do use a VPN regularly. But I can't imagine offering this as
advice to the general public.

    2. If you or someone you know is 18 or older, you need to create a Social
Security online account. Today! Go to www.SSA.gov.

This is actually good advice. Brian Krebs calls it planting a flag, and it's
basically claiming your own identity before some fraudster does it for you. But
why limit it to the Social Security Administration? Do it for the IRS and the
USPS. And while you're at it, do it for your mobile phone provider and your
Internet service provider.

    3. Add multifactor verifications to ALL online accounts offering this
additional layer of protection, including mobile and cable accounts. (Note: Have
the codes sent to your email, as SIM card "swapping" is becoming a huge, and
thus far unstoppable, security problem.)

Yes. Two-factor authentication is important, and I use it on some of my more
important online accounts. But I don't have it installed on everything. And I'm
not sure why having the codes sent to your e-mail helps defend against SIM-card
swapping; I'm sure you get your e-mail on your phone like everyone else. (Here's
some better advice about that.)

    4. Create hard-to-crack 12-character passwords. NOT your mother's maiden
name, not the last four digits of your Social Security number, not your birthday
and not your address. Whenever possible, use a "pass-phrase" as your answer to
account security questions -- such as "Youllneverguessmybrotherinlawsmiddlename."

I'm a big fan of random impossible-to-remember passwords, and nonsense answers
to secret questions. It would be great if she suggested a password manager to
remember them all.
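
For what "random" means here, a minimal sketch using Python's standard secrets
module -- the word list is a stand-in, and a password manager does all of this
for you:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 12) -> str:
    # secrets draws from the OS CSPRNG; the random module would not be safe here
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def nonsense_answer(words, count: int = 4) -> str:
    # A made-up "security question" answer: unrelated words, kept in the manager
    return "".join(secrets.choice(words) for _ in range(count))

pw = random_password(16)
answer = nonsense_answer(["plinth", "gerbil", "quasar", "velvet", "mukluk"])
```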

    5. Avoid the temptation to use the same user name and password for every
account. Whenever possible, change your passwords every six months.

Yes to the first part. No, no, no -- a thousand times no -- to the second.

    6. To prevent "new account fraud" (i.e., someone trying to open an account
using your date of birth and Social Security number), place a security freeze on
all three national credit bureaus (Equifax, Experian and TransUnion). There is
no charge for this service.

I am a fan of security freezes.

    7. Never plug your devices (mobile phone, tablet and/or laptop) into an
electrical outlet in an airport. Doing so will make you more susceptible to
being hacked. Instead, travel with an external battery charger to keep your
devices charged.

Seriously? Yes, I've read the articles about hacked charging stations, but I
wouldn't think twice about using a wall jack at an airport. If you're really
worried, buy a USB condom.

** *** ***** ******* *********** *************
Security Risks of Chatbots

[2018.12.05] Good essay on the security risks -- to democratic discourse -- of
chatbots.

** *** ***** ******* *********** *************
Your Personal Data is Already Stolen

[2018.12.06] In an excellent blog post, Brian Krebs makes clear something I have
been saying for a while:

    Likewise for individuals, it pays to accept two unfortunate and harsh
realities:

        Reality #1: Bad guys already have access to personal data points that
you may believe should be secret but which nevertheless aren't, including your
credit card information, Social Security number, mother's maiden name, date of
birth, address, previous addresses, phone number, and yes -- even your credit
file.

        Reality #2: Any data point you share with a company will in all
likelihood eventually be hacked, lost, leaked, stolen or sold -- usually through
no fault of your own. And if you're an American, it means (at least for the time
being) your recourse to do anything about that when it does happen is limited or
nil.

    [...]

    Once you've owned both of these realities, you realize that expecting
another company to safeguard your security is a fool's errand, and that it makes
far more sense to focus instead on doing everything you can to proactively
prevent identity thieves, malicious hackers or other ne'er-do-wells from abusing
access to said data.

His advice is good.

** *** ***** ******* *********** *************
Banks Attacked through Malicious Hardware Connected to the Local Network

[2018.12.07] Kaspersky is reporting on a series of bank hacks -- called
DarkVishnya -- perpetrated through malicious hardware being surreptitiously
installed into the target network:

    In 2017-2018, Kaspersky Lab specialists were invited to research a series
of cybertheft incidents. Each attack had a common springboard: an unknown device
directly connected to the company's local network. In some cases, it was the
central office, in others a regional office, sometimes located in another
country. At least eight banks in Eastern Europe were the targets of the attacks
(collectively nicknamed DarkVishnya), which caused damage estimated in the tens
of millions of dollars.

    Each attack can be divided into several identical stages. At the first
stage, a cybercriminal entered the organization's building under the guise of a
courier, job seeker, etc., and connected a device to the local network, for
example, in one of the meeting rooms. Where possible, the device was hidden or
blended into the surroundings, so as not to arouse suspicion.

    The devices used in the DarkVishnya attacks varied in accordance with the
cybercriminals' abilities and personal preferences. In the cases we researched,
it was one of three tools:

        netbook or inexpensive laptop
        Raspberry Pi computer
        Bash Bunny, a special tool for carrying out USB attacks

    Inside the local network, the device appeared as an unknown computer, an
external flash drive, or even a keyboard. Combined with the fact that Bash Bunny
is comparable in size to a USB flash drive, this seriously complicated the
search for the entry point. Remote access to the planted device was via a
built-in or USB-connected GPRS/3G/LTE modem.

Slashdot thread.
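
The obvious countermeasure is dull but effective: keep an inventory of every
device that belongs on the network and alert on anything else that shows up. A
toy sketch of the comparison step, with hypothetical MAC addresses -- the
observed list would come from a switch's MAC table or an ARP scan:

```python
# Hypothetical asset inventory: MAC address -> description
KNOWN_DEVICES = {
    "aa:bb:cc:00:00:01": "branch-office printer",
    "aa:bb:cc:00:00:02": "teller workstation 1",
}

def unknown_devices(observed_macs):
    """Return observed MACs that are not in the inventory, normalized."""
    return sorted({m.lower() for m in observed_macs} - KNOWN_DEVICES.keys())

seen = ["AA:BB:CC:00:00:01", "de:ad:be:ef:00:99"]  # e.g. from an ARP cache
alerts = unknown_devices(seen)  # flags the Raspberry Pi nobody installed
```

MAC addresses can be spoofed, so this only raises the bar; port security and
802.1X on the switch are the stronger versions of the same idea.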

** *** ***** ******* *********** *************
Back Issues of the NSA's Cryptolog

[2018.12.07] Five years ago, the NSA published 23 years of its internal
magazine, Cryptolog. There were lots of redactions, of course.

What's new is a nice user interface for the issues, noting highlights and levels
of redaction.

** *** ***** ******* *********** *************
2018 Annual Report from AI Now

[2018.12.10] The research group AI Now just published its annual report. It's an
excellent summary of today's AI security challenges, as well as a policy agenda
to address them.

This is related, and also worth reading.

** *** ***** ******* *********** *************
New Australian Backdoor Law

[2018.12.12] Last week, Australia passed a law giving the government the ability
to demand backdoors in computers and communications systems. Details are still
to be defined, but it's really bad.

Note: Many people e-mailed me to ask why I haven't blogged this yet. One, I was
busy with other things. And two, there's nothing I can say that I haven't said
many times before.

If there are more good links or commentary, please post them in the comments.

EDITED TO ADD (12/13): The Australian government response is kind of
embarrassing.

** *** ***** ******* *********** *************
Marriott Hack Reported as Chinese State-Sponsored

[2018.12.13] The New York Times and Reuters are reporting that China was behind
the recent hack of Marriott Hotels. Note that this is still unconfirmed, but
interesting if it is true.

Reuters:

    Private investigators looking into the breach have found hacking tools,
techniques and procedures previously used in attacks attributed to Chinese
hackers, said three sources who were not authorized to discuss the company's
private probe into the attack.

    That suggests that Chinese hackers may have been behind a campaign designed
to collect information for use in Beijing's espionage efforts and not for
financial gain, two of the sources said.

    While China has emerged as the lead suspect in the case, the sources
cautioned it was possible somebody else was behind the hack because other
parties had access to the same hacking tools, some of which have previously been
posted online.

    Identifying the culprit is further complicated by the fact that
investigators suspect multiple hacking groups may have simultaneously been
inside Starwood's computer networks since 2014, said one of the sources.

I used to have opinions about whether these attributions are true or not. These
days, I tend to wait and see.

** *** ***** ******* *********** *************
Real-Time Attacks Against Two-Factor Authentication

[2018.12.14] Attackers are targeting two-factor authentication systems:

    Attackers working on behalf of the Iranian government collected detailed
information on targets and used that knowledge to write spear-phishing emails
that were tailored to the targets' level of operational security, researchers
with security firm Certfa Lab said in a blog post. The emails contained a hidden
image that alerted the attackers in real time when targets viewed the messages.
When targets entered passwords into a fake Gmail or Yahoo security page, the
attackers would almost simultaneously enter the credentials into a real login
page. In the event targets' accounts were protected by 2fa, the attackers
redirected targets to a new page that requested a one-time password.

This isn't new. I wrote about this exact attack in 2005 and 2009.
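
The reason relaying works: a TOTP code is valid for a whole time window --
typically 30 seconds -- and for anyone who submits it, not just the page the
victim typed it into. A minimal RFC 6238-style sketch, with a made-up shared
seed and fixed timestamps for determinism:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, t: int, step: int = 30, digits: int = 6) -> str:
    # RFC 4226 HOTP truncation applied to a time-based counter (RFC 6238)
    counter = struct.pack(">Q", t // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"made-up-shared-seed"
victim_code = totp(secret, 1_000_000)  # victim types this into the fake page
relayed = totp(secret, 1_000_005)      # attacker submits it 5 seconds later
# Same 30-second window, so the real site accepts the relayed code
```

This is why phishing-resistant second factors like U2F/WebAuthn security keys,
which bind the response to the site's origin, stop this attack where one-time
codes don't.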

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of 14 books -- including the
New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your
Data and Control Your World -- as well as hundreds of articles, essays, and
academic papers. His newsletter and blog are read by over 250,000 people.
Schneier is a fellow at the Berkman Klein Center for Internet and Society at
Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a
board member of the Electronic Frontier Foundation, AccessNow, and the Tor
Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also
a special advisor to IBM Security and the CTO of IBM Resilient.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily
those of IBM, IBM Security, or IBM Resilient.

Copyright © 2018 by Bruce Schneier.


--- BBBS/Li6 v4.10 Toy-3
 * Origin: TCOB1: tcob1.duckdns.org (618:500/14)