AT2k Design BBS Message Area
From: Sean Rima
To: All
Subject: CRYPTO-GRAM, December 15, 2020
Date/Time: December 15, 2020 7:36 PM
Crypto-Gram December 15, 2020 by Bruce Schneier Fellow and Lecturer, Harvard Kennedy School schneier@schneier.com https://www.schneier.com A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. For back issues, or to subscribe, visit Crypto-Gram's web page. Read this issue on the web These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available. ** *** ***** ******* *********** ************* In this issue: If these links don't work in your email client, try reading this issue of Crypto-Gram on the web. On Blockchain Voting Michael Ellis as NSA General Counsel The US Military Buys Commercial Location Data Symantec Reports on Cicada APT Attacks against Japan Indistinguishability Obfuscation More on the Security of the 2020 US Election On That Dusseldorf Hospital Ransomware Attack and the Resultant Death Cyber Public Health Undermining Democracy Check Washing Manipulating Systems Using Remote Lasers Impressive iPhone Exploit Open Source Does Not Equal Secure Enigma Machine Recovered from the Baltic Sea The 2020 Workshop on Economics and Information Security (WEIS) Hiding Malware in Social Media Buttons Oblivious DNS-over-HTTPS FireEye Hacked Finnish Data Theft and Extortion A Cybersecurity Policy Agenda Authentication Failure Upcoming Speaking Engagements Should There Be Limits on Persuasive Technologies? ** *** ***** ******* *********** ************* On Blockchain Voting [2020.11.16] Blockchain voting is a spectacularly dumb idea for a whole bunch of reasons. I have generally quoted Matt Blaze: Why is blockchain voting a dumb idea? Glad you asked. For starters: It doesn’t solve any problems civil elections actually have. It’s basically incompatible with “software independence”, considered an essential property. It can make ballot secrecy difficult or impossible. I’ve also quoted this XKCD cartoon. But now I have this excellent paper from MIT researchers: “Going from Bad to Worse: From Internet Voting to Blockchain Voting” Sunoo Park, Michael Specter, Neha Narula, and Ronald L. Rivest Abstract: Voters are understandably concerned about election security. News reports of possible election interference by foreign powers, of unauthorized voting, of voter disenfranchisement, and of technological failures call into question the integrity of elections worldwide.This article examines the suggestions that “voting over the Internet” or “voting on the blockchain” would increase election security, and finds such claims to be wanting and misleading. While current election systems are far from perfect, Internet- and blockchain-based voting would greatly increase the risk of undetectable, nation-scale election failures.Online voting may seem appealing: voting from a computer or smart phone may seem convenient and accessible. However, studies have been inconclusive, showing that online voting may have little to no effect on turnout in practice, and it may even increase disenfranchisement. More importantly: given the current state of computer security, any turnout increase derived from with Internet- or blockchain-based voting would come at the cost of losing meaningful assurance that votes have been counted as they were cast, and not undetectably altered or discarded. 
This state of affairs will continue as long as standard tactics such as malware, zero days, and denial-of-service attacks continue to be effective.This article analyzes and systematizes prior research on the security risks of online and electronic voting, and show that not only do these risks persist in blockchain-based voting systems, but blockchains may introduce additional problems for voting systems. Finally, we suggest questions for critically assessing security risks of new voting system proposals. You may have heard of Voatz, which uses blockchain for voting. It’s an insecure mess. And this is my general essay on blockchain. Short summary: it’s completely useless. ** *** ***** ******* *********** ************* Michael Ellis as NSA General Counsel [2020.11.18] Over at Lawfare, Susan Hennessey has an excellent primer on how Trump loyalist Michael Ellis got to be the NSA General Counsel, over the objections of NSA Director Paul Nakasone, and what Biden can and should do about it. While important details remain unclear, media accounts include numerous indications of irregularity in the process by which Ellis was selected for the job, including interference by the White House. At a minimum, the evidence of possible violations of civil service rules demand immediate investigation by Congress and the inspectors general of the Department of Defense and the NSA. The moment also poses a test for President-elect Biden’s transition, which must address the delicate balance between remedying improper politicization of the intelligence community, defending career roles against impermissible burrowing, and restoring civil service rules that prohibit both partisan favoritism and retribution. The Biden team needs to set a marker now, to clarify the situation to the public and to enable a new Pentagon general counsel to proceed with credibility and independence in investigating and potentially taking remedial action upon assuming office. The NSA general counsel is not a Senate-confirmed role. Unlike the general counsels of the CIA, Pentagon and Office of the Director of National Intelligence (ODNI), all of which require confirmation, the NSA’s general counsel is a senior career position whose occupant is formally selected by and reports to the general counsel of the Department of Defense. It’s an odd setup -- and one that obscures certain realities, like the fact that the NSA general counsel in practice reports to the NSA director. This structure is the source of a perennial legislative fight. Every few years, Congress proposes laws to impose a confirmation requirement as more appropriately befits an essential administration role, and every few years, the executive branch opposes those efforts as dangerously politicizing what should be a nonpolitical job. While a lack of Senate confirmation reduces some accountability and legislative screening, this career selection process has the benefit of being designed to eliminate political interference and to ensure the most qualified candidate is hired. The system includes a complex set of rules governing a selection board that interviews candidates, certifies qualifications and makes recommendations guided by a set of independent merit-based principles. The Pentagon general counsel has the final call in making a selection. For example, if the panel has ranked a first-choice candidate, the general counsel is empowered to choose one of the others. Ryan Goodman has a similar article at Just Security. 
** *** ***** ******* *********** ************* The US Military Buys Commercial Location Data [2020.11.19] Vice has a long article about how the US military buys commercial location data worldwide. The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom. This isn’t new, this isn’t just data of non-US citizens, and this isn’t just the US military. We have lots of instances where the government buys data that it cannot legally collect itself. Some app developers Motherboard spoke to were not aware who their users’ location data ends up with, and even if a user examines an app’s privacy policy, they may not ultimately realize how many different industries, companies, or government agencies are buying some of their most sensitive data. U.S. law enforcement purchase of such information has raised questions about authorities buying their way to location data that may ordinarily require a warrant to access. But the USSOCOM contract and additional reporting is the first evidence that U.S. location data purchases have extended from law enforcement to military agencies. ** *** ***** ******* *********** ************* Symantec Reports on Cicada APT Attacks against Japan [2020.11.20] Symantec is reporting on an APT group linked to China, named Cicada. They have been attacking organizations in Japan and elsewhere. Cicada has historically been known to target Japan-linked organizations, and has also targeted MSPs in the past. The group is using living-off-the-land tools as well as custom malware in this attack campaign, including a custom malware -- Backdoor.Hartip -- that Symantec has not seen being used by the group before. Among the machines compromised during this attack campaign were domain controllers and file servers, and there was evidence of files being exfiltrated from some of the compromised machines. The attackers extensively use DLL side-loading in this campaign, and were also seen leveraging the ZeroLogon vulnerability that was patched in August 2020. Interesting details about the group’s tactics. News article. ** *** ***** ******* *********** ************* Indistinguishability Obfuscation [2020.11.23] Quanta magazine recently published a breathless article on indistinguishability obfuscation -- calling it the “‘crown jewel’ of cryptography” -- and saying that it had finally been achieved, based on a recently published paper. I want to add some caveats to the discussion. Basically, obfuscation makes a computer program “unintelligible” while preserving its functionality. Indistinguishability obfuscation is more relaxed. It just means that two different programs that perform the same functionality can’t be distinguished from each other. A good definition is in this paper. This is a pretty amazing theoretical result, and one to be excited about. We can now do obfuscation, and we can do it using assumptions that make real-world sense. The proofs are kind of ugly, but that’s okay -- it’s a start. What it means in theory is that we have a fundamental theoretical result that we can use to derive a whole bunch of other cryptographic primitives.
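To make the definition concrete, here is a toy sketch (in Python, with hypothetical function names) of what "two different programs that perform the same functionality" means. It only demonstrates functional equivalence on a small input domain; it does no obfuscation at all -- the actual iO constructions rest on heavyweight cryptographic machinery, not a few lines of code.

    # Toy illustration of the *premise* behind indistinguishability obfuscation (iO):
    # two syntactically different programs that compute the same function.
    # iO promises that obfuscated forms of such functionally equivalent programs
    # cannot be told apart; this sketch only shows the equivalence itself.

    def check_pin_v1(pin: int) -> bool:
        # Direct comparison against the secret value.
        return pin == 7351

    def check_pin_v2(pin: int) -> bool:
        # Same function, different implementation: an arithmetic identity.
        return (pin - 7351) ** 2 == 0

    def functionally_equivalent(f, g, domain) -> bool:
        # Brute-force check that f and g agree on every input in a small domain.
        return all(f(x) == g(x) for x in domain)

    if __name__ == "__main__":
        assert functionally_equivalent(check_pin_v1, check_pin_v2, range(10_000))
        print("v1 and v2 compute the same function; iO would make their "
              "obfuscations computationally indistinguishable.")

The iO guarantee is that no efficient adversary can tell the obfuscation of one such program from the obfuscation of the other.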
But -- and this is a big one -- this result is not even remotely close to being practical. We’re talking multiple days to perform pretty simple calculations, using massively large blocks of computer code. And this is likely to remain true for a very long time. Unless researchers increase performance by many orders of magnitude, nothing in the real world will make use of this work anytime soon. But but, consider fully homomorphic encryption. It, too, was initially theoretically interesting and completely impractical. And now, after decades of work, it seems to be almost just-barely maybe approaching practically useful. This could very well be on the same trajectory, and perhaps in twenty to thirty years we will be celebrating this early theoretical result as the beginning of a new theory of cryptography. ** *** ***** ******* *********** ************* More on the Security of the 2020 US Election [2020.11.23] Last week I signed on to two joint letters about the security of the 2020 election. The first was as one of 59 election security experts, basically saying that while the election seems to have been both secure and accurate (voter suppression notwithstanding), we still need to work to secure our election systems: We are aware of alarming assertions being made that the 2020 election was “rigged” by exploiting technical vulnerabilities. However, in every case of which we are aware, these claims either have been unsubstantiated or are technically incoherent. To our collective knowledge, no credible evidence has been put forth that supports a conclusion that the 2020 election outcome in any state has been altered through technical compromise. That said, it is imperative that the US continue working to bolster the security of elections against sophisticated adversaries. At a minimum, all states should employ election security practices and mechanisms recommended by experts to increase assurance in election outcomes, such as post-election risk-limiting audits. The New York Times wrote about the letter. The second was a more general call for election security measures in the US: Obviously elections themselves are partisan. But the machinery of them should not be. And the transparent assessment of potential problems or the assessment of allegations of security failure -- even when they could affect the outcome of an election -- must be free of partisan pressures. Bottom line: election security officials and computer security experts must be able to do their jobs without fear of retribution for finding and publicly stating the truth about the security and integrity of the election. These pile on to the November 12 statement from Cybersecurity and Infrastructure Security Agency (CISA) and the other agencies of the Election Infrastructure Government Coordinating Council (GCC) Executive Committee. While I’m not sure how they have enough comparative data to claim that “the November 3rd election was the most secure in American history,” they are certainly credible in saying that “there is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised.” We have a long way to go to secure our election systems from hacking. Details of what to do are known. Getting rid of touch-screen voting machines is important, but baseless claims of fraud don’t help. 
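Since risk-limiting audits come up repeatedly in these recommendations, a minimal sketch may help show why they give statistical assurance without a full hand count. The following Python sketch is a simplified BRAVO-style ballot-polling audit for a single two-candidate contest; the vote totals, risk limit, and function names are illustrative, and real audits differ in details such as sampling with replacement, multiple contests, invalid ballots, and escalation rules.

    import random

    def bravo_audit(ballots, reported_winner_share, risk_limit=0.05, max_samples=None):
        # ballots: list of 'W' (for the reported winner) / 'L' (for the reported loser).
        # Returns a confirmation message if the sample supports the reported outcome
        # at the given risk limit, otherwise calls for a full hand count.
        assert reported_winner_share > 0.5
        threshold = 1.0 / risk_limit          # stop when the test statistic exceeds 1/alpha
        max_samples = max_samples or len(ballots)
        t = 1.0
        sample = random.sample(ballots, max_samples)   # draw ballots uniformly at random
        for drawn, ballot in enumerate(sample, start=1):
            if ballot == 'W':
                t *= reported_winner_share / 0.5        # evidence for the reported outcome
            else:
                t *= (1 - reported_winner_share) / 0.5  # evidence against it
            if t >= threshold:
                return f"reported outcome confirmed after {drawn} ballots"
        return "escalate to a full hand count"

    if __name__ == "__main__":
        # Hypothetical contest: 55,000 vs 45,000 votes, reported winner share 0.55.
        cast = ['W'] * 55_000 + ['L'] * 45_000
        print(bravo_audit(cast, reported_winner_share=0.55, risk_limit=0.05))

The intuition: each sampled ballot for the reported winner multiplies the evidence in favor of the reported outcome, so if the reported margin is real the audit can stop after examining a small fraction of the ballots, and if it is not, the audit escalates to a full hand count.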
** *** ***** ******* *********** ************* On That Dusseldorf Hospital Ransomware Attack and the Resultant Death [2020.11.24] Wired has a detailed story about the ransomware attack on a Dusseldorf hospital, the one that resulted in an ambulance being redirected to a more distant hospital and the patient dying. The police wanted to prosecute the ransomware attackers for negligent homicide, but the details were more complicated: After a detailed investigation involving consultations with medical professionals, an autopsy, and a minute-by-minute breakdown of events, Hartmann believes that the severity of the victim’s medical diagnosis at the time she was picked up was such that she would have died regardless of which hospital she had been admitted to. “The delay was of no relevance to the final outcome,” Hartmann says. “The medical condition was the sole cause of the death, and this is entirely independent from the cyberattack.” He likens it to hitting a dead body while driving: while you might be breaking the speed limit, you’re not responsible for the death. So while this might not be an example of death by cyberattack, the article correctly notes that it’s only a matter of time: But it’s only a matter of time, Hartmann believes, before ransomware does directly cause a death. “Where the patient is suffering from a slightly less severe condition, the attack could certainly be a decisive factor,” he says. “This is because the inability to receive treatment can have severe implications for those who require emergency services.” Success at bringing a charge might set an important precedent for future cases, thereby deepening the toolkit of prosecutors beyond the typical cybercrime statutes. “The main hurdle will be one of proof,” Urban says. “Legal causation will be there as soon as the prosecution can prove that the person died earlier, even if it’s only a few hours, because of the hack, but this is never easy to prove.” With the Düsseldorf attack, it was not possible to establish that the victim could have survived much longer, but in general it’s “absolutely possible” that hackers could be found guilty of manslaughter, Urban argues. And where causation is established, Hartmann points out that exposure for criminal prosecution stretches beyond the hackers. Instead, anyone who can be shown to have contributed to the hack may also be prosecuted, he says. In the Düsseldorf case, for example, his team was preparing to consider the culpability of the hospital’s IT staff. Could they have better defended the hospital by monitoring the network more closely, for instance? ** *** ***** ******* *********** ************* Cyber Public Health [2020.11.25] In a lecture, Adam Shostack makes the case for a discipline of cyber public health. It would relate to cybersecurity in a similar way that public health relates to medicine. ** *** ***** ******* *********** ************* Undermining Democracy [2020.11.27] Last Thursday, Rudy Giuliani, a Trump campaign lawyer, alleged a widespread voting conspiracy involving Venezuela, Cuba, and China. Another lawyer, Sidney Powell, argued that Mr. Trump won in a landslide, the entire election in swing states should be overturned and the legislatures should make sure that the electors are selected for the president. The Republican National Committee swung in to support her false claim that Mr. Trump won in a landslide, while Michigan election officials have tried to stop the certification of the vote. 
It is wildly unlikely that their efforts can block Joe Biden from becoming president. But they may still do lasting damage to American democracy for a shocking reason: the moves have come from trusted insiders. American democracy’s vulnerability to disinformation has been very much in the news since the Russian disinformation campaign in 2016. The fear is that outsiders, whether they be foreign or domestic actors, will undermine our system by swaying popular opinion and election results. This is half right. American democracy is an information system, in which the information isn’t bits and bytes but citizens’ beliefs. When people’s faith in the democratic system is undermined, democracy stops working. But as information security specialists know, outsider attacks are hard. Russian trolls, who don’t really understand how American politics works, have actually had a difficult time subverting it. The time you really need to worry is when insiders go bad. And that is precisely what is happening in the wake of the 2020 presidential election. In traditional information systems, the insiders are the people who have both detailed knowledge and high-level access, allowing them to bypass security measures and more effectively subvert systems. In democracy, the insiders aren’t just the officials who manage voting but also the politicians who shape what people believe about politics. For four years, Donald Trump has been trying to dismantle our shared beliefs about democracy. And now, his fellow Republicans are helping him. Democracy works when we all expect that votes will be fairly counted, and that defeated candidates leave office. As the democratic theorist Adam Przeworski puts it, democracy is “a system in which parties lose elections.” These beliefs can break down when political insiders make bogus claims about general fraud, trying to cling to power when the election has gone against them. It’s obvious how these kinds of claims damage Republican voters’ commitment to democracy. They will think that elections are rigged by the other side and will not accept the judgment of voters when it goes against their preferred candidate. Their belief that the Biden administration is illegitimate will justify all sorts of measures to prevent it from functioning. It’s less obvious that these strategies affect Democratic voters’ faith in democracy, too. Democrats are paying attention to Republicans’ efforts to stop the votes of Democratic voters -- and especially Black Democratic voters -- from being counted. They, too, are likely to have less trust in elections going forward, and with good reason. They will expect that Republicans will try to rig the system against them. Mr. Trump is having a hard time winning unfairly, because he has lost in several states. But what if Mr. Biden’s margin of victory depended only on one state? What if something like that happens in the next election? The real fear is that this will lead to a spiral of distrust and destruction. Republicans -- who are increasingly committed to the notion that the Democrats are committing pervasive fraud -- will do everything that they can to win power and to cling to power when they can get it. Democrats -- seeing what Republicans are doing -- will try to entrench themselves in turn. They suspect that if the Republicans really win power, they will not ever give it back. The claims of Republicans like Senator Mike Lee of Utah that America is not really a democracy might become a self-fulfilling prophecy.
More likely, this spiral will not directly lead to the death of American democracy. The U.S. federal system of government is complex and hard for any one actor or coalition to dominate completely. But it may turn American democracy into an unworkable confrontation between two hostile camps, each unwilling to make any concession to its adversary. We know how to make voting itself more open and more secure; the literature is filled with vital and important suggestions. The more difficult problem is this. How do you shift the collective belief among Republicans that elections are rigged? Political science suggests that partisans are more likely to be persuaded by fellow partisans, like Brad Raffensperger, the Republican secretary of state in Georgia, who said that election fraud wasn’t a big problem. But this would only be effective if other well-known Republicans supported him. Public outrage, alternatively, can sometimes force officials to back down, as when people crowded in to denounce the Michigan Republican election officials who were trying to deny certification of their votes. The fundamental problem, however, is Republican insiders who have convinced themselves that to keep and hold power, they need to trash the shared beliefs that hold American democracy together. They may have long-term worries about the consequences, but they’re unlikely to do anything about those worries in the near-term unless voters, wealthy donors or others whom they depend on make them pay short-term costs. This essay was written with Henry Farrell, and previously appeared in the New York Times. ** *** ***** ******* *********** ************* Check Washing [2020.11.30] I can’t believe that check washing is still a thing: “Check washing” is a practice where thieves break into mailboxes (or otherwise steal mail), find envelopes with checks, then use special solvents to remove the information on that check (except for the signature) and then change the payee and the amount to a bank account under their control so that it could be deposited at out-state-banks and oftentimes by a mobile phone. The article suggests a solution: stop using paper checks. ** *** ***** ******* *********** ************* Manipulating Systems Using Remote Lasers [2020.12.01] Many systems are vulnerable: Researchers at the time said that they were able to launch inaudible commands by shining lasers -- from as far as 360 feet -- at the microphones on various popular voice assistants, including Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant. [...] They broadened their research to show how light can be used to manipulate a wider range of digital assistants -- including Amazon Echo 3 -- but also sensing systems found in medical devices, autonomous vehicles, industrial systems and even space systems. The researchers also delved into how the ecosystem of devices connected to voice-activated assistants -- such as smart-locks, home switches and even cars -- also fail under common security vulnerabilities that can make these attacks even more dangerous. The paper shows how using a digital assistant as the gateway can allow attackers to take control of other devices in the home: Once an attacker takes control of a digital assistant, he or she can have the run of any device connected to it that also responds to voice commands. Indeed, these attacks can get even more interesting if these devices are connected to other aspects of the smart home, such as smart door locks, garage doors, computers and even people’s cars, they said. 
Another article. The researchers will present their findings at Black Hat Europe -- which, of course, will be happening virtually -- on December 10. ** *** ***** ******* *********** ************* Impressive iPhone Exploit [2020.12.02] This is a scarily impressive vulnerability: Earlier this year, Apple patched one of the most breathtaking iPhone vulnerabilities ever: a memory corruption bug in the iOS kernel that gave attackers remote access to the entire device -- over Wi-Fi, with no user interaction required at all. Oh, and exploits were wormable -- meaning radio-proximity exploits could spread from one nearby device to another, once again, with no user interaction needed. [...] Beer’s attack worked by exploiting a buffer overflow bug in a driver for AWDL, an Apple-proprietary mesh networking protocol that makes things like Airdrop work. Because drivers reside in the kernel -- one of the most privileged parts of any operating system -- the AWDL flaw had the potential for serious hacks. And because AWDL parses Wi-Fi packets, exploits can be transmitted over the air, with no indication that anything is amiss. [...] Beer developed several different exploits. The most advanced one installs an implant that has full access to the user’s personal data, including emails, photos, messages, and passwords and crypto keys stored in the keychain. The attack uses a laptop, a Raspberry Pi, and some off-the-shelf Wi-Fi adapters. It takes about two minutes to install the prototype implant, but Beer said that with more work a better written exploit could deliver it in a “handful of seconds.” Exploits work only on devices that are within Wi-Fi range of the attacker. There is no evidence that this vulnerability was ever used in the wild. EDITED TO ADD: Slashdot thread. ** *** ***** ******* *********** ************* Open Source Does Not Equal Secure [2020.12.03] Way back in 1999, I wrote about open-source software: First, simply publishing the code does not automatically mean that people will examine it for security flaws. Security researchers are fickle and busy people. They do not have the time to examine every piece of source code that is published. So while opening up source code is a good thing, it is not a guarantee of security. I could name a dozen open source security libraries that no one has ever heard of, and no one has ever evaluated. On the other hand, the security code in Linux has been looked at by a lot of very good security engineers. We have some new research from GitHub that bears this out. On average, vulnerabilities in their libraries go four years before being detected. From a ZDNet article: GitHub launched a deep-dive into the state of open source security, comparing information gathered from the organization’s dependency security features and the six package ecosystems supported on the platform across October 1, 2019, to September 30, 2020, and October 1, 2018, to September 30, 2019. Only active repositories have been included, not including forks or ‘spam’ projects. The package ecosystems analyzed are Composer, Maven, npm, NuGet, PyPi, and RubyGems. In comparison to 2019, GitHub found that 94% of projects now rely on open source components, with close to 700 dependencies on average. Most frequently, open source dependencies are found in JavaScript -- 94% -- as well as Ruby and .NET, at 90%, respectively. On average, vulnerabilities can go undetected for over four years in open source projects before disclosure. 
A fix is then usually available in just over a month, which GitHub says “indicates clear opportunities to improve vulnerability detection.” Open source means that the code is available for security evaluation, not that it necessarily has been evaluated by anyone. This is an important distinction. ** *** ***** ******* *********** ************* Enigma Machine Recovered from the Baltic Sea [2020.12.04] Neat story: German divers searching the Baltic Sea for discarded fishing nets have stumbled upon a rare Enigma cipher machine used by the Nazi military during World War Two which they believe was thrown overboard from a scuttled submarine. Thinking they had discovered a typewriter entangled in a net on the seabed of Gelting Bay, underwater archaeologist Florian Huber quickly realised the historical significance of the find. EDITED TO ADD: Slashdot thread. ** *** ***** ******* *********** ************* The 2020 Workshop on Economics and Information Security (WEIS) [2020.12.04] The workshop on Economics and Information Security is always an interesting conference. This year, it will be online. Here’s the program. Registration is free. ** *** ***** ******* *********** ************* Hiding Malware in Social Media Buttons [2020.12.07] Clever tactic: This new malware was discovered by researchers at Dutch cyber-security company Sansec that focuses on defending e-commerce websites from digital skimming (also known as Magecart) attacks. The payment skimmer malware pulls its sleight of hand trick with the help of a double payload structure where the source code of the skimmer script that steals customers’ credit cards will be concealed in a social sharing icon loaded as an HTML ‘svg’ element with a ‘path’ element as a container. The syntax for hiding the skimmer’s source code as a social media button perfectly mimics an ‘svg’ element named using social media platform names (e.g., facebook_full, twitter_full, instagram_full, youtube_full, pinterest_full, and google_full). A separate decoder deployed separately somewhere on the e-commerce site’s server is used to extract and execute the code of the hidden credit card stealer. This tactic increases the chances of avoiding detection even if one of the two malware components is found since the malware loader is not necessarily stored within the same location as the skimmer payload and their true purpose might evade superficial analysis. ** *** ***** ******* *********** ************* Oblivious DNS-over-HTTPS [2020.12.08] This new protocol, called Oblivious DNS-over-HTTPS (ODoH), hides the websites you visit from your ISP. Here’s how it works: ODoH wraps a layer of encryption around the DNS query and passes it through a proxy server, which acts as a go-between the internet user and the website they want to visit. Because the DNS query is encrypted, the proxy can’t see what’s inside, but acts as a shield to prevent the DNS resolver from seeing who sent the query to begin with. IETF memo. The paper: Abstract: The Domain Name System (DNS) is the foundation of a human-usable Internet, responding to client queries for host-names with corresponding IP addresses and records. Traditional DNS is also unencrypted, and leaks user information to network operators. Recent efforts to secure DNS using DNS over TLS (DoT) and DNS over HTTPS (DoH) havebeen gaining traction, ostensibly protecting traffic and hiding content from on-lookers. 
However, one of the criticisms ofDoT and DoH is brought to bear by the small number of large-scale deployments (e.g., Comcast, Google, Cloudflare): DNS resolvers can associate query contents with client identities in the form of IP addresses. Oblivious DNS over HTTPS (ODoH) safeguards against this problem. In this paper we ask what it would take to make ODoH practical? We describe ODoH, a practical DNS protocol aimed at resolving this issue by both protecting the client’s content and identity. We implement and deploy the protocol, and perform measurements to show that ODoH has comparable performance to protocols like DoH and DoT which are gaining widespread adoption,while improving client privacy, making ODoH a practical privacy enhancing replacement for the usage of DNS. Slashdot thread. ** *** ***** ******* *********** ************* FireEye Hacked [2020.12.09] FireEye was hacked by -- they believe -- “a nation with top-tier offensive capabilities”: During our investigation to date, we have found that the attacker targeted and accessed certain Red Team assessment tools that we use to test our customers’ security. These tools mimic the behavior of many cyber threat actors and enable FireEye to provide essential diagnostic security services to our customers. None of the tools contain zero-day exploits. Consistent with our goal to protect the community, we are proactively releasing methods and means to detect the use of our stolen Red Team tools. We are not sure if the attacker intends to use our Red Team tools or to publicly disclose them. Nevertheless, out of an abundance of caution, we have developed more than 300 countermeasures for our customers, and the community at large, to use in order to minimize the potential impact of the theft of these tools. We have seen no evidence to date that any attacker has used the stolen Red Team tools. We, as well as others in the security community, will continue to monitor for any such activity. At this time, we want to ensure that the entire security community is both aware and protected against the attempted use of these Red Team tools. Specifically, here is what we are doing: We have prepared countermeasures that can detect or block the use of our stolen Red Team tools. We have implemented countermeasures into our security products. We are sharing these countermeasures with our colleagues in the security community so that they can update their security tools. We are making the countermeasures publicly available on our GitHub. We will continue to share and refine any additional mitigations for the Red Team tools as they become available, both publicly and directly with our security partners. Consistent with a nation-state cyber-espionage effort, the attacker primarily sought information related to certain government customers. While the attacker was able to access some of our internal systems, at this point in our investigation, we have seen no evidence that the attacker exfiltrated data from our primary systems that store customer information from our incident response or consulting engagements, or the metadata collected by our products in our dynamic threat intelligence systems. If we discover that customer information was taken, we will contact them directly. From the New York Times: The hack was the biggest known theft of cybersecurity tools since those of the National Security Agency were purloined in 2016 by a still-unidentified group that calls itself the ShadowBrokers. 
That group dumped the N.S.A.’s hacking tools online over several months, handing nation-states and hackers the “keys to the digital kingdom,” as one former N.S.A. operator put it. North Korea and Russia ultimately used the N.S.A.’s stolen weaponry in destructive attacks on government agencies, hospitals and the world’s biggest conglomerates - at a cost of more than $10 billion. The N.S.A.’s tools were most likely more useful than FireEye’s since the U.S. government builds purpose-made digital weapons. FireEye’s Red Team tools are essentially built from malware that the company has seen used in a wide range of attacks. Russia is presumed to be the attacker. Reuters article. Boing Boing post. Slashdot thread. Wired article. ** *** ***** ******* *********** ************* Finnish Data Theft and Extortion [2020.12.10] The Finnish psychotherapy clinic Vastaamo was the victim of a data breach and theft. The criminals tried extorting money from the clinic. When that failed, they started extorting money from the patients: Neither the company nor Finnish investigators have released many details about the nature of the breach, but reports say the attackers initially sought a payment of about 450,000 euros to protect about 40,000 patient records. The company reportedly did not pay up. Given the scale of the attack and the sensitive nature of the stolen data, the case has become a national story in Finland. Globally, attacks on health care organizations have escalated as cybercriminals look for higher-value targets. [...] Vastaamo said customers and employees had “personally been victims of extortion” in the case. Reports say that on Oct. 21 and Oct. 22, the cybercriminals began posting batches of about 100 patient records on the dark web and allowing people to pay about 500 euros to have their information taken down. ** *** ***** ******* *********** ************* A Cybersecurity Policy Agenda [2020.12.11] The Aspen Institute’s Aspen Cybersecurity Group -- I’m a member -- has released its cybersecurity policy agenda for the next four years. The next administration and Congress cannot simultaneously address the wide array of cybersecurity risks confronting modern society. Policymakers in the White House, federal agencies, and Congress should zero in on the most important and solvable problems. To that end, this report covers five priority areas where we believe cybersecurity policymakers should focus their attention and resources as they contend with a presidential transition, a new Congress, and massive staff turnover across our nation’s capital. Education and Workforce Development Public Core Resilience Supply Chain Security Measuring Cybersecurity Promoting Operational Collaboration Lots of detail in the 70-page report. ** *** ***** ******* *********** ************* Authentication Failure [2020.12.14] This is a weird story of a building owner commissioning an artist to paint a mural on the side of his building -- except that he wasn't actually the building's owner. The fake landlord met Hawkins in person the day after Thanksgiving, supplying the paint and half the promised fee. They met again a couple of days later for lunch, when the job was mostly done. Hawkins showed him photographs. The patron seemed happy. He sent Hawkins the rest of the (sorry) dough. But when Hawkins invited him down to see the final result, his client didn't answer the phone. Hawkins called again. No answer. Hawkins emailed. Again, no answer. [...] Two days later, Hawkins got a call from the real Comte. 
And that Comte was not happy. Comte says that he doesn't believe Hawkins's story, but I don't think I would have demanded to see a photo ID before taking the commission. ** *** ***** ******* *********** ************* Upcoming Speaking Engagements [2020.12.14] This is a current list of where and when I am scheduled to speak: I'm speaking (online) at Western Washington University on January 20, 2021. Details to come. I’ll be speaking at an Informa event on February 28, 2021. Details to come. The list is maintained on this page. ** *** ***** ******* *********** ************* Should There Be Limits on Persuasive Technologies? [2020.12.14] Persuasion is as old as our species. Both democracy and the market economy depend on it. Politicians persuade citizens to vote for them, or to support different policy positions. Businesses persuade consumers to buy their products or services. We all persuade our friends to accept our choice of restaurant, movie, and so on. It’s essential to society; we couldn’t get large groups of people to work together without it. But as with many things, technology is fundamentally changing the nature of persuasion. And society needs to adapt its rules of persuasion or suffer the consequences. Democratic societies, in particular, are in dire need of a frank conversation about the role persuasion plays in them and how technologies are enabling powerful interests to target audiences. In a society where public opinion is a ruling force, there is always a risk of it being mobilized for ill purposes -- such as provoking fear to encourage one group to hate another in a bid to win office, or targeting personal vulnerabilities to push products that might not benefit the consumer. In this regard, the United States, already extremely polarized, sits on a precipice. There have long been rules around persuasion. The US Federal Trade Commission enforces laws that claims about products “must be truthful, not misleading, and, when appropriate, backed by scientific evidence.” Political advertisers must identify themselves in television ads. If someone abuses a position of power to force another person into a contract, undue influence can be argued to nullify that agreement. Yet there is more to persuasion than the truth, transparency, or simply applying pressure. Persuasion also involves psychology, and that has been far harder to regulate. Using psychology to persuade people is not new. Edward Bernays, a pioneer of public relations and nephew to Sigmund Freud, made a marketing practice of appealing to the ego. His approach was to tie consumption to a person’s sense of self. In his 1928 book Propaganda, Bernays advocated engineering events to persuade target audiences as desired. In one famous stunt, he hired women to smoke cigarettes while taking part in the 1929 New York City Easter Sunday parade, causing a scandal while linking smoking with the emancipation of women. The tobacco industry would continue to market lifestyle in selling cigarettes into the 1960s. Emotional appeals have likewise long been a facet of political campaigns. In the 1860 US presidential election, Southern politicians and newspaper editors spread fears of what a “Black Republican” win would mean, painting horrific pictures of what the emancipation of slaves would do to the country. In the 2020 US presidential election, modern-day Republicans used Cuban Americans’ fears of socialism in ads on Spanish-language radio and messaging on social media. 
Because of the emotions involved, many voters believed the campaigns enough to let them influence their decisions. The Internet has enabled new technologies of persuasion to go even further. Those seeking to influence others can collect and use data about targeted audiences to create personalized messaging. Tracking the websites a person visits, the searches they make online, and what they engage with on social media, persuasion technologies enable those who have access to such tools to better understand audiences and deliver more tailored messaging where audiences are likely to see it most. This information can be combined with data about other activities, such as offline shopping habits, the places a person visits, and the insurance they buy, to create a profile of them that can be used to develop persuasive messaging that is aimed at provoking a specific response. Our senses of self, meanwhile, are increasingly shaped by our interaction with technology. The same digital environment where we read, search, and converse with our intimates enables marketers to take that data and turn it back on us. A modern day Bernays no longer needs to ferret out the social causes that might inspire you or entice you -- you’ve likely already shared that by your online behavior. Some marketers posit that women feel less attractive on Mondays, particularly first thing in the morning -- and therefore that’s the best time to advertise cosmetics to them. The New York Times once experimented by predicting the moods of readers based on article content to better target ads, enabling marketers to find audiences when they were sad or fearful. Some music streaming platforms encourage users to disclose their current moods, which helps advertisers target subscribers based on their emotional states. The phones in our pockets provide marketers with our location in real time, helping deliver geographically relevant ads, such as propaganda to those attending a political rally. This always-on digital experience enables marketers to know what we are doing -- and when, where, and how we might be feeling at that moment. All of this is not intended to be alarmist. It is important not to overstate the effectiveness of persuasive technologies. But while many of them are more smoke and mirrors than reality, it is likely that they will only improve over time. The technology already exists to help predict moods of some target audiences, pinpoint their location at any given time, and deliver fairly tailored and timely messaging. How far does that ability need to go before it erodes the autonomy of those targeted to make decisions of their own free will? Right now, there are few legal or even moral limits on persuasion -- and few answers regarding the effectiveness of such technologies. Before it is too late, the world needs to consider what is acceptable and what is over the line. For example, it’s been long known that people are more receptive to advertisements made with people who look like them: in race, ethnicity, age, gender. Ads have long been modified to suit the general demographic of the television show or magazine they appear in. But we can take this further. The technology exists to take your likeness and morph it with a face that is demographically similar to you. The result is a face that looks like you, but that you don’t recognize. If that turns out to be more persuasive than coarse demographic targeting, is that okay? 
Another example: Instead of just advertising to you when they detect that you are vulnerable, what if advertisers craft advertisements that deliberately manipulate your mood? In some ways, being able to place ads alongside content that is likely to provoke a certain emotional response enables advertisers to do this already. The only difference is that the media outlet claims it isn’t crafting the content to deliberately achieve this. But is it acceptable to actively prime a target audience and then to deliver persuasive messaging that fits the mood? Further, emotion-based decision-making is not the rational type of slow thinking that ought to inform important civic choices such as voting. In fact, emotional thinking threatens to undermine the very legitimacy of the system, as voters are essentially provoked to move in whatever direction someone with power and money wants. Given the pervasiveness of digital technologies, and the often instant, reactive responses people have to them, how much emotion ought to be allowed in persuasive technologies? Is there a line that shouldn’t be crossed? Finally, for most people today, exposure to information and technology is pervasive. The average US adult spends more than eleven hours a day interacting with media. Such levels of engagement lead to huge amounts of personal data generated and aggregated about you -- your preferences, interests, and state of mind. The more those who control persuasive technologies know about us, what we are doing, how we are feeling, when we feel it, and where we are, the better they can tailor messaging that provokes us into action. The unsuspecting target is grossly disadvantaged. Is it acceptable for the same services to both mediate our digital experience and to target us? Is there ever such thing as too much targeting? The power dynamics of persuasive technologies are changing. Access to tools and technologies of persuasion is not egalitarian. Many require large amounts of both personal data and computation power, turning modern persuasion into an arms race where the better resourced will be better placed to influence audiences. At the same time, the average person has very little information about how these persuasion technologies work, and is thus unlikely to understand how their beliefs and opinions might be manipulated by them. What’s more, there are few rules in place to protect people from abuse of persuasion technologies, much less even a clear articulation of what constitutes a level of manipulation so great it effectively takes agency away from those targeted. This creates a positive feedback loop that is dangerous for society. In the 1970s, there was widespread fear about so-called subliminal messaging, which claimed that images of sex and death were hidden in the details of print advertisements, as in the curls of smoke in cigarette ads and the ice cubes of liquor ads. It was pretty much all a hoax, but that didn’t stop the Federal Trade Commission and the Federal Communications Commission from declaring it an illegal persuasive technology. That’s how worried people were about being manipulated without their knowledge and consent. It is time to have a serious conversation about limiting the technologies of persuasion. This must begin by articulating what is permitted and what is not. If we don’t, the powerful persuaders will become even more powerful. This essay was written with Alicia Wanless, and previously appeared in Foreign Policy. 
** *** ***** ******* *********** ************* Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page. You can also read these articles on my blog, Schneier on Security. Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety. Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books -- including his latest, We Have Root -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc. Copyright © 2020 by Bruce Schneier.
--- GoldED+/OSX 1.1.5-b20180707
 * Origin: A Pointless Point in Connemara (618:500/14.1)