AT2k Design BBS Message Area
From: Sean Rima
To: All
Subject: CRYPTO-GRAM, November 15, 2020
Date/Time: November 15, 2020 9:58 AM
Crypto-Gram
November 15, 2020

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

2020 Workshop on Economics of Information Security
US Cyber Command and Microsoft Are Both Disrupting TrickBot
Split-Second Phantom Images Fool Autopilots
Cybersecurity Visuals
NSA Advisory on Chinese Government Hacking
New Report on Police Decryption Capabilities
IMSI-Catchers from Canada
Reverse-Engineering the Redactions in the Ghislaine Maxwell Deposition
The NSA is Refusing to Disclose its Policy on Backdooring Commercial Products
Tracking Users on Waze
The Legal Risks of Security Research
New Windows Zero-Day
Determining What Video Conference Participants Are Typing from Watching Shoulder Movements
California Proposition 24 Passes
Detecting Phishing Emails
2020 Was a Secure Election
The Security Failures of Online Exam Proctoring
"Privacy Nutrition Labels" in Apple's App Store
New Zealand Election Fraud
Inrupt's Solid Announcement
Upcoming Speaking Engagements

** *** ***** ******* *********** *************

2020 Workshop on Economics of Information Security

[2020.10.14] The Workshop on Economics of Information Security will be online this year. Register here.

** *** ***** ******* *********** *************

US Cyber Command and Microsoft Are Both Disrupting TrickBot

[2020.10.15] Earlier this month, we learned that someone is disrupting the TrickBot botnet network.

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware. But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address 127.0.0.1, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.

A few days ago, the Washington Post reported that it’s the work of US Cyber Command:

U.S. Cyber Command’s campaign against the Trickbot botnet, an army of at least 1 million hijacked computers run by Russian-speaking criminals, is not expected to permanently dismantle the network, said four U.S. officials, who spoke on the condition of anonymity because of the matter’s sensitivity. But it is one way to distract them at least for a while as they seek to restore operations.

The network is controlled by “Russian speaking criminals,” and the fear is that it will be used to disrupt the US election next month.

The effort is part of what Gen. Paul Nakasone, the head of Cyber Command, calls “persistent engagement,” or the imposition of cumulative costs on an adversary by keeping them constantly engaged. And that is a key feature of CyberCom’s activities to help protect the election against foreign threats, officials said.

Here’s General Nakasone talking about persistent engagement.
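The reason the pushed 127.0.0.1 configuration is so disruptive is that loopback addresses never leave the infected machine, so the bots are effectively told to phone themselves. A minimal sketch of that property, assuming a simplified stand-in for TrickBot's real (and far more complex) configuration format:

```python
import ipaddress

# Hypothetical, simplified config: a list of candidate control-server addresses.
# TrickBot's actual config format is different; this only illustrates the idea.
pushed_config = {"controllers": ["127.0.0.1"]}

def reachable_controllers(config):
    """Return only controller addresses that could plausibly be reached
    over the public Internet (i.e., not loopback, private, or reserved)."""
    usable = []
    for addr in config["controllers"]:
        ip = ipaddress.ip_address(addr)
        if not (ip.is_loopback or ip.is_private or ip.is_reserved):
            usable.append(addr)
    return usable

print(reachable_controllers(pushed_config))  # [] -- the bot has nowhere to call home
```

Because the operators can push a corrected config through any channel they still control, the effect is disruptive rather than permanent, which matches the officials' expectations quoted above.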
Microsoft is also disrupting Trickbot:

We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world. We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.

[...]

We took today’s action after the United States District Court for the Eastern District of Virginia granted our request for a court order to halt Trickbot’s operations. During the investigation that underpinned our case, we were able to identify operational details including the infrastructure Trickbot used to communicate with and control victim computers, the way infected computers talk with each other, and Trickbot’s mechanisms to evade detection and attempts to disrupt its operation. As we observed the infected computers connect to and receive instructions from command and control servers, we were able to identify the precise IP addresses of those servers. With this evidence, the court granted approval for Microsoft and our partners to disable the IP addresses, render the content stored on the command and control servers inaccessible, suspend all services to the botnet operators, and block any effort by the Trickbot operators to purchase or lease additional servers.

To execute this action, Microsoft formed an international group of industry and telecommunications providers. Our Digital Crimes Unit (DCU) led investigation efforts including detection, analysis, telemetry, and reverse engineering, with additional data and insights to strengthen our legal case from a global network of partners including FS-ISAC, ESET, Lumen’s Black Lotus Labs, NTT and Symantec, a division of Broadcom, in addition to our Microsoft Defender team. Further action to remediate victims will be supported by internet service providers (ISPs) and computer emergency readiness teams (CERTs) around the world.

This action also represents a new legal approach that our DCU is using for the first time. Our case includes copyright claims against Trickbot’s malicious use of our software code. This approach is an important development in our efforts to stop the spread of malware, allowing us to take civil action to protect customers in the large number of countries around the world that have these laws in place.

Brian Krebs comments:

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

This is a novel use of copyright law.

** *** ***** ******* *********** *************

Split-Second Phantom Images Fool Autopilots

[2020.10.19] Researchers are tricking autopilots by inserting split-second images into roadside billboards.
Researchers at Israel’s Ben Gurion University of the Negev ... previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.

[...]

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video.

The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions.

The paper:

Abstract: In this paper, we investigate “split-second phantom attacks,” a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Tesla Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard which causes Tesla’s autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector in order to cause Tesla’s autopilot to apply the brakes in response to a phantom of a pedestrian that was projected on the road and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure which can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks.
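The “committee of experts” idea in the abstract is easy to picture: each small network scores one aspect of the detected object, and the scores are fused before the vehicle reacts. A toy sketch of such a combiner, with hard-coded stand-ins for the four per-aspect models (the paper's actual GhostBusters networks and fusion rule are not reproduced here):

```python
import numpy as np

# Hypothetical stand-ins for the four per-aspect models (light, context,
# surface, depth). In the real system each is a small CNN; here each is
# just a function returning P(object is real) for a detected-object crop.
def light_expert(crop):   return 0.91
def context_expert(crop): return 0.88
def surface_expert(crop): return 0.12   # a flat billboard surface looks wrong
def depth_expert(crop):   return 0.07   # a depthless phantom fails the depth check

EXPERTS = [light_expert, context_expert, surface_expert, depth_expert]

def is_real_object(crop, threshold=0.5, weights=None):
    """Combine the committee's scores; reject the detection as a phantom
    if the combined score falls below the threshold."""
    scores = np.array([expert(crop) for expert in EXPERTS])
    weights = np.ones_like(scores) / len(scores) if weights is None else weights
    return float(scores @ weights) >= threshold

print(is_real_object(crop=None))  # False -- treated as a phantom, so don't brake
```

The appeal of the committee is that a phantom on a billboard may look plausible to the light or context expert, but it is unlikely to satisfy the surface and depth experts at the same time.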
** *** ***** ******* *********** *************

Cybersecurity Visuals

[2020.10.20] The Hewlett Foundation just announced its top five ideas in its Cybersecurity Visuals Challenge. The problem Hewlett is trying to solve is the dearth of good visuals for cybersecurity. A Google Images Search demonstrates the problem: locks, fingerprints, hands on laptops, scary looking hackers in black hoodies. Hewlett wanted to go beyond those tropes. I really liked the idea, but find the results underwhelming. It’s a hard problem.

Hewlett press release.

** *** ***** ******* *********** *************

NSA Advisory on Chinese Government Hacking

[2020.10.21] The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers.

This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching.

** *** ***** ******* *********** *************

New Report on Police Decryption Capabilities

[2020.10.23] There is a new report on police decryption capabilities: specifically, mobile device forensic tools (MDFTs). Short summary: it’s not just the FBI that can do it.

This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.

Lots of details in the report. And in this news article:

At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash.

And local law enforcement agencies that don’t have such tools can often send a locked phone to a state or federal crime lab that does.

[...]

The tools mostly come from Grayshift, an Atlanta company co-founded by a former Apple engineer, and Cellebrite, an Israeli unit of Japan’s Sun Corporation. Their flagship tools cost roughly $9,000 to $18,000, plus $3,500 to $15,000 in annual licensing fees, according to invoices obtained by Upturn.

** *** ***** ******* *********** *************

IMSI-Catchers from Canada

[2020.10.26] Gizmodo is reporting that Harris Corp. is no longer selling Stingray IMSI-catchers (and, presumably, its follow-on models Hailstorm and Crossbow) to local governments:

L3Harris Technologies, formerly known as the Harris Corporation, notified police agencies last year that it planned to discontinue sales of its surveillance boxes at the local level, according to government records. Additionally, the company would no longer offer access to software upgrades or replacement parts, effectively slapping an expiration date on boxes currently in use. Any advancements in cellular technology, such as the rollout of 5G networks in most major U.S. cities, would render them obsolete.

The article goes on to talk about replacement surveillance systems from the Canadian company Octasic.

Octasic’s Nyxcell V800 can target most modern phones while maintaining the ability to capture older GSM devices.
Florida’s state police agency described the device, made for in-vehicle use, as capable of targeting eight frequency bands including GSM (2G), CDMA2000 (3G), and LTE (4G).

[...]

A 2018 patent assigned to Octasic claims that Nyxcell forces a connection with nearby mobile devices when its signal is stronger than the nearest legitimate cellular tower. Once connected, Nyxcell prompts devices to divulge information about their signal strength relative to nearby cell towers. These reported signal strengths (intra-frequency measurement reports) are then used to triangulate the position of a phone.
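To make the triangulation step concrete: if the phone reports how strongly it hears several towers whose locations are known, each report can be turned into a rough distance and a position solved for. A toy sketch under simplifying assumptions (an idealized log-distance path-loss model and invented coordinates; real intra-frequency measurement reports are far noisier):

```python
import math

# Known tower positions (km) and the signal strength (dBm) the phone reports
# for each of them. All numbers are invented for illustration.
towers = [((0.0, 0.0), -50.5), ((4.0, 0.0), -50.5), ((0.0, 3.0), -53.6)]

def rssi_to_distance(rssi_dbm, tx_power=-40.0, path_loss_exp=3.0):
    """Invert a log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power - rssi_dbm) / (10 * path_loss_exp))

def locate(towers, step=0.05, search=6.0):
    """Coarse grid search for the point whose distances to the towers best
    match the distances implied by the reported signal strengths."""
    distances = [(pos, rssi_to_distance(rssi)) for pos, rssi in towers]
    best, best_err = None, float("inf")
    steps = int(search / step)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            x, y = i * step, j * step
            err = sum((math.dist((x, y), pos) - d) ** 2 for pos, d in distances)
            if err < best_err:
                best, best_err = (x, y), err
    return best

print(locate(towers))  # roughly (2.0, 1.0) with these made-up numbers
```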
Octasic appears to lean heavily on the work of Indian engineers and scientists overseas. A self-published biography of the company notes that while the company is headquartered in Montreal, it has “R&D facilities in India,” as well as a “worldwide sales support network.” Nyxcell’s website, which is only a single page requesting contact information, does not mention Octasic by name. Gizmodo was, however, able to recover domain records identifying Octasic as the owner.

** *** ***** ******* *********** *************

Reverse-Engineering the Redactions in the Ghislaine Maxwell Deposition

[2020.10.27] Slate magazine was able to cleverly read the Ghislaine Maxwell deposition and reverse-engineer many of the redacted names.

We’ve long known that redacting is hard in the modern age, but most of the failures to date have been a result of not realizing that covering digital text with a black bar doesn’t always remove the text from the underlying digital file. As far as I know, this reverse-engineering technique is new.

EDITED TO ADD: A similar technique was used in 1991 to recover the Dead Sea Scrolls.

** *** ***** ******* *********** *************

The NSA is Refusing to Disclose its Policy on Backdooring Commercial Products

[2020.10.28] Senator Ron Wyden asked, and the NSA didn’t answer:

The NSA has long sought agreements with technology companies under which they would build special access for the spy agency into their products, according to disclosures by former NSA contractor Edward Snowden and reporting by Reuters and others. These so-called back doors enable the NSA and other agencies to scan large amounts of traffic without a warrant. Agency advocates say the practice has eased collection of vital intelligence in other countries, including interception of terrorist communications.

The agency developed new rules for such practices after the Snowden leaks in order to reduce the chances of exposure and compromise, three former intelligence officials told Reuters. But aides to Senator Ron Wyden, a leading Democrat on the Senate Intelligence Committee, say the NSA has stonewalled on providing even the gist of the new guidelines.

[...]

The agency declined to say how it had updated its policies on obtaining special access to commercial products. NSA officials said the agency has been rebuilding trust with the private sector through such measures as offering warnings about software flaws.

“At NSA, it’s common practice to constantly assess processes to identify and determine best practices,” said Anne Neuberger, who heads NSA’s year-old Cybersecurity Directorate. “We don’t share specific processes and procedures.”

Three former senior intelligence agency figures told Reuters that the NSA now requires that before a back door is sought, the agency must weigh the potential fallout and arrange for some kind of warning if the back door gets discovered and manipulated by adversaries.

The article goes on to talk about Juniper Networks equipment, which had the NSA-created DUAL_EC PRNG backdoor in its products. That backdoor was taken advantage of by an unnamed foreign adversary.

Juniper Networks got into hot water over Dual EC two years later. At the end of 2015, the maker of internet switches disclosed that it had detected malicious code in some firewall products. Researchers later determined that hackers had turned the firewalls into their own spy tool here by altering Juniper’s version of Dual EC.

Juniper said little about the incident. But the company acknowledged to security researcher Andy Isaacson in 2016 that it had installed Dual EC as part of a “customer requirement,” according to a previously undisclosed contemporaneous message seen by Reuters. Isaacson and other researchers believe that customer was a U.S. government agency, since only the U.S. is known to have insisted on Dual EC elsewhere. Juniper has never identified the customer, and declined to comment for this story.

Likewise, the company never identified the hackers. But two people familiar with the case told Reuters that investigators concluded the Chinese government was behind it. They declined to detail the evidence they used.

Okay, lots of unsubstantiated claims and innuendo here. And Neuberger is right; the NSA shouldn’t share specific processes and procedures. But as long as this is a democratic country, the NSA has an obligation to disclose its general processes and procedures so we all know what they’re doing in our name. And if it’s still putting surveillance ahead of security.

** *** ***** ******* *********** *************

Tracking Users on Waze

[2020.10.29] A security researcher discovered a vulnerability in Waze that breaks the anonymity of users:

I found out that I can visit Waze from any web browser at waze.com/livemap so I decided to check how are those driver icons implemented. What I found is that I can ask Waze API for data on a location by sending my latitude and longitude coordinates. Except the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eyes was that identification numbers (ID) associated with the icons were not changing over time. I decided to track one driver and after some time she really appeared in a different place on the same road.

The vulnerability has been fixed. More interesting is that the researcher was able to de-anonymize some of the Waze users, proving yet again that anonymity is hard when we’re all so different.
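The privacy failure here is the stable identifier, not the location data itself. A short sketch of why, using a hypothetical fetch_nearby_drivers() placeholder rather than any real Waze endpoint:

```python
from collections import defaultdict

# Hypothetical placeholder standing in for the (now fixed) live-map lookup
# the researcher describes: query an area, get back nearby driver icons as
# (driver_id, lat, lon) tuples. No real Waze endpoint is used here.
def fetch_nearby_drivers(lat, lon):
    raise NotImplementedError("illustrative placeholder only")

def build_tracks(observations):
    """Group timestamped sightings by their persistent ID.

    observations: iterable of (timestamp, driver_id, lat, lon).
    Returns {driver_id: [(timestamp, lat, lon), ...]} -- a movement history
    per ID, which is exactly what a stable identifier makes possible.
    """
    tracks = defaultdict(list)
    for ts, driver_id, lat, lon in observations:
        tracks[driver_id].append((ts, lat, lon))
    return {k: sorted(v) for k, v in tracks.items()}

# With rotating, per-query IDs the same sightings could not be linked across
# time, and build_tracks would yield only single-point "tracks."
```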
** *** ***** ******* *********** *************

The Legal Risks of Security Research

[2020.10.30] Sunoo Park and Kendra Albert have published “A Researcher’s Guide to Some Legal Risks of Security Research.”

From a summary:

Such risk extends beyond anti-hacking laws, implicating copyright law and anti-circumvention provisions (DMCA §1201), electronic privacy law (ECPA), and cryptography export controls, as well as broader legal areas such as contract and trade secret law.

Our Guide gives the most comprehensive presentation to date of this landscape of legal risks, with an eye to both legal and technical nuance. Aimed at researchers, the public, and technology lawyers alike, it aims both to provide pragmatic guidance to those navigating today’s uncertain legal landscape, and to provoke public debate towards future reform.

Comprehensive, and well worth reading.

Here’s a Twitter thread by Kendra.

** *** ***** ******* *********** *************

New Windows Zero-Day

[2020.11.02] Google’s Project Zero has discovered and published a buffer overflow vulnerability in the Windows Kernel Cryptography Driver. The exploit doesn’t affect the cryptography, but allows attackers to escalate system privileges:

Attackers were combining an exploit for it with a separate one targeting a recently fixed flaw in Chrome. The former allowed the latter to escape a security sandbox so the latter could execute code on vulnerable machines.

The vulnerability is being exploited in the wild, although Microsoft says it’s not being exploited widely. Everyone expects a fix in the next Patch Tuesday cycle.

** *** ***** ******* *********** *************

Determining What Video Conference Participants Are Typing from Watching Shoulder Movements

[2020.11.04] Accuracy isn’t great, but that it can be done at all is impressive.

Murtuza Jadiwala, a computer science professor heading the research project, said his team was able to identify the contents of texts by examining body movement of the participants. Specifically, they focused on the movement of their shoulders and arms to extrapolate the actions of their fingers as they typed.

Given the widespread use of high-resolution web cams during conference calls, Jadiwala was able to record and analyze slight pixel shifts around users’ shoulders to determine if they were moving left or right, forward or backward. He then created a software program that linked the movements to a list of commonly used words. He says the “text inference framework that uses the keystrokes detected from the video ... predict[s] words that were most likely typed by the target user. We then comprehensively evaluate[d] both the keystroke/typing detection and text inference frameworks using data collected from a large number of participants.”

In a controlled setting, with specific chairs, keyboards and webcam, Jadiwala said he achieved an accuracy rate of 75 percent. However, in uncontrolled environments, accuracy dropped to only one out of every five words being correctly identified. Other factors contribute to lower accuracy levels, he said, including whether long sleeve or short sleeve shirts were worn, and the length of a user’s hair. With long hair obstructing a clear view of the shoulders, accuracy plummeted.
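The underlying signal is simple to approximate: compare a small patch around the shoulders across consecutive frames and estimate the dominant sideways shift. A toy sketch with NumPy, assuming grayscale frames already cropped to the shoulder region (the researchers' actual keystroke-detection and word-inference models are not shown):

```python
import numpy as np

def horizontal_shift(prev_patch, next_patch, max_shift=5):
    """Estimate how far the shoulder patch moved left/right between frames by
    sliding one column-intensity profile over the other and keeping the offset
    with the smallest mismatch."""
    prev_profile = prev_patch.mean(axis=0)   # average brightness of each column
    next_profile = next_patch.mean(axis=0)
    best_offset, best_err = 0, np.inf
    for offset in range(-max_shift, max_shift + 1):
        shifted = np.roll(next_profile, -offset)
        err = np.mean((prev_profile - shifted) ** 2)
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset   # >0 means movement to the right, <0 to the left

# Synthetic demo: a bright "shoulder" band that moves two pixels to the right.
frame0 = np.zeros((20, 40)); frame0[:, 10:20] = 1.0
frame1 = np.zeros((20, 40)); frame1[:, 12:22] = 1.0
print(horizontal_shift(frame0, frame1))  # 2
```

Turning such per-frame shifts into keystrokes and then into candidate words is where the real research effort (and the accuracy loss in uncontrolled settings) lies.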
** *** ***** ******* *********** *************

California Proposition 24 Passes

[2020.11.05] California’s Proposition 24, aimed at improving the California Consumer Privacy Act, passed this week. Analyses are very mixed. I was very mixed on the proposition, but on the whole I supported it. The proposition has some serious flaws, and was watered down by industry, but voting for privacy feels like it’s generally a good thing.

** *** ***** ******* *********** *************

Detecting Phishing Emails

[2020.11.06] Research paper: Rick Wash, “How Experts Detect Phishing Scam Emails”:

Abstract: Phishing scam emails are emails that pretend to be something they are not in order to get the recipient of the email to undertake some action they normally would not. While technical protections against phishing reduce the number of phishing emails received, they are not perfect and phishing remains one of the largest sources of security risk in technology and communication systems. To better understand the cognitive process that end users can use to identify phishing messages, I interviewed 21 IT experts about instances where they successfully identified emails as phishing in their own inboxes.

IT experts naturally follow a three-stage process for identifying phishing emails. In the first stage, the email recipient tries to make sense of the email, and understand how it relates to other things in their life. As they do this, they notice discrepancies: little things that are “off” about the email. As the recipient notices more discrepancies, they feel a need for an alternative explanation for the email. At some point, some feature of the email -- usually, the presence of a link requesting an action -- triggers them to recognize that phishing is a possible alternative explanation. At this point, they become suspicious (stage two) and investigate the email by looking for technical details that can conclusively identify the email as phishing. Once they find such information, then they move to stage three and deal with the email by deleting it or reporting it. I discuss ways this process can fail, and implications for improving training of end users about phishing.

** *** ***** ******* *********** *************

2020 Was a Secure Election

[2020.11.10] Over at Lawfare: “2020 Is An Election Security Success Story (So Far).”

What’s more, the voting itself was remarkably smooth. It was only a few months ago that professionals and analysts who monitor election administration were alarmed at how badly unprepared the country was for voting during a pandemic. Some of the primaries were disasters. There were not clear rules in many states for voting by mail or sufficient opportunities for voting early. There was an acute shortage of poll workers. Yet the United States saw unprecedented turnout over the last few weeks. Many states handled voting by mail and early voting impressively and huge numbers of volunteers turned up to work the polls. Large amounts of litigation before the election clarified the rules in every state. And for all the president’s griping about the counting of votes, it has been orderly and apparently without significant incident. The result was that, in the midst of a pandemic that has killed 230,000 Americans, record numbers of Americans voted -- and voted by mail -- and those votes are almost all counted at this stage.

On the cybersecurity front, there is even more good news. Most significantly, there was no serious effort to target voting infrastructure. After voting concluded, the director of the Cybersecurity and Infrastructure Security Agency (CISA), Chris Krebs, released a statement, saying that “after millions of Americans voted, we have no evidence any foreign adversary was capable of preventing Americans from voting or changing vote tallies.” Krebs pledged to “remain vigilant for any attempts by foreign actors to target or disrupt the ongoing vote counting and final certification of results,” and no reports have emerged of threats to tabulation and certification processes.

A good summary.

** *** ***** ******* *********** *************

The Security Failures of Online Exam Proctoring

[2020.11.11] Proctoring an online exam is hard. It’s hard to be sure that the student isn’t cheating, maybe by having reference materials at hand, or maybe by substituting someone else to take the exam for them. There are a variety of companies that provide online proctoring services, but they’re uniformly mediocre:

The remote proctoring industry offers a range of services, from basic video links that allow another human to observe students as they take exams to algorithmic tools that use artificial intelligence (AI) to detect cheating.
But asking students to install software to monitor them during a test raises a host of fairness issues, experts say.

“There’s a big gulf between what this technology promises, and what it actually does on the ground,” said Audrey Watters, a researcher on the edtech industry who runs the website Hack Education. “(They) assume everyone looks the same, takes tests the same way, and responds to stressful situations in the same way.”

The article discusses the usual failure modes: facial recognition systems that are more likely to fail on students with darker faces, suspicious-movement-detection systems that fail on students with disabilities, and overly intrusive systems that collect all sorts of data from student computers.

I teach cybersecurity policy at the Harvard Kennedy School. My solution, which seems like the obvious one, is not to give timed closed-book exams in the first place. This doesn’t work for things like the legal bar exam, which can’t modify itself so quickly. But this feels like an arms race where the cheater has a large advantage, and any remote proctoring system will be plagued with false positives.

** *** ***** ******* *********** *************

"Privacy Nutrition Labels" in Apple's App Store

[2020.11.12] Apple will start requiring standardized privacy labels for apps in its app store, starting in December:

Apple allows data disclosure to be optional if all of the following conditions apply: if it’s not used for tracking, advertising or marketing; if it’s not shared with a data broker; if collection is infrequent, unrelated to the app’s primary function, and optional; and if the user chooses to provide the data in conjunction with clear disclosure, the user’s name or account name is prominently displayed with the submission.

Otherwise, the privacy labeling is mandatory and requires a fair amount of detail. Developers must disclose the use of contact information, health and financial data, location data, user content, browsing history, search history, identifiers, usage data, diagnostics, and more. If a software maker is collecting the user’s data to display first or third-party adverts, this has to be disclosed.

These disclosures then get translated to a card-style interface displayed with app product pages in the platform-appropriate App Store.

The concept of a privacy nutrition label isn’t new, and has been well-explored at CyLab at Carnegie Mellon University.
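The "optional disclosure" rule quoted above is a plain conjunction of conditions, which is easy to express directly. A small sketch using illustrative field names (this is not Apple's actual App Store Connect questionnaire schema):

```python
from dataclasses import dataclass

@dataclass
class DataCollectionPractice:
    # Illustrative fields only, mirroring the conditions quoted above.
    used_for_tracking: bool
    used_for_advertising_or_marketing: bool
    shared_with_data_broker: bool
    collection_is_infrequent: bool
    unrelated_to_primary_function: bool
    collection_is_optional: bool
    user_submits_with_clear_disclosure: bool
    identity_prominently_displayed: bool

def disclosure_is_optional(p: DataCollectionPractice) -> bool:
    """All of the quoted conditions must hold; otherwise the data type
    must appear on the app's privacy label."""
    return all([
        not p.used_for_tracking,
        not p.used_for_advertising_or_marketing,
        not p.shared_with_data_broker,
        p.collection_is_infrequent,
        p.unrelated_to_primary_function,
        p.collection_is_optional,
        p.user_submits_with_clear_disclosure,
        p.identity_prominently_displayed,
    ])

example = DataCollectionPractice(
    used_for_tracking=False, used_for_advertising_or_marketing=False,
    shared_with_data_broker=False, collection_is_infrequent=True,
    unrelated_to_primary_function=True, collection_is_optional=True,
    user_submits_with_clear_disclosure=True, identity_prominently_displayed=True,
)
print(disclosure_is_optional(example))  # True -- may be omitted from the label
```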
** *** ***** ******* *********** *************

New Zealand Election Fraud

[2020.11.13] It seems that this election season has not gone without fraud. In New Zealand, a vote for “Bird of the Year” has been marred by fraudulent votes:

More than 1,500 fraudulent votes were cast in the early hours of Monday in the country’s annual bird election, briefly pushing the Little-Spotted Kiwi to the top of the leaderboard, organizers and environmental organization Forest & Bird announced Tuesday.

Those votes -- which were discovered by the election’s official scrutineers -- have since been removed. According to election spokesperson Laura Keown, the votes were cast using fake email addresses that were all traced back to the same IP address in Auckland, New Zealand’s most populous city.

It feels like writing this story was a welcome distraction from writing about the US election:

“No one has to worry about the integrity of our bird election,” she told Radio New Zealand, adding that every vote would be counted.

Asked whether Russia had been involved, she denied any “overseas interference” in the vote.

I’m sure that’s a relief to everyone involved.

** *** ***** ******* *********** *************

Inrupt's Solid Announcement

[2020.11.13] Earlier this year, I announced that I had joined Inrupt, the company commercializing Tim Berners-Lee’s Solid specification:

The idea behind Solid is both simple and extraordinarily powerful. Your data lives in a pod that is controlled by you. Data generated by your things -- your computer, your phone, your IoT whatever -- is written to your pod. You authorize granular access to that pod to whoever you want for whatever reason you want. Your data is no longer in a bazillion places on the Internet, controlled by you-have-no-idea-who. It’s yours. If you want your insurance company to have access to your fitness data, you grant it through your pod. If you want your friends to have access to your vacation photos, you grant it through your pod. If you want your thermostat to share data with your air conditioner, you give both of them access through your pod.

This week, Inrupt announced the availability of the commercial-grade Enterprise Solid Server, along with a small but impressive list of initial customers of the product and the specification (like the UK National Health Service).

This is a significant step forward to realizing Tim’s vision:

The technologies we’re releasing today are a component of a much-needed course correction for the web. It’s exciting to see organizations using Solid to improve the lives of everyday people -- through better healthcare, more efficient government services and much more.

These first major deployments of the technology will kick off the network effect necessary to ensure the benefits of Solid will be appreciated on a massive scale. Once users have a Solid Pod, the data there can be extended, linked, and repurposed in valuable new ways. And Solid’s growing community of developers can rest assured that their apps will benefit from the widespread adoption of reliable Solid Pods, already populated with valuable data that users are empowered to share.

A few news articles. Slashdot thread.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2020.11.14] This is a current list of where and when I am scheduled to speak:

I’m speaking at the (ISC)² Security Congress 2020, November 16, 2020.
I’ll be on a panel at the OECD Global Blockchain Policy Forum 2020 on November 17, 2020. The panel is called “Deep Dive: Digital Security and Distributed Ledger Technology: Myths and Reality.”
I’m speaking on “Securing a World of Physically Capable Computers” as part of Cary Library’s Science & Economics Series on November 17, 2020.
I’ll be keynoting the HITB CyberWeek Virtual Edition on November 18, 2020.
I’m appearing on a panel called “The Privacy Paradox and Security Dilemma” as part of the Web Summit conference, on December 2, 2020.
I’ll be speaking at an Informa event on February 28, 2021. Details to come.

The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books -- including his latest, We Have Root -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

Copyright © 2020 by Bruce Schneier.

--- GoldED+/OSX 1.1.5-b20180707
 * Origin: A Pointless Point in Connemara (618:500/14.1)