
Crypto-Gram
April 15, 2023

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

NetWire Remote Access Trojan Maker Arrested
How AI Could Write Our Laws
Upcoming Speaking Engagements
US Citizen Hacked by Spyware
ChatGPT Privacy Flaw
Mass Ransomware Attack
Exploding USB Sticks
A Hacker's Mind News
Hacks at Pwn2Own Vancouver 2023
Security Vulnerabilities in Snipping Tools
The Security Vulnerabilities of Message Interoperability
Russian Cyberwarfare Documents Leaked
UK Runs Fake DDoS-for-Hire Sites
North Korea Hacking Cryptocurrency Sites with 3CX Exploit
FBI (and Others) Shut Down Genesis Market
Research on AI in Adversarial Settings
LLMs and Phishing
Car Thieves Hacking the CAN Bus
FBI Advising People to Avoid Public Charging Stations
Bypassing a Theft Threat Model
Gaining an Advantage in Roulette
Hacking Suicide
Upcoming Speaking Engagements
** *** ***** ******* *********** *************

NetWire Remote Access Trojan Maker Arrested

[2023.03.14] From Brian Krebs:

A Croatian national has been arrested for allegedly operating NetWire, a Remote
Access Trojan (RAT) marketed on cybercrime forums since 2012 as a stealthy way
to spy on infected systems and siphon passwords. The arrest coincided with a
seizure of the NetWire sales website by the U.S. Federal Bureau of Investigation
(FBI). While the defendant in this case hasn't yet been named publicly, the
NetWire website has been leaking information about the likely true identity and
location of its owner for the past 11 years.

The article details the mistakes that led to the person's address.

** *** ***** ******* *********** *************

How AI Could Write Our Laws

[2023.03.14] Nearly 90% of the multibillion-dollar federal lobbying apparatus in
the United States serves corporate interests. In some cases, the objective of
that money is obvious. Google pours millions into lobbying on bills related to
antitrust regulation. Big energy companies expect action whenever there is a
move to end drilling leases for federal lands, in exchange for the tens of
millions they contribute to congressional reelection campaigns.

But lobbying strategies are not always so blunt, and the interests involved are
not always so obvious. Consider, for example, a 2013 Massachusetts bill that
tried to restrict the commercial use of data collected from K-12 students using
services accessed via the internet. The bill appealed to many privacy-conscious
education advocates, and appropriately so. But behind the justification of
protecting students lay a market-altering policy: the bill was introduced at the
behest of Microsoft lobbyists, in an effort to exclude Google Docs from
classrooms.

What would happen if such legal-but-sneaky strategies for tilting the rules in
favor of one group over another become more widespread and effective? We can see
hints of an answer in the remarkable pace at which artificial-intelligence tools
for everything from writing to graphic design are being developed and improved.
And the unavoidable conclusion is that AI will make lobbying more guileful, and
perhaps more successful.

It turns out there is a natural opening for this technology: microlegislation.

"Microlegislation" is a term for small pieces of proposed law that cater --
sometimes unexpectedly -- to narrow interests. Political scientist Amy McKay
coined the term. She studied the 564 amendments to the Affordable Care Act
("Obamacare") considered by the Senate Finance Committee in 2009, as well as
the positions of 866 lobbying groups and their campaign contributions. She
documented instances where lobbyist comments -- on health-care research, vaccine
services, and other provisions -- were translated directly into microlegislation
in the form of amendments. And she found that those groups' financial
contributions to specific senators on the committee increased the amendments'
chances of passing.

Her finding that lobbying works was no surprise. More important, McKay's work
demonstrated that computer models can predict the likely fate of proposed
legislative amendments, as well as the paths by which lobbyists can most
effectively secure their desired outcomes. And that turns out to be a critical
piece of creating an AI lobbyist.

Lobbying has long been part of the give-and-take among human policymakers and
advocates working to balance their competing interests. The danger of
microlegislation -- a danger greatly exacerbated by AI -- is that it can be used
in a way that makes it difficult to figure out who the legislation truly
benefits.

Another word for a strategy like this is a "hack." Hacks follow the rules of
a system but subvert their intent. Hacking is often associated with computer
systems, but the concept is also applicable to social systems like financial
markets, tax codes, and legislative processes.

While the idea of monied interests incorporating AI assistive technologies into
their lobbying remains hypothetical, specific machine-learning technologies
exist today that would enable them to do so. We should expect these techniques
to get better and their utilization to grow, just as we've seen in so many
other domains.

Here's how it might work.

Crafting an AI microlegislator

To make microlegislation, machine-learning systems must be able to uncover the
smallest modification that could be made to a bill or existing law that would
make the biggest impact on a narrow interest.

There are three basic challenges involved. First, you must create a policy
proposal -- small suggested changes to legal text -- and anticipate whether or
not a human reader would recognize the alteration as substantive. This is
important; a change that isn't detectable is more likely to pass without
controversy. Second, you need to do an impact assessment to project the
implications of that change for the short- or long-range financial interests of
companies. Third, you need a lobbying strategizer to identify what levers of
power to pull to get the best proposal into law.

Existing AI tools can tackle all three of these.

The first step, the policy proposal, leverages the core function of generative
AI. Large language models, the sort that have been used for general-purpose
chatbots such as ChatGPT, can easily be adapted to write like a native in
different specialized domains after seeing a relatively small number of
examples. This process is called fine-tuning. For example, a model
"pre-trained" on a large library of generic text samples from books and the
internet can be "fine-tuned" to work effectively on medical literature,
computer science papers, and product reviews.
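
As a concrete (and purely illustrative) sketch of that fine-tuning step, here is
roughly what it looks like with the open-source Hugging Face transformers
library. The base model, the corpus file name, and the hyperparameters are all
assumptions for illustration, not anything from the essay:

    # Minimal causal-LM fine-tuning sketch. The corpus file and settings
    # are illustrative assumptions, not a working lobbying tool.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)
    from datasets import load_dataset

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token      # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Hypothetical corpus: one previously offered amendment per line.
    raw = load_dataset("text", data_files={"train": "amendments.txt"})
    train = raw["train"].map(
        lambda b: tokenizer(b["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ft-out", num_train_epochs=1),
        train_dataset=train,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()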

Given this flexibility and capacity for adaptation, a large language model could
be fine-tuned to produce draft legislative texts, given a data set of previously
offered amendments and the bills they were associated with. Training data is
available. At the federal level, it's provided by the US Government Publishing
Office, and there are already tools for downloading and interacting with it.
Most other jurisdictions provide similar data feeds, and there are even
convenient assemblages of that data.

Meanwhile, large language models like the one underlying ChatGPT are routinely
used for summarizing long, complex documents (even laws and computer code) to
capture the essential points, and they are optimized to match human
expectations. This capability could allow an AI assistant to automatically
predict how detectable the true effect of a policy insertion may be to a human
reader.

Today, it can take a highly paid team of human lobbyists days or weeks to
generate and analyze alternative pieces of microlegislation on behalf of a
client. With AI assistance, that could be done instantaneously and cheaply. This
opens the door to dramatic increases in the scope of this kind of
microlegislating, with a potential to scale across any number of bills in any
jurisdiction.

Teaching machines to assess impact

Impact assessment is more complicated. There is a rich series of methods for
quantifying the predicted outcome of a decision or policy, and then also
optimizing the return under that model. This kind of approach goes by different
names in different circles -- mathematical programming in management science,
utility maximization in economics, and rational design in the life sciences.

To train an AI to do this, we would need to specify some way to calculate the
benefit to different parties as a result of a policy choice. That could mean
estimating the financial return to different companies under a few different
scenarios of taxation or regulation. Economists are skilled at building risk
models like this, and companies are already required to formulate and disclose
regulatory compliance risk factors to investors. Such a mathematical model could
translate directly into a reward function, a grading system that could provide
feedback for the model used to create policy proposals and direct the process of
training it.
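
As a toy sketch of how such a model could become a reward function: the code
below grades a draft amendment by the projected after-tax gain it creates for a
hypothetical client. The baseline figures and the rate-extraction rule are
invented for illustration:

    # Toy reward function: grade a candidate amendment by the projected
    # financial benefit to a hypothetical client. All figures invented.
    import re

    BASELINE_PROFIT = 100_000_000   # hypothetical client profit, pre-policy
    BASELINE_RATE = 0.21            # current statutory rate in this toy model

    def projected_rate(amendment_text: str) -> float:
        """Extract a proposed tax rate like '17.5 percent' from draft text."""
        m = re.search(r"(\d+(?:\.\d+)?)\s*percent", amendment_text)
        return float(m.group(1)) / 100 if m else BASELINE_RATE

    def reward(amendment_text: str) -> float:
        """Higher reward = larger projected after-tax gain for the client."""
        return BASELINE_PROFIT * (BASELINE_RATE - projected_rate(amendment_text))

    draft = "the applicable rate under section 11(b) shall be 17.5 percent"
    print(f"{reward(draft):,.0f}")   # 3,500,000 in this toy scenario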

The real challenge in impact assessment for generative AI models would be to
parse the textual output of a model like ChatGPT in terms that an economic model
could readily use. Automating this would require extracting structured financial
information from the draft amendment or any legalese surrounding it. This kind
of information extraction, too, is an area where AI has a long history; for
example, AI systems have been trained to recognize clinical details in
doctors' notes. Early indications are that large language models are fairly
good at recognizing financial information in texts such as investor call
transcripts. While it remains an open challenge in the field, they may even be
capable of writing out multi-step plans based on descriptions in free text.
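
A minimal sketch of that extraction step, using only the Python standard
library. A real system would use a trained extractor rather than regular
expressions, but the input/output shape is the same:

    # Naive structured extraction of financial terms from draft legal text.
    # Regex-based for illustration; real systems use trained extractors.
    import re

    def extract_financial_terms(text: str) -> dict:
        """Pull dollar amounts, percentages, and years out of legalese."""
        return {
            "amounts": [m.replace(",", "") for m in
                        re.findall(r"\$[\d,]+(?:\.\d+)?", text)],
            "rates":   re.findall(r"\d+(?:\.\d+)?\s*(?:percent|%)", text),
            "years":   re.findall(r"\b(?:19|20)\d{2}\b", text),
        }

    clause = ("For taxable years beginning after 2024, the credit shall "
              "not exceed $2,500,000 or 12.5 percent of qualified costs.")
    print(extract_financial_terms(clause))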

Machines as strategists

The last piece of the puzzle is a lobbying strategizer to figure out what
actions to take to convince lawmakers to adopt the amendment.

Passing legislation requires a keen understanding of the complex interrelated
networks of legislative offices, outside groups, executive agencies, and other
stakeholders vying to serve their own interests. Each actor in this network has
a baseline perspective and different factors that influence that point of view.
For example, a legislator may be moved by seeing an allied stakeholder take a
firm position, or by a negative news story, or by a campaign contribution.

It turns out that AI developers are very experienced at modeling these kinds of
networks. Machine-learning models for network graphs have been built, refined,
improved, and iterated by hundreds of researchers working on incredibly diverse
problems: lidar scans used to guide self-driving cars, the chemical functions of
molecular structures, the capture of motion in actors' joints for computer
graphics, behaviors in social networks, and more.

In the context of AI-assisted lobbying, political actors like legislators and
lobbyists are nodes on a graph, just like users in a social network. Relations
between them are graph edges, like social connections. Information can be passed
along those edges, like messages sent to a friend or campaign contributions made
to a member. AI models can use past examples to learn to estimate how that
information changes the network. Calculating the likelihood that a campaign
contribution of a given size will flip a legislator's vote on an amendment is
one application.
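
A small sketch of that framing, using the networkx graph library; the actors,
edge attributes, and logistic coefficients below are all invented for
illustration:

    # Toy influence graph: actors as nodes, contributions as weighted edges.
    # Actors, weights, and model coefficients are invented for illustration.
    import math
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("lobbyist_A", "senator_X", kind="contribution", usd=5_000)
    G.add_edge("advocacy_B", "senator_X", kind="position", strength=0.3)

    def p_flip(contribution_usd: float, base_logit: float = -2.0) -> float:
        """Hypothetical logistic model: P(vote flips) vs. contribution size."""
        logit = base_logit + 0.0002 * contribution_usd
        return 1 / (1 + math.exp(-logit))

    usd = G["lobbyist_A"]["senator_X"]["usd"]
    print(f"P(flip | ${usd:,}) = {p_flip(usd):.2f}")  # ~0.27 in this toy model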

McKay's work has already shown us that there are significant, predictable
relationships between these actions and the outcomes of legislation, and that
the work of discovering those can be automated. Others have shown that graphs of
neural network models like those described above can be applied to political
systems. The full-scale use of these technologies to guide lobbying strategy is
theoretical, but plausible.

Put together, these three components could create an automatic system for
generating profitable microlegislation. The policy proposal system would create
millions, even billions, of possible amendments. The impact assessor would
identify the few that promise to be most profitable to the client. And the
lobbying strategy tool would produce a blueprint for getting them passed.
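
Expressed as code, the combined system is just a generate-score-select loop.
The sketch below is schematic: every function is a stub standing in for one of
the three components described above:

    # Schematic generate -> assess -> strategize loop combining the three
    # components described above. All functions are illustrative stubs.
    def propose_amendments(bill_text: str, n: int) -> list[str]:
        """Stub for the fine-tuned drafting model."""
        return [f"amendment variant {i} of {bill_text[:20]}..." for i in range(n)]

    def assess_impact(amendment: str) -> float:
        """Stub for the economic reward model (see earlier sketch)."""
        return float(hash(amendment) % 1000)  # placeholder score

    def plan_lobbying(amendment: str) -> str:
        """Stub for the influence-graph strategizer."""
        return f"target committee members most receptive to: {amendment!r}"

    candidates = propose_amendments("An Act to amend chapter 30...", n=10_000)
    best = max(candidates, key=assess_impact)
    print(plan_lobbying(best))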

What remains is for human lobbyists to walk the floors of the Capitol or state
house, and perhaps supply some cash to grease the wheels. These final two
aspects of lobbying -- access and financing -- cannot be supplied by the AI
tools we envision. This suggests that lobbying will continue to primarily
benefit those who are already influential and wealthy, and AI assistance will
amplify their existing advantages.

The transformative benefit that AI offers to lobbyists and their clients is
scale. While individual lobbyists tend to focus on the federal level or a single
state, with AI assistance they could more easily infiltrate a large number of
state-level (or even local-level) law-making bodies and elections. At that
level, where the average cost of a seat is measured in the tens of thousands of
dollars instead of millions, a single donor can wield a lot of influence -- if
automation makes it possible to coordinate lobbying across districts.

How to stop them

When it comes to combating the potentially adverse effects of assistive AI, the
first response always seems to be to try to detect whether or not content was
AI-generated. We could imagine a defensive AI that detects anomalous lobbyist
spending associated with amendments that benefit the contributing group. But by
then, the damage might already be done.

In general, methods for detecting the work of AI tend not to keep pace with its
ability to generate convincing content. And these strategies won't be
implemented by AIs alone. The lobbyists will still be humans who take the
results of an AI microlegislator and further refine the computer's strategies.
These hybrid human-AI systems will not be detectable from their output.

But the good news is: the same strategies that have long been used to combat
misbehavior by human lobbyists can still be effective when those lobbyists get
an AI assist. We don't need to reinvent our democracy to stave off the worst
risks of AI; we just need to more fully implement long-standing ideals.

First, we should reduce the dependence of legislatures on monolithic,
multi-thousand-page omnibus bills voted on under deadline. This style of
legislating exploded in the 1980s and 1990s and continues through to the most
recent federal budget bill. Notwithstanding their legitimate benefits to the
political system, omnibus bills present an obvious and proven vehicle for
inserting unnoticed provisions that may later surprise the same legislators who
approved them.

The issue is not that individual legislators need more time to read and
understand each bill (that isn't realistic or even necessary). It's that
omnibus bills must pass. There is an imperative to pass a federal budget bill,
and so the capacity to push back on individual provisions that may seem
deleterious (or just impertinent) to any particular group is small. Bills that
are too big to fail are ripe for hacking by microlegislation.

Moreover, the incentive for legislators to introduce microlegislation catering
to a narrow interest is greater if the threat of exposure is lower. To
strengthen the threat of exposure for misbehaving legislative sponsors, bills
should focus more tightly on individual substantive areas and, after the
introduction of amendments, allow more time before the committee and floor
votes. During this time, we should encourage public review and testimony to
provide greater oversight.

Second, we should strengthen disclosure requirements on lobbyists, whether
they're entirely human or AI-assisted. State laws regarding lobbying
disclosure are a hodgepodge. North Dakota, for example, only requires lobbying
reports to be filed annually, so that by the time a disclosure is made, the
policy is likely already decided. A lobbying disclosure scorecard created by
Open Secrets, a group researching the influence of money in US politics, tracks
nine states that do not even require lobbyists to report their compensation.

Ideally, it would be great for the public to see all communication between
lobbyists and legislators, whether it takes the form of a proposed amendment or
not. Absent that, let's give the public the benefit of reviewing what
lobbyists are lobbying for -- and why. Lobbying is traditionally an activity
that happens behind closed doors. Right now, many states reinforce that: they
actually exempt testimony delivered publicly to a legislature from being
reported as lobbying.

In those jurisdictions, if you reveal your position to the public, you're no
longer lobbying. Let's do the inverse: require lobbyists to reveal their
positions on issues. Some jurisdictions already require a statement of position
(a 'yea' or 'nay') from registered lobbyists. And in most (but not all)
states, you could make a public records request regarding meetings held with a
state legislator and hope to get something substantive back. But we can expect
more -- lobbyists could be required to proactively publish, within a few days, a
brief summary of what they demanded of policymakers during meetings and why they
believe it's in the general interest.

We can't rely on corporations to be forthcoming and wholly honest about the
reasons behind their lobbying positions. But having them on the record about
their intentions would at least provide a baseline for accountability.

Finally, consider the role AI assistive technologies may have on lobbying firms
themselves and the labor market for lobbyists. Many observers are rightfully
concerned about the possibility of AI replacing or devaluing the human labor it
automates. If the automating potential of AI ends up commodifying the work of
political strategizing and message development, it may indeed put some
professionals on K Street out of work.

But don't expect that to disrupt the careers of the most astronomically
compensated lobbyists: former members of Congress and other insiders who have
passed through the revolving door. There is no shortage of reform ideas for
limiting the ability of government officials turned lobbyists to sell access to
their colleagues still in government, and they should be adopted and -- equally
important -- maintained and enforced in successive Congresses and
administrations.

None of these solutions are really original, specific to the threats posed by
AI, or even predominantly focused on microlegislation -- and that's the point.
Good governance should and can be robust to threats from a variety of techniques
and actors.

But what makes the risks posed by AI especially pressing now is how fast the
field is developing. We expect the scale, strategies, and effectiveness of
humans engaged in lobbying to evolve over years and decades. Advancements in AI,
meanwhile, seem to be making impressive breakthroughs at a much faster pace
-- and it's still accelerating.

The legislative process is a constant struggle between parties trying to control
the rules of our society as they are updated, rewritten, and expanded at the
federal, state, and local levels. Lobbying is an important tool for balancing
various interests through our system. If it's well-regulated, perhaps lobbying
can support policymakers in making equitable decisions on behalf of us all.

This article was co-written with Nathan E. Sanders and originally appeared in
MIT Technology Review.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2023.03.14] This is a current list of where and when I am scheduled to speak:

I'm speaking on "How to Reclaim Power in the Digital World" at EPFL in
Lausanne, Switzerland, on Thursday, March 16, 2023, at 5:30 PM CET.

I'll be discussing my new book A Hacker's Mind: How the Powerful Bend
Society's Rules at Harvard Science Center in Cambridge, Massachusetts, USA, on
Friday, March 31, 2023, at 6:00 PM EDT.

I'll be discussing my book A Hacker's Mind with Julia Angwin at the Ford
Foundation Center for Social Justice in New York City, on Thursday, April 6,
2023, at 6:30 PM EDT. Register here.

I'm speaking at IT-S Now 2023 in Vienna, Austria, on June 2, 2023, at 8:30 AM
CEST.
The list is maintained on this page.

** *** ***** ******* *********** *************

US Citizen Hacked by Spyware

[2023.03.21] The New York Times is reporting that a US citizen's phone was
hacked by Predator spyware.

A U.S. and Greek national who worked on Meta's security and trust team while
based in Greece was placed under a yearlong wiretap by the Greek national
intelligence service and hacked with a powerful cyberespionage tool, according
to documents obtained by The New York Times and officials with knowledge of the
case.

The disclosure is the first known case of an American citizen being targeted in
a European Union country by the advanced snooping technology, the use of which
has been the subject of a widening scandal in Greece. It demonstrates that the
illicit use of spyware is spreading beyond use by authoritarian governments
against opposition figures and journalists, and has begun to creep into European
democracies, even ensnaring a foreign national working for a major global
corporation.

The simultaneous tapping of the target's phone by the national intelligence
service and the way she was hacked indicate that the spy service and whoever
implanted the spyware, known as Predator, were working hand in hand.

** *** ***** ******* *********** *************

ChatGPT Privacy Flaw

[2023.03.22] OpenAI has disabled ChatGPT's chat history feature, almost
certainly because it had a security flaw where users were seeing each other's
histories.

** *** ***** ******* *********** *************

Mass Ransomware Attack

[2023.03.23] A vulnerability in a popular data transfer tool has resulted in a
mass ransomware attack:

TechCrunch has learned of dozens of organizations that used the affected
GoAnywhere file transfer software at the time of the ransomware attack,
suggesting more victims are likely to come forward.

However, while the number of victims of the mass-hack is widening, the known
impact is murky at best.

Since the attack in late January or early February -- the exact date is not
known -- Clop has disclosed less than half of the 130 organizations it claimed
to have compromised via GoAnywhere, a system that can be hosted in the cloud or
on an organization's network that allows companies to securely transfer huge
sets of data and other large files.

** *** ***** ******* *********** *************

Exploding USB Sticks

[2023.03.24] In case you don't have enough to worry about, people are hiding
explosives -- actual ones -- in USB sticks:

In the port city of Guayaquil, journalist Lenin Artieda of the Ecuavisa private
TV station received an envelope containing a pen drive which exploded when he
inserted it into a computer, his employer said.

Artieda sustained slight injuries to one hand and his face, said police official
Xavier Chango. No one else was hurt.

Chango said the USB drive sent to Artieda could have been loaded with RDX, a
military-type explosive.

More:

According to police official Xavier Chango, the flash drive that went off had a
5-volt explosive charge and is thought to have used RDX. Also known as T4,
according to the Environmental Protection Agency (PDF), militaries, including
the US's, use RDX, which "can be used alone as a base charge for detonators
or mixed with other explosives, such as TNT." Chango said it comes in capsules
measuring about 1 cm, but only half of it was activated in the drive that
Artieda plugged in, which likely saved him some harm.

Reminds me of assassination by cell phone.

** *** ***** ******* *********** *************

A Hacker's Mind News

[2023.03.24] My latest book continues to sell well. Its ranking hovers between
1,500 and 2,000 on Amazon. It's been spied in airports.

Reviews are consistently good. I have been enjoying giving podcast interviews.
It all feels pretty good right now.

You can order a signed book from me here.

** *** ***** ******* *********** *************

Hacks at Pwn2Own Vancouver 2023

[2023.03.27] An impressive array of hacks were demonstrated at the first day of
the Pwn2Own conference in Vancouver:

On the first day of Pwn2Own Vancouver 2023, security researchers successfully
demoed Tesla Model 3, Windows 11, and macOS zero-day exploits and exploit chains
to win $375,000 and a Tesla Model 3.

The first to fall was Adobe Reader in the enterprise applications category after
Haboob SA's Abdul Aziz Hariri (@abdhariri) used an exploit chain targeting a
6-bug logic chain abusing multiple failed patches which escaped the sandbox and
bypassed a banned API list on macOS to earn $50,000.

The STAR Labs team (@starlabs_sg) demoed a zero-day exploit chain targeting
Microsoft's SharePoint team collaboration platform that brought them a
$100,000 reward and successfully hacked Ubuntu Desktop with a previously known
exploit for $15,000.

Synacktiv (@Synacktiv) took home $100,000 and a Tesla Model 3 after successfully
executing a TOCTOU (time-of-check to time-of-use) attack against the
Tesla-Gateway in the Automotive category. They also used a TOCTOU zero-day
vulnerability to escalate privileges on Apple macOS and earned $40,000.
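
For readers unfamiliar with the bug class: a TOCTOU race is any exploitable gap
between checking a resource and using it. The sketch below is the classic
file-system illustration of the pattern, not the actual Tesla or macOS exploit:

    # Classic TOCTOU pattern (generic illustration, not the Pwn2Own exploit).
    import os

    path = "/tmp/report.txt"

    # Time of check: the path looks like a plain, writable file.
    if os.access(path, os.W_OK):
        # ...race window: an attacker who controls /tmp can swap the file
        # for a symlink to a privileged target before the open() below...
        with open(path, "w") as f:      # time of use: writes wherever the
            f.write("data\n")           # path points NOW, not at check time

    # Safer: skip the separate check and refuse to follow symlinks.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_NOFOLLOW, 0o600)
    os.close(fd)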

Oracle VirtualBox was hacked using an OOB Read and a stack-based buffer
overflow exploit chain (worth $40,000) by Qrious Security's Bien Pham
(@bienpnn).

Last but not least, Marcin Wiązowski elevated privileges on Windows 11 using an
improper input validation zero-day that came with a $30,000 prize.

The con's second and third days were equally impressive.

** *** ***** ******* *********** *************

Security Vulnerabilities in Snipping Tools

[2023.03.28] Both Google Pixel's Markup Tool and the Windows Snipping Tool
have vulnerabilities that allow people to partially recover content that was
edited out of images.
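
Both bugs (nicknamed "aCropalypse") reportedly stem from the edited file being
written over the original without truncation, which leaves the tail of the
original image sitting after the new PNG end marker. A rough sketch of checking
a file for such leftovers (a heuristic only; it detects trailing data rather
than reconstructing the image):

    # Heuristic check for cropped-image leftovers after the PNG IEND chunk.
    # Detects trailing data only; it does not reconstruct the original image.
    def trailing_bytes_after_iend(path: str) -> int:
        data = open(path, "rb").read()
        # A PNG ends with the 4-byte chunk type "IEND" plus a 4-byte CRC.
        end = data.find(b"IEND")
        if end == -1:
            raise ValueError("not a (complete) PNG")
        return len(data) - (end + 4 + 4)

    leftover = trailing_bytes_after_iend("screenshot.png")
    print(f"{leftover} bytes after IEND -> "
          f"{'possibly recoverable content' if leftover else 'clean'}")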

EDITED TO ADD (4/14): Steven Murdoch has a good explanation as to why this
happened -- and why it happened to two very different snipping tools.

** *** ***** ******* *********** *************

The Security Vulnerabilities of Message Interoperability

[2023.03.29] Jenny Blessing and Ross Anderson have evaluated the security of
systems designed to allow the various Internet messaging platforms to
interoperate with each other:

The Digital Markets Act ruled that users on different platforms should be able
to exchange messages with each other. This opens up a real Pandora's box. How
will the networks manage keys, authenticate users, and moderate content? How
much metadata will have to be shared, and how?

In our latest paper, One Protocol to Rule Them All? On Securing Interoperable
Messaging, we explore the security tensions, the conflicts of interest, the
usability traps, and the likely consequences for individual and institutional
behaviour.

Interoperability will vastly increase the attack surface at every level in the
stack from the cryptography up through usability to commercial incentives and
the opportunities for government interference.

It's a good idea in theory, but will likely result in the overall security
being the worst of each platform's security.

** *** ***** ******* *********** *************

Russian Cyberwarfare Documents Leaked

[2023.03.30] Now this is interesting:

Thousands of pages of secret documents reveal how Vulkan's engineers have
worked for Russian military and intelligence agencies to support hacking
operations, train operatives before attacks on national infrastructure, spread
disinformation and control sections of the internet.

The company's work is linked to the federal security service or FSB, the
domestic spy agency; the operational and intelligence divisions of the armed
forces, known as the GOU and GRU; and the SVR, Russia's foreign intelligence
organisation.

Lots more at the link.

The documents are in Russian, so it will be a while before we get translations.

EDITED TO ADD (4/1): More information.

** *** ***** ******* *********** *************

UK Runs Fake DDoS-for-Hire Sites

[2023.04.03] Brian Krebs is reporting that the UK's National Crime Agency is
setting up fake DDoS-for-hire sites as part of a sting operation:

The NCA says all of its fake so-called "booter" or "stresser" sites --
which have so far been accessed by several thousand people -- have been created
to look like they offer the tools and services that enable cyber criminals to
execute these attacks.

"However, after users register, rather than being given access to cyber crime
tools, their data is collated by investigators," reads an NCA advisory on the
program. "Users based in the UK will be contacted by the National Crime Agency
or police and warned about engaging in cyber crime. Information relating to
those based overseas is being passed to international law enforcement."

The NCA declined to say how many phony booter sites it had set up, or for how
long they have been running. The NCA says hiring or launching attacks designed
to knock websites or users offline is punishable in the UK under the Computer
Misuse Act 1990.

"Going forward, people who wish to use these services can't be sure who is
actually behind them, so why take the risk?" the NCA announcement continues.

** *** ***** ******* *********** *************

North Korea Hacking Cryptocurrency Sites with 3CX Exploit

[2023.04.04] News:

Researchers at Russian cybersecurity firm Kaspersky today revealed that they
identified a small number of cryptocurrency-focused firms as at least some of
the victims of the 3CX software supply-chain attack that's unfolded over the
past week. Kaspersky declined to name any of those victim companies, but it
notes that they're based in "western Asia."

Security firms CrowdStrike and SentinelOne last week pinned the operation on
North Korean hackers, who compromised 3CX installer software that's used by
600,000 organizations worldwide, according to the vendor. Despite the
potentially massive breadth of that attack, which SentinelOne dubbed "Smooth
Operator," Kaspersky has now found that the hackers combed through the victims
infected with its corrupted software to ultimately target fewer than 10 machines
-- at least as far as Kaspersky could observe so far -- and that they seemed to
be focusing on cryptocurrency firms with "surgical precision."

** *** ***** ******* *********** *************

FBI (and Others) Shut Down Genesis Market

[2023.04.05] Genesis Market is shut down:

Active since 2018, Genesis Market's slogan was, "Our store sells bots with
logs, cookies, and their real fingerprints." Customers could search for
infected systems with a variety of options, including by Internet address or by
specific domain names associated with stolen credentials.

But earlier today, multiple domains associated with Genesis had their homepages
replaced with a seizure notice from the FBI, which said the domains were seized
pursuant to a warrant issued by the U.S. District Court for the Eastern District
of Wisconsin.

The U.S. Attorney's Office for the Eastern District of Wisconsin did not
respond to requests for comment. The FBI declined to comment.

But sources close to the investigation tell KrebsOnSecurity that law enforcement
agencies in the United States, Canada and across Europe are currently serving
arrest warrants on dozens of individuals thought to support Genesis, either by
maintaining the site or selling the service bot logs from infected systems.

The seizure notice includes the seals of law enforcement entities from several
countries, including Australia, Canada, Denmark, Germany, the Netherlands,
Spain, Sweden and the United Kingdom.

Slashdot story.

** *** ***** ******* *********** *************

Research on AI in Adversarial Settings

[2023.04.06] New research: "Achilles Heels for AGI/ASI via Decision Theoretic
Adversaries":

As progress in AI continues to advance, it is important to know how advanced
systems will make choices and in what ways they may fail. Machines can already
outsmart humans in some domains, and understanding how to safely build ones
which may have capabilities at or above the human level is of particular
concern. One might suspect that artificially generally intelligent (AGI) and
artificially superintelligent (ASI) will be systems that humans cannot reliably
outsmart. As a challenge to this assumption, this paper presents the Achilles
Heel hypothesis which states that even a potentially superintelligent system may
nonetheless have stable decision-theoretic delusions which cause them to make
irrational decisions in adversarial settings. In a survey of key dilemmas and
paradoxes from the decision theory literature, a number of these potential
Achilles Heels are discussed in context of this hypothesis. Several novel
contributions are made toward understanding the ways in which these weaknesses
might be implanted into a system.

** *** ***** ******* *********** *************

LLMs and Phishing

[2023.04.10] Here's an experiment being run by undergraduate computer science
students everywhere: Ask ChatGPT to generate phishing emails, and test whether
these are better at persuading victims to respond or click on the link than the
usual spam. It's an interesting experiment, and the results are likely to vary
wildly based on the details of the experiment.

But while it's an easy experiment to run, it misses the real risk of large
language models (LLMs) writing scam emails. Today's human-run scams aren't
limited by the number of people who respond to the initial email contact.
They're limited by the labor-intensive process of persuading those people to
send the scammer money. LLMs are about to change that. A decade ago, one type of
spam email had become a punchline on every late-night show: "I am the son of
the late king of Nigeria in need of your assistance...." Nearly everyone had
gotten one or a thousand of those emails, to the point that it seemed everyone
must have known they were scams.

So why were scammers still sending such obviously dubious emails? In 2012,
researcher Cormac Herley offered an answer: It weeded out all but the most
gullible. A smart scammer doesn't want to waste their time with people who
reply and then realize it's a scam when asked to wire money. By using an
obvious scam email, the scammer can focus on the most potentially profitable
people. It takes time and effort to engage in the back-and-forth communications
that nudge marks, step by step, from interlocutor to trusted acquaintance to
pauper.
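
Herley's argument is easy to see with a toy expected-value calculation; all of
the numbers below are invented for illustration:

    # Toy model of Herley's filtering argument; every number is invented.
    # Sending email is ~free; engaging each reply costs scammer labor.
    REPLY_COST = 20.0        # labor cost of working one reply, in dollars
    PAYOUT = 1_000.0         # take from a mark who falls all the way through

    def profit(n_replies: int, conversion: float) -> float:
        """Expected profit after engaging every reply."""
        return n_replies * (conversion * PAYOUT - REPLY_COST)

    # Subtle email: many replies, but most wise up before paying.
    print(profit(n_replies=1_000, conversion=0.01))   # -10,000: a loss

    # Obviously absurd email: few replies, all self-selected as gullible.
    print(profit(n_replies=50, conversion=0.30))      # +14,000: profitable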

Long-running financial scams are now known as pig butchering, growing the
potential mark up until their ultimate and sudden demise. Such scams, which
require gaining trust and infiltrating a target's personal finances, take
weeks or even months of personal time and repeated interactions. It's a
high-stakes, low-probability game that the scammer is playing.

Here is where LLMs will make a difference. Much has been written about the
unreliability of OpenAI's GPT models and those like them: They
"hallucinate" frequently, making up things about the world and confidently
spouting nonsense. For entertainment, this is fine, but for most practical uses
it's a problem. It is, however, not a bug but a feature when it comes to
scams: LLMs' ability to confidently roll with the punches, no matter what a
user throws at them, will prove useful to scammers as they navigate hostile,
bemused, and gullible scam targets by the billions. AI chatbot scams can ensnare
more people, because the pool of victims who will fall for a more subtle and
flexible scammer -- one that has been trained on everything ever written online
-- is much larger than the pool of those who believe the king of Nigeria wants
to give them a billion dollars.

Personal computers are powerful enough today that they can run compact LLMs.
After Facebook's new model, LLaMA, was leaked online, developers tuned it to
run fast and cheaply on powerful laptops. Numerous other open-source LLMs are
under development, with a community of thousands of engineers and scientists.
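
To give a sense of how low the barrier is, here is a minimal local-inference
sketch using the open-source llama-cpp-python bindings; the model file path is
a placeholder for quantized weights you would supply yourself:

    # Minimal local-LLM sketch using the llama-cpp-python bindings.
    # The model file path is a placeholder assumption.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")
    out = llm(
        "Q: Name the planets in the solar system. A:",
        max_tokens=64,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])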

A single scammer, from their laptop anywhere in the world, can now run hundreds
or thousands of scams in parallel, night and day, with marks all over the world,
in every language under the sun. The AI chatbots will never sleep and will
always be adapting along their path to their objectives. And new mechanisms,
from ChatGPT plugins to LangChain, will enable composition of AI with thousands
of API-based cloud services and open source tools, allowing LLMs to interact
with the internet as humans do. The impersonations in such scams are no longer
just princes offering their country's riches. They are forlorn strangers
looking for romance, hot new cryptocurrencies that are soon to skyrocket in
value, and seemingly sound new financial websites offering amazing returns on
deposits. And people are already falling in love with LLMs.

This is a change in both scope and scale. LLMs will change the scam pipeline,
making scams more profitable than ever. We don't know how to live in a world
with a billion, or 10 billion, scammers that never sleep.

There will also be a change in the sophistication of these attacks. This is due
not only to AI advances, but to the business model of the internet --
surveillance capitalism -- which produces troves of data about all of us,
available for purchase from data brokers. Targeted attacks against individuals,
whether for phishing or data collection or scams, were once only within the
reach of nation-states. Combine the digital dossiers that data brokers have on
all of us with LLMs, and you have a tool tailor-made for personalized scams.

Companies like OpenAI attempt to prevent their models from doing bad things. But
with the release of each new LLM, social media sites buzz with new AI jailbreaks
that evade the new restrictions put in place by the AI's designers. ChatGPT,
and then Bing Chat, and then GPT-4 were all jailbroken within minutes of their
release, and in dozens of different ways. Most protections against bad uses and
harmful output are only skin-deep, easily evaded by determined users. Once a
jailbreak is discovered, it usually can be generalized, and the community of
users pulls the LLM open through the chinks in its armor. And the technology is
advancing too fast for anyone to fully understand how these models work, even
their designers.

This is all an old story, though: It reminds us that many of the bad uses of AI
are a reflection of humanity more than they are a reflection of AI technology
itself. Scams are nothing new -- simply intent and then action of one person
tricking another for personal gain. And the use of others as minions to
accomplish scams is sadly nothing new or uncommon: For example, organized crime
in Asia currently kidnaps or indentures thousands in scam sweatshops. Is it
better that organized crime will no longer see the need to exploit and
physically abuse people to run their scam operations, or worse that they and
many others will be able to scale up scams to an unprecedented level?

Defense can and will catch up, but before it does, our signal-to-noise ratio is
going to drop dramatically.

This essay was written with Barath Raghavan, and previously appeared on
Wired.com.

** *** ***** ******* *********** *************

Car Thieves Hacking the CAN Bus

[2023.04.11] Car thieves are injecting malicious software into a car's network
through wires in the headlights (or taillights) that fools the car into
believing that the electronic key is nearby.
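
The injected traffic is just ordinary CAN frames: the bus has no sender
authentication, so once a thief splices in behind a headlight, any node will
accept what they transmit. A generic sketch with the python-can library; the
interface, channel, and arbitration ID are placeholder assumptions, not any
real vehicle's:

    # Generic CAN frame injection sketch using python-can.
    # Interface, channel, and arbitration ID are placeholder assumptions;
    # CAN itself has no sender authentication, which is the root problem.
    import can

    bus = can.Bus(interface="socketcan", channel="can0")
    msg = can.Message(
        arbitration_id=0x123,            # hypothetical "key present" ID
        data=[0x01, 0x00, 0x00, 0x00],   # hypothetical payload
        is_extended_id=False,
    )
    bus.send(msg)   # every node on the bus treats this as legitimate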

News articles.

** *** ***** ******* *********** *************

FBI Advising People to Avoid Public Charging Stations

[2023.04.12] The FBI is warning people against using public phone-charging
stations, worrying that the combination power-data port can be used to inject
malware onto the devices:

Avoid using free charging stations in airports, hotels, or shopping centers. Bad
actors have figured out ways to use public USB ports to introduce malware and
monitoring software onto devices that access these ports. Carry your own charger
and USB cord and use an electrical outlet instead.

How much of a risk is this, really? I am unconvinced, although I do carry a USB
condom for charging stations I find suspicious.

News article.

** *** ***** ******* *********** *************

Bypassing a Theft Threat Model

[2023.04.13] Thieves cut through the wall of a coffee shop to get to an Apple
store, bypassing the alarms in the process.

I wrote about this kind of thing in 2000, in Secrets and Lies (page 318):

My favorite example is a band of California art thieves that would break into
people's houses by cutting a hole in their walls with a chainsaw. The attacker
completely bypassed the threat model of the defender. The countermeasures that
the homeowner put in place were door and window alarms; they didn't make a
difference to this attack.

The article says they took half a million dollars' worth of iPhones. I don't
understand iPhone device security, but don't they have a system of denying
stolen phones access to the network?

EDITED TO ADD (4/13): A commenter says: "Locked idevices will still sell for
40-60% of their value on eBay and co, they will go to Chinese shops to be
stripped for parts. An aftermarket 'oem-quality' iPhone 14 display is $400+
alone on ifixit."

** *** ***** ******* *********** *************

Gaining an Advantage in Roulette

[2023.04.14] You can beat the game without a computer:

On a perfect [roulette] wheel, the ball would always fall in a random way. But
over time, wheels develop flaws, which turn into patterns. A wheel that's even
marginally tilted could develop what Barnett called a 'drop zone.' When the
tilt forces the ball to climb a slope, the ball decelerates and falls from the
outer rim at the same spot on almost every spin. A similar thing can happen on
equipment worn from repeated use, or if a croupier's hand lotion has left
residue, or for a dizzying number of other reasons. A drop zone is the
Achilles' heel of roulette. That morsel of predictability is enough for
software to overcome the random skidding and bouncing that happens after the
drop.
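
It is worth quantifying how little predictability is needed. A straight-up bet
pays 35 to 1, so on a fair double-zero wheel (38 pockets) the player loses
about 5.3% per spin, and any model that picks a single number with better than
1-in-36 accuracy flips the edge. A quick check (the payout rule is standard;
the accuracy figures are invented):

    # Expected value of a straight-up roulette bet (pays 35 to 1).
    # Payout rule is standard; the prediction accuracies are invented.
    def edge(p_win: float, payout: int = 35) -> float:
        """Expected profit per unit staked on a single number."""
        return p_win * payout - (1 - p_win)

    print(f"fair wheel:  {edge(1/38):+.3f}")   # -0.053 (house edge 5.3%)
    print(f"break-even:  {edge(1/36):+.3f}")   # +0.000 at p = 1/36
    print(f"drop zone:   {edge(1/30):+.3f}")   # +0.200 with modest skill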

** *** ***** ******* *********** *************

Hacking Suicide

[2023.04.14] Here's a religious hack:

You want to commit suicide, but it's a mortal sin: your soul goes straight to
hell, forever. So what you do is murder someone. That will get you executed, but
if you confess your sins to a priest beforehand you avoid hell. Problem solved.

This was actually a problem in the 17th and 18th centuries in Northern Europe,
particularly Denmark. And it remained a problem until capital punishment was
abolished for murder.

It's a clever hack. I didn't learn about it in time to put it in my book, A
Hacker's Mind, but I have several other good hacks of religious rules.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2023.04.14] This is a current list of where and when I am scheduled to speak:

I'm speaking on "Cybersecurity Thinking to Reinvent Democracy" at RSA
Conference 2023 in San Francisco, California, on Tuesday, April 25, 2023, at
9:40 AM PT.
I'm speaking at IT-S Now 2023 in Vienna, Austria, on June 2, 2023, at 8:30 AM
CEST.
The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A Hacker's Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2023 by Bruce Schneier.

** *** ***** ******* *********** *************
