AT2k Design BBS Message Area
Casually read the BBS message area using an easy to use interface. Messages are categorized exactly like they are on the BBS. You may post new messages or reply to existing messages!

You are not logged in. Login here for full access privileges.

Previous Message | Next Message | Back to Computer Support/Help/Discussion...  <--  <--- Return to Home Page
   Networked Database  Computer Support/Help/Discussion...   [833 / 1624] RSS
 From   To   Subject   Date/Time 
Message   TCOB1    All   CRYPTO-GRAM, March 15, 2023   March 15, 2023
 11:30 AM *  

Crypto-Gram
March 15, 2023

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along
with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don't work in your email client, try reading this issue of
Crypto-Gram on the web.

Camera the Size of a Grain of Salt
ChatGPT Is Ingesting Corporate Secrets Defending against AI Lobbyists
Fines as a Security System
The Insecurity of Photo Cropping
A Device to Turn Traffic Lights Green Cyberwar Lessons from the War in Ukraine
Putting Undetectable Backdoors in Machine Learning Models Banning TikTok
Side-Channel Attack against CRYSTALS-Kyber Fooling a Voice Authentication System
with an AI-Generated Voice Dumb Password Rules
Nick Weaver on Regulating Cryptocurrency New National Cybersecurity Strategy
Prompt Injection Attacks on Large Language Models BlackLotus Malware Hijacks
Windows Secure Boot Process Another Malware with Persistence
Elephant Hackers
NetWire Remote Access Trojan Maker Arrested How AI Could Write Our Laws
Upcoming Speaking Engagements
** *** ***** ******* *********** *************

Camera the Size of a Grain of Salt

[2023.02.15] Cameras are getting smaller and smaller, changing the scale and
scope of surveillance.

** *** ***** ******* *********** *************

ChatGPT Is Ingesting Corporate Secrets

[2023.02.16] Interesting:

According to internal Slack messages that were leaked to Insider, an Amazon
lawyer told workers that they had ΓÇ£already seen instancesΓÇ¥ of text generated
by ChatGPT that ΓÇ£closelyΓÇ¥ resembled internal company data.

This issue seems to have come to a head recently because Amazon staffers and
other tech workers throughout the industry have begun using ChatGPT as a
ΓÇ£coding assistantΓÇ¥ of sorts to help them write or improve strings of code,
the report notes.

[...]

ΓÇ£This is important because your inputs may be used as training data for a
further iteration of ChatGPT,ΓÇ¥ the lawyer wrote in the Slack messages viewed
by Insider, ΓÇ£and we wouldnΓÇÖt want its output to include or resemble our
confidential information.ΓÇ¥

** *** ***** ******* *********** *************

Defending against AI Lobbyists

[2023.02.17] When is it time to start worrying about artificial intelligence
interfering in our democracy? Maybe when an AI writes a letter to The New York
Times opposing the regulation of its own technology.

That happened last month. And because the letter was responding to an essay we
wrote, weΓÇÖre starting to get worried. And while the technology can be
regulated, the real solution lies in recognizing that the problem is human
actors -- and those we can do something about.

Our essay argued that the much heralded launch of the AI chatbot ChatGPT, a
system that can generate text realistic enough to appear to be written by a
human, poses significant threats to democratic processes. The ability to produce
high quality political messaging quickly and at scale, if combined with
AI-assisted capabilities to strategically target those messages to policymakers
and the public, could become a powerful accelerant of an already sprawling and
poorly constrained force in modern democratic life: lobbying.

We speculated that AI-assisted lobbyists could use generative models to write
op-eds and regulatory comments supporting a position, identify members of
Congress who wield the most influence over pending legislation, use network
pattern identification to discover undisclosed or illegal political
coordination, or use supervised machine learning to calibrate the optimal
contribution needed to sway the vote of a legislative committee member.

These are all examples of what we call AI hacking. Hacks are strategies that
follow the rules of a system, but subvert its intent. Currently a human creative
process, future AIs could discover, develop, and execute these same strategies.

While some of these activities are the longtime domain of human lobbyists, AI
tools applied against the same task would have unfair advantages. They can scale
their activity effortlessly across every state in the country -- human lobbyists
tend to focus on a single state -- they may uncover patterns and approaches
unintuitive and unrecognizable by human experts, and do so nearly
instantaneously with little chance for human decision makers to keep up.

These factors could make AI hacking of the democratic process fundamentally
ungovernable. Any policy response to limit the impact of AI hacking on political
systems would be critically vulnerable to subversion or control by an AI hacker.
If AI hackers achieve unchecked influence over legislative processes, they could
dictate the rules of our society: including the rules that govern AI.

We admit that this seemed far fetched when we first wrote about it in 2021. But
now that the emanations and policy prescriptions of ChatGPT have been given an
audience in the New York Times and innumerable other outlets in recent weeks,
itΓÇÖs getting harder to dismiss.

At least one group of researchers is already testing AI techniques to
automatically find and advocate for bills that benefit a particular interest.
And one Massachusetts representative used ChatGPT to draft legislation
regulating AI.

The AI technology of two years ago seems quaint by the standards of ChatGPT.
What will the technology of 2025 seem like if we could glimpse it today? To us
there is no question that now is the time to act.

First, letΓÇÖs dispense with the concepts that wonΓÇÖt work. We cannot solely
rely on explicit regulation of AI technology development, distribution, or use.
Regulation is essential, but it would be vastly insufficient. The rate of AI
technology development, and the speed at which AI hackers might discover
damaging strategies, already outpaces policy development, enactment, and
enforcement.

Moreover, we cannot rely on detection of AI actors. The latest research suggests
that AI models trying to classify text samples as human- or AI-generated have
limited precision, and are ill equipped to handle real world scenarios. These
reactive, defensive techniques will fail because the rate of advancement of the
ΓÇ£offensiveΓÇ¥ generative AI is so astounding.

Additionally, we risk a dragnet that will exclude masses of human constituents
that will use AI to help them express their thoughts, or machine translation
tools to help them communicate. If a written opinion or strategy conforms to the
intent of a real person, it should not matter if they enlisted the help of an AI
(or a human assistant) to write it.

Most importantly, we should avoid the classic trap of societies wrenched by the
rapid pace of change: privileging the status quo. Slowing down may seem like the
natural response to a threat whose primary attribute is speed. Ideas like
increasing requirements for human identity verification, aggressive detection
regimes for AI-generated messages, and elongation of the legislative or
regulatory process would all play into this fallacy. While each of these
solutions may have some value independently, they do nothing to make the already
powerful actors less powerful.

Finally, it wonΓÇÖt work to try to starve the beast. Large language models like
ChatGPT have a voracious appetite for data. They are trained on past examples of
the kinds of content that they will be asked to generate in the future.
Similarly, an AI system built to hack political systems will rely on data that
documents the workings of those systems, such as messages between constituents
and legislators, floor speeches, chamber and committee voting results,
contribution records, lobbying relationship disclosures, and drafts of and
amendments to legislative text. The steady advancement towards the digitization
and publication of this information that many jurisdictions have made is
positive. The threat of AI hacking should not dampen or slow progress on
transparency in public policymaking.

Okay, so what will help?

First, recognize that the true threats here are malicious human actors. Systems
like ChatGPT and our still-hypothetical political-strategy AI are still far from
artificial general intelligences. They do not think. They do not have free will.
They are just tools directed by people, much like lobbyist for hire. And, like
lobbyists, they will be available primarily to the richest individuals, groups,
and their interests.

However, we can use the same tools that would be effective in controlling human
political influence to curb AI hackers. These tools will be familiar to any
follower of the last few decades of U.S. political history.

Campaign finance reforms such as contribution limits, particularly when applied
to political action committees of all types as well as to candidate operated
campaigns, can reduce the dependence of politicians on contributions from
private interests. The unfair advantage of a malicious actor using AI lobbying
tools is at least somewhat mitigated if a political targetΓÇÖs entire career is
not already focused on cultivating a concentrated set of major donors.

Transparency also helps. We can expand mandatory disclosure of contributions and
lobbying relationships, with provisions to prevent the obfuscation of the
funding source. Self-interested advocacy should be transparently reported
whether or not it was AI-assisted. Meanwhile, we should increase penalties for
organizations that benefit from AI-assisted impersonation of constituents in
political processes, and set a greater expectation of responsibility to avoid
ΓÇ£unknowingΓÇ¥ use of these tools on their behalf.

Our most important recommendation is less legal and more cultural. Rather than
trying to make it harder for AI to participate in the political process, make it
easier for humans to do so.

The best way to fight an AI that can lobby for moneyed interests is to help the
little guy lobby for theirs. Promote inclusion and engagement in the political
process so that organic constituent communications grow alongside the potential
growth of AI-directed communications. Encourage direct contact that generates
more-than-digital relationships between constituents and their representatives,
which will be an enduring way to privilege human stakeholders. Provide paid
leave to allow people to vote as well as to testify before their legislature and
participate in local town meetings and other civic functions. Provide childcare
and accessible facilities at civic functions so that more community members can
participate.

The threat of AI hacking our democracy is legitimate and concerning, but its
solutions are consistent with our democratic values. Many of the ideas above are
good governance reforms already being pushed and fought over at the federal and
state level.

We donΓÇÖt need to reinvent our democracy to save it from AI. We just need to
continue the work of building a just and equitable political system. Hopefully
ChatGPT will give us all some impetus to do that work faster.

This essay was written with Nathan Sanders, and appeared on the Belfer Center
blog.

** *** ***** ******* *********** *************

Fines as a Security System

[2023.02.20] Tile has an interesting security solution to make its tracking tags
harder to use for stalking:

The Anti-Theft Mode feature will make the devices invisible to Scan and Secure,
the companyΓÇÖs in-app feature that lets you know if any nearby Tiles are
following you. But to activate the new Anti-Theft Mode, the Tile owner will have
to verify their real identity with a government-issued ID, submit a biometric
scan that helps root out fake IDs, agree to let Tile share their information
with law enforcement and agree to be subject to a $1 million penalty if
convicted in a court of law of using Tile for criminal activity. So although it
technically makes the device easier for stalkers to use Tiles silently, it makes
the penalty of doing so high enough to (at least in theory) deter them from
trying.

Interesting theory. But it wonΓÇÖt work against attackers who donΓÇÖt have any
money.

Hulls believes the approach is superior to AppleΓÇÖs solution with AirTag, which
emits a sound and notifies iPhone users that one of the trackers is following
them.

My complaint about the technical solutions is that they only work for users of
the system. Tile security requires an ΓÇ£in-app feature.ΓÇ¥ AppleΓÇÖs AirTag
ΓÇ£notifies iPhone users.ΓÇ¥ What we need is a common standard that is
implemented on all smartphones, so that people who donΓÇÖt use the trackers can
be alerted if they are being surveilled by one of them.

** *** ***** ******* *********** *************

The Insecurity of Photo Cropping

[2023.02.21] The Intercept has a long article on the insecurity of photo
cropping:

One of the hazards lies in the fact that, for some of the programs, downstream
crop reversals are possible for viewers or readers of the document, not just the
fileΓÇÖs creators or editors. Official instruction manuals, help pages, and
promotional materials may mention that cropping is reversible, but this
documentation at times fails to note that these operations are reversible by any
viewers of a given image or document.

[...]

Uncropped versions of images can be preserved not just in Office apps, but also
in a fileΓÇÖs own metadata. A photograph taken with a modern digital camera
contains all types of metadata. Many image files record text-based metadata such
as the camera make and model or the GPS coordinates at which the image was
captured. Some photos also include binary data such as a thumbnail version of
the original photo that may persist in the fileΓÇÖs metadata even after the
photo has been edited in an image editor.

** *** ***** ******* *********** *************

A Device to Turn Traffic Lights Green

[2023.02.22] HereΓÇÖs a story about a hacker who reprogrammed a device called
ΓÇ£Flipper ZeroΓÇ¥ to mimic Opticom transmitters -- to turn traffic lights in
his path green.

As mentioned earlier, the Flipper Zero has a built-in sub-GHz radio that lets
the device receive data (or transmit it, with the right firmware in approved
regions) on the same wireless frequencies as keyfobs and other devices. Most
traffic preemption devices intended for emergency traffic redirection donΓÇÖt
actually transmit signals over RF. Instead, they use optical technology to beam
infrared light from vehicles to static receivers mounted on traffic light poles.

Perhaps the most well-known branding for these types of devices is called
Opticom. Essentially, the tech works by detecting a specific pattern of infrared
light emitted by the Mobile Infrared Transmitter (MIRT) installed in a police
car, fire truck, or ambulance when the MIRT is switched on. When the receiver
detects the light, the traffic system then initiates a signal change as the
emergency vehicle approaches an intersection, safely redirecting the traffic
flow so that the emergency vehicle can pass through the intersection as if it
were regular traffic and potentially avoid a collision.

This seems easy to do, but itΓÇÖs also very illegal. ItΓÇÖs called
ΓÇ£impersonating an emergency vehicle,ΓÇ¥ and it comes with hefty penalties if
youΓÇÖre caught.

** *** ***** ******* *********** *************

Cyberwar Lessons from the War in Ukraine

[2023.02.23] The Aspen Institute has published a good analysis of the successes,
failures, and absences of cyberattacks as part of the current war in Ukraine:
ΓÇ£The Cyber Defense Assistance Imperative Lessons from Ukraine.ΓÇ¥

Its conclusion:

Cyber defense assistance in Ukraine is working. The Ukrainian government and
Ukrainian critical infrastructure organizations have better defended themselves
and achieved higher levels of resiliency due to the efforts of CDAC and many
others. But this is not the end of the road -- the ability to provide cyber
defense assistance will be important in the future. As a result, it is timely to
assess how to provide organized, effective cyber defense assistance to safeguard
the post-war order from potential aggressors.

The conflict in Ukraine is resetting the table across the globe for geopolitics
and international security. The US and its allies have an imperative to
strengthen the capabilities necessary to deter and respond to aggression that is
ever more present in cyberspace. Lessons learned from the ad hoc conduct of
cyber defense assistance in Ukraine can be institutionalized and scaled to
provide new approaches and tools for preventing and managing cyber conflicts
going forward.

I am often asked why where werenΓÇÖt more successful cyberattacks by Russia
against Ukraine. I generally give four reasons: (1) Cyberattacks are more
effective in the ΓÇ£grey zoneΓÇ¥ between peace and war, and there are better
alternatives once the shooting and bombing starts. (2) Setting these attacks up
takes time, and Putin was secretive about his plans. (3) Putin was concerned
about attacks spilling outside the war zone, and affecting other countries. (4)
Ukrainian defenses were good, aided by other countries and companies. This paper
gives a fifth reasons: they were technically successful, but keeping them out of
the news made them operationally unsuccessful.

** *** ***** ******* *********** *************

Putting Undetectable Backdoors in Machine Learning Models

[2023.02.24] This is really interesting research from a few months ago:

Abstract: Given the computational cost and technical expertise required to train
machine learning models, users may delegate the task of learning to a service
provider. Delegation of learning has clear benefits, and at the same time raises
serious concerns of trust. This work studies possible abuses of power by
untrusted learners.We show how a malicious learner can plant an undetectable
backdoor into a classifier. On the surface, such a backdoored classifier behaves
normally, but in reality, the learner maintains a mechanism for changing the
classification of any input, with only a slight perturbation. Importantly,
without the appropriate ΓÇ£backdoor key,ΓÇ¥ the mechanism is hidden and cannot
be detected by any computationally-bounded observer. We demonstrate two
frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature
schemes. The construction guarantees that given query access to the original
model and the backdoored version, it is computationally infeasible to find even
a single input where they differ. This property implies that the backdoored
model has generalization error comparable with the original model. Moreover,
even if the distinguisher can request backdoored inputs of its choice, they
cannot backdoor a new inputa property we call non-replicability.

Second, we demonstrate how to insert undetectable backdoors in models trained
using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht;
NeurIPS 2007). In this construction, undetectability holds against powerful
white-box distinguishers: given a complete description of the network and the
training data, no efficient distinguisher can guess whether the model is
ΓÇ£cleanΓÇ¥ or contains a backdoor. The backdooring algorithm executes the RFF
algorithm faithfully on the given training data, tampering only with its random
coins. We prove this strong guarantee under the hardness of the Continuous
Learning With Errors problem (Bruna, Regev, Song, Tang; STOC 2021). We show a
similar white-box undetectable backdoor for random ReLU networks based on the
hardness of Sparse PCA (Berthet, Rigollet; COLT 2013).

Our construction of undetectable backdoors also sheds light on the related issue
of robustness to adversarial examples. In particular, by constructing
undetectable backdoor for an ΓÇ£adversarially-robustΓÇ¥ learning algorithm, we
can produce a classifier that is indistinguishable from a robust classifier, but
where every input has an adversarial example! In this way, the existence of
undetectable backdoors represent a significant theoretical roadblock to
certifying adversarial robustness.

Turns out that securing ML systems is really hard.

** *** ***** ******* *********** *************

Banning TikTok

[2023.02.27] Congress is currently debating bills that would ban TikTok in the
United States. We are here as technologists to tell you that this is a terrible
idea and the side effects would be intolerable. Details matter. There are
several ways Congress might ban TikTok, each with different efficacies and side
effects. In the end, all the effective ones would destroy the free Internet as
we know it.

ThereΓÇÖs no doubt that TikTok and ByteDance, the company that owns it, are
shady. They, like most large corporations in China, operate at the pleasure of
the Chinese government. They collect extreme levels of information about users.
But theyΓÇÖre not alone: Many apps you use do the same, including Facebook and
Instagram, along with seemingly innocuous apps that have no need for the data.
Your data is bought and sold by data brokers youΓÇÖve never heard of who have
few scruples about where the data ends up. They have digital dossiers on most
people in the United States.

If we want to address the real problem, we need to enact serious privacy laws,
not security theater, to stop our data from being collected, analyzed, and sold
-- by anyone. Such laws would protect us in the long term, and not just from
the app of the week. They would also prevent data breaches and ransomware
attacks from spilling our data out into the digital underworld, including hacker
message boards and chat servers, hostile state actors, and outside hacker
groups. And, most importantly, they would be compatible with our bedrock values
of free speech and commerce, which CongressΓÇÖs current strategies are not.

At best, the TikTok ban considered by Congress would be ineffective; at worst, a
ban would force us to either adopt ChinaΓÇÖs censorship technology or create our
own equivalent. The simplest approach, advocated by some in Congress, would be
to ban the TikTok app from the Apple and Google app stores. This would
immediately stop new updates for current users and prevent new users from
signing up. To be clear, this would not reach into phones and remove the app.
Nor would it prevent Americans from installing TikTok on their phones; they
would still be able to get it from sites outside of the United States. Android
users have long been able to use alternative app repositories. Apple maintains a
tighter control over what apps are allowed on its phones, so users would have to
ΓÇ£jailbreakΓÇ¥ -- or manually remove restrictions from -- their devices to
install TikTok.

Even if app access were no longer an option, TikTok would still be available
more broadly. It is currently, and would still be, accessible from browsers,
whether on a phone or a laptop. As long as the TikTok website is hosted on
servers outside of the United States, the ban would not affect browser access.

Alternatively, Congress might take a financial approach and ban US companies
from doing business with ByteDance. Then-President Donald Trump tried this in
2020, but it was blocked by the courts and rescinded by President Joe Biden a
year later. This would shut off access to TikTok in app stores and also cut
ByteDance off from the resources it needs to run TikTok. US cloud-computing and
content-distribution networks would no longer distribute TikTok videos, collect
user data, or run analytics. US advertisers -- and this is critical -- could no
longer fork over dollars to ByteDance in the hopes of getting a few seconds of a
userΓÇÖs attention. TikTok, for all practical purposes, would cease to be a
business in the United States.

But Americans would still be able to access TikTok through the loopholes
discussed above. And they will: TikTok is one of the most popular apps ever
made; about 70% of young people use it. There would be enormous demand for
workarounds. ByteDance could choose to move its US-centric services right over
the border to Canada, still within reach of American users. Videos would load
slightly slower, but for todayΓÇÖs TikTok users, it would probably be
acceptable. Without US advertisers ByteDance wouldnΓÇÖt make much money, but it
has operated at a loss for many years, so this wouldnΓÇÖt be its death knell.

Finally, an even more restrictive approach Congress might take is actually the
most dangerous: dangerous to Americans, not to TikTok. Congress might ban the
use of TikTok by anyone in the United States. The Trump executive order would
likely have had this effect, were it allowed to take effect. It required that US
companies not engage in any sort of transaction with TikTok and prohibited
circumventing the ban. . If the same restrictions were enacted by Congress
instead, such a policy would leave business or technical implementation details
to US companies, enforced through a variety of law enforcement agencies.

This would be an enormous change in how the Internet works in the United States.
Unlike authoritarian states such as China, the US has a free, uncensored
Internet. We have no technical ability to ban sites the government doesnΓÇÖt
like. Ironically, a blanket ban on the use of TikTok would necessitate a
national firewall, like the one China currently has, to spy on and censor
AmericansΓÇÖ access to the Internet. Or, at the least, authoritarian government
powers like IndiaΓÇÖs, which could force Internet service providers to censor
Internet traffic. Worse still, the main vendors of this censorship technology
are in those authoritarian states. China, for example, sells its firewall
technology to other censorship-loving autocracies such as Iran and Cuba.

All of these proposed solutions raise constitutional issues as well. The First
Amendment protects speech and assembly. For example, the recently introduced
Buck-Hawley bill, which instructs the president to use emergency powers to ban
TikTok, might threaten separation of powers and may be relying on the same
mechanisms used by Trump and stopped by the court. (Those specific emergency
powers, provided by the International Emergency Economic Powers Act, have a
specific exemption for communications services.) And individual states trying to
beat Congress to the punch in regulating TikTok or social media generally might
violate the ConstitutionΓÇÖs Commerce Clause -- which restricts individual
states from regulating interstate commerce -- in doing so.

Right now, thereΓÇÖs nothing to stop AmericansΓÇÖ data from ending up overseas.
WeΓÇÖve seen plenty of instances -- from Zoom to Clubhouse to others -- where
data about Americans collected by US companies ends up in China, not by accident
but because of how those companies managed their data. And the Chinese
government regularly steals data from US organizations for its own use: Equifax,
Marriott Hotels, and the Office of Personnel Management are examples.

If we want to get serious about protecting national security, we have to get
serious about data privacy. Today, data surveillance is the business model of
the Internet. Our personal lives have turned into data; itΓÇÖs not possible to
block it at our national borders. Our data has no nationality, no cost to copy,
and, currently, little legal protection. Like water, it finds every crack and
flows to every low place. TikTok wonΓÇÖt be the last app or service from abroad
that becomes popular, and it is distressingly ordinary in terms of how much it
spies on us. Personal privacy is now a matter of national security. That needs
to be part of any debate about banning TikTok.

This essay was written with Barath Raghavan, and previously appeared in Foreign
Policy.

EDITED TO ADD (3/13): Glenn Gerstell, former general counsel of the NSA, has
similar things to say.

** *** ***** ******* *********** *************

Side-Channel Attack against CRYSTALS-Kyber

[2023.02.28] CRYSTALS-Kyber is one of the public-key algorithms currently
recommended by NIST as part of its post-quantum cryptography standardization
process.

Researchers have just published a side-channel attack -- using power consumption
-- against an implementation of the algorithm that was supposed to be resistant
against that sort of attack.

The algorithm is not ΓÇ£brokenΓÇ¥ or ΓÇ£crackedΓÇ¥ -- despite headlines to the
contrary -- this is just a side-channel attack. What makes this work really
interesting is that the researchers used a machine-learning model to train the
system to exploit the side channel.

** *** ***** ******* *********** *************

Fooling a Voice Authentication System with an AI-Generated Voice

[2023.03.01] A reporter used an AI synthesis of his own voice to fool the voice
authentication system for LloydΓÇÖs Bank.

** *** ***** ******* *********** *************

Dumb Password Rules

[2023.03.02] Examples of dumb password rules.

There are some pretty bad disasters out there.

My worst experiences are with sites that have artificial complexity requirements
that cause my personal password-generation systems to fail. Some of the systems
on the list are even worse: when they fail they donΓÇÖt tell you why, so you
just have to guess until you get it right.

** *** ***** ******* *********** *************

Nick Weaver on Regulating Cryptocurrency

[2023.03.03] Nicholas Weaver wrote an excellent paper on the problems of
cryptocurrencies and the need to regulate the space -- with all existing
regulations. His conclusion:

Regulators, especially regulators in the United States, often fear accusations
of stifling innovation. As such, the cryptocurrency space has grown over the
past decade with very little regulatory oversight.

But fortunately for regulators, there is no actual innovation to stifle.
Cryptocurrencies cannot revolutionize payments or finance, as the basic nature
of all cryptocurrencies render them fundamentally unsuitable to revolutionize
our financial system -- which, by the way, already has decades of successful
experience with digital payments and electronic money. The supposedly
ΓÇ£decentralizedΓÇ¥ and ΓÇ£trustlessΓÇ¥ cryptocurrency systems, both
technically and socially, fail to provide meaningful benefits to society -- and
indeed, necessarily also fail in their foundational claims of decentralization
and trustlessness.

When regulating cryptocurrencies, the best starting point is history. Regulating
various tokens is best done through the existing securities law framework, an
area where the US has a near century of well-established law. It starts with
regulating the issuance of new cryptocurrency tokens and related securities.
This should substantially reduce the number of fraudulent offerings.

Similarly, active regulation of the cryptocurrency exchanges should offer
substantial benefits, including eliminating significant consumer risk, blocking
key money-laundering channels, and overall producing a far more regulated and
far less manipulated market.

Finally, the stablecoins need basic regulation as money transmitters. Unless
action is taken they risk becoming substantial conduits for money laundering,
but requiring them to treat all users as customers should prevent this risk from
developing further.

Read the whole thing.

** *** ***** ******* *********** *************

New National Cybersecurity Strategy

[2023.03.06] Last week, the Biden administration released a new National
Cybersecurity Strategy (summary here). There is lots of good commentary out
there. ItΓÇÖs basically a smart strategy, but the hard parts are always the
implementation details. ItΓÇÖs one thing to say that we need to secure our cloud
infrastructure, and another to detail what the means technically, who pays for
it, and who verifies that itΓÇÖs been done.

One of the provisions getting the most attention is a move to shift liability to
software vendors, something IΓÇÖve been advocating for since at least 2003.

Slashdot thread.

** *** ***** ******* *********** *************

Prompt Injection Attacks on Large Language Models

[2023.03.07] This is a good survey on prompt injection attacks on large language
models (like ChatGPT).

Abstract: We are currently witnessing dramatic advances in the capabilities of
Large Language Models (LLMs). They are already being adopted in practice and
integrated into many systems, including integrated development environments
(IDEs) and search engines. The functionalities of current LLMs can be modulated
via natural language prompts, while their exact internal functionality remains
implicit and unassessable. This property, which makes them adaptable to even
unseen tasks, might also make them susceptible to targeted adversarial
prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI)
attacks have been introduced. In such attacks, an adversary can prompt the LLM
to produce malicious content or override the original instructions and the
employed filtering schemes. Recent work showed that these attacks are hard to
mitigate, as state-of-the-art LLMs are instruction-following. So far, these
attacks assumed that the adversary is directly prompting the LLM.

In this work, we show that augmenting LLMs with retrieval and API calling
capabilities (so-called Application-Integrated LLMs) induces a whole new set of
attack vectors. These LLMs might process poisoned content retrieved from the Web
that contains malicious prompts pre-injected and selected by adversaries. We
demonstrate that an attacker can indirectly perform such PI attacks. Based on
this key insight, we systematically analyze the resulting threat landscape of
Application-Integrated LLMs and discuss a variety of new attack vectors. To
demonstrate the practical viability of our attacks, we implemented specific
demonstrations of the proposed attacks within synthetic applications. In
summary, our work calls for an urgent evaluation of current mitigation
techniques and an investigation of whether new techniques are needed to defend
LLMs against these threats.

** *** ***** ******* *********** *************

BlackLotus Malware Hijacks Windows Secure Boot Process

[2023.03.08] Researchers have discovered malware that ΓÇ£can hijack a
computerΓÇÖs boot process even when Secure Boot and other advanced protections
are enabled and running on fully updated versions of Windows.ΓÇ¥

Dubbed BlackLotus, the malware is whatΓÇÖs known as a UEFI bootkit. These
sophisticated pieces of malware target the UEFI -- short for Unified Extensible
Firmware Interface -- the low-level and complex chain of firmware responsible
for booting up virtually every modern computer. As the mechanism that bridges a
PCΓÇÖs device firmware with its operating system, the UEFI is an OS in its own
right. ItΓÇÖs located in an SPI-connected flash storage chip soldered onto the
computer motherboard, making it difficult to inspect or patch. Previously
discovered bootkits such as CosmicStrand, MosaicRegressor, and MoonBounce work
by targeting the UEFI firmware stored in the flash storage chip. Others,
including BlackLotus, target the software stored in the EFI system partition.

Because the UEFI is the first thing to run when a computer is turned on, it
influences the OS, security apps, and all other software that follows. These
traits make the UEFI the perfect place to launch malware. When successful, UEFI
bootkits disable OS security mechanisms and ensure that a computer remains
infected with stealthy malware that runs at the kernel mode or user mode, even
after the operating system is reinstalled or a hard drive is replaced.

ESET has an analysis:

The number of UEFI vulnerabilities discovered in recent years and the failures
in patching them or revoking vulnerable binaries within a reasonable time window
hasnΓÇÖt gone unnoticed by threat actors. As a result, the first publicly known
UEFI bootkit bypassing the essential platform security feature
-- UEFI Secure Boot -- is now a reality. In this blogpost we present the first
public analysis of this UEFI bootkit, which is capable of running on even
fully-up-to-date Windows 11 systems with UEFI Secure Boot enabled. Functionality
of the bootkit and its individual features leads us to believe that we are
dealing with a bootkit known as BlackLotus, the UEFI bootkit being sold on
hacking forums for $5,000 since at least October 2022.

[...]

ItΓÇÖs capable of running on the latest, fully patched Windows 11 systems with
UEFI Secure Boot enabled.
It exploits a more than one year old vulnerability (CVE-2022-21894) to bypass
UEFI Secure Boot and set up persistence for the bootkit. This is the first
publicly known, in-the-wild abuse of this vulnerability. Although the
vulnerability was fixed in MicrosoftΓÇÖs January 2022 update, its exploitation
is still possible as the affected, validly signed binaries have still not been
added to the UEFI revocation list. BlackLotus takes advantage of this, bringing
its own copies of legitimate -- but vulnerable -- binaries to the system in
order to exploit the vulnerability. ItΓÇÖs capable of disabling OS security
mechanisms such as BitLocker, HVCI, and Windows Defender.
Once installed, the bootkitΓÇÖs main goal is to deploy a kernel driver (which,
among other things, protects the bootkit from removal), and an HTTP downloader
responsible for communication with the C&C and capable of loading additional
user-mode or kernel-mode payloads.
This is impressive stuff.

** *** ***** ******* *********** *************

Another Malware with Persistence

[2023.03.09] HereΓÇÖs a piece of Chinese malware that infects SonicWall security
appliances and survives firmware updates.

On Thursday, security firm Mandiant published a report that said threat actors
with a suspected nexus to China were engaged in a campaign to maintain long-term
persistence by running malware on unpatched SonicWall SMA appliances. The
campaign was notable for the ability of the malware to remain on the devices
even after its firmware received new firmware.

ΓÇ£The attackers put significant effort into the stability and persistence of
their tooling,ΓÇ¥ Mandiant researchers Daniel Lee, Stephen Eckels, and Ben Read
wrote. ΓÇ£This allows their access to the network to persist through firmware
updates and maintain a foothold on the network through the SonicWall Device.ΓÇ¥

To achieve this persistence, the malware checks for available firmware upgrades
every 10 seconds. When an update becomes available, the malware copies the
archived file for backup, unzips it, mounts it, and then copies the entire
package of malicious files to it. The malware also adds a backdoor root user to
the mounted file. Then, the malware rezips the file so itΓÇÖs ready for
installation.

ΓÇ£The technique is not especially sophisticated, but it does show considerable
effort on the part of the attacker to understand the appliance update cycle,
then develop and test a method for persistence,ΓÇ¥ the researchers wrote.

** *** ***** ******* *********** *************

Elephant Hackers

[2023.03.10] An elephant uses its right-of-way privileges to stop sugar-cane
trucks and grab food.

** *** ***** ******* *********** *************

NetWire Remote Access Trojan Maker Arrested

[2023.03.14] From Brian Krebs:

A Croatian national has been arrested for allegedly operating NetWire, a Remote
Access Trojan (RAT) marketed on cybercrime forums since 2012 as a stealthy way
to spy on infected systems and siphon passwords. The arrest coincided with a
seizure of the NetWire sales website by the U.S. Federal Bureau of Investigation
(FBI). While the defendant in this case hasnΓÇÖt yet been named publicly, the
NetWire website has been leaking information about the likely true identity and
location of its owner for the past 11 years.

The article details the mistakes that led to the personΓÇÖs address.

** *** ***** ******* *********** *************

How AI Could Write Our Laws

[2023.03.14] Nearly 90% of the multibillion-dollar federal lobbying apparatus in
the United States serves corporate interests. In some cases, the objective of
that money is obvious. Google pours millions into lobbying on bills related to
antitrust regulation. Big energy companies expect action whenever there is a
move to end drilling leases for federal lands, in exchange for the tens of
millions they contribute to congressional reelection campaigns.

But lobbying strategies are not always so blunt, and the interests involved are
not always so obvious. Consider, for example, a 2013 Massachusetts bill that
tried to restrict the commercial use of data collected from K-12 students using
services accessed via the internet. The bill appealed to many privacy-conscious
education advocates, and appropriately so. But behind the justification of
protecting students lay a market-altering policy: the bill was introduced at the
behest of Microsoft lobbyists, in an effort to exclude Google Docs from
classrooms.

What would happen if such legal-but-sneaky strategies for tilting the rules in
favor of one group over another become more widespread and effective? We can see
hints of an answer in the remarkable pace at which artificial-intelligence tools
for everything from writing to graphic design are being developed and improved.
And the unavoidable conclusion is that AI will make lobbying more guileful, and
perhaps more successful.

It turns out there is a natural opening for this technology: microlegislation.

ΓÇ£MicrolegislationΓÇ¥ is a term for small pieces of proposed law that cater --
sometimes unexpectedly -- to narrow interests. Political scientist Amy McKay
coined the term. She studied the 564 amendments to the Affordable Care Act
(ΓÇ£ObamacareΓÇ¥) considered by the Senate Finance Committee in 2009, as well as
the positions of 866 lobbying groups and their campaign contributions. She
documented instances where lobbyist comments -- on health-care research, vaccine
services, and other provisions -- were translated directly into microlegislation
in the form of amendments. And she found that those groupsΓÇÖ financial
contributions to specific senators on the committee increased the amendmentsΓÇÖ
chances of passing.

Her finding that lobbying works was no surprise. More important, McKayΓÇÖs work
demonstrated that computer models can predict the likely fate of proposed
legislative amendments, as well as the paths by which lobbyists can most
effectively secure their desired outcomes. And that turns out to be a critical
piece of creating an AI lobbyist.

Lobbying has long been part of the give-and-take among human policymakers and
advocates working to balance their competing interests. The danger of
microlegislation -- a danger greatly exacerbated by AI -- is that it can be used
in a way that makes it difficult to figure out who the legislation truly
benefits.

Another word for a strategy like this is a ΓÇ£hack.ΓÇ¥ Hacks follow the rules of
a system but subvert their intent. Hacking is often associated with computer
systems, but the concept is also applicable to social systems like financial
markets, tax codes, and legislative processes.

While the idea of monied interests incorporating AI assistive technologies into
their lobbying remains hypothetical, specific machine-learning technologies
exist today that would enable them to do so. We should expect these techniques
to get better and their utilization to grow, just as weΓÇÖve seen in so many
other domains.

HereΓÇÖs how it might work.

Crafting an AI microlegislator

To make microlegislation, machine-learning systems must be able to uncover the
smallest modification that could be made to a bill or existing law that would
make the biggest impact on a narrow interest.

There are three basic challenges involved. First, you must create a policy
proposal -- small suggested changes to legal text -- and anticipate whether or
not a human reader would recognize the alteration as substantive. This is
important; a change that isnΓÇÖt detectable is more likely to pass without
controversy. Second, you need to do an impact assessment to project the
implications of that change for the short- or long-range financial interests of
companies. Third, you need a lobbying strategizer to identify what levers of
power to pull to get the best proposal into law.

Existing AI tools can tackle all three of these.

The first step, the policy proposal, leverages the core function of generative
AI. Large language models, the sort that have been used for general-purpose
chatbots such as ChatGPT, can easily be adapted to write like a native in
different specialized domains after seeing a relatively small number of
examples. This process is called fine-tuning. For example, a model
ΓÇ£pre-trainedΓÇ¥ on a large library of generic text samples from books and the
internet can be ΓÇ£fine-tunedΓÇ¥ to work effectively on medical literature,
computer science papers, and product reviews.

Given this flexibility and capacity for adaptation, a large language model could
be fine-tuned to produce draft legislative texts, given a data set of previously
offered amendments and the bills they were associated with. Training data is
available. At the federal level, itΓÇÖs provided by the US Government Publishing
Office, and there are already tools for downloading and interacting with it.
Most other jurisdictions provide similar data feeds, and there are even
convenient assemblages of that data.

Meanwhile, large language models like the one underlying ChatGPT are routinely
used for summarizing long, complex documents (even laws and computer code) to
capture the essential points, and they are optimized to match human
expectations. This capability could allow an AI assistant to automatically
predict how detectable the true effect of a policy insertion may be to a human
reader.

Today, it can take a highly paid team of human lobbyists days or weeks to
generate and analyze alternative pieces of microlegislation on behalf of a
client. With AI assistance, that could be done instantaneously and cheaply. This
opens the door to dramatic increases in the scope of this kind of
microlegislating, with a potential to scale across any number of bills in any
jurisdiction.

Teaching machines to assess impact

Impact assessment is more complicated. There is a rich series of methods for
quantifying the predicted outcome of a decision or policy, and then also
optimizing the return under that model. This kind of approach goes by different
names in different circles -- mathematical programming in management science,
utility maximization in economics, and rational design in the life sciences.

To train an AI to do this, we would need to specify some way to calculate the
benefit to different parties as a result of a policy choice. That could mean
estimating the financial return to different companies under a few different
scenarios of taxation or regulation. Economists are skilled at building risk
models like this, and companies are already required to formulate and disclose
regulatory compliance risk factors to investors. Such a mathematical model could
translate directly into a reward function, a grading system that could provide
feedback for the model used to create policy proposals and direct the process of
training it.

The real challenge in impact assessment for generative AI models would be to
parse the textual output of a model like ChatGPT in terms that an economic model
could readily use. Automating this would require extracting structured financial
information from the draft amendment or any legalese surrounding it. This kind
of information extraction, too, is an area where AI has a long history; for
example, AI systems have been trained to recognize clinical details in
doctorsΓÇÖ notes. Early indications are that large language models are fairly
good at recognizing financial information in texts such as investor call
transcripts. While it remains an open challenge in the field, they may even be
capable of writing out multi-step plans based on descriptions in free text.

Machines as strategists

The last piece of the puzzle is a lobbying strategizer to figure out what
actions to take to convince lawmakers to adopt the amendment.

Passing legislation requires a keen understanding of the complex interrelated
networks of legislative offices, outside groups, executive agencies, and other
stakeholders vying to serve their own interests. Each actor in this network has
a baseline perspective and different factors that influence that point of view.
For example, a legislator may be moved by seeing an allied stakeholder take a
firm position, or by a negative news story, or by a campaign contribution.

It turns out that AI developers are very experienced at modeling these kinds of
networks. Machine-learning models for network graphs have been built, refined,
improved, and iterated by hundreds of researchers working on incredibly diverse
problems: lidar scans used to guide self-driving cars, the chemical functions of
molecular structures, the capture of motion in actorsΓÇÖ joints for computer
graphics, behaviors in social networks, and more.

In the context of AI-assisted lobbying, political actors like legislators and
lobbyists are nodes on a graph, just like users in a social network. Relations
between them are graph edges, like social connections. Information can be passed
along those edges, like messages sent to a friend or campaign contributions made
to a member. AI models can use past examples to learn to estimate how that
information changes the network. Calculating the likelihood that a campaign
contribution of a given size will flip a legislatorΓÇÖs vote on an amendment is
one application.

McKayΓÇÖs work has already shown us that there are significant, predictable
relationships between these actions and the outcomes of legislation, and that
the work of discovering those can be automated. Others have shown that graphs of
neural network models like those described above can be applied to political
systems. The full-scale use of these technologies to guide lobbying strategy is
theoretical, but plausible.

Put together, these three components could create an automatic system for
generating profitable microlegislation. The policy proposal system would create
millions, even billions, of possible amendments. The impact assessor would
identify the few that promise to be most profitable to the client. And the
lobbying strategy tool would produce a blueprint for getting them passed.

What remains is for human lobbyists to walk the floors of the Capitol or state
house, and perhaps supply some cash to grease the wheels. These final two
aspects of lobbying -- access and financing -- cannot be supplied by the AI
tools we envision. This suggests that lobbying will continue to primarily
benefit those who are already influential and wealthy, and AI assistance will
amplify their existing advantages.

The transformative benefit that AI offers to lobbyists and their clients is
scale. While individual lobbyists tend to focus on the federal level or a single
state, with AI assistance they could more easily infiltrate a large number of
state-level (or even local-level) law-making bodies and elections. At that
level, where the average cost of a seat is measured in the tens of thousands of
dollars instead of millions, a single donor can wield a lot of influence -- if
automation makes it possible to coordinate lobbying across districts.

How to stop them

When it comes to combating the potentially adverse effects of assistive AI, the
first response always seems to be to try to detect whether or not content was
AI-generated. We could imagine a defensive AI that detects anomalous lobbyist
spending associated with amendments that benefit the contributing group. But by
then, the damage might already be done.

In general, methods for detecting the work of AI tend not to keep pace with its
ability to generate convincing content. And these strategies wonΓÇÖt be
implemented by AIs alone. The lobbyists will still be humans who take the
results of an AI microlegislator and further refine the computerΓÇÖs strategies.
These hybrid human-AI systems will not be detectable from their output.

But the good news is: the same strategies that have long been used to combat
misbehavior by human lobbyists can still be effective when those lobbyists get
an AI assist. We donΓÇÖt need to reinvent our democracy to stave off the worst
risks of AI; we just need to more fully implement long-standing ideals.

First, we should reduce the dependence of legislatures on monolithic,
multi-thousand-page omnibus bills voted on under deadline. This style of
legislating exploded in the 1980s and 1990s and continues through to the most
recent federal budget bill. Notwithstanding their legitimate benefits to the
political system, omnibus bills present an obvious and proven vehicle for
inserting unnoticed provisions that may later surprise the same legislators who
approved them.

The issue is not that individual legislators need more time to read and
understand each bill (that isnΓÇÖt realistic or even necessary). ItΓÇÖs that
omnibus bills must pass. There is an imperative to pass a federal budget bill,
and so the capacity to push back on individual provisions that may seem
deleterious (or just impertinent) to any particular group is small. Bills that
are too big to fail are ripe for hacking by microlegislation.

Moreover, the incentive for legislators to introduce microlegislation catering
to a narrow interest is greater if the threat of exposure is lower. To
strengthen the threat of exposure for misbehaving legislative sponsors, bills
should focus more tightly on individual substantive areas and, after the
introduction of amendments, allow more time before the committee and floor
votes. During this time, we should encourage public review and testimony to
provide greater oversight.

Second, we should strengthen disclosure requirements on lobbyists, whether
theyΓÇÖre entirely human or AI-assisted. State laws regarding lobbying
disclosure are a hodgepodge. North Dakota, for example, only requires lobbying
reports to be filed annually, so that by the time a disclosure is made, the
policy is likely already decided. A lobbying disclosure scorecard created by
Open Secrets, a group researching the influence of money in US politics, tracks
nine states that do not even require lobbyists to report their compensation.

Ideally, it would be great for the public to see all communication between
lobbyists and legislators, whether it takes the form of a proposed amendment or
not. Absent that, letΓÇÖs give the public the benefit of reviewing what
lobbyists are lobbying for -- and why. Lobbying is traditionally an activity
that happens behind closed doors. Right now, many states reinforce that: they
actually exempt testimony delivered publicly to a legislature from being
reported as lobbying.

In those jurisdictions, if you reveal your position to the public, youΓÇÖre no
longer lobbying. LetΓÇÖs do the inverse: require lobbyists to reveal their
positions on issues. Some jurisdictions already require a statement of position
(a ΓÇÿyeaΓÇÖ or ΓÇÿnayΓÇÖ) from registered lobbyists. And in most (but not all)
states, you could make a public records request regarding meetings held with a
state legislator and hope to get something substantive back. But we can expect
more -- lobbyists could be required to proactively publish, within a few days, a
brief summary of what they demanded of policymakers during meetings and why they
believe itΓÇÖs in the general interest.

We canΓÇÖt rely on corporations to be forthcoming and wholly honest about the
reasons behind their lobbying positions. But having them on the record about
their intentions would at least provide a baseline for accountability.

Finally, consider the role AI assistive technologies may have on lobbying firms
themselves and the labor market for lobbyists. Many observers are rightfully
concerned about the possibility of AI replacing or devaluing the human labor it
automates. If the automating potential of AI ends up commodifying the work of
political strategizing and message development, it may indeed put some
professionals on K Street out of work.

But donΓÇÖt expect that to disrupt the careers of the most astronomically
compensated lobbyists: former members Congress and other insiders who have
passed through the revolving door. There is no shortage of reform ideas for
limiting the ability of government officials turned lobbyists to sell access to
their colleagues still in government, and they should be adopted and -- equally
important -- maintained and enforced in successive Congresses and
administrations.

None of these solutions are really original, specific to the threats posed by
AI, or even predominantly focused on microlegislation -- and thatΓÇÖs the point.
Good governance should and can be robust to threats from a variety of techniques
and actors.

But what makes the risks posed by AI especially pressing now is how fast the
field is developing. We expect the scale, strategies, and effectiveness of
humans engaged in lobbying to evolve over years and decades. Advancements in AI,
meanwhile, seem to be making impressive breakthroughs at a much faster pace
-- and itΓÇÖs still accelerating.

The legislative process is a constant struggle between parties trying to control
the rules of our society as they are updated, rewritten, and expanded at the
federal, state, and local levels. Lobbying is an important tool for balancing
various interests through our system. If itΓÇÖs well-regulated, perhaps lobbying
can support policymakers in making equitable decisions on behalf of us all.

This article was co-written with Nathan E. Sanders and originally appeared in
MIT Technology Review.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2023.03.14] This is a current list of where and when I am scheduled to speak:

IΓÇÖm speaking on ΓÇ£How to Reclaim Power in the Digital WorldΓÇ¥ at EPFL in
Lausanne, Switzerland, on Thursday, March 16, 2023, at 5:30 PM CET. IΓÇÖll be
discussing my new book A HackerΓÇÖs Mind: How the Powerful Bend SocietyΓÇÖs
Rules at Harvard Science Center in Cambridge, Massachusetts, USA, on Friday,
March 31, 2023, at 6:00 PM EDT. IΓÇÖll be discussing my book A HackerΓÇÖs Mind
with Julia Angwin at the Ford Foundation Center for Social Justice in New York
City, on Thursday, April 6, 2023, at 6:30 PM EDT.
IΓÇÖm speaking at IT-S Now 2023 in Vienna, Austria, on June 2, 2023, at 8:30 AM
CEST.
The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries,
analyses, insights, and commentaries on security technology. To subscribe, or to
read back issues, see Crypto-Gram's web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and
friends who will find it valuable. Permission is also granted to reprint
CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a
security guru by the Economist. He is the author of over one dozen books --
including his latest, A HackerΓÇÖs Mind -- as well as hundreds of articles,
essays, and academic papers. His newsletter and blog are read by over 250,000
people. Schneier is a fellow at the Berkman Klein Center for Internet & Society
at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy
School; a board member of the Electronic Frontier Foundation, AccessNow, and the
Tor Project; and an Advisory Board Member of the Electronic Privacy Information
Center and VerifiedVoting.org. He is the Chief of Security Architecture at
Inrupt, Inc.

Copyright © 2023 by Bruce Schneier.

--- BBBS/Li6 v4.10 Toy-5
 * Origin: TCOB1 - binkd.thecivv.ie (618:500/14)
  Show ANSI Codes | Hide BBCodes | Show Color Codes | Hide Encoding | Hide HTML Tags | Show Routing
Previous Message | Next Message | Back to Computer Support/Help/Discussion...  <--  <--- Return to Home Page

VADV-PHP
Execution Time: 0.0186 seconds

If you experience any problems with this website or need help, contact the webmaster.
VADV-PHP Copyright © 2002-2024 Steve Winn, Aspect Technologies. All Rights Reserved.
Virtual Advanced Copyright © 1995-1997 Roland De Graaf.
v2.1.241108