AT2k Design BBS Message Area


From: Sean Rima
To: All
Subject: CRYPTO-GRAM, December 15, 2024 Part 6
Date: December 23, 2024, 11:41 AM

There is potential for AI models to be much more scalable and adaptable to
more languages and countries than organizations of human moderators. But
the implementations to date on platforms like Meta demonstrate that a lot
more work
[https://dig.watch/updates/the-consequences-of...]
needs to be done to make these systems fair and effective.

One thing that didn't matter much in 2024 was corporate AI developers'
prohibitions on using their tools for politics. Despite market leader
OpenAI's emphasis on banning political uses
[https://www.washingtonpost.com/technology/202...]
and its use of AI to automatically reject a quarter-million requests
[https://www.nbcnews.com/tech/chatgpt-rejected...]
to generate images of political candidates, the company's enforcement has
been ineffective
[https://www.washingtonpost.com/technology/202...]
and actual use is widespread.

* THE GENIE IS LOOSE

All of these trends -- both good and bad -- are likely to continue. As AI
gets more powerful and capable, it is likely to infiltrate every aspect of
politics. This will happen whether the AI's performance is superhuman or
suboptimal, whether it makes mistakes or not, and whether the balance of
its use is positive or negative. All it takes is for one party, one
campaign, one outside group, or even an individual to see an advantage in
automation.

_This essay was written with Nathan E. Sanders, and originally appeared in
The Conversation
[https://theconversation.com/the-apocalypse-th...]._

** *** ***** ******* *********** *************


** DETECTING PEGASUS INFECTIONS
------------------------------------------------------------

[2024.12.06]
[https://www.schneier.com/blog/archives/2024/1...]
This tool
[https://arstechnica.com/security/2024/12/1-ph...]
seems to do a pretty good job.

> The company's Mobile Threat Hunting feature uses a combination of malware
signature-based detection, heuristics, and machine learning to look for
anomalies in iOS and Android device activity or telltale signs of spyware
infection. For paying iVerify customers, the tool regularly checks devices
for potential compromise. But the company also offers a free version of the
feature for anyone who downloads the iVerify Basics app for $1. These users
can walk through steps to generate and send a special diagnostic utility
file to iVerify and receive analysis within hours. Free users can use the
tool once a month. iVerify's infrastructure is built to be
privacy-preserving, but to run the Mobile Threat Hunting feature, users
must enter an email address so the company has a way to contact them if a
scan turns up spyware -- as it did in the seven recent Pegasus discoveries.
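The approach the quote describes -- matching device activity against known indicators alongside heuristics -- can be illustrated with a minimal toy sketch. The indicator values and function below are invented for illustration only; they do not reflect iVerify's actual implementation or any real Pegasus indicators.

```python
# Toy sketch of signature-based spyware scanning: compare a device's
# observed processes and contacted domains against a set of known
# indicators of compromise (IOCs). All IOC values here are invented
# placeholders, not real spyware indicators.

KNOWN_BAD_PROCESSES = {"roleaboutd", "msgacntd"}   # hypothetical process names
KNOWN_BAD_DOMAINS = {"example-c2.test"}            # hypothetical C2 domain

def scan(processes, contacted_domains):
    """Return a list of findings; an empty list means no signature matched."""
    findings = []
    for proc in processes:
        if proc in KNOWN_BAD_PROCESSES:
            findings.append(f"suspicious process: {proc}")
    for domain in contacted_domains:
        if domain in KNOWN_BAD_DOMAINS:
            findings.append(f"suspicious domain: {domain}")
    return findings

print(scan(["launchd", "roleaboutd"], ["example.com"]))
```

Real scanners layer heuristics and machine-learning anomaly detection on top of signature lists like this, since signatures alone only catch already-known spyware variants.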

** *** ***** ******* *********** *************


** TRUST ISSUES IN AI
------------------------------------------------------------

[2024.12.09]
[https://www.schneier.com/blog/archives/2024/1...]
_This essay was written with Nathan E. Sanders. It originally appeared as a
response to Evgeny Morozov in _Boston Review_ΓÇÿs forum, ΓÇ£The AI We Deserve
[https://www.bostonreview.net/forum/the-ai-we-...].ΓÇ¥_

For a technology that seems startling in its modernity, AI sure has a long
history. Google Translate, OpenAI chatbots, and Meta AI image generators
are built on decades of advancements in linguistics, signal processing,
statistics, and other fields going back to the early days of computing --
and, often, on seed funding from the U.S. Department of Defense. But
today's tools are hardly the intentional product of the diverse generations
of innovators that came before. We agree with Morozov that the
"refuseniks," as he calls
[https://www.bostonreview.net/forum/the-ai-we-...] them, are wrong to
see AI as "irreparably tainted" by its origins. AI is better understood as
a creative, global field of human endeavor that has been largely captured
by U.S. venture capitalists, private equity, and Big Tech. But that was
never the inevitable outcome, and it doesn't need to stay that way.

The internet is a case in point. The fact that it originated in the
military is a historical curiosity, not an indication of its essential
capabilities or social significance. Yes, it was created to connect
different, incompatible Department of Defense networks. Yes, it was
designed to survive the sorts of physical damage expected from a nuclear
war. And yes, back then it was a bureaucratically controlled space where
frivolity was discouraged and commerce was forbidden.

Over the decades, the internet transformed from military project to
academic tool to the corporate marketplace it is today. These forces, each
in turn, shaped what the internet was and what it could do. For most of us
billions online today, the only internet we have ever known has been
corporate -- because the internet didn't flourish until the capitalists got
hold of it.

AI followed a similar path. It was originally funded by the military, with
the military's goals in mind. But the Department of Defense didn't design
the modern ecosystem of AI any more than it did the modern internet.
Arguably, its influence on AI was even less because AI simply didn't work
back then. While the internet exploded in usage, AI hit a series of dead
ends. The research discipline went through multiple "winters" when funders
of all kinds -- military and corporate -- were disillusioned and research
money dried up for years at a time. Since the release of ChatGPT, AI has
reached the same endpoint as the internet: it is thoroughly dominated by
corporate power. Modern AI, with its deep reinforcement learning and large
language models, is shaped by venture capitalists, not the military -- nor
even by idealistic academics anymore.

We agree with much of Morozov's critique of corporate control, but it does
not follow that we must reject the value of instrumental reason. Solving
problems and pursuing goals is not a bad thing, and there is real cause to
be excited about the uses of current AI. Morozov illustrates this from his
own experience: he uses AI to pursue the explicit goal of language
learning.

AI tools promise to increase our individual power, amplifying our
capabilities and endowing us with skills, knowledge, and abilities we would
--- 
 * Origin: High Portable Tosser at my node (618:500/14.1)