
Are We Waking Up Fast Enough to the Dangers of AI Militarism?


Yves here. The stoopid, it burns. AI errors and shortcomings are getting more and more press, yet implementation in high-risk settings continues. This post discusses the Trump administration’s eagerness to use AI for critical military decisions despite poor performance in war games and similar tests.

By Tom Valovic, a writer, editor, futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and cultural issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in-chief of Telecommunications magazine for many years. Tom has written about the effects of technology on society for a variety of publications including Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, Columbia University’s Media Studies Journal, and others. He can be reached at [email protected]. Originally published at Common Dreams

AI is everywhere these days. There’s no escape. And as geopolitical events appear to spiral out of control in Ukraine and Gaza, it seems clear that AI, while theoretically a force for positive change, has become a worrisome accelerant to the volatility and destabilization that may lead us to once again thinking the unthinkable—in this case World War III.

The reckless and irresponsible pace of AI development badly needs a measure of moderation and wisdom that seems sorely lacking in both the technology and political spheres. Those whom we have relied on to provide this in the past—leading academics, forward-thinking political figures, and various luminaries and thought leaders in popular culture—often seem to be missing in action when it comes to loudly sounding the necessary alarms. Lately, however, and offering at least a shred of hope, we’re seeing more coverage in the mainstream press of the dangers of AI’s destructive potential.

To get a feel for perspectives on AI in a military context, it’s useful to start with an article that appeared in Wired magazine a few years ago, “The AI-Powered, Totally Autonomous Future of War Is Here.” This treatment practically gushed with excitement about the prospect of autonomous warfare using AI. It went on to discuss how Big Tech, the military, and the political establishment were increasingly aligning to promote the use of weaponized AI in a mad new AI-nuclear arms race. The article also provided a clear glimpse of the foolish transparency of the all-too-common Big Tech mantra that “it’s really dangerous but let’s do it anyway.”

More recently, we see supposed thought leaders like former Google CEO Eric Schmidt sounding the alarm about AI in warfare after, of course, being heavily instrumental in promoting it. A March 2025 article appearing in Fortune noted that “Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.” It’s unfortunate that Mr. Schmidt didn’t think more about his planetary-level “oops” before he decided to be so heavily instrumental in developing its capabilities.

The acceleration of frenzied AI development has now been green-lit by the Trump administration, with US Vice President JD Vance’s deep ties to Big Tech becoming more and more apparent. This position is easily parsed—full speed ahead. One of Trump’s first official acts was to announce the Stargate Project, a $500 billion investment in AI infrastructure. Both President Donald Trump and Vance have made their position crystal clear: they will not attempt in any way to slow down progress by developing AI guardrails and regulation, even to the point of attempting to preclude states from enacting their own rules as part of the so-called “Big Beautiful Bill.”

Widening The Public Debate

If there is any bright spot in this grim scenario, it’s this: The dangers of AI militarism are starting to get more widely publicized as AI itself gets increased scrutiny in political circles and the mainstream media. In addition to the Fortune article and other media treatments, a recent article in Politico discussed how AI models seem to be predisposed toward military solutions and conflict:

Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models or LLMs—OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat—were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan. The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately, and turn crises into shooting wars—even to the point of launching nuclear weapons. “The AI is always playing Curtis LeMay,” says Schneider, referring to the notoriously nuke-happy Air Force general of the Cold War. “It’s almost like the AI understands escalation, but not deescalation. We don’t really know why that is.”

Personally, I don’t think “why that is” is much of a mystery. There’s a widespread perception that AI is a fairly recent development coming out of the high-tech sector. But this is a somewhat misleading picture, frequently painted or poorly understood by corporate-influenced media journalists. The reality is that AI development has been a huge ongoing investment on the part of government agencies for decades. According to the Brookings Institution, in order to advance an AI arms race between the US and China, the federal government, working closely with the military, has served as an incubator for thousands of AI projects in the private sector under the National AI Initiative Act of 2020. The COO of OpenAI, the company that created ChatGPT, openly admitted to Time magazine that government funding has been the main driver of AI development for many years.

This national AI program has been overseen by a surprising number of government agencies. They include but are not limited to government alphabet soup agencies like DARPA, DOD, NASA, NIH, IARPA, DOE, Homeland Security, and the State Department. Technology is power and, at the end of the day, many tech-driven initiatives are chess pieces in a behind-the-scenes power struggle taking place in an increasingly opaque technocratic geopolitical landscape. In this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. But, of course, we have seen this movie before in the case of the nuclear arms race.

The Politico article also pointed out that AI is being groomed to make high-level and human-independent decisions concerning the launch of nuclear weapons:

The Pentagon claims that won’t happen in real life, that its existing policy is that AI will never be allowed to dominate the human “decision loop” that makes a call on whether to, say, start a war—certainly not a nuclear one. But some AI scientists believe the Pentagon has already started down a slippery slope by rushing to deploy the latest generations of AI as a key part of America’s defenses around the world. Driven by worries about fending off China and Russia at the same time, as well as by other global threats, the Defense Department is creating AI-driven defensive systems that in many areas are swiftly becoming autonomous—meaning they can respond on their own, without human input—and move so fast against potential enemies that humans can’t keep up.

Despite the Pentagon’s official policy that humans will always be in control, the demands of modern warfare—the need for lightning-fast decision-making, coordinating complex swarms of drones, crunching vast amounts of intelligence data, and competing against AI-driven systems built by China and Russia—mean that the military is increasingly likely to become dependent on AI. That could prove true even, ultimately, when it comes to the most existential of all decisions: whether to launch nuclear weapons.

The AI Technocratic Takeover: Planned for Decades

Learning the history behind the military’s AI plans is essential to understanding its current complexities. Another eye-opening perspective on the double threat of AI and nuclear weapons working in tandem was offered by Peter Byrne in “Into the Uncanny Valley: Human-AI War Machines”:

In 1960, J.C.R. Licklider published “Man-Computer Symbiosis” in an electronics industry trade journal. Funded by the Air Force, Licklider explored methods of amalgamating AIs and humans into combat-ready machines, anticipating the current military-industrial mission of charging AI-guided symbionts with targeting humans…

Fast forward sixty years: Military machines infused with large language models are chatting verbosely with convincing airs of authority. But, projecting humanoid qualities does not make those machines smart, trustworthy, or capable of distinguishing fact from fiction. Trained on flotsam scraped from the internet, AI is limited by a classic “garbage in-garbage out” problem, its Achilles’ heel. Rather than solving ethical dilemmas, military AI systems are likely to multiply them, as has been occurring with the deployment of autonomous drones that cannot reliably distinguish rifles from rakes, or military vehicles from family cars…. Indeed, the Pentagon’s oft-echoed claim that military artificial intelligence is designed to adhere to accepted ethical standards is absurd, as exemplified by the live-streamed mass murder of Palestinians by Israeli forces, which has been enabled by dehumanizing AI programs that a majority of Israelis applaud. AI-human platforms sold to Israel by Palantir, Microsoft, Amazon Web Services, Dell, and Oracle are programmed to enable war crimes and genocide.

The role of the military in developing most of the advanced technologies that have worked their way into modern society still remains beneath the threshold of public awareness. But in the current environment characterized by the unholy alliance between corporate and government power, there no longer seems to be an ethical counterweight to unleashing a Pandora’s box of seemingly out-of-control AI technologies for less than noble purposes.

That the AI conundrum has appeared in the midst of a burgeoning world polycrisis seems to point toward a larger-than-life existential crisis for humanity that’s been ominously predicted and portrayed in science fiction movies, literature, and popular culture for decades. Arguably, these were not just films for speculative entertainment but in current circumstances can be viewed as warnings from our collective unconscious that have largely gone unheeded. As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught against both personal autonomy and the democratic process.

No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency are constantly being eroded. And now, of course, AI itself is upon us in full force, increasingly weaponized not only against nation-states but also against ordinary citizens. As Albert Einstein warned, “It has become appallingly obvious that our technology has exceeded our humanity.” In a troubling ironic twist, we know that Einstein played a strong role in developing the technology for nuclear weapons. And yet somehow, like J. Robert Oppenheimer, he eventually seemed to understand the deeper implications of what he helped to unleash.

Can we say the same about today’s AI CEOs and other self-appointed experts as they gleefully unleash this powerful force while at the same time casually proclaiming that they don’t really know if AI and AGI might actually spell the end of humanity and Planet Earth itself?


