Monday, April 20, 2026
FeeOnlyNews.com
OpenAI, Anthropic, and Google each spent more on lobbying in Q1 2025 than the entire AI safety research field received in grants

by FeeOnlyNews.com
2 months ago
in Startups
Reading Time: 7 mins read



There’s a number that’s been circulating among AI policy researchers this quarter, and once you see it, it’s hard to unsee. In the first three months of 2025, OpenAI, Anthropic, and Google each spent more on federal lobbying efforts than the entire independent AI safety research field received in grant funding during the same period.

Sit with that for a moment. The companies building the most powerful AI systems on Earth are collectively outspending the people trying to understand whether those systems are safe by a ratio that would be comical if the stakes weren’t so high.


The numbers tell a story we should pay attention to

According to federal lobbying disclosures filed with the Senate Office of Public Records, OpenAI spent approximately $2.2 million on lobbying in Q1 2025. Google’s parent company Alphabet, which has increasingly focused its lobbying on AI-related policy, disclosed significantly more. Anthropic, the company that has positioned itself as the “safety-first” AI lab, also ramped up its Washington presence considerably.

Meanwhile, a recent analysis published by the Centre for the Governance of AI estimated that total grant funding to independent AI safety research organizations in Q1 2025 was in the low single-digit millions globally. Some estimates place it even lower when you exclude funding that ultimately flows to university labs affiliated with the major companies themselves.
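To make the asymmetry concrete, here is a back-of-the-envelope sketch of the comparison. Only OpenAI's $2.2 million figure comes from the disclosures cited above; the Alphabet and Anthropic amounts and the grant total are hypothetical placeholders standing in for "significantly more" and "low single-digit millions."

```python
# Back-of-the-envelope comparison of Q1 2025 lobbying spend vs. grants
# to independent AI safety research. Only OpenAI's $2.2M comes from the
# filings discussed above; every other number is a placeholder.
lobbying_q1_2025 = {
    "OpenAI": 2_200_000,     # Senate lobbying disclosure (cited above)
    "Alphabet": 3_100_000,   # placeholder for "significantly more"
    "Anthropic": 2_400_000,  # placeholder for "ramped up considerably"
}

safety_grants_q1_2025 = 2_000_000  # placeholder for "low single-digit millions"

total_lobbying = sum(lobbying_q1_2025.values())
ratio = total_lobbying / safety_grants_q1_2025

print(f"Combined lobbying:  ${total_lobbying:>10,}")
print(f"Independent grants: ${safety_grants_q1_2025:>10,}")
print(f"Lobbying-to-grants: {ratio:.1f}x")

# The article's claim is per-company: each firm alone outspends the
# entire field. Under these placeholders, that holds for all three.
each_exceeds_field = all(
    spend > safety_grants_q1_2025 for spend in lobbying_q1_2025.values()
)
```

Under any reasonable fill-in for the placeholder values, the combined ratio lands at several multiples, which is the point of the comparison.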

The pattern is stark. The organizations with the most financial incentive to shape AI regulation are the ones with the loudest voice in the rooms where regulation gets written. The organizations whose entire purpose is to evaluate risks are working on shoestring budgets, often relying on individual philanthropists who may or may not renew their commitments year to year.

What lobbying actually buys

There’s a common misconception that lobbying is inherently corrupt, that it’s about backroom deals and briefcases of cash. The reality is more subtle and, in some ways, more concerning.

What lobbying buys is access. It buys the ability to be the first voice a congressional staffer hears when they’re drafting language for a bill they don’t fully understand. It buys the opportunity to frame the conversation before the conversation even begins. A 2014 study by Martin Gilens of Princeton and Benjamin Page of Northwestern, published in Perspectives on Politics, found that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.

Apply that finding to AI policy, and the picture becomes clear. When OpenAI’s lobbyists sit down with legislators, they’re shaping the vocabulary, the assumptions, and the boundaries of what “reasonable regulation” looks like. When independent safety researchers want to present their findings, they often can’t afford a flight to Washington.

The psychology of self-regulation promises

In my recent piece on how most companies have a permission problem rather than a communication problem, I explored how organizations develop invisible norms about what’s safe to say and what isn’t. The same dynamic operates at an industry level.

AI companies have spent the last two years making voluntary safety commitments, publishing responsible use policies, and signing pledges at the White House. These gestures create what psychologists call a “moral licensing” effect. Research by Nina Mazar and Chen-Bo Zhong, published in Psychological Science, demonstrated that people who establish moral credentials in one domain feel unconsciously liberated to behave less ethically in another.

The corporate version of this is familiar. A company publishes a responsible AI charter on Tuesday. On Wednesday, its lobbyists argue against binding legislation that would enforce those exact same principles. The charter provides the moral license; the lobbying provides the market advantage.

Anthropic is a particularly interesting case here. Founded explicitly as a safety-focused alternative to OpenAI, the company has increasingly behaved like a standard tech company when it comes to policy influence. Its lobbying spend has grown quarter over quarter. This tracks with a broader pattern: organizations that define themselves by their values are often the most vulnerable to the gap between stated principles and institutional behavior.

The asymmetry problem

There’s a structural issue here that goes beyond any single company’s choices. The incentives are fundamentally misaligned.

AI safety research generates no revenue. It produces papers, frameworks, and warnings. It asks companies to slow down, to test more, to be transparent about failure modes. Every dollar spent on genuine safety research is a dollar that produces no quarterly return, no stock price bump, no competitive moat.

Lobbying, on the other hand, can be extraordinarily profitable. A study by researchers at the University of Kansas, published in the Journal of Financial Economics, found that firms that lobby strategically can see returns of over 200% on their lobbying expenditures when favorable policy outcomes are achieved. For AI companies racing to capture a market projected to be worth trillions, spending a few million on policy influence is one of the highest-ROI investments available.


This creates what economists call an asymmetric contest. One side has every financial incentive to participate aggressively. The other side runs on idealism and grant cycles. The outcome is predictable.

Europe offers an instructive contrast

The European Union’s approach to AI regulation has been different, though far from perfect. The EU AI Act, which entered enforcement phases in 2025, was developed through a process that gave considerably more weight to civil society organizations, academic researchers, and independent policy groups.

That doesn’t mean European regulation is ideal. Plenty of critics argue the AI Act is too rigid, too slow, or too focused on categorization rather than outcomes. But the process itself embedded a different assumption: that the people building powerful technology shouldn’t be the primary architects of the rules governing that technology.

In the U.S., we’ve essentially inverted that assumption. The builders are the architects, the referees, and increasingly, the ones writing the rulebook.

What this means for the rest of us

I’ve been thinking lately about the difference between problems that feel abstract and problems that are merely delayed. This spending asymmetry feels abstract to most people. AI lobbying disclosures don’t make headlines the way a chatbot generating misinformation does.

But policy is infrastructure. It determines what’s legal, what’s required, what’s incentivized, and what’s ignored. The lobbying happening right now in Washington is shaping the regulatory environment that will govern AI systems for the next decade, possibly longer. And the people doing that shaping have a very specific set of financial interests that may or may not align with public welfare.

Daniel Kahneman and Amos Tversky’s work on prospect theory showed that people systematically misweight probabilities: vivid, immediate threats loom large, while risks that feel distant or merely probable get discounted. We’re wired to respond to the sudden and to tune out the slow-moving and structural. The lobbying-to-safety-research spending ratio is a slow-moving structural threat. It’s the kind of thing we’ll look back on in ten years and wonder why we didn’t pay more attention.
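The misweighting can be made precise. A minimal sketch of the probability weighting function from Tversky and Kahneman's 1992 cumulative prospect theory, using γ = 0.61 (their estimated parameter for gains; the specific probabilities below are illustrative, not from this article):

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability weighting function:

        w(p) = p^g / (p^g + (1-p)^g)^(1/g)

    With gamma < 1, rare vivid risks are overweighted while
    moderate-to-high-probability risks are underweighted.
    """
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.40, 0.90):
    w = weight(p)
    bias = "overweighted" if w > p else "underweighted"
    print(f"p = {p:.2f}  ->  w(p) = {w:.3f}  ({bias})")
```

The crossover is the relevant feature: a 1-in-100 plane crash gets overweighted, while a diffuse, probable, structural risk of the kind described above gets underweighted.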

A question worth asking

Here’s what I keep coming back to. If these companies genuinely believe their own safety rhetoric, if they truly think AI poses existential-level risks (as several of their CEOs have publicly stated), then why is their lobbying budget larger than their contributions to independent safety research?

The answer, of course, is that institutional incentives and personal beliefs operate on different tracks. A CEO can sincerely believe AI is dangerous and simultaneously lead a company whose institutional machinery works to minimize regulatory oversight. There’s no contradiction once you understand that organizations are not people, regardless of how often we anthropomorphize them.

The question for the rest of us is simpler: who do we want writing the rules? The companies with billions on the line, or an independent research community with the freedom to say uncomfortable things? Right now, we’ve made our choice, mostly by not making one at all.

Where this leaves the safety community

Several prominent AI safety researchers have begun speaking publicly about the funding gap, and a few have left academia entirely for industry positions, citing the impossibility of doing meaningful work on grants that barely cover a postdoc’s salary. This brain drain is its own kind of lobbying victory: if the best safety researchers work for the companies they’re supposed to be evaluating, the independence that makes their work credible evaporates.

There are counterexamples. Some foundations have increased their AI safety funding significantly. Some researchers have found creative ways to maintain independence while collaborating with industry. But the overall trajectory is clear, and the Q1 2025 numbers make it undeniable.

We are in a period where the most consequential technology in a generation is being governed primarily by the financial interests of the people building it. The psychological and structural forces enabling this are well-documented. The question is whether we’ll act on that knowledge or simply note it with the detached appreciation of someone watching a slow-motion collision from a comfortable distance.

The numbers are public. The pattern is visible. What happens next is a choice.

