Monday, March 2, 2026
FeeOnlyNews.com

OpenAI, Anthropic, and Google each spent more on lobbying in Q1 2025 than the entire AI safety research field received in grants

by FeeOnlyNews.com
1 hour ago
in Startups
Reading Time: 7 mins read



There’s a number that’s been circulating among AI policy researchers this quarter, and once you see it, it’s hard to unsee. In the first three months of 2025, OpenAI, Anthropic, and Google each spent more on federal lobbying efforts than the entire independent AI safety research field received in grant funding during the same period.

Sit with that for a moment. The companies building the most powerful AI systems on Earth are collectively outspending the people trying to understand whether those systems are safe by a ratio that would be comical if the stakes weren’t so high.


The numbers tell a story we should pay attention to

According to federal lobbying disclosures filed with the Senate Office of Public Records, OpenAI spent approximately $2.2 million on lobbying in Q1 2025. Google’s parent company Alphabet, which has increasingly focused its lobbying on AI-related policy, disclosed significantly more. Anthropic, the company that has positioned itself as the “safety-first” AI lab, also ramped up its Washington presence considerably.

Meanwhile, a recent analysis published by the Centre for the Governance of AI estimated that total grant funding to independent AI safety research organizations in Q1 2025 was in the low single-digit millions globally. Some estimates place it even lower when you exclude funding that ultimately flows to university labs affiliated with the major companies themselves.

The pattern is stark. The organizations with the most financial incentive to shape AI regulation are the ones with the loudest voice in the rooms where regulation gets written. The organizations whose entire purpose is to evaluate risks are working on shoestring budgets, often relying on individual philanthropists who may or may not renew their commitments year to year.

What lobbying actually buys

There’s a common misconception that lobbying is inherently corrupt, that it’s about backroom deals and briefcases of cash. The reality is more subtle and, in some ways, more concerning.

What lobbying buys is access. It buys the ability to be the first voice a congressional staffer hears when they’re drafting language for a bill they don’t fully understand. It buys the opportunity to frame the conversation before the conversation even begins. A 2014 study published in Perspectives on Politics by Martin Gilens of Princeton and Benjamin Page of Northwestern found that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.

Apply that finding to AI policy, and the picture becomes clear. When OpenAI’s lobbyists sit down with legislators, they’re shaping the vocabulary, the assumptions, and the boundaries of what “reasonable regulation” looks like. When independent safety researchers want to present their findings, they often can’t afford a flight to Washington.

The psychology of self-regulation promises

In my recent piece on how most companies have a permission problem rather than a communication problem, I explored how organizations develop invisible norms about what’s safe to say and what isn’t. The same dynamic operates at an industry level.

AI companies have spent the last two years making voluntary safety commitments, publishing responsible use policies, and signing pledges at the White House. These gestures create what psychologists call a “moral licensing” effect. Research by Nina Mazar and Chen-Bo Zhong, published in Psychological Science, demonstrated that people who establish moral credentials in one domain feel unconsciously liberated to behave less ethically in another.

The corporate version of this is familiar. A company publishes a responsible AI charter on Tuesday. On Wednesday, its lobbyists argue against binding legislation that would enforce those exact same principles. The charter provides the moral license; the lobbying provides the market advantage.

Anthropic is a particularly interesting case here. Founded explicitly as a safety-focused alternative to OpenAI, the company has increasingly behaved like a standard tech company when it comes to policy influence. Their lobbying spend has grown quarter over quarter. This tracks with a broader pattern: organizations that define themselves by their values are often the most vulnerable to the gap between stated principles and institutional behavior.

The asymmetry problem

There’s a structural issue here that goes beyond any single company’s choices. The incentives are fundamentally misaligned.

AI safety research generates no revenue. It produces papers, frameworks, and warnings. It asks companies to slow down, to test more, to be transparent about failure modes. Every dollar spent on genuine safety research is a dollar that produces no quarterly return, no stock price bump, no competitive moat.

Lobbying, on the other hand, can be extraordinarily profitable. A study by researchers at the University of Kansas, published in the Journal of Financial Economics, found that firms that lobby strategically can see returns of over 200% on their lobbying expenditures when favorable policy outcomes are achieved. For AI companies racing to capture a market projected to be worth trillions, spending a few million on policy influence is one of the highest-ROI investments available.
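To make that return-on-investment asymmetry concrete, here is a toy back-of-the-envelope calculation. Both figures are hypothetical, chosen purely to illustrate the 200%-return arithmetic described above; they are not drawn from any actual disclosure or study data.

```python
# Toy illustration of the lobbying ROI arithmetic described above.
# Both inputs are hypothetical, chosen purely for illustration.
lobbying_spend = 2_500_000     # hypothetical quarterly lobbying budget ($)
policy_win_value = 7_500_000   # hypothetical value of a favorable policy outcome ($)

# Standard return-on-investment: (gain - cost) / cost
roi = (policy_win_value - lobbying_spend) / lobbying_spend
print(f"Return on lobbying spend: {roi:.0%}")  # prints "Return on lobbying spend: 200%"
```

At that rate, a few million dollars of quarterly lobbying is cheap insurance for a company chasing a market projected to be worth trillions, which is exactly the incentive gradient the article describes.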


This creates what economists call an asymmetric contest. One side has every financial incentive to participate aggressively. The other side runs on idealism and grant cycles. The outcome is predictable.

Europe offers an instructive contrast

The European Union’s approach to AI regulation has been different, though far from perfect. The EU AI Act, which entered enforcement phases in 2025, was developed through a process that gave considerably more weight to civil society organizations, academic researchers, and independent policy groups.

That doesn’t mean European regulation is ideal. Plenty of critics argue the AI Act is too rigid, too slow, or too focused on categorization rather than outcomes. But the process itself embedded a different assumption: that the people building powerful technology shouldn’t be the primary architects of the rules governing that technology.

In the U.S., we’ve essentially inverted that assumption. The builders are the architects, the referees, and increasingly, the ones writing the rulebook.

What this means for the rest of us

I’ve been thinking lately about the difference between problems that feel abstract and problems that are merely delayed. This spending asymmetry feels abstract to most people. AI lobbying disclosures don’t make headlines the way a chatbot generating misinformation does.

But policy is infrastructure. It determines what’s legal, what’s required, what’s incentivized, and what’s ignored. The lobbying happening right now in Washington is shaping the regulatory environment that will govern AI systems for the next decade, possibly longer. And the people doing that shaping have a very specific set of financial interests that may or may not align with public welfare.

Daniel Kahneman’s research on judgment under uncertainty showed that humans consistently underweight risks that feel distant or abstract. We’re wired to respond to immediate, vivid threats and to discount slow-moving structural ones. The lobbying-to-safety-research spending ratio is a slow-moving structural threat. It’s the kind of thing we’ll look back on in ten years and wonder why we didn’t pay more attention.

A question worth asking

Here’s what I keep coming back to. If these companies genuinely believe their own safety rhetoric, if they truly think AI poses existential-level risks (as several of their CEOs have publicly stated), then why is their lobbying budget larger than their contributions to independent safety research?

The answer, of course, is that institutional incentives and personal beliefs operate on different tracks. A CEO can sincerely believe AI is dangerous and simultaneously lead a company whose institutional machinery works to minimize regulatory oversight. There’s no contradiction once you understand that organizations are not people, regardless of how often we anthropomorphize them.

The question for the rest of us is simpler: who do we want writing the rules? The companies with billions on the line, or an independent research community with the freedom to say uncomfortable things? Right now, we’ve made our choice, mostly by not making one at all.

Where this leaves the safety community

Several prominent AI safety researchers have begun speaking publicly about the funding gap, and a few have left academia entirely for industry positions, citing the impossibility of doing meaningful work on grants that barely cover a postdoc’s salary. This brain drain is its own kind of lobbying victory: if the best safety researchers work for the companies they’re supposed to be evaluating, the independence that makes their work credible evaporates.

There are counterexamples. Some foundations have increased their AI safety funding significantly. Some researchers have found creative ways to maintain independence while collaborating with industry. But the overall trajectory is clear, and the Q1 2025 numbers make it undeniable.

We are in a period where the most consequential technology in a generation is being governed primarily by the financial interests of the people building it. The psychological and structural forces enabling this are well-documented. The question is whether we’ll act on that knowledge or simply note it with the detached appreciation of someone watching a slow-motion collision from a comfortable distance.

The numbers are public. The pattern is visible. What happens next is a choice.



Copyright © 2022-2024 All Rights Reserved
See articles for original source and related links to external sites.
