There’s a number that’s been circulating among AI policy researchers this quarter, and once you see it, it’s hard to unsee. In the first three months of 2025, OpenAI, Anthropic, and Google each spent more on federal lobbying efforts than the entire independent AI safety research field received in grant funding during the same period.
Sit with that for a moment. The companies building the most powerful AI systems on Earth are collectively outspending the people trying to understand whether those systems are safe by a ratio that would be comical if the stakes weren’t so high.

The numbers tell a story we should pay attention to
According to federal lobbying disclosures filed with the Senate Office of Public Records, OpenAI spent approximately $2.2 million on lobbying in Q1 2025. Google’s parent company Alphabet, which has increasingly focused its lobbying on AI-related policy, disclosed significantly more. Anthropic, the company that has positioned itself as the “safety-first” AI lab, also ramped up its Washington presence considerably.
Meanwhile, a recent analysis published by the Centre for the Governance of AI estimated that total grant funding to independent AI safety research organizations in Q1 2025 was in the low single-digit millions globally. Some estimates place it even lower when you exclude funding that ultimately flows to university labs affiliated with the major companies themselves.
The pattern is stark. The organizations with the most financial incentive to shape AI regulation are the ones with the loudest voice in the rooms where regulation gets written. The organizations whose entire purpose is to evaluate risks are working on shoestring budgets, often relying on individual philanthropists who may or may not renew their commitments year to year.
What lobbying actually buys
There’s a common misconception that lobbying is inherently corrupt, that it’s about backroom deals and briefcases of cash. The reality is more subtle and, in some ways, more concerning.
What lobbying buys is access. It buys the ability to be the first voice a congressional staffer hears when they’re drafting language for a bill they don’t fully understand. It buys the opportunity to frame the conversation before the conversation even begins. A 2014 study published in Perspectives on Politics by Martin Gilens of Princeton and Benjamin Page of Northwestern found that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.
Apply that finding to AI policy, and the picture becomes clear. When OpenAI’s lobbyists sit down with legislators, they’re shaping the vocabulary, the assumptions, and the boundaries of what “reasonable regulation” looks like. When independent safety researchers want to present their findings, they often can’t afford a flight to Washington.
The psychology of self-regulation promises
In my recent piece on how most companies have a permission problem rather than a communication problem, I explored how organizations develop invisible norms about what’s safe to say and what isn’t. The same dynamic operates at an industry level.
AI companies have spent the last two years making voluntary safety commitments, publishing responsible use policies, and signing pledges at the White House. These gestures create what psychologists call a “moral licensing” effect. Research by Nina Mazar and Chen-Bo Zhong, published in Psychological Science, demonstrated that people who establish moral credentials in one domain feel unconsciously liberated to behave less ethically in another.
The corporate version of this is familiar. A company publishes a responsible AI charter on Tuesday. On Wednesday, its lobbyists argue against binding legislation that would enforce those exact same principles. The charter provides the moral license; the lobbying provides the market advantage.
Anthropic is a particularly interesting case here. Founded explicitly as a safety-focused alternative to OpenAI, the company has increasingly behaved like a standard tech company when it comes to policy influence. Its lobbying spend has grown quarter over quarter. This tracks with a broader pattern: organizations that define themselves by their values are often the most vulnerable to the gap between stated principles and institutional behavior.
The asymmetry problem
There’s a structural issue here that goes beyond any single company’s choices. The incentives are fundamentally misaligned.
AI safety research generates no revenue. It produces papers, frameworks, and warnings. It asks companies to slow down, to test more, to be transparent about failure modes. Every dollar spent on genuine safety research is a dollar that produces no quarterly return, no stock price bump, no competitive moat.
Lobbying, on the other hand, can be extraordinarily profitable. A study by researchers at the University of Kansas, published in the Journal of Financial Economics, found that firms that lobby strategically can see returns of over 200% on their lobbying expenditures when favorable policy outcomes are achieved. For AI companies racing to capture a market projected to be worth trillions, spending a few million on policy influence is one of the highest-ROI investments available.

This creates what economists call an asymmetric contest. One side has every financial incentive to participate aggressively. The other side runs on idealism and grant cycles. The outcome is predictable.
Europe offers an instructive contrast
The European Union’s approach to AI regulation has been different, though far from perfect. The EU AI Act, which entered enforcement phases in 2025, was developed through a process that gave considerably more weight to civil society organizations, academic researchers, and independent policy groups.
That doesn’t mean European regulation is ideal. Plenty of critics argue the AI Act is too rigid, too slow, or too focused on categorization rather than outcomes. But the process itself embedded a different assumption: that the people building powerful technology shouldn’t be the primary architects of the rules governing that technology.
In the U.S., we’ve essentially inverted that assumption. The builders are the architects, the referees, and increasingly, the ones writing the rulebook.
What this means for the rest of us
I’ve been thinking lately about the difference between problems that feel abstract and problems that are merely delayed. This spending asymmetry feels abstract to most people. AI lobbying disclosures don’t make headlines the way a chatbot generating misinformation does.
But policy is infrastructure. It determines what’s legal, what’s required, what’s incentivized, and what’s ignored. The lobbying happening right now in Washington is shaping the regulatory environment that will govern AI systems for the next decade, possibly longer. And the people doing that shaping have a very specific set of financial interests that may or may not align with public welfare.
Daniel Kahneman’s work on judgment under uncertainty showed that humans consistently underweight risks that feel distant or abstract. We’re wired to respond to immediate, vivid threats and to discount slow-moving structural ones. The lobbying-to-safety-research spending ratio is a slow-moving structural threat. It’s the kind of thing we’ll look back on in ten years and wonder why we didn’t pay more attention.
A question worth asking
Here’s what I keep coming back to. If these companies genuinely believe their own safety rhetoric, if they truly think AI poses existential-level risks (as several of their CEOs have publicly stated), then why is their lobbying budget larger than their contributions to independent safety research?
The answer, of course, is that institutional incentives and personal beliefs operate on different tracks. A CEO can sincerely believe AI is dangerous and simultaneously lead a company whose institutional machinery works to minimize regulatory oversight. There’s no contradiction once you understand that organizations are not people, regardless of how often we anthropomorphize them.
The question for the rest of us is simpler: who do we want writing the rules? The companies with billions on the line, or an independent research community with the freedom to say uncomfortable things? Right now, we’ve made our choice, mostly by not making one at all.
Where this leaves the safety community
Several prominent AI safety researchers have begun speaking publicly about the funding gap, and a few have left academia entirely for industry positions, citing the impossibility of doing meaningful work on grants that barely cover a postdoc’s salary. This brain drain is its own kind of lobbying victory: if the best safety researchers work for the companies they’re supposed to be evaluating, the independence that makes their work credible evaporates.
There are counterexamples. Some foundations have increased their AI safety funding significantly. Some researchers have found creative ways to maintain independence while collaborating with industry. But the overall trajectory is clear, and the Q1 2025 numbers make it undeniable.
We are in a period where the most consequential technology in a generation is being governed primarily by the financial interests of the people building it. The psychological and structural forces enabling this are well-documented. The question is whether we’ll act on that knowledge or simply note it with the detached appreciation of someone watching a slow-motion collision from a comfortable distance.
The numbers are public. The pattern is visible. What happens next is a choice.