# AI's Safety Net Is Fraying

*Author: David Christopher*

*Published: Feb 13, 2026*

*Source: https://www.bankless.com/read/the-safety-net-is-fraying*

---

It's been a little over a week since Anthropic released Claude Opus 4.6, and its launch brought two things with it: an alarming risk report and the departure of the company's alignment safety lead.

The [risk report found](https://x.com/AnthropicAI/status/2021397953848672557?s=20) that Opus 4.6 knowingly supported efforts toward chemical weapon development, was significantly better at sabotaging tasks than any previous model, would act "good" when it detected it was being evaluated, and conducted private reasoning that Anthropic researchers couldn't access or see. Only the model knew its own thoughts.

[![](https://bankless.ghost.io/content/images/2026/02/data-src-image-7aaba77c-9f65-40de-ac2b-d65792022cdf.png)](https://x.com/cryptopunk7213/status/2021406390560944197?s=20)

Days later, Mrinank Sharma, Anthropic's AI Safety Lead, [announced he's leaving](https://x.com/MrinankSharma/status/2020881722003583421?s=20), saying that "throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most."

In other words, competitive pressure trumps safety work, even as the models appear to be growing increasingly misaligned. We've known this hierarchy of priorities was developing for some time, and maybe we turned a blind eye amid the non-stop technical breakthroughs arriving every week. Still, it's personally quite concerning to see this happen at Anthropic: a company whose masthead reads, "an AI safety and research company."

[![](https://bankless.ghost.io/content/images/2026/02/data-src-image-8ce428f8-89ce-4e1a-a5c1-1de5f500409c.png)](https://x.com/MrinankSharma/status/2020881722003583421)

## **AI Is Going Local, and That Changes Everything**

The tension between profit and protection matters even more when you consider where AI is headed: from chatbots you query occasionally to always-on agents that monitor your screen, anticipate your needs, and take actions on your behalf.

[![](https://bankless.ghost.io/content/images/2026/02/data-src-image-1be5db33-f0a9-428e-8a8b-e9959b93c160.png)](https://www.citriniresearch.com/i/181887062/2-latency-requirements-make-cloud-impractical-for-everyday-use)

That shift makes local, on-device deployment inevitable. Why? Because while cloud inference works fine when you're asking questions, an agent that needs to read your screen, compare prices across apps, or handle real-time translation [needs sub-200ms response times](https://www.citriniresearch.com/i/181887062/2-latency-requirements-make-cloud-impractical-for-everyday-use). A cloud round-trip adds 100-300ms before the model even starts generating: anywhere from half the budget to more than all of it, gone before inference begins.
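To see how quickly that budget disappears, here's a back-of-the-envelope sketch. Every number in it is an illustrative assumption, not a benchmark of any particular model or network:

```typescript
// Rough first-token latency budget for an always-on agent.
// All figures are illustrative assumptions, not measurements.

interface Deployment {
  name: string;
  networkMs: number;    // round-trip transit time to the model
  firstTokenMs: number; // time for the model to begin generating
}

const TARGET_MS = 200; // approximate "feels instant" threshold

const options: Deployment[] = [
  { name: "local (on-device)", networkMs: 0, firstTokenMs: 150 },
  { name: "cloud (good network)", networkMs: 100, firstTokenMs: 150 },
  { name: "cloud (bad network)", networkMs: 300, firstTokenMs: 150 },
];

for (const o of options) {
  const total = o.networkMs + o.firstTokenMs;
  const verdict = total <= TARGET_MS ? "within budget" : "over budget";
  console.log(`${o.name}: ${total}ms (${verdict})`);
}
```

The exact figures don't matter; what matters is that the network alone can consume the entire budget before a cloud model produces its first token.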
More important still are the economics: always-on agents can't send every pixel and keystroke to a data center all day. The costs become prohibitive, as we're [already seeing with OpenClaw](https://x.com/bc1beat/status/2019555730475610236?s=20). And with medium-sized models now running on consumer hardware, we'll soon see AI running locally, and not just as a hobbyist pastime.

[![](https://bankless.ghost.io/content/images/2026/02/data-src-image-1a57a17b-0cc7-42e5-ad62-2bf556ac3d15.png)](https://x.com/bc1beat/status/2019555730475610236?s=20)

The safety stakes escalate here. A chatbot you query has limited access to personal information. An ever-alert agent running on your device has access to your messages, files, browsing, location, calendar, and photos. Giving an agent access to your entire system is a significant trust decision, and many users, for the sake of convenience, will accept those permissions without fully weighing the tradeoffs.

Now recall the Opus 4.6 findings: models that reason privately where their creators can't see, that behave differently when they know they're being watched. These systems will live on your phone.

## **No One's Coming to Help**

If corporate self-regulation is failing and the models themselves are exhibiting deceptive behavior, the natural question is whether government will step in. Under the current administration, almost certainly not. And even a future administration more inclined toward oversight faces a structural problem: AI moves at corporate speed, regulation moves at government speed. An ever-widening gap.

If constraint isn't coming from the top down, it has to come from the bottom up.

## **Vitalik's Bottom-Up Vision**

This week, Vitalik [published an updated framework](https://x.com/VitalikButerin/status/2020963864175657102?s=20) for how Ethereum and AI intersect, and it reads like a direct response to the safety vacuum. He believes, and I do too, that Ethereum can provide structural, bottom-up safeguards built on the same "don't trust, verify" principles that brought many of us into crypto.

Two pillars stand out:

- **Building tooling to make more trustless and private interactions with AI possible.** With cryptography, we can build safe execution environments, client-side verification of outputs, and behavior verification for agents operating on your device. This is the foundation that makes hosting locally more trustworthy and reliable. Whether the models running locally are open or closed source, these guardrails will prove necessary to ensure AI remains constrained to whatever zone and level of access we grant it.
- **Ethereum as an economic action network for agents.** Vitalik envisions Ethereum as the layer that lets agents interact economically (moving capital, hiring other agents, posting security deposits) without routing through a corporate intermediary. But for me, the real value is access control. Loading an agent's wallet with $50 is fundamentally different from giving it $50 worth of access to your bank account. One is siloed, permissioned, and bounded; the other opens a door that can't fully be closed. Ethereum's wallet architecture lets you definitively control what an agent can access and how much capital it works with, reducing the trust required to let agents take economic action on your behalf. A minimal sketch of that pattern follows below.

[![](https://bankless.ghost.io/content/images/2026/02/data-src-image-3d84e745-2ecf-478f-86d4-b53001309ee8.png)](https://x.com/VitalikButerin/status/2020963864175657102?s=20)
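To make the bounded-wallet idea concrete, here's a minimal sketch using ethers.js v6. The RPC endpoint, environment variable, and funding amount are all hypothetical, and a real deployment would more likely use a smart account with on-chain spending policies than a raw keypair:

```typescript
import { ethers } from "ethers";

// Hypothetical RPC endpoint; swap in a real one.
const provider = new ethers.JsonRpcProvider("https://rpc.example.invalid");

// Your main wallet. Its key is never shared with the agent.
// OWNER_KEY is a hypothetical environment variable.
const owner = new ethers.Wallet(process.env.OWNER_KEY!, provider);

// A fresh keypair for the agent: it can spend only what this wallet holds.
const agentWallet = ethers.Wallet.createRandom().connect(provider);

// Fund the agent with a hard cap. The bound is enforced by arithmetic,
// not by trusting the agent: its key simply cannot reach other funds.
async function fundAgent(capInEth: string): Promise<void> {
  const tx = await owner.sendTransaction({
    to: agentWallet.address,
    value: ethers.parseEther(capInEth),
  });
  await tx.wait();
  console.log(`Agent ${agentWallet.address} funded with ${capInEth} ETH`);
}

fundAgent("0.02").catch(console.error); // roughly the "$50 wallet" idea
```

That bound is the whole point: a misbehaving agent's blast radius is capped at the deposit, a property a bank login can't offer.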
## **The Gap, and the Work Ahead**

But while there are clear synergies and solutions to be built, applying Ethereum's toolkit to AI safety is, right now, a hobbyist craft. Cryptographic guardrails for AI will only be as good as our ability to get them to end users. We're also fighting an uphill battle on public sentiment around both crypto and AI, so showcasing the potential of the two together will likely be met with (unjust) opposition. I share this to say that, once we get to distribution, we'll have to tread lightly.

And if anyone's train of thought has gone to "just don't use AI," the thing about AI trending local is that it will likely ship on all our devices in the years ahead, a feature you can toggle on and off like Siri. Sure, companies will institute their own guardrails, but I have little faith in them not moving the goalposts as time progresses. Thankfully, cryptography is not so flexible, and it can impose real constraints.

Further, work on these tools continues to advance. Teams like EigenLayer are developing solutions such as deterministic, verifiable inference. The Ethereum Foundation as a whole is focused on accelerating zero-knowledge (ZK) proofs. Lastly, consider the timing of [Tomasz Stańczak's departure](https://www.bankless.com/tomasz-stanczak-steps-down-as-ethereum-foundation-co-ed) from the Foundation. He stepped down yesterday, amid this week's activities, citing an explicit desire to return to hands-on building, specifically around "agentic core development and governance."

Still, none of it matters if it stays in the hands of developers and power users. Ethereum undeniably offers the most honest and immutable guardrails against AI we have, and the ecosystem is mobilizing around this mission. Yet at some point we must close the distance between the people building these tools and the people who need them, and that will be the work that matters most.

[![](https://bankless.ghost.io/content/images/2026/02/data-src-image-dd4802d4-fb04-49e6-8c40-c4ea45dfb996.png)](https://x.com/VitalikButerin/status/2022318344288792618)

---

*This article is brought to you by [Figure](https://www.bankless.com/sponsor/figure-1767965878?ref=read/the-safety-net-is-fraying)*