Podcast

Ethereum Beast Mode - Scaling L1 to 10k and Beyond | Justin Drake

Justin Drake unveils “Lean Ethereum,” a bold blueprint to make the base layer dramatically faster and cheaper, without turning it into a datacenter chain.
Nov 12, 2025 | 02:17:51

Inside the episode

Ryan:
[0:02] Justin, what is lean Ethereum at the highest level?

Justin or David:
[0:07] Sure. So lean Ethereum is the conviction that we can use this very powerful technology called SNARKs, this magical cryptography to bring Ethereum to the next level, both in terms of performance and scale, but also in terms of security and decentralization. And I call the former beast mode and the latter fort mode.

Ryan:
[0:29] Wait, okay. Beast mode and fort mode. So what's beast mode and what's fort mode?

David:
[0:34] Yeah. So part of beast mode is this vision of scaling the L1 to one giga gas per second. I call it the giga gas frontier or the giga gas era, as well as dramatically increasing the data availability so that we can do one tera gas per second on the L2. So that's 10,000 TPS on the L1, 10 million TPS on the L2s. And if you were to summarize this in one sentence, it's basically having enough throughput for all of finance.

Ryan:
[1:07] And so beast mode is on the execution layer, I suppose, which we'll further define a little bit more. But that's where the block space and the gas transactions and all of that activity and smart contracts, all of that activity happens on the execution layer. It's DeFi. DeFi, payments, yeah.

David:
[1:27] Exactly. It also includes the data availability layer for the L2s. And that gives us, roughly speaking, a 1,000x amplifier relative to what we can do with the L1.

Ryan:
[1:37] Okay. And even the data availability layer for the L2s, what do the L2s do? Execution. So it's all about execution in beast mode. Okay. Fort mode. What is fort mode?

David:
[1:47] Fort mode is about having totally uncompromising security. Best-in-class uptime, best-in-class decentralization, post-quantum security, making sure that the MEV pipeline is cleanly decoupled from the validators, and also having best-in-class finality in a matter of seconds as opposed to what we have right now, which is in a matter of 10 minutes, 10 to 20 minutes. People have called this.

Ryan:
[2:13] This is the consensus layer, right? So consensus layer is fort mode, beast mode is execution layer. Exactly.

David:
[2:20] There is a little bit of a tie-in, in the sense that the technology that we're going to use to solve beast mode also allows us to solve fort mode.

David:
[2:29] And the reason is that the SNARKs allow the validators to verify very, very small proofs. And that really helps with decentralization because the barrier to entry to becoming a validator from a hardware standpoint is extremely low.

Justin or David:
[2:44] And beast mode and fort mode, I feel like these are just offensive and defensive. Like execution, beast mode is, Ethereum is being aggressive. We're going forward. We're pushing forward with beast mode. And then fort mode is kind of what Ethereum has always done, and we are just continuing to do it, which is kind of what we call World War Three resistance: everything in the world could go wrong, but Ethereum will still be producing blocks because it's that resilient. It's the bunker coin. So offensive versus defensive is maybe a way to portray this.

David:
[3:18] Exactly. And with SNARKs, we basically have permission to dream bigger dreams on the aggressive scaling and performance.

Justin or David:
[3:27] It's worth highlighting, Justin, that Ethereum has never really done beast mode before. We've never really gone on the offensive. Like lean Ethereum is kind of the first time that we can actually credibly say, yes, Ethereum is scaling not marginally, but aggressively. Is that correct?

David:
[3:50] I mean, I would argue that the data availability that we've been working on for many years now is part of beast mode for unlocking the L2s. But the L1 has remained stagnant.

Justin or David:
[4:02] At the L1, beast mode at the L1. Correct.

David:
[4:04] So four years ago, the gas limit was 30 million gas, and today it's 45 million gas. So in four years, we've only scaled the gas limit by 50%, underperforming even Moore's law and hardware improvements.

Justin or David:
[4:20] Yeah, not very beastly.

David:
[4:23] But now again, we have the permission to be extremely ambitious now that the technology is reaching maturity.

Ryan:
[4:29] Okay. Last four years, we've gone from 30 million to 45 million on the Ethereum layer one. As I understand it though, in the early days, maybe this is back in 2016, it was like 6 million gas limit. So we did go from something lower to 30 million. And the way we accomplished that is just like raw engineering. But to call that scaling on the L1 anyway, beast mode, not quite right. I mean, what did we do? This is like a three to five X, something like that. And it took 10 years of Ethereum history. But when we say gas and gas limits and that sort of thing, what we're talking about is transactions per second, right? Or at least it's a proxy for transactions per second. We're going to be referring to block size throughout this episode a lot. So can you just define what block size actually is and how that relates to transactions per second and just like overall scaling?

David:
[5:32] Absolutely. So the simplest transaction possible is called a transfer and it consumes 21,000 gas. You don't have to remember this number, but on average, we're doing more complex transactions.

David:
[5:44] For example, we have DEX swaps, and those consume 100,000 gas. So if you have one giga gas per second, that's 1 billion gas per second, and you divide that by your average transaction of 100,000 gas, you get the 10K TPS. And continuing with our theme of powers of 10, there's roughly 100,000 seconds in a day, and there's roughly 10 billion people on Earth. And so if you were to denominate per human, what you get is 0.1 transactions, per human per day, which in my opinion is a great start, but it's just not enough for all the finance, right? As humans, we make more than one financial transaction every 10 days.
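To make the powers-of-ten arithmetic above concrete, here is a quick back-of-the-envelope sketch in Python; the constants are simply the round numbers quoted in the conversation, not protocol parameters:

```python
# Back-of-the-envelope arithmetic using the round numbers quoted above.
GAS_PER_TRANSFER = 21_000            # simplest transaction: an ETH transfer
GAS_PER_AVG_TX = 100_000             # rough average transaction, e.g. a DEX swap
GIGAGAS_PER_SECOND = 1_000_000_000   # the 1 giga gas per second L1 target

tps_at_gigagas = GIGAGAS_PER_SECOND / GAS_PER_AVG_TX   # ~10,000 TPS

SECONDS_PER_DAY = 100_000            # rounded (actual: 86,400)
HUMANS = 10_000_000_000              # rounded (~10 billion people)

tx_per_human_per_day = tps_at_gigagas * SECONDS_PER_DAY / HUMANS

print(f"{tps_at_gigagas:,.0f} TPS -> {tx_per_human_per_day:.1f} tx per human per day")
# 10,000 TPS -> 0.1 tx per human per day
```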

Ryan:
[6:28] And there's also the robots that are coming on chain too.

David:
[6:31] Absolutely. A great amplifier. And so what I'm hoping we can achieve in terms of scale in the longer run is 10 million transactions per second. That's the tera gas era. And that unlocks a hundred transactions per day per human.

Ryan:
[6:46] Okay, but so the one giga gas, the hope is to achieve that, and the plan in lean Ethereum is to achieve that, on Ethereum layer one. Am I correct in that? And then the tera gas is in the Ethereum layer two ecosystem, by committing, you know, data to Ethereum's scaled-up data availability layer. Correct.

David:
[7:08] So tera gas is the aggregate of all of the roll-ups combined. And you can think of it, roughly speaking, as being a thousand copies of the L1, each doing one giga gas.

Ryan:
[7:19] Okay. Where are we now? So if we're trying to get to giga gas on the L1 and TerraGas on the L2, you mentioned some numbers of where Ethereum is. Was it 60 million gas limit? What's the current gas limit and how far are we away?

David:
[7:36] Yeah, so the gas limit is a little confusing because we have slots of 12 seconds. You have to re-denominate everything down to the second. But at L1, we're about 500x away from that goal. So between two and three orders of magnitude. And to a very large extent, the primary bottleneck that we have today is the validator. So we set ourselves as a constraint, as a goal, to have maximum decentralization. And we're not allowing the validators, or at least we're not assuming that the validators have powerful hardware. They're running on laptops. The meme has been the Raspberry Pi. And by removing this bottleneck, we can easily, in my opinion, get a 10x, a 100x. And with sufficient work, we can get this 500x that gets us to one giga gas per second.

Ryan:
[8:28] Okay. And so how many giga gas per second is Ethereum right now? So you said it's just because we're translating multiple things. You said... It's two megagas per second. It's how much?

David:
[8:39] Two megagas per second.

Ryan:
[8:42] Okay. So it's two megagas per second right now.

Justin or David:
[8:45] And we want 1,000 megagas.

Ryan:
[8:48] Okay, that's why we're 500x off. Another way to translate that is we have 20, around 20 transactions per second, maybe, for those simple transactions on Ethereum. And we want to be 10,000. So again, that's 500x off right now. That's what beast mode is saying. We're going to do 500x on Ethereum layer 1, correct?
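As a sanity check on the 500x figure, here is the same translation in Python, using the ~2 megagas per second number quoted above as today's effective throughput and the ~100,000-gas average transaction from earlier:

```python
CURRENT_MEGAGAS_PER_S = 2        # today's effective L1 throughput (quoted above)
TARGET_MEGAGAS_PER_S = 1_000     # 1 giga gas per second goal
GAS_PER_AVG_TX = 100_000         # average transaction from earlier in the episode

scaling_factor = TARGET_MEGAGAS_PER_S / CURRENT_MEGAGAS_PER_S        # 500x
current_tps = CURRENT_MEGAGAS_PER_S * 1_000_000 / GAS_PER_AVG_TX     # ~20 TPS

print(f"{scaling_factor:.0f}x away from the target, roughly {current_tps:.0f} TPS today")
```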

David:
[9:08] That is my hope. That is the vision that I'm trying to put forward. And slowly every week with more and more developments on the ZKVMs, people are starting to believe this vision that it is indeed possible.

Ryan:
[9:20] Okay, that's interesting. So we're going to talk about vision and execution throughout this episode in many places. But actually, before we do, since we're still setting the context for this, I think some people will be scratching their heads and saying, well, why are you, Justin, talking about, and why is Ethereum in general talking about scaling the L1 at all? I thought the plan was the roll-up centric roadmap where Ethereum layer one stays pretty slow. Yeah, maybe it scales up a little bit as Moore's law improves and as engineering improves a little bit, but it's not going to beast mode because the beast mode plan, I thought that was already defined as the roll-up centric roadmap. And all of the scale in Ethereum happens on L2s. Well, you're saying, no, we're going to continue scaling the L2, but we're also scaling the L1. And some people will be scratching their heads and asking, why? Is this a change? Is this a pivot?

Ryan:
[10:22] Why are we trying to scale the L1 in the first place?

David:
[10:25] I'd say, yes, it is a change. It is a pivot, because we now have technology that allows us to scale while preserving decentralization. So what came first was the requirement for decentralization. And then we tried to do the best with that requirement. And that's the status quo that we have. But now that we have a new technology, we can start rethinking the kind of scale that we can have at L1. So the first answer is just because we can. The second answer is that if we want Ethereum L1 to be a hub, meaning the place where assets are minted, where bridging happens, where forced withdrawals happen, where a lot of the high-value liquidity happens, then we need L1 to have a minimum amount of scale. And the 0.1 transactions per human, you should think of it as being the highest-density economic transactions that can make it. Everyone else will largely be priced out, potentially. So you can think of it as settlement transactions, as minting transactions, as transactions where you enact an escape hatch: you have $100,000 stuck on an L2, the sequencer has gone offline or something, well, you can just force withdraw and you'll be willing to pay the 10 cents or whatever on L1 in order to free your funds.

Ryan:
[11:51] So those are the two reasons. Going back to the first then, because we can, this implies we couldn't before, and we'll have a whole conversation about SNARKs and this magic cryptography that has emerged, let's say, and hardened over the last decade or so. But the idea that we're scaling the L1 now because we can means that previously we couldn't. It almost means that scaling the L1 would have been the first option if we could. Like, is scaling the L1 better than scaling the L2? And if we could have, back in, say, 2018, meaning if SNARKs had been unleashed then, would that have been the default path rather than L2s?

David:
[12:39] I think so. I think had we been able to scale, let's say, five years ago with ZKVMs, that's probably what we would have done, but it would not have been sufficient. We still would have had to go down the path of data availability to unlock the millions of transactions per second that we need to welcome all of finance. And so in some sense, there's a different ordering that would have happened, but I think we still would have gone down the difficult path of working with SNARKs for execution and working with data availability sampling for bandwidth. And you can kind of think of these two tools, SNARKs and data availability, as being two sides of the scaling coin. It's basically bandwidth and compute, which are the two primitive resources that a blockchain will consume.

Ryan:
[13:31] Some people will still be confused by that answer, Justin, because they will say, wait, wait a second, why couldn't you scale Ethereum in 2018? And they'll point to other layer one chains today that are indeed scaling. I mean, you've got some L1s that are saying they can scale to 10,000 transactions per second and beyond. Some are in the million TPS range. I think Solana is doing thousands of transactions per second at peak, at least, and they promise far more. So was this a skill issue? Why couldn't Ethereum scale?

David:
[14:09] Yeah, it's a good question. So a lot of the high performance L1s have relatively poor decentralization. So on the order of 100 validators. Monad, for example, roughly speaking, ballpark, they have 100 validators, say 100 validators; BNB Chain, even fewer validators.

David:
[14:32] And not only do they have a small number of validators, but also the barrier to entry to becoming a validator is to have a server in a data center, because you need to store a lot of state, you need to have a very reliable and high-throughput internet connection, you need to have lots of RAM, you need to have fast CPUs. And that's the kind of thing that's more difficult to get at home. And it's also the strategy that Solana has taken. So Solana on average is about a thousand user transactions per second, and they have less than a thousand validators. And if you look at the map of where these validators are, it's almost entirely in data centers. And the vast majority of them, more than 50% of them are, I believe, in two data centers that are just a few dozen kilometers apart from each other. In Europe. So it's highly, highly concentrated. And that's not something that we have tolerated on Ethereum.

Ryan:
[15:40] Can you name the constraint then? So it seems like Ethereum is not tolerating something, not tolerating validators running inside of data centers. So there is some self-imposed constraint on the design here. Name that. What is that constraint versus like, why can't we just capitulate and start running things in data centers like some of the other chains have started to do.

David:
[16:05] We care about home internet connections and commodity hardware, like a laptop. And part of the reason has to do with liveness. So recently we had an AWS outage, various chains went offline. That's not what we want with Ethereum. But we're kind of very paranoid to the point where we want to have a security model where even if all data center operators in the world decide to attack Ethereum simultaneously, it still has uptime. And there's roughly, call it 100 data center operators in the world. So this is not totally far-fetched. And I guess another difference is that, you know, we're trying to have the best in class uptime, right? We've had 100% uptime since Genesis. And part of that is not cutting corners. And I think what some of these other chains have done is they've been willing to cut corners in order to get higher performance.

Justin or David:
[17:08] When we talk about going from doing a 500X in terms of gas throughput from where we are now at two megagas a second to a thousand megagas a second, a 500X is not just something that you can engineer. The reason why we are doing this today is because Ethereum is going through

Justin or David:
[17:28] something closer to an evolution rather than like an engineering upgrade. And some of the chains that we just talked about have always been like engineering first. And that's where some of the performance benefits have come from. Like Solana has been very engineering heavy and they have just produced well-engineered nodes and execution clients. And then where is that software best expressed in its best form? Well, in a data center: put the heavily engineered things in a data center. And that's where a lot of the modern scaling chains of 2020 through 2025 have gotten some of their throughput.

Justin or David:
[18:04] Now, Ethereum has been patient, but in order to get that 500x, it's not really an engineering thing. It's more of like an evolution. A new path has opened up with some of the stuff that you've talked about, Justin, with the whole ZK era, where it's not necessarily just engineering, but it's actually cryptography that is opening up a path to do something like a 500x. And that's always kind of been in the back pocket of Ethereum from day one. That's always been like the theoretical scaling strategy. And in recent years, I think you and people in the Ethereum Foundation would be like, okay, this path is now clear to us, and now we are ready to take it. That's kind of my diagnosis of the last few years. Is that right?

David:
[18:48] Yeah, that's right. Really, the key unlock here is just cryptography. And in terms of the requirements that we have for the cryptography, those are also extremely high. So one of the things that we care about, for example, is diversity. This is complicated cryptography, and we want to have the same kind of diversity that we enjoy today at the consensus layer and execution layer with the consensus teams and the execution teams. So I'm hoping that we can have five different ZKEVMs with uncorrelated failures. Another strong requirement for the cryptography is called real-time proving. There's this idea that when a block is produced, the proof for that corresponding block needs to arrive before the next block. So the latency needs to be under one Ethereum slot, which is under 12 seconds.

David:
[19:43] Another requirement that we have beyond the security and the latency is the power draw. So going back to this comment around the data centers, we don't want the provers to themselves be in data centers because now you've introduced a new choke point. And so what we are hoping to have is on-prem proving. And by on-prem, we mean on-premises in a home, in a garage, in an office. And the specific number that we have in mind is 10 kilowatts.

David:
[20:20] So just to give you an order of magnitude, a toaster will consume one kilowatt. So it's the equivalent of 10 toasters. And it's also the same as a Tesla home charger that will draw roughly 10 kilowatts. And so if we can have millions of home chargers around the world, then it's reasonable to have this requirement for the provers. And one thing that is worth stressing is that unlike consensus, which requires half of the participants to behave honestly, we only need one honest prover for the whole thing to work out. So that's why there's very different hardware requirements on the consensus participants. Here, we want to have the lowest barrier to entry as possible, think, you know, a Raspberry Pi or a laptop, because it's a 50% honesty assumption. But for the provers, it's a one of n assumption, and it's okay to bump it up.

Ryan:
[21:18] Okay, so we're starting to unpack almost the beast mode layer, the execution layer with some of those components. I personally am still not ready to get there, actually. So I still have some questions. What you just described is a stack that allows us to still do the blockchain

Ryan:
[21:36] validation or verification outside of a data center from a home internet connection. And I still kind of want to know why, or like what use cases are important for that. You said part of this is about liveness and uptime. And indeed, Ethereum has had 10 years of uninterrupted uptime. And that's fantastic. But there are other properties that decentralization and uptime kind of imbue. One of those, quite famously, with Bitcoin and Ethereum, as you've come and argued on Bankless, is the property of having your cryptocurrency be a store of value asset. So Bitcoin is still on the cryptography 1.0. It's not doing any SNARKs thing. That's not really in the roadmap. But it has maintained very low throughput, very low TPS, but also very low node requirements. So you can run a Bitcoin node from your house. It is not a data center chain.

Ryan:
[22:40] Similar to Ethereum. But I just kind of want to know why. Because for Bitcoin, they're very clear on why. It's because Bitcoin is a store of value. It's because it's a digital gold and everybody needs to access it. Now, we've argued on Bankless that at 10 transactions per second, some of that access will probably take the form of MicroStrategy and ETFs. And you won't be able to do things in DeFi that you can if you're actually scaling your base layer. I don't want to rehash that. But I do want to ask the question of why? What use cases on the Ethereum L1 are important? Vitalik wrote a blog post talking about slow DeFi. Is that one of them? Is the store of value use case? I'll just add one other dimension. We've had people come on the podcast and say, the Ethereum roadmap is flawed because they obsess over decentralization. They obsess over having nodes being able to run in somebody's home. If you remove this obsession, you could scale a lot faster, and they don't understand the reason for the obsession. So what use cases is Ethereum, like, over-provisioning itself for? What use cases are most conducive to this decentralization, I'll call it obsession, constraint that Ethereum has self-imposed? Is it DeFi? Is it store of value? What is it?

David:
[24:00] It's store of value. It's moneyness. And you can look at it empirically speaking. You have the number one money, Bitcoin, which is the exact opposite of beast mode, right? It's a piece of crap, right? It's like, you have a...

Ryan:
[24:18] Wait, what's a piece of crap?

David:
[24:20] Bitcoin the asset? Blockchain, the chain, sorry. The blockchain itself.

Justin or David:
[24:25] Which even Bitcoiners will say, that the Bitcoin blockchain is an encumbrance upon BTC, the asset. That's actually aligned with Bitcoiner philosophy.

David:
[24:33] And yet, it's a $2 trillion asset. And then you have something in between, Ethereum, which is trying to get some performance and some robustness and credible neutrality. And we're a $500 billion asset. And then you have something that leans entirely on beast mode, like Solana, and they're a $100 billion asset. And the newer chains that are leaning even more on beast mode have lower valuations. And so I think the moneyness requires this memetic premium. And the market has empirically told us that robustness, fort mode, uptime, credible neutrality, moving slowly, having these long-term guarantees are extremely important.

Ryan:
[25:27] It makes sense to me that store of value would be the primary use case for something like a Bitcoin or Ethereum, because if you just think about it from a user perspective, you want to put your value in a place where you can go into a cave, you know, for 10 years and come out and it's still there. That's store of value. You're actually storing value across time.

Ryan:
[25:50] And Bitcoin kind of has gotten this right. But I think when you're talking about store of value, it's also bearer instruments, you know. Like, so for instance, I don't know if I care about USDC on Ethereum as a store of value, to put that on Ethereum and go into a cave for 10 years and come out. I don't know what could happen to USDC. You know, Jeremy Allaire, I'm sure it's in great hands, but whatever, laws could change. You know, it could de-peg, something bad could happen to USDC. So the store of value use case is really like centered around the crypto native assets on Ethereum, chiefly ETH. Like Ether is the asset primarily for store of value-ness on top of Ethereum. So when I think about the tangible use cases, and you say kind of store of value, the things that require max decentralization, it's probably Ether the asset. And then maybe a handful of kind of DeFi primitives. That's what I think. And it's not so much the real world assets, except as they act as a trading pair for something like Ether. That's how I see it. But this is why I'm curious to understand how you see it,

Ryan:
[27:05] Justin, and how people within the Ethereum Foundation see it. What are the apps that are going to be most important on the layer one?

David:
[27:14] I mean, different people have different opinions within the Ethereum Foundation. But I would agree with you that Ethereum's most important application is being money. And that's from which all of the applications derive. If you want to have loans, exchanges, if you want to have prediction markets, it's all to a large extent...

David:
[27:35] Predicated on having this strong money. And this is especially true with these power law distributions and winner-take-most situations. I've tried to argue that a single chain like Ethereum can handle all of finance. To a large extent, the reason why we have so much fragmentation at L1 is because Ethereum hasn't grown fast enough to absorb all of the innovation. But now we have a credible roadmap to just absorb the entirety of it. And when you look at monetary premium, it's winner take most. You need to somehow convince society that your asset is the most legitimate one. And if you look at competitors, for example, you look at SOL the asset.

David:
[28:30] That's just been disqualified for being money, in my opinion. Right? Like the fact that it had 10 outages over a handful of years just disqualifies it immediately. And so the most important thing is just staying long enough in the game and not to get disqualified. And now we have these two, basically, assets that are competing, Bitcoin and Ether. And I think in a few years, Bitcoin will get disqualified because of its blockchain as well. Not because it failed beast mode, but because it failed fort mode. It will not be able to secure itself with the dwindling issuance.

Ryan:
[29:11] A dwindling issuance is kind of the bear case for Bitcoin then?

David:
[29:15] Yes. If you look at transaction fees right now, it's about half a percent of all of the revenue that miners make. So 99.5% comes from issuance. And we know that that fraction is going to zero, with the halving every four years. And right now, Bitcoin is secured by three Bitcoin per day of fees. Three Bitcoin per day is just not enough to secure Bitcoin.
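Expressed as simple arithmetic (the figures below are the rough ones quoted in the conversation, not live chain data), the point is that Bitcoin's security budget converges on the fee component alone:

```python
# Rough figures quoted above, not live chain data.
FEES_BTC_PER_DAY = 3       # ~3 BTC/day of transaction fees
FEE_SHARE = 0.005          # fees are ~0.5% of total miner revenue

total_revenue = FEES_BTC_PER_DAY / FEE_SHARE         # ~600 BTC/day
issuance = total_revenue - FEES_BTC_PER_DAY          # ~597 BTC/day from issuance

# Issuance halves roughly every four years, so the long-run security budget
# converges on the fee component alone.
for halvings in range(5):
    budget = issuance / 2**halvings + FEES_BTC_PER_DAY
    print(f"{halvings} more halvings: ~{budget:,.0f} BTC/day securing the chain")
```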

Justin or David:
[29:39] So we've talked about the dichotomy between beast mode and fort mode. And now I do want to kind of like maybe name our biases, just because me, Ryan, Justin, we all came into crypto, into Ethereum, 2017 and earlier. And that's truly when decentralization, fort mode, was the game. And that's kind of like our generation of crypto. The newer generations of crypto truly value beast mode far more than fort mode. I think like anyone that has come into crypto 2021 or later probably has a disproportionately low amount of their portfolio in Bitcoin versus people that came before 2021. And something that I want to name: even though we were talking about how, yeah, the whole idea is money, and fort mode is your entry ticket into being inside the competition of money, nonetheless, user preferences post-2021-ish have really preferred beast mode, and in transaction volume and transaction fees the pendulum has slowly shifted towards chains that go fast, chains that do beast mode.

Ryan:
[30:45] And so while- I would add, David, regardless of constraints, right?

Justin or David:
[30:49] Regardless of constraints.

Ryan:
[30:50] No home validator type of constraint.

Justin or David:
[30:52] Yes. And so I don't want us to talk down too much about beast mode, because actually that's what Ethereum is trying to do in 2025 and beyond: we feel like we've covered fort mode and now we can unlock beast mode. And beast mode has a lot of value. That's where you get global composable finance all on one chain. That's where you fit all of humanity, all of finance, all on one chain. That's where you get user adoption and all of the great things that come with a smart contract chain. And so, while I'm in this camp, we're all in this camp of, like, fort mode is the cool thing that blockchains really uniquely provide to the world. Nonetheless, user preferences have been shifting away from fort mode and into

Justin or David:
[31:37] beast mode as blockchains become mainstream adopted. And now Ethereum's strategy is to aggressively penetrate that market with some of the technology and the strategies that we're going to talk about in the remainder

David:
[31:49] Of this episode. Yes. So I do agree that Ethereum is trying to chart a new territory for itself with beast mode. But one thing that I want to highlight is that if you have beast mode without fort mode, it's very shallow activity. And one way to actually measure this is to look at the meme coins on Solana. That's where a lot of the activity happened. And, you know, we have over 10 million meme coins on Solana and the aggregate market cap of all the meme coins on Solana is less than $10 billion, which is an absolute drop in the bucket. So, yes, there was a lot.

Justin or David:
[32:24] Is $10 billion a drop in the bucket?

David:
[32:26] So, you know, relative to stablecoins, for example, a single stablecoin on Ethereum L1, Tether, is over $100 billion. So that's just one use case, 10 times bigger. You could look at loans on Aave, that's also tens of billions of dollars. But you can also look at, you know, the Ethereum market cap, that's 50 times bigger. There's one asset 50 times bigger than 10 million assets combined. Maybe we'll talk.

Ryan:
[32:53] More about stablecoins later in the episode when, you know, because I do have this outstanding question of to what extent do stablecoins actually need the beast mode with the decentralization and security guarantees of Ethereum L1? But let's reserve that question for later.

Ryan:
[33:08] And let's take this in the sections that you had laid out. So back to what you were saying earlier in the episode, when I asked the question, what is lean Ethereum? You said it's Snarks. That's the magic cryptography that we've unlocked. It's beast mode, which is scaling Ethereum on the L1 and the L2 from a transactions per second perspective. And it's fort mode, which is defending decentralization, the lowest barrier to entry possible for someone to run a node. So let's take the rest of the episode and get into each of these sections.

Ryan:
[33:43] SNARKs. Okay, this is magic cryptography. Justin, we had you on an episode. I think this is maybe the first episode that you did with Bankless. This must have been about four years ago. And you had this principle that has stuck with me since, which is basically that the true way blockchains scale is with cryptography. That's the first choice. So that's kind of how Bitcoin was able to do what it's doing. And then when cryptography fails, you go to economics and crypto economics. But the gold standard is, if you can scale with cryptography in any kind of protocol design or mechanism design, then you do scale with cryptography. Both Bitcoin and Ethereum were based on cryptography that has been popular for a while. I don't know, I'll call it the cryptography that had Lindy in the 2010s, right? That's what Ethereum has been based on so far. Now enter SNARKs. Give us a history of the cryptography that Ethereum is based on today and this SNARKs-based upgrade. And what is this new cryptography?

David:
[34:51] Yeah, so since Bitcoin, the cryptography that we've been using is extremely primitive. And it's two different pieces of cryptography. The first one is called hash functions. That's the thing from which you can build blocks and chains. It's the thing that you use to Merkleize the state. And basically, it's lots of Merkle trees. And then the other piece of cryptography is called digital signatures, or just signatures. And that allows you to authenticate ownership and sign transactions.

David:
[35:25] And nowadays, looking back in 2025, this is Stone Age cryptography relative to what we have today. And this new primitive, SNARKs, really unleashes a whole new design space for blockchains. And in particular, they allow us to solve this dilemma, or some people call it a trilemma, between scale and decentralization. We really can solve this age-long trade-off by basically having the validators verify these short proofs, and these proofs can have behind them as much execution as the block builders and the chain can absorb. So if you look at the two basic resources that we have to scale, the first one is execution. We have SNARKs for that. And the other one is data. And here we have data availability sampling.

David:
[36:31] Now, in addition to wanting to have these two unlocks from a scaling perspective, we also want to make sure that the cryptography that we have is long-term sustainable. And what that means in our context is post-quantum secure. So today, for the data availability sampling, we've taken a little bit of a shortcut. We've deployed cryptography called KZG, which is not post-quantum secure. And so we need to have some sort of a plan to upgrade that. And this is where SNARKs also help beyond just scale. They also help with the post-quantum security at the data layer.

Ryan:
[37:19] I think back four years ago, you were talking about SNARKs, and the term you used, in fact I think we titled the episode with it, is moon math, right? It was kind of this emerging moon math. And it's been out for a while, just for people who are not cryptographers, okay? I mean, we don't need to get into the details of what SNARKs are and what they can do. I think for a lot of people listening, it's sufficient for them to understand, oh, this is moon math, and it's been used in practice for a while, and it's reasonably safe. When we say SNARKs and ZK, because you used the term ZKEVMs earlier, is ZK and SNARKs, are they one and the same? Or like, how come we use ZK sometimes and now you're using SNARKs today? Like people maybe don't understand the differences between these terms.

David:
[38:06] Sure, so the technical term is SNARK. The S stands for succinct. You can think of it as being small. And then the NARK part, non-interactive argument of knowledge, that's just fancy mumbo jumbo for proof. So a SNARK is nothing more than a small proof. Now, it turns out that this technology, SNARKs, also gives us for free another property called zero knowledge, which is relevant in the world of privacy. But we're not using that property for scaling.

Ryan:
[38:40] So how can we call them ZKEVMs?

David:
[38:42] It's so confusing.

Ryan:
[38:43] They're not private.

Justin or David:
[38:44] We don't do a very good job of naming it in this industry.

Ryan:
[38:46] Should they be called Snark EVMs? Really?

David:
[38:49] They should be called Snark EVMs, yes.

Ryan:
[38:51] Okay. We won't win that fight today. We're not here to play that game.

David:
[38:55] It's a lost fight.

Ryan:
[38:56] How long have snarks been out there? So all the first generation chains today, all the chains that we have in production, Bitcoin, Ethereum, that's all been using more primitive cryptography, as you said, hash functions, digital signatures. There was this experiment called Zcash. And the Z, I think, stands for zero knowledge or snarks, right? They use some of that tech. And that's been live since, I don't know, 2014, something like that. Zcashers, correct me on the dates here. I guess, how robust is this technology? How lindy is it? How in use is it? Are we sure we're ready for snarks for primetime now on a chain that secures almost a trillion dollars in value?

David:
[39:36] Right. So Zcash was launched, I believe, in 2016, nine years ago. And when you look back, they were absolute pioneers, but they were also degens, like cryptographic degens. They deployed the cryptography, which was, you know, it was like building rockets, right? It could explode in their face. And actually, there was a moment in time where it did explode, right? I don't know if you remember, but like a few years ago, Zcash had this critical bug where anyone could mint an arbitrarily large amount of ZEC tokens.

Ryan:
[40:09] Right, and because it's private, we don't actually know if that happened or not, if the bug was exploited or not, right? We don't know for sure.

David:
[40:17] Exactly. And so...

David:
[40:19] And one of the big things that we have to do is solve the security issue. And there's broadly two solutions that are satisfactory for Ethereum. The first one is to have diversity of SNARKs. So you have five different SNARK providers, and you accept a block as valid if, for example, three of these SNARKs return valid, and you can move on, in a very similar way to how we have execution and consensus layer diversity. The other way forward is what we call formal verification, where we just pick a single proof system, and we audit it to the point where we have high guarantees that there's literally zero bugs. So it's a little bit like writing a mathematical proof of correctness of your entire SNARK implementation. Now, unfortunately, we're a little too early for that end-to-end formal verification, but we've started the work. So last year, we announced a $20 million formal verification program.

David:
[41:26] Which is led by Alex Hicks. And a lot of progress is being made. And my expectation is that this decade, maybe in 2029, 2030, we will have an end-to-end formally verified snark, which has zero bugs. Now, the other thing that I want to mention is that Zcash had an extremely simple use case, which is just transfers.

David:
[41:50] And what they did is that they wrote so-called custom circuits. So they were getting their hands very, very dirty with these snarks. But the modern approach.

David:
[42:04] is what are called ZKVMs, which is basically a way to make use of the power of SNARKs without being a SNARK expert. So a typical developer, like a Rust developer, for example, can write typical programs and compile them to the ZKVM. And this is actually one of the requirements in order for the SNARK technology to be mature enough for the L1. And the reason is that we want to take the existing EVM implementations and compile them to the ZKVM. So for example, revm, which is the EVM implementation within Reth, which is one of the execution layer clients, we take that, we compile it to the ZKVMs. We can take evmone, which is another implementation, and compile this; there's Ethrex, there's ZKsync OS, and there's also implementations that are written in Go, for example, Geth has an EVM implementation. Nethermind has an implementation in C#. And we want to take as many of these implementations as possible and compile them to the ZKVMs. And that is a very recent trend. It's something that has only really existed for one or two years.

Ryan:
[43:19] But we feel fine, I guess, relying on SNARKs as a core technology for Ethereum at the L1 layer. I mean, they're not as mature as hash functions, which have been around since what? I don't know, like decades, right? 1970s, 1980s, something like this? Digital signatures as well? I mean, these are very hardened cryptographic primitives. SNARKs are what, 15 years old?

David:
[43:49] So theoretically speaking, they're something like 30 years old. But in practice, Zcash was one of the first projects to bring them in production. And that's about 10 years old.

Ryan:
[44:00] Okay. But we feel fine about Snarks as a tech stack now. And in general, are Snarks kind of commonly accepted in deep cryptography circles as like, yep, this works. The math checks out. Can't be broken.

David:
[44:14] Yes. But there's SNARKs and there's SNARKs. So we have all of these requirements. We have real-time proving as a requirement. We have diversity. We have the ZKVM aspect. And we have the requirement of 10 kilowatts for liveness.

Justin or David:
[44:35] There's an Elon Musk quote that I think is relevant here that I like, which he says, the most common error of a smart engineer is to optimize a thing that should instead be eliminated. I want you to take that metaphor as to like, why are we doing this? So we're talking about the snarks and the math behind them and how they work. Let's actually zoom out and talk about like the why, because this is actually doing the thing that would make Elon Musk happy. It's eliminating a whole entire component, which other chains have chosen to optimize. Can you talk about that a little bit?

David:
[45:06] Absolutely. So today, when you run a validator, you're running two separate clients. You're running a consensus layer client. So at home, I'm running one called Lighthouse. And you're also running an execution layer client. And what I'm running at home is called Geth. And really what we want to try and be doing is removing the bottleneck to scalability. And in this case, it's Geth. Like Geth literally on my computer can't process a giga gas per second, partly because the hardware is not adequate, but also the software is not adequate. And what I'm hoping to do at DevConnect in about 25 days is shut down my Geth node and completely remove that bottleneck. And instead of relying on Geth to tell me that blocks are valid, I will be downloading ZKEVM proofs. And it doesn't matter how big the blocks are. From my perspective, the proof is always the same size. That's the S. It's succinct. And yeah, that resonates very much with Elon's quote, which is that we shouldn't be optimizing Geth. We should just be removing it completely.

Ryan:
[46:18] So that brings us to, I think, this whole lean execution thread, and to talk about that in more detail. So we have this new SNARKs magic cryptography that allows us to scale Ethereum in general, in particular, we'll talk about maybe scaling the L1 here, and allows us to do that in the constrained way that Ethereum has always operated. And so something that you're talking about, Justin, is replacing Geth, which is your execution client, so this is the whole beast mode thing, with a ZKEVM client. So rather than use the old way of doing a validator, the new way, I think, shifts the role of a validator from executing every single block, right? Like every single transaction and every single block, to instead of executing,

Ryan:
[47:17] Verifying that a block has been, I guess, executed correctly. Can you describe that in more detail? Because this is the part where we're talking about beast mode, we're talking about scaling the L1 here, we're talking about lean execution for Ethereum, and a core technology here is ZKEVMs that change what validators are doing. And they're moving from executing everything to just verifying things. I don't know that I have an intuition for how that works, why that's possible

Ryan:
[47:51] and how we can do this while preserving decentralization. Can you give it to me?

David:
[47:55] Absolutely. So the process of verifying a block is extremely intensive. The first thing that you have to do is download the block, and that already is a bottleneck, right? If the blocks are too big, you just literally can't download them if you're on a home internet connection. But once you have the block, what you need to do is you need to load the most recent version of the Ethereum state. And that is on the order of a hundred gigabytes. But of course, if we were to dramatically increase the gas limit, it could be terabytes, tens of terabytes. So that's a problem. And then once you've loaded the state, you need to go execute the transactions. And for that, you need two resources. You need a CPU with lots of cores and you need a lot of RAM. And in addition to all of this, you need to maintain a mempool and lots of peer-to-peer connections.

David:
[48:51] And you also need to store the history, which also can be hundreds of gigabytes. So all of this crazy machinery, we just completely remove it, and we just verify a SNARK proof. It's stateless. You don't need to keep a copy of the state. It's historyless. You don't need to keep a copy of the Ethereum history. It's RAM-less in the sense that you don't need gigabytes of RAM. You might need 100 kilobytes of RAM. You don't need many cores, you just need one core, and it could be a very weak device. And actually, the new meme that I'm hoping to introduce is that of a Raspberry Pi Pico. So the Pico suffix refers to this $8 piece of hardware, relative to the normal Raspberry Pi, which is about $100. And I believe that, at least, you know, as a fun experiment, we could have a validator run on a Raspberry Pi Pico.
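A minimal conceptual sketch of what the validator loop becomes under this model, in Python. Everything here (the types, verify_proof, the root strings) is a hypothetical placeholder rather than a real client API; the point is what the loop no longer needs:

```python
# Conceptual sketch only: following the chain by verifying proofs instead of
# re-executing blocks. All types and verify_proof are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class BlockHeader:
    number: int
    parent_state_root: str
    state_root: str

@dataclass
class ZkProof:
    header: BlockHeader
    blob: bytes            # constant-size proof, independent of block size

def verify_proof(proof: ZkProof, expected_parent_root: str) -> bool:
    # Stand-in for a real SNARK verifier: a fixed, tiny amount of work
    # (on the order of milliseconds, one core) no matter how big the block is.
    return proof.header.parent_state_root == expected_parent_root

def follow_chain(proof_stream, genesis_root: str) -> str:
    head_root = genesis_root
    for proof in proof_stream:
        # No 100+ GB of state, no history, no mempool, no re-execution:
        # just check that the proof links the previous state root to the new one.
        if verify_proof(proof, head_root):
            head_root = proof.header.state_root
    return head_root

# Toy usage: two "blocks" chained from a genesis root.
h1 = BlockHeader(1, "root0", "root1")
h2 = BlockHeader(2, "root1", "root2")
print(follow_chain([ZkProof(h1, b""), ZkProof(h2, b"")], "root0"))  # -> root2
```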

Ryan:
[49:54] And if that's the case, of course, you know, more people will be more familiar with, say, a smartphone. It could run on your smartphone. It could run on your smartwatch, for instance, right? A Raspberry Pi Pico is even much more constrained than those environments. So, of course, it could run on those things. No longer a laptop.

David:
[50:11] Exactly. And this brings me to another aspect of fort mode, which is from the perspective of the users. Today, as a user, whenever I'm consuming Ethereum state, I have to do it through an intermediary that is running a full node on my behalf. And so that might be Infura, that could be MetaMask, it could be Rabby Wallet, whatever. It could be Etherscan. I'm basically trusting these entities to tell me what the state of Ethereum is. What if instead I could just directly verify the correctness of the Ethereum state within my browser tab, like on my phone, like within the app, and it costs nothing, it's instant? Well, now I'm not subject to all of the failures of these intermediaries. If, for example, Infura goes down, well, I can still make transactions. If Infura or MetaMask wants to censor certain types of applications, maybe OFAC transactions, well, now they're no longer in a position to do the censorship because they're not intermediating as much. Maybe Etherscan gets hacked and now someone puts forward a malicious front end and tries to drain a bunch of people. This is the kind of thing that should be harder to do once users have more sovereignty over what is the valid state of the chain.

Ryan:
[51:35] Okay, so this is why SNARKs, which is the ZK in ZKEVMs as we established, achieves both beast mode, because it unlocks a bottleneck which has been execution and gets us to the potential of something like an Ethereum layer one with 10,000 transactions per second. Simultaneously, it achieves fort mode, which is, what is fort mode? That's defense. This is more people can run nodes from anywhere on the most basic of consumer hardware. So the reason SNARKs are so powerful is because it's a double-edged sword and allows us to achieve scale while also achieving, not just maintaining the current decentralization of running an Ethereum node, but making it even better, making it such that you can run an Ethereum node on a smartphone or a watch.

Ryan:
[52:27] Okay, but what we have done in this ZKEVM kind of setup, and sort of the new execution client that you're talking about, and some of these are in development, we will talk about what that means today. But what we have done is something important. So we have moved these validators from executing every transaction to verifying them. But somebody is doing the execution, right? Who is doing the execution now in this world? And why is it okay to just have...

Ryan:
[52:55] Validators just verify rather than execute and verify as they have been doing? Are the executors now a centralization vector in the whole Ethereum blockchain supply chain?

David:
[53:07] Yeah, great question. So we do have a new actor, which is called the prover. And the prover is responsible for generating the proofs. And there's two regimes that we are going to be deploying in production. The first one is the optional proofs regime, where we're relying on altruism. We're relying on various actors to just generate the proofs for free for the network. And then we're relying on individual validators to opt in to verify those proofs. Now, this is a great proof of concept, but eventually what we want is mandatory proofs. What does that mean? It means that as a block producer, meaning as the entity that builds the block and proposes it, I'm responsible for generating all of the corresponding proofs. And if the proofs are missing, then that block is invalid. It's just not going to be included in the canonical chain. And now we need to look back at the incentives. We're no longer relying on altruism. We're actually leaning on the rationality. And the reason is that the block producer is receiving fees, MEV, and if they were to miss a block, they would also get a penalty for missing that block.

Ryan:
[54:26] And just when you say block producer, block producer and prover are synonymous in this world?

David:
[54:32] So they are not, but they're not necessarily, I should say. So they could be bundled as one entity and vertically integrated. But what I expect will happen is that we're going to see a separation of concerns. So even today, there's a separation of concerns between the proposer and the builder. And what I expect would happen is that the builder would outsource the proving to dedicated provers.

Ryan:
[54:59] A little rusty on this, okay? Prover, builder, sorry, proposer, builder, prover, validator. Okay, run us through the whole supply chain again of how a block goes into the chain in Ethereum today, and then what this future state is going

David:
[55:17] To look like. So today, you have at every slot a randomly sampled validator that is called the proposer, and they will get to decide what block goes on chain.

Ryan:
[55:30] That's the thing. If you run a validator, you're running it at home.

David:
[55:33] Yes. But there's an important caveat, which is that the proposers are assumed to be not sophisticated enough to build the most economically valuable blocks possible. And so instead, they will delegate to more sophisticated builders that will do that on their behalf. And that is called PBS, Proposer-Builder Separation. And we have something called MevBoost that helps us with this separation. What I expect will happen going forward is that we would tack on yet another role called the prover and the builders would go outsource the proving jobs to these sophisticated provers. Now, it turns out that the builder landscape is fairly centralized. And so it's reasonable potentially to expect that actually these two roles in practice will be bundled for at least a large subset of the blocks.
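Here is a minimal sketch of that supply chain (the proposer delegates building, the builder outsources proving, and missing proofs invalidate the block) in Python. The function names and the toy builder/prover are hypothetical stand-ins, not protocol APIs:

```python
# Conceptual sketch of the block-production pipeline described above, with the
# new prover role tacked on. All names are hypothetical stand-ins, not real
# protocol APIs; the point is the separation of concerns.

def propose(slot: int, builder, prover):
    # 1. A randomly sampled validator is the proposer for this slot. It is
    #    assumed to be unsophisticated, so it delegates block building (PBS).
    block = builder(slot)
    # 2. Under "mandatory proofs", the block only counts if the corresponding
    #    proofs arrive with it; the builder outsources this to a prover.
    proofs = prover(block)
    if not proofs:
        return None          # missing proofs -> invalid block, slot is missed
    return block, proofs

# Toy stand-ins for the sophisticated actors.
toy_builder = lambda slot: {"slot": slot, "txs": ["swap", "transfer"]}
toy_prover = lambda block: [f"proof-for-slot-{block['slot']}"]

print(propose(42, toy_builder, toy_prover))
```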

Ryan:
[56:33] Why is that okay? Why is it okay that builders and potentially provers in the future are centralized, but we're taking all of this pain to make sure that validators can run on a smartwatch?

David:
[56:46] So there's a few answers here. The first one has to do with the honesty assumption. So in order for consensus to run smoothly, you need 50% of the consensus participants to behave honestly. And this is an extremely high bar. It's very, very difficult to achieve. So today, we have 10,000 consensus participants, 10,000 validator operators, and having 5,000 of them behave honestly is a tall order, but we have achieved it.

Ryan:
[57:22] By the way, 10,000 validator operators, these are independent validator operators, because some people see numbers like into the millions of validators, but you're saying 10,000 and they don't understand why you're saying 10,000 when they see numbers like a million validators on Ethereum.

David:
[57:39] Yeah, let me explain that. So for many years, there was this constraint that an individual logical validator was 32 ETH, and we do indeed have a million of these 32 ETH entities. But what often happens is that there's a single operator that controls multiple of those validators. Now, recently, we've had this upgrade called MaxEB, where we increased the maximum effective balance to 2,048 ETH. And so what we're starting to see is actually consolidation of the validators. If a single operator controls multiple validators, they can now squish them together into bigger and fatter validators. And actually, if you are an operator and you're listening to this podcast, I do encourage you to consolidate, because it's good for you and it's also good for the network. But if you look at the individual operators, there aren't a million, there's something closer to 10,000.
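The consolidation math, as a quick sketch; MAX_EB below is the 2,048 ETH ceiling from EIP-7251, and the validator count is the rough figure quoted above:

```python
# Rough consolidation arithmetic using the figures quoted above.
OLD_VALIDATOR_SIZE = 32          # ETH per logical validator before MaxEB
MAX_EB = 2_048                   # ETH ceiling per validator after EIP-7251
LOGICAL_VALIDATORS = 1_000_000   # ~1M 32-ETH entities on the beacon chain

consolidation_ratio = MAX_EB // OLD_VALIDATOR_SIZE      # 64 old validators -> 1
fully_consolidated = LOGICAL_VALIDATORS // consolidation_ratio

print(f"One MaxEB validator can absorb {consolidation_ratio} old 32-ETH validators")
print(f"Fully consolidated, ~1M logical validators become ~{fully_consolidated:,}")
# ~15,625, the same ballpark as the ~10,000 independent operators quoted above
```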

Ryan:
[58:35] I've seen estimates like 10,000, some say as high as 15,000. This is in the same realm as the next most decentralized network in crypto, which is Bitcoin. I've seen estimates for Bitcoin around 15,000 to 25,000 nodes, something like that. And that's the thing we're keeping decentralized. Anyway, I just wanted to clarify that number, but continue with the flow, if you will, where you were, where, you know, I guess the question was, why is it okay that block builders and provers in the future are very few, are these centralized entities?

David:
[59:08] One reason has to do with the fact that it's an honest minority on the proving side of things. You only need one single prover to be available for the builders in order for the chain to keep on going. And we've taken this one-of-N extremely seriously. So N is at least 100, because there's at least 100 data center operators that you can go pick from. But even that we're not satisfied with. We want N to be orders of magnitude larger. And this is why we're going with the on-prem proving, where, you know, crazy people like me could run a prover from home. And this is something that I intend to do.

Ryan:
[59:47] So that means all we need is Justin or some crazy person like Justin and everything's fine. Nothing is corrupt on the chain. No invalid block gets through to the other side.

David:
[1:00:01] It's a fallback for liveness. So what would happen in the worst case if we were to rely on data centers is that we'd be running at one giga gas per second. And then from one day to another, the governments are saying, hey, data centers, please stop the proving process. And we would fall back to something much lower, let's say 100 megagas. And the reason we would fall back to 100 megagas is because that's the most that could be done outside of the cloud. And that would be very bad in terms of providing guarantees to the world because we want to have this guaranteed throughput. We want to be up only, we don't want to be degrading the liveness of the chain. And so we only want to be in a position where what we deploy in production is something that we can guarantee even in a World War III type situation, which is a very tall order. But it's something that the technology is able to deliver thanks to recent innovations.

Ryan:
[1:01:03] Which, of course, that liveness guarantee is very important for the store of value use case, right? Even if transaction throughput drops in these extreme edge case scenarios, store of value is still alive, because you can go access your value on chain and do something with it. Let's talk about provers a little bit more. So you said you might have the ability to run provers at home.

David:
[1:01:27] That's good.

Ryan:
[1:01:28] But you also expect the prover functionality to be more centralized in, I guess, the majority of cases. As I understand it, provers, that's like a much larger hardware profile, right? And it's some specialized hardware because they're crunching some numbers or they're doing some moon math. Anyway, you're saying yes, but no. Yeah. Where am I wrong there? What does this actually look like to be a prover?

David:
[1:01:54] Yeah. So it is unusual hardware in the sense that most people won't have it, but it is made out of commodity hardware, specifically gaming GPUs. So 16 gaming GPUs, for example, the 5090 that came out recently, that is enough to prove all of Ethereum in real time. And I intend to basically build a little GPU rig at home. And a bunch of AI enthusiasts are doing that because it's the same hardware that you need for AI. Now, in addition to liveness, which is one of the questions that I ask a lot around decentralization of provers, the other very important consideration is censorship resistance.

David:
[1:02:37] Especially when we will be increasing the gas limit. Because the way that we enforce censorship resistance today, assuming that all of the builders and the whole MEV pipeline is censoring, is by relying on a small altruistic minority of validators that are willing to force include transactions from the mempool without going through the builders. And we have this proposal called FOCIL, which basically increases by roughly 100x the total throughput of this forced inclusion. Today, we have about 10% of the validators that are doing this altruistically. But with FOCIL, we will have 16 validators at every single slot. So it's all the slots as opposed to just 10% of the slots. And within a slot, there would be 16 includers as opposed to just one. So in some sense, it's 160 times more opportunities for forced inclusion.

David:
[1:03:41] And that is something that is important to do as a prerequisite before growing to very high gas limits.
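To make the forced-inclusion arithmetic above concrete, here is a rough back-of-the-envelope sketch in Python. The 10% altruistic share and the 16 includers per slot are the figures quoted in the conversation; the calculation itself is purely illustrative.

```python
# Back-of-the-envelope for the FOCIL forced-inclusion math quoted above.
# Figures are the ones mentioned in the conversation; everything else is illustrative.

# Today: roughly 10% of slots have an altruistic proposer willing to
# force-include transactions from the mempool, one includer per such slot.
share_of_slots_today = 0.10
includers_per_slot_today = 1

# With FOCIL: every slot has a committee of 16 includers.
share_of_slots_focil = 1.0
includers_per_slot_focil = 16

opportunities_today = share_of_slots_today * includers_per_slot_today  # 0.1 per slot on average
opportunities_focil = share_of_slots_focil * includers_per_slot_focil  # 16 per slot

print(opportunities_focil / opportunities_today)  # -> 160.0, the "160 times more" figure
```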

Ryan:
[1:03:50] That means that the builders, the provers, none of these more centralized components, we'll call it centralized in air quotes, you know, like entities can actually

Ryan:
[1:03:59] censor anything on chain. So you preserve, and you actually strengthen post-FOCIL, the censorship resistance guarantees of Ethereum. I believe FOCIL is maybe slated for next year at some point in time. I know this is all squishy, but FOCIL is probably going to happen earlier than some of the other things that we'll be talking about today. Okay, so ZK EVMs: you take something like the execution client, something like Geth, and there are many different execution clients. You said you're running Geth today and you get a ZK EVM version of an Ethereum execution client. Maybe the best way to kind of fit these pieces together, where the execution client, you know, turns into a verifier instead of executing every single block, is to talk about your at-home setup and what you're planning for DevConnect, okay? So as I understand it, there are many different ZKEVM clients that are in development. I presume you're going to maybe select one of those. And then it sounds like you're also going the additional step of maybe running your own provers at home. So tell us, what's the Justin Drake experiment that's going to happen by DevConnect, and then maybe we'll fit this into the broader roadmap. But what are you doing?

David:
[1:05:19] Right. So the prover is going to have to wait for Christmas. I'm thinking of a Christmas present, which is a cluster of 16 GPUs. But in the shorter term, in November for DevConnect, I'm hoping to run my validator by, as you said, downloading ZKEVM proofs. But it wouldn't be just a single one. It would actually be as many as I can get my hands on. And the number that I have in mind is five.

Ryan:
[1:05:46] Five ZKEVM clients?

David:
[1:05:49] Proofs, yes. So there's these five different proof systems. And at every slot, there would be five corresponding proofs, one proof per client. So there would be a total of five of them. And these are proofs that are very short and very fast to verify. They take, for example, 10 milliseconds to verify. So if you have five of them, it just takes 50 milliseconds. It's not a big deal. So I would download all of these. And if three of them agree, then that's my source of truth. And the reason why I'm downloading more than one is because some of them might be buggy in the sense that it's possible to craft a proof for an invalid block. So that's why we want multiple of them to agree. And some of them might have what I call completeness bugs or crash faults.

David:
[1:06:43] So there are some circumstances where the proof system just can't generate a proof because there's some sort of a bug in the system. And so that's why I wouldn't require all five proofs to agree. It's okay if two of them just never appear, I would still be able to move on. And so it's a new way of thinking about client diversity because today, the way that client diversity works is that it's across validators. So you look at the whole population of validators and say, okay, 20% are running client A, 20% are running client B, et cetera. Whereas here, the diversity is internal to a single validator. And that's possible to do precisely because we have SNARKs, because it's so cheap to be running multiple ZKEVMs. And that's one of the reasons why I actually believe that ZKVMs are going to

David:
[1:07:38] give us a step up in security relative to what we have today. So reason number one is that we have internal client diversity as opposed to external across the validators. Two, the barrier to entry to be a validator is going to be lower, so we're going to have more decentralization, more security.

David:
[1:07:57] Another point is that we're going to be deleting tens of thousands of lines of code. So today, I'm running this execution layer client, and all that I really need is the core of the client, which is the EVM implementation. That is the logic; all of the stuff around it, managing the mempool, the history, and the state, and the peer-to-peer networking, a lot of that code can just be deleted from my perspective as a validator operator. And there's also something called the engine API, which is a bit of a technical thing, but it's basically the communication bus between the consensus layer and the execution layer. Historically, there's been a bunch of bugs in that interface. And that's completely going away because, again, I wouldn't be running an execution layer client. So we're getting to this point of minimalism. And actually, that feeds a little bit into the lean Ethereum meme where we're trying to be as minimal as possible and cut all of the fat so that we stay lean.
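As a rough illustration of the 3-of-5 rule Justin describes, here is a minimal Python sketch. The proof-system names and the fetch-and-verify callables are hypothetical placeholders, not real client APIs; the point is only the quorum logic: soundness bugs in a minority of systems can't sneak an invalid block through, and completeness bugs (a prover that simply fails to produce a proof) don't halt the validator.

```python
# Minimal sketch of the 3-of-5 proof quorum described above.
# Proof-system names and verify callables are hypothetical placeholders, not real APIs.

def accept_block(block_hash: str, proof_sources: dict, quorum: int = 3) -> bool:
    """Accept a block if at least `quorum` independent proof systems yield a
    verifying proof for it. A prover that crashes or never produces a proof
    (a completeness bug) simply contributes nothing to the quorum."""
    agreeing = 0
    for name, fetch_and_verify in proof_sources.items():
        try:
            if fetch_and_verify(block_hash):  # download the proof and verify it (~10 ms each)
                agreeing += 1
        except Exception:
            continue  # prover offline or crashed: tolerated, no vote from it
    return agreeing >= quorum

# Hypothetical usage with five independent (zkVM, EVM implementation) pairings.
proof_sources = {
    "pairing_a": lambda h: True,
    "pairing_b": lambda h: True,
    "pairing_c": lambda h: True,
    "pairing_d": lambda h: False,  # e.g. failed to produce a proof this slot
    "pairing_e": lambda h: False,
}
print(accept_block("0xabc...", proof_sources))  # -> True: 3 of 5 agree, block accepted
```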

Ryan:
[1:09:00] Okay, just so I understand what you're actually running here and how this fits with some other things I've seen within Ethereum. So you said you're planning to run three different proving systems, ZKEVM proving systems. So right now I understand the execution layer, again, the thing that we want to get to beast mode, as being a client like Geth. Let's say that's what you're running today. And then the ZKEVM version of this is something like Reth, which is another Ethereum client, plus one of these three proving systems, ZKEVM proving systems that I'm showing on screen. And for those not looking at this, this is a website called ethproofs.org. And on ethproofs.org, you can see the progress of different ZKVM proving systems. Fit this together for me: are you running Reth plus, like, three of these proving systems, or do these proving systems kind of replace Reth? What exactly are we talking about here?

David:
[1:10:03] Yeah, great question. So what we want to do is preserve the diversity of EVM implementations, also known as execution layer clients. So we want to have Reth, but we want to have various other EVM implementations. And in terms of the clients that are the most ready, we have one called ethrex, which is a newer Rust client by LambdaClass. We have one called evmone, and we have one called ZKsync OS, which has been implemented by Matter Labs. And each one of these four can be combined with a proof system. So what I might do, for example, is run ZisK with evmone. I might run Pico with Reth. I might run OpenVM with ethrex. And what I want to try and do is basically have as much diversity as possible, both on the execution layer EVM implementations and on the ZKVMs.

Ryan:
[1:11:06] I got it. Okay. So just a side question. So the reason we have all of these different execution clients, and some of those I hadn't even heard of before. You know, Geth is maybe one that many people have heard of. Reth is like a Rust implementation of that from Paradigm. So they've, you know, hardened, they've engineered the heck out of it. Is the reason we have all of these different client implementations because Ethereum has like a hardened spec? You know, like most other networks don't have like more than one client implementation. I'm wondering how Ethereum has like dozens.

Justin or David:
[1:11:39] Ethereum is the only chain that has a spec, or rather the only chain with a meaningful level of adoption that also has a spec.

David:
[1:11:46] Yeah, so what most chains will do is that they'll have a reference implementation without spelling out the spec. And so if you want to recreate a second client, you have to look at the implementation and try and copy it bug for bug, if you will. And it's an extremely difficult process. And, you know, it's part of the reason why Firedancer in the Solana ecosystem hasn't yet shipped, despite it being, you know, three, four years that they've been working on it. Solana just doesn't have a spec. And it's a similar situation with Bitcoin.

David:
[1:12:20] But, you know, having a spec is nice, but it's not sufficient. We also need to have a culture that encourages diversity and ultimately recognizes the value that comes with it. And the value to a very large extent is uptime. Historically speaking, we've had many individual clients fail, and they're, you know, fixed within a few hours, within a few days. And while they're being fixed, the other clients effectively act as fallback. And then another reason why diversity is important is because it provides diversity at the governance layer. So the All Core Devs process plays an important role in Ethereum upgrades. And the fact that no single client team has undue weight is a very healthy thing to have. And then the final reason why diversity, in my opinion, is extremely important is because it allows us to have many different devs, hundreds of devs, all simultaneously understand the guts of Ethereum. I think Bankless is very famous for its quote that, you know, the most bullish thing for Ethereum is to be understood. And I think when you say that, it's, you know, from the perspective of the user, of the investor, of the application developer. But I think it's also very much true from the perspective of the client devs.

Ryan:
[1:13:40] Yeah, and it does propel Ethereum on this course of anyone can build a client in the world because they can read the spec, they can build the client, they have the dev chops. And so all of these clients are sort of competing with one another too in terms of innovation and adding new features. And that's a beautiful thing. Okay, so we have these, maybe these upgraded ZK EVM ready clients, the execution clients, the Geths of the world and such, even though Geth is maybe not ready for that, you named some others. And then we have this, what is going on on Ethproofs? Because this is something separate, I think, right? Or is it separate? We have a whole competition here to get real-time proving down below 12 seconds, I believe.

Ryan:
[1:14:22] So what's happening on Ethproofs? Why is this important? And how does this fit in your home setup?

David:
[1:14:30] Yeah. So on Ethproofs, most of the focus is on the ZKVMs. And we allow them to pick their favorite EVM implementation. And the vast majority of these ZKVMs actually use Reth, or rather REVM, because that is the one that's most appropriate. With one exception, which is Airbender from ZKsync, which is using ZKsync OS, which is their own implementation of the EVM. Now, for the demo, I'm actually going to be downloading proofs from Ethproofs, and I'm not going to be too picky on the EVM implementation. It's mostly a proof of concept on the ZKVM side of things. But eventually, when we have the mandatory proofs, we're going to need the Ethereum community to come to consensus on a canonical list of ZKVMs and corresponding pairings with the EVM implementations. And one of the things that you said, Ryan, is that when we have diversity, we have an opportunity for competition. And I think this is a very healthy aspect here, which is that we would more likely than not be picking the five fastest EVM implementations that are most SNARK-friendly so that we can still have this property called real-time proving.

David:
[1:15:45] And Geth historically has been the leader. They were literally a monopoly. They were there at genesis; that was the only option available. And they've had this reign for the last 10 years. And I think the fact that there's this competition is a breath of fresh air and should lead to lots of innovations.

Ryan:
[1:16:03] This competition specifically, perhaps people, our listeners, have seen headlines. If you're in deep crypto, you know, in Ethereum, you probably have, of some of these teams achieving some sort of milestone. And I think they call it like proving the EVM under 12 seconds. And this seems to keep getting faster and faster. I think Succinct was a major team to do this at first. And they're like, we got under 12 seconds. And now there are other teams. I saw a team a couple of weeks ago called Brevis. And now they've reached new milestones here. What is this race to prove the EVM at a certain speed? And why is this important? And like, are we there yet?

David:
[1:16:47] Yeah. So the reason why it's important is because it unlocks the hope for the giga gas frontier. So it's literally providing, more likely than not, trillions of dollars of value creation for the world because we're going to unlock the gas limit. And from the perspective of the ZKVM teams, it's a way to prove the technology and also have a shot at being part of this canonical list of, for example, five ZKVMs that would be baked into every single validator and attester on Ethereum. And actually, every fully verifying node would have these five ZKVMs baked in. Right now, I maintain this tracker and list of ZKVMs. There's about 35 of them that try and cater for various use cases. But out of the 35, there's a big competition. And now we've narrowed it down to about 10 that are candidates for being selected as canonical for the L1.

Ryan:
[1:17:58] And why is it important, the speed under 12 seconds? And how is that improving so rapidly?

David:
[1:18:02] So the way that Ethereum works is that you have a block that's produced, and then within the rest of the slot, the attesters that are voting for the tip of the chain need to know that the block is valid. And so in order to keep this property that the validators are voting on the tip of the chain, they need to receive the proof of validity before the next block arrives. And the next block arrives within one slot, which is 12 seconds. In practice, they actually need to provide the proof faster than 12 seconds. It's 12 seconds minus a small delta because there's all of the propagation time to propagate the proof. So the number that we have in mind is actually 10 seconds. So that is the goal. And we want basically all economically relevant blocks to be provable within 10 seconds. So there is this notion of a prover killer, which is an artificially built block that takes a long time to prove, more than 10 seconds. But what will happen with the mandatory proofs is that it wouldn't be rational for the block builders to generate these prover killers because they would be shooting themselves in the foot. They would be DoSing themselves because they wouldn't be able to generate the proof, and that would lead to a missed slot. They would lose the fees and the MEV and they would also get penalized.
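A small sketch of the timing and incentive argument above. The 12-second slot and the roughly 10-second proving target are the figures from the conversation; the propagation delta, the per-block fee/MEV value, and the one-ETH penalty (a figure Justin floats later in the episode) are illustrative assumptions.

```python
# Illustrative numbers only: slot length and ~10 s proving target are from the
# conversation; propagation delta, block revenue and penalty are assumptions.

SLOT_SECONDS = 12
PROPAGATION_DELTA_SECONDS = 2          # assumed time to ship the proof around the network
proving_budget = SLOT_SECONDS - PROPAGATION_DELTA_SECONDS
print(proving_budget)                  # -> 10 seconds of prover time per block

# Why a builder wouldn't craft a "prover killer" once proofs are mandatory:
# an unprovable block means a missed slot, forfeited revenue, and a penalty.
block_revenue_eth = 0.05               # hypothetical fees + MEV for one slot
missed_slot_penalty_eth = 1.0          # penalty figure floated later in the episode
payoff_normal_block = block_revenue_eth
payoff_prover_killer = -missed_slot_penalty_eth
print(payoff_normal_block > payoff_prover_killer)  # -> True: building normally dominates
```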

Justin or David:
[1:19:28] I see. So it's a defensive mechanism. Can we talk about how we get from point A to point B here? Point A being where we are currently in Ethereum, with no blocks being proven, to where we want to be in Ethereum, where this is like the dominant equilibrium where almost all of the blocks are being proven. And we have successfully initialized Ethereum with this ZK proving technology. Historically, as Ethereum has made hard forks, that's when we've done the big upgrades to Ethereum. We hard forked into proof of stake. We hard forked into 4844. All of the big upgrades to Ethereum have come in this very step function. Like, we just hard forked the upgrade in.

Justin or David:
[1:20:10] That is, as I understand it, not how this is going to happen. This is going to be different. Maybe you can talk about how we get from point A to point B, which is like the integration of all of the ZK magic that we've talked about into the chain. How does it actually happen?

David:
[1:20:23] Absolutely. So the rough roadmap that I have is a four-step roadmap. So phase zero involves a very small subset of the validators, think 1%, opting in to verifying proofs that are altruistically generated. For example, generated in the context of Ethproofs, which is, to a large extent, just marketing budget from a lot of these ZKVM teams. One of the downsides of phase zero is that me, as a validator opting in, will be losing the timeliness rewards. So there's a special reward in Ethereum called the timeliness reward, which is given to those who attest immediately to a block. And I will be losing that because I'll be attesting a few seconds late. And so this brings us to phase one, where we have delayed proving or delayed attesting, or it's also called delayed execution, where basically instead of having to attest immediately when the block arrives, you have more time. Think of a whole slot, basically, to attest. So even if it takes a few seconds for you to attest, it's all good. You'll be getting this timeliness reward. And at that point, I expect the number of validators to opt in to go from roughly 1% to something closer to 10%. Why 10%?

Justin or David:
[1:21:44] Because it's when it starts to become incentivized.

David:
[1:21:47] It's incentive compatible, exactly. Yes. And it's actually, you know, you have an incentive to do it because now you don't need to buy a new hard drive, you know, when the state grows too big, and you don't need to upgrade your computer if it dies. Or, you know, I could just sell my MacBook that I'm using to validate and just buy a Raspberry Pi instead, for example. In any case, what I expect will happen is that the weakest validators, those running from home, would opt in to this mechanism. And the much more sophisticated validators, think the Coinbases, the Binances, the Lidos, they would keep running the usual way.

Ryan:
[1:22:29] And they'd opt in because it's a lower hardware footprint.

David:
[1:22:32] Yeah, exactly. And from that point onwards, we can start increasing the gas limit, right? Because now we have two types of nodes. We have those that are verifying the proofs. We can increase the gas limit for them, no problem. And then we have the sophisticated operators that are running on more powerful hardware than just a laptop. And for them, there's just a bunch of buffer to increase the gas limit. So already in phase one, there's an opportunity to be more aggressive with the gas limit. And then phase two is where a lot of the magic really happens, which is the mandatory proofs, where we require the block producer to generate the proofs and everyone is expected to be running on ZKEVMs.

Ryan:
[1:23:17] Is that a hard fork?

David:
[1:23:20] Yes, but it's a hard fork that only changes the fork choice rule. So it's a very minimal hard fork, just one that says that when you attest, you should only attest after verifying that the proofs exist and are valid. So it's not a difficult hard fork. It's actually a fairly simple one. And then there's phase three, which is the final one. But here you need to project yourself, you know, maybe five years into the future, which is what we call enshrined proofs, where instead of having a diversity of five ZKVMs, we just pick one and we formally verify it end to end. So we have high conviction that there's literally zero bugs in that enshrined verifier. And that unlocks all sorts of things. It simplifies the design, first of all, but it unlocks things like native validiums, which is, I guess, maybe a topic for a different day.

Justin or David:
[1:24:16] Okay, so five years, and that's after five years of just like battle testing of the technology, because I think we kind of more or less expect bugs along the way during these phases.

Justin or David:
[1:24:28] And we just have to play whack-a-mole for a while, five years before we feel that it's sufficiently battle-tested to actually make it a formal part of the Ethereum layer one to truly unlock all of the magic that the snarks give us.

David:
[1:24:44] Exactly. We're assuming that every single individual ZKVM is broken, but in aggregate, as a group, it's secure. And this phase two, where we have mandatory proofs, you can think of it as being semi-enshrined, where we have, in some sense, an enshrined list of multiple ZKVMs, but there isn't one single basket that we're putting all our eggs in.

Justin or David:
[1:25:07] So the theorized way that this works is that the weakest nodes, the slowest nodes, the individuals, you know, verifying Ethereum via Starlink in their camper van in some national park somewhere off grid. These people, the slowest nodes of the whole entire group, are the ones that upgrade to the system first, and they go from the slowest to the fastest. They kind of leapfrog everyone. And as the technology gets more robust, more ready, more hardened, more efficient, it starts to march upwards up the chain to the next slowest node, the next slowest node, until we're at kind of the median node. And then what starts to be left of the old legacy execution clients, the Ethereum nodes, are the data center nodes, the Coinbase nodes, the Kraken nodes, the Binance nodes, the people with heavy, heavy infrastructure with a very large footprint, the part of the node distribution of Ethereum that just happens to be in a data center. And they're kind of the last to go because they have the most buffer, the most bandwidth. And then at some point in time, they'll flip too, because we actually just fork it into the Ethereum protocol. That's kind of the plan.

David:
[1:26:20] Exactly right.

Ryan:
[1:26:21] Can we talk about this and how this meshes with the idea? There was a blog post not too long ago from Dankrad who talked about the idea of a 3x increase for Ethereum in terms of gas limit every single year. And I want to show maybe a slide. I don't know where this came from. Actually, this looks like some Justin Drake handiwork. So I bet it's from one of your presentations, which kind of goes through this. And so right now, I believe we did two gas limit increases for Ethereum this year, or was it just one?

David:
[1:26:58] We've done two. We went from 30 to 36 and 36 to 45.

Ryan:
[1:27:02] That's right. Okay. 36 to 45. Okay. And the idea behind Dankrad's post, I believe, was some sort of social commitment, roadmap stacking hands for the Ethereum community, to attempt to scale Ethereum 3x in terms of transactions per second and gas limit every single year. Okay. And so if we were on track for 2025, by the end of this year, we would be at 100 megagas. It looks like we're going to be at maybe, you said, 45, or maybe we get to 60 or something like that.

David:
[1:27:39] Yeah, so with Fusaka, which is coming in December, we'll be able to increase the gas limit. I'm told that 60 is safe and maybe we can get a little bit more, 80, maybe 100, I don't know. But yeah, when I did those slides, which was a few months ago,

David:
[1:27:57] Tomasz was trying to set within the Ethereum Foundation the goal of getting to a 100 megagas limit by the end of this year and trying to keep this 3x pace that Dankrad originally suggested in his EIP-7938.

David:
[1:28:13] Now, 3x per year, I think, is kind of a sweet spot between doable and ambitious. So it's quite significantly faster than Moore's law, but it's not completely impossible. And Dankrad's proposal was to have this 3x per year over a period of four years. And importantly, it's something that would happen automatically. So today, the way that we do gas limit increases is extremely laborious. What we need is for the consensus layer clients to set new defaults or for the individual operators to change the default configuration in order for the gas limit to go up. So it's just, at the social layer, extremely expensive and requires a lot of coordination. What Dankrad suggested instead is that at every single block, the gas limit increases a tiny, tiny bit, just one or two gas. So that once we've gone through the social coordination of doing it once, it just happens automatically. And my specific suggestion is to increase the four years to six years, because after six years of compounding 3x per year, you get the 500x that we need to get to one gigagas per second.
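A quick sketch of the compounding behind that schedule, assuming 12-second slots for the per-block figure (that detail is illustrative; the 3x-per-year pace and the roughly 500x target are the numbers from the conversation).

```python
# Compounding behind an EIP-7938-style automatic gas limit schedule.
# 3x/year and the ~500x target are from the conversation; 12 s slots are assumed
# for the per-block figure, which is illustrative only.

ANNUAL_FACTOR = 3
YEARS = 6
print(ANNUAL_FACTOR ** YEARS)                 # -> 729, comfortably above the ~500x needed

blocks_per_year = 365 * 24 * 3600 // 12       # ~2.6 million blocks at 12-second slots
per_block_factor = ANNUAL_FACTOR ** (1 / blocks_per_year)
print(per_block_factor)                       # -> ~1.00000042, a tiny nudge every block

# One year of per-block nudges starting from today's ~45 million gas limit:
gas_limit = 45_000_000
print(round(gas_limit * per_block_factor ** blocks_per_year))  # -> ~135,000,000 (3x)
```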

Ryan:
[1:29:39] Okay, and so let's talk about that a little bit more and mesh that with kind of the lean Ethereum idea. So the reason we've been reticent to hit the accelerator on gas limit and throughput has been it will start to increase the requirements for validators and kick maybe our home validators off the network and drive Ethereum more into data centers. And that's not where we want to be. Now, I guess the rescue or the landing pad of a lean Ethereum is, as we increase gas limit, maybe 3x every year, the home validating nodes, the non-data center nodes in Ethereum, they can then migrate to a ZK EVM and run that on a Raspberry Pi or smartphone or very cheap hardware. So prior to having a ZK EVM solution, those validators would just be gone forever, basically. And we'd become more centralized, fewer validators, more data center-y. But because they have a ZKEVM, as that tide rises, they can be among the first to hop to the frontier of a ZKEVM. So this has opened up the playing field to allow Ethereum to consider increasing the gas limit on a more regular basis and maybe up to 3x every year. Is that roughly the story?

David:
[1:31:08] Yeah, that's it. Okay.

Ryan:
[1:31:10] And then one other question I have in the weeds here, there's gas limit and then there's throughput in these two sides. The thing that we're increasing is gas limit. Is that correct? And our gas limit right now is different than the mega gas that we're actually doing. You said we're at two mega gas per second, I think earlier in the episode, but then we have a gas limit of what, 45?

David:
[1:31:32] Yeah. So let me explain the math. There's like two complications. The first one is that we have 12-second slots. So it's 45 million divided by 12. And then there's another complication, which is that with EIP-1559, we have a target and a limit, where the target is twice as low. So you have to divide by another 2x. So if you take 45, you divide it by 12, and then you divide it by 2, that's how you get your 2 megagas per second. It's a little bit unfortunate because, you know, in some sense, the gas limit is artificial because it depends on the slot duration. And we do intend to reduce the slot duration, for example, from 12 seconds to six seconds. So my preferred mental model is to think in terms of gas per second, which is quite close to the TPS concept as well.
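The same arithmetic, written out; the numbers are exactly the ones Justin just gave.

```python
# Converting today's gas limit into sustained gas per second, as described above.

gas_limit = 45_000_000        # current per-block gas limit
slot_seconds = 12             # current slot duration
eip1559_target_divisor = 2    # EIP-1559 targets half the limit on average

gas_per_second_at_target = gas_limit / slot_seconds / eip1559_target_divisor
print(gas_per_second_at_target)  # -> 1,875,000, i.e. roughly 2 megagas per second
```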

Ryan:
[1:32:27] Those phases: zero, one, two, and the final phase, three. You said, you know, getting to three might take five years. Do you have a timeline idea? I guess zero technically kicks off, you know, with you maybe among the first running this hardware, in about a month or so, next month. What about the timeline for the rest of this?

David:
[1:32:49] Yeah. So 2025 for phase zero and then one year for every other phase. So phase one, 2026, phase two, 2027, phase three, 2028, for example. I think that that's a reasonable timeline.

Ryan:
[1:33:01] Okay. ZKEVMs allow us to increase block size, allow us to scale throughput. Real-time proving is something we've talked about. We're under 12 seconds. Block times on Ethereum are 12 seconds right now. Is part of beast mode to get that down to six, and below six? How far can we push that, and how does that fit into ZKEVMs? Do we basically have to wait until ZKEVM provers are fast enough to get us safely under six seconds, and then we can drop block production to something like six? What are the puts and takes of the constraints there?

David:
[1:33:42] Yeah. So it turns out that the proposal to reduce the slot duration is somewhat in competition with the ZKVMs, because we're going to have overall less latency to do the proving and it's going to make things harder. But I still think even if we were to reduce the slot duration to six seconds, we'll be able to get there, no problem. It would just delay things by a number of months, maybe six months. And so it's a decision that the community has to make. Do we want to reduce the slot duration at the expense of delaying ZKVMs by six months? I guess it's above my pay grade to make that particular decision. But if you're willing to project yourself multiple years into the future, for example, 2029, I'm hoping that we can go beyond the six seconds. In the Beam Chain talk less than a year ago, I was trying to advertise four seconds. And recently, we had this workshop in Cambridge with a bunch of researchers, and we actually came up with a new idea that could unlock even faster slots, potentially two-second slots. So I don't want to over-promise this, but I do think we'll be able to go

David:
[1:35:02] under six seconds in the context of lean consensus, which is a rebranding of the beam chain.

Ryan:
[1:35:10] Okay. So I guess in both cases, whether we're increasing gas limits and making the blocks bigger, having them house more transactions, or whether we're decreasing slot times, that's all toward the same goal of getting towards giga gas, right? Both of those kinds of numbers increase our gigagas. Is that wrong?

David:
[1:35:28] No, no, no. So reducing the slot duration doesn't change the throughput. If we were to, for example, go from 12-second to six-second slots, we would correspondingly reduce the gas limit by 2x.

Ryan:
[1:35:42] Right, okay. It's the reverse. Yes, of course.

David:
[1:35:46] Okay.

Ryan:
[1:35:46] Yeah. That's why these two things are at odds.

David:
[1:35:50] I mean, on paper, reducing the slot duration is actually neutral because at the time of the fork, the slot duration reduces by a factor of two, the gas limit reduces by a factor of two, and these cancel out. But in terms of the engineering to get real-time proving, yes, they are a little bit at odds, because every second of prover time that we have is actually very valuable. And it just means that the ZKVM teams will just have to work that much harder to squeeze things down.

Justin or David:
[1:36:22] In the fullness of time, when this technology is just completely mature, haven't we just eliminated the time constraint anyway? Yes. So like, say, five-plus years. Right now, we're really talking about how we can get this integrated as soon as possible, and that's when one second really matters in terms of block time. But in the future, one second won't matter at all, right? Can you talk about that? Yes.

David:
[1:36:43] And in the endgame, what I'm envisioning is that we have SNARK CPUs that generate the proof as they're doing the computation. So you have a typical CPU that's running, let's say, at three gigahertz. Not only would it be doing the computation at three gigahertz, it would be producing a proof at the same time as it does the computation. And you can think of a CPU core, for example a RISC-V core, as being one square millimeter of silicon on the die. So it doesn't consume much space. And nowadays we're able to build chips easily with, let's say, a hundred square millimeters of die area. So you can imagine the future being that you buy your CPU, it's a pretty big chip, 100 square millimeters, 1% of it is used to do the raw computation and 99% of it is used to do the proving in real time. But here we don't mean real time in Ethereum time, which is one slot. We mean it in terms of CPU time, which is like nanoseconds. Right.

Justin or David:
[1:37:49] Yeah. Interesting.

Ryan:
[1:37:50] The one piece of your home setup, I just want to understand. So you're going to be at first, you're running kind of a ZKEVM type setup, as we discussed. Running provers at home, okay, so that's your Christmas present, you get these GPUs, you know, Santa's been good to you, you've been a good boy, I guess, whatever this is. But it does require some power, some energy to run at-home provers. And as I understand it, some of the teams are working to make that more efficient. So can you talk about if you wanted to go to the length of running your own prover at home as well, what is like the energy output required? This is basically electricity of your home required today. And then what does it need to be moving forward to make sure that we have at least some level of decentralization to this prover network?

David:
[1:38:44] Yeah, absolutely. So the 10 kilowatts I mentioned, it's about 10 toasters. It's also an electric car charger. It's also like a very powerful electric oven or a powerful water heater for your shower. So this is something that has been installed, and you don't need to kind of buy a new house, I guess, in order to draw 10 kilowatts. Now, the GPUs that we're talking about, these gaming GPUs, they draw hundreds of watts each; the maximum rated power draw is something like 500 watts, which is half a kilowatt. And so what I have in mind in terms of the size of the cluster is 16 GPUs. So 16 times 500 watts, that's 8 kilowatts. And then you need to have a buffer for the host machines and the cooling, because you're going to need fans or whatever to circulate air, and that's also going to consume electricity. So what I'm intending to do for Christmas is buy a cluster of 16 GPUs, connect them to my home and my home internet connection, and basically be producing a proof for every single Ethereum block in real time.
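The power budget Justin describes, spelled out. The GPU count and the roughly 500 W per-GPU draw are the figures he gives; the overhead allowance for host machines and cooling is an assumption.

```python
# Home-prover power budget from the figures above; the overhead number is an assumption.

gpus = 16
watts_per_gpu = 500            # roughly the rated draw of a high-end gaming GPU
overhead_watts = 2_000         # assumed budget for host machines, fans and cooling

total_kilowatts = (gpus * watts_per_gpu + overhead_watts) / 1000
print(total_kilowatts)         # -> 10.0 kilowatts, about ten toasters
```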

David:
[1:39:57] Now, if you had asked me two months ago, when would I be able to do this demo? I would have told you maybe six months in the future. But the pace of Snarks is just so incredible that today, 16 GPUs is enough.

David:
[1:40:13] So we've already achieved the requirement that we set ourselves of 10 kilowatts. And we have multiple teams that have achieved that. You mentioned Pico. And just yesterday, another team, the ZisK team, basically achieved that. I mean, technically they used 24 GPUs, but it's getting very close to the 16. And we have various other teams. For example, the Airbender team, and I expect the Succinct team to also get to 16 GPUs. And so come November 22nd for DevConnect, we will see how many teams have achieved this 16 GPU milestone. And I'm expecting it to be at least two, hopefully three, and maybe four of them. And if you want to participate in this demo in real time, you can sign up to Ethproofs Day. So that's ethproofs.day. Unfortunately, the venue that we have is limited to a few hundred people. And our waiting list is close to 300 people at this point. But do sign up nonetheless, because we will be releasing more tickets.

Ryan:
[1:41:23] Is that 10 kilowatts going to be, basically, is that going to go down? So once you get your Christmas present, right, you're going to be running these provers, your electricity utility bills are going to spike, right? So it's like running a Tesla charger 24-7, basically. So you're going to be paying a little bit extra. Is that just going to be the cost of running a prover? or can we get it down from 10 toasters to like one toaster?

David:
[1:41:47] Yeah, that's a great question. So there's two aspects here. The first one is that as the ZKVMs improve and you need fewer and fewer GPUs, that's going to be an opportunity to increase the gas limit. And so really what we want to be doing is keep increasing the gas limit so that we're always at the 10 kilowatts. So that's staying at 10 kilowatts. And this is how we get to one giga gas per second. The other thing that I want to mention is that, you know, this crazy altruistic phase is not really representative of what will happen eventually, which is that, we don't really need the fallbacks to be proving every single block all the time. We only need them to activate whenever there's a problem. So if all of the cloud providers suddenly go offline, well, now the block builders can use me as a prover, but I'll only turn on in the, you know, one in 10,000 blocks where that's necessary. Most of the time, I'll be consuming zero electricity other than, you know, sufficient electricity to be connected to the internet. And so you can think of it as like some sort of like reserve army that's only activated when necessary.

Ryan:
[1:43:03] I like that analogy. I like that a lot. Before you fire up your own provers, who's going to be doing the proving? By the way, in this whole setup, are provers incentivized? Do they get a share of block rewards as a portion of this? Or like, what's in it for them?

David:
[1:43:20] Yeah. So ultimately, the provers are incentivized by fees and MEV, and they're going to be paid by the block builders. Now, one thing that I think is worth coming in with eyes wide open is that the fees are ultimately going to come from the users. And so, the users are actually going to have to pay more for their transactions. And specifically, you're going to have to pay a transaction fee,

David:
[1:43:45] which is going to cover the cost of proving. But the good news is that the cost of proving for a typical transaction is a hundredth of a cent. So for most applications, you won't even realize that there is an extra proving fee which is being added on. What I expect will happen is that the MEV will be much larger and also the data availability fee will be larger than that. And of course, as the ZKVMs improve, we're going to go from a hundredth of a cent per transaction to a thousandth of a cent. And so, yeah, that's ultimately how the incentives work out.

Ryan:
[1:44:26] Just transaction fees and MEV, no consensus rewards, no issuance rewards.

David:
[1:44:31] So one thing that we can do as mechanism designers is, instead of leaning on rewards, which I think are totally unnecessary because we have the fees and we have the MEV, we can lean on penalties. So we can have a setup where if for whatever reason you don't generate the proofs and you're acting maliciously, then you get penalized. And the number that I have in mind is one ETH. So you miss an Ethereum slot, boom, you get penalized one ETH, because that should never happen, especially in a context where we have this upgrade called APS, attester-proposer separation, where we remove the proposer from the equation and we only have sophisticated entities, the builders and the provers. And those should basically have extremely high uptime. And, you know, there is a negative externality to Ethereum missing blocks, right? It means that in some sense Ethereum skips a heartbeat, and that's not good. And so putting a price on missing a slot, I think, is healthy. And it's something that we can do once we have this APS upgrade. Because today we make this assumption that a bunch of validators are, you know, running on home internet connections, and every once in a while my ISP just messes up and I don't have internet, and we don't want to be having this one ETH penalty just because you're offline and you got unlucky. But for builders and provers, yeah, we can slap them with one ETH, no problem.

Ryan:
[1:45:59] Wow. Okay. So that is beast mode for scaling Ethereum, the layer one. I've got to confess, Justin, I've not understood it until this conversation that we've just had. Now I understand it a lot more. So many other things we didn't touch, and we're already at almost two hours. So we can't cover everything here. But very quickly, when you introduced beast mode, you also said it's not just in five years, right? It will scale up along the way; it's not just a Big Bang after five years where suddenly we're at 10,000 transactions per second. But this plan potentially gets us to one gigagas, 10,000 transactions per second, on Ethereum layer one by 2030. Okay. We're also simultaneously scaling data availability through kind of the danksharding setup we have right now so that L2s can get to a teragas per second. So does that have anything to do with ZKEVMs and everything we've been talking about? Or is that just happening in parallel? And every chance we get, we're just expanding the fast lane, the data availability and blob space, effectively, for L2s.

David:
[1:47:08] It's happening in parallel. But I do want to highlight one thing that bridges these two worlds, which is called native rollups. So a native rollup is one that has the same execution as the L1, but as an L2. And one of the massive advantages of native rollups is that you don't have to worry about bugs in your SNARK prover or your fault proof game. You don't have to worry about maintaining equivalence with the L1, which itself is an evolving thing with every hard fork and every EIP. And maybe the most important thing is that you don't have to have a security council to deal with these bugs and with this maintenance.

David:
[1:47:53] And the amazing thing about the technology, which is ZKVMs, is that for the L2s, we can effectively remove the gas limit entirely. The bottleneck is only data availability. And the reason is that the L2s can be generating proofs for their transactions off-chain using very powerful provers. They don't have the 10 kilowatt requirement. They can have much bigger provers if they want to. And then the only thing that they bring on chain is this tiny proof that the validators can verify. And that is the next generation of rollups. And we're very, very fortunate to have Luca Donno from L2Beat, who is a big believer in this idea and is championing the EIP process and all of the technical legwork in order to deploy this on mainnet.

Ryan:
[1:48:53] When does that happen in this whole timeline where we talked about the, you know, the zero through three phases of getting ZK EVMs out there?

David:
[1:49:00] So one reality of Ethereum is that we have this governance process called ACD, and we have a very limited number of opportunities to do hard forks. Historically, we've had one hard fork per year. We're trying to double this to one hard fork every six months. But even within a hard fork, there's only so much that you can do. And it turns out that many different developers want many different things. And so there's 10, 20, maybe 30 different competing proposals. And in any given hard fork, you can only clear the queue like three, four, maybe five at a time. And so that leads to all sorts of externalities, one of them being that it's just very hard to predict what will go through the All Core Devs process. And another externality is that it just leads to a lot of frustration. You can think of ACD as kind of being this meat grinder or this soul grinder, where you have starry-eyed, enthusiastic developers that come in, and they kind of come out frustrated and jaded because their EIP hasn't been selected for a very long time.

David:
[1:50:14] One example here could be FOCIL, right? FOCIL is something where we have the EIP, all the research has been done, a lot of the legwork has been done, and yet it's still not being included. It's been discussed for inclusion in Fusaka. It's been discussed for inclusion in Glamsterdam, and now it's being pushed and pushed and pushed. And so it's difficult to be able to predict some of these things. And this is actually part of the reason why I'm so excited about lean consensus, because lean consensus is a governance batching optimization where for an extended period of time, we're just doing pure R&D. And so we can have this really...

David:
[1:51:00] Exciting, fast-paced R&D. And then what we propose to ACD is something that is significantly better than what we currently have, let's say 10 times better. And it will take a long time for it to go through ACD, but when it goes through,

David:
[1:51:17] we will be batching together, let's say, 100 EIPs that would previously be unbundled. And so instead of having the long-term future, the endgame of Ethereum, if you will, being spread out over decades of small incremental upgrades, we have an opportunity to batch bigger upgrades on a timeframe of four years or so.

Ryan:
[1:51:41] Mostly we've been talking about kind of the lean execution layer because that was the big part I didn't understand. And that's the big part, going to beast mode and scaling layer one. We talked a little bit about lean consensus, I think, and kind of the fort mode from the perspective of all of this can be run from like a smartphone or a smartwatch. But are there any other pieces in lean consensus? Because this is another layer of the Ethereum stack that you want to make sure people understand today. Because when you say lean Ethereum, you're talking about lean execution layer and scaling that beast mode. And you're also talking about lean consensus. The lean consensus piece, I think is maybe less sexy, but maybe in some ways it's more important? And you just alluded to one of the ways that most of us users don't see why it's important. What else is in the lean consensus piece that we have not covered? And why is it important?

David:
[1:52:39] So lean Ethereum is actually three different initiatives. Within the L1, you have three layers: the consensus layer, the data layer, and the execution layer.

David:
[1:52:52] Now, we haven't even touched on the data layer other than saying that it needs to be post-quantum secure. But yeah, there is indeed a lot happening in the consensus layer. The headliners are, one, replacing BLS aggregate signatures with a post-quantum equivalent. Two, having much faster finality. So instead of it taking two epochs, which is 64 slots, it might take only two slots or three slots. Another big improvement is significantly reducing the slot duration. And then the final improvement is, just like ZKEVMs, we can snarkify the entirety of the consensus layer so that really weak devices, the browser tabs, the phones, can fully verify not just the execution part of Ethereum, but also the consensus layer. And so when we're building bridges, for example, between L1s, that's the same kind of technology that would be used as well. And then what you alluded to is this opportunity to do things differently in terms of governance. So we've been doing the small incremental upgrades. We've been accumulating 10 years of technical debt. It's an opportunity to refresh.

David:
[1:54:16] Part of the reason why I'm excited about Ethereum is not because we've had 10 years of uptime, but because we're going to have another 100 years of uptime. And in the next 100 years, we're going to grow our total value secured to hundreds of trillions relative to what we have today, which is just $1 trillion of total value secured. And I think the All Core Devs process as it is structured today is a little bit the tail wagging the dog, right? The 10 years of history, that's the tail. You know, we've accumulated a lot of technical debt, and the dog is the next 100 years. And I think what lean consensus is all about is just rebalancing it a little bit so that the next 100 years, where, you know, all of finance will be built on top of Ethereum, that the vision has a chance to materialize. And that's going to require some big changes at L1. And so, in some sense, lean Ethereum is an invitation to be bold, to be ambitious, and to think about the next 100 years more so than the last 10 years.

Ryan:
[1:55:20] Justin, as we wrap this up, maybe this is a good opportunity to ask another question. And as I think about the context for this whole discussion where I see Ethereum going, it's really about upgrading the Ethereum network to Snarks. So Ethereum, like Bitcoin, is originally based on cryptography like 1.0, blockchain cryptography 1.0. Snarks is cryptography 2.0. And so now we're applying snarks and making, I think you've used the terms, which I didn't fully understand at the time, snarkifying the entire stack. That's what this is. That's what lean Ethereum actually is. It's upgrading the entire stack to cryptography 2.0, the snarks generation of cryptography. And some networks might follow in those footsteps, others might not. Tough to say what Bitcoin will do, but probably they'll ossify and stick with cryptography 1.0 for a long time.

Ryan:
[1:56:15] I guess the context of this though is, will we be able to do this fast enough? You were talking earlier about the ACD meat grinder and how Ethereum is so large, so many moving pieces, it can feel hard or even like frustrating for developers because they're like, why can't this happen faster? And so are we able to scale fast enough to beat centralized competitors, particularly competitors with some deep engineering teams? And I think part of maybe what this question is reacting to is we had one of the original scale Ethereum EIP proposal authors,

Ryan:
[1:56:51] Dankrad, recently departed the EF for, you could call them, a competitor to Ethereum; maybe that's simplifying things. They're certainly going to contribute back to the Ethereum ecosystem as well, in the form of the EVM and other things. But this is a new company, recently raised $500 million at a $5 billion valuation. So they have deep pockets. It's called Tempo. They are working with Stripe and are invested in by Stripe. So they clearly have access to TradFi and stablecoins and all of these things. And it seems to be the case that they're going to be implementing some of this roadmap using Reth. I mean, it's a Paradigm team, right? They're going to be speed running some of this roadmap. And maybe that helps Ethereum in some ways, but also maybe in some ways it competes against Ethereum. And from a talent perspective, certainly Dankrad has done so much for Ethereum, obviously. But is there a brain drain happening with some of these more centralized corporate chains? And are you worried about that? You're talking in terms of hundreds of years, but will we have the talent to sustain? Are we going fast enough to beat some of these competitors and implement this vision?

David:
[1:57:59] Yes, I think that, just zooming out, there has been a brain drain. It's real, it's significant, but it's actually not in the direction that you expect. There has been a massive brain drain toward Ethereum. And yes, we have lost one Dankrad, but I think we've gained 10 Dankrads. So since I gave my Beam Chain talk less than a year ago, there's been dozens of people that have come on board the Ethereum Foundation or have been working externally through all sorts of lean consensus teams. And the amount of talent that has come in in the last few months is absolutely mind-boggling. If you look at what Dankrad was doing, he was doing hardcore applied cryptography in the context of danksharding. And there's several applied cryptographers that I'm working with on a daily and weekly basis now, including Tomar and Emil.

David:
[1:59:08] Giacomo and Angus. And all of these people are of extremely high caliber, at least as good as Dankrad. They don't have the reputation because they haven't been at it for, you know, seven years. But in terms of raw talent, I think we have it. And these are people, again, that were not on my radar even a few months ago. And then on the coordination side of things, we've brought on, you know, Will, who just keeps on impressing me every single day. We have Ladislaus, we have Sophia, and there's also people who are not doing either the hardcore cryptography or the coordination. So there's, for example, Felipe doing the specs, there's Raul helping with the peer-to-peer networking, there's Kev doing ZKVMs, and Farah working on Ethproofs. And when you zoom out, a lot of these people, you know, came to Ethereum. So, for example, Will and Farah came from Bitcoin.

David:
[2:00:13] Kev and Sophia, sorry, not Kev, Camille, who's one of the coordinators of one of the consensus teams, and Sophia, they came from Polkadot. We have Kev, who came from Aztec. We have Raul, who came from Filecoin, Tomar, who came from Kakarot and the Starknet ecosystem, and Angus, who came from Polygon. You get the idea. There's much more incoming brain drain than there is outgoing. Now, in terms of the reason for this brain drain, I think it has to do with things that competitors like Tempo just don't have. Right? Vitalik has this famous quote that a billion dollars is not going to buy you a soul as a blockchain. And we have community, we have vision, ideology, and we also have this amazing technology. And you mentioned that, you know, you think Tempo might leapfrog and use ZKVMs. I'm not holding my breath on this. You know, my base assumption is that they're going to have a very small number of validators running in data centers. And actually, you know, I asked Dankrad, like, how many validators do you think Tempo will have at launch? And I'm hoping I remember this properly, but I think his answer was four, like four validators.

David:
[2:01:41] And, you know, community is very different as well. One thing that was very stark to me was, you know, when Dankrad left, there was a massive outpouring of gratefulness for all of the work that Dankrad had done and his massive contribution to danksharding. And then you look at the Stripe side of things, and, you know, it's really sad that Patrick, you know, the founder of Stripe, kind of made this tweet to his half a million followers saying, hey, welcome Dankrad. And his tweet got like three retweets.

David:
[2:02:22] There's no community in Tempo. There's very little soul. And I'm sure Dankrad has all sorts of reasons for leaving the Ethereum ecosystem. But the fact of the matter is that there's a massive brain drain towards Ethereum. And I guess another thing worth mentioning is that I think there's a reasonably high chance, call it a double-digit percentage, that Tempo is actually in some way part of the Ethereum ecosystem, even if today they're not ready to acknowledge it explicitly. In my opinion, the incentives will be such that all of the L1s will want to become L2s so that they can tap into the network effects that Ethereum has to offer. Just yesterday, actually, or the day before yesterday, Ethereum crossed $100 billion of Tether on L1. And if you want to do payments, you need to have access to stablecoins. And there's a lot of network effects around stablecoins on Ethereum. So it wouldn't surprise me if in a couple of years' time, Tempo announces that they're pivoting to becoming an L2 and Dankrad comes back to the Ethereum ecosystem.

Ryan:
[2:03:34] Do you have a take on why there haven't been more L2s? Some of these corporate chains, why are they going with L1s instead of L2s? It's not just Tempo. If it was just Tempo, maybe you would say that, but Circle is going that direction, also Plasma, kind of the Tether-affiliated one. There have been a lot of new L1s. And the Ethereum take has always been what you said, which is, why be an L1 when you can be an L2? It's cheaper, better network effects. Why hasn't that borne out yet?

David:
[2:04:00] Yeah, I mean, we have seen this L1 premium. And I think, you know, part of the reason is that there's this new design space which has been unexplored. And so people are maybe valuing the unknown, like very large potential. I don't know, this is just speculation. I think Tempo, as you mentioned, they've raised $500 million at a $5 billion valuation. I think they've done an excellent job at farming the L1 premium. And now that they've secured their $500 million, I think they could safely pivot to doing the correct incentive-aligned thing, which is to tap into the maximum amount of network effects. I certainly do recommend that they keep part of their treasury, let's say at least 1%, to make an emergency pivot to an L2 if they don't become successful as an L1.

Ryan:
[2:04:59] Justin Drake, this has been fantastic. Lean Ethereum. The next steps for this are what? DevConnect and you're going to give a presentation, I believe.

Justin or David:
[2:05:08] DevConnect, and the dev-disconnect of the Geth node.

David:
[2:05:11] Yeah.

Ryan:
[2:05:12] Talk about the next steps and what people can do to kind of stay abreast and get involved.

David:
[2:05:16] Yeah. So I'm hoping that DevConnect is an eye-opening moment where as a community, we can all agree that we want to go down this ZKVM path. There's a few, I guess, stragglers who are not yet fully convinced, but I think what's happening now is that we're disagreeing on timelines as opposed to fundamentals. So I think the most skeptical people will tell you that ZKVMs are something for 2029 or 2030. But I think what's happening is that over time, more and more people are getting bullish on the timelines. And one, I guess, fun story here is that Kev, who leads the ZKEVM team, historically, at least a year ago, was, I guess,

David:
[2:06:11] a skeptic about ZKEVMs. You know, there were a lot of open questions in his mind. And it's been really beautiful to see his thinking evolve, you know, week by week, as he's been able to tick off every single unknown and risk that he had had in his mind. And I think, you know, Kev is still, like, not fully convinced on the exact timelines. But if the technology keeps on progressing the way it has been progressing in the last 12 months, then I think the timelines can only shrink from here onwards. Now, one thing that I want to stress is that there will be a tipping point where the ZKVM technology has reached parity with L1 throughput and quality. And from that point onwards, what I expect will happen is that the ZKVMs are no longer the bottleneck, meaning that the ZKVM technology will improve faster than the 3x per year, which is, I think, the fastest that we can hope to upgrade the L1. And so the burden will go back to the traditional non-moon-math engineers to optimize databases and networking and things other than cryptography.

Ryan:
[2:07:27] We will end it there. Justin Drake, thank you so much for joining us.

David:
[2:07:31] Absolutely. Thanks for having me.

Ryan:
[2:07:32] Bankless Nation, got to let you know, of course, crypto is risky. You could lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the bankless journey. Thanks a lot.
