Podcast

Do We Need Another L1? - Inside Monad’s Parallel EVM with Co-Founder Keone Hon

Monad just launched Mainnet!
Nov 25, 2025 · 01:22:24

Inside the episode

Keone:
[0:00] The Monad design delivers performance that is needed right now, and also delivers a high degree of decentralization right now, and delivers that fast finality right now. Some of the struggles in other ecosystems are related to slow finality, and in particular, that problem is already addressed in Monad.

Ryan:
[0:25] Keone, I got to start with the question that I think is in everyone's mind as we enter this episode. Do we really need another general purpose layer one chain?

Keone:
[0:35] I love the question. Monad is a significant engineering effort. You can think of it as a technology effort to bring new technologies to the EVM and pioneer them in a way that they are all compatible with each other and stack on top of each other, to prove what's possible, and to prove that decentralization can become more powerful if we focus very deeply on software architecture improvements. I agree with your line of questioning in the sense that new layer ones need to be very different and very innovative. But Monad is an effort that is really grounded in research and engineering, to deliver a really powerful experience for the EVM, and to make the EVM more powerful and more performant in a highly decentralized way.

Ryan:
[1:33] I think that's a theme. It's a massive optimization of the EVM. We'll get back to that. But why not just build Monad as a layer two, maybe on Ethereum? Why instead go in the direction of the layer one?

Keone:
[1:45] There are really interesting and important optimizations that are needed both at the execution layer and at the consensus layer. Layer twos tend to focus only on the execution side, but consensus is really what gives blockchains their property of decentralization, and what gives blockchains the borderless aspect where control of the network is split across many, many entities in many different countries around the world. Consensus is a really important problem. Nakamoto consensus is the basis of the Bitcoin project. And I just think that innovation at the consensus layer, as well as the way that consensus and execution fit together, is an extremely important and underexplored aspect of crypto these days.

Ryan:
[2:41] You've used this word decentralization a couple times now. What does that mean to you? What does decentralization mean to you in the crypto context, in the blockchain context?

Keone:
[2:51] I'll answer your question in two different ways. From a technological perspective, decentralization means that control over the system is split into many different horcruxes, I guess you could say: many different entities that all keep each other accountable, enforce the rules of the system, and enforce that the only state transitions allowed are those defined by code. That's an extremely powerful aspect, because when we don't have to trust each other, when trust is enabled through the code itself and everyone following the rules, and the system allows actors to keep each other in check, that's when we can build more powerful applications and institutions on top of that base trustless layer. So the first answer is: from a technology perspective, it's about many, many nodes in the system all keeping each other accountable.

Ryan:
[3:58] It's just like the, you know, our civics class in the US is kind of the separation of powers. Is it that sort of idea where no single branch of the network gains control and has the ability to execute its own will independent of the other branches?

Keone:
[4:15] Correct, yeah. It's about that aspect, and then the ensuing consequences: the productivity and efficiency that can come from the fact that everyone knows there is not a single power that can override state or make arbitrary state transitions.

Ryan:
[4:35] Okay. That's like the technical definition. You said there was another definition as well. I'm not sure if we've gotten into maybe the other definition with some of the political expressions of power inside of the network, but what's the other definition that you were thinking of?

Keone:
[4:48] Yeah, the other aspect is the social aspect. Decentralization means having a large number of people who are watchdogs of the system and who are contributing to the network in different ways: building applications, building integrations, or connecting the system to the rest of the world. And even people who are not directly writing code, but who are observing what's going on and serving as the white blood cells of the system.

Ryan:
[5:20] Some of this, when we talk about what decentralization means, feels a little squishy to some folks. Here's something that's not very squishy, but that I think is important, and I wonder how much you've prioritized it for Monad: the ability of normal people, outside of data centers, to run a node or run a validator in a permissionless way on your network. That is something that Bitcoin, I think, has elevated. It's very important. Something that Ethereum has also tried to elevate. Something that other networks, maybe not so much. So how about that as the definition of decentralization, or at least a very important component for a blockchain network? Is it the ability to run a node from, say, my home office?

Keone:
[6:10] A big part of decentralization is about democracy, and about the fact that anyone should be able to participate in the system. In the early days of Monad, a big part of our decision-making, a constraint that we've had since day one, was that anyone should be able to run a node without expensive hardware. So Monad is a consumer-grade hardware chain, not a data center chain. Anyone can take a Costco MacBook, the specs of that, and run a node based on that. In particular: 32 gigs of RAM, a 2-terabyte SSD, 100 megabits of bandwidth, and a reasonable CPU. Those are the constraints of Monad. And that made building Monad actually quite challenging, because it meant that the system can't rely on keeping all the state in RAM. There are various technical things that are downstream of that, but it's a really fun constraint to build for. And now, three and a half years later, it really means that anyone can run a node in Monad and have access to the full state and verify the entire chain, verify every state transition and every account balance.

Ryan:
[7:21] Which other chains have optimized for that approach? It does seem that in the world of layer one chains, and even layer twos, it's less important. Layer twos are de facto almost running in data centers, at least many of them, right? But let's go back to layer ones. Which layer ones have opted for the ability to run at home, versus becoming data center chains?

Keone:
[7:48] I don't know that many people think along this axis,

Ryan:
[7:52] But certainly people in the Bitcoin and Ethereum community do. And it's indeed one of the main critiques that they relay against some of these faster, high TPS type chains. Is it just Bitcoin, Ethereum, and now Monad? I don't know about Cardano and some of those chains. I'm not as familiar with those. But are those the consumer hardware type chains? And then is everything else turned into effectively a data center chain?

Keone:
[8:19] I think that maybe there are some chains that are, you know, very similar in nature to Ethereum, like using the same tech stack that are forks of Ethereum that might have a similar hardware footprint, but much less usage than Ethereum has. So yes, I agree that Monad is standing among a small set of blockchains that has this characteristic, but also more generally that is really focused on adhering to that property in perpetuity, always being able to be run by anyone, and also that's continuing to try to push the boundaries of this amount of performance we can squeeze out of those constraints. Because that's what it's really about. It's about low hardware requirements plus a really high performance.

Ryan:
[9:07] Okay, so this level of decentralization, you said it's pretty hard.

Ryan:
[9:12] Few chains have achieved it and are achieving it. I guess this begs the question of why we're maximizing for this level of decentralization, right? Why isn't it fine for validators to run in data centers? It kind of goes to the root question of what blockchains are actually for, in your view. What is the purpose of a blockchain?

Keone:
[9:37] In my view, the fundamental purpose of a blockchain is to give a means of coordination, of transaction, of value transfer, of asset issuance, of world building, that is only enabled by shared global state where we have coordination among many, many actors. For example, when exchanges add support for a new blockchain to their product, they need to run a node. When Tesla starts accepting payments in Ethereum, they need to run a node. When other businesses start to accept payments on a blockchain, they need to run a node, because they need to be able to verify for themselves that they've received a payment: that the person who walked into the Tesla dealership and drove away with a car actually made a payment. So it is about self-verifiability. It's about self-sovereignty. And it's also fundamentally about enabling a layer of coordination from which great amounts of productivity can be unlocked, for example by people anywhere around the world getting access to the same financial tools and resources.

Ryan:
[10:58] It's interesting how different crypto communities would answer that question in different ways. The Bitcoin community might say the purpose of Bitcoin, the blockchain, is store of value; the application is Bitcoin itself, the asset. I think the Ethereum community might agree that store of value is an important use case, but then add that property rights are equally important, as is the ability to scale decentralized finance in a way that is verifiable and permissionless and incorruptible; they'd probably also agree with what you said. And the Solana community says that what they're trying to build is a decentralized NASDAQ, which I think implies something a little bit different as well. What is the purpose of Monad? Do you have a moniker? Are you trying to build an open financial system?

Ryan:
[11:51] Is this more general purpose? Have you kind of settled on a particular set of use cases? You mentioned finances. Is finance the primary use case here?

Keone:
[12:00] Finance is the use case that immediately enables greater productivity and enables more opportunities for people around the world. At the end of the day, I think that, you know, we're starting to live in a global world where people living in one country can be employed by a company in a country completely on the other side of the world. But there are significant inefficiencies, payment systems, like there's just a lot of stuff that's not very efficient right now. And I think that for me, crypto is really about unlocking greater efficiency and greater opportunity for everyone around the world. And that is enabled fundamentally by a really performant, really decentralized, permissionless layer one where everyone can get access.

Ryan:
[12:48] Three and a half years of work. You're close to mainnet. We'll talk about that a little bit later in the episode.

Ryan:
[12:53] How did you squeeze out this performance from the EVM and the consensus layer? And maybe take us to the existing EVM right now. What's good and bad about the EVM? And what did you really have to focus on?

Keone:
[13:06] The EVM is honestly a great bytecode standard. It is really the standard of crypto and smart contract programming. Several other standards have been proposed and used in different ecosystems, but the EVM is very much the dominant standard. It has over 80% of all TVL on chain. It has many libraries, a lot of tooling, and almost all the applied cryptography research has been done in the context of the EVM. So it's really a great standard. However, there are just fundamental inefficiencies with existing implementations. With the Monad project, we worked on and introduced six major improvements that stack on top of each other to ultimately deliver over 10,000 TPS of throughput, or in gas terms, as people frequently like to use in the space, 500 million gas per second on day one of Monad mainnet. I can tell you more about some of the optimizations, but the really high-level summary is that it's a combination of stacking multiple improvements on top of each other that are all needed.

Ryan:
[14:21] Why did it take so long? Why did this all take three and a half years?

Keone:
[14:24] When I reflect on the past, I think we could maybe have done it a year faster if everything had been perfectly efficient. But the reason is that it starts from a lot of research. The first year or so of work was building out tooling and testing, and researching different approaches, before actually committing to specific directions. When you're building new technologies, when you're solving a problem, and at the end of the day the purpose of Monad is to solve these existing scaling problems, you need to build a prototype of the solution in some cases to know for sure that you're on the right track. And with a couple of these new innovations, there's a ton of research that went into them.

Ryan:
[15:13] Maybe take us through some of those six things that stack one on top of the other, in terms of where you're squeezing out the optimization. Tell the story here. Where did you start, and what are some of these important components? I seem to recall something about having to redesign the entire data structure behind the EVM. Anyway, I don't know which technical questions to actually ask, so just guide us through it.

Keone:
[15:38] From a very technical perspective, the way to think about blockchains is that there's a staged process: block proposals, which have a bunch of transactions in them, have to make their way through many stages of work in order to ultimately be finalized and enshrined in the canonical chain. In existing systems, stage one happens, and the system waits for that to complete before being able to progress to stage two, and that has to complete before proceeding to stage three. One of the common patterns of Monad is introducing pipelining, which is a really common technique in computer science; we certainly can't claim to have invented this at all. The idea is intuitive: instead of doing all of these stages sequentially, it's much better to have one piece of work at stage one, and then, when that finishes and moves to stage two, in parallel start another piece of work at stage one, and progress them through. It's similar to how, when doing laundry, you would do a load of laundry in the washer, but when that is completed, move it to the dryer and in parallel do another load in the washer. So that's the common pattern; here is what it has translated to in practice. I'll tell you about the different improvements from the highest layer, as I think of it, to the lowest layer. I'll name them all up front and then try to explain them in a little bit more detail. The top-level improvement is MonadBFT, which is a new consensus mechanism that introduces pipelining within consensus as well, and addresses a big problem that existed in previous pipelined consensus mechanisms. That's the first thing. The second thing is asynchronous execution, which decouples the two major parts of a blockchain, consensus and execution, from each other. In most blockchains, consensus proceeds and reaches agreement, and then all the nodes each go and execute all the transactions in that block. While the execution is happening, consensus is waiting; when execution completes, consensus starts again, and while that's happening, execution is waiting. With asynchronous execution, we decouple those two things and run them both in parallel to each other, in a pipelined fashion. So that's the second thing. The third thing is parallel execution.
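The washer-and-dryer pipelining just described can be sketched in a few lines of Python. This is an illustrative toy, not Monad's actual stage structure: stage one feeds a queue that a second thread drains, so the two stages overlap instead of alternating.

```python
# Toy two-stage pipeline: stage 1 ("washer") and stage 2 ("dryer")
# overlap, instead of each item passing through both stages before
# the next item starts. Illustrative only; not Monad's real stages.
import queue
import threading

def pipeline(items, stage1, stage2):
    handoff = queue.Queue()
    results = []
    DONE = object()  # sentinel marking the end of the stream

    def dryer():
        while (item := handoff.get()) is not DONE:
            results.append(stage2(item))

    worker = threading.Thread(target=dryer)
    worker.start()
    for item in items:
        handoff.put(stage1(item))  # stage 2 runs while stage 1 continues
    handoff.put(DONE)
    worker.join()
    return results

print(pipeline([1, 2, 3], lambda x: x * 10, lambda x: x + 1))  # [11, 21, 31]
```

With two stages of similar cost, total time approaches half of the fully sequential version, which is the same win a pipelined consensus mechanism is after.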

Keone:
[18:21] So when the execution process has the job of executing a whole long list of transactions in a block, the transactions are all ordered, from one through, let's say, a thousand. The true state of the world is the state of the world after executing those transactions one after the other; that's how it's officially defined. So parallel execution is a technique where many of those transactions are executed in parallel, optimistically, assuming that all the inputs to those transactions are correct, producing pending results, which are optimistic executions of those transactions; then committing those optimistically generated pending results in the original serial order, making sure that every input is correct, and re-executing if one of the inputs was incorrect.
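The optimistic scheme can be sketched with toy transfer transactions. Real implementations track precise read and write sets; the account names and the (from, to, amount) format here are invented for illustration, and only the execute-in-parallel-then-commit-in-order shape is the point.

```python
# Toy optimistic parallel execution. Every tx runs against the pre-block
# snapshot in parallel, recording which balances it read; results are then
# committed in the original serial order, and any tx whose inputs went
# stale under it is re-executed. Accounts and tx format are invented.
from concurrent.futures import ThreadPoolExecutor

def execute(tx, state):
    frm, to, amount = tx
    reads = {frm: state[frm], to: state[to]}          # inputs relied upon
    writes = {frm: state[frm] - amount, to: state[to] + amount}
    return reads, writes

def run_block(txs, state):
    snapshot = dict(state)
    with ThreadPoolExecutor() as pool:                # optimistic phase
        pending = list(pool.map(lambda tx: execute(tx, snapshot), txs))
    for tx, (reads, writes) in zip(txs, pending):     # serial commit phase
        if any(state[acct] != val for acct, val in reads.items()):
            _, writes = execute(tx, state)            # inputs stale: redo
        state.update(writes)
    return state

txs = [("alice", "bob", 10), ("bob", "carol", 20)]
print(run_block(txs, {"alice": 100, "bob": 50, "carol": 0}))
# {'alice': 90, 'bob': 40, 'carol': 20}, identical to serial execution
```

Here the second transfer reads bob's stale balance during the optimistic phase and gets re-executed at commit, so the final state matches strict serial execution.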

Keone:
[19:13] It's a similar kind of thing in that computers have a bunch of cores; they can run many threads to do many, many pieces of work in parallel. But the constraint is often other resources, like pulling data from the database, pulling data from disk. So what you want is to be doing a bunch of work identifying dependencies for the database in parallel, and proceeding whenever the lookups end up returning. It's best to do all this work in parallel and then just commit those pending results, while still maintaining correctness as if they had been executed serially. So that would be the third thing. The fourth thing is what's called just-in-time compilation, or JIT compilation. This is a technique where the EVM bytecode... actually, I have to take a step back here for one second. In Ethereum and in Monad, smart contracts are typically developed in Solidity, and then compiled down to a bytecode standard called EVM bytecode. And this is kind of a unique, bespoke standard. When it is executed, it needs to be executed in a virtual machine; it's not actual machine code that could be executed directly by the CPU. So there's an abstraction layer that exists in Ethereum and other blockchains to execute this EVM bytecode within a runtime, kind of similar to how, in the old days, you would sometimes have a Java program that would run in the JVM. There was a program on your computer called the JVM runtime, and when you wanted to run Java programs, you could run them in it. The benefit of the JVM was that it could be cross-platform, and people could develop apps for a common standard. Anyway, the point is that this is kind of the same thing happening in blockchains: people build applications for the EVM, which generates this EVM bytecode, but that is not machine code, and so it's much less efficient. We have a compiler that compiles that EVM bytecode into machine code, allowing that execution to be a lot more efficient. So that's a huge unlock for the EVM standard that we're really excited to deliver in the Monad system, along with others.
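The interpreter-versus-compiler gap can be illustrated with a made-up three-opcode stack machine; these are not real EVM opcodes, and a real JIT emits native machine code rather than host-language source, but the structural difference is the same.

```python
# Made-up three-opcode stack "bytecode" (not real EVM opcodes) showing
# the interpreter-vs-compiler gap: the interpreter pays dispatch cost on
# every opcode on every run, while the compiled form translates once.
PROGRAM = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]

def interpret(program):
    stack = []
    for op, arg in program:                  # dispatch on every opcode
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

def compile_program(program):
    """Translate the bytecode once into direct host code."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(str(arg))
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} {'+' if op == 'ADD' else '*'} {b})")
    return eval(f"lambda: {stack.pop()}")    # no dispatch left at run time

print(interpret(PROGRAM), compile_program(PROGRAM)())  # 20 20
```

Both paths compute (2 + 3) * 4; the compiled one does it without walking the opcode list again, which is the saving a real EVM-to-machine-code compiler multiplies across billions of opcode executions.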

Keone:
[21:40] And then I know I'm kind of giving you a very long thing.

Ryan:
[21:44] I'll tell you what, I'm keeping up. Yep. Number five, right?

Keone:
[21:46] So number five: we have a new database called MonadDB. The context here is that Ethereum, at the end of every block, stores all of the state of the world in a structure called a Merkle tree. The Merkle tree's property is that it enables verifiability. As trees do in computer science, the Merkle tree has a root, and that root is just a little hash, but that hash is a commitment to all of the state in the tree. So if you are running a node and I'm running a node, and we want to make sure that we have all the same state, instead of having to compare every single entry line by line, we just compare our Merkle roots; if we see that we have the same Merkle root, that means we've ensured that all of the state is the same. This is a really cool attribute that Ethereum has that enables verifiability, that allows nodes all around the world to ensure they're all in sync with each other, and to do so in a very concise manner. That's the good part about a Merkle tree. The bad part is that it's expensive to update, and its storage is, in existing systems, very inefficient, because it has to get pushed into another database, typically LevelDB or RocksDB, which have another tree structure under the hood. There's a huge amount of abstraction that happens, which generates inefficiencies. We have a custom DB that's specifically designed to store the Ethereum Merkle tree state natively on disk, so that lookups of data on disk can be done much more efficiently, and so that data that's relevant to each other can be packed close together in pages. Sorry, more detail, but when you look up data from a database, you get an entire page of data; you don't just get a single piece. So if you can pack a lot of those pieces of data all on the same page, the lookup is going to be much, much more efficient.
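The root-comparison idea can be sketched with a flat binary Merkle tree. Ethereum's real structure is a Merkle-Patricia trie, and MonadDB's on-disk layout is its own design; this toy only shows the verifiability property itself, with invented account entries.

```python
# Flat binary Merkle tree sketch: commit to a set of balances with one
# hash, so two nodes can check they hold identical state by comparing
# 32 bytes instead of every entry. Illustrative entries, not real data.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                   # duplicate last hash if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

node_a = [b"alice:100", b"bob:50", b"carol:0"]
node_b = [b"alice:100", b"bob:50", b"carol:0"]
node_c = [b"alice:100", b"bob:51", b"carol:0"]   # one balance differs

print(merkle_root(node_a) == merkle_root(node_b))  # True: in sync
print(merkle_root(node_a) == merkle_root(node_c))  # False: divergence
```

A single differing balance changes every hash on the path to the root, which is why comparing roots is enough to detect any divergence in the full state.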

Ryan:
[23:53] Is that number six as well?

Keone:
[23:54] So that was number five. And number six is block propagation: a communication method called RaptorCast, which allows for really efficient communication of large blocks all around the world, through a really smartly designed multi-step process where blocks are cut up into chunks, and the chunks are sent to different nodes in order to ensure that all of the nodes get enough chunks to reconstruct the original block.
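The cut-into-chunks-and-reconstruct idea can be sketched with a single XOR parity chunk, so receivers can rebuild the block even if any one chunk never arrives. RaptorCast itself uses fountain (Raptor) codes that tolerate many more losses; this toy only shows the shape of the scheme.

```python
# Toy chunked block propagation with one XOR parity chunk: the block can
# be rebuilt even if any single chunk is lost in transit. RaptorCast uses
# fountain codes tolerating far more loss; this is just the basic idea.
from functools import reduce

def xor_chunks(chunks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def encode(block: bytes, k: int):
    """Split into k equal data chunks plus one parity chunk."""
    size = -(-len(block) // k)                       # ceiling division
    data = [block[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return data + [xor_chunks(data)], size

def decode(received, k, length):
    """Rebuild from any k of the k+1 chunks (received maps index -> chunk)."""
    missing = set(range(k)) - set(received)
    if missing:                                      # recover lost data chunk
        received[missing.pop()] = xor_chunks(list(received.values()))
    return b"".join(received[i] for i in range(k))[:length]

block = b"a block full of transactions"
chunks, size = encode(block, k=4)
arrived = {i: c for i, c in enumerate(chunks) if i != 2}  # chunk 2 lost
print(decode(arrived, 4, len(block)) == block)  # True
```

Because the parity chunk is the XOR of all data chunks, XOR-ing whatever arrived reproduces the one that didn't; fountain codes generalize this so any sufficiently large subset of chunks suffices.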

Ryan:
[24:24] It sounds like you've taken kind of consensus execution, the EVM piece by piece, and really tried to engineer all of the inefficiencies out of all of those pieces. To the extent that you've kind of built some of these things, like MonadDB sounds like your own kind of data structure that you actually developed. So you've built some of these things from the ground up. Is that right?

Keone:
[24:49] That's correct. Everything is built from the ground up.

Ryan:
[24:51] I've heard Monad referred to as Solana for the EVM. And I think they're talking about sort of the spirit of continuing to optimize and parallelize execution consensus to the nth degree in order to max out the throughput in transactions per second. Do you think that fits? Is Monad Solana for the EVM?

Keone:
[25:14] I think that in some ways, Monad is the EVM's answer to Solana. In other senses, Monad is quite different from Solana, because Solana has really high hardware requirements. Solana has taken the view in its design, and every design needs to be opinionated, that hardware will continue to get more powerful, and that it's therefore fine to require nodes to have a large amount of hardware.

Ryan:
[25:44] It's a data center chain, right? I mean, the requirements for a Solana node, if you actually want to collect some MEV and you actually want to produce blocks, are getting into 10 gigabits per second of bandwidth, right? You're running this thing out of a data center in order to run Solana. Okay, so why hasn't Ethereum done this? Some of the core values that you espoused earlier, like decentralization, the Ethereum ecosystem and the Ethereum Foundation care about those things too, a lot. Sometimes almost to their detriment. Why haven't they gone piece by piece through the EVM and engineered the inefficiency out of it? Instead, they're taking a different path, it seems, which is more of a ZK type of path. They're snarkifying the EVM. They're turning their validators into verifiers, such that you can run the verifiers at home. But they're not taking the approach that Monad is taking, which is engineering all of the inefficiency out of each piece. Why not? Why are they taking a different approach?

Keone:
[26:54] I think that for any project, there needs to be a decisive approach and direction taken. And I have a ton of respect for the Ethereum researchers and engineers and the approach that they've taken of focusing on ZK scaling. For Monad, we believe that we can get a lot more performance out of each single node. We can really squeeze the sponge down to a high level of efficiency, where every node can have the full state of the world and every node can scale state to a much larger extent. One thing that I think is really cool about Monad is that as state continues to grow, the system can continue to support a massive amount of state. I didn't really explain this super well before, but MonadDB is an effort to get the absolute most out of SSDs today. SSDs are really cool. They're really powerful. And a two-terabyte SSD costs like $150 to $200 on Amazon, so they're quite cheap and very, very performant. You can actually load up a machine with a ton of SSDs, say 32 terabytes of SSD. This is very cheap hardware compared to scaling with RAM, because RAM is about 100x more expensive than SSD. So it's very realistic to have 32 terabytes of SSD, but 32 terabytes of RAM is an insane ask for anyone running a node. The reason I'm telling you this is that Monad has taken a design to get the most out of the SSD, and to make it possible to have a blockchain that scales to 30 or 100 terabytes of state while still being extremely performant, and without requiring a lot of hardware to do that. Whereas for other blockchains, like the data center chains you were mentioning, or projects focused on a single sequencer with a really large node and really high hardware requirements, that actually doesn't scale to a much larger state, because you're just going to need to keep throwing more RAM at the problem, and RAM is really, really expensive. The reason this matters is that if we want to grow crypto adoption massively, if we want a billion people using Aave for their banking, basically, for borrowing and lending, using Uniswap for trading, and holding a bunch of assets, every single thing that they're doing adds more state. In order for state to really scale to global adoption, and to have the shared global state that can hold the entire world all coordinating with each other, we need a system that can rely on SSD rather than on RAM. So it's a very technical, kind of nerdy, systems reason. But at the end of the day, I just think there is a fundamental approach that we believe in: that we can get a ton of performance out of a single node, and we can have a system with thousands of nodes, all globally distributed, all keeping in sync with each other, maintaining this shared global state that the entire world is on.
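The SSD-versus-RAM point can be put into a quick back-of-the-envelope. The prices below are illustrative assumptions taken from the rough figures above ($175 for a 2 TB SSD, RAM about 100x more expensive per byte), not real quotes.

```python
# Back-of-the-envelope for the SSD-vs-RAM scaling point. Prices are
# illustrative assumptions from the rough figures cited, not quotes.
SSD_PER_TB = 175 / 2            # ~$87.50 per terabyte of SSD
RAM_PER_TB = SSD_PER_TB * 100   # ~$8,750 per terabyte of RAM

for state_tb in (2, 32):
    ssd, ram = state_tb * SSD_PER_TB, state_tb * RAM_PER_TB
    print(f"{state_tb:>2} TB of state: ~${ssd:,.0f} of SSD vs ~${ram:,.0f} of RAM")
# 32 TB of state: ~$2,800 of SSD vs ~$280,000 of RAM
```

Under these assumptions, state growing to tens of terabytes stays in hobbyist territory on SSD while becoming prohibitive on RAM, which is the design argument being made.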

Ryan:
[30:30] Just those hardware requirements again, and maybe the bandwidth requirements. What are they to start, and three years down the road, what do you think they'll be?

Keone:
[30:39] The hardware requirements are 32 gigs of RAM, a two terabyte SSD, a reasonable CPU, and 100 megabits of bandwidth.

Ryan:
[30:53] Okay. And this is to run a node and a validator at home on the Monad network, correct?

Keone:
[31:00] That's correct. Just to be very precise, the bandwidth requirement for running a validator is a little bit higher: it's 300 megabits per second.

Ryan:
Okay, I got it.

Keone:
But for a full node, it's 100 megabits.

Ryan:
[31:10] And then does this scale in the future? So, you know, get more state, that kind of thing, like three years from now, what will this look like?

Keone:
[31:18] Yeah, that's a great question. The way to think about it is in terms of the variables that can be much larger in the future: the number of people using the chain and adding to the state, the number of validators participating in consensus, the number of full nodes out there, and overall transaction usage, the amount of transaction flow going through the system. To answer the question directly on the state side: we've tested nodes that have up to 30 terabytes of SSD without any issue. So that's a 15x growth in state that is possible relative to the baseline spec we have right now. For context, Ethereum is about 250 gigs of state right now, so that's two orders of magnitude, over 100x, more state than Ethereum has right now. And I think that can continue to scale as well.

Ryan:
[32:19] So somebody with an at-home validator would need to add those solid-state drives to their existing rig over time. It's still possible to do from home; they would just need more solid-state drives, yes?

Keone:
[32:32] That's right, yeah. And for context, I think the cost of assembling that machine that I mentioned right now is about $1,500.

Ryan:
[32:41] So maybe talk for a minute about the direction the Ethereum roadmap is going in, because I would like your perspective on it. It is very different from what Monad is doing. Again, the emphasis is on SNARKs: validators no longer do the full validation of each block on chain; they turn into verifiers that verify proofs, and the block validation happens elsewhere in the block production process. What do you think of that overall design? The so-called lean Ethereum roadmap that Justin Drake and others are talking about.

Keone:
[33:15] I'm excited about it. as I said, it's a very different direction. And when designing a system, you have to choose a direction and then execute, deliver, optimize. And on some level, like, you know, we evaluate the result years down the road when we see what the system is capable of. It's like building a rocket ship. And, you know, like a bunch of scientists can get together and, decide that a rocket ship that has, you know, fins that are like this certain shape are optimal. And then another one would be like, nope, there needs to be, I don't know, like a spoke hub and spokes. I'm just making this really dumb analogy, but you choose something, you test it a lot, but ultimately you build it and we get to see the results. And I think the thing that's exciting is that, you know, like all the things I've described are here today right now in Monad, like it isn't, a roadmap. It's here right now. It's open source. Anyone can go look at it. Anyone can contribute to it or learn from it. It's just like, it's a thing that's here now. And I think that can push EVM usage forward substantially. And I will also say that not.

Keone:
[34:30] Although Ethereum is certainly going down this lean Ethereum ZK route, there are still people in the Ethereum research community working on things that fit well into the system. People are working on single-slot finality, or I think the current proposal is three-slot finality, something like that. There are researchers working on asynchronous execution in Ethereum right now. So there is actually some dovetailing of the research interests and roadmaps. But the cool thing is that for some of these things, we have them here in Monad right now. Anyone can look at the code, the Ethereum community can look at the code and adopt it. This is all very exciting.

Keone:
[35:18] And we're excited to work with Ethereum researchers on their own versions of these things. Because every blockchain is different, you can't just port code directly into another code base. But I think some of the ideas and architectures that have been tried are potentially translatable, and certainly something that can be collaborated on.

Ryan:
[35:39] And Keone, this whole stack that we've talked about so far, are you saying this is open source? So anybody in the Ethereum community can basically take a look at this and adopt it at some level if they want to. For the ETH maxis in the Bankless audience listening to this, what benefit does your development work on the EVM provide Ethereum in the future, do you think?

Keone:
[36:03] I think it's really valuable to have a fully functioning system that exists in production and that proves out the benefit or cost of various design decisions. I'll give you one example: asynchronous execution. Like I was saying, this is something some folks in Ethereum research are interested in, and for very good reason, which is that it's actually very inefficient that consensus and execution are interleaved in Ethereum and other blockchains right now. It massively reduces the time budget for execution, because execution has to get squeezed into a very small portion of the block time while consensus takes up most of it. And it's one of the foundational improvements in Monad.

Keone:
It will improve Ethereum if it can be implemented well there. But the process of implementing it has definitely been a significant effort, in part because there are a lot of interactions with other aspects of the system.

Keone:
[37:11] EIP-7702 is a really good example of this, because EIP-7702 allows EOAs, end-user accounts, to have code themselves and thus become smart contract accounts. It's a really cool innovation that makes account abstraction a lot more available to all the people that have Ethereum accounts right now. And the downstream benefit is that Ethereum accounts become a lot more powerful, because we can have different mechanisms of authentication like passkeys, or native multisigs, or social recovery for accounts, things like that. Those are all enabled by account abstraction, and that's specifically enabled by EIP-7702. But I can tell you that the process of making asynchronous execution work with EIP-7702 was a massive effort that involved developing a new way for consensus to produce blocks and to interact with execution. Anyway, the good news for the Ethereum research community is that all of this has already been explored. We went down a lot of paths that didn't work, found one that did, and now anyone can look at that and just take it.
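For reference, the mechanism Keone describes works, per EIP-7702, by letting a signed authorization set an EOA's code to a 23-byte "delegation designator": the bytes 0xef0100 followed by the 20-byte address of the contract being delegated to. A minimal sketch of that designator (the delegate address below is a made-up placeholder):

```python
# Sketch of the EIP-7702 delegation designator: once an authorization is
# applied, the EOA's code becomes 0xef0100 || delegate_address (23 bytes).
DELEGATION_PREFIX = bytes.fromhex("ef0100")

def delegation_designator(delegate_address: bytes) -> bytes:
    """Build the 23-byte code an EOA carries after delegating to a contract."""
    assert len(delegate_address) == 20, "expects a 20-byte EVM address"
    return DELEGATION_PREFIX + delegate_address

# Hypothetical delegate address, purely for illustration.
addr = bytes.fromhex("11" * 20)
code = delegation_designator(addr)
print(len(code), code[:3].hex())
```

Any EVM client that sees code of this shape on an account knows to execute the delegate's code in the EOA's context, which is what makes passkeys, multisigs, and social recovery reachable from ordinary accounts.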

Ryan:
[38:36] Before this, Keone, you worked at Jump Trading. I believe you were on the high-frequency trading team at Jump. What did Jump, and high-frequency trading, teach you about scaling a blockchain?

Keone:
[38:48] I was there for eight years, and over the course of those eight years, the trading system that my team built evolved quite a bit. From a performance perspective, it went from tens of microseconds of latency to below a single microsecond of latency. It also went from a proof-of-concept system, a very intuitively defined system, into something that was really finely tuned and honed for efficiency. Every day we analyzed a lot of data, made decisions based on the data we could see, and ran a lot of experiments. I worked in a small team, learned how to ship a product rapidly and iteratively, and I learned how to take risk and how to manage risk. It was a very good precursor to entrepreneurship, although leading a team that is doing engineering work but also a lot of non-engineering work and ecosystem support work is quite different as well. And so that's been really fun over the past couple of years.

Ryan:
[40:04] Were you there during the exciting times of the Terra Luna, you know, trades and the downfall or had you already left by then?

Keone:
[40:12] So I left Jump in January 2022. I think Terra collapsed in May of that year. I think my last week was the week that Wormhole got hacked. So that was crazy. And it's crazy how far the industry has come since then. I remember when the

Keone:
[40:32] Monad community first formed, it was the week of the FTX collapse. And we've been through some pretty tough bear markets and conditions since then. But at the end of the day, it's like, because we know, because there's a vision and because there's a North Star that's very clear and very needed, there's no fear.

Ryan:
[40:54] I know Jump has gotten into some client development as well. One thing they had been working on was the Solana Firedancer client, which I believe is supposed to be a massively high-throughput Solana client. I'm sure you don't have particular insights into the Firedancer project, or maybe you do, but having gone through similar engineering initiatives, do you have any sense of why Firedancer hasn't shipped yet?

Keone:
[41:22] That's a great question. I think.

Keone:
[41:24] That I actually

Keone:
Don't know, now that you ask, and I haven't thought about Firedancer in a little while, so it's kind of crazy. I do think that the Solana code base is massive, and there's a lot of tech debt, and situations where the spec is literally just the code base, at least historically. So I think maybe one thing that was challenging for Firedancer, in developing a second client, is that in some cases there is no spec; the spec is just the first client. So in order to build something that's to spec, you have to first coordinate with the maintainers of the other client

Keone:
[42:09] to define what the spec is. And yeah, I just think that maybe due to tech debt that had accumulated, they needed to work through some of that. That would be my impression.

Ryan:
[42:20] So Keone, could you give us the throughput stats for Monad at launch? I think you mentioned 10,000 transactions per second was the goal. How about block times and some of the other performance stats?

Keone:
[42:34] Yeah. Monad delivers 400-millisecond block times with two-block finality, so two times 400, or 800 milliseconds to finality. Every block currently has a gas limit of 200 million gas. So if you divide 200 million by 0.4, you get 500 million gas per second, which is great, which enables a lot of throughput, a lot of usages. A simple transfer is 21,000 gas. If you divide 500 million by 21,000, it's about 24,000 transfers per second. Or for more complex transactions,

Keone:
[43:22] You know, say 50,000

Keone:
[43:23] Gas transaction, then 500 million divided by 50,000 is 10,000 TPS.
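The arithmetic Keone walks through can be replayed directly; this sketch just uses the figures quoted in the conversation (200M gas per 400 ms block, 21,000 gas for a simple transfer, 50,000 gas for a heavier transaction):

```python
# Replaying the quoted throughput figures for Monad at launch.
GAS_PER_BLOCK = 200_000_000   # 200M gas limit per block
BLOCK_TIME_MS = 400           # 400 ms block time

gas_per_second = GAS_PER_BLOCK * 1000 // BLOCK_TIME_MS   # 500M gas/s
simple_tps = gas_per_second / 21_000                     # simple transfers: ~23,810/s
complex_tps = gas_per_second / 50_000                    # 50k-gas transactions: 10,000/s
finality_ms = 2 * BLOCK_TIME_MS                          # two-block finality: 800 ms

print(gas_per_second, round(simple_tps), complex_tps, finality_ms)
```

The "about 24,000 transfers per second" in the conversation is this 23,810 figure rounded up.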

Ryan:
[43:28] One thing that's interesting there is the 400-millisecond block times. Something I've been thinking a lot about is Vitalik's quip, I forget where he said this, I also asked him about it the last time he came on Bankless, that if you focus too much on latency, if you become a high-frequency trading blockchain, you lose your soul. I think what he's referring to, essentially, is that when you start playing the very low latency, millisecond block time type of game, you start to invite centralization in: co-location, that sort of thing. In fact, earlier this week at DevConnect, this is a quote from his presentation, he said latency is the inherent cost of decentralization. If you want a geographically distributed, neutral system that can be participated in worldwide, it's impossible for it to have a latency of 50 milliseconds, not 400, 50 milliseconds. If it did have that low latency, then all activity would eventually be concentrated in one city. Having worked at Jump, you've built HFT types of engines; I'm sure you're very familiar with the types of optimizations and games HFT traders actually play. What do you make of this? It would seem that 400-millisecond block times might actually decrease Monad's ability to stay decentralized, if Vitalik is right. Do you think he has a useful critique here?

Keone:
[44:57] I completely agree with Vitalik. I think that decentralization is the North Star for crypto and is certainly the North Star for Ethereum and for Monad. And that means that.

Keone:
[45:14] There needs to be geographically distributed and decentralized block production and consensus. And that, naturally, due to the laws of physics and how big the world is, means there is a floor on the block times that are possible. If two nodes are on opposite sides of the world, say Sydney and New York, the transit time from one to the other is on the order of 200 milliseconds. I think it's a little bit less than that, like 170 or something, with optimal fiber. And that's literally just how far apart they are. That does have an impact on what the block times can be. The cool thing about how this all has played out, though, is that 400-millisecond block times are still very fast from a human perspective. So if you are a user who's trading on a decentralized exchange, or using a social app, or playing a game, that 400 milliseconds of latency is close to imperceptible to you as a human. So there's a nice, happy medium where we have decentralization, we have global block production, but it's still really fast from the user perspective. And...

Keone:
[46:40] Yes, in centralized exchanges there's this, I don't even want to say tendency, it is almost just factual, that in order to compete as a high-frequency trader on a centralized exchange, you need to co-locate. You need to have a server in the same data center where the matching engine is. And the exchanges actually go to great lengths to normalize all the cable lengths between all the servers in the data center and the matching engine, so there is no unfairness. And in years past, like when I first started working in 2011,

Keone:
There were people whose jobs were to test out different servers in a data center and try to see if there was one that had a faster connection to the exchange. So I think incentives ultimately drive behavior. And in the case of HFT, the incentive, in reacting to a centralized exchange, is to try to get as close as possible. And that pushes up the cost of operating, because everyone needs to rent servers in that one data center, which then allows the data center to charge a lot of money. Another funny anecdote, if I'm rambling a little bit: CME, the Chicago Mercantile Exchange, actually moved their entire operations from a data center that they didn't control, that they were just renting, to a new data center that they owned in a different location, so that they could charge rent on all of the servers that were next to them. Because before, they were kind of creating value for someone else, and that was a bad commercial decision. So they moved everything in order to be in a data center that they controlled. And I think it's just an example of where, yes, in an environment where we have centralized actors and centralized forces, there's going to be this push toward

Keone:
value extraction and middlemen coming in. And the benefit of a decentralized system is that it puts up a bulwark against those waves of centralization. That's really where something special can happen.

Ryan:
[48:50] Yeah. So do you think you can hold that at 400 milliseconds, I guess, is the question? Because there's some debate in the Ethereum community, and there's probably no magic number, of course; it's a spectrum, right? Ethereum is at 12 seconds right now. There's talk of dropping to six, to four, and then two. Get lower than two, though, and maybe you get into some of the HFT wars where, as Vitalik said, you're kind of destined to lose your soul. The incentives for centralization become too powerful to overcome, and

Keone:
[49:18] Yeah, sure, it doesn't have to be 12 seconds,

Ryan:
[49:20] But at 400 milliseconds, do you think you can really hold the line against these HFT powers of centralization at Monad?

Keone:
[49:27] I think the way that I would frame it is that there are advantages that centralized actors tend to have at the start because, as you said, they can deliver a trading experience measured in milliseconds. So the question you're asking, I would actually flip into a statement, which is basically that it is extremely important to re-engineer decentralized systems to be at the limit of what is possible while being decentralized, while allowing everyone to participate, and while having minimal hardware requirements. We should do whatever it takes to make decentralized systems more performant and more capable, so that they can exist at the limit of what's possible and thus be competitive on what otherwise would be a very unfair playing field between decentralized and centralized systems. And then, when you accomplish that and you also get the network effects that come from a permissionless, borderless, credibly neutral network that people can coordinate on without having to trust each

Keone:
[50:46] other. That's where something really special can happen.

Ryan:
[50:49] 400 milliseconds; if you got to 100 milliseconds or 50 milliseconds, would that be kind of like too much? Would you be uncomfortable with that? Is 400 milliseconds kind of your line?

Keone:
[50:57] My line is wherever there is a compromise on decentralization. So the line is close to 400 milliseconds, because that is the block time at which we can still have globally distributed validators without that centralizing force. I think it is literally impossible to have 100-millisecond block times while still preserving that property. So my line is fundamentally downstream of the physical properties of the earth and of fiber-optic cables.
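The physical floor Keone is describing can be ballparked. Light in fiber travels at roughly two-thirds the speed of light in vacuum, and real cable routes run longer than the great-circle path; the distance and route-overhead numbers below are rough illustrative assumptions, not measurements:

```python
# Rough, illustrative estimate of the one-way latency floor between
# far-apart validators; distance and overhead are assumed values.
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3   # light in fiber is roughly 2/3 c

great_circle_km = 16_000    # Sydney to New York, roughly
route_overhead = 1.5        # assumed: real fiber paths exceed the great circle

one_way_ms = great_circle_km * route_overhead / FIBER_SPEED_KM_S * 1000
print(round(one_way_ms))    # ~120 ms one way, before routing and equipment delays
```

Varying these assumptions moves the estimate, but it stays in the low hundreds of milliseconds for antipodal pairs, which is why sub-100-millisecond block times and globally distributed validators are hard to reconcile.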

Ryan:
[51:32] Can't break the laws of physics, can we? Can we talk for a minute about MEV? Hasu from Flashbots and others have impressed upon me that MEV is actually another scale dimension that's important for blockchains. Maximal extractable value is kind of a tax on users of the system, and it certainly can incentivize some centralization as well. I know all chains struggle with this, at least to some degree. Ethereum has seemed to find some ways to manage it; Solana is working on it. Does Monad bring anything special with respect to MEV extraction, ways to mitigate it? Do you have a philosophy for this?

Keone:
[52:16] Yeah, a couple of things to point out. I think that there are forms of MEV that are toxic, that are bad for users. Sandwiching and front-running are bad for users and an ideal system would be resilient to that and would not have that be a possibility.

Keone:
Where we are right now in blockchains in general is that sandwiching can happen because blockchains have public mempools: when transactions are submitted, they sit in a pending state before they're incorporated into blocks and the ordering is chosen, and the leader has discretion about how they order those transactions. So that's where we are right now, and in my opinion it is a huge problem for the industry overall. Ethereum and Solana have each seen a third-party system develop that allows arbitrageurs to express ordering preferences and submit them as bundles to validators, and have the validators incorporate those bundles in return for an extra fee that the arbitrageur submits. And it's a good news, bad news situation: on the good side, that fee mostly goes to stakers, so it's extra revenue for them. On the downside, it creates an efficient market for people to submit these ordering preferences.

Keone:
[53:58] Two things that I want to point out. The first is that Monad is.

Keone:
[54:02] Kind of like

Keone:
Shaking up the Boggle dice, because asynchronous execution means that leaders generally don't know the state of the world right before they build a block; because of the lag between consensus and execution, they're building off of a lagged state. So the default Monad client implementation only takes into account the priority fees and does a priority gas auction, which is the way Ethereum was several years ago. And I think in the short term there will be less MEV happening on Monad because of this property, as well as the fact that the systems that are, I believe, building third-party MEV solutions on Monad are only enabling bundles of size two. Typically someone submits a bundle trying to land a transaction right after a pending transaction, which is generally a less toxic form of MEV. Usually it's an Oracle update showing up and unlocking a liquidation opportunity; anyone can submit a transaction that does that liquidation, and there is a small amount of profit available to the person that wins that competition.

Keone:
[55:21] But this is not front-running. This is like an opportunity would exist no matter what due to the Oracle update. Anyway, so my point is that in the short term, I think that Monad is better positioned right now than maybe some other blockchains are. It's a temporary situation because the ecosystem is still very nascent. But in the longer term, what I would like to see is pre-trade privacy so that.

Keone:
Blocks are built, but they're built in a way where the builder doesn't know what's in the transactions until after the builder has already committed to that block. And then afterward, there's some sort of unmasking of all of those transactions. Because pre-trade privacy, or rather pre-block-building privacy, is the thing that will ultimately address this MEV problem.

Ryan:
[56:12] Is there a world where Monad has layer twos? I was almost going to ask about your philosophy of Monad in terms of, these are terms we used to use more, though not so much anymore, monolithic versus modular design. Monolithic meaning it's just one single flat state, versus modular meaning you might have layer twos that are built on top of Monad. Do you have a perspective on this?

Keone:
I think that there's a huge amount of value that comes from shared global state that has atomic composability, which is to say what I guess we would call the monolithic approach. Although when I say monolithic, I don't like the association that has of implying a big, chunky-hardware, data-center chain thing. You could use...

Ryan:
[57:00] The euphemism like integrated approach,

Keone:
Integrated state, yes. Although, to be honest, I feel like people are using the modular versus monolithic terms a lot less than they did a year, a year and a half ago.

Ryan:
[57:13] So let's talk about maybe the level of decentralization of Monad at mainnet launch. One other question, I'm not sure how much it applies, but are there any kind of kill switches or backdoors or admin keys that you guys are going to launch with? What parts of Monad are not decentralized at launch?

Keone:
[57:37] That's a great question. There are no kill switches, no admin keys, no multi-sigs. That's the really cool thing about building a decentralized layer one: everything is just enforced through code and a decentralized validator set that has to make decisions for itself about whether to adopt code changes.

Ryan:
[58:00] What do you expect the launch ecosystem to look like? So on day one, what sort of things can people do on the monad chain?

Keone:
[58:07] Well, first of all, I think that even the existing integrations, basically delivering a fully backward compatible system that integrates with all of the beloved dev tooling within the EVM ecosystem while also keeping up really high performance, that alone is something I'm really excited about. There are great tools in the EVM ecosystem like Tenderly or Falcon, Blockade, MetaMask, Chainlink, stablecoin issuers.

Ryan:
[58:43] All of that's going to work out of the box. Everything that works for the EVM and Ethereum ecosystem right now, that's going to work out of the box.

Keone:
[58:50] It will. It wasn't an easy process to get there in some cases. I'll give you an example: certain simulation platforms have their own modified Geth client in order to deliver all the simulation that they do. But of course, Monad doesn't use Geth; Monad uses a completely new tech stack. So they had to retrofit a lot of things to make it work while keeping up with the performance of the chain. But yes, that's the whole exciting thing: we do all that work once, so developers don't have to worry about it. They can use all the same tooling.

Ryan:
[59:27] Is there a single client at launch? Just the client produced by, is it the Monad Foundation?

Keone:
[59:33] The client is produced by Category Labs, previously known as Monad Labs. I'm really hopeful that a couple of years from now there are multiple, or many, clients running the Monad protocol. That would be something that would give me a lot of joy. But definitely, for me, the story about decentralization is, from a protocol design perspective, enabling that level of decentralization, and then over time getting multiple clients in place and having an even larger validator set, because that is the North Star.

Ryan:
[1:00:11] Let's talk about the token for a minute, because you are a layer one blockchain, of course, proof of stake, so there is a token involved. That token is going to go live, I believe, at mainnet. Now, currently there is a token sale going on, which is sort of a first, the first that I've seen, on the Coinbase platform. It makes me think that initial coin offerings are back, or something like it. So can you talk about the coin offering of Monad and how you worked with Coinbase to make that happen? Give some background for folks who haven't seen what's going on there, and then share a little more about what the token does in the Monad network.

Keone:
[1:00:56] We were extremely excited that Monad is the first project on Coinbase's new token sales platform because it really was the opportunity to allow many more people to get access to the token and thus achieve much broader distribution of the token before mainnet launch compared to what projects have been able to do in the past. In the past, I think that there's been a very airdrop oriented approach for distributing tokens ahead of launch. And there are definitely some nice things about airdrops, but it's also quite challenging to distribute a token through airdrops because

Keone:
[1:01:43] there are tons of airdrop hunters and people running bot farms to Sybil protocols. And at the end of the day, what I really care about is having as broad of a set of holders of the MON token as possible and contributing to the network's decentralization through that broad holder set.

Keone:
[1:02:04] And the token sale went through a really reputable platform like Coinbase, which has an existing practice for how they onboard users. That ultimately allows the token to be distributed more fairly, and I think that's just important, at the genesis of mainnet, to successful long-term growth.

Ryan:
[1:02:27] We would have never seen Coinbase kind of launch a token sale in any of the years that I've been in crypto. I mean, this really is the first. What's changed from an environment perspective, from a regulatory perspective to actually make this happen?

Keone:
[1:02:42] I think it's actually surprising to me that this hasn't happened in the past, because for this Coinbase token sale, and as I understand it for the way they're going to approach their token sales in general, these are token sales of mainnet-ready projects that are functionally complete, that are about to turn into public mainnets and potentially list a token on an exchange. So it's a more mature set of projects with a much more stringent disclosure process compared to what has happened either in the past year with other token sales, or in the 2017, 2018 era of token sales. But yeah, I think it's a combination of Coinbase believing that now is the right time to expand their offerings and deliver a product that gives retail users the opportunity to participate in earlier stage projects that are still quite mature and that have a high degree of disclosures and operating practices, combined with the greater interest in token sales over the past year and the proliferation of such platforms.

Ryan:
[1:04:12] So let's say I have the Monad token. Once Monad is mainnet, what can I do with it? So I'm assuming it's like another layer one cryptocurrency asset in which I pay for gas fees in Monad. I'm assuming there's a way for me to stake my Monad coins and earn some sort of return. I'm assuming if I have a certain amount maybe of Monad coins, I could spin up my own validator and start to stake Monad from home. Is all of that right? What else can I do with the token?

Keone:
[1:04:46] Yeah, those are all correct assumptions. Maybe something to point out also is that all insider tokens are locked and thus are not eligible for staking, which I think is a unique aspect of Monad's launch relative to other projects. Sorry, the part about tokens being locked for insiders, that is typical. What is atypical is that those locked tokens cannot be staked.

Ryan:
[1:05:13] Which means they will not receive any, I'm sure there's some sort of network issuance, some sort of block reward that you're providing to stakers and all of those locked tokens would be ineligible for that block reward. Is that correct?

Keone:
[1:05:25] That's right. Until they become unlocked, they're ineligible for staking. So that means that the opportunity to stake is really.

Keone:
[1:05:34] People that are

Keone:
Receiving an airdrop, or acquiring the token through the Coinbase token sale, or on the secondary market after that, those would be the people that are able to stake.

Ryan:
[1:05:48] How did you think about sort of the issuance schedule for monad tokens? What does that look like?

Keone:
[1:05:55] And by the way,

Ryan:
[1:05:56] Are there any slashing penalties if a validator commits some sort of offense against the network?

Keone:
[1:06:02] Yeah, the block reward is 25 MON per block, which annualizes to 2-billion-ish MON per year. The total supply is 100 billion, so that's a 2% inflation rate in year one. The inflation rate is chosen to be as low as possible while still being high enough to reward participation in the network as a staker. So it's a bit of a needle-threading thing, where we think this is the optimum. I will say two things. One is that a lower issuance rate means there is less dilution for people that are not participating as stakers.

Keone:
Staking is a really important role in the network, and we wouldn't want to say that it's not important and worthy of rewards. But on the other hand, with a lot of blockchains there are a lot of cases

Keone:
where the rate is too high, and it does a couple of things. One is it raises the effective cost of capital for that asset in DeFi. A second is that it penalizes all the people that hold the native token but are actively participating in DeFi or other ecosystem activities and are not able to stake it. So having a low-ish inflation rate ultimately ensures that there's not too much of a penalty for not staking.

Ryan:
[1:07:35] Does the issuance schedule go down over time? You know, the way Bitcoin's does, where it halves every once in a while? Or is it a bit more algorithmic, the way Ethereum's issuance schedule is based on the number of validators? What's the policy there?

Keone:
The policy is just flat issuance per block, and thus basically flat issuance per year, assuming the same number of blocks per year.

Ryan:
So it's like 2% forever, kind of, assuming the same number of blocks per year.

Keone:
Well, a little bit less. It's 2% in year one, and then it goes down a little in year two, because the denominator is 102 billion instead of 100 billion, but the numerator is still 2 billion.
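These issuance figures can be replayed with the round numbers quoted in the conversation (25 MON per 400 ms block, 100 billion total supply; a 365-day year assumed):

```python
# Replaying the quoted issuance arithmetic.
MON_PER_BLOCK = 25
BLOCKS_PER_YEAR = 365 * 24 * 3600 * 1000 // 400   # 400 ms blocks -> 78,840,000/yr
TOTAL_SUPPLY = 100_000_000_000                    # 100B MON

annual_issuance = MON_PER_BLOCK * BLOCKS_PER_YEAR # 1.971B MON ("2 billion-ish")
year1_pct = 100 * annual_issuance / TOTAL_SUPPLY
year2_pct = 100 * annual_issuance / (TOTAL_SUPPLY + annual_issuance)

print(round(year1_pct, 2), round(year2_pct, 2))   # 1.97 1.93
```

With a fixed per-block reward and a growing supply, the percentage rate drifts down a few basis points each year, which is the effect Keone describes.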

Ryan:
[1:08:12] I see. Okay. Interesting. In percentage terms. What's your take on token valuation in general for layer ones? In crypto this has been an ongoing discussion for a while, and there have been many different takes at it. I think we're now at a phase where there are kind of two ways of viewing an asset, and a given asset may fall in one or both of these buckets: viewing it based on revenue, how much revenue the token actually returns to token holders, or viewing the asset as a monetary asset, a store of value. To give canonical examples of both: a store-of-value asset is Bitcoin, which is definitely not valued on its discounted cash flows, definitely not valued on revenue. A revenue asset is more something like, say, Aave, which is valued as a discounted cash flow of future revenues. I've argued, and Bankless has argued, that

Ryan:
[1:09:12] Of the asset maybe hits both of those boxes,

Ryan:
[1:09:15] Is more on the store of value side. I think the more decentralized your layer one network, the more you can get away with being on the store of value side of the equation. But some people think that's just meme science, that this is not really real.

Ryan:
[1:09:29] There is no such thing as store of value. It's all kind of a narrative. What's your take on token valuation for something like the Monad token?

Keone:
[1:09:37] I would say, first, that at the end of the day the most important thing is network effects and the value that's ultimately being unlocked for all of the users of that network. Different systems that enable that value creation, that are downstream of it, will ultimately perhaps inherit some of that value through different mechanisms, like transaction fee processing, and certain effects from being the native currency of an economy that is substantial and growing and that's enabling a lot of value transfer and value creation. But at the end of the day, the thing that the crypto industry needs to focus on most is just growing the amount of value that's being created for the end users, and everything else will follow from that.

Ryan:
[1:10:30] We've talked about Monad versus Solana. We've talked about it in comparison to Ethereum. But I think the biggest comparison that I often see for Monad is MegaEth for some reason. So MegaEth is a very high performance layer two that's coming out, I believe, at the same time. I feel like you guys are like, I don't know, talking to one another about your

Ryan:
[1:10:51] release schedules and like timing it. So you're doing things very closely. I mean, token sale around the same time. I think mainnet is happening around the same time. Why do people compare Monad to MegaEth? Is it just a product of you've been building together and you have similar timelines, or are there some underlying similarities there?

Keone:
[1:11:10] I think that there are certainly some surface level similarities in the sense that people from both projects probably talk about performance and the need to make the EVM more performant. I think the differences, though, are in the approach. And I would say for Monad, we're just really focused on decentralization and the complete problem of making consensus really performant and building a really performant decentralized layer one, ultimately to address the bottlenecks and trade-offs that exist right now without those foundational software improvements. I think MegaEth is maybe, from what I understand, pretty focused on hardware assumptions and having really high hardware requirements. If you're just asking me to compare them, I would say that there's a spectrum from low hardware requirements to really high hardware requirements. And I can't speak for them, but I think on the Monad side, there's just a really high focus on allowing anyone to run a node and really making it cheap and feasible for everyone to participate.

Ryan:
[1:12:30] I think the MegaEth reply would basically be: oh, well, the decentralization part, the consensus decentralization part — as a layer two, we've effectively outsourced that, or are in the process of outsourcing that, to Ethereum. So we don't really need to think about decentralizing our consensus layer, because that's what Ethereum is there for. And maybe there's some sequencer stuff to decentralize or something like that. But I think that would be the reply. In fact, that's the entire layer two design. So maybe it comes full circle to, you know, comparing layer twos to a high throughput EVM layer one. But what do you make of that response?

Keone:
[1:13:11] I think that there are different layer two designs and plans that would need to be evaluated against each other with consideration for...

Keone:
[1:13:23] How they're utilizing Ethereum for data availability or not, what the trust assumptions are.

Keone:
[1:13:30] How feasible it is for, for example, a high-performance optimistic rollup to actually have other nodes keeping up and verifying, because that's really what an optimistic rollup is assuming: that there are a bunch of other nodes that are all keeping up and independently verifying so that they can note if there is a fraud and raise a fraud proof. So I think the debate between different layer two systems will kind of focus on that. I agree with your framing. Ultimately, if the question is the Monad approach of making the layer one really performant and introducing new technologies to achieve a degree of scale all within a singular shared global state that's fully globally decentralized, versus the approach of having a constellation of different layer twos that all utilize Ethereum for a component of the work but not others, then I will just say that the Monad design delivers performance that is needed right now, and also delivers a high degree of decentralization right now, and delivers that fast finality right now. Some of the struggles that

Keone:
[1:14:49] exist right now in other ecosystems are related to slow finality. And in particular, that problem is already kind of addressed in Monad.
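
The optimistic-rollup assumption Keone describes — verifier nodes keeping up, re-executing, and raising a fraud proof on a mismatch — can be sketched in a few lines. Everything here is an illustrative toy (the state model, function names, and numbers are invented, not any real rollup's code):

```python
# Toy model of the optimistic-rollup verification assumption: a sequencer
# posts a claimed post-state; honest verifiers re-execute the batch and
# raise a fraud proof if their result differs. All names are hypothetical.

def execute_batch(state, batch):
    """Deterministic toy state transition: apply (account, delta) transfers."""
    new_state = dict(state)
    for account, delta in batch:
        new_state[account] = new_state.get(account, 0) + delta
    return new_state

def verify_claim(state, batch, claimed_state):
    """Re-execute independently; return a fraud proof on mismatch, else None."""
    recomputed = execute_batch(state, batch)
    if recomputed == claimed_state:
        return None
    return {"fraud": True, "expected": recomputed}

state = {"alice": 10, "bob": 5}
batch = [("alice", -3), ("bob", 3)]

honest = execute_batch(state, batch)               # what the sequencer should post
assert verify_claim(state, batch, honest) is None  # no fraud proof raised

bad = {"alice": 10, "bob": 8}                      # sequencer overstates balances
assert verify_claim(state, batch, bad)["fraud"]    # verifier raises a fraud proof
```

The sketch makes Keone's point concrete: the security model only holds if enough independent nodes can actually keep up with re-execution, which is why high hardware requirements cut against it.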

Ryan:
[1:14:59] Take us to the year 2030. If Monad is successful,

Ryan:
[1:15:03] What does that look like?

Keone:
[1:15:04] It really looks like a couple of breakout apps that everybody uses that are powered by decentralized rails. It looks like many more people having access to financial tools that are not developed just in their country but are built for a global audience. It means everyone having access to dollars. It means everyone having access to competitive yield markets where they can earn yield on their dollar deposits at a competitive rate, rather than whatever is just local to what their bank offers them. It means people being able to build businesses and, you know, take out loans or get access to capital markets that are better than whatever their local market offers them. It's really about a more interconnected world that can coordinate on top of a decentralized trustless layer.

Ryan:
[1:16:05] Talk about a failure mode. So if Monad fails, like why do you think it will fail?

Keone:
[1:16:10] That's a great question. I haven't envisioned failure in a specific way. I think that

Keone:
[1:16:18] The most important thing is execution and speed,

Keone:
[1:16:23] And those are really related to both technology and adoption. I guess the failure mode would be that no one cares about the properties that we value deeply and that we're championing, and that other folks in the ecosystem have been

Keone:
[1:16:46] championing for a long time in a way that's very inspiring to us. I think that's really the failure mode.

Ryan:
[1:16:51] Keone, thank you so much for joining us today. This has been great. I'm not sure when this episode is going out, but remind us of the mainnet date. Is it the 24th of November?

Keone:
[1:17:02] That's correct. Next Monday.

Ryan:
[1:17:04] All right. So it's next Monday at the time of recording. This might go out earlier

Ryan:
[1:17:07] than that, or it might be on the 25th. So if we are living in the future here and you're listening to this, then the Monad mainnet may be available. And what's your advice? What should people do their very first thing if they want to go check this out?

Keone:
[1:17:21] I think that people should check out the validator map. It's just like a reminder of the physical manifestation of decentralization. If you Google for the validator map for any blockchain, you get one for Ethereum, you get one for Monad, and you don't get one for many other blockchains. So I think you should check that out first. It doesn't require having a wallet or anything.

Ryan:
[1:17:46] And what should people be impressed by? I think I have looked at this: like 300 validators or so, fairly evenly distributed across various geographies, right?

Keone:
[1:17:54] Yeah, it's really showing the decentralization and the performance and the block times and the pace at which Monad moves.

Ryan:
[1:18:02] Very good.

Ryan:
[1:18:03] We'll include a link in the show notes for that. Keone, thank you so much for joining us today.

Keone:
Thanks for having me, Ryan.

Ryan:
Bankless Nation, got to let you know, of course, crypto is risky. So are new crypto networks. You could lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the Bankless journey. Thanks a lot.
