How Ethereum Can Achieve a Simplified Architecture Rivaling Bitcoin


Ethereum aims to serve as the world’s ledger—a foundational platform for storing digital assets and critical records, supporting finance, governance, and high-value data authentication. To fulfill this vision, it must achieve both scalability and resilience. The upcoming Fusaka hard fork is expected to increase data availability for Layer 2 (L2) solutions by tenfold, and the proposed 2026 roadmap includes similar large-scale enhancements for Layer 1 (L1). At the same time, Ethereum continues to improve through the completed Merge to Proof of Stake, greater client diversity, advances in zero-knowledge verifiability, work toward quantum resistance, and more robust applications.

This article focuses on an often overlooked yet crucial aspect of resilience—and by extension, scalability—which is simplicity in protocol design.

Bitcoin is widely admired for its elegant and minimalist protocol architecture.

A blockchain consists of a sequence of blocks, each cryptographically linked to the previous one via hashing. Block validity is verified through Proof of Work, where the hash must begin with a certain number of zeros. Each block contains multiple transactions that spend coins originating either from mining rewards or prior transaction outputs. This constitutes the core logic of Bitcoin. Even a high school student could grasp the essentials, and a developer could implement a basic client as a side project.
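To illustrate just how small that core logic is, here is a toy sketch of Bitcoin-style Proof of Work validity in Python. It is a simplification: real block headers have a fixed 80-byte layout and encode a compact difficulty target rather than a plain zero-bit count, but the double-SHA-256 hash check is the genuine rule.

```python
import hashlib

def meets_target(header: bytes, difficulty_bits: int) -> bool:
    # Bitcoin hashes the block header twice with SHA-256; validity means
    # the hash, read as an integer, falls below a target. For a
    # power-of-two target this is "starts with N zero bits".
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

def mine(header_prefix: bytes, difficulty_bits: int) -> int:
    # Try successive nonces until the header hash meets the target.
    nonce = 0
    while not meets_target(header_prefix + nonce.to_bytes(8, "little"),
                           difficulty_bits):
        nonce += 1
    return nonce
```

With an 8-bit difficulty, mining a valid nonce takes roughly 256 attempts, which makes the latency/security trade-off of difficulty tuning easy to see.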

Maintaining simplicity offers Bitcoin and Ethereum critical advantages as neutral global base layers: a simpler protocol is easier to analyze, audit, and secure; cheaper to implement and maintain; and more accessible to new contributors.

Historically, Ethereum has not always prioritized simplicity—often in pursuit of short-term gains that proved ineffective. This has led to elevated development costs, recurring security risks, and a relatively closed development culture. In the following sections, we explore how Ethereum can approach Bitcoin’s level of simplicity over the next five years.

Simplifying the Consensus Layer

A new consensus layer proposal—previously referred to as the “beam chain”—aims to integrate lessons from a decade of research in consensus theory, zero-knowledge proofs (ZK-SNARKs), and staking economics. This new architecture would represent a long-term optimal consensus mechanism for Ethereum, offering significant simplification over the current beacon chain.

Simplifying the Execution Layer

The Ethereum Virtual Machine (EVM) has grown increasingly complex over time. Many design choices—such as a 256-bit architecture optimized for cryptographic operations that are now less relevant, and underused precompiles for narrow use cases—have added needless overhead.

Incremental fixes have proven insufficient. Removing the SELFDESTRUCT opcode required extensive effort for limited gain, and recent debates around EVM Object Format (EOF) illustrate how even moderate changes can be contentious.

An alternative path is a more radical transition: instead of introducing moderately disruptive upgrades for 1.5x gains, move directly to a new virtual machine architecture capable of 100x improvements. Similar to the Merge, this approach would reduce the number of breaking changes while increasing the value of each shift. Specifically, adopting RISC-V or the virtual machine used in Ethereum’s ZK proof systems could deliver gains of that magnitude.

The main challenge is that, unlike EOF, a new VM would take longer to benefit developers. Short-term transitional improvements—such as larger contract size limits and optimized DUP/SWAP instructions—could be implemented in the meantime.

This shift would significantly simplify the virtual machine architecture. The central question remains: how should existing EVM contracts be handled?

Backward Compatibility Strategies for VM Migration

The biggest obstacle to simplifying (or optimizing without adding complexity) any part of the EVM is maintaining backward compatibility for existing applications.

It’s important to recognize that there is no single “Ethereum codebase,” even within a single client. The code involved in running Ethereum can instead be divided into zones according to how critical it is to consensus.

The goal is to minimize the green zone—code that nodes must run to participate in Ethereum consensus. This includes computing the current state, generating and verifying proofs, and basic block construction.

The orange zone is unavoidable: if execution layer features (including VM operations or precompiles) are removed or altered, clients processing historical blocks must retain that logic. However, new clients—including ZK-EVMs and formal verification tools—could ignore it entirely.

A new yellow zone may emerge: code valuable for parsing current chain data or optimizing block construction, but not part of consensus logic. Examples include Etherscan’s support for ERC-4337 UserOperations or block builders processing legacy transaction types. If core Ethereum features (like Externally Owned Accounts and legacy transactions) are replaced with on-chain RISC-V implementations, consensus code would simplify dramatically, though specialized nodes might still use legacy logic for parsing.

Complexity in the orange and yellow zones is encapsulated. Anyone seeking to understand the protocol can safely ignore these components, and Ethereum implementations may choose not to support them. Bugs in these areas would not cause consensus failures. This means orange and yellow zone complexity is far less harmful than green zone complexity.

Migrating code from the green to the yellow zone mirrors Apple’s use of the Rosetta translation layer to ensure long-term backward compatibility.

All new precompiles should include a canonical on-chain RISC-V implementation. This encourages the ecosystem to adapt gradually to a RISC-V VM environment (the same strategy applies if migrating to Cairo or another superior VM). A phased migration might then proceed as follows:

  1. Dual native VM support: The protocol natively supports both RISC-V and EVM. Developers can choose their language, and contracts across VMs can interoperate seamlessly.
  2. Phased precompile replacement: All precompiles—except those for elliptic curve operations and Keccak hashing (due to extreme performance requirements)—are replaced via hard fork with RISC-V implementations.
  3. Operational details: When a precompile is removed, the code at its address is set to the corresponding RISC-V implementation (using a DAO fork-style mechanism). Thanks to RISC-V’s simplicity, this step alone reduces overall system complexity.
  4. On-chain EVM interpreter: An EVM interpreter implemented in RISC-V (as already developed in ZK toolchains) is deployed on-chain as a smart contract. After several years, existing EVM contracts would run through this interpreter, completing a smooth transition.
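The end state of step 4 can be sketched as a tiny dispatcher: consensus natively executes only one VM, and legacy EVM contracts are tagged so that calls to them are routed through the interpreter, which is itself just another RISC-V program. Everything here is an illustrative stand-in—the “VMs” are trivial Python functions and the addresses are hypothetical—not real execution semantics.

```python
from typing import Callable

# A "program" is modeled as a function: calldata -> returndata.
Program = Callable[[bytes], bytes]

def evm_interpreter(calldata: bytes) -> bytes:
    # Stand-in for the on-chain EVM-in-RISC-V interpreter contract.
    # Here it just reverses the payload so its behavior is observable.
    return calldata[::-1]

# Contract store: address -> (vm kind, program). Addresses are made up.
contracts: dict[str, tuple[str, Program]] = {
    "0xAA": ("riscv", lambda data: data.upper()),  # native RISC-V contract
    "0xBB": ("evm",   lambda data: data),          # legacy EVM code (routed away)
}

def execute(address: str, calldata: bytes) -> bytes:
    kind, program = contracts[address]
    if kind == "riscv":
        # The only VM consensus interprets directly.
        return program(calldata)
    # Legacy EVM contracts are not interpreted by consensus itself;
    # the call is routed through the interpreter program instead.
    return evm_interpreter(calldata)
```

The design point is that the `if` branch for legacy code lives in one place—the dispatcher—rather than requiring every client to carry a full EVM.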

Once step four is implemented, many “EVM implementations” will remain useful for block building, developer tooling, and chain analysis—but they will no longer be part of the core consensus specification. At that point, Ethereum consensus will natively support only the RISC-V architecture.

Simplifying Through Shared Protocol Components

A third—and often underestimated—method for reducing overall protocol complexity is to share standardized components across different layers of the protocol stack. Using different implementations for the same function is usually unnecessary and inefficient, yet it remains common due to poor coordination across roadmap initiatives. Below are examples where Ethereum could be simplified through cross-layer component reuse.

Unified Erasure Coding Scheme

Erasure coding is used in three primary scenarios:

  1. Data Availability Sampling (DAS): Clients use erasure coding to verify that block data has been published completely.
  2. Efficient P2P broadcasting: Nodes can reconstruct and confirm a block after receiving any n/2 of its n shards, optimizing the trade-off between latency and redundancy.
  3. Distributed history storage: Ethereum historical data is split into chunks such that:

    • Each chunk can be verified independently.
    • Any n/2 chunks can reconstruct the other n/2.

This design reduces the risk of single-point data loss.
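The “any n/2 of n chunks suffice” property can be demonstrated with a minimal Reed-Solomon code built from Lagrange interpolation over a prime field. This is a toy: the prime, the chunk-as-field-element encoding, and the O(k²) interpolation are all simplifications of what a production code would use.

```python
P = 2**31 - 1  # a prime modulus (toy choice; real schemes use other fields)

def interpolate_at(xs: list[int], ys: list[int], x: int) -> int:
    # Lagrange interpolation: evaluate, at point x, the unique
    # polynomial of degree < len(xs) passing through (xs[i], ys[i]).
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

def encode(data: list[int], n: int) -> list[int]:
    # Treat the k data values as evaluations of a degree < k polynomial
    # at x = 0..k-1, then extend to n evaluation points ("chunks").
    k = len(data)
    return [interpolate_at(list(range(k)), data, x) for x in range(n)]

def reconstruct(points: list[tuple[int, int]], k: int) -> list[int]:
    # Any k (index, chunk) pairs determine the polynomial, and thus
    # recover the original k data values.
    xs = [x for x, _ in points[:k]]
    ys = [y for _, y in points[:k]]
    return [interpolate_at(xs, ys, x) for x in range(k)]
```

With k = 4 data values extended to n = 8 chunks, any 4 surviving chunks—whichever 4—recover the data, which is exactly the redundancy property all three scenarios above rely on.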

Using the same erasure coding scheme (e.g., Reed-Solomon, random linear codes) across all three scenarios offers significant benefits:

  1. Less code to maintain.
  2. Improved efficiency: Data downloaded for one purpose (e.g., DAS) can be reused elsewhere, avoiding redundant transfers.
  3. Uniform verification: All chunks can be verified against a single root hash.

If different encodings are used, they must be compatible—for example, operating within the same finite field.

Unified Serialization Format

Ethereum’s serialization format is currently semi-normalized: data can be re-serialized arbitrarily and propagated, with the exception of transaction signature hashes, which require a canonical format for consistency.

However, future protocol changes will increase the need for a single canonical serialization format, presenting an opportunity to unify standards across three key layers: (i) the execution layer, (ii) the consensus layer, and (iii) the smart contract ABI.

SSZ is the recommended serialization format: it is simple to decode (including within smart contracts) and is already used in Ethereum’s consensus layer.

Work toward full SSZ adoption is already underway. Future upgrades should continue and expand these efforts.
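To make the layout concrete, here is a minimal sketch of SSZ-style container serialization: fixed-size fields are laid out in order, each variable-size field is replaced by a 4-byte little-endian offset into a region of variable data appended at the end. This mirrors SSZ’s layout rules but omits merkleization, list limits, and most types; ints are treated as uint64 and `bytes` values as variable-size.

```python
import struct

def serialize_container(fields: list) -> bytes:
    # Split fields into fixed-size parts (uint64 here) and
    # variable-size parts (raw bytes).
    fixed_parts, variable_parts = [], []
    for f in fields:
        if isinstance(f, int):
            fixed_parts.append(struct.pack("<Q", f))  # uint64, little-endian
            variable_parts.append(b"")
        else:
            fixed_parts.append(None)  # placeholder for a 4-byte offset
            variable_parts.append(f)

    # Offsets are measured from the start of the serialized container.
    fixed_len = sum(4 if p is None else len(p) for p in fixed_parts)
    out, heap, offset = b"", b"", fixed_len
    for p, v in zip(fixed_parts, variable_parts):
        if p is None:
            out += struct.pack("<I", offset)  # 4-byte little-endian offset
            heap += v
            offset += len(v)
        else:
            out += p
    return out + heap
```

Because the fixed region has a statically known size, a decoder can locate every field without scanning—one of the properties that makes SSZ cheap to parse even inside a smart contract.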

Unified Shared Tree Structure

After transitioning from the EVM to RISC-V (or another minimalist VM), the hexary Merkle Patricia trie will become the primary performance bottleneck for execution proofs—even in common cases. Switching to a binary tree based on better hash functions would improve proving efficiency and reduce data storage costs for light clients and other applications.

During this migration, the same tree structure should be adopted for both the consensus and execution layers. This ensures the entire Ethereum stack uses a single codebase for data access and parsing.
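A binary hash tree of the kind described above is simple enough to sketch in full: pairs of nodes are hashed together up to a single root, so a membership proof is one sibling hash per level (log₂ n hashes for n leaves). SHA-256 is used here purely as a placeholder for whatever proving-friendly hash the protocol settles on, and leaf counts are restricted to powers of two for brevity.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    n = len(leaves)
    assert n > 0 and n & (n - 1) == 0, "toy version: power-of-two leaf count"
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:  # hash adjacent pairs until one node remains
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    # Collect the sibling hash at each level on the path to the root.
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling of the current node
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    node = _h(leaf)
    for sibling in proof:
        # Order depends on whether the node is a left or right child.
        node = _h(node + sibling) if index % 2 == 0 else _h(sibling + node)
        index //= 2
    return node == root
```

The proving cost that motivates the migration shows up in `verify`: a light client or ZK circuit checks one hash per level, so a flatter, binary, hash-cheap tree directly shrinks proofs.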

The Path from Present to Future State

Simplicity shares much with decentralization—both are foundational to system resilience. Embracing simplicity as a core value requires cultural change: benefits are rarely immediate, while the short-term appeal of complex features is obvious. Over time, however, simplicity’s advantages compound—as clearly demonstrated by Bitcoin’s trajectory.

I propose that Ethereum protocol development draw inspiration from projects like TinyGrad by setting explicit targets for limiting code size in the long-term Ethereum specification. The goal should be to make Ethereum’s consensus-critical code nearly as simple as Bitcoin’s. Historical logic can be preserved but must be isolated from consensus-critical paths. Design choices should prioritize simpler solutions, encapsulate rather than spread complexity, and ensure all decisions come with clear, verifiable guarantees. In this way, we can foster a culture that values simplicity at its core.



Frequently Asked Questions

What is meant by “simplicity” in blockchain design?
Simplicity refers to minimalism in protocol architecture—reducing unnecessary complexity in code, consensus rules, and system design. This makes the protocol easier to analyze, secure, and maintain over the long term.

How does simplicity improve security?
A simpler codebase has fewer potential vulnerabilities, is easier to audit, and reduces the risk of unintended interactions between components. It also lowers the technical barrier for developers and researchers to contribute meaningfully.

Will migrating to a new VM break existing smart contracts?
No. Backward compatibility strategies—such as on-chain interpreters and phased upgrades—are designed to ensure existing contracts continue functioning without interruption throughout the transition.

What is the benefit of unifying erasure coding or serialization formats?
Using the same standards across layers reduces redundant code, improves efficiency, and simplifies verification. It also ensures consistency between different components of the ecosystem.

How long might full simplification take?
This depends on community consensus and technical readiness. Some components, like consensus layer improvements, could be implemented sooner, while VM migration may take multiple years to complete thoroughly.

Can Ethereum really become as simple as Bitcoin?
Not exactly—Ethereum’s programmability inherently requires more complexity. However, by isolating necessary complexity and simplifying consensus-critical components, it can approach similar levels of elegance and robustness.