
A Foray Into Blockchain Scalability

Rahul Maganti
Infrastructure · Research · Zero-Knowledge · Ecosystem

Mar 31 2022 · 15 min read


Introduction

In the last article, we developed a framework for analyzing L1s. We pointed out briefly that the motivation behind many of these novel L1s has largely centered on finding solutions to blockchain scalability. Let’s take a closer look at a few of these solutions. In this piece, we aim to:

  • Provide an overview of various layer-1 and layer-2 scaling solutions.
  • Analyze and compare these different solutions along some core dimensions.
  • Give our view on which scaling architectures are most promising.

The Scalability Trilemma

In a blog post from early 2017, Vitalik Buterin coined the Scalability Trilemma, referring to three primary properties that define the viability of blockchain systems: (1) Decentralization; (2) Security; (3) Scalability.

Of these three, we argue that scalability remains the most difficult to solve without excessively compromising the other two pillars. Security and decentralization are still critical to the performance of these systems, but as we’ll see later, tackling the challenge of scaling distributed systems has also driven key breakthroughs in decentralization and security. As a result, we emphasize that the ability to scale blockchains effectively will be a key factor in determining the future success of crypto more generally.

Broadly speaking, there are two main categories of scaling: layer-1 and layer-2. Both are relevant and critical to increasing throughput on blockchains but focus on different aspects and even layers of the Web3 stack. Scaling has undoubtedly gotten a lot of attention in the past few years and is often touted as a critical path to the mass adoption of blockchain technology, especially as retail use continues to climb and transaction volume ramps up.

Layer-1 (L1s)

A few primary scaling architectures have risen to prominence:

  • State Sharding
  • Parallel Execution
  • Improvements to Consensus Models
  • Validity Proofs

State Sharding

There are many varieties of sharding but the core principles remain the same across the board:

  • shards distribute the cost of verification and computation so that every node is not required to verify every transaction.
  • nodes in the shards, like in the larger chain, must: (1) relay transactions; (2) verify transactions; (3) store the state of the shard.
  • sharded chains should preserve the security primitives of the un-sharded chain through: (1) an effective consensus mechanism; (2) secure proofs or signature aggregation.  

Sharding allows a single chain to be split into K different, independent subnets, or shards. If there are N total nodes in the network S, then N/K nodes operate each of the K subnets. When the set of nodes in a given shard (say K_1) verifies a block, it produces a proof or a set of signatures claiming that the shard’s block is valid. All that the other nodes, S - {K_1}, then need to do is verify the signatures or the proof (the time to verify is usually much smaller than rerunning the computation itself).

To understand the scaling benefits of sharding, it’s crucial to understand the value this architecture provides in increasing the total computational capacity of the chain. Suppose that the capacity of one node, on average, is on the order of C: O(C). Because every node must verify every block, the computational capacity of the un-sharded chain is trivially O(C); the sharded chain, however, can process blocks in parallel across its K shards, so its capacity grows to O(CK). Generally, run-time savings are multiplicative! A more technical, in-depth explanation by Vitalik can be found here. Sharding has most notably been a foundational component of the roadmap for Ethereum 2.0 and of NEAR.
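To make the multiplicative savings concrete, here is a back-of-the-envelope sketch (our own illustration; the node counts, shard counts, and verification costs are made-up numbers) comparing how long an un-sharded and a sharded network take to get through the same batch of blocks:

```python
# Toy comparison of un-sharded vs. sharded verification throughput.
N_NODES = 1000       # total nodes in the network S
K_SHARDS = 10        # number of shards
BLOCKS = 500         # blocks the chain needs to process
VERIFY_COST = 1.0    # time (arbitrary units) for one node to verify one block

# Un-sharded: every node verifies every block, so the chain advances
# one block at a time regardless of how many nodes exist.
unsharded_time = BLOCKS * VERIFY_COST

# Sharded: each shard of N/K nodes verifies its own blocks, and K blocks
# are processed in parallel; other shards only check the cheap proofs.
sharded_time = (BLOCKS / K_SHARDS) * VERIFY_COST

print(f"un-sharded: {unsharded_time:.0f} units, sharded: {sharded_time:.0f} units")
# -> throughput is K times higher, i.e. capacity O(C*K) instead of O(C)
```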

Parallel Execution

Sharding and parallel execution are in many ways similar. While sharding attempts to parallelize the verification of blocks across different subchains, parallel execution focuses on splitting up the work of processing individual transactions across nodes. The effect of this architecture is that nodes can now process thousands of contracts in parallel!

We won't go through the technical details of how this works, but here’s a great post diving deeper into how parallel execution works in Solana through Sealevel.
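The core trick is easy to sketch, though: if transactions declare up front which state they touch, transactions with non-overlapping state can safely run at the same time. Below is a simplified illustration (our own toy scheduler, not Sealevel’s actual implementation, which operates over Solana accounts inside the validator):

```python
# Group transactions into batches with disjoint account sets, then run each
# batch concurrently. Conflicting transactions fall into later batches.
from concurrent.futures import ThreadPoolExecutor

# Each (hypothetical) transaction declares the accounts it touches.
txs = [
    {"id": 1, "accounts": {"alice", "bob"}},
    {"id": 2, "accounts": {"carol", "dave"}},   # disjoint from tx 1 -> same batch
    {"id": 3, "accounts": {"bob", "erin"}},     # conflicts with tx 1 -> next batch
]

def schedule(txs):
    """Greedily pack transactions into batches with non-overlapping accounts."""
    batches = []
    for tx in txs:
        for batch in batches:
            if all(tx["accounts"].isdisjoint(o["accounts"]) for o in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches

def execute(tx):
    return f"executed tx {tx['id']}"

with ThreadPoolExecutor() as pool:
    for batch in schedule(txs):
        # every tx in a batch touches disjoint state, so run them in parallel
        print(list(pool.map(execute, batch)))
```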

Consensus Models

Consensus is at the core of layer-1 blockchain protocols: for transactions and data to be finalized on-chain, participants in the network need a way to mutually agree on the state of the chain. Consensus is therefore a means of ensuring consistency in the shared state as new transactions are added and the chain progresses. Different consensus mechanisms, however, can lead to radical differences along the key dimensions by which we measure the performance of a blockchain: security, fault tolerance, decentralization, scalability, and more. Consensus models alone, however, do not determine the performance of a blockchain system. Different consensus models work well with different scaling mechanisms and can ultimately determine the efficacy of a particular network.

Layer-2 (L2s)

Fundamentally, layer-2 scaling is based on the premise that resources on the layer-1, computational or otherwise, have become prohibitively expensive. To lower costs for users, services, and other community participants, heavy computational loads should be moved off-chain (layer-2) while still attempting to preserve the underlying security guarantees provided by cryptographic and game-theoretic primitives on the layer-1 (public-private key pairs, elliptic curves, consensus models, and so on).

Earlier attempts at this largely involved establishing a "trusted channel" between two parties off-chain and then finalizing the state update on the layer-1. State channels do precisely this by "locking up some portion of blockchain state into a multisig contract, controlled by a defined set of participants." Plasma chains, first proposed in this paper by Vitalik, allow for the creation of an infinite number of side-chains. Fraud proofs, backed by the layer-1's consensus (PoW, PoS), are then used to finalize transactions on the layer-1.

Rollups (What are they good for) + Flavors

Rollups are also a way of moving computation off-chain (layer-2) while still recording messages or transactions on-chain (layer-1). Transactions that would otherwise have been recorded, mined, and validated on the layer-1 are recorded, aggregated, and validated on layer-2 and then posted to the original layer-1. This model accomplishes two goals: (1) it frees up computational resources on the base layer; (2) it still preserves the underlying cryptographic security guarantees on layer-1.

(Figure: the rollup transaction flow. Transactions are “rolled up” and ordered by the Sequencer, then passed to an Inbox contract; a contract on L2 executes the calls off-chain; a Merkle root of the new state is then posted back to the L1 chain as calldata.)
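That last step is what keeps the L1 footprint small: an arbitrarily large batch of L2 state is committed to with a single 32-byte root. Here is a minimal sketch (our own illustration; production rollups commit to state in more elaborate formats) of computing such a root:

```python
# Fold a batch of L2 state entries into one Merkle root; only this root
# needs to be posted to L1 as calldata.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves pairwise, level by level, down to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:        # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical post-execution L2 state entries (account -> balance):
state = [b"alice:90", b"bob:35", b"carol:10"]
print(merkle_root(state).hex())  # 32 bytes on L1 commit to the full L2 state
```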

Optimistic Rollups

Optimistic rollups are just that: optimistic. Validators post transactions to the chain with the a priori assumption that they are valid. If they so choose, other validators can challenge a transaction, but they are certainly not required to (think of it as an innocent-until-proven-guilty model). Once a challenge has been initiated, however, the two parties (say Alice and Bob) are forced to participate in a dispute resolution protocol.

At a high level, the dispute resolution algorithm works as follows:

  1. Alice claims that her assertion is correct. Bob disagrees.
  2. Alice then dissects the assertion into equal parts (for simplicity, assume a bisection).
  3. Bob chooses which part of the assertion (say the first half) he believes is false.
  4. Steps 1-3 are run recursively on the disputed part.
  5. Alice and Bob play this game until the size of the sub-assertion is just one instruction. The protocol then simply executes this instruction: if Alice is correct, Bob loses his stake, and vice-versa.

A more in-depth explanation of Arbitrum's dispute resolution protocol can be found here.

In the optimistic case, the overhead is small and constant: O(1). In the case of a dispute, the algorithm runs in O(log n) rounds, where n is the size of the original assertion.
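Here is a toy version of that bisection game (our own construction for intuition; Arbitrum’s actual protocol bisects assertions about machine state, with stakes and deadlines attached), showing how a disagreement over a long execution trace collapses to a single instruction in logarithmically many rounds:

```python
# Bisection game: Bob repeatedly picks the half of Alice's claimed trace he
# disputes, until a single step remains, which is then executed directly.
import math

def bisection_game(claimed_trace, true_trace):
    """Narrow a dispute over a trace to one instruction in O(log n) rounds."""
    lo, hi = 0, len(claimed_trace)
    rounds = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Bob recurses into the half that first diverges from the true trace.
        if claimed_trace[lo:mid] == true_trace[lo:mid]:
            lo = mid
        else:
            hi = mid
        rounds += 1
    # One-step dispute: the protocol executes the single instruction itself.
    alice_wins = claimed_trace[lo] == true_trace[lo]
    return alice_wins, rounds

claimed = list(range(1024)); claimed[700] = -1   # Alice lies at step 700
honest = list(range(1024))
alice_wins, rounds = bisection_game(claimed, honest)
print(alice_wins, rounds, "<=", math.ceil(math.log2(1024)))  # False 10 <= 10
```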

One key consequence of this optimistic validation and dispute resolution architecture is that optimistic rollups have a one-honest-party guarantee, meaning that for the chain to be secure, the protocol only requires one honest party to find and report fraud.

Zero-Knowledge Rollups

In many blockchain systems and layer-1s today, consensus is achieved by effectively "re-running" transaction computation to verify state updates to the chain. In other words, for a transaction to finalize on the network, nodes in the network need to execute the same computation. This might seem like a naïve approach to verifying the history of the chain - and it is! The question then becomes: is there a way to quickly verify the correctness of a transaction without having to replicate the computation across a large set of nodes? (This idea is at the core of P vs. NP, for those with some background in complexity theory.) Well, yes! This is where ZK rollups come in handy: in effect, they ensure that the cost of verification is substantially less than the cost of executing the computation.

Now, let's dig into how ZK rollups are able to achieve this while maintaining a high degree of security. At a high level, a ZK-rollup protocol consists of the following components:

  • ZK Verifier - verifies the proof on-chain.
  • ZK Prover - takes in data from an application or service and spits out a proof.
  • On-chain contract - keeps track of on-chain data and verifies the state of the system.
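To show how these three components fit together, here is a highly simplified sketch (all names and interfaces below are our own assumptions; a real system replaces the placeholder proof logic with a SNARK/STARK prover and an on-chain verifier contract):

```python
# Minimal prover / verifier / contract interaction for a ZK rollup.
import hashlib
from dataclasses import dataclass

def apply_txs(old_root: str, txs: list) -> str:
    """Stub for executing a batch off-chain and computing the new state root."""
    return hashlib.sha256((old_root + str(txs)).encode()).hexdigest()

@dataclass
class Proof:
    old_root: str
    new_root: str
    blob: bytes                      # opaque proof data in a real system

class Prover:                        # off-chain: does the heavy computation
    def prove_batch(self, old_root: str, txs: list) -> Proof:
        new_root = apply_txs(old_root, txs)
        return Proof(old_root, new_root, b"...")   # plus a succinct proof of it

class Verifier:                      # on-chain: verification is cheap
    def verify(self, proof: Proof) -> bool:
        return len(proof.blob) > 0   # placeholder for real pairing/FRI checks

class RollupContract:                # on-chain: tracks the canonical state root
    def __init__(self, genesis_root: str):
        self.state_root = genesis_root

    def submit_batch(self, proof: Proof, verifier: Verifier):
        assert proof.old_root == self.state_root, "stale state"
        assert verifier.verify(proof), "invalid proof"
        self.state_root = proof.new_root   # final immediately: no challenge period

contract = RollupContract("genesis")
contract.submit_batch(Prover().prove_batch("genesis", ["tx1", "tx2"]), Verifier())
print(contract.state_root)
```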

A plethora of ZK proof systems have emerged, especially in the last year. Broadly, two main classes of proofs have risen to prominence: (1) SNARKs and (2) STARKs, although the lines between them are getting more and more blurred every day.

We won’t go into the technical minutiae of how ZK proof systems work now, but here’s a nice diagram that illustrates how we get from a smart contract to something resembling a proof that can be verified efficiently.

Key Dimensions of Comparison Between Rollup Flavors

Speed

The goal of scaling, as we mentioned before, is to improve the speed at which transactions can be processed by the network while also reducing the cost of computation. Because optimistic rollups don’t have to generate proofs for every transaction (in the honest case there is no additional overhead), they are generally much faster than ZK rollups.

Privacy

ZK proofs are inherently privacy-preserving in that they do not require access to the underlying parameters of the computation to verify it. Consider the following concrete example: let’s say I wanted to prove to you that I know the combination to a lock on a box. A naive approach would be to just share the combination with you and ask you to try to open the box. If the box opens, then it’s clear I knew the combination. But suppose I had to prove I know the combination without revealing any information about it. Let’s design a simple ZK-proof protocol to demonstrate how this would work:

  • I ask you to write down a sentence on a sheet of paper.
  • I hand the box over to you and ask you to slip the piece of paper through a small slit in the box.
  • I turn my back to you and punch in the combination to the box.
  • I open the box and hand the note back to you.
  • You confirm the note is yours!

And that’s it: a simple zero-knowledge proof. Once you confirm that the note was in fact the same note you placed in the box, I have proved to you that I was able to open the box and therefore knew the combination a priori.

In this way, ZKPs are particularly good at allowing one party to prove the veracity of a statement to a counter-party without revealing any information the counter-party wouldn’t otherwise have.
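The box in the story is doing the work of a cryptographic commitment. As a rough illustration (our own sketch: a hash-based commit-reveal scheme, which captures the sealed-box intuition but is not a full zero-knowledge proof system), here is how one party can commit to a secret without leaking it:

```python
# Commit-reveal: the commitment binds me to the secret without revealing it.
import hashlib, os

def commit(secret: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce); the commitment reveals nothing by itself."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + secret).digest(), nonce

def verify(commitment: bytes, nonce: bytes, claimed_secret: bytes) -> bool:
    return hashlib.sha256(nonce + claimed_secret).digest() == commitment

# I "lock the box": you see only the commitment and learn nothing yet.
c, n = commit(b"13-7-42")
# Later I open it; you confirm I must have known the combination all along.
print(verify(c, n, b"13-7-42"))   # True
print(verify(c, n, b"0-0-0"))     # False
```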

EVM Compatibility

The Ethereum Virtual Machine (EVM) defines a set of instructions, or opcodes, that implement basic computational and blockchain-specific operations. Smart contracts on Ethereum compile down to this bytecode, which is then executed as EVM opcodes. EVM-compatibility means there is a 1:1 mapping between the virtual machine instruction set you are running and the EVM instruction set.
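To make the opcode idea concrete, here is a toy stack machine (entirely our own toy, covering a tiny hypothetical subset; the real EVM has on the order of 140 opcodes plus gas accounting) executing the kind of bytecode a contract compiles down to:

```python
# A tiny stack machine with a few EVM-style opcodes.
def run(program: list):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "ADDMOD":                 # one of the opcodes zkSync lacked
            n, b, a = stack.pop(), stack.pop(), stack.pop()
            stack.append((a + b) % n)
    return stack

# (3 + 4), then (5 + 6) mod 7  ->  [7, 4]
print(run([("PUSH", 3), ("PUSH", 4), ("ADD",),
           ("PUSH", 5), ("PUSH", 6), ("PUSH", 7), ("ADDMOD",)]))
```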

The largest layer-2 solutions in the market today are built on Ethereum. When Ethereum-native projects want to move to a layer-2, EVM-compatibility offers a seamless, minimal-code path to scaling. Projects simply need to redeploy their contracts on the L2 and bridge over their tokens from the L1.

The largest optimistic rollup projects, Arbitrum and Optimism/Boba, are both EVM-compatible. zkSync is one of the few ZK rollups built with EVM-compatibility in mind, but it still lacks support for a few EVM opcodes, including ADDMOD, SMOD, MULMOD, EXP, and CREATE2, as explained here. While the inability to support CREATE2 in particular does pose an issue for counterfactual interactions with contracts, limiting upgradeability and user onboarding, our view is that support for these opcodes will be realized imminently and will not be a significant barrier to the use of ZK rollups in the long run.

Bridging

Because L2s are separate chains, they do not inherit native L1 tokens automatically. Native L1 tokens on Ethereum must be bridged over to the corresponding L2 to interact with dApps and services deployed there. The ability to bridge tokens seamlessly remains a key challenge, with different projects exploring a variety of architectures. Generally, once a user calls deposit on the L1, an equivalent token needs to be minted on the L2 side. Designing a highly general architecture for this process can be especially difficult given the wide range of tokens and token standards in use.
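In its simplest form this is a lock-and-mint pattern. The sketch below (our own bare-bones simplification; real bridges add message relaying, finality delays, withdrawals, and far more defensive logic) shows the deposit path:

```python
# Lock tokens in escrow on L1, then mint a wrapped equivalent 1:1 on L2.
class L1Bridge:
    def __init__(self):
        self.locked = {}                          # user -> amount in escrow
    def deposit(self, user: str, amount: int) -> dict:
        self.locked[user] = self.locked.get(user, 0) + amount
        return {"user": user, "amount": amount}   # event relayed to the L2

class L2Token:
    def __init__(self):
        self.balances = {}
    def mint_from_deposit(self, event: dict):
        # mint against funds escrowed on L1, preserving the 1:1 backing
        u, amt = event["user"], event["amount"]
        self.balances[u] = self.balances.get(u, 0) + amt

l1, l2 = L1Bridge(), L2Token()
l2.mint_from_deposit(l1.deposit("alice", 100))
print(l2.balances)  # {'alice': 100}
```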

Finality

Finality refers to the ability to confirm the validity of a transaction on-chain. On layer-1, when a user submits a transaction, it is finalized almost instantaneously (notwithstanding the time it takes for nodes to process transactions from the mempool). On layer-2s, this is not necessarily the case. An optimistic rollup protocol initially assumes that a state update submitted to the layer-2 chain is valid. However, if the validator who submitted the update is malicious, there needs to be enough time for an honest party to challenge the claim. Typically, this challenge period is set to ~7 days. On average, a user looking to withdraw funds from the L2 will likely have to wait around 2 weeks!

ZK rollups, on the other hand, do not require this long challenge period because every state update is verified using the proof system. As a result, transactions on ZK rollup protocols are just as final as transactions on the underlying layer-1. Unsurprisingly, the instant finality offered by ZK rollups has become a key advantage in the fight for L2 scaling supremacy.

Instant Liquidity as a Means of Fast Finality

Some have argued that while optimistic rollups do not necessarily guarantee fast finality on L1, fast withdrawals offer a clear, easy-to-use workaround by giving users access to funds before the end of the challenge period. While this does give users a means to access their liquidity, there are several problems with this approach:

  • There is additional overhead in maintaining liquidity pools for L2-to-L1 withdrawals.
  • Fast withdrawals are not generic: they only support token withdrawals and cannot support arbitrary L2-to-L1 calls.
  • Liquidity providers have no way of guaranteeing the validity of a transaction until the end of the challenge period.
  • Liquidity providers must either: (1) trust those they are providing liquidity to, limiting the benefits of decentralization; or (2) construct their own fraud/validity proofs, effectively defeating the purpose of leveraging the fraud proofs and consensus protocol built into the L2 chain.

Sequencing

Sequencers are like any other full nodes but are given arbitrary control over the ordering of transactions in the inbox queue. Without this ordering, other nodes and participants in the network could not be sure of the results of a particular batch of transactions. In this sense, sequencers offer users some measure of determinism in the execution of transactions.

The main argument against the use of sequencers for this purpose is that they create a central point of failure: if and when the sequencer fails, liveness can be compromised. Hold on a second...what is the point of this? Doesn't this defeat the vision of decentralization? Well...kind of. Sequencers are typically run by the projects developing the L2 and are often seen as semi-trusted entities that generally act in favor of the project stakeholders. For the decentralization hardliners biting their nails at the thought of this, you may find some solace in knowing that there is a fair bit of work and research being done on decentralized fair sequencing here and here.

Recent sequencer outages on large L2 ecosystems, including on Arbitrum / Optimism, continue to demonstrate the need for fault-tolerant, decentralized sequencing.

Capital Efficiency

Another key point of comparison between optimistic rollups and their ZK counterparts is their capital efficiency. As we described earlier, optimistic L2s rely on fraud proofs to secure the chain while ZK rollups take advantage of validity proofs.

The security that fraud proofs provide is based on a simple game-theoretic principle: the cost to an attacker attempting to fork the chain should exceed the value they are able to extract from the network. In the case of optimistic rollups, validators stake a certain amount of a token (e.g. ETH) on the rollup blocks they believe to be valid as the chain progresses. Malicious actors (those found to be at fault and reported by an honest node) are slashed.

As a result, there is a fundamental tradeoff between capital efficiency and security. Improving capital efficiency requires shortening the delay (challenge) period, which in turn increases the likelihood that fraudulent assertions go undetected or unchallenged by other validators in the network.

Shifting the delay period is equivalent to moving along the capital efficiency vs. delay period curve. As the delay period moves, however, users need to weigh its implications for the tradeoff between security and finality; they would otherwise be indifferent to these shifts.
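A toy model makes the tradeoff tangible (the numbers and the detection-probability schedule below are entirely made up for illustration): fraud is deterred only while the attacker's expected value stays negative, and shortening the challenge period pushes it back toward positive territory.

```python
# Attacker's expected value under a fraud-proof regime.
def attack_ev(extractable_value: float, stake: float, p_caught: float) -> float:
    """Win the extractable value if undetected; lose the stake if caught."""
    return (1 - p_caught) * extractable_value - p_caught * stake

# Hypothetical schedule: each extra day of challenge period lets honest
# validators catch fraud with an additional 20% probability, capped at 99%.
for days in (1, 3, 7):
    p = min(0.2 * days, 0.99)
    print(days, "day(s):", attack_ev(extractable_value=100.0, stake=50.0, p_caught=p))
# 1 day(s): 70.0   3 day(s): 10.0   7 day(s): -48.5  -> fraud only deterred at 7
```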

The current 7-day delay period on projects like Arbitrum and Optimism was decided on by the community with these dimensions in mind. Here’s an in-depth explanation by Ed Felten of Offchain Labs on how they decided on the optimal length for the delay period.

By construction (reliance on cryptographic assumptions rather than game-theoretic ones), validity proofs are not susceptible to the same capital efficiency / security tradeoffs.

Application-Specific Chains / Scaling

When we talk about a multi-chain future, what exactly do we mean? Will there be a multitude of performant layer-1s with different architectures, a larger number of layer-2 scaling solutions, or just a few layer-3 chains with bespoke optimizations targeted at custom use-cases?

Our belief is that demand for blockchain-based services will be driven fundamentally by user demand for specific types of applications, whether that be NFT minting or DeFi protocols for lending, borrowing, and staking. In the long term, as with any technology, we expect users to want to abstract away the underlying primitives (in this case, the L1s and L2s that provide the core infrastructure for settlement, scalability, and security).

Application-specific chains provide a mechanism for deploying highly performant services by leveraging narrow optimizations. Consequently, we expect these types of chains to be a critical component of the Web3 infrastructure aimed at driving mass adoption.

There are two main ways these chains could emerge:

  • independent ecosystems with their own primitives focused on very specific applications.
  • an additional layer built on top of existing L1 and L2 chains but fine-tuned to optimize performance for specific use cases.

In the short to medium term, it’s possible these independent chains may see significant growth, but we believe this to be a function of their short-term novelty as opposed to a signal of sustainable interest and use. Even now, the landscape of more established application-specific chains like Celo seems relatively sparse. While these independent application-specific chain ecosystems offer exceptional performance for a particular use case, they often lack the very features that make other general-purpose ecosystems so powerful:

  • flexibility and ease-of-use
  • high degree of composability
  • liquidity aggregation and access to native assets

The next generation of scaling infrastructure will have to strike a balance between both approaches.

The Fractal Scaling Approach

The fractal scaling approach is highly relevant to this “layered model” of blockchain scaling. It offers a unique way to unify the otherwise siloed, disparate ecosystems of application-specific chains with broader communities and, in doing so, helps to preserve composability, enable access to general-purpose logic, and derive safety guarantees from the underlying L1s and L2s.

How does it work?

  • Transactions are split across local instances depending on the use-case they are intended to serve.
  • Each instance leverages the security, scaling, and privacy properties of the underlying L1/L2 layers while optimizing for unique, bespoke needs.
  • The approach takes advantage of novel architectures (for storage and compute) based on proof-carrying data and recursive proofs: the idea that any message is accompanied by a proof that the message, and the history leading up to it, is valid.

Here’s a great post by Starkware discussing an architecture for fractal scaling.

Closing Thoughts

Blockchain scaling has risen to greater prominence over the last few years for good reason: the computational cost of verification on highly decentralized chains like Ethereum has become prohibitively high. As blockchains have grown in popularity, the computational complexity of on-chain transactions has also grown rapidly, further increasing the cost of securing the chain. Optimizations to existing layer-1s and architectures like dynamic sharding can be immensely valuable, but the sharp rise in demand necessitates a more nuanced approach to developing secure, scalable, and sustainable decentralized systems.

We believe in an approach based on building up layers of chains, each optimized for specific behaviors, ranging from general-purpose computation to application-specific and privacy-enabling logic. As a result, we see rollups and other layer-2 technologies as core to expanding throughput by enabling off-chain computation and storage with fast verification.

Please reach out to @Rahul Maganti with any questions, comments, or thoughts on what we got wrong!
