Why Fusaka matters now
Fusaka is Ethereum’s latest hard fork, activated in December 2025 as the second major upgrade of the year after Pectra. It arrives at a specific moment in Ethereum’s evolution:
- Rollups now carry most user transactions and fee revenue, but they are increasingly constrained by data availability (DA) on L1.
- L1 gas limits have risen, but further increases require careful DoS hardening and better block-size bounds.
- User experience is still dominated by seed phrases and custom signing flows, even though mainstream platforms have standardized around device-native passkeys.
The message from the Ethereum Foundation around Fusaka is clear: Ethereum is no longer an experiment. It is infrastructure we want to onboard everyone onto, but without sacrificing decentralization, security, or its long-term sustainability.
Fusaka is designed around that constraint:
Scale L2s, scale L1, simplify UX, and keep Ethereum credibly neutral, verifiable, and run by ordinary nodes.
Where Fusaka sits on the roadmap
Ethereum now follows a roughly twice-a-year major upgrade cadence. Fusaka is the fifth major milestone in that sequence:
- Paris (The Merge), switched to proof of stake (2022)
- Shapella, enabled withdrawals and improved staking (2023)
- Dencun, introduced blobs and proto-danksharding (EIP-4844) (2024)
- Pectra, doubled blob throughput and allowed EOAs to delegate to contract code via EIP-7702 (May 2025)
- Fusaka, live December 3rd, 2025
The name “Fusaka” combines the execution-layer fork Osaka and the consensus-layer fork Fulu. Next on the roadmap is Glamsterdam, focused on parallel transaction processing and deeper execution layer optimizations.
For the Ethereum community, this cadence is important: upgrades must be routine, predictable, and safe, closer to OS or database maintenance than to experimental rewrites. The Fusaka upgrade fits that mold: focused scope, clear goals, and tight integration with the rollup-centric roadmap.
Core goals of Fusaka
Across 13 EIPs, Fusaka groups naturally into three objectives:
- Scale L2s (rollups) via data availability: EIP-7594 PeerDAS, EIP-7892 Blob-Parameter-Only (BPO) forks, …
- Scale L1 safely: a higher gas limit (EIP-7935), a per-transaction gas cap (EIP-7825), …
- Improve UX and developer ergonomics: a secp256r1 precompile (EIP-7951) enabling passkey-style auth, …
Behind all of these is the same design target: scale and simplify without pushing hardware requirements or trust assumptions beyond what a decentralized network can sustain.
Data availability, blobs, and why rollups were hitting a wall
DA is the bottleneck, not execution
In a rollup-centric Ethereum, most transactions execute off-chain on L2, and Ethereum L1 primarily:
- Verifies validity/fraud proofs.
- Guarantees data availability so anyone can reconstruct the rollup state if operators disappear or misbehave.
The second part is what rollups pay for when they “post data to L1”. In fact, DA often dominates the cost of L2 transactions, while execution itself is comparatively cheap.
Before Dencun, rollups used CALLDATA, which is permanent and competes directly with normal L1 transactions in the gas market, making it very expensive. Dencun (EIP-4844) introduced blobs, a cheaper, temporary DA format that lives on the consensus layer for ~18 days and then expires, while remaining verifiable via KZG commitments.
Pectra then doubled blob throughput from target/max 3/6 to 6/9 blobs per block, raising daily blob capacity to ~8.15 GB. That bought time, but blob utilization quickly climbed towards the new target as more rollups came online. [ref]
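To make these capacity numbers concrete, here is a back-of-the-envelope sketch assuming the protocol constants of 128 KiB per blob and 12-second slots (the exact ~8.15 GB figure above depends on how target vs. max blobs and GB vs. GiB are counted):

```python
# Back-of-the-envelope daily blob capacity, assuming 128 KiB blobs
# and 12-second slots (protocol constants since Dencun).
BLOB_SIZE_BYTES = 128 * 1024          # 4096 field elements x 32 bytes
SLOTS_PER_DAY = 24 * 60 * 60 // 12    # 7200 slots

def daily_capacity_gb(blobs_per_block: int) -> float:
    """Upper-bound blob bytes per day, in decimal gigabytes."""
    return blobs_per_block * BLOB_SIZE_BYTES * SLOTS_PER_DAY / 1e9

print(f"Dencun max (6 blobs): {daily_capacity_gb(6):.2f} GB/day")
print(f"Pectra max (9 blobs): {daily_capacity_gb(9):.2f} GB/day")
```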
PeerDAS: data availability sampling for Ethereum
PeerDAS (EIP-7594) is Fusaka’s headline feature. Instead of every full node downloading every blob, the network:
- Erasure-codes blob data.
- Spreads it across 128 column “subnets”.
- Has each node custody only a subset of columns and sample a few more from peers.
Key properties:
- A regular full node holds ~1/8 of the blob data, leading to ~8x less blob download bandwidth and ~80% less blob disk usage at constant blob throughput.
- Any missing data is detectable with extremely high probability (reconstruction is possible as long as at least ~50% of the encoded data is available).
- Validators with more stake subscribe to more subnets, so nodes with very large balances eventually store everything and help heal the network when data is missing.
From a scaling perspective, this changes the curve: per-node load now grows much more slowly than total blob throughput. Ethereum.org and several research reports estimate that this design can support up to ~8× the current blob capacity in theory, without raising normal full-node hardware beyond recommended specs.
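The sampling guarantee can be illustrated with a toy calculation: if an adversary withholds enough data that only half the coded columns are available (just below the reconstruction threshold), each independent random sample has at most a 50% chance of landing on an available column, so the chance of a node missing the problem shrinks exponentially in the number of samples. A minimal sketch, with sample counts chosen purely for illustration:

```python
# Illustrative only: probability that a node sampling k random columns
# fails to notice unavailable data, assuming samples are independent
# and a fraction `available` of the coded columns can be served.
def miss_probability(available: float, k: int) -> float:
    """P(all k samples land on available columns) = available**k."""
    return available ** k

# Just below the ~50% reconstruction threshold, detection ramps up fast:
for k in (8, 16, 32):
    print(f"k={k:>2}: miss probability ~ {miss_probability(0.5, k):.2e}")
```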
BPO forks: scaling blobs without full hard forks
PeerDAS enables higher throughput, but Ethereum still needs a controlled mechanism to actually raise blob counts. That is EIP-7892 Blob Parameter Only forks:
- Blob parameters (target and max blobs per block, fee tuning) become client configuration, similar to the gas limit.
- Between major hard forks, client teams can coordinate small “blob-only” bumps (e.g., from 6/9 to 10/15, then 14/21) without touching other consensus rules.
At Fusaka activation, blob limits remain at 6/9, the Pectra values. But the path to higher values is now procedural: small, pre-agreed BPO updates instead of heavy hard forks.
For rollup teams, this matters because capacity planning becomes legible. Instead of guessing when the next big upgrade will arrive, they can track a published BPO schedule and model their throughput and fee curves accordingly.
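As a toy example of that planning, a rollup team could translate its transaction volume into blob demand and compare it against the published target. The throughput and compression figures below are purely hypothetical:

```python
# Sketch: a rollup team estimating how much blob space it needs under
# a given blob target. All rollup-side inputs here are hypothetical.
BLOB_SIZE = 128 * 1024   # bytes per blob
SLOT_TIME = 12           # seconds per L1 slot

def blobs_needed_per_block(tps: float, bytes_per_tx: float) -> float:
    """Average blobs per L1 slot needed to post this rollup's data."""
    return tps * bytes_per_tx * SLOT_TIME / BLOB_SIZE

# Hypothetical rollup: 100 tx/s at ~150 compressed bytes per tx.
demand = blobs_needed_per_block(100, 150)
target = 6  # blob target at Fusaka activation (unchanged from Pectra)
print(f"~{demand:.2f} blobs/block, {100 * demand / target:.0f}% of a 6-blob target")
```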
Blob fee market: keeping prices meaningful
Finally, EIP-7918 (blob base fee bounded by execution cost) prevents pathological cases where execution gas dominates and blob base fees crash to 1 wei, effectively disconnecting blob prices from actual resource usage. By pinning a reserve price linked to execution costs, the protocol ensures that:
- Blob fees still respond to congestion.
- Rollups pay at least a meaningful fraction of the compute and space they cause.
- The economic incentives for DA remain aligned with network health.
From a security perspective, this is as important as raw throughput. Under-priced DA would invite spam and misaligned ETH burn; over-priced DA would push rollups to alternative DA layers with weaker security assumptions. Fusaka tries to keep Ethereum DA competitive while preserving the chain’s censorship resistance and economic guarantees.
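The economic effect can be sketched as a reserve price. Note this is a deliberately simplified model: the real EIP-7918 mechanism works through the excess-blob-gas update rule rather than a literal max(), and the constants below are illustrative assumptions:

```python
# Deliberately simplified model of a blob-fee reserve price. The actual
# EIP-7918 mechanism adjusts the excess-blob-gas update, but the
# economic effect is a floor on blob fees tied to execution costs.
BLOB_BASE_COST = 2**13   # illustrative per-blob execution-cost anchor
GAS_PER_BLOB = 2**17     # blob gas per blob

def effective_blob_fee(market_blob_fee: int, exec_base_fee: int) -> int:
    """Blob fee with an execution-cost-linked reserve price."""
    reserve = BLOB_BASE_COST * exec_base_fee // GAS_PER_BLOB
    return max(market_blob_fee, reserve)

# Without the floor, an uncongested market fee would decay to 1 wei;
# the reserve keeps it coupled to execution costs:
print(effective_blob_fee(market_blob_fee=1, exec_base_fee=10_000_000_000))
```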
L1 scaling: gas limits, block size caps, and DoS hardening
Higher block gas limit with a per-tx cap
Fusaka doesn’t just help rollups. It also raises L1 execution capacity while tightening worst-case bounds.
- EIP-7935 coordinates client teams to raise the default gas limit from 45M to around 60M gas per block, targeting ~33% more L1 computation.
- EIP-7825 caps the gas per transaction at 2²⁴ = 16,777,216 gas, roughly equivalent to a pre-Pectra average block.
This combination means:
- No single transaction can monopolize an entire block as gas limits climb.
- Block composition becomes more predictable, which is crucial as the ecosystem moves toward parallel execution (Glamsterdam, EIP-7928).
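A quick sanity check of what the cap means in practice, using the figures above:

```python
# The EIP-7825 cap in context: how many worst-case (cap-sized)
# transactions fit in a block at the old and new default gas limits.
TX_GAS_CAP = 2**24  # 16,777,216 gas per transaction

for block_gas_limit in (45_000_000, 60_000_000):
    print(f"{block_gas_limit:,} gas limit -> "
          f"{block_gas_limit // TX_GAS_CAP} max-size txs per block")
```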
MODEXP bounds and repricing
The MODEXP precompile is used in RSA verification and heavy cryptographic systems (including some ZK schemes). Historically, its gas pricing and unbounded inputs made it hard to model worst-case block validation time.
- EIP-7823 caps MODEXP input sizes at 8192 bits (1024 bytes) per operand.
- EIP-7883 increases MODEXP gas costs and removes under-pricing discounts for very large exponents and moduli.
Together, these changes:
- Remove extreme “pathological” inputs that could stall clients.
- Align gas costs more closely with actual CPU work.
- Clear one of the main blockers to higher gas limits in the future.
For auditors and client implementers, this is good news: a narrower, better-priced surface is easier to reason about and fuzz.
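A minimal sketch of the new input bound, using the 1024-byte operand cap described above (the repricing side of EIP-7883 is not modeled here):

```python
# Sketch of the EIP-7823 MODEXP input bound: each operand
# (base, exponent, modulus) is limited to 1024 bytes (8192 bits).
MAX_OPERAND_BYTES = 1024

def modexp_input_valid(base_len: int, exp_len: int, mod_len: int) -> bool:
    """Reject calls whose declared operand lengths exceed the cap."""
    return all(n <= MAX_OPERAND_BYTES for n in (base_len, exp_len, mod_len))

print(modexp_input_valid(256, 3, 256))      # typical RSA-2048 verify: True
print(modexp_input_valid(4096, 32, 4096))   # pathological input: False
```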
Block size limit separate from gas
Gas limits bound work, but not the byte size of the block. To address DoS risks from extremely large blocks, Fusaka introduces:
- EIP-7934 RLP Execution Block Size Limit, a 10 MiB cap of which ~2 MiB is reserved as a safety margin for consensus-layer framing (leaving roughly 8 MiB for the RLP-encoded execution payload).
Clients now reject any execution block whose RLP payload exceeds this bound. That aligns execution-layer behavior with the consensus layer’s gossip limits and reduces the risk of:
- Blocks that propagate slowly or inconsistently.
- Reorgs or DoS attacks based on oversized blocks.
Again, the pattern is clear: raise capacity while tightening worst-case bounds, which is exactly what you want from a security perspective.
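The size check itself is simple to sketch, assuming the 10 MiB cap and ~2 MiB margin described above:

```python
# Sketch of the EIP-7934 check: a 10 MiB cap with a ~2 MiB safety
# margin for consensus framing leaves ~8 MiB for the RLP-encoded
# execution block.
MAX_BLOCK_SIZE = 10 * 1024 * 1024   # 10 MiB
SAFETY_MARGIN = 2 * 1024 * 1024     # reserved for consensus framing
MAX_RLP_BLOCK_SIZE = MAX_BLOCK_SIZE - SAFETY_MARGIN

def block_within_size_limit(rlp_encoded_block: bytes) -> bool:
    """Clients reject execution blocks whose RLP payload exceeds the cap."""
    return len(rlp_encoded_block) <= MAX_RLP_BLOCK_SIZE

print(block_within_size_limit(b"\x00" * (8 * 1024 * 1024)))      # True
print(block_within_size_limit(b"\x00" * (8 * 1024 * 1024 + 1)))  # False
```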
UX and wallets: passkeys, secure devices, and predictable confirmations
Passkeys on L1: secp256r1 precompile
The most visible change is EIP-7951, a precompile for the secp256r1 (P-256) curve, the default choice for FIDO2/WebAuthn, Apple Secure Enclave, Android Keystore, and many corporate HSMs. This means:
- Contracts can verify P-256 signatures using a standard call interface at a fixed address.
- The same passkey infrastructure used in browsers and mobile apps can now back Ethereum accounts and smart contract wallets directly on L1 and L2.
- There is no need for “shim” servers that translate P-256 signatures into secp256k1 or for complex multi-sig bridging schemes.
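As a sketch of what a caller supplies, here is how the precompile input could be packed. The 160-byte layout (message hash ‖ r ‖ s ‖ public key x ‖ y) follows the common P256VERIFY convention; the exact precompile address is omitted, since those integration details belong to the EIP-7951 spec and client docs:

```python
# Sketch: packing the 160-byte input for a P-256 verification
# precompile call (hash || r || s || qx || qy, each a 32-byte word).
def p256verify_input(msg_hash: bytes, r: int, s: int, qx: int, qy: int) -> bytes:
    """Concatenate hash, r, s, and public-key coordinates as 32-byte big-endian words."""
    assert len(msg_hash) == 32
    return msg_hash + b"".join(v.to_bytes(32, "big") for v in (r, s, qx, qy))

data = p256verify_input(b"\x11" * 32, r=1, s=2, qx=3, qy=4)
print(len(data))  # 160 bytes, the full input the precompile expects
```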
For non-custodial wallets, this unlocks flows like:
- “Sign in with passkey” using device biometrics.
- Multi-factor auth where a passkey co-signs with another key.
- Recovery flows based on platform security modules instead of raw seed phrases.
From a security-engineer viewpoint, the threat model shifts:
- You rely more on OS-level secure enclaves and vendor attestation.
- You gain resistance to phishing and UI-spoofing (WebAuthn flows are origin-bound).
- You introduce new classes of risk (supply-chain attacks in device vendors, key recovery policies, etc.) that need to be modeled in audits.
Deterministic proposer lookahead and preconfirmations
EIP-7917 deterministic proposer lookahead makes the beacon chain aware of the set of proposers in the next epoch (32 slots) in advance. This enables:
- Preconfirmation protocols: users can get a binding commitment from the upcoming proposer to include their transaction, reducing UX latency before full finality.
- More robust scheduling for rollup sequencers and block builders, who can plan around known proposers.
For users, this should surface as faster, more trustworthy “this transaction is definitely going in” signals, especially once wallet and L2 teams integrate preconfirmation schemes.
Security and decentralization: Our perspective
From our point of view, the central question is not “how many TPS can we get?” but something more fundamental: whether Fusaka allows Ethereum to scale while maintaining, or even improving, its security and decentralization properties.
PeerDAS and node requirements
PeerDAS deliberately shifts how data availability is handled so that the network, as a whole, can carry more blob data without requiring every node to hold everything. Regular full nodes and solo stakers handle only a subset of the encoded data and sample a bit more from their peers, which reduces their download and storage burden while preserving very strong availability guarantees at the protocol level.
The trade-off is that some operators, particularly those acting as large validators, will end up storing more of the blob data and will play a bigger role in healing the network when pieces are missing. That asymmetry is intentional: it keeps the baseline for participation accessible while still allowing the network to increase total DA throughput. From a decentralization perspective, what matters is that the minimum to verify the chain stays within reach of a wide set of participants. Fusaka’s design, as it stands, moves in that direction.
This introduces a new class of risks that we need to keep an eye on as security engineers. The guarantees of data availability sampling are probabilistic and rely on correct implementation, sound randomness for sampling, and honest participation by validators. The math is strong, but the real system will need to be continuously tested, monitored, and audited. PeerDAS is a powerful tool, but it is not magic; it becomes one more layer we need to reason about when we talk about “Ethereum’s security assumptions.”
L1 Limits and DoS surface
The increase in the default gas limit gives blocks more room for execution, but this is paired with a strict per-transaction gas cap so that no single transaction can monopolize an entire block as limits grow. Cryptographic heavyweights such as the MODEXP precompile are now more tightly bounded and repriced to match their real computational cost, which removes extreme worst cases that previously made stress testing and client hardening more difficult.
At the same time, the introduction of an explicit block size limit at the RLP level places a clear upper bound on how large a block can be in bytes, independently of gas. That cap is aligned with what the network can realistically gossip and propagate, and it protects against oversized blocks that could slow down propagation or be used as a bandwidth-based DoS vector.
For auditors and protocol designers, this is a net positive. Worst-case scenarios become easier to model, previous assumptions about block validation time and size can be updated in a controlled way, and one historically awkward precompile is now on a much tighter leash.
What Fusaka means for builders and protocol designers
Fusaka changes the environment in which teams design and operate Ethereum-based systems, especially for rollups and other L2 protocols. With PeerDAS and blob-parameter-only forks, data availability becomes a resource that can grow over time rather than a hard ceiling. This gives L2 builders more room to plan capacity and to keep using Ethereum as their DA layer instead of offloading data to weaker alternatives, but it does not remove the need for careful engineering.
On the UX side, wallet and dApp developers benefit from protocol-level support for passkeys and more predictable block proposer behavior. The P-256 precompile makes it possible to integrate WebAuthn-style authentication directly into smart contract wallets and account abstraction schemes, so that flows such as “sign in with passkey” or “approve with biometrics” can be implemented in a non-custodial way on Ethereum and its rollups. Deterministic proposer lookahead, together with emerging preconfirmation schemes, allows applications to offer clearer, lower-latency assurances about transaction inclusion. This combination moves the ecosystem closer to mainstream application UX.
Limitations and open questions
Fusaka is a big step, but it doesn’t solve everything. Here are some honest caveats:
- L1 gas fees: The EF itself is explicit that Fusaka does not directly lower L1 fees; the main impact is on L2 costs via more blobspace.
- Blob capacity is still finite: Even with BPO forks and PeerDAS, Ethereum cannot host infinite rollups. Alt-DA layers and validiums will continue to exist, with their own trade-offs.
- Probabilistic DA: PeerDAS introduces probabilistic guarantees into the core of Ethereum’s DA story. The math is strong, but real-world implementations must be monitored carefully. We should expect more research and maybe further tweaks over time.
- State growth and MEV: Fusaka barely touches long-term state growth and MEV extraction, which remain open problems for Ethereum’s sustainability and fairness. These are topics more likely to see progress in later roadmap items (e.g., Verkle trees, PBS variants, privacy-oriented frameworks like Kohaku).
- UX is still fragmented: Passkeys and preconfirmations are enablers, not a complete UX solution. Wallets and rollups must adopt them coherently. We may see uneven UX quality for a while.
From our perspective at Decentralized Security, this is healthy. A network of Ethereum’s importance should evolve via incremental but composable steps, each with clearly scoped benefits and trade-offs. Fusaka fits that pattern.
Conclusion
Fusaka closes a chapter that started with Dencun:
- Dencun gave rollups their own cheaper DA lane (blobs).
- Pectra doubled blob capacity and introduced more advanced wallet primitives.
- Fusaka makes blob capacity scalable via PeerDAS and BPO forks, while tightening L1 limits and unlocking passkey-based UX.
The next chapter is Glamsterdam, where Ethereum will lean into parallel transaction processing and deeper execution-layer optimizations.
From a security and decentralization standpoint, Fusaka is encouraging:
- It expands capacity for L2s and L1 without simply demanding bigger machines.
- It simplifies UX in a way that leans on open standards (WebAuthn, passkeys) instead of proprietary custodial flows.
- It keeps the design honest about bottlenecks that remain.