You can find all published reports here:
Sunnyside Devnet Reports (Internal)
Key results:
- 18 devnets run on the Fusaka-Devnet-0 and Fusaka-Devnet-1 specs with higher target/max blobs, mainly testing single-CL devnets with and without GetBlobsV2, plus devnets with different CLs and ELs combined
- 72 or more blobs per block achieved, even with GetBlobsV2 disabled for most clients (Grandine and Lodestar got ~60 without GetBlobsV2)
- Average bandwidth of ~20 Mbps for some fullnodes, while other fullnodes averaged above 80 Mbps, raising concerns for bandwidth-limited nodes

This testing effort examined the performance of Ethereum clients with their current Fusaka development branches. We deployed isolated devnets for each consensus client (50 nodes each, with a matching execution client) as well as mixed-client networks, aiming to identify how each behaves at maximum blob load. The analysis focused on conditions around 72 blobs per block (the anticipated max) and beyond.
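For scale, the raw blob payload at 72 blobs per block is only a few Mbps; the back-of-the-envelope below (a sketch assuming 128 KiB per blob and 12-second slots, ignoring erasure-coded columns, gossip duplication, and other p2p traffic) puts the bandwidth figures above in perspective.

```go
// Back-of-the-envelope: raw blob payload rate at the 72-blob target.
// Assumptions: 128 KiB per blob (4096 field elements * 32 bytes), 12 s slots.
package main

import "fmt"

func main() {
	const (
		blobBytes    = 128 * 1024
		blobsPerSlot = 72
		slotSeconds  = 12
	)
	mbps := float64(blobBytes*blobsPerSlot*8) / slotSeconds / 1e6
	fmt.Printf("raw blob payload at %d blobs/block: %.1f Mbps\n", blobsPerSlot, mbps)
	// ~6.3 Mbps of raw payload; the remainder of a node's observed bandwidth
	// comes from column propagation, gossip duplication, and other p2p traffic.
}
```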
All devnets were run using EthPandaOps’ Fusaka configurations with minor tweaks to push blob throughput. We conducted trials on both the Devnet-0 spec (max target 72 blobs per block) and the Devnet-1 spec (max target 128 blobs). Each network consisted of ~50–70 nodes (DigitalOcean VMs, 8 vCPUs, 16 GB RAM) distributed across multiple regions, with no artificial bandwidth limits imposed. A custom transaction spam tool (“Spamoor”) was used to gradually ramp up blob traffic, starting from 1 blob per block and increasing by one blob every 5 minutes, with one blob per transaction. This ensured that the devnets experienced an incrementally rising load until the target was reached or a bottleneck was hit.
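The ramp logic is simple enough to express directly; the snippet below is an illustrative sketch of the schedule (not Spamoor’s actual configuration format), with the per-spec caps taken from the paragraph above.

```go
// Illustrative sketch of the blob-ramp schedule used in these runs:
// 1 blob per block initially, +1 every 5 minutes, one blob per transaction,
// capped at the spec maximum (72 on Devnet-0, 128 on Devnet-1).
package main

import (
	"fmt"
	"time"
)

// blobsPerBlock maps elapsed time since the ramp start to the blob target.
func blobsPerBlock(elapsed time.Duration, maxBlobs int) int {
	target := 1 + int(elapsed/(5*time.Minute))
	if target > maxBlobs {
		return maxBlobs
	}
	return target
}

func main() {
	for _, elapsed := range []time.Duration{0, 1 * time.Hour, 6 * time.Hour, 12 * time.Hour} {
		fmt.Printf("after %v: %d blobs/block\n", elapsed, blobsPerBlock(elapsed, 72))
	}
	// Reaching the 72-blob target takes roughly 71 * 5 = 355 minutes (~6 hours).
}
```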
We ran several devnet configurations to isolate performance factors:

- Single-CL devnets (50 nodes each, one CL paired with a matching EL): clients reached 72 blobs/block and maintained that rate every slot for 24 hours, meeting the Fusaka upgrade target. Following the Berlin interop requests, we extended this testing on the Devnet-1 spec: each CL was tested in two modes, one with default settings and another with the GetBlobsV2 blob-fetch API disabled (see the sketch after this list). This allowed us to measure how much the new blob propagation optimization contributed to each client’s performance. (Nimbus was a known outlier: it consistently maxed out at ~10 blobs even when paired with different ELs, due to pending optimizations at the time.)
- Mixed-client devnets (different CLs and ELs combined): these reached 72-blob blocks as well, indicating that heterogeneous nodes can cooperate at full blob throughput. However, this mixed setup also hinted at cross-client discrepancies (e.g. differing bandwidth usage and attestation timing) that might become relevant as we approach network limits.
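For context on GetBlobsV2: it lets a CL ask its paired EL for blobs that already sit in the local transaction pool, rather than waiting for them to arrive over gossip. The snippet below is a minimal sketch of such a query, assuming the standard Engine API JSON-RPC conventions (HTTP on the engine port, default 8551); JWT authentication and error handling are omitted, the hashes are hypothetical placeholders, and the exact request/response schema should be checked against the current execution-apis spec.

```go
// Hedged sketch: a consensus client querying its local execution client's
// engine_getBlobsV2 endpoint for blobs referenced by a block's transactions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type rpcRequest struct {
	JSONRPC string        `json:"jsonrpc"`
	ID      int           `json:"id"`
	Method  string        `json:"method"`
	Params  []interface{} `json:"params"`
}

func main() {
	// Versioned hashes of the blobs to fetch (hypothetical placeholders).
	versionedHashes := []string{
		"0x01aa...",
		"0x01bb...",
	}

	reqBody, _ := json.Marshal(rpcRequest{
		JSONRPC: "2.0",
		ID:      1,
		Method:  "engine_getBlobsV2",
		Params:  []interface{}{versionedHashes},
	})

	// Engine API endpoint of the paired EL (JWT auth omitted for brevity).
	resp, err := http.Post("http://127.0.0.1:8551", "application/json", bytes.NewReader(reqBody))
	if err != nil {
		fmt.Println("engine call failed:", err)
		return
	}
	defer resp.Body.Close()

	var result map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&result)
	// If the EL cannot serve the requested blobs locally, the CL falls back
	// to fetching the data over the p2p network instead.
	fmt.Println(result["result"])
}
```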