Technical FAQ
Frequently asked technical questions about ShredStream, Solana shreds, and integration details.
What is the maximum packet size?
Each UDP packet contains exactly one shred. The maximum shred size is 1,280 bytes, which, together with IP and UDP headers (28 bytes), fits comfortably within a standard 1,500-byte MTU. In practice, most data shreds are between 1,100 and 1,280 bytes, while coding shreds are typically around 1,228 bytes.
Always allocate a receive buffer of at least 1,280 bytes per recvfrom call. If you allocate less, the kernel will truncate the packet and you will lose data.
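A minimal receive helper illustrating the buffer requirement. The port and the 4 MiB kernel buffer are illustrative assumptions, not ShredStream requirements; use the port assigned to your stream:

```python
import socket

SHRED_MAX = 1_280  # largest possible shred; a smaller buffer risks silent truncation

def make_shred_socket(port: int) -> socket.socket:
    """Bind a UDP socket for shred reception (one shred per datagram)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Generous kernel receive buffer to absorb traffic spikes (assumed value)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    sock.bind(("0.0.0.0", port))
    return sock

def recv_shred(sock: socket.socket) -> bytes:
    # Buffer of SHRED_MAX bytes: the kernel never has to truncate a shred
    data, _addr = sock.recvfrom(SHRED_MAX)
    return data
```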
What throughput should I expect?
Throughput depends on Solana network activity. Typical ranges:
| Metric | Low Activity | Normal | High (NFT mints, etc.) |
|---|---|---|---|
| Shreds/sec | 500-1,000 | 1,500-3,000 | 4,000-6,000+ |
| Bandwidth | ~5 Mbps | ~20 Mbps | ~50+ Mbps |
| Slots/sec | ~2.5 | ~2.5 | ~2.5 |

The slot rate stays roughly constant because Solana targets 400ms slot times regardless of load.
Ensure your server has sufficient bandwidth. A 100 Mbps connection is recommended for headroom during traffic spikes.
What happens when packets are lost?
UDP is a best-effort protocol -- there are no retransmissions. If a shred is lost:
- Data shreds: You will have a gap in your slot data. You can still reconstruct the full block if you receive enough coding shreds (Solana uses Reed-Solomon erasure coding with a typical rate of 2/3 data + 1/3 coding).
- Coding shreds: Losing a few coding shreds is usually harmless, as long as you have all data shreds or enough combined data + coding shreds.
If you need 100% block completeness, keep a fallback path that fetches confirmed blocks via Solana RPC (getBlock) for any slots where you detect missing shreds.
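Detecting when that fallback is needed reduces to finding holes in the data-shred indices received for a slot. The helper names below are illustrative; `getBlock` itself is issued through whatever RPC client you already use:

```python
def find_missing_indices(received: set, last_index: int) -> list:
    """Return data-shred indices missing from a slot, given the highest
    index observed (e.g. the index of the last shred in the slot)."""
    return [i for i in range(last_index + 1) if i not in received]

def needs_backfill(received: set, last_index: int) -> bool:
    """If any index is missing, fall back to RPC getBlock for this slot."""
    return bool(find_missing_indices(received, last_index))
```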
How do shreds relate to blocks?
A Solana block is the complete set of transactions processed in a single slot. The leader (block producer) breaks the block into shreds for network propagation:
- The leader serializes all transactions in the block into a contiguous byte stream called an entry batch.
- The entry batch is split into fixed-size chunks to create data shreds.
- Coding shreds are generated using Reed-Solomon encoding over the data shreds, enabling recovery from packet loss.
- All shreds for a slot share the same `slot` number and are sequentially indexed.
- The last data shred in a slot has the `LAST_SHRED_IN_SLOT` flag set, so you know when the block is complete.
To reconstruct a block from shreds, collect all data shreds for a slot, sort by index, and concatenate their payloads. Then deserialize the entry batch to extract individual transactions.
```python
import struct
from collections import defaultdict

# Accumulate shreds by slot
slots = defaultdict(dict)  # slot -> {index: payload}

def on_shred(data: bytes):
    if len(data) < 88:
        return
    slot = struct.unpack_from("<Q", data, 65)[0]
    index = struct.unpack_from("<I", data, 73)[0]
    variant = data[64]
    # Only collect data shreds (variant & 0xF0 == 0xa0)
    if variant & 0xF0 != 0xa0:
        return
    payload = data[88:]  # Data payload starts after the 88-byte headers
    slots[slot][index] = payload
    # Check LAST_SHRED_IN_SLOT. In the Agave shred layout the flags byte sits
    # at offset 85 (after the 83-byte common header and 2-byte parent offset),
    # and LAST_SHRED_IN_SLOT is the mask 0b1100_0000.
    flags = data[85]
    if flags & 0xC0 == 0xC0:
        reconstruct_block(slot)

def reconstruct_block(slot: int):
    shreds = slots[slot]
    if not shreds:
        return
    # Sort by index and concatenate
    ordered = [shreds[i] for i in sorted(shreds.keys())]
    block_data = b"".join(ordered)
    print(f"Slot {slot}: reconstructed {len(block_data)} bytes from {len(shreds)} shreds")
    del slots[slot]
```
What is the difference between data shreds and coding shreds?
Solana uses two types of shreds for block propagation:
| Property | Data Shreds | Coding Shreds |
|---|---|---|
| Variant byte | 0xa5 (merkle), 0xa0 (legacy) | 0x55 (merkle), 0x50 (legacy) |
| Purpose | Carry actual block entry data | Enable recovery of lost data shreds |
| Typical ratio | ~67% of shreds per FEC set | ~33% of shreds per FEC set |
| Needed for block reconstruction | Yes, always needed | Only if data shreds are missing |
Forward Error Correction (FEC): Shreds within a slot are grouped into FEC sets. Each FEC set contains k data shreds and m coding shreds. You can recover any m missing data shreds from the coding shreds in the same FEC set using Reed-Solomon decoding. If you lose more than m shreds from a single FEC set, that set is unrecoverable from shreds alone and you need to fall back to RPC.
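The recovery condition reduces to simple arithmetic: with k data shreds and m coding shreds in an FEC set, any k received shreds from that set suffice. A minimal sketch (the k=32, m=16 figures in the comment follow the ~2/3 data ratio above and are illustrative):

```python
def fec_set_recoverable(data_received: int, coding_received: int, k: int) -> bool:
    """A Reed-Solomon FEC set with k data shreds can be fully recovered
    from any k of its k data + m coding shreds."""
    return data_received + coding_received >= k

# Example: a set with k=32 data and m=16 coding shreds.
# Receiving 28 data + 4 coding = 32 shreds: the 4 missing data shreds
# are recoverable. Receiving only 20 data + 8 coding = 28 < 32: fall back to RPC.
```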
What regions are available?
ShredStream currently operates in three regions:
- us-east — Virginia, USA (lowest latency to major US data centers)
- eu-west — Frankfurt, Germany
- ap-tokyo — Tokyo, Japan
Choose the region closest to your server. You can activate streams in multiple regions simultaneously for redundancy (each region requires a separate subscription or Turbo plan).
You can change the region of an active subscription at any time from the dashboard. When you switch regions, your stream is briefly interrupted while the system deactivates the old node, assigns a new port in the target region, and activates the new stream. The entire process takes a few seconds.
Can I filter which shreds I receive?
ShredStream.com delivers all shreds for every slot. Filtering is done client-side after reception. This design ensures minimum latency -- any server-side filtering would add processing delay.
Common client-side filtering patterns:
- Data shreds only: Skip coding shreds if you do not need FEC recovery.
- Specific slots: Only process shreds for the current or next expected slot.
- Leader-based filtering: Cross-reference the leader schedule to only process shreds from validators you care about.
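Using the offsets from the reconstruction example above (variant byte at offset 64, slot at offset 65), the first two patterns might look like this; the `window` parameter is an illustrative choice:

```python
import struct

def is_data_shred(data: bytes) -> bool:
    """Keep only data shreds (variant high nibble 0xa0 per the variant table)."""
    return len(data) > 64 and (data[64] & 0xF0) == 0xA0

def in_slot_window(data: bytes, current_slot: int, window: int = 2) -> bool:
    """Keep shreds for the current slot or the next expected slots."""
    if len(data) < 73:
        return False
    slot = struct.unpack_from("<Q", data, 65)[0]
    return current_slot <= slot < current_slot + window
```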
How does latency compare to Solana RPC?
ShredStream.com delivers shreds as they are produced by the leader, before the block is confirmed. This gives you a significant timing advantage:
| Method | Typical Latency | Notes |
|---|---|---|
| ShredStream.com UDP | ~100-200ms before confirmation | Raw shreds as they propagate |
| RPC (processed) | ~400-800ms after production | Block must be assembled and processed |
| RPC (confirmed) | ~6-12 seconds | Requires supermajority vote |
For MEV and arbitrage use cases, the shred-level latency advantage can mean the difference between capturing an opportunity or missing it.
Is the shred data signed and verified?
Yes. Every shred carries an Ed25519 signature from the block-producing validator (leader) in the first 64 bytes. You can verify this signature against the leader's identity public key from the leader schedule. However, note:
- Signature verification adds CPU overhead (~3-5 microseconds per shred with optimized Ed25519).
- Most production consumers skip verification for speed and rely on ShredStream's infrastructure to deliver authentic shreds.
- If you need to verify, batch the signature checks on a separate thread to avoid blocking the receive path.
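A sketch of off-path verification. The split below assumes the signature covers all bytes after the 64-byte signature field (for merkle shred variants the leader actually signs the merkle root, so a full implementation needs variant-aware handling), and PyNaCl is one Ed25519 library choice, not a ShredStream requirement:

```python
SIG_LEN = 64  # the Ed25519 signature occupies the first 64 bytes of every shred

def split_shred(data: bytes):
    """Split a raw shred into (signature, signed_message)."""
    if len(data) < SIG_LEN:
        raise ValueError("shred too short to contain a signature")
    return data[:SIG_LEN], data[SIG_LEN:]

def verify_shred(data: bytes, leader_pubkey: bytes) -> bool:
    """Verify the leader's signature (requires: pip install pynacl)."""
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError
    sig, msg = split_shred(data)
    try:
        VerifyKey(leader_pubkey).verify(msg, sig)
        return True
    except BadSignatureError:
        return False
```

Feed `verify_shred` from a queue on a worker thread so the receive loop never blocks on the signature check.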
Can I subscribe for longer periods?
Yes! When subscribing, you can use the duration multiplier to extend any plan:
- All plans (Shared, Dedicated, Turbo): 1x to 24x (up to 2 years)
Longer commitments come with progressive discounts:
| Multiplier | Discount |
|---|---|
| 3x | 5% |
| 6x | 10% |
| 12x | 15% |
| 24x+ | 20% |
The final price is calculated as: base_price × multiplier × (1 - discount). For example, 12x Dedicated = €279 × 12 × 0.85 = €2,845.80 (saving €502.20).
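The tier logic fits in a few lines (assuming, as the table suggests, that a multiplier between tiers gets the highest discount at or below it):

```python
# (minimum multiplier, discount), checked from the largest tier down
DISCOUNT_TIERS = [(24, 0.20), (12, 0.15), (6, 0.10), (3, 0.05)]

def subscription_price(base_price: float, multiplier: int) -> float:
    """Final price = base_price x multiplier x (1 - discount)."""
    discount = next((d for m, d in DISCOUNT_TIERS if multiplier >= m), 0.0)
    return round(base_price * multiplier * (1 - discount), 2)
```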
How do I handle Solana epoch boundaries?
Solana epochs last approximately 2-3 days (~432,000 slots). At epoch boundaries, the leader schedule rotates. ShredStream.com handles this transparently -- you continue to receive shreds without interruption. However, if you are verifying shred signatures, you need to update the leader schedule at each epoch boundary:
```python
from solana.rpc.api import Client

rpc = Client("https://api.mainnet.solana.com")

def get_leader_schedule(epoch: int) -> dict:
    """Fetch the leader schedule for a given epoch.
    Returns {validator_pubkey: [slot_indices]}"""
    resp = rpc.get_leader_schedule(epoch=epoch)
    return resp.value

# Cache the schedule and refresh when epoch changes
current_epoch = None
schedule = {}

def on_shred(slot: int, data: bytes):
    global current_epoch, schedule
    epoch = slot // 432_000  # Approximate epoch calculation
    if epoch != current_epoch:
        print(f"Epoch changed to {epoch}, refreshing leader schedule...")
        schedule = get_leader_schedule(epoch)
        current_epoch = epoch
    # Look up the leader for this slot
    slot_in_epoch = slot % 432_000
    # ... verify signature against expected leader
```