JIL Sovereign - L1 Infrastructure

JIL-5600 Blockchain Sizing & Storage Analysis

Storage architecture, capacity analysis, and hardware requirements for the JIL-5600 settlement engine

CONFIDENTIAL  ·  February 2026  ·  Version 1.0

1.5s Block Time  ·  10K Max Tx/Block  ·  14/20 BFT Threshold  ·  10B Genesis Supply

01 Chain Parameters

The JIL-5600 settlement engine operates on chain ID jil-sovereign-1 with a genesis supply of 10,000,000,000 JIL (10B). All core parameters are frozen in l1/genesis.json and enforced by the ABCI application layer.

| Parameter | Value | Source |
|---|---|---|
| Block Time | 1.5 seconds | l1/genesis.json (frozen) |
| Max Block Size | 10 MB (10,485,760 bytes) | l1/genesis.json |
| Max Transactions/Block | 10,000 | l1/genesis.json |
| Max Transaction Size | 1 MB (1,048,576 bytes) | l1/jil-abci/src/app.rs |
| Max Block Gas | 100,000,000 | l1/jil-abci/src/app.rs |
| Block Reward | 4 JIL/block (~84M JIL/year) | l1/genesis.json |
| Inflation Rate | 0.84% annual on 10B base | Derived |
| BFT Threshold | 70% (14-of-20) | l1/genesis.json |

Derived Throughput

Key Insight: At maximum capacity, the JIL-5600 chain can process up to 576 million transactions per day, generating 576 GB of raw block data daily. Real-world usage will be a fraction of these theoretical maximums.
| Metric | Value |
|---|---|
| Blocks per day | 57,600 |
| Blocks per year | 21,024,000 |
| Max daily transactions | 576,000,000 |
| Max daily data (full blocks) | 576 GB |
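The figures above follow directly from the frozen parameters. A minimal sketch (using decimal megabytes, which is how the 576 GB figure is derived):

```rust
// Derived throughput from the frozen genesis parameters.
const BLOCK_TIME_MS: u64 = 1_500; // 1.5 s blocks
const MAX_TX_PER_BLOCK: u64 = 10_000;
const MAX_BLOCK_BYTES: u64 = 10_000_000; // 10 MB, decimal units

fn blocks_per_day() -> u64 {
    86_400_000 / BLOCK_TIME_MS
}

fn main() {
    let per_day = blocks_per_day();
    println!("blocks/day:    {per_day}");                      // 57,600
    println!("blocks/year:   {}", per_day * 365);              // 21,024,000
    println!("max tx/day:    {}", per_day * MAX_TX_PER_BLOCK); // 576,000,000
    println!("max bytes/day: {}", per_day * MAX_BLOCK_BYTES);  // 576 GB
}
```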

02 Storage Architecture

The JIL-5600 uses a layered storage design separating L1 block/state storage from services-layer operational data. This separation allows independent scaling and optimization of each tier.

L1 Layer (Block & State Storage)

The JIL-5600 settlement engine uses a dual-mode storage engine:

  • Production: RocksDB v0.21 (feature-gated via persistent-storage)
  • DevNet: In-memory with JSON snapshot persistence

Source: l1/jil5600-core/src/storage.rs

Services Layer (PostgreSQL 16)

PostgreSQL 16 stores operational data for the 190+ microservices:

  • Transaction records, settlement events, audit logs
  • Compliance check results, routing decisions
  • NOT used for block/state storage

Source: deploy/hetzner/docker-compose.validator.yml

Storage Mode Selection: Controlled by the LEDGER_DATA_DIR environment variable. If set and the persistent-storage Cargo feature is enabled, RocksDB is used. Otherwise, in-memory storage is the default.
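A minimal sketch of that selection rule. The type and function names here are invented for illustration; the actual jil5600-core API may differ.

```rust
// Illustrative mode selection: RocksDB only when LEDGER_DATA_DIR is set
// AND the `persistent-storage` Cargo feature is compiled in; otherwise
// fall back to in-memory storage (the DevNet default).
use std::env;

#[derive(Debug, PartialEq)]
enum StorageMode {
    RocksDb(String), // path taken from LEDGER_DATA_DIR
    InMemory,        // DevNet default, JSON snapshot persistence
}

fn select_storage_mode() -> StorageMode {
    // cfg! evaluates at compile time; false unless the feature is enabled.
    let persistent_feature_enabled = cfg!(feature = "persistent-storage");
    match env::var("LEDGER_DATA_DIR") {
        Ok(dir) if persistent_feature_enabled => StorageMode::RocksDb(dir),
        _ => StorageMode::InMemory,
    }
}

fn main() {
    println!("selected mode: {:?}", select_storage_mode());
}
```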
Storage Architecture Data Flow:

  Transactions   -->  ABCI Layer  -->  RocksDB (Block + State)
       |                  |                    |
       |                  v                    v
       |           Pruning Engine       Persistent Disk
       |          (100-block window)    (NVMe / SSD)
       |
       v
  Services Layer  -->  PostgreSQL 16
  (190+ services)      (Tx records, audit, compliance)

03 RocksDB Configuration

Current production configuration from l1/jil5600-core/src/storage.rs:153-160. RocksDB provides LSM-tree based storage with LZ4 compression, tuned for the write-heavy workload of block production.

| Setting | Value | Notes |
|---|---|---|
| Compression | LZ4 | Fast, ~3x compression ratio |
| Write Buffer Size | 64 MB | Per-buffer (3 buffers max = 192 MB in-flight) |
| Max Write Buffers | 3 | Standard for write-heavy workloads |
| Target File Size | 64 MB | L1 SST file size |
| Create If Missing | true | Auto-create on first run |
| Column Families | Default only | No separation of blocks/state/indexes |
| Bloom Filters | Not configured | Increases read amplification at scale |
| Block Cache | Default (8 MB) | Should increase for multi-TB databases |
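The write-buffer arithmetic in the table (3 buffers of 64 MB = 192 MB in-flight) can be checked with a plain-struct sketch. The real node sets these values through the rocksdb crate's options; the struct here is purely illustrative.

```rust
// Illustrative mirror of the production RocksDB tuning values.
struct RocksDbTuning {
    write_buffer_bytes: usize,
    max_write_buffers: usize,
    target_file_size_bytes: usize,
    block_cache_bytes: usize,
}

impl RocksDbTuning {
    fn current() -> Self {
        Self {
            write_buffer_bytes: 64 * 1024 * 1024, // 64 MB per memtable
            max_write_buffers: 3,
            target_file_size_bytes: 64 * 1024 * 1024,
            block_cache_bytes: 8 * 1024 * 1024,   // RocksDB default
        }
    }

    /// Worst-case memory held by unflushed memtables.
    fn max_inflight_bytes(&self) -> usize {
        self.write_buffer_bytes * self.max_write_buffers
    }
}

fn main() {
    let cfg = RocksDbTuning::current();
    println!("max in-flight memtable memory: {} MB",
             cfg.max_inflight_bytes() / (1024 * 1024)); // 192 MB
}
```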

Key Structure

RocksDB stores data using a simple key-value pattern optimized for fast account lookups and full state snapshots.

| Key Pattern | Value | Purpose |
|---|---|---|
| __ledger_state__ | Full State JSON | Complete state snapshot |
| account:{address} | Account JSON | Per-account fast lookup |
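Key construction is straightforward; a sketch with an illustrative helper name (not the storage.rs API):

```rust
// Illustrative key construction for the two patterns above.
const FULL_STATE_KEY: &str = "__ledger_state__";

fn account_key(address: &str) -> String {
    format!("account:{address}")
}

fn main() {
    println!("{}", account_key("jil1treasury")); // account:jil1treasury
    println!("{FULL_STATE_KEY}");
}
```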

Pruning Configuration (ABCI Layer)

Pruning Strategy: Validators retain only the most recent 100 blocks, dramatically reducing storage requirements. Archive nodes can disable pruning to retain full history. The pruning interval of every 10 blocks balances disk reclamation with I/O overhead.
| Setting | Value | Source |
|---|---|---|
| Keep Recent Blocks | 100 | l1/jil-abci/src/app.rs:73 |
| Pruning Interval | Every 10 blocks | l1/jil-abci/src/app.rs:74 |
| Validator Persist Interval | Every 100 blocks | JIL_PERSIST_INTERVAL env var |
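A sketch of the pruning schedule these settings imply: every 10th block, drop everything older than the most recent 100 blocks. Function names are invented for illustration, not the jil-abci API.

```rust
// Pruning schedule sketch: prune every PRUNE_INTERVAL blocks, keeping
// the most recent KEEP_RECENT blocks.
const KEEP_RECENT: u64 = 100;   // l1/jil-abci/src/app.rs:73
const PRUNE_INTERVAL: u64 = 10; // l1/jil-abci/src/app.rs:74

/// Returns the cutoff height when committing `height`: all blocks below
/// it may be deleted. None means no pruning this block.
fn prune_below(height: u64) -> Option<u64> {
    if height % PRUNE_INTERVAL == 0 && height > KEEP_RECENT {
        Some(height - KEEP_RECENT)
    } else {
        None
    }
}

fn main() {
    assert_eq!(prune_below(1_000), Some(900)); // keep blocks 900..=1000
    assert_eq!(prune_below(1_005), None);      // not a pruning height
    assert_eq!(prune_below(50), None);         // chain shorter than window
    println!("pruning schedule ok");
}
```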

04 Genesis Allocation (Day 0 Storage)

The 10B JIL coins are pre-minted in genesis across 7 wallets. Genesis block size is approximately 50 KB. Each account balance is a u128 number (16 bytes) stored once per account.

| Account | Allocation | Percentage |
|---|---|---|
| jil1founders | 1,500,000,000 JIL | 15% |
| jil1treasury | 3,000,000,000 JIL | 30% |
| jil1operations | 2,000,000,000 JIL | 20% |
| jil1publicsale | 1,000,000,000 JIL | 10% |
| jil1validators | 1,000,000,000 JIL | 10% |
| jil1ecosystem | 1,000,000,000 JIL | 10% |
| jil1reserve | 500,000,000 JIL | 5% |
Storage Impact: The coin count itself is irrelevant to storage: each balance is a u128 number (16 bytes) stored once per account. Storage growth is driven by transaction volume, not coin count. Genesis is just ~50 KB regardless of total supply.

05 Storage Projections

Transaction Size

A typical JIL transfer transaction serialized as JSON totals approximately 300 bytes. Binary serialization (protobuf/bincode) would reduce this to ~170 bytes (~1.8x savings).

| Component | Bytes |
|---|---|
| Sender address | 32 |
| Receiver address | 32 |
| Amount (u128) | 16 |
| Nonce (u64) | 8 |
| Signature (Ed25519) | 64 |
| Zone ID | ~12 |
| Gas/metadata | ~50 |
| JSON overhead | ~80 |
| Total (JSON-serialized) | ~300 bytes |
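The components sum to 294 bytes, rounded to ~300 in the table:

```rust
// Sum of the per-field sizes of a typical JSON-serialized JIL transfer.
fn json_tx_size_bytes() -> u64 {
    let sender: u64 = 32;
    let receiver = 32;
    let amount = 16;        // u128
    let nonce = 8;          // u64
    let signature = 64;     // Ed25519
    let zone_id = 12;       // approximate
    let gas_metadata = 50;  // approximate
    let json_overhead = 80; // field names, quotes, braces (approximate)
    sender + receiver + amount + nonce + signature + zone_id + gas_metadata + json_overhead
}

fn main() {
    println!("typical JSON tx: ~{} bytes", json_tx_size_bytes()); // 294, ~300 rounded
}
```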

Block Storage Growth

| Scenario | Avg Tx/Block | Avg Block Size | Daily Growth | Annual Growth |
|---|---|---|---|---|
| Idle (validators only) | 0-1 | ~500 bytes | ~28 MB | ~10 GB |
| Light (early TestNet) | 50 | ~15 KB | ~865 MB | ~310 GB |
| Moderate (production) | 500 | ~150 KB | ~8.4 GB | ~3 TB |
| Heavy (peak load) | 2,000 | ~600 KB | ~33.6 GB | ~12 TB |
| Max capacity | 10,000 | ~3 MB | ~168 GB | ~60 TB |
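The daily figures are approximately blocks/day times average block size. Using the rounded ~300-byte transaction, this sketch lands a few percent above the table, which appears to use a slightly smaller effective per-transaction size:

```rust
// Block storage growth from block rate and average transaction size.
const BLOCKS_PER_DAY: u64 = 57_600; // 86,400 s / 1.5 s
const TX_BYTES: u64 = 300;          // rounded JSON transaction size

fn daily_growth_bytes(avg_tx_per_block: u64) -> u64 {
    BLOCKS_PER_DAY * avg_tx_per_block * TX_BYTES
}

fn main() {
    for (label, tx) in [("light", 50u64), ("moderate", 500), ("heavy", 2_000), ("max", 10_000)] {
        let daily = daily_growth_bytes(tx);
        println!("{label:8} {:>7.2} GB/day  {:>6.2} TB/year",
                 daily as f64 / 1e9,
                 daily as f64 * 365.0 / 1e12);
    }
}
```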

With LZ4 Compression (~3x ratio)

LZ4 compression reduces storage requirements by approximately 3x across all scenarios, making multi-year operation feasible on standard hardware.

| Scenario | Raw Annual | Compressed Annual |
|---|---|---|
| Idle | 10 GB | ~3.5 GB |
| Light | 310 GB | ~105 GB |
| Moderate | 3 TB | ~1 TB |
| Heavy | 12 TB | ~4 TB |
| Max capacity | 60 TB | ~20 TB |

With Pruning (100-block window)

Pruning Impact: Pruned validators store only the most recent 100 blocks. Even at maximum capacity, this limits hot state to just 300 MB, which fits easily in RAM. This is the default mode for all validator nodes.
| Scenario | Pruned Hot State |
|---|---|
| Idle | ~50 KB |
| Light | ~1.5 MB |
| Moderate | ~15 MB |
| Heavy | ~60 MB |
| Max capacity | ~300 MB |
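The hot-state figures are simply the 100-block retention window times the average block size:

```rust
// Pruned hot state = retention window x average block size.
const KEEP_RECENT: u64 = 100;

fn hot_state_bytes(avg_block_bytes: u64) -> u64 {
    KEEP_RECENT * avg_block_bytes
}

fn main() {
    println!("moderate: ~{} MB", hot_state_bytes(150_000) / 1_000_000);   // ~15 MB
    println!("max:      ~{} MB", hot_state_bytes(3_000_000) / 1_000_000); // ~300 MB
}
```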
Storage Comparison (Moderate Load, 1 Year):

  Raw blocks:        ████████████████████████████████████  3.0 TB
  LZ4 compressed:    ████████████                          1.0 TB
  Pruned (100 blk):  .                                     15 MB

06 State Database Size

The state database holds all account balances, separate from block history. State grows linearly with the number of unique accounts on the network, not with transaction volume.

| Accounts | State Size (JSON) | State Size (Binary) |
|---|---|---|
| 1,000 | ~500 KB | ~170 KB |
| 10,000 | ~5 MB | ~1.7 MB |
| 100,000 | ~50 MB | ~17 MB |
| 1,000,000 | ~500 MB | ~170 MB |
| 10,000,000 | ~5 GB | ~1.7 GB |
| 100,000,000 | ~50 GB | ~17 GB |
Binary Serialization Advantage: Migrating from JSON to binary (protobuf or bincode) reduces state size by approximately 3x. At 10 million accounts, state drops from 5 GB to just 1.7 GB, small enough to cache in RAM for fast consensus rounds.
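A sketch of the linear scaling, taking the ~500-byte (JSON) and ~170-byte (binary) per-account costs from the table above:

```rust
// State size scales linearly with account count, not transaction volume.
const JSON_BYTES_PER_ACCOUNT: u64 = 500;   // from the table above
const BINARY_BYTES_PER_ACCOUNT: u64 = 170; // ~3x smaller

fn state_size_bytes(accounts: u64, per_account: u64) -> u64 {
    accounts * per_account
}

fn main() {
    for accounts in [1_000u64, 1_000_000, 10_000_000] {
        println!("{accounts:>10} accounts: JSON ~{} MB, binary ~{} MB",
                 state_size_bytes(accounts, JSON_BYTES_PER_ACCOUNT) / 1_000_000,
                 state_size_bytes(accounts, BINARY_BYTES_PER_ACCOUNT) / 1_000_000);
    }
}
```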

Early Network: 1K-10K accounts, 500 KB - 5 MB of state. Fits entirely in CPU cache.

Growth Phase: 100K-1M accounts, 50 MB - 500 MB of state. Fits in validator RAM.

Scale Phase: 10M-100M accounts, 5 GB - 50 GB of state. Requires optimized caching.

07 RocksDB Capacity Analysis

Can RocksDB Handle This?

Yes. RocksDB is designed for multi-terabyte workloads. It powers some of the largest storage systems in production today.

| System | Typical RocksDB Size | Use Case |
|---|---|---|
| Facebook (Meta) | Petabytes | Social graph, messaging |
| CockroachDB | Multi-TB per node | Distributed SQL |
| TiKV (TiDB) | Multi-TB per node | Distributed KV |
| Ethereum (geth) | ~1.2 TB (full node) | Blockchain state |
| Solana | ~500 GB+ | Blockchain accounts |

Current JIL Config vs. Recommended for Scale

Scaling Roadmap: The current configuration is optimized for TestNet and early MainNet. As the chain grows beyond 1 TB, the recommended settings should be applied to maintain performance. All changes are node-local configuration; no chain halt is required.
| Setting | Current | Recommended (>1 TB) |
|---|---|---|
| Write Buffer | 64 MB | 128 MB |
| Max Write Buffers | 3 | 4-6 |
| Block Cache | 8 MB (default) | 2-4 GB |
| Bloom Filters | None | 10-bit per key |
| Column Families | Single | 3 (blocks, state, indexes) |
| Compression | LZ4 (all levels) | LZ4 (L0-L1), ZSTD (L2+) |
| Compaction Style | Leveled (default) | Leveled (no change needed) |
| Max Background Jobs | Default (2) | 4-8 |
| Target File Size | 64 MB | 128 MB |

Bottleneck Analysis

Five key bottlenecks have been identified in the current configuration, each with a clear mitigation path for production scale.

| Bottleneck | Impact | Mitigation |
|---|---|---|
| JSON serialization | 3-5x storage overhead, slow encode/decode | Migrate to bincode/protobuf |
| No bloom filters | Read amplification at >100 GB | Add 10-bit bloom filters |
| Single column family | All data competes for cache | Separate blocks/state/indexes |
| Small block cache | Excessive disk reads at >10 GB | Increase to 1-4 GB |
| Full state key | Entire state read/write on each block | Use per-account keys only |
Recommended Column Family Layout:

  RocksDB Instance
  ├── CF: "blocks"     - Block headers + transaction data
  │     Key: block:{height}
  │     Compression: LZ4 (L0-L1), ZSTD (L2+)
  │     Cache: 512 MB
  │
  ├── CF: "state"      - Account balances, contract state
  │     Key: account:{address}
  │     Compression: LZ4
  │     Cache: 2 GB (hot accounts)
  │     Bloom: 10-bit
  │
  └── CF: "indexes"    - Tx-by-hash, tx-by-sender lookups
        Key: idx:{type}:{value}
        Compression: ZSTD
        Cache: 512 MB
        Bloom: 10-bit

08 PostgreSQL Capacity Analysis (Services Layer)

PostgreSQL stores operational data for the 190+ microservices, not chain state. Key high-volume tables require partitioning and retention policies for sustainable growth.

| Table | Growth Rate (Moderate) | 1yr Size | Mitigation |
|---|---|---|---|
| transactions | ~500K rows/day | ~50 GB | Time-based partitioning |
| settlement_events | ~100K rows/day | ~10 GB | Standard indexes |
| qb_routing_log | ~5M rows/day | ~400 GB | Partitioning + 90-day retention |
| audit_log | ~1M rows/day | ~80 GB | Partitioning + 180-day retention |
| health_checks | ~500K rows/day | ~30 GB | 30-day retention |

PostgreSQL Limits

| Metric | PostgreSQL Limit | JIL Requirement |
|---|---|---|
| Max table size | 32 TB | <500 GB (with partitioning) |
| Max row count | Unlimited | ~2B rows/year |
| Max database size | Unlimited | <1 TB |
| Max connections | ~5,000 | 82 services x PG_POOL_MAX |

Recommended PostgreSQL Optimizations

1. Time-Based Partitioning

Monthly partitions on all high-volume tables. Enables fast partition drops for retention enforcement and parallel query execution.

2. Retention Policies

Drop partitions older than retention window: 30 days for health checks, 90 days for routing logs, 180 days for audit logs.

3. Connection Pooling

PgBouncer in front of PostgreSQL to multiplex 82+ service connections into a smaller pool of actual database connections.

4. Read Replicas

Streaming replicas for ops-dashboard and analytics queries, keeping the primary focused on write-heavy service workloads.

Vacuum Tuning: Set autovacuum_vacuum_scale_factor = 0.01 for large tables to trigger vacuuming more frequently. The default (0.2) means a 100M-row table would accumulate 20M dead tuples before vacuum kicks in.

09 Hardware Requirements

Pruned Validator Node (TestNet/MainNet)

Pruned validators store only the most recent 100 blocks, making them extremely lightweight. These specs are suitable for both TestNet and early MainNet participation.

| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8 cores |
| RAM | 8 GB | 16 GB |
| Storage | 100 GB SSD | 250 GB NVMe |
| Network | 100 Mbps | 1 Gbps |

Archive Node

Archive nodes retain the full block history without pruning. Storage requirements grow linearly with chain age and transaction volume.

| Component | Year 1 (Moderate) | Year 3 (Moderate) |
|---|---|---|
| CPU | 8 cores | 16 cores |
| RAM | 16 GB | 32 GB |
| Storage | 1 TB NVMe | 3 TB NVMe |
| Network | 1 Gbps | 1 Gbps |

Current Hetzner Validators

Current Fleet Status: All four Hetzner validators are identically provisioned and confirmed sufficient for TestNet operations with over one year of headroom at current utilization (3% storage used, ~145 GB free).
| Spec | Value | Sufficient? |
|---|---|---|
| CPU | 4-core AMD EPYC (Genoa) | Yes (TestNet) |
| RAM | 7.6 GiB | Yes (pruned validator) |
| Storage | 150 GB SSD (3% used) | Yes (~145 GB free, 1+ year headroom) |
| OS | Ubuntu 24.04.3 LTS | Yes |
| Docker | 29.2.1 | Yes |
| Containers | 13 per node | Yes |
Pruned Validator (TestNet & MainNet): Low Requirements

4-8 cores, 8-16 GB RAM, 100-250 GB SSD. Stores only the 100 most recent blocks. Ideal for consensus participation without full archival.

Archive Node (MainNet, Full History): Growing Requirements

8-16 cores, 16-32 GB RAM, 1-3 TB NVMe. Retains complete block history. Required for block explorers, analytics, and historical queries.

10 Summary

This analysis confirms that the JIL-5600 blockchain architecture is well-suited for its target workloads, with clear optimization paths for scaling beyond initial deployment.

| Question | Answer |
|---|---|
| How big is the blockchain at 10B coins? | The coins are just numbers; genesis is ~50 KB |
| What drives storage growth? | Transaction volume, not coin count |
| Can RocksDB handle it? | Yes, up to multi-TB with config tuning |
| Can PostgreSQL handle it? | Yes, with table partitioning on large tables |
| Are current Hetzner servers sufficient? | Yes, for 1+ years as pruned validators |
| What needs optimization for MainNet? | Binary serialization, bloom filters, column families, PG partitioning |
Bottom Line: The JIL-5600 storage architecture is production-ready for TestNet and early MainNet. Genesis supply has no meaningful storage impact. At moderate production load (~500 tx/block), pruned validators need less than 15 MB of hot state, and archive nodes grow at ~1 TB/year compressed. All identified bottlenecks have clear, well-understood mitigation paths using standard RocksDB and PostgreSQL tuning.

Sources

Document generated from JIL-5600 L1 source code analysis.