Storage architecture, capacity analysis, and hardware requirements for the JIL-5600 settlement engine
The JIL-5600 settlement engine operates on chain ID jil-sovereign-1 with a genesis supply of 10,000,000,000 JIL (10B). All core parameters are frozen in l1/genesis.json and enforced by the ABCI application layer.
| Parameter | Value | Source |
|---|---|---|
| Block Time | 1.5 seconds | l1/genesis.json (frozen) |
| Max Block Size | 10 MB (10,485,760 bytes) | l1/genesis.json |
| Max Transactions/Block | 10,000 | l1/genesis.json |
| Max Transaction Size | 1 MB (1,048,576 bytes) | l1/jil-abci/src/app.rs |
| Max Block Gas | 100,000,000 | l1/jil-abci/src/app.rs |
| Block Reward | 4 JIL/block (~84M JIL/year) | l1/genesis.json |
| Inflation Rate | 0.84% annual on 10B base | Derived |
| BFT Threshold | 70% (14-of-20) | l1/genesis.json |
| Metric | Value |
|---|---|
| Blocks per day | 57,600 |
| Blocks per year | 21,024,000 |
| Max daily transactions | 576,000,000 |
| Max daily data (full blocks) | 576 GB |
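These throughput figures follow directly from the frozen parameters. A quick sanity check in Rust (using the table's round 10 MB = 10^7 bytes per block for the daily-data figure, rather than the exact 10,485,760-byte limit):

```rust
// Derive the throughput table from the frozen chain parameters.
// Block time is 1.5 s, so we work in tenths of a second to stay in integers.

const BLOCK_TIME_TENTHS: u64 = 15; // 1.5 seconds
const MAX_TXS_PER_BLOCK: u64 = 10_000;
const MAX_BLOCK_BYTES: u64 = 10_000_000; // round 10 MB, as in the table

pub fn blocks_per_day() -> u64 {
    86_400 * 10 / BLOCK_TIME_TENTHS
}

pub fn blocks_per_year() -> u64 {
    blocks_per_day() * 365
}

pub fn max_daily_txs() -> u64 {
    blocks_per_day() * MAX_TXS_PER_BLOCK
}

pub fn max_daily_bytes() -> u64 {
    blocks_per_day() * MAX_BLOCK_BYTES
}

fn main() {
    println!("blocks/day:  {}", blocks_per_day());   // 57,600
    println!("blocks/year: {}", blocks_per_year());  // 21,024,000
    println!("max tx/day:  {}", max_daily_txs());    // 576,000,000
    println!("max GB/day:  {}", max_daily_bytes() / 1_000_000_000); // 576
}
```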
The JIL-5600 uses a layered storage design separating L1 block/state storage from services-layer operational data. This separation allows independent scaling and optimization of each tier.
The JIL-5600 settlement engine uses a dual-mode storage engine (source: l1/jil5600-core/src/storage.rs): RocksDB behind the persistent-storage Cargo feature, or in-memory storage.
PostgreSQL 16 stores operational data for the 190+ microservices (source: deploy/hetzner/docker-compose.validator.yml).
The mode is selected at startup by the LEDGER_DATA_DIR environment variable: if it is set and the persistent-storage Cargo feature is enabled, RocksDB is used; otherwise, in-memory storage is the default.
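That selection rule can be sketched as a small pure function. This is illustrative only: the real code in storage.rs checks the feature at compile time via `#[cfg(feature = "persistent-storage")]`, and the helper name here is hypothetical.

```rust
// Hypothetical sketch of the startup storage-mode decision described above.
// Both inputs are plain values here so the rule is easy to follow and test;
// in the real code the feature flag is a compile-time cfg, not a bool.

#[derive(Debug, PartialEq)]
pub enum StorageMode {
    RocksDb { data_dir: String },
    InMemory,
}

pub fn select_storage_mode(
    ledger_data_dir: Option<&str>, // LEDGER_DATA_DIR
    persistent_feature: bool,      // `persistent-storage` Cargo feature
) -> StorageMode {
    match ledger_data_dir {
        Some(dir) if persistent_feature => StorageMode::RocksDb {
            data_dir: dir.to_string(),
        },
        _ => StorageMode::InMemory, // default
    }
}

fn main() {
    let dir = std::env::var("LEDGER_DATA_DIR").ok();
    println!("{:?}", select_storage_mode(dir.as_deref(), true));
}
```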
Storage Architecture Data Flow:
Transactions --> ABCI Layer --> RocksDB (Block + State)
     |               |                  |
     |               v                  v
     |         Pruning Engine    Persistent Disk
     |      (100-block window)    (NVMe / SSD)
     |
     v
Services Layer --> PostgreSQL 16
(190+ services)    (Tx records, audit, compliance)
Current production configuration from l1/jil5600-core/src/storage.rs:153-160. RocksDB provides LSM-tree based storage with LZ4 compression, tuned for the write-heavy workload of block production.
| Setting | Value | Notes |
|---|---|---|
| Compression | LZ4 | Fast, ~3x compression ratio |
| Write Buffer Size | 64 MB | Per-buffer (3 buffers max = 192 MB in-flight) |
| Max Write Buffers | 3 | Standard for write-heavy workloads |
| Target File Size | 64 MB | L1 SST file size |
| Create If Missing | true | Auto-create on first run |
| Column Families | Default only | No separation of blocks/state/indexes |
| Bloom Filters | Not configured | Increases read amplification at scale |
| Block Cache | Default (8 MB) | Should increase for multi-TB databases |
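Expressed against the rust-rocksdb crate's Options API, the configuration above looks roughly like this. This is a sketch, not the literal code from storage.rs:153-160; the method names are the crate's, the values are the production ones from the table, and exact signatures vary slightly between crate versions.

```rust
// Sketch of the current production tuning from the table above,
// written against the `rocksdb` crate (not the literal storage.rs code).
use rocksdb::{DBCompressionType, Options, DB};

fn open_ledger(path: &str) -> Result<DB, rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);                      // auto-create on first run
    opts.set_compression_type(DBCompressionType::Lz4); // fast, ~3x ratio
    opts.set_write_buffer_size(64 * 1024 * 1024);      // 64 MB memtable
    opts.set_max_write_buffer_number(3);               // up to 192 MB in flight
    opts.set_target_file_size_base(64 * 1024 * 1024);  // 64 MB SST files
    DB::open(&opts, path) // single (default) column family, no bloom filters
}
```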
RocksDB stores data using a simple key-value pattern optimized for fast account lookups and full state snapshots.
| Key Pattern | Value | Purpose |
|---|---|---|
| `__ledger_state__` | Full State JSON | Complete state snapshot |
| `account:{address}` | Account JSON | Per-account fast lookup |
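Key construction for this pattern is straightforward; a sketch (the helper names are illustrative, not the actual functions in storage.rs):

```rust
// Sketch of the two key patterns above.

pub const LEDGER_STATE_KEY: &[u8] = b"__ledger_state__";

// Per-account key: "account:" followed by the bech32-style address.
pub fn account_key(address: &str) -> Vec<u8> {
    format!("account:{}", address).into_bytes()
}

fn main() {
    let key = account_key("jil1treasury");
    println!("{}", String::from_utf8_lossy(&key)); // account:jil1treasury
}
```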
| Setting | Value | Source |
|---|---|---|
| Keep Recent Blocks | 100 | l1/jil-abci/src/app.rs:73 |
| Pruning Interval | Every 10 blocks | l1/jil-abci/src/app.rs:74 |
| Validator Persist Interval | Every 100 blocks | JIL_PERSIST_INTERVAL env var |
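The pruning rule in the table can be sketched as: every 10th block, drop everything older than the most recent 100 blocks. A hypothetical helper (the real logic lives in l1/jil-abci/src/app.rs):

```rust
// Sketch of the 100-block pruning window applied every 10 blocks.

pub const KEEP_RECENT: u64 = 100;
pub const PRUNE_INTERVAL: u64 = 10;

/// Heights that may be deleted once `current_height` is committed.
/// `oldest_stored` is the lowest block height still on disk.
pub fn prune_targets(current_height: u64, oldest_stored: u64) -> Vec<u64> {
    if current_height % PRUNE_INTERVAL != 0 || current_height <= KEEP_RECENT {
        return Vec::new(); // not a pruning block, or nothing old enough yet
    }
    let cutoff = current_height - KEEP_RECENT; // keep [cutoff, current_height]
    (oldest_stored..cutoff).collect()
}

fn main() {
    // At height 250 with block 140 still the oldest on disk,
    // blocks 140..=149 fall outside the 100-block window.
    println!("{:?}", prune_targets(250, 140));
}
```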
The 10B JIL coins are pre-minted in genesis across 7 wallets. Genesis block size is approximately 50 KB. Each account balance is a u128 number (16 bytes) stored once per account.
| Account | Allocation | Percentage |
|---|---|---|
| jil1founders | 1,500,000,000 JIL | 15% |
| jil1treasury | 3,000,000,000 JIL | 30% |
| jil1operations | 2,000,000,000 JIL | 20% |
| jil1publicsale | 1,000,000,000 JIL | 10% |
| jil1validators | 1,000,000,000 JIL | 10% |
| jil1ecosystem | 1,000,000,000 JIL | 10% |
| jil1reserve | 500,000,000 JIL | 5% |
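The seven allocations sum exactly to the 10B genesis supply, which is easy to verify:

```rust
// Sanity-check that the seven genesis allocations sum to the 10B JIL supply.
// Amounts are in whole JIL; on-chain balances are u128.

pub const GENESIS_ALLOCATIONS: [(&str, u128); 7] = [
    ("jil1founders", 1_500_000_000),
    ("jil1treasury", 3_000_000_000),
    ("jil1operations", 2_000_000_000),
    ("jil1publicsale", 1_000_000_000),
    ("jil1validators", 1_000_000_000),
    ("jil1ecosystem", 1_000_000_000),
    ("jil1reserve", 500_000_000),
];

pub fn total_supply() -> u128 {
    GENESIS_ALLOCATIONS.iter().map(|(_, amt)| amt).sum()
}

fn main() {
    println!("total: {} JIL", total_supply()); // 10,000,000,000
}
```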
A typical JIL transfer transaction serialized as JSON totals approximately 300 bytes. Binary serialization (protobuf/bincode) would reduce this to ~170 bytes (~1.8x savings).
| Component | Bytes |
|---|---|
| Sender address | 32 |
| Receiver address | 32 |
| Amount (u128) | 16 |
| Nonce (u64) | 8 |
| Signature (Ed25519) | 64 |
| Zone ID | ~12 |
| Gas/metadata | ~50 |
| JSON overhead | ~80 |
| Total (JSON-serialized) | ~300 bytes |
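Summing the per-field estimates confirms the headline figure (294 bytes, rounded to ~300 in the table):

```rust
// Sum the per-field estimates above to confirm the ~300-byte JSON figure.

pub const TX_FIELDS: [(&str, u64); 8] = [
    ("sender", 32),
    ("receiver", 32),
    ("amount_u128", 16),
    ("nonce_u64", 8),
    ("signature_ed25519", 64),
    ("zone_id", 12),
    ("gas_metadata", 50),
    ("json_overhead", 80),
];

pub fn tx_size_json() -> u64 {
    TX_FIELDS.iter().map(|(_, bytes)| bytes).sum() // 294, i.e. ~300 bytes
}

fn main() {
    println!("JSON tx: ~{} bytes", tx_size_json());
}
```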
| Scenario | Avg Tx/Block | Avg Block Size | Daily Growth | Annual Growth |
|---|---|---|---|---|
| Idle (validators only) | 0-1 | ~500 bytes | ~28 MB | ~10 GB |
| Light (early TestNet) | 50 | ~15 KB | ~865 MB | ~310 GB |
| Moderate (production) | 500 | ~150 KB | ~8.6 GB | ~3 TB |
| Heavy (peak load) | 2,000 | ~600 KB | ~34.6 GB | ~12 TB |
| Max capacity | 10,000 | ~3 MB | ~173 GB | ~60 TB |
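Each daily-growth figure is just avg tx/block x ~300 bytes x 57,600 blocks/day; the table's values agree with this to within rounding:

```rust
// Reproduce the growth scenarios: daily bytes = avg tx/block x ~300 B x 57,600 blocks.

const BLOCKS_PER_DAY: u64 = 57_600;
const AVG_TX_BYTES: u64 = 300;

pub fn daily_growth_bytes(avg_txs_per_block: u64) -> u64 {
    avg_txs_per_block * AVG_TX_BYTES * BLOCKS_PER_DAY
}

fn main() {
    for (name, txs) in [("light", 50u64), ("moderate", 500), ("heavy", 2_000), ("max", 10_000)] {
        println!("{:8} {:>7.2} GB/day", name, daily_growth_bytes(txs) as f64 / 1e9);
    }
}
```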
LZ4 compression reduces storage requirements by approximately 3x across all scenarios, making multi-year operation feasible on standard hardware.
| Scenario | Raw Annual | Compressed Annual |
|---|---|---|
| Idle | 10 GB | ~3.5 GB |
| Light | 310 GB | ~105 GB |
| Moderate | 3 TB | ~1 TB |
| Heavy | 12 TB | ~4 TB |
| Max capacity | 60 TB | ~20 TB |
| Scenario | Pruned Hot State |
|---|---|
| Idle | ~50 KB |
| Light | ~1.5 MB |
| Moderate | ~15 MB |
| Heavy | ~60 MB |
| Max capacity | ~300 MB |
Storage Comparison (Moderate Load, 1 Year):
Raw blocks: ████████████████████████████████████ 3.0 TB
LZ4 compressed: ████████████ 1.0 TB
Pruned (100 blk): . 15 MB
The state database holds all account balances, separate from block history. State grows linearly with the number of unique accounts on the network, not with transaction volume.
| Accounts | State Size (JSON) | State Size (Binary) |
|---|---|---|
| 1,000 | ~500 KB | ~170 KB |
| 10,000 | ~5 MB | ~1.7 MB |
| 100,000 | ~50 MB | ~17 MB |
| 1,000,000 | ~500 MB | ~170 MB |
| 10,000,000 | ~5 GB | ~1.7 GB |
| 100,000,000 | ~50 GB | ~17 GB |
| Accounts | State Size | Working Set |
|---|---|---|
| 1K-10K | 500 KB - 5 MB | Fits entirely in L2 cache |
| 100K-1M | 50 MB - 500 MB | Fits in validator RAM |
| 10M-100M | 5 GB - 50 GB | Requires optimized caching |
RocksDB is designed for multi-terabyte workloads and powers some of the largest storage systems in production today.
| System | Typical RocksDB Size | Use Case |
|---|---|---|
| Facebook (Meta) | Petabytes | Social graph, messaging |
| CockroachDB | Multi-TB per node | Distributed SQL |
| TiKV (TiDB) | Multi-TB per node | Distributed KV |
| Ethereum (geth) | ~1 TB (full node) | Blockchain state |
| Solana | ~500 GB+ | Blockchain accounts |
| Setting | Current | Recommended (>1 TB) |
|---|---|---|
| Write Buffer | 64 MB | 128 MB |
| Max Write Buffers | 3 | 4-6 |
| Block Cache | 8 MB (default) | 2-4 GB |
| Bloom Filters | None | 10-bit per key |
| Column Families | Single | 3 (blocks, state, indexes) |
| Compression | LZ4 (all levels) | LZ4 (L0-L1), ZSTD (L2+) |
| Compaction Style | Leveled (default) | Leveled (no change needed) |
| Max Background Jobs | Default (2) | 4-8 |
| Target File Size | 64 MB | 128 MB |
Five key bottlenecks have been identified in the current configuration, each with a clear mitigation path for production scale.
| Bottleneck | Impact | Mitigation |
|---|---|---|
| JSON serialization | 3-5x storage overhead, slow encode/decode | Migrate to bincode/protobuf |
| No bloom filters | Read amplification at >100 GB | Add 10-bit bloom filters |
| Single column family | All data competes for cache | Separate blocks/state/indexes |
| Small block cache | Excessive disk reads at >10 GB | Increase to 1-4 GB |
| Full state key | Entire state read/write on each block | Use per-account keys only |
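The JSON-serialization bottleneck is the clearest win. As a sketch of the mitigation, a hand-rolled fixed-width encoding of the core transfer fields lands near the ~170-byte binary estimate from earlier (field names and layout are illustrative; the actual fix would use bincode or protobuf):

```rust
// Sketch of the "migrate to binary serialization" mitigation: a fixed-width
// encoding of the core transfer fields. Illustrative only, not the JIL wire format.

pub struct Transfer {
    pub sender: [u8; 32],
    pub receiver: [u8; 32],
    pub amount: u128,
    pub nonce: u64,
    pub signature: [u8; 64],
}

pub fn encode(tx: &Transfer) -> Vec<u8> {
    let mut out = Vec::with_capacity(152);
    out.extend_from_slice(&tx.sender);
    out.extend_from_slice(&tx.receiver);
    out.extend_from_slice(&tx.amount.to_le_bytes());
    out.extend_from_slice(&tx.nonce.to_le_bytes());
    out.extend_from_slice(&tx.signature);
    out
}

fn main() {
    let tx = Transfer {
        sender: [1; 32],
        receiver: [2; 32],
        amount: 1_000,
        nonce: 7,
        signature: [0; 64],
    };
    // 32 + 32 + 16 + 8 + 64 = 152 bytes for the core fields, versus
    // ~300 bytes for the same transaction serialized as JSON.
    println!("encoded: {} bytes", encode(&tx).len());
}
```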
Recommended Column Family Layout:
RocksDB Instance
├── CF: "blocks" - Block headers + transaction data
│       Key: block:{height}
│       Compression: LZ4 (L0-L1), ZSTD (L2+)
│       Cache: 512 MB
│
├── CF: "state" - Account balances, contract state
│       Key: account:{address}
│       Compression: LZ4
│       Cache: 2 GB (hot accounts)
│       Bloom: 10-bit
│
└── CF: "indexes" - Tx-by-hash, tx-by-sender lookups
        Key: idx:{type}:{value}
        Compression: ZSTD
        Cache: 512 MB
        Bloom: 10-bit
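With the rust-rocksdb crate, this layout could be opened roughly as follows. This is a sketch, not code from the repository; `Cache::new_lru_cache` and the bloom-filter setter vary a little across crate versions.

```rust
// Sketch: open the three recommended column families with per-CF tuning.
use rocksdb::{
    BlockBasedOptions, Cache, ColumnFamilyDescriptor, DBCompressionType, Options, DB,
};

fn cf(
    name: &str,
    cache_bytes: usize,
    bloom_bits: Option<f64>,
    compression: DBCompressionType,
) -> ColumnFamilyDescriptor {
    let mut table = BlockBasedOptions::default();
    table.set_block_cache(&Cache::new_lru_cache(cache_bytes));
    if let Some(bits) = bloom_bits {
        table.set_bloom_filter(bits, false); // e.g. 10 bits per key
    }
    let mut opts = Options::default();
    opts.set_block_based_table_factory(&table);
    opts.set_compression_type(compression);
    ColumnFamilyDescriptor::new(name, opts)
}

fn open_tuned(path: &str) -> Result<DB, rocksdb::Error> {
    let mut db_opts = Options::default();
    db_opts.create_if_missing(true);
    db_opts.create_missing_column_families(true);
    let cfs = vec![
        cf("blocks", 512 << 20, None, DBCompressionType::Lz4),        // block:{height}
        cf("state", 2 << 30, Some(10.0), DBCompressionType::Lz4),     // account:{address}
        cf("indexes", 512 << 20, Some(10.0), DBCompressionType::Zstd), // idx:{type}:{value}
    ];
    DB::open_cf_descriptors(&db_opts, path, cfs)
}
```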
PostgreSQL stores operational data for the 190+ microservices, not chain state. Key high-volume tables require partitioning and retention policies for sustainable growth.
| Table | Growth Rate (Moderate) | 1yr Size | Mitigation |
|---|---|---|---|
| transactions | ~500K rows/day | ~50 GB | Time-based partitioning |
| settlement_events | ~100K rows/day | ~10 GB | Standard indexes |
| qb_routing_log | ~5M rows/day | ~400 GB | Partitioning + 90-day retention |
| audit_log | ~1M rows/day | ~80 GB | Partitioning + 180-day retention |
| health_checks | ~500K rows/day | ~30 GB | 30-day retention |
| Metric | PostgreSQL Limit | JIL Requirement |
|---|---|---|
| Max table size | 32 TB | <500 GB (with partitioning) |
| Max row count | Unlimited | ~2B rows/year |
| Max database size | Unlimited | <1 TB |
| Max connections | ~5,000 | 82 services x PG_POOL_MAX |
Monthly partitions on all high-volume tables. Enables fast partition drops for retention enforcement and parallel query execution.
Drop partitions older than retention window: 30 days for health checks, 90 days for routing logs, 180 days for audit logs.
PgBouncer in front of PostgreSQL to multiplex 82+ service connections into a smaller pool of actual database connections.
Streaming replicas for ops-dashboard and analytics queries, keeping the primary focused on write-heavy service workloads.
Set autovacuum_vacuum_scale_factor = 0.01 on large tables to trigger vacuuming more frequently. The default (0.2) means a 100M-row table would accumulate 20M dead tuples before vacuum kicks in.
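The monthly-partitioning mitigation can be sketched as a small DDL generator. This is a hypothetical helper, not repository code, and the table and column layout are illustrative:

```rust
// Hypothetical helper that emits monthly-partition DDL for one of the
// high-volume tables (range-partitioned by a timestamp column).

pub fn monthly_partition_ddl(table: &str, year: u32, month: u32) -> String {
    // The upper bound of a PostgreSQL range partition is exclusive,
    // so the partition for December ends at January 1st of the next year.
    let (next_year, next_month) = if month == 12 { (year + 1, 1) } else { (year, month + 1) };
    format!(
        "CREATE TABLE {table}_{year}_{month:02} PARTITION OF {table} \
         FOR VALUES FROM ('{year}-{month:02}-01') TO ('{next_year}-{next_month:02}-01');"
    )
}

fn main() {
    println!("{}", monthly_partition_ddl("transactions", 2025, 12));
}
```

Retention then becomes a cheap `DROP TABLE transactions_2025_12` on the expired partition instead of a bulk `DELETE`.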
Pruned validators store only the most recent 100 blocks, making them extremely lightweight. These specs are suitable for both TestNet and early MainNet participation.
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8 cores |
| RAM | 8 GB | 16 GB |
| Storage | 100 GB SSD | 250 GB NVMe |
| Network | 100 Mbps | 1 Gbps |
Archive nodes retain the full block history without pruning. Storage requirements grow linearly with chain age and transaction volume.
| Component | Year 1 (Moderate) | Year 3 (Moderate) |
|---|---|---|
| CPU | 8 cores | 16 cores |
| RAM | 16 GB | 32 GB |
| Storage | 1 TB NVMe | 3 TB NVMe |
| Network | 1 Gbps | 1 Gbps |
| Spec | Value | Sufficient? |
|---|---|---|
| CPU | 4-core AMD EPYC-Genoa | Yes (TestNet) |
| RAM | 7.6 GiB | Yes (pruned validator) |
| Storage | 150 GB SSD (3% used) | Yes (~145 GB free, 1+ year headroom) |
| OS | Ubuntu 24.04.3 LTS | Yes |
| Docker | 29.2.1 | Yes |
| Containers | 13 per node | Yes |
4-8 cores, 8-16 GB RAM, 100-250 GB SSD. Stores only 100 most recent blocks. Ideal for consensus participation without full archival.
8-16 cores, 16-32 GB RAM, 1-3 TB NVMe. Retains complete block history. Required for block explorers, analytics, and historical queries.
This analysis confirms that the JIL-5600 blockchain architecture is well-suited for its target workloads, with clear optimization paths for scaling beyond initial deployment.
| Question | Answer |
|---|---|
| How big is the blockchain at 10B coins? | The coins are just numbers; genesis is ~50 KB |
| What drives storage growth? | Transaction volume, not coin count |
| Can RocksDB handle it? | Yes, up to multi-TB with config tuning |
| Can PostgreSQL handle it? | Yes, with table partitioning on large tables |
| Are current Hetzner servers sufficient? | Yes, for 1+ years as pruned validators |
| What needs optimization for MainNet? | Binary serialization, bloom filters, column families, PG partitioning |
Document generated from JIL-5600 L1 source code analysis.
l1/genesis.json - Chain parameters and genesis allocation
l1/jil5600-core/src/storage.rs - RocksDB configuration and key structure
l1/jil-abci/src/app.rs - ABCI layer, pruning, and gas limits
l1/jil-validator-node-v2/src/chain.rs - Validator persistence settings