Technical Architecture

Parallel Settlement Pipeline

Fan-out/collect attestation architecture via RedPanda. Gate-based verdicts. Target: 1M settlements/min.

| Metric | Value |
|---|---|
| Target throughput | 1M/min |
| Settlements per second | ~16,667/s |
| Partitions per topic | 32 |
| Aggregation timeout | <5s |
| Parallel attestation services | 6 |
| Max concurrent in-flight | ~83K |
This is NOT BFT consensus. Block validators use Byzantine Fault Tolerance: 70% of validators must agree (a 14-of-20 quorum). Settlement attestation is a gate, not a vote. Every required check must pass. A single "denied" from any service kills the settlement immediately. There is no threshold, no majority, no quorum. The model is fail-closed: if a service is unreachable, the verdict is "Review" (hold), never "Yes".
Gate logic:

- ANY check returns "denied" = verdict No
- ANY check returns "hold" = verdict Review
- ALL checks return "approved" = verdict Yes
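The gate rules above can be sketched in a few lines. This is an illustrative TypeScript sketch, not the production code; the names `resolveVerdict`, `CheckResult`, and `Verdict` are assumptions.

```typescript
type CheckResult = "approved" | "denied" | "hold";
type Verdict = "Yes" | "No" | "Review";

function resolveVerdict(results: Map<string, CheckResult>, required: string[]): Verdict {
  // Fail-closed: a missing result counts as "hold", never as approval.
  const checks = required.map((name) => results.get(name) ?? "hold");
  if (checks.includes("denied")) return "No";   // ANY denial kills the settlement
  if (checks.includes("hold")) return "Review"; // ANY hold/timeout -> manual review
  return "Yes";                                 // ALL required checks approved
}
```

Note that denial is checked first: a settlement with one "denied" and one missing check resolves to "No", not "Review".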

Architecture

[Diagram: Parallel Settlement Pipeline — fan-out/collect via RedPanda, gate model (not BFT).]

Flow: a settlement request (HTTP POST or RedPanda topic) reaches the Settlement Aggregator, which fans out in parallel to six attestation topics — jil.attest.sanctions, jil.attest.risk, jil.attest.credentials, jil.attest.policy, jil.attest.ownership, jil.attest.bank — consumed by Sanctions Screening Cache, Risk Scoring Attest, Credential Registry, Policy Decision API, Ownership Verification (optional), and Bank Attestation Ingestion (optional). All services publish to jil.attest.results with a correlation ID. The aggregator collects results by settlement_id in an in-memory map (~83K concurrent, 5s timeout), resolves the verdict (Yes = ALL approved, No = ANY denied, Review = ANY hold/timeout), and publishes to jil.settlement.verdicts, where the Settlement Consumer executes or rejects.

How It Works

  1. Request arrives - Settlement request enters via HTTP POST to /api/v1/settle or via RedPanda topic jil.attest.requests
  2. Fan-out - Aggregator publishes the request to N attestation topics simultaneously. Each service gets the same settlement_id (correlation ID) and processes independently.
  3. Parallel processing - Each attestation service (sanctions, risk, credentials, policy, ownership, bank) evaluates the settlement against its domain. No service waits for another.
  4. Result collection - Each service publishes its result to jil.attest.results with the correlation ID. The aggregator collects results in an in-memory map.
  5. Verdict resolution - When all required checks complete (or the 5s timeout fires), the aggregator computes the verdict using gate logic and publishes it to jil.settlement.verdicts.
  6. Execution - Settlement consumer reads the verdict. Yes = execute. No = reject with reason codes. Review = hold for manual approval.
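Steps 4-5 can be sketched as a collect-with-timeout loop. This is a minimal illustrative sketch, not the actual aggregator: the class name, method names, and short-circuit-on-denial behavior are assumptions layered on the fan-out/collect description above.

```typescript
type CheckResult = "approved" | "denied" | "hold";
type Verdict = "Yes" | "No" | "Review";

interface Pending {
  results: Map<string, CheckResult>;
  timer: ReturnType<typeof setTimeout>;
  resolve: (v: Verdict) => void;
}

class Aggregator {
  private pending = new Map<string, Pending>();

  constructor(private required: string[], private timeoutMs = 5000) {}

  // Called once per settlement at fan-out time; resolves with the final verdict.
  track(settlementId: string): Promise<Verdict> {
    return new Promise((resolve) => {
      const timer = setTimeout(() => this.finish(settlementId), this.timeoutMs);
      this.pending.set(settlementId, { results: new Map(), timer, resolve });
    });
  }

  // Called for every message consumed from jil.attest.results.
  onResult(settlementId: string, service: string, result: CheckResult): void {
    const p = this.pending.get(settlementId);
    if (!p) return; // late result after the verdict was already published
    p.results.set(service, result); // duplicates simply overwrite (idempotent)
    const complete = this.required.every((s) => p.results.has(s));
    if (complete || result === "denied") this.finish(settlementId);
  }

  private finish(settlementId: string): void {
    const p = this.pending.get(settlementId);
    if (!p) return;
    clearTimeout(p.timer);
    this.pending.delete(settlementId);
    // Fail-closed gate: missing checks count as "hold".
    const checks = this.required.map((s) => p.results.get(s) ?? "hold");
    if (checks.includes("denied")) p.resolve("No");
    else if (checks.includes("hold")) p.resolve("Review");
    else p.resolve("Yes");
  }
}
```

A timed-out settlement resolves to "Review" because its missing checks default to "hold" when the timer fires.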

Gate Model vs. BFT Consensus

| Dimension | BFT Validators (Block Consensus) | Settlement Pipeline (Gate) |
|---|---|---|
| Model | Vote: 70% quorum (14-of-20) | Gate: ALL must pass |
| Single failure | Tolerated (up to 6 of 20) | Kills the settlement (denied = No) |
| Timeout behavior | Quorum still achievable | Hold verdict (fail-closed) |
| Participants | 20 geographically distributed validators | 4-6 attestation services (same datacenter) |
| Latency tolerance | Seconds (cross-continent) | Milliseconds (same Docker network) |
| Independence | Each validator runs a full node | Each service covers a single domain |
| Adversary model | Byzantine (malicious nodes) | Operational (service failure/timeout) |
| Purpose | Agree on block state | Verify settlement conditions |

Topic Architecture

| Topic | Direction | Producer | Consumer | Partitions |
|---|---|---|---|---|
| jil.attest.requests | Inbound | Clients / Wallet API | Aggregator | 32 |
| jil.attest.sanctions | Fan-out | Aggregator | Sanctions Screening Cache | 32 |
| jil.attest.risk | Fan-out | Aggregator | Risk Scoring Attest | 32 |
| jil.attest.credentials | Fan-out | Aggregator | Credential Registry | 32 |
| jil.attest.policy | Fan-out | Aggregator | Policy Decision API | 32 |
| jil.attest.ownership | Fan-out | Aggregator | Ownership Verification | 32 |
| jil.attest.bank | Fan-out | Aggregator | Bank Attestation Ingestion | 32 |
| jil.attest.results | Collect | All attestation services | Aggregator | 32 |
| jil.settlement.verdicts | Output | Aggregator | Settlement Consumer | 32 |
| jil.attest.dlq | Dead letter | Aggregator | Ops monitoring | 8 |

Throughput Engineering

| Parameter | Value | Rationale |
|---|---|---|
| Target throughput | 1,000,000/min | 16,667 settlements/second sustained |
| Partitions per topic | 32 | Each partition handles ~520 msgs/sec |
| Fan-out multiplier | 4-6x | Each settlement produces 4-6 fan-out messages |
| Total RedPanda throughput | ~100K msgs/sec | 16.7K requests x 6 fan-outs = ~100K messages/sec across all topics |
| RedPanda capacity | Millions/sec | Well within RedPanda single-node limits |
| Aggregation timeout | 5,000 ms | Max concurrent: 16,667/sec x 5s = ~83K in-flight |
| Memory per settlement | ~1 KB | Request + partial results = ~83 MB total RAM |
| DB write rate | 16,667 verdicts/sec | Async batch insert; connection pool of 50 |

Horizontal Scaling

The pipeline scales horizontally at every layer:

- Aggregator: additional instances join the consumer group, and RedPanda rebalances the 32 partitions across them.
- Attestation services: each service scales independently as its own consumer group on its fan-out topic.
- RedPanda: partition counts can be raised per topic if a partition approaches its ~520 msgs/sec share.

Failure Modes

| Failure | Behavior | Recovery |
|---|---|---|
| Attestation service down | Timeout fires; verdict = Review | Manual review queue, service restart |
| Attestation service slow | Timeout fires for the missing check only | Partial results + hold verdict |
| RedPanda partition offline | Consumers rebalance to healthy partitions | RedPanda self-heals; messages replayed |
| Aggregator crash | In-flight settlements lost (in-memory) | Clients re-submit; Kafka offsets preserve position |
| Database unreachable | Verdicts still published to Kafka topic | Backfill from Kafka when DB recovers |
| Duplicate result | Map.set() overwrites; idempotent | No impact; last result wins |
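Why duplicate results are harmless: JavaScript's `Map.set()` on an existing key overwrites the value, so a redelivered result message leaves exactly one entry per service and the verdict is unchanged. A minimal illustration:

```typescript
const results = new Map<string, string>();
results.set("risk", "approved");
results.set("risk", "approved"); // redelivered duplicate from jil.attest.results
console.log(results.size);       // 1 — last write wins
```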

API Reference

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/settle | Submit settlement for attestation. Returns 202 with settlement_id. |
| GET | /api/v1/settle/:id | Poll verdict status. Returns in-flight state or final verdict. |
| GET | /api/v1/verdicts | List recent verdicts. Filterable by verdict type. Paginated. |
| GET | /api/v1/pipeline | Pipeline status, throughput metrics, topic configuration. |
| GET | /health | Health check with DB + pending aggregation count. |
| GET | /metrics | Prometheus metrics (requests, verdicts, latency, errors). |
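A hypothetical client sketch against the submit/poll endpoints above. Only the paths and the 202 + settlement_id contract come from the table; the base URL, payload shape, response field names, and polling interval are assumptions.

```typescript
interface SubmitResponse { settlement_id: string; }
interface VerdictResponse { verdict?: "Yes" | "No" | "Review"; state?: string; }

// Build the submit or poll URL for the settlement API (paths from the table above).
function settleUrl(base: string, id?: string): string {
  return id ? `${base}/api/v1/settle/${id}` : `${base}/api/v1/settle`;
}

async function submitAndPoll(base: string, payload: unknown): Promise<VerdictResponse> {
  const res = await fetch(settleUrl(base), {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (res.status !== 202) throw new Error(`submit failed: ${res.status}`);
  const { settlement_id } = (await res.json()) as SubmitResponse;
  // Naive fixed-interval poll until the gate publishes a final verdict.
  for (;;) {
    const poll = await fetch(settleUrl(base, settlement_id));
    const body = (await poll.json()) as VerdictResponse;
    if (body.verdict) return body;
    await new Promise((r) => setTimeout(r, 250));
  }
}
```

In practice a caller would cap the polling loop (the aggregator resolves within the 5s timeout, so a few seconds of polling suffices) or consume jil.settlement.verdicts directly instead of polling.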

Service: settlement-aggregator
Dependencies: RedPanda, PostgreSQL, attestation services
Source: services/settlement-aggregator/