Architecture Assurance

Data Architecture

Zero customer data at rest in JIL infrastructure. Customer-owned AWS S3 holds payloads. A 14-of-20 BFT L1 across 13 jurisdictions holds the audit record. We process; we don't custody.

  • Funds - JIL never custodies funds.
  • Keys - JIL never holds your keys.
  • Data - JIL never stores customer data at rest.

The structural claim

Most platforms make compliance promises through process - controls, audits, training. JIL makes its core data-handling promises structural - built into the architecture so that violating them would require breaking the system, not just breaking a policy.

JIL never custodies funds. JIL never holds your keys. JIL never stores customer data at rest. The audit record lives on a 14-of-20 validator L1 across 13 jurisdictions - not on a JIL database we could lose, alter, or be compelled to surrender.

This page documents how each of those claims is enforced architecturally.

What lives where

The architecture splits responsibilities cleanly across customer-owned storage, JIL processing infrastructure, and the L1 blockchain. Customer payloads never enter JIL's persistent storage.

Layer | Where it lives | JIL data at rest?
CREB™ receipts (full payload) | Customer's AWS S3 (their bucket, their KMS, their region) | No
Result-layer tables (T1, T2, T3, AVA™ outputs, vertical-engine outputs, settlement attestations) | Customer's AWS S3 as Apache Iceberg tables | No
Attestation audit record (hash + timestamp + 14-of-20 BFT signatures + engine version + bucket-URI hash) | L1 blockchain (jil5600-core, 20 validators, CourtChain™) | Hash only, no payload
Snowflake share metadata (schema definitions, secure-share permissions) | JIL's Snowflake account | Schema only, no payload
Operational audit log (who did what, when, against which engagement-ID hash) | JIL Postgres, Merkle-chained on insert | References and hashes only, PHI-free by construction
Working state (engagement queue, orchestration, billing, auth, tenant config, ops dashboards) | JIL Postgres | Transient, PHI-free
In-flight processing (T1/T2/T3 evaluation, AVA™ inference, vertical-engine evaluation) | JIL service memory only | Milliseconds to seconds, never written to disk
MPC key shards (cryptographic material protecting validator and bridge operations) | HSM, AES-256-GCM encrypted; user retains 1 of 3 shards | Yes - cryptographic material, never user evidence

How a customer engagement actually flows

The data-flow contract is identical across every JIL vertical - Attestyx grants integrity, customer-portal AVE/AVA™ substantiation, retail-v2 consumer evidence, payments attestation, and the eight industry-vertical engines all use the same path.

  • Customer onboarding gate. Before any engagement runs, the customer creates an AWS S3 bucket in their AWS account and grants JIL a write-only cross-account IAM role. JIL runs a dry-run write check and only activates the tenant after the bucket is verified accessible.
  • Evidence ingest. The customer uploads or streams evidence into JIL's processing pipeline. Data enters JIL service memory only; no JIL persistence layer writes the payload to disk.
  • Engine evaluation. The relevant engine (Verdict Engine for Attestyx, AVE for customer-portal, etc.) evaluates the evidence in memory. Results are produced in memory.
  • Stream-write to customer S3. Result rows are streamed to the customer's S3 bucket as Apache Iceberg tables via Snowpipe Streaming or direct S3 PUT. Idempotency is enforced by an offset token at the streaming client.
  • L1 anchor commit. The CREB™ hash, customer-S3 URI hash, engine version, check IDs evaluated, and a 14-of-20 BFT signature set are committed to the L1 chain. The L1 record contains no payload - only references and signatures.
  • Customer query. The customer's own Snowflake account reads its own bucket via Iceberg external tables. The customer's BI stack (Tableau, Power BI, Sigma, Looker) connects natively. The customer pays Snowflake directly; JIL bills nothing for their compute.
  • Pipeline state cleared. Once the L1 anchor is committed, JIL's processing memory is freed. No retained payload.
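The seven steps above can be sketched in miniature. This is an illustrative stand-in, not JIL's implementation: the engine, the Snowpipe Streaming client, and the L1 client are not public, so a plain dict plays the customer's bucket and the function names are invented. What it demonstrates is the contract itself - evaluation happens in memory, results land in customer-owned storage, and the anchor that JIL keeps carries hashes only.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def run_engagement(payload: bytes, bucket_uri: str, customer_bucket: dict,
                   engine_version: str = "verdict-engine/0.0") -> dict:
    """Illustrative pipeline sketch: evaluate in memory, write results to
    customer-owned storage, return an anchor that carries hashes only.
    (engine_version default is a hypothetical placeholder.)"""
    # Step 3: engine evaluation happens entirely in service memory.
    result_rows = [{"check_id": cid, "passed": True} for cid in ("T1", "T2", "T3")]

    # Step 4: stream-write results and the CREB bundle to the customer's
    # store (a dict stands in for S3/Iceberg here).
    customer_bucket["results"] = result_rows
    customer_bucket["creb_bundle"] = payload

    # Step 5: the L1 anchor holds references and hashes, never the payload.
    anchor = {
        "creb_hash": sha256_hex(payload),
        "bucket_uri_hash": sha256_hex(bucket_uri.encode()),
        "engine_version": engine_version,
        "check_ids": [row["check_id"] for row in result_rows],
    }
    # Step 7: nothing from the payload survives in JIL-held state.
    return anchor
```

Note that the bucket URI is hashed before it enters the anchor, matching the rule that URIs can contain customer identifiers and therefore never appear on chain in plaintext.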

Why subpoena resistance is structural

A defendant facing a JIL-attested case will routinely subpoena every third party connected to the matter, hoping to obtain privileged or confidential material. Under our architecture:

  • What JIL can produce on subpoena. The L1 anchor record - hashes, timestamps, signatures, engine versions. That's it.
  • What JIL cannot produce. Payload content. We don't hold it.
  • What the customer can withhold. Everything in their bucket, under their privilege or confidentiality posture.

The 14-of-20 BFT validator quorum across 13 jurisdictions adds a second layer. Even if a single validator were compelled to surrender records, the audit trail remains independently verifiable from the other validators. There is no single point of compulsion.
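The quorum rule is mechanical: a verifier counts valid validator signatures over the anchor record and accepts only at or above the 14-of-20 threshold. In the sketch below, HMAC-SHA256 stands in for the real BFT signature scheme (which the page does not specify), and validator IDs and keys are invented for illustration.

```python
import hashlib
import hmac

TOTAL_VALIDATORS = 20
QUORUM = 14  # 14-of-20 BFT threshold

def verify_quorum(record_hash: bytes, signatures: dict[str, bytes],
                  validator_keys: dict[str, bytes]) -> bool:
    """Accept an anchor record only if at least QUORUM of the known
    validators produced a valid signature over it. HMAC-SHA256 is a
    stand-in for the actual validator signature scheme."""
    valid = sum(
        1
        for vid, sig in signatures.items()
        if vid in validator_keys and hmac.compare_digest(
            sig, hmac.new(validator_keys[vid], record_hash, hashlib.sha256).digest()
        )
    )
    return valid >= QUORUM
```

Because any 14 of the 20 suffice, compelling (or silencing) a handful of validators changes nothing: the remaining signatures still clear the threshold, which is the "no single point of compulsion" property in code form.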

Self-authentication under FRE 902(14)

Federal Rule of Evidence 902(14), effective December 2017, makes electronic records authenticated by a qualified digital identification process self-authenticating in U.S. federal courts. CREB™ bundles satisfy the rule:

  • Cryptographic hash links the bundle to a specific moment in time.
  • Distributed-validator timestamp, signed by the BFT quorum, establishes when the attestation was made.
  • L1 anchor is tamper-evident; modifying the bundle would invalidate the hash; modifying the chain record would require breaking 14-of-20 BFT consensus.
  • The rule has been adopted in substantially identical form by most state evidence codes.

Practical effect: the bundle is admissible without producing a witness to authenticate it. Opposing counsel can challenge its weight, but cannot challenge its authenticity on record-keeping grounds.
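The tamper-evidence property reduces to a single comparison: recompute the bundle's digest and check it against the hash anchored on L1. A minimal sketch (function name illustrative):

```python
import hashlib

def is_authentic(bundle: bytes, anchored_hash: str) -> bool:
    """Self-authentication check in the FRE 902(14) sense: the bundle
    authenticates itself if its recomputed SHA-256 digest matches the
    hash committed to the L1 anchor record."""
    return hashlib.sha256(bundle).hexdigest() == anchored_hash
```

Changing even one byte of the bundle produces a different digest, so any modification after anchoring is detectable by anyone holding the chain record - no custodian testimony required.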

Compliance posture this enables

The zero-data-at-rest architecture structurally narrows JIL's compliance scope in several frameworks:

Framework | How the architecture narrows scope
HIPAA | PHI never rests on JIL infrastructure; in-flight processing only. BAA scope narrows to processing-in-transit. Breach-reporting surface area drops to "did the pipeline hold a row in memory longer than processing required" - an operational control, not an architectural failure mode.
GDPR | The customer is the data controller for storage at rest (their AWS account, their region, their KMS, their retention policy). JIL is at most a processor under Art. 28 with a sharply scoped Data Processing Addendum.
PCI-DSS | Cardholder data (where applicable) lives in the customer's bucket. JIL's PCI scope is limited to processing-in-flight if the workflow involves card data at all.
ISO 27001 | JIL's certification covers the processing pipeline, not customer storage. Audit surface roughly halves.
Attorney-client privilege (where applicable) | The customer (typically a law firm under our whistleblower-track engagement model) holds underlying material in their bucket under privilege. JIL operates as a SaaS subprocessor under the firm's engagement letter - the same legal posture that covers e-discovery vendors and forensic accountants.

The L1 audit record - what it actually contains

Every attestation produces an anchor record on CourtChain™. The record is purpose-built to be the immutable authoritative reference, with no payload exposure.

  • CREB™ hash. Content-addressed reference to the bundle in the customer's bucket.
  • Customer-S3 URI hash. A hash of the bucket location (not the URI in plaintext, since URIs can contain customer identifiers).
  • Engine version. Exact version of the evaluation engine used. Critical for reproducibility under FRE 902(14) admissibility challenges.
  • Check IDs evaluated. Which of the 175 attestation checks (or vertical-specific checks) were applied to this engagement.
  • Timestamp. The moment the attestation was committed.
  • BFT signature set. 14-of-20 SCN validator signatures across 13 jurisdictions.
  • Engagement-ID hash. Used to link related re-screen events without exposing engagement details.

The record is independently verifiable by an external party with only the CourtChain™ hash and the customer's bucket access. It cannot be modified by JIL, by the customer, or by any third party.
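The seven fields above can be captured as a single immutable record. The sketch below is a documentation aid only - the field names and types are illustrative, not the on-chain wire format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnchorRecord:
    """Illustrative shape of a CourtChain anchor record. Frozen, because
    the real record is immutable once committed to the L1."""
    creb_hash: str                      # content-addressed reference to the bundle
    bucket_uri_hash: str                # hash of the bucket location, never plaintext
    engine_version: str                 # exact evaluation-engine version, for reproducibility
    check_ids: tuple[str, ...]          # which attestation checks were applied
    timestamp: str                      # when the attestation was committed
    bft_signatures: tuple[bytes, ...]   # 14-of-20 validator signature set
    engagement_id_hash: str             # links re-screen events without exposing details
```

Every field is either a hash, a signature, a version string, or a timestamp - there is simply no slot where payload content could live.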

What about the storage we do hold

JIL holds three classes of data on its own infrastructure. None of them is customer payload.

  • Cryptographic material. MPC key shards, validator signing keys, HMAC secrets, HSM-protected material. AES-256-GCM at rest. The user retains one of three shards in the 2-of-3 MPC scheme, so JIL cannot sign without user consent - a cryptographic guarantee, not a policy guarantee.
  • Working state. Engagement queues, orchestration state, billing records, auth/session, customer/tenant configuration, ops dashboards. PHI-free by construction. Transient relative to the engagement's lifetime.
  • Operational audit log. Who-did-what records (Merkle-chained on insert) for SOC 2 evidence. Contains references and hashes; never payload content. Distinct from the L1 attestation audit.

Two kinds of audit, two homes:

  • L1 (attestation audit): proves what was attested to and when. Court-portable. Lives on the chain.
  • Postgres operational audit: proves how JIL handled the engagement internally. SOC 2 evidence. PHI-free.
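"Merkle-chained on insert" means each operational-audit entry's hash covers the previous entry's hash, so editing any row in place breaks every link after it. A minimal sketch, assuming a simple linear hash chain (field names invented for illustration):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel previous-hash for the first entry

def append_entry(log: list, actor: str, action: str,
                 engagement_id_hash: str) -> dict:
    """Append a who-did-what entry whose hash covers the previous
    entry's hash. References and hashes only - never payload content."""
    prev = log[-1]["entry_hash"] if log else GENESIS
    body = {"actor": actor, "action": action,
            "engagement_id_hash": engagement_id_hash, "prev_hash": prev}
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify_chain(log: list) -> bool:
    """Walk the chain; any in-place edit invalidates a link."""
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

An auditor can replay `verify_chain` over the exported log to confirm no entry was altered or dropped after the fact, which is the SOC 2 evidence the section describes.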

Operational guardrails (how the claim stays true)

Zero data at rest is an architectural claim, but it depends on operational discipline to stay true. JIL enforces the claim through:

  • No payload content in debug logs. CI tests verify this on every release.
  • No error-bucket retention of customer rows. Errors that touch payload trigger an exception path that does not persist to disk.
  • No sidecar caching of customer data. No Redis caches of result content; no in-memory caches that persist across requests; no temp files.
  • Per-vertical engine versioning with signed binaries. The L1 anchor records the exact engine version that produced each attestation, signed at build time, for reproducibility.
  • No JIL-employee access to customer S3 buckets without customer consent and audit trail. Bucket access is logged in the customer's bucket access audit, not just JIL's.
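The first guardrail - CI tests that keep payload content out of debug logs - might look like the following release-gate check. This is a hypothetical sketch of such a test, not JIL's actual CI code; the function name and its calling convention are invented:

```python
def assert_logs_payload_free(log_lines: list, payload_samples: list) -> None:
    """Illustrative release-gate check: fail the build if any known test
    payload leaks into captured debug output. Hashes of payloads are
    fine; the content itself must never appear."""
    for line in log_lines:
        for sample in payload_samples:
            text = sample.decode(errors="ignore")
            if text and text in line:
                raise AssertionError(
                    f"payload content leaked into logs: {line!r}")
```

In practice a test like this runs fixture engagements through the pipeline with sentinel payloads, captures all log output, and asserts the sentinels never surface.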

What the architecture does not solve

Honest disclosure: the zero-data-at-rest posture handles storage but does not handle processing. JIL still touches customer data in flight. We narrow that risk by:

  • Processing in service memory only, never to disk.
  • Bounding hold times to the engagement-evaluation duration (typically milliseconds to seconds).
  • Requiring BAA for HIPAA-relevant workflows; processor scope under GDPR Art. 28 in all cases.
  • Audit-logging payload-touching operations by hash (not content) for SOC 2.

If a customer needs absolute zero-touch handling - data never enters JIL infrastructure at all - that is a different product (customer-side evaluation library, executed on their compute against their data, with the L1 anchor commit as the only JIL-touched element). Roadmap, not yet shipped.


Ready to verify?

Bring your own bucket. Run a real engagement. See the L1 anchor record before any commitment.

Request a POC · Talk to us