Product · Feb 21, 2026 · 7 min read

Privacy by architecture: why Burnt stores no personal data

Most verification vendors store copies of documents containing personal data. Burnt's architecture processes data ephemerally and stores only the verification result.

Verification has a privacy problem that most people do not think about. When a landlord asks a tenant applicant to upload a pay stub, the landlord receives far more than proof of income. They receive the applicant's full legal name, home address, Social Security number, employer name and address, tax withholdings, 401k contributions, health insurance deductions, and year-to-date earnings. All of that data is now in the landlord's system — or more precisely, in their verification vendor's system — indefinitely.

This is not a policy failure. It is an architectural one. And fixing it requires rethinking how verification systems are built from the ground up.

Why do traditional verification systems create privacy risk?

Traditional verification follows a collect-then-inspect model. The user uploads a document. The system stores it. A reviewer — human or automated — examines the document and makes a determination. The document stays in the system, often for years, to satisfy audit requirements or "just in case" someone needs to refer back to it.

The privacy problem is inherent to this model. A single pay stub contains at least a dozen categories of personal information. A bank statement contains transaction history that reveals spending patterns, recurring subscriptions, and financial relationships. A tax return contains virtually everything. The vast majority of this data is irrelevant to the verification at hand, but the document-upload model has no mechanism to exclude it.

This creates three compounding risks:

- Breach exposure: every stored document is a bundle of PII waiting to be exfiltrated.
- Regulatory liability: retained personal data carries GDPR and CCPA obligations for as long as it exists, across every copy and backup.
- Vendor sprawl: each verification vendor that stores documents becomes another node in the data supply chain, and their breach becomes your breach.

How does Burnt verify without storing data?

Burnt's architecture is designed around a simple principle: the only data that should persist is the verification result. Everything else is ephemeral.

Here is how it works in practice. A user initiates a verification — say, proving their income exceeds a threshold for a lease application. Instead of uploading a pay stub, they authenticate with their payroll provider through a secure session. Burnt accesses the relevant data point from the authenticated source, verifies it, and produces a result: income above threshold, true or false. The source data is processed in volatile memory and discarded immediately. It is never written to disk, never logged, never cached.
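The flow above can be sketched in a few lines of Python. Everything here is illustrative — the payload shape, function names, and proof format are assumptions, not Burnt's actual API — but it shows the key property: the raw payroll record lives only in local variables and is never written anywhere.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class VerificationResult:
    """The only data that persists: a boolean and the proof behind it."""
    source: str
    attribute: str
    result: bool
    proof: str
    timestamp: str

def verify_income_above(raw_record: dict, threshold: int, source: str) -> VerificationResult:
    """Derive a boolean from source data held only in volatile memory.

    `raw_record` stands in for a payload fetched over an authenticated
    payroll session. It is never logged, cached, or written to disk; once
    this function returns, no reference to it survives.
    """
    result = raw_record["annual_income"] >= threshold
    # Stand-in for a real cryptographic proof: a hash committing to the
    # claim without revealing any of the underlying figures.
    claim = f"{source}:income_above_{threshold}:{result}"
    proof = hashlib.sha256(claim.encode()).hexdigest()
    return VerificationResult(
        source=source,
        attribute=f"income_above_{threshold}",
        result=result,
        proof=proof,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Simulated payload from an authenticated session (values are fake).
record = {"name": "Jane Example", "ssn": "000-00-0000", "annual_income": 72_000}
outcome = verify_income_above(record, threshold=60_000, source="payroll.example.com")
del record  # the raw data's lifecycle ends here
```

Note that `VerificationResult` contains no field that could carry a name, address, or income figure — only the answer and its proof.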

The output is a minimal verification record: a source, an attribute, a boolean result, a cryptographic proof, and a timestamp.
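A representative record is shown below. Field names and values are illustrative, but the fields themselves match the description of Burnt's stored records later in the post: source domain, attribute type, boolean result, proof, timestamp.

```python
verification_record = {
    "source": "payroll.example.com",        # domain of the authenticated source
    "attribute": "income_above_threshold",  # what was checked, not the value itself
    "result": True,                         # the only answer that persists
    "proof": "9f2c…",                       # cryptographic proof backing the result
    "timestamp": "2026-02-21T14:03:09Z",    # when the verification ran
}
# No name, no address, no SSN, no income figure — nothing to exfiltrate.
```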

This is not data minimization in the traditional sense — collecting everything and then deleting what you do not need. This is architectural elimination. The system is designed so that personal data never reaches persistent storage in the first place.

The safest way to protect personal data is to never store it in the first place. Burnt does not minimize data retention — we eliminate it.

How does this satisfy GDPR and CCPA by design?

Privacy regulations like GDPR and CCPA impose specific requirements on how personal data is collected, processed, and stored. Most organizations satisfy these requirements through policies: data retention schedules, access controls, deletion procedures, consent management platforms. These policies work in theory. In practice, they are difficult to enforce consistently, especially across complex systems with multiple data stores and backup mechanisms.

Burnt's architecture satisfies key regulatory requirements as properties of the system, not as policies layered on top:

- Data minimization: only the specific attribute being verified is ever processed; extraneous fields are never collected.
- Purpose limitation: source data is used solely to produce the verification result, then discarded.
- Storage limitation: no personal data is written to persistent storage, so there is nothing to schedule for deletion.

The difference is structural. A traditional vendor implements data minimization by collecting a full document and then deciding which fields to retain. Burnt implements data minimization by never collecting the full document. A traditional vendor implements storage limitation by scheduling deletion of stored documents. Burnt implements storage limitation by never writing documents to storage. The compliance outcome is the same, but the enforcement mechanism is fundamentally more reliable.
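The distinction can be made concrete in code. The sketch below is a toy illustration, not Burnt's implementation: in the collect-then-delete model, minimization is a runtime step that can be skipped or misconfigured, while in the architectural model the persisted type simply has no field that could hold PII.

```python
# Traditional model: minimization is a policy step applied AFTER collection.
stored_document = {"name": "Jane Example", "ssn": "000-00-0000", "income": 72000}
retained = {"income": stored_document["income"]}  # a step that can be skipped

# Architectural model: the persisted type has no slot for PII at all.
class VerificationRecord:
    """Only these fields can ever exist on a stored record."""
    __slots__ = ("source", "attribute", "result", "proof", "timestamp")

record = VerificationRecord()
record.result = True
try:
    record.ssn = "000-00-0000"  # structurally impossible
except AttributeError:
    print("no place to store PII")
```

The `__slots__` trick is just a stand-in for the broader point: when the storage schema cannot represent personal data, retention failures cannot leak it.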

What does a data breach look like when there is no data?

Consider two scenarios.

Scenario A: traditional verification vendor breach. An attacker gains access to the vendor's document storage. They exfiltrate thousands of pay stubs, bank statements, and tax returns. Each document contains full PII — names, addresses, Social Security numbers, financial details. The vendor must notify every affected individual, engage forensic investigators, face regulatory scrutiny, and manage reputational fallout. The affected individuals face years of identity theft risk.

Scenario B: Burnt breach. An attacker gains access to Burnt's systems. They find verification records: source domains, attribute types, boolean results, cryptographic proofs, and timestamps. No names. No addresses. No Social Security numbers. No financial details. No documents. The records are functionally anonymous. There is nothing to weaponize for identity theft, financial fraud, or targeted attacks.

This is not a marginal improvement in breach severity. It is a categorical difference. The attack surface for personal data exploitation does not exist because the personal data does not exist in the system.

The vendor risk equation

For businesses that use verification services, this has direct implications for vendor risk management. Every verification vendor that stores documents is a node in your data supply chain. Their breach is your breach — your customers' data was in their system because you required those customers to upload documents through that vendor. With Burnt, there is no document store to breach, no PII to exfiltrate, and no downstream exposure to manage.

Policy versus architecture

Privacy in verification has historically been a matter of policy. Retention schedules, access controls, encryption at rest, employee training. These are all necessary, but they are all fallible. They depend on consistent human behavior across every employee, every system, every backup, every edge case.

Architecture is not fallible in the same way. If the system is designed so that personal data never reaches persistent storage, then no retention policy failure, no access control misconfiguration, and no employee error can expose that data. The privacy guarantee is a property of the system itself. That is what privacy by architecture means.

Frequently asked questions

Does Burnt ever store the data it verifies?

No. Burnt processes source data ephemerally — in memory only — and discards it immediately after producing the verification result. The only thing that persists is the result itself: a true/false answer and the cryptographic proof that backs it. No documents, no email content, no raw personal data is stored.

How does Burnt comply with GDPR and CCPA?

Burnt satisfies key privacy requirements architecturally. Data minimization: only the specific attribute is verified. Purpose limitation: data is used only for verification, then discarded. Storage limitation: no PII is retained. These are properties of the system design, not policies layered on top.

What happens to source data during a verification?

Source data is loaded into memory, verified against the original source, and the specific fact is extracted. The source data is then immediately discarded. Processing happens in volatile memory with no disk persistence. The entire lifecycle takes seconds.

What would an attacker find if Burnt were breached?

An attacker who breached Burnt's systems would find no personal documents, no email content, and no raw user data. The only stored records are verification results — anonymous true/false signals with cryptographic proofs. There is no PII to exfiltrate because none is retained.


Burnt Team

The team behind Burnt builds verified data infrastructure that goes straight to the source.

