Every messaging company in 2026 claims to care about your privacy. The marketing pages all use similar phrases — "end-to-end encrypted," "we don't read your messages," "your data stays with you." Most of them are technically true. None of them mean the same thing.
The distinction that actually matters is this: can the company see what it claims not to look at?
If yes, you have privacy by promise. The company has the data, and they pinky-swear not to do anything bad with it. You trust their current policy, their current management, their current legal exposure, and their current security posture, all at once.
If no, you have privacy by architecture. The company doesn't have the data. They couldn't hand it over if they wanted to, couldn't be subpoenaed to produce it, couldn't accidentally leak it in a breach. Promise becomes irrelevant — there's nothing to promise about.
Almost every messenger lives mostly in the first category. We try, very deliberately, to live in the second.
What "promise" buys you, and how it breaks
A promise can be perfect today and worthless tomorrow. The things that turn a promise into nothing:
- A subpoena. "We promise we won't share your data" cannot survive "share this data or face contempt of court." The promise was about voluntary disclosure; the court order is involuntary.
- A leadership change. The CEO who set the policy retires. The new CEO sees a profit opportunity in your data. The policy changes. The data was always there.
- An acquisition. The company gets bought. The new owner inherits everything, including data. Whatever was true at signup is now subject to renegotiation.
- A breach. The promise has nothing to do with whether attackers can read the database. They read it.
- A jurisdictional shift. The company moves headquarters. The new jurisdiction's compelled-disclosure laws are different. What was confidential is now subject to legal process you didn't know existed.
- An honest mistake. A new engineer adds a logging statement that captures what was supposed to be private. The promise is intact; the data leaked anyway.
None of these are hypothetical. All of them have happened to messaging companies in the last decade, sometimes more than once.
What "architecture" buys you, and how it breaks differently
Architecture-level privacy isn't a feeling. It's a fact about what data structures exist, what columns are in what tables, what functions can be called on what data. If your phone number isn't in our database, no court can order us to produce it. If we have no record of which group you're in, no breach exposes our list of your groups — because the list doesn't exist.
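To make "the column doesn't exist" concrete, here is a toy sketch using an in-memory SQLite database. The table and column names (`messages`, `mailbox_key`, and so on) are illustrative assumptions, not our actual production schema:

```python
import sqlite3

# Hypothetical, simplified schema for illustration only: the table that
# stores encrypted blobs carries no personal identifiers at all.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        mailbox_key BLOB,    -- random, client-derived; not tied to a person
        ciphertext  BLOB,    -- opaque to the server
        created_at  INTEGER  -- used for expiry, nothing else
    )
""")

# "Produce all phone numbers" is not a query we refuse to run;
# it is a query that cannot be expressed against this schema.
err = None
try:
    conn.execute("SELECT phone_number FROM messages")
except sqlite3.OperationalError as e:
    err = str(e)
```

The point of the sketch: compliance with a demand for phone numbers isn't a policy decision the database can get wrong. The query fails at the schema level.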
This is what other posts in this series have walked through:
- We have no phone number on file. The threat doesn't exist.
- We have no group membership table. The threat doesn't exist.
- We have no role table for who's an admin of what. The threat doesn't exist.
- We cannot decrypt the messages you've sent — even if a subpoena, a breach, and our own corruption all coincide. The math doesn't close.
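"The math doesn't close" can be shown with a deliberately minimal stand-in. This toy uses a one-time pad in place of our real encryption scheme — the names `encrypt` and `server_sees` are invented for the example — but the structural point carries over: the key exists only on the two devices, and the server stores ciphertext alone.

```python
import secrets

# Toy sketch: a one-time pad stands in for the real message encryption.
# The key is generated on the sender's device and shared only with the
# recipient; the server relays and stores ciphertext, never key material.

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(key) == len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

msg = b"meet at noon"
key = secrets.token_bytes(len(msg))  # lives on the two devices only

server_sees = encrypt(key, msg)      # the only thing the server ever holds

# A subpoena, a breach, and a corrupt operator combined still yield only
# `server_sees`. Without the key, a one-time-pad ciphertext is consistent
# with every possible plaintext of the same length.
recovered = decrypt(key, server_sees)
```

The one-time pad is the cleanest illustration because its security is information-theoretic, not just computational: there is nothing on the server side to brute-force.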
Architecture has its own failure modes. It can be:
- Misimplemented. A subtle bug means the architecture is leakier than the claim. (We test for this; we're not infallible.)
- Worked around at the device level. A compromised phone is still a compromised phone — architecture on our side doesn't protect against forensic recovery from your local storage. (We do what we can — encrypted on-device storage, content-free push notifications — but the phone is yours, and we can't make the OS safer than it is.)
- Sidestepped by what we do know. We know that your client connected to our cluster at time T. We don't know what was inside the encrypted channel, but the fact of connection is metadata. We minimize it; we can't make it zero without breaking your ability to receive messages at all.
Where we still need promises
Architecture removes most of the data we'd otherwise need to promise about. But not all of it. The things we still operate on a promise basis:
- We log connection events — we have to, for abuse handling and basic operations. We promise these logs are short-lived and that we don't cross-reference them with any external identifier (because we don't have one to cross-reference with).
- We choose the code that runs on our servers. You can't audit our server-side code today (we're not open source yet). That's a real gap. We're closing it.
- We won't add features that change the architecture. "We won't start collecting phone numbers next quarter" is a promise. The architecture today makes it easy to keep; the promise commits us to keeping it easy.
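One of those promises — short-lived connection logs — can be partly moved toward architecture by enforcing the retention window in code. A minimal sketch, assuming an in-memory log; the class name `ConnectionLog` and the retention value are illustrative, not our real implementation:

```python
import time
from collections import deque

# Hypothetical sketch: events older than the retention window are purged
# on every write, so "short-lived" is a property the code enforces rather
# than a policy someone has to remember to follow.
RETENTION_SECONDS = 3600  # assumed window; the real value is an ops choice

class ConnectionLog:
    def __init__(self, retention=RETENTION_SECONDS):
        self.retention = retention
        self._events = deque()  # timestamps only; no identifier is stored

    def record(self, now=None):
        now = time.time() if now is None else now
        self._events.append(now)
        self._purge(now)

    def _purge(self, now):
        while self._events and now - self._events[0] > self.retention:
            self._events.popleft()

    def __len__(self):
        return len(self._events)

log = ConnectionLog(retention=10)
log.record(now=0.0)
log.record(now=5.0)
log.record(now=20.0)  # the first two events are past the window: dropped
```

Note this only shrinks the promise, it doesn't eliminate it: you still trust that this is the code we actually run, which is why open-sourcing the server side matters.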
The trust footprint shrinks but doesn't vanish. The promises that remain are about things that aren't already architecturally impossible — but they're a small list, and shrinking is the goal.
The dial, and where to set it
You can think of any messenger as having a dial between architecture and promise. The further toward promise, the more you're trusting the company. The further toward architecture, the more you're trusting the math.
Companies break. The math doesn't.
We've spent four blog posts so far walking through specific places we moved the dial. We'll spend more on the ones we haven't covered yet. The thing we want you to leave with: when a messenger says "we care about privacy," your next question should be "by architecture, or by promise?" And the answer should be checkable.
BlindPost