Why Unwrite isn't open source
If you care about privacy, why not open source? Fair question. If we bang on about privacy and "nothing leaves your browser," why not open source the whole thing so people can verify it?

Here is the honest answer.
Open source does not equal private
Open source means the code is available to read. It does not mean the code respects your privacy. Plenty of open source projects include telemetry, analytics, and tracking. The code is right there in the repository for anyone to read, and most people never do.
The reverse is also true. Closed source software can be genuinely private. What matters is architecture, not access to a Git repository.
Your browser is the audit tool
You do not need our source code to verify our privacy claims. You need a browser.
Open DevTools. Click the Network tab. Use any Unwrite tool. Watch the requests. If nothing leaves your browser, nothing leaves your browser. That is a stronger guarantee than reading source code, because source code can differ from what is actually deployed.
The network tab shows you what the live site actually does. Not what a repository says it should do. Not what a commit from six months ago intended. What it does right now, on your device, in your browser.
The "community reviews it" myth
The strongest argument for open source security is peer review. The community reads the code, finds bugs, and submits fixes. In theory this works brilliantly.
In practice it fails at scale. Consider two of the most significant security failures in recent open source history:
Heartbleed (2014): A critical vulnerability in OpenSSL, one of the most widely used open source libraries on the internet. The bug existed in the public codebase for two years before anyone noticed. Millions of servers were exposed. The code was right there. Nobody looked.
Log4Shell (2021): A remote code execution vulnerability in Log4j, a Java logging library used by nearly every enterprise Java application. The vulnerable code was public for years. The fix came from a security researcher, not from routine community review. The exploit was trivial. The exposure was catastrophic.
Open source does not guarantee that anyone competent is reading the code. For most projects, the number of people who have actually audited the security-critical paths is effectively zero.
Handing over the playbook
This is the part people tend not to think about. Open sourcing a tool like Unwrite means publishing every detection heuristic, every transformation rule, and every signal weight that our AI detection and humanisation system uses.
That is a roadmap for anyone who wants to defeat it.
We are not being hostile about this. We think AI tools are useful and people should use them. But the detection and humanisation system has value precisely because the internals are not public knowledge. Making them public does not make the tool better for users. It makes the tool worse.
Sustainability
We are a small team. The tools are free. There is no subscription paywall on any of the browser-based tools and there will not be one.
Open sourcing changes the economics. It creates an expectation of community management, issue triage, pull request review, and documentation maintenance. Those are real costs that do not generate revenue. For a project that already gives away its core product for free, adding open source maintenance overhead would accelerate burnout without improving the product for users.
Where we are open source
This is not a blanket objection to open source. We contribute to open source and have published open source work.
Unwrite Images is built on Squoosh, Google's open source image compression toolkit. Our version extends it with batch processing, additional format support, and integration with our tooling. The work is published at github.com/benpalmer1/Unwrite-Images.
Where open sourcing makes the tool better for users and does not undermine its purpose, we do it. The image tools benefit from community scrutiny because the processing logic is well-understood and there is no adversarial concern. Anyone can verify the compression algorithms produce correct output.
What we do instead
Instead of publishing source code and hoping someone reads it, we design the system so that privacy is architectural.
- All processing runs in your browser via WebAssembly and JavaScript
- No file uploads, no server-side processing for any free tool
- No analytics beacons, no tracking pixels, no cookies
- No user accounts required for any browser tool
- DevTools Network tab verifies all of this in seconds
This is what we call "architectural privacy." The system cannot violate your privacy because it is not built with the capability to do so. There is no upload endpoint to call. There is no analytics SDK to fire. There is no tracking cookie to set.
Read our privacy policy for the full details, and see our post on removing all tracking from the site for the lengths we go to.
The real question
The question is not "is the source code public?" The question is "can I trust this tool with my data?"
You can verify trust in under thirty seconds. Open DevTools. Use the tool. Check the network tab. If zero requests go out, zero data was collected. That is a stronger privacy guarantee than any open source licence provides.
We think that is more honest than publishing a repository and implying that makes you safe.