edit: framework name revised from 'root2root' to 'Decentralised Attestation' because, well, DA sucks a lot less

In the 18 months since the first round of Snowden's disclosures began, it's been my pleasure to watch from the inside as cryptostorm has evolved from a starry-eyed vision of a "post-Snowden" VPN service into a globally deployed, well-administered, high-profile leader in the network security service market. That kind of transition in such a short period of time can leave one with a sort of future-shock: the phases blur until the only real phase is one of transition. It's exciting, challenging, exhausting, exhilarating, and fascinating all at once.
"There are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, "Morning, boys, how's the water?" And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, "What the hell is water?""
~ David Foster Wallace
One faces the very real risk of myopic blindness - a loss of situational awareness - when one becomes accustomed to living inside such a red-shifted existence: the world outside the bubble can come to seem distant, slow, and less relevant to the local frame of reference every day. That can make for exceptional focus on tactical obligations - I think cryptostorm's excellent record in deploying innovative tools quickly and consistently speaks to the value such a focus can bring - but it can also lead to a form of brittle ignorance of the flow of macro events.
In the past month or so, largely because my operational duties on the team are relatively small (hence the luxury I enjoy of being able to post here in the forum more than most anyone else on the team), I've been able to step back from some of the red-shifted intensity of cryptostorm's internal ecosystem, and consider not only the trajectory we've been on since Snowden but also the trajectory leading forward from here.
Which all sounds awfully boring and maudlin, admittedly, so let's move along to the interesting stuff, eh?
In summing up what we do, I'd say the core of cryptostorm's mission is providing a layer of genuine security around the data our members send back and forth to online resources. That layer isn't end to end, but it does protect against local-zone snooping, and it provides a ubiquitous level of identity decoupling - "anonymity" - for most all routine online activities. That, in turn, frees our members from a constant fear of having the ugly snarling bits of the internet come back down the pipeline and appear at their physical front door (amongst other fears allayed). And although there are unquestionably areas in that remit where we can continue to improve - and must continue to improve - in general I'd say (with all humility) that we're pretty good at that job. That's a good thing to say; it reflects quite a bit of wisdom, experience, expertise, creativity, and bloody hard work on the part of the whole team... plus enormous support from our close colleagues and the larger community along the way.
So: yay.
But: now what?
Do we continue to iteratively improve our core "data in transit" remit as we move forward, keeping that as our unitary focus? Or... is there something else sitting at the edge of our peripheral vision, only waiting for us to recognise it? Yes, the latter.
No need to bore you with the etiological summary of how these obvious-in-hindsight revelations have come to us as a team in recent months (there's equal bits of webRTC, torstorm, deepDNS, komodia, fishycerts, torsploit, superfish, and more fishycerts mixed in with who knows how much else); let's simply lay out some facts we've been fortunate enough to see staring us in the face, as a result:
- 1. Data-in-transit is one part of a larger challenge our members face in staying safe and secure online, in general
2. Doing our small part of that work really well is helpful and important... but leaves many other areas uncomfortably exposed
3. Most of those areas are not part of our core expertise as a team... but a few, somewhat obviously, are.
4. Of all the areas of uncomfortable exposure beyond the confines of cryptostorm's network edges, "secure" web browsing via https is unquestionably the most badly broken, most widely used, and most complex-to-mitigate security problem our members face online in their day-to-day activities.
Only that's not how any of it actually works.
I'm not even going to attempt to summarise how this all came to be, nor how it actually plays out at a systems-theoretical or technological level. Many brilliant people have written on those subjects far more effectively than I ever will, and I encourage anyone interested in these matters to read those writings rather than wasting time reading any attempt of mine. But, while I may not have the ability to articulate the CA model in all its gruesomely convoluted, counter-intuitive, opaque hideousness... I do know how it works as an insider and a specialist in this field. I know it from years of frontline engagement, elbows-deep in x.509 syntax & CRL policies & countless complex details stacked in teetering layers.
I also know it as someone whose professional obligation is ensuring that our members are secure in the context of this insecure CA model... which is to say, as someone who is tasked with making something work that's designed not to work. Because, yes, the CA model is designed to be insecure, and unreliable, and opaque, and subject to many methods of subversion. This is intrinsic in its centralised structure; indeed, it's the raison d'être of that structure itself. What exists today is a system that guarantees the identity of both sides of an "end to end" https network connection... except when the system decides to bait-and-switch one side out for an attacker... if that attacker has the leverage, resources, or connections to have access to that capability.
The CA model also puts browser projects - Chromium, Mozilla, etc. - in the role of guardians of identity integrity, through their control over who gets in (and stays in) the "trust store" of root certs held (or recognised) by the client's browser. But of course browser vendors are in fact advertising businesses and they make their daily bread on the basis of broad coverage, broad usage, and no ruffled feathers... they are the last entities in the world with any incentive to be shutting down root certs if a CA is compromised in a way that can't be easily swept under the rug. So the browser vendors loathe the role of CRL guardians, and basically don't do it. Which means every root cert out there today is going to stay "trusted" in browsers, more or less, irrespective of whether there's any actual trust in the integrity of their vouching, or not.
Editing in [6 March] a relevant summation of this dynamic from Dr. Green. Here, he's speaking in reference to Superfish - an acknowledged distribution of a badly-broken, unauthorised root certificate and private key in the wild (although the question of in what context a root cert can be said to be "authorised" quickly becomes one of ontology):
"The obvious solution to fixing things at the Browser level is to have Chrome and/or Mozilla push out an update to their browsers that simply revokes the Superfish certificate. There's plenty of precedent for that, and since the private key is now out in the world, anyone can use it to build their own interception proxy. Sadly, this won't work! If Google does this, they'll instantly break every Lenovo laptop with Superfish still installed and running. That's not nice, or smart business for Google."
Not smart business for Google, indeed. This makes a mockery of the entire concept of "revocation lists" - which actually become "lists of stuff Google et al may or may not revoke, depending on their own business interests at the time... and any political pressure they receive behind the scenes" - rather than any kind of objective process (not picking on Google here; indeed, all appearances are that they're the least-bad of the lot).
One more aside on CRLs: they're accessed via plaintext by just about every root certificate I've ever looked at myself. Let me repeat that: certificate revocation lists - the mechanism for revoking bunk certificates - are served out via plaintext http sessions, at URLs hard-coded into the certificates themselves. Really. They are.
Here is a specific example, from the cert we all love to hate, namely StartCom's 30 year 4096 SHA1 '3e2b' root:
X509v3 CRL Distribution Points:
Full Name:
URI:http://cert.startcom.org/sfsca-crl.crl
Full Name:
URI:http://crl.startcom.org/sfsca-crl.crl
Can't imagine any problems with that, can you? I'm hardly the first person to notice this as "an issue," nor will I be the last - it's another example of structural weakness that enables those with central hegemonic authority to bend the system arbitrarily as they desire in the short term, while retaining the appearance of a "secure" infrastructure in the public mind.
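For anyone who wants to see this firsthand on their own certs, here's a minimal sketch - assuming the third-party Python 'cryptography' package, and using a hypothetical filename for a local PEM copy of the certificate - that pulls the CRL Distribution Points extension out of a cert and flags any plaintext http URIs:

# Sketch: list the CRL distribution point URIs in a certificate, flagging plaintext http.
# Assumes 'pip install cryptography'; "startcom-root.pem" is a hypothetical local filename.
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

with open("startcom-root.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    crl_ext = cert.extensions.get_extension_for_oid(ExtensionOID.CRL_DISTRIBUTION_POINTS)
except x509.ExtensionNotFound:
    print("no CRL distribution points published in this cert")
else:
    for dist_point in crl_ext.value:
        for name in (dist_point.full_name or []):
            uri = name.value
            flag = "  <-- fetched over plaintext http" if uri.startswith("http://") else ""
            print(uri + flag)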
After some posts about this in our twitter feed recently, @stribika let us know that this is an intentional design decision:
"Publishing them over HTTPS wouldn't fix it because the cert is assumed to be good on CRL download failure."
Good point, but one can see how this spins quickly into a recursive pantomime of any legitimate sort of CRL-based assurance of root cert integrity.
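To make that "soft-fail" behaviour concrete: revocation checking in most clients behaves roughly like the sketch below - if the CRL can't be fetched (say, because an on-path attacker quietly drops the request), the certificate is simply treated as good. This is an illustrative sketch only, not any particular client's code, using the Python 'cryptography' package to parse the CRL:

# Illustrative sketch of "soft-fail" revocation checking - not any real client's code.
# If the CRL fetch fails for any reason, the cert is treated as valid; an attacker who
# can block the (plaintext) CRL URL therefore defeats revocation entirely.
import urllib.request
from cryptography import x509

def is_revoked_soft_fail(cert_serial: int, crl_url: str) -> bool:
    try:
        with urllib.request.urlopen(crl_url, timeout=5) as resp:
            crl = x509.load_der_x509_crl(resp.read())
    except Exception:
        return False   # fetch or parse failed -> assume NOT revoked: the soft-fail
    return any(entry.serial_number == cert_serial for entry in crl)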
~ ~ ~
From here, I could launch into a foam-specked summation of the DigiNotar hack of 2011 to illustrate all these points. But I won't... or if I do, I'll do it in a separate thread so the foam doesn't speck over this one too much. But, yes... DigiNotar.
The CA model serves the purpose (in structural-functionalist terms) of giving the appearance of reliable identity validation to the majority of nontechnical users who see the green padlock in their web browser and think "secure," while simultaneously ensuring that the door to subversion of that security is always and forever available to those with enough access to central political power to make use of it. So: if you're Microsoft and you really want to, of course you can break any https session, because you can sign root certs - short term - that browsers will swallow whole, and MiTM your way to plaintext. Same for the NSA and other such spooky entities, of course. If you do it too much, too broadly, someone might notice (certificate transparency at least might do this, sometimes, maybe)... but if they do, what of it? There will be a half-baked story about a "hacker" with a ski mask on, etc... no root certs pulled from trust stores, no big heat, really not much hassle at all. Give it a bit to die down, and come right back to the trough.
This is not a "failed CA model." It's the exact requirements the CA model fills. Those who seek to "fix" the CA model are trying to fix something that's doing exactly what it's supposed to do for those who make the macro decisions about how it will be managed. To say such efforts are hopeless is actually giving them more chance of success than they have. They are sink-holes for naive enthusiasm, able to sop up technological radicalism in unlimited volumes... eating entire professional lives of smart and eager activists, leaving nothing behind but impenetrable whitepapers and increasing intake of alcohol over time.
But I digress.
This all became crystal clear to many people - and was re-emphasised for those of us who already knew - via the Superfish debacle. And, personally, as I dug into that research topic, I started seeing more and more evidence of how deeply subverted the CA model is - and is designed to be. I could send many bits of foam flying talking about bunk certs and hijacked hostnames and DNS caching evils, and on and on...
I could also spend months or years documenting all that, and eventually add that pile of documentation to the mountains already in existence - more landfill fodder. But, to be blunt, I'm interested in addressing the issue - not in writing about it. I know enough firsthand to know without a quantum of uncertainty that https is unreliable as a secure transport mechanism today. That's enough - it's enough for me to move forward, knowing the facts on the ground as they exist today.
It'd be easy to say that https isn't cryptostorm's job. And it'd be basically true, in historical terms. We route packets, and if those packets carry https sessions that are themselves subverted by cert fuckery... well that's not our problem. Members should be more careful (how?), and besides we can't fix it anyhow. Well, we've debated this as a team quite a bit in recent months. I can't say we have complete consensus, to be honest... but I do feel we've got a preponderance of support for the effort I'm describing here.
Simply put, we're expanding the protection offered to on-cstorm members: we're tackling the problem of broken https at the cryptostorm level, and while we won't be able to nullify that attack surface in one step, we're already able to narrow it considerably, and our mitigation from there has ongoing room to move asymptotically towards zero viable attacks on https identity. We've started calling this mechanism for credible identity validation of https sessions "root-to-root" identity authority, as opposed to the Certificate Authority model out there today. Root-to-root doesn't replace the CA model, nor is it in a "battle" with it; it subsumes it, in a sense, in a simpler wrapping of non-mediated identity validation.
In short, we're shifting the Authority of the Certificate Authority model back to individual network members... they're the real "root authorities" in a non-compromised model, and thus root-to-root sessions are the way to ensure the model meets their needs.
~ ~ ~
Implementing r2r for on-cstorm sessions requires us to be clear about what problem we're seeking to solve. That problem - verifying identity online - is actually composed of two distinct, but deeply intertwined, sub-problems. Those problems, or questions, are...
- 1. How can I be sure that an entity I already know is the same entity, over time, and not some other entity pretending to be them in order to gain access to communications intended for the real one?
2. How can I be sure that when I engage in network-routed communications with a particular entity, those discussions go to that entity rather than being surreptitiously redirected through a fake transit point masquerading as that entity?
The second problem - "how can I have confidence that the network entity I am talking to is the same as the 'real' entity I want to talk to?" - is presently tackled by a mongrel mix of DNS and CA-model centralisation... which is to say, it's got two famously complex and insecure systems entwined in an ugly fail-dance, ensuring that there's no way in hell anyone can be 100% sure - or even 95% sure - that the two systems together give a reliable answer to the question of whether I'm sending packets to "Janet" at a network (logical) address that is actually controlled by Janet. Usually, my packets will get to Janet... except when they don't. And I'll most likely never know if they don't get there, because an attacker with access to the skeleton keys of DNS and/or CA credentials can do it invisibly. I never know when I'm being screwed, nor does Janet. This uncertainty serves central power just fine.
The second problem emerges from the ontological roots of routed networking: the divergence between physical and logical network topology, as well as the distribution and dynamic evolution of "connectome"-level entity-relationship information embedded in those model layers. The first problem, in contrast, is simply a by-product of remote communications for a species of mammal evolved to know each other in physical terms, not as amorphous, disembodied conceptual categories.
Both problems must be solved, concurrently and robustly, if we are to have easy and consistent confidence that when we visit https://janetphysicsconsulting.org we are sending our latest experimental data to "the real Janet" rather than someone pretending to be Janet, and that those data are being routed to an endpoint controlled by Janet rather than some sneaky GiTM along the way...
Currently, to send those data to Janet's website with confidence they'll arrive unmolested in Janet's custody, we have to have confidence both that the hostname "janetphysicsconsulting.org" will translate into instructions for our data to go to Janet's computer (DNS resolution and routing-table integrity), and that janetphysicsconsulting.org is actually controlled by Janet and not some impostor pretending to be Janet (the TLD registrar system of authoritative nameservers, etc.). If either - or both - of those assurances fail, then no amount of clever crypto will prevent our data from getting fondled in a most unseemly way.
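To make that dependency chain concrete, here's a minimal Python sketch of what "just visiting Janet's site" quietly relies on today (the hostname is the hypothetical example from above, so it won't resolve for real; the point is which systems we end up trusting at each step):

# Sketch: the two trust dependencies behind an ordinary https connection.
# Step 1 trusts whatever answer the DNS resolver hands back; step 2 trusts whichever
# of the root CAs in the local trust store vouched for the certificate presented.
# "janetphysicsconsulting.org" is the hypothetical hostname from the text.
import socket, ssl, hashlib

host = "janetphysicsconsulting.org"

# 1. DNS resolution: we take the resolver's word for where "Janet" lives.
addr = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4][0]
print("resolved to:", addr)

# 2. TLS handshake: we take the CA model's word that this cert belongs to Janet.
ctx = ssl.create_default_context()
with socket.create_connection((addr, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der = tls.getpeercert(binary_form=True)
        print("cert sha256:", hashlib.sha256(der).hexdigest())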
That's the problem, in a nutshell.
The solution, most emphatically, is not to continue to incrementally refine the CA model, or (merely) encrypt DNS queries. Each of those might have its uses - indeed, we're supporters of DNS query security ourselves - but they cannot act as a substitute for systems-level alternative mechanisms for solving this problem. I'm repeating this point over and over because, until we accept that reality, we're self-precluded from ever seeing our way forward. Like the fish in the sea who never imagined the concept of "sea," we're swimming in waters of which we remain pathetically unaware.
We're in the water, all of us. We must see that, before we can even talk about what that means.
~ ~ ~
Oh, right, I'd mentioned something about cryptostorm solving these intertwined problems of network identity for folks on-cstorm, hadn't I? A quick sketch, so as to leave room for more technical exposition once we've rolled out a tangible proof-of-concept in the form of r2r-verified connections to cryptostorm itself (which should be done in a day or so... we'd scheduled that earlier, but pushed STUNnion up the queue given its serious opsec implications).
There are two main components to our r2r framework: one addresses routing, and one addresses public fingerprint verification. Fortunately, both problems have already been essentially solved (in technical terms) via creative, vibrant technologies that were essentially nonexistent a decade ago.
Verification of the integrity of publicly-published data is a problem fundamentally solved by blockchains. Consensus validation of chain-posted data works, and has proved robust against very strong attacks thus far. It is not perfectly implemented yet, and there are still hard problems to be tackled along the way. That said, if cryptostorm wants to post something publicly in a way that anyone can access and have extremely high confidence both that it was posted by cryptostorm and that it has not been modified since, blockchains work. Whether with pleasant frontends such as those offered by keybase.io or onename.io (as a class), or via direct-to-blockchain commit, this system gets data pushed into a place from which it is nearly impossible to censor, and in which it is nearly impossible to modify ex-post. This works.
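The r2r specifics aren't written up yet, but the shape of the idea can be sketched. Suppose Janet has committed the SHA256 fingerprint of her server certificate to a blockchain-backed namespace; verification then reduces to comparing the fingerprint of the cert actually presented against the published one. In the sketch below, lookup_published_fingerprint() is a placeholder for whichever frontend (keybase.io, onename.io, a DNSChain resolver, or a direct chain query) actually serves the record - it is not an existing API:

# Conceptual sketch: verify a live cert against a fingerprint committed to a blockchain.
# lookup_published_fingerprint() is a placeholder, not an existing API.
import socket, ssl, hashlib

def lookup_published_fingerprint(name: str) -> str:
    raise NotImplementedError("placeholder: fetch the chain-committed sha256 here")

def connect_if_pinned(host: str) -> ssl.SSLSocket:
    expected = lookup_published_fingerprint(host)
    ctx = ssl.create_default_context()
    sock = socket.create_connection((host, 443))
    tls = ctx.wrap_socket(sock, server_hostname=host)
    actual = hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()
    if actual != expected:
        tls.close()
        raise ssl.SSLError("fingerprint mismatch for " + host + ": possible MiTM")
    return tls   # identity verified independently of the CA hierarchy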
Successful routing of data across an unfriendly network substrate - with exceedingly high confidence that those data are not being topologically hijacked mid-stream, and that the endpoint to which the data were directed at the initiation of route setup is in fact the endpoint at which they arrive (and the reverse) - has been solved by the meta-network technologies of Tor and i2p (in a form of convergent evolution of disparate architectures). Both mate packet transit with asymmetric cryptographic verification of bit-level data and route trajectory, and both work. An oracular attacker sitting on mid-route infrastructure can of course kill routing entirely by downing the network itself, but no practical or theoretical attacks enable such an attacker to enjoy oracular route-determination control over such sessions. These tools, also, work.
With those two technical primitives in place, the challenge of enabling confidence in our visit to Janet's website is fundamentally met. We can verify that Janet is Janet, publicly and reliably, via blockchain commit... and we can ensure that the essential components of this process are routed reliably through the use of meta-topological tools from either Tor or i2p. Simply put, we can do blockchain lookups via topologically hardened Tor/i2p routing constructs that allow us to establish reliably secure connectivity with Janet's website. Once we have that session instantiated, in cryptographic terms, we are in good shape: TLS works, once it's up and running, and we need not try to restructure TLS to fix the problem of route/identity validation and integrity assurance.
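And the routing half of that sketch is similarly mundane in practice: the chain lookup itself can be pushed through Tor, so an on-path attacker can't selectively feed us a bogus "published" fingerprint. One common way to do that from Python is to hand the request to a local Tor client over SOCKS5 - this assumes Tor is listening on its default 127.0.0.1:9050 SOCKS port and that the requests library is installed with SOCKS support; the lookup URL is again a placeholder:

# Sketch: route the fingerprint lookup itself over Tor, so the answer can't be
# selectively tampered with by someone sitting on our direct network path.
# Assumes a local Tor client on 127.0.0.1:9050 and 'pip install requests[socks]'.
# LOOKUP_URL is a placeholder for whatever blockchain frontend serves the record.
import requests

TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
             "https": "socks5h://127.0.0.1:9050"}
LOOKUP_URL = "https://example-chain-frontend.invalid/fingerprint/janetphysicsconsulting.org"

def lookup_published_fingerprint_over_tor() -> str:
    resp = requests.get(LOOKUP_URL, proxies=TOR_PROXY, timeout=30)
    resp.raise_for_status()
    return resp.text.strip()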
Rather, we graft on exogenous tools - themselves well-proven in the field, but somewhat at a remove from "mainstream" https currently - atop the existing strengths of https. Further, this approach generalises into non-https network encryption. Once the extra superstructure is in place to bulwark against the structurally implicit weaknesses of the CA, DNS, and TLD-nameserver systems, there are no intrinsic bounds on how far it can be extended from there.
~ ~ ~
We're making no fundamentally new tech, at cryptostorm, in order to bring r2r to life. The tools are there, because creative and dedicated individuals and teams have invested their passion and wisdom in bringing them to life. We're using components from the DNSchain team, from the entirety of the Tor Project's work, from Drs. Bernstein & Lange's c25519 breakthroughs, and from dozens of other brilliant technologists. We're just stacking those wonderful building blocks up in a way that enables something really useful for folks seeking secure network access, via cryptostorm.
The final piece of the puzzle is our deepDNS resolver/reply system, which has emerged from our earlier work to ensure integrity of DNS queries in the micro sense. With deepDNS, we are able to deploy "active masking" at the cstorm-network level - ensuring that privacy-relevant attack surfaces are minimised for folks on-cstorm.
Once we recognised the implicit capabilities of deepDNS - once we noticed that we're swimming in the water, as it were - the jump to r2r was all but inevitable. We are able to provide robust assurances of both data-in-transit integrity and routing-trajectory integrity for the on-cstorm leg of member network sessions... and that bootstraps all the rest. It's a layered, fluid, topologically heterogeneous meta-system that makes r2r possible. And it works.
So that's that. Despite this "too many words" essay, the deploy is somewhat trivial in practice. Once we've got tangible examples of this methodology in the field, we expect to find improvements, refinements, and extensions currently not obvious to us. And we hope others will take what we're able to do, and build in turn new capabilities and technologies we don't yet imagine ourselves.
Here's to those of us who are brash enough to worship at the altar of the cult of the done...
Cheers,
- ~ ðørkßöt
ps: down with fishycerts!
