For several years, we've discussed amongst our team and with our community the pros and cons of bringing DNS resolver functionality for on-cstorm sessions in-house (separate threads found here and here and here, for a start). As that process unfolded, our tech folks cast a wide net over the literature and best practices surrounding DNS resolution and DNS security in general.
Anyone who has explored this area of technology can confirm that the waters run deep. Whether it be the often-misunderstood question of DNS leaks & how to prevent them, the general obscurity of so much client-side DNS behaviour in tunnelled settings, or the 'right' DNS settings for cryptostorm members, there are hundreds of posts in the forum collating information, reviewing research, pulling together advice, and sifting useful security kit from the cryptographic catastrophes of failed DNS security efforts so commonly seen in the wild.
After a few years of work invested (on & off) by our team - and new projects developing during the interim - we saw potential coalescing for genuine improvement in both DNS security/resilience against censorship, and in on-cstorm session performance (faster lookups, lower latencies).
Back in December, we deployed a beta version of a DNSchain-supported resolver on a test machine in our network. DNSchain is an architecture that implements Namecoin blockchain-based resolution of .bit TLD names... names that can't be seized, hijacked, or subverted by state authority - as is so trivially easy to do under the current Certificate Authority/TLD-based model & all conventional efforts to fix its hideous flaws.
In parallel, we've been testing DNScurve, a cryptographically robust method to defeat substantial chunks of known attacks on DNS-lookup systems as they pass through the various layers of network resources. Despite the similarity in names, the two tools - DNSchain and DNScurve - really address different components of the attack-surface landscape. Together, they close off big chunks of weak real estate.
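For the curious, here's a rough conceptual sketch of the Curve25519 "boxing" that DNScurve builds on - emphatically not the actual DNScurve wire format (which encodes the resolver's public key into its hostname and lays out magic string, client key, and nonce in a specific way), just the underlying cryptographic idea, using the PyNaCl library and throwaway keys generated on the spot:

```python
# Conceptual sketch only: the Curve25519 box that DNScurve is built around,
# NOT the real DNScurve packet format. Requires PyNaCl. Keys here are
# hypothetical stand-ins generated locally for illustration.
from nacl.public import PrivateKey, Box

resolver_key = PrivateKey.generate()   # stands in for the resolver's keypair
client_key = PrivateKey.generate()     # ephemeral per-query client keypair

query = b"raw DNS query bytes would go here"

# Client encrypts + authenticates the query to the resolver's public key...
sealed = Box(client_key, resolver_key.public_key).encrypt(query)

# ...and only the resolver, holding its private key, can open it.
opened = Box(resolver_key, client_key.public_key).decrypt(sealed)
assert opened == query
```

An on-path attacker who tampers with the boxed query just causes an authentication failure rather than a poisoned answer - that's the chunk of the attack surface DNScurve closes off.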
And finally, we've been testing various methods for doing the actual boring work of query-by-query DNS lookups via the delegated-authority model underlying the entire IP-to-domain model of packet-switched networking that we know as the internet. This is, itself, a world of layers upon layers - shortcuts that bring misery and elegant solutions that avoid enormous morasses of fruitless work; competing recommendations from smart folks who have studied these questions their entire professional lives, and DDoS skiddies looking for the latest easy amplification attacks to jack up their booter firepower.
The final weird factor in these decisions is the near-ubiquitous reliance on "DNS leak" testing websites, each of which implements closed-source, obfuscated code (usually brutalist JavaScript) to convince a web browser to ask some part of the kernel what resolver it would use if - theoretically - it were resolving something (most follow this model; a small minority throw actual lookups out of the browser sandbox, usually via privilege-escalated custom Java applets that can pull off such tricks legitimately). All the while, all sorts of other bits and pieces in the local client computing ecosystem may well be making their own decisions about which resolvers to use, and when. Plus there's the local router or gateway, which has likely got its own hard-coded resolvers it wants to use when it feels like it.
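To make the client-side piece of this concrete, here's a minimal sketch of the question those leak-test sites are really trying to approximate from inside a browser, asked directly on the machine instead - assuming a Linux-style /etc/resolv.conf (other platforms keep this state elsewhere, and plenty of software ignores it entirely, which is exactly the problem described above):

```python
# Minimal client-side sanity check, assuming a Linux-style /etc/resolv.conf:
# list the resolvers the stub resolver is configured to use and flag anything
# that isn't one of the two published cryptostorm resolver IPs.
EXPECTED = {"79.134.235.131", "79.134.235.132"}

def configured_nameservers(path="/etc/resolv.conf"):
    servers = []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    return servers

if __name__ == "__main__":
    for ns in configured_nameservers():
        status = "ok" if ns in EXPECTED else "unexpected - possible leak path"
        print(f"{ns}: {status}")
```

Note the limits: this only sees what the stub resolver is told to use; applications with their own resolver logic, the gateway's hard-coded resolvers, and so on are invisible to it - same blind spots the browser-based tests have.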
Oh, also there's IPv6 leaking all over the place in these configurations: from applications, from kernel processes, from physical NICs, from hideous monstrosities like Microsoft's force-down-your-throat Teredo nightmare. Oh, and there's other devices on the LAN shouting out their own IPv6 neighbour-discovery queries... some of which may well route out through the gateway and into the wilds of the internet (there's no such thing as "private IPv6 addresses," after all). Fire up a packet capture suite, and watch as this putrid tide of IPv6 spew rolls around your LAN and out into the waiting arms of whoever's listening upstream from your vuln-riddled, kernel-unpatched little gateway hardware...
But when folks visit those mysteriously opaque "DNS leak test" sites, they'd better get the results they expect - if they get variant results, irrespective of whether there's any actual issue, they'll go into a wild panic. Which is totally fair: DNS is complex to do even reasonably well for full-time network admins... cryptostorm members who aren't full-time geeks need clear markers for "safe" or "unsafe" - these leak test sites have become that marker, whether or not they're worth the virtual paper they're not printed on, in terms of accuracy.
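A quick way to see whether a host even has a global IPv6 path that could bypass an IPv4-only tunnel - sketched here with only the Python standard library, on the assumption of a POSIX-ish host. The trick: connect() on a UDP socket transmits nothing, but it does make the kernel pick a route and source address; the destination below is just an arbitrary routable IPv6 address (Google's public resolver), chosen for illustration:

```python
# No packets are sent: UDP connect() only asks the kernel which route and
# source address it *would* use. A global IPv6 source address here means the
# host has a v6 path that may sidestep an IPv4-only tunnel.
import socket

def ipv6_source_address():
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.connect(("2001:4860:4860::8888", 53))   # arbitrary routable v6 address
        return s.getsockname()[0]
    except OSError:
        return None   # no IPv6 route / no IPv6 support

addr = ipv6_source_address()
if addr and not addr.startswith(("fe80", "::1")):
    print(f"global IPv6 source address in use: {addr} - check for v6 leaks")
else:
    print("no global IPv6 route detected")
```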
From all that, we've distilled things down to two production DNS resolvers. For now, they're publicly available - anyone can use them, on-cstorm or off-cstorm (i.e. bareback):
- mmm1.cryptostorm.net
79.134.235.131
- mmm2.cryptostorm.net
79.134.235.132
Those two IPs are settled on one of our best-provisioned exit nodes anywhere in our network: fenrir.cryptostorm.net - which itself sits in the safe confines of our most-beloved datacentre anywhere in the world, Datacell EHF. Thanks again, guys!
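If you want to poke at them yourself, here's a minimal sketch using only the Python standard library: it hand-rolls a bare-bones DNS A query and fires it over UDP at mmm1, then checks that an answer came back with a matching transaction ID. The query name (example.com) is just a placeholder domain; substitute whatever you like:

```python
# Hand-rolled DNS A query over UDP, sent straight to the mmm1 resolver IP.
import socket, struct, random

RESOLVER = "79.134.235.131"   # mmm1.cryptostorm.net

def simple_a_query(name, server=RESOLVER, timeout=3.0):
    txid = random.randrange(0, 0x10000)
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)   # RD=1, 1 question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)                  # QTYPE=A, QCLASS=IN
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(header + question, (server, 53))
        data, _ = s.recvfrom(512)
    # same transaction ID and at least one answer record?
    return data[:2] == header[:2] and struct.unpack(">H", data[6:8])[0] > 0

print("example.com resolves via mmm1:", simple_a_query("example.com"))
```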

Why are they prefixed as "mmm" and not the traditional "dns1" nomenclature? Because... mmmmmmm, these are damned fine DNS resolvers, yes they are!

These two "mmm" resolvers will shift over to on-cstorm-only availability in the near future. That's why we've also created two parallel hostname/resolver entities that will always be publicly available and free for anyone to use. To make that work, we'll migrate these resolvers off to a dedicated machine (or machines) so they don't run the risk of impacting production network performance. The resolvers/IPs are:
- yay1.cryptostorm.net
79.134.235.131
- yay2.cryptostorm.net
79.134.235.132
And in the coming days, we'll be implementing cluster-based resolver instances. This way, every geographic cluster will have its own on-site recursive resolvers to call - a few milliseconds away. This will dramatically speed up lookup response times, which translates into a feeling of "faster" internet access (it really does). It also allows us to add mesh-based resolver redundancy between geographically close clusters, so attacks on specific nodes in a cluster can never take resolver functionality offline on other nodes in the cluster.
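To make the latency point concrete, here's a rough client-side sketch (illustrative only, not part of the deployment) that times a TCP handshake to port 53 on each candidate resolver and picks the lowest-latency one that answers, falling through to the next on failure - assuming the resolvers accept DNS over TCP, which most recursive resolvers do. The candidate list below is just the two published IPs; a per-cluster list would slot in once the cluster resolvers exist:

```python
# Time a TCP handshake to port 53 on each candidate and prefer the nearest
# responding resolver; unreachable ones are skipped (the failover idea).
import socket, time

CANDIDATES = ["79.134.235.131", "79.134.235.132"]

def rtt_ms(ip, port=53, timeout=2.0):
    start = time.monotonic()
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None   # unreachable - caller falls through to the next resolver

reachable = [(ip, ms) for ip in CANDIDATES if (ms := rtt_ms(ip)) is not None]
if reachable:
    best = min(reachable, key=lambda pair: pair[1])
    print(f"nearest responding resolver: {best[0]} ({best[1]:.1f} ms)")
else:
    print("no resolver reachable - fail over to another cluster")
```

A resolver a few milliseconds away versus one an ocean away is the difference this measures, and it's exactly the gap the per-cluster deployment closes.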
The full technical details of what's been deployed, how it goes about various categories of resolution-query completion, and what cryptographic primitives are implemented in each inter-locking layer of this structure, have been split off into a separate post (below) to avoid having this post get even longer than it is.
- - -
There's a good bit of future roadmap yet to be publicly disclosed when it comes to our resolver framework.
For example, we've already implemented transparent .bit TLD lookups via our DNSchain'd resolvers (if you're using cryptostorm's 'mmm' resolvers, you can click on https://cryptostorm.bit and you'll see our main website load seamlessly; if you're not, nothing will come back from the DNS query thrown by the kernel). Soon we'll be rolling out a parallel capability to transparently access .onion Tor hidden-services sites when on-cstorm.
At that point, our torstorm cstorm-Tor gateway service will be opened up to everyone - not only on-cstorm access, as it is currently structured. Those on-cstorm won't need it, as our resolver system will do all that work behind the scenes and .onion sites will just load like any other site in a browser (yes, we'll retain the heavily-tuned cryptographic suite cascades protecting on-cstorm .onion site access, of course). All that will change is the removal of any need to replace '.onion' with 'torstorm.org' in the URL of hidden-services sites.
We're also implementing (via i2pd, the C++ instantiation of i2p's original Java architecture) the same transparent browser-based access for on-cstorm sessions to eepsites - those with the .i2p suffix, hosted inside the i2p "network-within-a-network" security model (these features are called "inproxies" and "outproxies" in the context of eepsites). Likely we'll enable public access to these between-network gateways via some mechanism similar to torstorm, as discussed above.
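You can check the .bit behaviour from any machine without touching a browser - this tiny sketch just asks whatever resolver the system is already configured to use (so it succeeds if you're pointed at the mmm resolvers, and fails otherwise):

```python
# Ask the system's configured resolver for a .bit name: succeeds behind a
# DNSchain-aware resolver, raises gaierror behind a conventional one.
import socket

try:
    addrs = {info[4][0] for info in socket.getaddrinfo("cryptostorm.bit", 443)}
    print("cryptostorm.bit resolves to:", ", ".join(sorted(addrs)))
except socket.gaierror:
    print("cryptostorm.bit did not resolve - this resolver has no .bit support")
```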
Next, we'll open up these between-network gateways to a broader range of packet traffic than only .onions/eepsites. Already, we run a dedicated 100 megabit Tor relay (mishi.cryptostorm.net) as a donated resource to help support the Tor Project; it acts as a testbed and a way for us to become more experienced with the performance-tuning challenges of torrc-based network transit architectures. A test/dev box to act as a dedicated i2p router is also in process.
We're also baking last-mile DNScurve cryptographic hardening into future widget releases, likely via a src-modified fork of the existing DNSCrypt Windows libraries (DNSCrypt is, itself, a specific instantiation of the DNScurve model - as is CurveDNS). With a bit of trickery, we'll be able to support cstorm-resolver DNS queries from widget-based session connections before the cluster/balancer resolvers have even been queried, thus adding an additional strong layer of MiTM protection for use in highly unfriendly local network contexts.
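The shape of that "last-mile" idea, plumbing only: a small forwarder sits on the loopback address, the OS stub points at it, and every query gets relayed to the upstream resolver - in the real widget, that relay step is where the DNSCrypt/DNScurve encryption would wrap the query before it ever leaves the machine. This is a hedged sketch of the plumbing, not our implementation; the encryption step is deliberately stubbed out and the listen port is an arbitrary unprivileged one:

```python
# Plumbing-only sketch of a local last-mile forwarder: listens on loopback,
# relays each query to the upstream resolver, relays the answer back.
# The commented line marks where DNSCrypt-style encryption would go.
import socket

LISTEN = ("127.0.0.1", 5300)          # illustrative local stub address
UPSTREAM = ("79.134.235.131", 53)     # mmm1 resolver

def run_forwarder():
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(LISTEN)
    while True:
        query, client = srv.recvfrom(4096)
        # --- in the real thing, `query` gets boxed to the resolver's key here ---
        up = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        up.settimeout(3.0)
        try:
            up.sendto(query, UPSTREAM)
            answer, _ = up.recvfrom(4096)
            srv.sendto(answer, client)   # relay the (would-be decrypted) answer
        except socket.timeout:
            pass                          # drop on upstream timeout
        finally:
            up.close()

if __name__ == "__main__":
    run_forwarder()
```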
All this control over resolver queries and server-side resolver functionality - from Namecoin lookups to c25519-based PFS crypto wrappers - allows us to step forward into a whole new layer of network capabilities for on-cstorm activities. We'll be able to wrap not only node/cluster-to-client bindings fluidly and (if we desire... which we will desire) stochastically within loose, outcomes-based heuristics, but also the very core of the HAF interconnections (IP/instance to hostname and balancer). We'll then have access to full-power 'fast flux' dynamic network re-architecting on the fly. This is the sine qua non of resilient defence against entire classes of highly effective, difficult-to-avoid attacks on tunnelled/encapsulated secure networking tools, from Tor to cryptostorm and everywhere in between (think Great Firewall, for example). Fast-flux evolved as a tool enabling malware and botnets to evade shutdown efforts - we're just turning that on its head, and using the tool to help folks evade efforts to block access to secure, reliable network resources. (More on fast-flux here and here.)
And beyond that, we can enable server-side protections against vast classes of known attacks on, and censorship of, traditional PKI/CA DNS records - so even though national governments can mess with root-level domain-to-IP mappings, we can recognise what is essentially a nation-state Kaminsky attack and fall back to alternative resolver resources.
Oh, and we can execute fine-grained control over things like the 'randomness' of source port assignments within DNS resolver queries over time - thereby protecting against still more classes of known, in-the-wild DNS resolution cache-poisoning attacks. In addition to solid improvements in security and in test results from well-designed vuln-analysis tools, we get very nice-looking charts out of it, as well.
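For anyone unfamiliar with fast-flux, here's a purely conceptual sketch of the core move (not our HAF code, and the pool addresses are documentation placeholders): the authoritative side publishes a small, randomly rotated subset of a larger IP pool with a very short TTL, so the mapping any one client - or any one censor - sees keeps changing out from under them:

```python
# Conceptual fast-flux rotation: publish a short-TTL, randomly chosen slice of
# the node pool each TTL window. Pool IPs are documentation-range placeholders.
import random

NODE_POOL = ["198.51.100.10", "198.51.100.11", "203.0.113.20", "203.0.113.21"]
TTL_SECONDS = 60   # short TTL forces clients to re-ask frequently

def flux_answer(pool=NODE_POOL, count=2):
    """A records to publish for this TTL window."""
    return [(ip, TTL_SECONDS) for ip in random.sample(pool, count)]

for window in range(3):
    print(f"window {window}: {flux_answer()}")
```

Blocking any single published address buys an attacker at most one TTL window; the next answer hands out a different slice of the pool.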
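The principle behind that source-port hardening, sketched from the stub side using only the standard library (the resolver applies the same idea to its own upstream queries): give every query a cryptographically random transaction ID and an unpredictable ephemeral source port, so a spoofed answer has to guess both before the genuine one arrives:

```python
# Per-query unpredictability against off-path spoofing: random 16-bit
# transaction ID from the OS CSPRNG, plus a kernel-chosen ephemeral source port.
import os, socket, struct

def hardened_query_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", 0))   # port 0: kernel assigns an ephemeral source port
    return s

def random_txid():
    return struct.unpack(">H", os.urandom(2))[0]

s = hardened_query_socket()
print("source port for this query:", s.getsockname()[1])
print("transaction id:", hex(random_txid()))
s.close()
```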
Needless to say, investing long-term research, analysis, testing, and development resources in DNS resolution functionality for cryptostorm is a decision we feel very good about as a team. It may not be as sexy as some kinds of content-free marketing hype on the surface... but it has the benefit of providing genuine, sustainable improvements both in members' security whilst on-cstorm, and in network resilience in the face of DoS/DDoS or resource-censorship attacks by highly-resourced attackers.
DNS-centric systems are hardly the only area of focus for the team, as we continue to expand and improve the core network architecture and security model through 2015 and beyond.
We'll use this thread as an ongoing resource for sharing information on this part of our network framework.
- ~ cryptostorm_team