by cryptostorm_team
Mon Sep 07, 2015 4:39 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostormusersguidev2 <-- feedback & guidance requested
Replies: 11
Views: 156927

cryptostormusersguidev2 <-- feedback & guidance requested

{direct link:}

The attached cstorm "user's guide" (we abhor the term "users" internally, as it's got that seedy edge... but everyone else uses it so we're not going to be too churlish about it in current context) was recently submitted to us anonymously by a community member. Actually we think it was submitted by a token reseller - and we're not sure if the "anonymous" submission was intentional or if we're just too slow to follow who should receive proper credit for this excellent work. In any case, we'll update this with such credit if/when we're provided with instructions by the author to do so. :-)

(1.01 MiB) Downloaded 7387 times
This is a pretty impressive document, based on our first-run read-thru. Likely there's updates and so on that will be needed... but for a starting point it seems enormously useful.

Rather than us keeping it internally until we figure out boring things like who actually wrote it, we've decided to make it available here. By all means, please share your critique and/or corrections, as needed, if you've a minute or two to read through it and catch any areas that could use revision.

Meanwhile, feel free to share with others and we hope this becomes a useful - and widely available - resource for members far & wide.

Tally ho,

~ cryptostorm

ps: also this, no relation to the userguide but hey laughter is the best whatever ;-)
by cryptostorm_team
Fri Sep 04, 2015 12:38 pm
Forum: general chat, suggestions, industry news
Topic: cryptostorm's #HackedTeam mirror
Replies: 1
Views: 17940

cryptostorm's #HackedTeam mirror

{direct link:}

Back in the hot days of midsummer, someone did a little number on the assholes at Italian malware-munitions purveyors "Hacking Team." That someone, or someones, exfiltrated about 400 gigabytes of data from their internal systems: email spools, source code, training materials, more or less the whole crown jewels by the look of things.

One morning, early European time, a .torrent file indexed to this trove appeared on a transiently-available .onion URL, open for peering. However, because there were hundreds of thousands of individual files in the torrent, actually pulling the archive via this process was tricky and not really viable for most folks interested in reviewing these data. So it was pretty clear there would be value in creating an HTML mirror of the whole thing - all 400 gigabytes - so that a simple web browser could access and study the files.

We decided to make a mirror, and registered the domain to point at it. Because we'd wanted an excuse to use that new .technology TLD, basically. And also because it's easy to remember and so on.

Right away we saw traffic of several hundred megabits per second flow out of the newly-created mirror. For some big CDN that might be peanuts, but we live in reality and in reality that's a decent amount of packets to be serving up... and since we've got the capacity here and there in our network to carry that sort of volume, we were happy to see the numbers jump up like that. They have stayed more or less steady since then.

Also right away, we started to see blowback from the hacks at Hacking Team and their lawyer goons. We lost a few servers, early on. So we began to chain in "jump node" proxy inbound VPS instances to shield the underlying servers from easy de-obfuscation by lawyerbot goons and so on. That lessened the pressure on our underlying hardware, and instead we cycled through a whole armada of leased VPS instances... melting them down as fast as we added them.

So we did some work to automate DNS-based failover for the inbound jump node VPSes - that way we could just pull the dead ones from the field of battle, add new ones, and the redundancy of the failover system itself would keep the underlying webservers serving pages to the people requesting them.
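For the curious, the selection half of that failover logic fits in a few lines. This is a hypothetical sketch (the names and probe port are ours for illustration, not our production tooling): probe each jump node, keep the survivors, and hand that list to whatever provider-specific DNS-API client manages the zone's A records.

```python
import socket

def tcp_alive(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def select_live_nodes(nodes, probe=tcp_alive):
    """Filter the jump-node pool down to the nodes that still answer.

    The probe is injectable so the selection logic is testable without
    real network traffic; the survivors would then be pushed into the
    zone's A records by a DNS API client (provider-specific, not shown).
    """
    return [n for n in nodes if probe(n)]
```

The injectable probe is the useful design choice here: dead VPSes drop out of the rotation on the next pass without anyone touching the underlying webservers.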

The whole process has cost us perhaps a thousand dollars of actual resources (or less), and a decent amount of tech staff time to keep it running, harden it against attacks (yes we saw pathetic efforts to "hack" our mirror from various hopeless camp followers of Hacking Team; none amounted to a bean in a hill of beans, frankly), and create systems to ensure the mirror doesn't bog down, get shut down, or otherwise become less useful over time. Since we can now apply a lot of that work elsewhere in our own network - for example, the soon-to-be-alpha jump nodes we now offer are a direct outgrowth of those VPS proxies we use in the HT mirror to buffer the underlying servers - it's been a net benefit to everyone.

Since then there's been a few other mirrors come and go. Some are still around, although perhaps not a full copy of the full 400gb of original files (we've not added, removed, or edited anything in the archive itself - nothing), some vanished under lawyer-goon pressure, some got shut down by this or that CDN under suspicious circumstances. Wikileaks now has a nicely-done, searchable index of all the email spools in the archive - really useful, but it's also nice to have the whole thing. Oh, and there's a repository mirror of all the src repos in the dataset itself. That's handy, too.

Meanwhile we'll keep our macro-mirror up and running so researchers have the whole thing at their virtual fingertips, for as long as they need it. Hopefully forever. Because those assholes deserve nothing less... oh and also there's a lot of seriously interesting and informative stuff in there about how ethically bankrupt shitheads like them make money from illegal surveillance of dissidents, activists, and citizens worldwide. And help get people tortured to death. And other really evil things.

We've been meaning to do a "real" forum post with lots of details on the clever stuff we've figured out how to do along the way, but time waits for no geek and it never seems to happen. So we finally wrote up this very short "intro" post, to get things going. And the idea is we'll add to it as time allows. At least it's something, and that's more than nothing.

Meanwhile, the mirror is pretty active. By our loose estimates (we don't log any traffic to the mirror, obviously, so we're going on not a lot of micro-detail in this estimate) we've served up about a petabyte of #HackedTeam mirror files since the mirror went up in July. A petabyte: one thousand terabytes. Not bits - bytes. That's nothing to shake a stinky stick at, or anyway we think so. Yay for transparency and yay for the community of counter-surveillance geeks that helps keep these shitbags under the microscope and less able to destroy lives with their destructive digital weaponry.

More to come. Meanwhile here's to the next petabyte of mirror traffic.

Tally ho,

~ cryptostorm team
by cryptostorm_team
Mon Jul 20, 2015 12:05 pm
Forum: general chat, suggestions, industry news
Topic: POLL: what payment options should cstorm add next?
Replies: 3
Views: 11357

POLL: what payment options should cstorm add next?

{direct link:}

After quite a bit of behind the scenes upgrades to our automated token delivery functions and codebase, we're now confident in our ability to smoothly add in additional payment options beyond those we currently support.

Rather than simply guessing which additional options are most useful to the membership (albeit based on our hearing requests from members over a period of many years), we decided to do a poll that might help to better focus our efforts where they are most useful in terms of member preferences.

Note that we've picked a handful of options familiar to us from multiple member requests, but it's entirely likely we've missed just as many in this list. For that reason, if you have a request not on this poll list, please do reply with the processor's contact info and name, and we'll add that into the poll, as well.

Also, feel free to explain why a particular processor might be one we should avoid - while we cannot guarantee that will happen, we're just as interested in that feedback as we are in choosing which ones to add.

Without further ado...

~ cryptostorm_team
by cryptostorm_team
Mon Jun 29, 2015 10:06 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Redundancy in website, email, & IRC infrastructure (etc.)
Replies: 8
Views: 36330

Redundancy in website, email, & IRC infrastructure

During the past several weeks, we've accelerated a longer-running project to add redundancy and resilience to our websites and other non-network resources (we call these "non-network" because these items are not part of delivering cryptostorm's secure network itself, which is entirely separate from any websites or other single-point-of-failure components).

In the first two years of our existence, we didn't judge the need for such capacity to be mission critical; a small bit of downtime here and there - with the main website, for example - might be a minor inconvenience for all of us but would not be critical path. Of course, we retain rolling backups of files (and most of our website source is already hosted at github and thus is on independent infrastructure), so in the event of a sustained outage we could - and several times, did - switch over to secondary server capacity with the backup images.

Most companies handle this issue by outsourcing their hosting to a "content delivery network" like Cloudflare. For a basket of reasons too long to list here, this is not an approach with which we are comfortable, though it is "easier" and for less technically centred project teams it will in many cases be too tempting to pass up.

So, as cryptostorm has grown and evolved since 2013, we've known that the need for redundant website (and email, and IRC... we'll just say "website" and assume all that is included, as well) capacity would eventually be something we'd need to address. As we discuss in a bit more detail in a parallel blog post, recent attacks on Iceland's internet infrastructure have caused access to our websites (which have always been hosted there, with our colleagues at Datacell) to become, in a word, sporadic (through no fault of Datacell's, to be clear).

Given that, we pushed forward to complete our internal effort to provide redundant, distributed, failsafe website access - we'd been making steady progress but with no deadline in sight, it naturally slipped behind critical tasks and was in some senses sleepwalking. Issues in Iceland got things into fast gear, and we set a tight timeline to get things in place.

Two days ago, on Saturday, we did our first production cut-over test of the new model we've put in place. Most of it went smoothly, and our security procedures held together comfortably. However, there were the (if we're being candid) expected hiccups here and there: the database powering this forum was intermittently refusing to stay up on Sunday evening, for example. Those issues are all now resolved and we're fine-tuning the details.

In this thread, we'll post a bit more technical detail on how we've approached this infrastructure redundancy project - some of it's a bit routine and boring, but other components are perhaps novel and even somewhat elegant in final form. It's worth noting that the overall project is not complete; what we've done is the first cut-over test. Now, we're layering in the automated redundancy itself (in technical terms, the first step was actually more of a challenge than the redundancy itself).

Finally, it appears that our automated 'tokenbot' delivery of newly-purchased tokens was inactive from early Sunday through Monday morning. We'd concluded this was merely the result of cached DNS data in email delivery systems, but that conclusion was not accurate; in fact the tokenbot was simply not delivering tokens. Since then, we've manually confirmed that all tokens not delivered promptly during that period have now been delivered. Further, we've provided complimentary 66-day tokens to all those members affected by the delay. This was a genuine screw-up on our part - timely token delivery is a big deal to us, and to many members - and we offer our apologies for not being aware of the issue, and resolving it, sooner.

If there's additional questions or reports of transitional bugs, please do feel free to post them here - we'll do our best to stay current with replies. Through today, we've invested substantially all available team effort in completing the first step of this project, and thus haven't posted much data here on what's been in process. Now that that's complete, we're able to do a better job of keeping the membership informed as to ongoing developments.

Best regards,

~ cryptostorm_team
by cryptostorm_team
Sat Jun 20, 2015 1:42 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: 宮本 Tokyo (Japan) exitnode cluster | anchor node = miyamoto 宮本
Replies: 37
Views: 77032

宮本 Tokyo (Japan) exitnode cluster | anchor node = miyamoto 宮本

UPDATE: miyamoto is dead. Turns out Japan DCs are pretty bad when it comes to DMCA (which is an American law btw)

{direct link:}
config available here on-forum, and via github

After more than the usual amount of concerted effort, we are pleased to put in full production status the first ("anchor") exitnode for our Tokyo Japan exitnode cluster. It has been our intention to provide cryptostorm exit capacity here for quite some time - nearly two years, in fact. As we have a number of good friends & colleagues who reside there, we have been looking for opportunities to provide useful capability in this geographic space.

However, there have been some challenges and it was only when we were comfortable with our solutions that we made the decision to act.

Historically, colocation-based server capacity was difficult to obtain in Japan itself; some of this was the result of governmental regulation, some simply reflected challenges in working between cultures. For example, we were unsuccessful in explaining the "month-to-month server lease" concept, in past years, to potential datacentre providers who were comfortable with large corporate customers entering into years-long contractual relationships after extensive negotiations.

As time went by, we found those limitations becoming less of a problem but still there remained a lack of datacentres providing genuine "bare metal" servers for clients from outside Japan. For us this is a non-negotiable issue, as the security consequences of VM (virtual machine) exitnodes are simply unacceptable in a security-intensive context such as ours.

After this long search, we did identify a datacentre that was promising and seemed comfortable working with our style of details-oriented, technically aggressive network resource administration (translation: we're a picky, obsessive, highly engaged customer for datacentres and sometimes they prefer less technically inclined customers as they are easier to... market to, shall we say?).

However, as we began our work in stripping the machine down to post-BIOS state and installing our "stormnode" kernel and related components (a modified RHEL distro w/ full grsec mod implementation & extensive removal of unnecessary package structures, post-compile), we noted inconsistencies in the baseline kernel builds we were seeing on the machine. As the initial 'footprint' for the install came via network-delivered installer packages, we had concerns their integrity had been broken along the way. This we discussed with onsite datacentre technical staff, via our intermediaries in the project, and in the end we don't feel the datacentre was involved in anything untoward - but we also do not have an explanation that we can back with sufficient data to be considered definitive.

That kind of investigative research, while often interesting and useful for overall security community publication, is not our core focus and in this case our drive was to produce an as-installed kernel and production context that we are confident has binary-level integrity and has not been subject to mutation by hostile processes during installation or afterwards. After going through more kernel reinstall cycles than we care to remember, we finally were able to produce a machine that passed all integrity checks with flying colours: miyamoto.
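For anyone wanting to replicate the hash side of those integrity checks at home, here's a minimal sketch - an illustration only, not the actual stormnode build tooling: compare the SHA-256 of each fetched package against a digest obtained out-of-band.

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of a package blob."""
    return hashlib.sha256(data).hexdigest()

def verify_package(data: bytes, expected_hex: str) -> bool:
    """Accept the package only if its digest matches the published one.

    hmac.compare_digest avoids timing side-channels - overkill for
    checking local files, but a harmless habit.
    """
    return hmac.compare_digest(sha256_hex(data), expected_hex.lower())
```

The crucial part isn't the code, of course - it's that the expected digest arrives over a channel independent of the one that delivered the package.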

As is cryptostorm tradition, we asked folks connected with our main twitter account for suggestions on naming the anchor node in our Japanese cluster. There were quite a few excellent ones, and we'll likely be using those as the cluster expands with additional nodes (we do refer to one-node clusters as "clusters," since we'd have to shift naming conventions otherwise, when redundant capacity comes online as is standard practice for our cluster management). However, it was not possible for us to choose anything but miyamoto, referencing Musashi Miyamoto but inevitably also bringing to mind the legendary Shigeru Miyamoto of Nintendo.

Musashi Miyamoto | 宮本 武蔵: author of "五輪の書" ("The Book of Five Rings"), calligrapher, Buddhist, scholar, rōnin. In his own words:
I have trained in the way of strategy since my youth, and at the age of thirteen I fought a duel for the first time. My opponent was called Arima Kihei, a sword adept of the Shinto ryū, and I defeated him. At the age of sixteen I defeated a powerful adept by the name of Akiyama, who came from Tajima Province. At the age of twenty-one I went up to Kyōtō and fought duels with several adepts of the sword from famous schools, but I never lost.
Musashi's development and mastery of double-sword technique - known both as niten'ichi ( 二天一 | "two heavens as one") and nitōichi (二刀一 | "two swords as one") - is often said to be a supreme expression of the art of swordsmanship, and masters of this technique in the intervening centuries are minuscule in number. Rather than the limited elegance of two-handed long sword use, he saw the potential for a fluid, elegant, profoundly effective two-handed/two-swords practice... even though this did not exist yet. Undaunted by its nonexistence - and perhaps even a little bit drawn to it - he crafted it himself and shared it with students and readers of his words.
150px-Kobokumeigekizu.jpg (8.96 KiB) Viewed 77031 times

At the same time, there is a dual-edged nature to Musashi's spirit: a warrior who fought dozens of battles to the death, and yet also a scholar and Buddhist. Although it is easy to simply assume these were "two sides" of him, we feel the deeper perspective recognizes that a thing has no "sides" but rather encompasses multitudes and expresses these elements depending on circumstances. His contributions as an artist, later in life, show him to be fully-fleshed as a sentient being and not merely a killing machine.

Much has been written, and much is worth reading, when it comes to Musashi's wisdom. Here are some starting resources, for those interested:
IWAMI dragon interview english.pdf
(52.75 KiB) Downloaded 1177 times
(2.59 MiB) Downloaded 1541 times
(2.09 MiB) Downloaded 1606 times
interview niten 2006.pdf
(68.72 KiB) Downloaded 1193 times

宮本 | Miyamoto, Musashi's surname and the name of his birth village, can be translated into English as "base of the shrine," and we hope this proves to be an auspicious choice as anchor for our Japanese resources. In combination with the soon to be released native Japanese translation of our Windows connection 'widget,' it is a strong step forward in our work to assist modern-day network rōnin as they embrace the complexity of whatever pathways life presents for them in their travels.


~ cryptostorm_team
by cryptostorm_team
Sat May 23, 2015 4:29 pm
Forum: member support & tech assistance
Topic: .ru server problems
Replies: 7
Views: 11563

Laika is doing fine, she may in fact have taught us something wonderful

I'm not at liberty to share details yet, but given the frustrating delays you've had it seems fair to at least say something.

The problem you're seeing isn't... conventional. Nor is it unique to our Leningrad offramp. I know the others are working some long hours to get a published explanation ready.

But the good stuff is the solution we might have in hand for this. If that holds up under fast track testing, it's a game changer.

And I'm really sorry but we can't say more than that right now. When the embargo lifts any day now, it's easy to see why we had to wait - all good, nothing but.

Meanwhile clean pcap session captures of these... incidents are really helpful. And I'm sorry to sound like a politician ducking questions here. It's not that at all. But less is more wrt details beyond that. For a few more days at least.

Thanks for being patient, from all us here - it's likely to be more than worth it when the dust settles.

~ cryptostorm_team
by cryptostorm_team
Sun May 17, 2015 9:54 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: Live-Capture Forensics of Corruptor-Injector Network injecting fake Chrome install via https@google
Replies: 15
Views: 111404

Live-Capture Forensics of Corruptor-Injector Network injecting fake Chrome install via https@google

{direct link:}

There are times when we, as a team, find it challenging to articulate the scope and danger of Corruptor-Injector Networks, or CINs. The traces of their activity are transient, and even realtime captures result in a complex snarl of routing snapshots, intertwined DNS records, nearly-impenetrable certificate analysis projects, and pcap files that spit out lots of information but little obvious insight. Then add in the fact that a CIN-level footprint on network activity is by definition going to look different depending on the perspective of the observer: someone in the "direct line" of the CIN injection will have one view, whereas someone on other routing pathways won't see anything untoward. Also add in that routing is by its nature fluid over time, and pinning these things down in a way that's compact enough to digest is, currently, a big challenge.

In due time, of course, analytic tools and research protocols will develop and we'll look back on today's efforts as being set in the relative dark ages. Such is the nature of work exploring new frontiers in any field, and perhaps more so on a hyper-accelerated one such as emergent network security threats.

This weekend, we were fortunate enough to capture a CIN footprint at work, and we're documenting it here in this thread. Recognizing that some folks find our github repositories cluttered (true), impenetrable (also true, though we're working on it), and intimidating (really not true, but seen as such), we've chosen to post all underlying data here in this thread - it's a bit clunky, but it'll serve as a test-case. If there's demand, we can cross-post these materials to our CIN repository, as well.

This isn't an analytic effort to pin down the specific 'session prions' being fed to the client-browser by this CIN attack; those analyses are separate but related to work such as this which documents the network-level evidence of a CIN at work. Together, those two analytic classes form a comprehensive snapshot. This half of the analysis by itself demonstrates that there's CIN-activity afoot in a given network session at a given point in time.

And this is not some minor CIN-inject. Rather, it represents a CIN system subverting https to a core website - used for downloading Chrome browser installer files - via fraudulent certificates and exotic DNS-based hijacking of session transit. We at cryptostorm do not claim to have unwrapped the full details of how this route-hijacking is being accomplished here; rather, we are drawing attention to the non-legitimate nature of the network session itself. That's the first step, and we undertake that step here.

These data were captured by a cryptostorm team member, whose network session had the following parameters during this capture:
  1. routed securely through a cryptostorm exitnode in fully-functional, secure, and verified state during this capture;

  2. using a newly-installed, hash-verified (both the package installer and repository validation of the package's legitimacy during the initial pull) 'Icefox' Debian-build mozilla browser image (version 31.7.0);

  3. running via a manually-configured network framework, as part of a newly-installed Debian kernel (8.0/jessie) with ip6 disabled at the NIC, the kernel (sysctl.conf), the local router, and via grub parameter inclusion pre-boot;

  4. resolving domain names via cryptostorm's deepDNS resolver mesh, via the node through which the connection was made;

  5. transiting a local router with a newly-flashed OS image, manually configured to disable all known avenues of remote exploit.
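As a sanity-check for the ip6-disabled condition in item 3, a small helper like the following (a hypothetical sketch of ours, not part of any cryptostorm client) can parse `sysctl -a`-style output and confirm every disable_ipv6 knob reads 1:

```python
def ipv6_disabled(sysctl_text: str) -> bool:
    """Given `sysctl -a`-style "key = value" output, confirm that every
    net.ipv6.conf.*.disable_ipv6 knob present reads 1.

    Returns False if any knob is 0, or if no such knob appears at all
    (absence of evidence is not evidence of a disabled stack).
    """
    seen = 0
    for line in sysctl_text.splitlines():
        key, sep, value = (part.strip() for part in line.partition("="))
        if not sep:
            continue
        if key.startswith("net.ipv6.conf.") and key.endswith(".disable_ipv6"):
            seen += 1
            if value != "1":
                return False
    return seen > 0
```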

This is all to say that it is highly unlikely - bordering on impossible - that the mechanism for this attack is based solely or even marginally on malicious code or configuration settings found on the local computer that initiated this session. Further, every possible network analytic tool was used to confirm the integrity and stability of the cryptostorm network session through which this session travelled (we will discuss the relevance of that at the end of this report).

The session is initiated by pointing a newly-opened icedove window manually at the following URL:

Code: Select all

This URL then redirects to the local google subsidiary for the exitnode cluster in question (Paris, France):

Code: Select all

Immediately we notice that icedove is not happy with the certificate credentials... although the page still loads without any errors or overt warnings:

A closer look tells us that icedove is receiving both 'secure' and insecure elements in this particular page:

Needless to say, a legitimate page-load of a https-prefixed google page will not include calls to external, insecure resources on load. So already we are seeing problems with this 'secure' network session... problems not caused by google being inexperienced in providing solid https transport from their server facilities.
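Screening a saved page for that symptom can be done mechanically. Here's a toy scanner - it assumes the page HTML is already in hand, and real browsers apply far richer mixed-content rules than one regex:

```python
import re

def insecure_resources(html: str) -> list[str]:
    """Return the plain-http URLs referenced from src= / href= attributes.

    On a clean https page-load from a major provider this list should be
    empty; anything it returns is a mixed-content 'insecure element' of
    the kind flagged in this capture.
    """
    return re.findall(r"""(?:src|href)=["'](http://[^"']+)["']""", html, re.I)
```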

We capture the certificate for this page-load, which is included here in PEM (pre-decoding) format. Later, we will look more closely at what the certificate tells us. For now, it is a stepping stone in our analysis. This certificate identifies itself (via CN field) as * despite being served during a putative session with (again, this kind of obvious certificate misconfiguration is all but impossible to imagine google doing in production systems):

Code: Select all
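The CN-versus-hostname mismatch just described is mechanical to test. Here's a simplified sketch of RFC 6125-style wildcard matching - it ignores SANs, IDNA, and public-suffix rules that real validators must handle, and the example names below are illustrative stand-ins:

```python
def cn_matches(cn: str, hostname: str) -> bool:
    """Simplified certificate name matching: a wildcard may replace
    exactly one left-most label.  Real validators also walk the SAN
    list and apply many more rules; this is only the core idea.
    """
    cn, hostname = cn.lower().rstrip("."), hostname.lower().rstrip(".")
    if cn.startswith("*."):
        suffix = cn[1:]  # e.g. ".googleusercontent.com"
        leftmost = hostname[: -len(suffix)] if hostname.endswith(suffix) else None
        return bool(leftmost) and "." not in leftmost
    return cn == hostname
```

A CN that fails this check for the hostname the browser actually asked for is exactly the kind of mismatch a legitimate production deployment should never serve.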


From there, a search query is entered for "chrome" with the following result:

Code: Select all

...from this page, the main link is chosen, which directs us to the following https-served URL. It is important to note that we are now at the core of google's ecosystem - - preparing to download and install their flagship software product - Chrome - directly from them.

Code: Select all

This page appears as follows:

Note that icedove loads the page as https with no errors or warnings whatsoever. However, it's notable that the connection does not appear to represent an EV-class certificate. In other words, there's no 'green lock' as we see in any of google's other services. For example:

If we look behind the scenes of this 'https' page we just loaded, we see there's a cavalcade of render-errors being thrown as the page loads from various resources...

When we ask for the certificate from that https-chrome URL (, we get the following PEM blob:

Code: Select all

Here is icedove's summary of the page's identifying information. Note, again, no warning or suggestion that this is not a legitimate https session:

The summary of NSS's onboard unpacking of the PEM'd server-side certificate is as follows:

At this point we step back to see what others are seeing, in terms of certificates being provided by this URL ( We turn to @IvanRistic's standard-setting testing toolbox and find some most astonishing results...

Ivan's own website itself is presenting without robust ssl credentials - which we are absolutely sure is not any error on his part, but rather reflects a subversion of the https session between us and the test-results page we've generated (we have that cert captured and will post in the fishycerts repository for analysis).

Equally implausible is the result his page presents (or appears to present, from the perspective of this session-load): gets a grade of "B" for https support. Let's look more closely at the results underneath that grade. There's two IP addresses shown as resolved from the two versions of ""
  • resolves to resolves to

The rDNS records for each are as follows:
  • reverses to reverses to
    (note the change from "4" to "14" between the two)
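That forward/reverse cross-check generalizes to forward-confirmed reverse DNS (FCrDNS). Here's a sketch with injectable resolvers so the logic is testable offline - a live version would wrap socket.gethostbyname_ex and socket.gethostbyaddr, and the names below are stand-ins rather than the real records:

```python
def fcrdns_consistent(host, forward, reverse):
    """Forward-confirmed reverse DNS check.

    `forward` maps a name to a list of IPs; `reverse` maps an IP to its
    PTR name (or None).  Every IP the host resolves to must carry a PTR
    name that forward-resolves back to that same IP - mismatches are
    exactly the kind of "4" vs "14" oddity flagged above.
    """
    ips = forward(host)
    if not ips:
        return False
    for ip in ips:
        ptr = reverse(ip)
        if ptr is None or ip not in forward(ptr):
            return False
    return True
```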

Let's look at the first of those two IP addresses, namely We turn to Robtex for an authoritative snapshot & historical summary, which confirms serious questions are now impossible to ignore (Robtex's https page does load with green-lock EV cert status, at this point). First, we see 86 domain names currently resolving to this same IPv4 address. The full list is:

Code: Select all


Are all of those domains/subdomains/sub-subdomains "legitimate" google properties? We leave it to readers to individually check each one... but we'll be surprised if they all pass muster, to put it mildly. Some certainly are, but a 100% legitimate result isn't to be found - and even having "only" 10% of those domains directed at this IP without being google-controlled is something of an inexplicable (albeit not impossible) result: who chooses to point their domain at an IP address they do not control and from which they cannot serve anything whatsoever? We are not aware of legitimate explanations for this behaviour, at scale.

In the event, we are not filled with confidence that this IP address is 'legitimately' and exclusively Google's to use. In saying so, we understand that we're over-generalising, and that there's potentially viable (if very tenuous) legitimate explanations possible for each individual data point in this analytic chain. Possibly, if we're very creative in how we brainstorm. For example, Google can't prevent outside domain owners from directing their A Records or other DNS entries at an IP controlled by Google (as we mentioned previously), so having noise in those mappings is not demonstrable proof of anything overtly untoward going on. However, we ask that consideration be given to how this chain of data compares to an IP address solidly within Google's control, and that the chasm between these two categories be kept in mind.

Finally, on the IP addresses, we ask that curious readers take a look at the Robtex records graph for We'd love to screenshot it, in full, but have been stumped on how to present such an object in a way that is not utterly useless as a visual aid. Here's a tiny slice of the graph, for example:

If the full glory is called for, here's a .zip of the entire page, as-rendered:
(246.16 KiB) Downloaded 2295 times

Next, let us turn to the rDNS value for that IP address: Again, the Robtex report provides ample data to shake any confidence we might otherwise have that this IP from which an https session claiming to be served by is, in fact, mated to a server Google actually delivers session data from at this particular point in time. This subdomain (said domain being familiar to anyone who has done much pcap analysis of browser sessions) appears to be comfortably within Google's purview, at least by topline metrics.

However, even here there's some surprising near-term anomalies visible with only a cursory review of public data.

For this, we check "oldest DNS records matching" for incongruous results. We do not expect one of Google's main IP addresses to be, for example, recently controlled by some outside company - particularly not by anything shady. Google has controlled, and administered with professional obsession, large swaths of IP-space; they do not come into tidbits of IP addressing resources that were, for example, recently turned over to them by the datacenter in which they lease a server or two. Despite that, we see these records:

Code: Select all

The oldest DNS info involved in this analysis were:

dns	last checked
	Thu Apr 2 03:01:33 2015
	Thu Apr 2 16:03:49 2015
	Thu Apr 2 17:31:27 2015
	Sat Apr 4 13:30:36 2015
	Sat Apr 4 15:27:23 2015
	Sat Apr 4 15:40:16 2015
	Sat Apr 4 16:27:36 2015
	Sat Apr 4 20:08:37 2015
	Sun Apr 5 08:06:48 2015
	Sun Apr 5 23:35:57 2015
	Mon Apr 6 08:36:31 2015
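The narrowness of that observation window is easier to see programmatically; a sketch that parses timestamps in the exact format shown above (using a representative subset of the records):

```python
from datetime import datetime

# Observation timestamps copied from the passive-DNS output above.
stamps = [
    "Thu Apr 2 03:01:33 2015",
    "Sat Apr 4 16:27:36 2015",
    "Mon Apr 6 08:36:31 2015",
]
fmt = "%a %b %d %H:%M:%S %Y"
times = sorted(datetime.strptime(s, fmt) for s in stamps)
window = times[-1] - times[0]
print(window.days)  # the entire observation window spans only a few days
```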

The oldest of these records is 2 April this year; the most recent dates to 6 April. Let's take a look at its whois data:
Registry Domain ID: 1868185559_DOMAIN_COM-VRSN
Registrar WHOIS Server:
Registrar URL:
Updated Date: 2014-07-24T08:15:55-06:00Z
Creation Date: 2014-07-24T08:15:55-06:00Z
Registrar Registration Expiration Date: 2015-07-24 T08:15:55-06:00Z
Registrar:, Inc.
Registrar IANA ID: 625
Registrar Abuse Contact Email:
Registrar Abuse Contact Phone: +1.17203101849
Domain Status: clientTransferProhibited
Registry Registrant ID:
Registrant Name: Whois Agent
Registrant Organization: Whois Privacy Protection Service, Inc.
Registrant Street: PO Box 639
Registrant City: Kirkland
Registrant State/Province: WA
Registrant Postal Code: 98083
Registrant Country: US
Registrant Phone: +1.4252740657
Registrant Fax: +1.4259744730
Registrant Email:
Registry Admin ID:
Admin Name: Whois Agent
Admin Organization: Whois Privacy Protection Service, Inc.
Admin Street: PO Box 639
Admin City: Kirkland
Admin State/Province: WA
Admin Postal Code: 98083
Admin Country: US
Admin Phone: +1.4252740657
Admin Fax: +1.4259744730
Admin Email:
Registry Tech ID:
Tech Name: Whois Agent
Tech Organization: Whois Privacy Protection Service, Inc.
Tech Street: PO Box 639
Tech City: Kirkland
Tech State/Province: WA
Tech Postal Code: 98083
Tech Country: US
Tech Phone: +1.4252740657
Tech Fax: +1.4259744730
Tech Email:
Name Server:
Name Server:
Name Server:
Name Server:
DNSSEC: Unsigned Delegation

...that doesn't look like a Google domain name - but perhaps it's just a really... unusual one. A check with Google's PR folks will confirm, but again we're leaning towards putting money on the answer being no. Then how about Well, here's the Robtex page... we feel it speaks for itself. (The downstream overlap with Black Lotus Communications IP-space is a fascinating lead... that name having come up both in forensic investigations involving leading-edge malware/rootkit exploitware, and in close association with a particular "VPN service" in Texas whose installers have exhibited rather unexpected behaviors during intensive forensic analysis; obvious research opportunities present themselves here.)
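The whois record above reduces to a quick heuristic: a one-year, privacy-shielded registration is a common profile for throwaway infrastructure, while long-lived corporate domains rarely match it. A sketch using the dates from the record quoted above (the threshold is our own rule of thumb, not an established standard):

```python
from datetime import datetime

# Key fields from the whois record quoted above.
created = datetime(2014, 7, 24)
expires = datetime(2015, 7, 24)
privacy_protected = True  # "Whois Privacy Protection Service, Inc."

lifetime_days = (expires - created).days
# One-year, privacy-shielded registrations are a common profile for
# throwaway infrastructure; long-lived corporate domains rarely match it.
suspicious = lifetime_days <= 366 and privacy_protected
print(lifetime_days, suspicious)
```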

Incidentally, we see similar curious anomalies in DNS data for other Google domains, an example being This domain, squarely Google's by registration and use, has a strange clot of external DNS records showing up in roughly the same window as the outside DNS traces we noted in both the subdomain and the IP address. Such a correlation, of course, proves no causative link (either directly, or via hidden-variable intercession), and could merely be coincidence. But add up enough unusual coincidences in one small area of the statistical possibility landscape, and the sigmas begin to pile up against any purely coincidental explanatory framework.

- - -

To conclude this short review of hostname/DNS/IP records, let us take a quick look at the server-provided ssl certificate for the domain Above, we provided it in PEM form. Here is the ASN.1-mediated unpacked data, in raw form:

Code: Select all

        Version: 3 (0x2)
        Serial Number: 6898384865036533650 (0x5fbbfc7c4c6eff92)
    Signature Algorithm: sha1WithRSAEncryption
            commonName                = Google Internet Authority G2
            organizationName          = Google Inc
            countryName               = US
            Not Before: May  6 10:29:25 2015 GMT
            Not After : Aug  4 00:00:00 2015 GMT
            commonName                =
            organizationName          = Google Inc
            localityName              = Mountain View
            stateOrProvinceName       = California
            countryName               = US
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Alternative Name: 
            Authority Information Access: 
                CA Issuers - URI:
                OCSP - URI:

            X509v3 Subject Key Identifier: 
            X509v3 Basic Constraints: critical
            X509v3 Authority Key Identifier: 

            X509v3 Certificate Policies: 

            X509v3 CRL Distribution Points: 

                Full Name:

    Signature Algorithm: sha1WithRSAEncryption

Here are the same unpacked data, in a less tedious formatting that makes for easier human reading:
Valid To 04 Aug 2015 ( 78 days )
Weak-Key Does not use a key on our blacklist - this is good
Key-Size 2048 bits
Signature Algorithm (sha1WithRSAEncryption) SHA-1 is being phased out
Certificate Summary
RDN Value
Common Name (CN)
Organization (O) Google Inc
Locality (L) Mountain View
State (ST) California
Country (C) US
Property Value
Issuer CN = Google Internet Authority G2,O = Google Inc,C = US
Subject CN =,O = Google Inc,L = Mountain View,ST = California,C = US
Valid From 6 May 2015, 10:29 a.m.
Valid To 4 Aug 2015, midnight
Serial Number 5F:BB:FC:7C:4C:6E:FF:92 (6898384865036533650)
CA Cert No
Key Size 2048 bits
Fingerprint (SHA-1) 4B:9D:33:E6:4E:F6:10:4E:20:43:BF:1E:09:28:92:4F:6D:41:33:7A
Fingerprint (MD5) 3E:35:9B:E7:DB:85:D1:5B:98:06:B5:2E:E2:36:0E:68

As this is a server-end ssl certificate - SHA1 fingerprint 4B9D33E64EF6104E2043BF1E0928924F6D41337A - there is no authoritative database against which we can check it to simply verify whether it is "legitimate" or not. One of the many frustrations and failures of CA-based certificates is the utterly imprecise nature of what a "fraudulent" certificate is, or is not. Rather than a binary yes/no question, we're left with vast swaths of arguable gray-zone... even for professional researchers, debate over the legitimacy of particular certs can go on for weeks, or longer.
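One thing anyone can do, however, is recompute the fingerprint independently: a certificate's SHA-1 fingerprint is nothing more than the SHA-1 digest of its DER encoding. A sketch of that computation, run here on a dummy base64 payload rather than the real certificate:

```python
import base64
import hashlib

# Dummy stand-in for a certificate body; a real PEM's base64 payload
# sits between the BEGIN/END CERTIFICATE markers.
pem_body = base64.b64encode(b"not a real certificate").decode()
pem = "-----BEGIN CERTIFICATE-----\n" + pem_body + "\n-----END CERTIFICATE-----\n"

# Strip the PEM armor, base64-decode back to DER, hash the DER bytes.
b64 = "".join(line for line in pem.splitlines() if "CERTIFICATE" not in line)
der = base64.b64decode(b64)
fingerprint = hashlib.sha1(der).hexdigest().upper()
print(fingerprint)
```

Running the same computation over a PEM captured from a live session lets you compare your locally observed cert against fingerprints others publish, without trusting any intermediary.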

However, a few quick tests don't provide confidence-inspiring results:
  • First, the cert is SHA1-signed, and Google has long since moved away from SHA-1 as a suitable cert-signing algorithm. Nor is it some ancient root certificate signed that way long ago: this one claims to have been issued 6 May 2015 - less than 10 days ago. Is someone at Google really issuing SHA1-signed certs in May of 2015? This seems highly unlikely.

    Second, the cert-embedded "Authority Information - OCSP" URI (a nearly-vestigial form of not-CRL but also not-full-cert-pinning certificate revocation procedure that we will not bore you with explaining in further detail here) 404s when loaded. This is not the sort of thing one finds in a legitimately Google-issued certificate created less than 10 days ago. (The fact that CRL, OCSP, and other cert-embedded URIs routinely lead to 404s, endless redirects, dead air, and mysterious 'numbers-radio' style short strings of digits - quite often in the case of full root certificates - is one of those realities of CA-certificate existence that is rarely commented on, but remains surreal in its implications.)

    Here's the URI that's supposed to represent the issuer's 'official' credentials, which in theory helps the benighted browser operator verify whether the certificate matches this issuer's credentials (although the specifics of doing that match are impressively complex, and even done right yield not valid/fraud clarity but only some degree of qualified 'maybe'): view-source: The certificate provided at that URL is as follows (after a pre-conversion from .crt/DER to .PEM, of course):

    Code: Select all

    -----END CERTIFICATE-----

    That, in turn unpacks to...
    Version: 3 (0x2)
    Serial Number: 146038 (0x23a76)
    Signature Algorithm: sha1WithRSAEncryption
    commonName = GeoTrust Global CA
    organizationName = GeoTrust Inc.
    countryName = US
    Not Before: Apr 5 15:15:55 2013 GMT
    Not After : Dec 31 23:59:59 2016 GMT
    commonName = Google Internet Authority G2
    organizationName = Google Inc
    countryName = US
    Subject Public Key Info:
    Public Key Algorithm: rsaEncryption
    Public-Key: (2048 bit)
    Exponent: 65537 (0x10001)
    X509v3 extensions:
    X509v3 Authority Key Identifier:

    X509v3 Subject Key Identifier:
    X509v3 Basic Constraints: critical
    CA:TRUE, pathlen:0
    X509v3 Key Usage: critical
    Certificate Sign, CRL Sign
    X509v3 CRL Distribution Points:

    Full Name:

    Authority Information Access:
    OCSP - URI:

    X509v3 Certificate Policies:

    Signature Algorithm: sha1WithRSAEncryption

    Full unpack here:

    Code: Select all

    0 1008: SEQUENCE {
       4  728:   SEQUENCE {
       8    3:     [0] {
      10    1:       INTEGER 2
             :       }
      13    3:     INTEGER 146038
      18   13:     SEQUENCE {
      20    9:       OBJECT IDENTIFIER sha1WithRSAEncryption (1 2 840 113549 1 1 5)
      31    0:       NULL
             :       }
      33   66:     SEQUENCE {
      35   11:       SET {
      37    9:         SEQUENCE {
      39    3:           OBJECT IDENTIFIER countryName (2 5 4 6)
      44    2:           PrintableString 'US'
             :           }
             :         }
      48   22:       SET {
      50   20:         SEQUENCE {
      52    3:           OBJECT IDENTIFIER organizationName (2 5 4 10)
      57   13:           PrintableString 'GeoTrust Inc.'
             :           }
             :         }
      72   27:       SET {
      74   25:         SEQUENCE {
      76    3:           OBJECT IDENTIFIER commonName (2 5 4 3)
      81   18:           PrintableString 'GeoTrust Global CA'
             :           }
             :         }
             :       }
     101   30:     SEQUENCE {
     103   13:       UTCTime 05/04/2013 15:15:55 GMT
     118   13:       UTCTime 31/12/2016 23:59:59 GMT
             :       }
     133   73:     SEQUENCE {
     135   11:       SET {
     137    9:         SEQUENCE {
     139    3:           OBJECT IDENTIFIER countryName (2 5 4 6)
     144    2:           PrintableString 'US'
             :           }
             :         }
     148   19:       SET {
     150   17:         SEQUENCE {
     152    3:           OBJECT IDENTIFIER organizationName (2 5 4 10)
     157   10:           PrintableString 'Google Inc'
             :           }
             :         }
     169   37:       SET {
     171   35:         SEQUENCE {
     173    3:           OBJECT IDENTIFIER commonName (2 5 4 3)
     178   28:           PrintableString 'Google Internet Authority G2'
             :           }
             :         }
             :       }
     208  290:     SEQUENCE {
     212   13:       SEQUENCE {
     214    9:         OBJECT IDENTIFIER rsaEncryption (1 2 840 113549 1 1 1)
     225    0:         NULL
             :         }
     227  271:       BIT STRING
             :         30 82 01 0A 02 82 01 01 00 9C 2A 04 77 5C D8 50
             :         91 3A 06 A3 82 E0 D8 50 48 BC 89 3F F1 19 70 1A
             :         88 46 7E E0 8F C5 F1 89 CE 21 EE 5A FE 61 0D B7
             :         32 44 89 A0 74 0B 53 4F 55 A4 CE 82 62 95 EE EB
             :         59 5F C6 E1 05 80 12 C4 5E 94 3F BC 5B 48 38 F4
             :         53 F7 24 E6 FB 91 E9 15 C4 CF F4 53 0D F4 4A FC
             :         9F 54 DE 7D BE A0 6B 6F 87 C0 D0 50 1F 28 30 03
             :         40 DA 08 73 51 6C 7F FF 3A 3C A7 37 06 8E BD 4B
             :                 [ Another 142 bytes skipped ]
             :       }
     502  231:     [3] {
     505  228:       SEQUENCE {
     508   31:         SEQUENCE {
     510    3:           OBJECT IDENTIFIER authorityKeyIdentifier (2 5 29 35)
     515   24:           OCTET STRING
             :             30 16 80 14 C0 7A 98 68 8D 89 FB AB 05 64 0C 11
             :             7D AA 7D 65 B8 CA CC 4E
             :           }
     541   29:         SEQUENCE {
     543    3:           OBJECT IDENTIFIER subjectKeyIdentifier (2 5 29 14)
     548   22:           OCTET STRING
             :             04 14 4A DD 06 16 1B BC F6 68 B5 76 F5 81 B6 BB
             :             62 1A BA 5A 81 2F
             :           }
     572   18:         SEQUENCE {
     574    3:           OBJECT IDENTIFIER basicConstraints (2 5 29 19)
     579    1:           BOOLEAN TRUE
     582    8:           OCTET STRING 30 06 01 01 FF 02 01 00
             :           }
     592   14:         SEQUENCE {
     594    3:           OBJECT IDENTIFIER keyUsage (2 5 29 15)
     599    1:           BOOLEAN TRUE
     602    4:           OCTET STRING 03 02 01 06
             :           }
     608   53:         SEQUENCE {
     610    3:           OBJECT IDENTIFIER cRLDistributionPoints (2 5 29 31)
     615   46:           OCTET STRING
             :             30 2C 30 2A A0 28 A0 26 86 24 68 74 74 70 3A 2F
             :             2F 67 2E 73 79 6D 63 62 2E 63 6F 6D 2F 63 72 6C
             :             73 2F 67 74 67 6C 6F 62 61 6C 2E 63 72 6C
             :           }
     663   46:         SEQUENCE {
     665    8:           OBJECT IDENTIFIER authorityInfoAccess (1 3 6 1 5 5 7 1 1)
     675   34:           OCTET STRING
             :             30 20 30 1E 06 08 2B 06 01 05 05 07 30 01 86 12
             :             68 74 74 70 3A 2F 2F 67 2E 73 79 6D 63 64 2E 63
             :             6F 6D
             :           }
     711   23:         SEQUENCE {
     713    3:           OBJECT IDENTIFIER certificatePolicies (2 5 29 32)
     718   16:           OCTET STRING 30 0E 30 0C 06 0A 2B 06 01 04 01 D6 79 02 05 01
             :           }
             :         }
             :       }
             :     }
     736   13:   SEQUENCE {
     738    9:     OBJECT IDENTIFIER sha1WithRSAEncryption (1 2 840 113549 1 1 5)
     749    0:     NULL
             :     }
     751  257:   BIT STRING
             :     27 8C CF E9 C7 3B BE C0 6F E8 96 84 FB 9C 5C 5D
             :     90 E4 77 DB 8B 32 60 9B 65 D8 85 26 B5 BA 9F 1E
             :     DE 64 4E 1F C6 C8 20 5B 09 9F AB A9 E0 09 34 45
             :     A2 65 25 37 3D 7F 5A 6F 20 CC F9 FA F1 1D 8F 10
             :     0C 02 3A C4 C9 01 76 96 BE 9B F9 15 D8 39 D1 C5
             :     03 47 76 B8 8A 8C 31 D6 60 D5 E4 8F DB FA 3C C6
             :     D5 98 28 F8 1C 8F 17 91 34 CB CB 52 7A D1 FB 3A
             :     20 E4 E1 86 B1 D8 18 0F BE D6 87 64 8D C5 0A 25
             :             [ Another 128 bytes skipped ]
             :   }

    Its SHA-1 fingerprint is: BBDCE13E9D537A5229915CB123C7AAB0A855E798. This intermediate certificate appears to match up with the intermediate certificate provided by the questionable page-load, so we search on the cert's SHA-1 hash to see if it appears in conventional, common search results. Here it is, showing up at the invaluable site... although we also noted on twitter recently that this intermediate certificate pops up behind some other Google server-end certificates of questionable veracity (here's the #fishycerts snapshot of it, for those curious):

    Our conclusion on this server-end certificate being offered as credentials putatively backing this 'secure' https session to (
    4B9D33E64EF6104E2043BF1E0928924F6D41337A) is that it's illegitimate. The intermediate cert to which it chains (BBDCE13E9D537A5229915CB123C7AAB0A855E798) does appear legitimate... but also seems to be signing more than its fair share of questionable server-end 'Google' SSL certificates. What does that mean, and how does that correlation flesh out into possible theories for causative connection? We simply don't know, yet. Further research is required.

    Our search on this server-end certificate's SHA1 hash value - 4B9D33E64EF6104E2043BF1E0928924F6D41337A - turns up no hits, anywhere online (nor for the lowercase-converted version, 4b9d33e64ef6104e2043bf1e0928924f6d41337a). Even if there are obscure mentions somewhere we could not find, the gap between that and widely-distributed legitimate Google server-end certificates is, in a word, enormous.
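One practical wrinkle when hunting fingerprints online: different tools emit them colon-delimited, uppercase, or lowercase, so it pays to normalize before searching or comparing. A trivial helper:

```python
def norm_fp(fp: str) -> str:
    """Normalize a hex fingerprint: drop separators, force lowercase."""
    return fp.replace(":", "").replace(" ", "").lower()

# The same fingerprint as emitted by two different tools.
a = "4B:9D:33:E6:4E:F6:10:4E:20:43:BF:1E:09:28:92:4F:6D:41:33:7A"
b = "4b9d33e64ef6104e2043bf1e0928924f6d41337a"
print(norm_fp(a) == norm_fp(b))  # True
```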

    - - -

    This post, drafted and edited by the cryptostorm team during a 36 hour window stretching from Friday afternoon through Sunday morning GMT, in fact covers a time-window longer than the transient phenomenon it has documented. By the time final edits were being finished, a check at ssl-labs for the IP and certificate results received when their testing suite looks at now yields the following results. Gone entirely are the two 212.*.*.* IP addresses, and in their place are a string of 74.*.*.*'s that have a much longer history of correlation with Google services (if still some weird results in terms of ssl cert credibility):

    In the place of cert 4B9D33E64EF6104E2043BF1E0928924F6D41337A, a server-end certificate SHA1 1219337d219d1684f785bbabe688cea429ac6ee1 is now being presented when ssl-labs asks for the site... that cert is signed as well by intermediate certificate BBDCE13E9D537A5229915CB123C7AAB0A855E798, making it something of a half-sibling to the questionable 4b9d one we saw earlier in this investigation (only a few hours ago).

    {edited to add: the 4b9 and 1219 certificates appear to be overlapping each other in some ssl-labs test runs, and in browser session tests by some cryptostorm staff members - but not others - as this report completed editing and was being published... which, as we have seen previously, is not in and of itself indicative of malfeasance, but is another component of the suspiciously erratic & coincidence-laden pattern we have been observing}

    - - -

    We expect this kind of elliptical, somewhat tedious form of "forensics" to emerge as the norm in Corruptor-Injector Network attack analysis. Such attacks are transient, and their 'session prion' payload is buried in otherwise-innocuous http/https traffic hitting the browser during routine web browsing. That such attacks will be highly successful beyond the relatively well-defended confines of the browser DOM sandbox is both inevitable and frightening: protocols like jabber, sftp, and all the weird Java-wrapped cryptographic 'secure' network procedures each carry their own risk of injected prions and collapse of the entire local endpoint security model.

    A few points to reiterate:
      1. This is a session to, not an obscure website. It's 'secured' by https, backed by the fearsome expertise and professional focus of Google's entire Chrome security team, and then some. It relates to a session during which visitors download an installer for Chrome; if that installer is modified even marginally, and achieves uptake on the local machine, all security is gone.

      2. Certificates involved in effectively spoofing https credentials from Google appear to be signed by genuine Google intermediate SHA1-signed certificates. The mechanism for this is not yet clear, but there are dozens of PoC'd methods for a well-resourced attacker to complete this step.

      3. It's not clear that blaming "the browsers" for this makes sense. It is not the job of browsers to enforce routing legitimacy, although whose job that actually is remains an open question. The browsers can run about cancelling server-end certificates left and right, but that does nothing to address the problem of the injection/hijack gambit itself.

      4. A cursory review of DNS records suggests the vast space for temporary resource hijacking via cache poisoning and/or BGP borking forms a core element of these attacks as they exist in the wild today. While there are brilliant researchers out there able to diagnose such transient DNS anomalies, the fact is that such anomalies are so common, and so fundamental to DNS as it has evolved, that we gain little in rehashing that well-explored ground.

      5. We haven't even looked at IPv6 in this analysis, despite some early evidence that it forms a crucial element of observed CIN methods. The same can be said for SPDY, QUIC, and other next-generation protocols: each designed and coded by brilliant women & men, but none having anything in the way of a long track record in the wilds of CIN-infested routing landscapes.

      6. This is one instance we've chosen to document here, in short form, as a test case and proof of (research) concept. We have files and forensics on dozens more, from obscure websites to serious resources used by hundreds of millions of internet citizens every day. Once we began keeping track of such things several months ago, the examples built up faster and faster... a "backlog of weirdness," as one staffer apologetically explained to a cryptostorm member who had seen data suggesting CIN activity and hoped we'd be able to review it closely to confirm.

      7. We see the consequences of this routinely in our member correspondence, globally, on a daily basis. Local computers that have strange network-connectivity problems. Difficulty installing routine packages like openssl or openvpn. Broken cryptographic deployments that cannot support our tightly-enforced standards for cryptostorm session authenticity... these weird goings-on have grown more and more common for us to see, month after month. They foreshadow a deluge of such functionality thefts by CINs from internet users worldwide.

    "Total pwnage" - as the NSA glibly calls it. Sounds far-fetched? Here's what they have to say about their in-house CIN - #Balrog, we've named it - several years back:

    How would such a system work, in practical terms? Well, here's how:

    Moving to more tangible considerations, how would we know that SECONDDATE attacks were underway? Simple: we'd see network sessions inexplicably redirected to unexpected sites, and modified payloads arrive for those targeted individuals 'painted' by the CIN's selector logic.

    An attacker would gain enormous advantage if capable of injecting Chrome package downloads, even transiently. This may seem paranoid - imagine the sheer arrogance required to play such dangerous games with one of the most powerful companies in the tech industry (and we're giving Google the benefit of the initial assumption that they are neither actively aware of these attacks, nor passively aware yet unwilling to make them public via full disclosure).

    And besides... packages are signed! ...right? Indeed. While it's beyond the scope of this report to go into the numerous proven methods for undermining such signing security, here's a partial list of links - for just one distro of Linux - showing what tends to happen when package-signing throws errors...

    There's more - hundreds and hundreds of posts from Linux users (a tiny percentage in the larger OS ocean) having these problems, going back years. Of course, some - perhaps the majority, or even nearly all - are simply the horrifically complex reality of package-signing validation done manually. For those curious, here's Google's Linux Chrome repo howto page, with signing key and terse, if excellent, advice for users. That said, it's hosted on itself... so is the key as-intended by Google? Is it always that way?

    If even 5% of those desperate posts reporting failures of the Chrome packages to pass gpg signature-verification are malicious... that's many tens of thousands of Linux Chrome users whose local machines have been irrevocably rooted by an unknown, invisible attacker.

    We captured the Chrome package as delivered from the suspect page this weekend. It's too early to say whether it shows evidence of direct modification from legitimate parameters; several test-versions downloaded from other sources over the weekend appear to show the same size metrics, on the surface. However, SHA1 hashing is inconclusive: we have divergent hash values for our local copies, as compared to hashes posted elsewhere on the web by others recently for the same version and processor images... but that is far from definitive, and more work is required.
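That comparison reduces to hashing the captured files; a sketch of the basic step (demonstrated on a throwaway file rather than the actual captures):

```python
import hashlib
import tempfile

def sha1_of(path: str) -> str:
    """Stream a file through SHA-1 without loading it all into memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstration on a throwaway file; in practice the inputs would be
# the captured .deb and a copy fetched from a different vantage point.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"example package bytes")
print(sha1_of(tmp.name))
```

Matching digests across several independent vantage points is weak evidence of a clean package; a single mismatch is only a lead, since version, architecture, and build differences all produce divergent hashes legitimately.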

    Let's look at the package itself, meanwhile. For example, here's the --postinst script in the package captured by us this weekend:

    Code: Select all

    # Copyright (c) 2009 The Chromium Authors. All rights reserved.
    # Use of this source code is governed by a BSD-style license that can be
    # found in the LICENSE file.
    set -e
    # Add icons to the system icons
    XDG_ICON_RESOURCE="`which xdg-icon-resource 2> /dev/null || true`"
    if [ ! -x "$XDG_ICON_RESOURCE" ]; then
      echo "Error: Could not find xdg-icon-resource" >&2
      exit 1
    for icon in "/opt/google/chrome/product_logo_"*.png; do
      "$XDG_ICON_RESOURCE" install --size "${size%.png}" "$icon" "google-chrome"
    UPDATE_MENUS="`which update-menus 2> /dev/null || true`"
    if [ -x "$UPDATE_MENUS" ]; then
    # Update cache of .desktop file MIME types. Non-fatal since it's just a cache.
    update-desktop-database > /dev/null 2>&1 || true
    # Updates defaults.list file if present.
    update_defaults_list() {
      # $1: name of the .desktop file
      local DEFAULTS_FILE="/usr/share/applications/defaults.list"
      if [ ! -f "${DEFAULTS_FILE}" ]; then
      # Split key-value pair out of MimeType= line from the .desktop file,
      # then split semicolon-separated list of mime types (they should not contain
      # spaces).
      mime_types="$(grep MimeType= /usr/share/applications/${1} |
                    cut -d '=' -f 2- |
                    tr ';' ' ')"
      for mime_type in ${mime_types}; do
        if egrep -q "^${mime_type}=" "${DEFAULTS_FILE}"; then
          if ! egrep -q "^${mime_type}=.*${1}" "${DEFAULTS_FILE}"; then
            default_apps="$(grep ${mime_type}= "${DEFAULTS_FILE}" |
                            cut -d '=' -f 2-)"
            egrep -v "^${mime_type}=" "${DEFAULTS_FILE}" > "${DEFAULTS_FILE}.new"
            echo "${mime_type}=${default_apps};${1}" >> "${DEFAULTS_FILE}.new"
            mv "${DEFAULTS_FILE}.new" "${DEFAULTS_FILE}"
          # If there's no mention of the mime type in the file, add it.
          echo "${mime_type}=${1};" >> "${DEFAULTS_FILE}"
    update_defaults_list "google-chrome.desktop"
    # This function uses sed to insert the contents of one file into another file,
    # after the first line matching a given regular expression. If there is no
    # matching line, then the file is unchanged.
    insert_after_first_match() {
      # $1: file to update
      # $2: regular expression
      # $3: file to insert
      sed -i -e "1,/$2/ {
        /$2/ r $3
        }" "$1"
    # If /usr/share/gnome-control-center/gnome-default-applications.xml exists, it
    # may need to be updated to add ourselves to the default applications list. If
    # we find the file and it does not seem to contain our patch already (the patch
    # is safe to leave even after uninstall), update it.
    if [ -f "$GNOME_DFL_APPS" ]; then
    # Conditionally insert the contents of the file "default-app-block" after the
    # first "<web-browsers>" line we find in gnome-default-applications.xml
      fgrep -q "Google Chrome" "$GNOME_DFL_APPS" || insert_after_first_match \
        "$GNOME_DFL_APPS" \
        "^[ 	]*<web-browsers>[ 	]*$" \
    # Add to the alternatives system
    # On Ubuntu 12.04, we have the following priorities
    # (which can be obtain be installing browsers and running
    # update-alternatives --query x-www-browser):
    # /usr/bin/epiphany-browser  85
    # /usr/bin/firefox           40
    # /usr/bin/konqueror         30
    # While we would expect these values to be keyed off the most popular
    # browser (Firefox), in practice, we treat Epiphany as the lower bound,
    # resulting in the following scheme:
    case $CHANNEL in
      stable )
        # Good enough to be the default.
      beta )
        # Almost good enough to be the default. (Firefox stable should arguably be
        # higher than this, but since that's below the "Epiphany threshold", we're
        # not setting our priority below it. Anyone want to poke Firefox to raise
        # their priority?)
      unstable )
        # Unstable, give it the "lowest" priority.
      * )
    update-alternatives --install /usr/bin/x-www-browser x-www-browser \
      /usr/bin/google-chrome-stable $PRIORITY
    update-alternatives --install /usr/bin/gnome-www-browser gnome-www-browser \
      /usr/bin/google-chrome-stable $PRIORITY
    update-alternatives --install /usr/bin/google-chrome google-chrome \
      /usr/bin/google-chrome-stable $PRIORITY
    # System-wide package configuration.
    # sources.list setting for google-chrome updates.
    REPOCONFIG="deb stable main"
    APT_GET="`which apt-get 2> /dev/null`"
    APT_CONFIG="`which apt-config 2> /dev/null`"
    # You may comment out this entry, but any other modifications may be lost.\n"
    # Parse apt configuration and return requested variable value.
    apt_config_val() {
      if [ -x "$APT_CONFIG" ]; then
        "$APT_CONFIG" dump | sed -e "/^$APTVAR /"'!d' -e "s/^$APTVAR \"\(.*\)\".*/\1/"
    # Install the repository signing key (see also:
    install_key() {
      APT_KEY="`which apt-key 2> /dev/null`"
      if [ -x "$APT_KEY" ]; then
        "$APT_KEY" add - >/dev/null 2>&1 <<KEYDATA
    Version: GnuPG v1.4.2.2 (GNU/Linux)
    # Set variables for the locations of the apt sources lists.
    find_apt_sources() {
      APTDIR=$(apt_config_val Dir)
      APTETC=$(apt_config_val 'Dir::Etc')
      APT_SOURCES="$APTDIR$APTETC$(apt_config_val 'Dir::Etc::sourcelist')"
      APT_SOURCESDIR="$APTDIR$APTETC$(apt_config_val 'Dir::Etc::sourceparts')"
    # Update the Google repository if it's not set correctly.
    # Note: this doesn't necessarily enable the repository, it just makes sure the
    # correct settings are available in the sources list.
    # Returns:
    # 0 - no update necessary
    # 2 - error
    update_bad_sources() {
      if [ ! "$REPOCONFIG" ]; then
        return 0
      # Don't do anything if the file isn't there, since that probably means the
      # user disabled it.
      if [ ! -r "$SOURCELIST" ]; then
        return 0
      # Basic check for active configurations (non-blank, non-comment lines).
      ACTIVECONFIGS=$(grep -v "^[[:space:]]*\(#.*\)\?$" "$SOURCELIST" 2>/dev/null)
      # Check if the correct repository configuration is in there.
      REPOMATCH=$(grep "^[[:space:]#]*\b$REPOCONFIG\b" "$SOURCELIST" \
      # Check if the correct repository is disabled.
      MATCH_DISABLED=$(echo "$REPOMATCH" | grep "^[[:space:]]*#" 2>/dev/null)
      # Now figure out if we need to fix things.
      if [ "$REPOMATCH" ]; then
        # If it's there and active, that's ideal, so nothing to do.
        if [ ! "$MATCH_DISABLED" ]; then
          # If it's not active, but neither is anything else, that's fine too.
          if [ ! "$ACTIVECONFIGS" ]; then
      if [ $BADCONFIG -eq 0 ]; then
        return 0
      # At this point, either the correct configuration is completely missing, or
      # the wrong configuration is active. In that case, just abandon the mess and
      # recreate the file with the correct configuration. If there were no active
      # configurations before, create the new configuration disabled.
      if [ ! "$ACTIVECONFIGS" ]; then
      if [ $? -eq 0 ]; then
        return 0
      return 2
    # Add the Google repository to the apt sources.
    # Returns:
    # 0 - sources list was created
    # 2 - error
    create_sources_lists() {
      if [ ! "$REPOCONFIG" ]; then
        return 0
      if [ -d "$APT_SOURCESDIR" ]; then
        printf "$SOURCES_PREAMBLE" > "$SOURCELIST"
        printf "$REPOCONFIG\n" >> "$SOURCELIST"
        if [ $? -eq 0 ]; then
          return 0
      return 2
    # Remove our custom sources list file.
    # Returns:
    # 0 - successfully removed, or not configured
    # !0 - failed to remove
    clean_sources_lists() {
      if [ ! "$REPOCONFIG" ]; then
        return 0
      rm -f "$APT_SOURCESDIR/google-chrome.list" \
    # Detect if the repo config was disabled by distro upgrade and enable if
    # necessary.
    handle_distro_upgrade() {
      if [ ! "$REPOCONFIG" ]; then
        return 0
      if [ -r "$SOURCELIST" ]; then
        REPOLINE=$(grep -E "^[[:space:]]*#[[:space:]]*$REPOCONFIG[[:space:]]*# disabled on upgrade to .*" "$SOURCELIST")
        if [ $? -eq 0 ]; then
          sed -i -e "s,^[[:space:]]*#[[:space:]]*\($REPOCONFIG\)[[:space:]]*# disabled on upgrade to .*,\1," \
          LOGGER=$(which logger 2> /dev/null)
          if [ "$LOGGER" ]; then
            "$LOGGER" -t "$0" "Reverted repository modification: $REPOLINE."
    get_lib_dir() {
      if [ "$DEFAULT_ARCH" = "i386" ]; then
      elif [ "$DEFAULT_ARCH" = "amd64" ]; then
        echo Unknown CPU Architecture: "$DEFAULT_ARCH"
        exit 1
    NSS_FILES=" \"
    add_nss_symlinks() {
      for f in $NSS_FILES
        target=$(echo $f | sed 's/\.[01]d$//')
        if [ -f "/$LIBDIR/$target" ]; then
          ln -snf "/$LIBDIR/$target" "/opt/google/chrome/$f"
        elif [ -f "/usr/$LIBDIR/$target" ]; then
          ln -snf "/usr/$LIBDIR/$target" "/opt/google/chrome/$f"
          echo $f not found in "/$LIBDIR/$target" or "/usr/$LIBDIR/$target".
          exit 1
    remove_nss_symlinks() {
      for f in $NSS_FILES
        rm -rf "/opt/google/chrome/$f"
    remove_udev_symlinks() {
      rm -rf "/opt/google/chrome/"
    ## MAIN ##
    if [ ! -e "$DEFAULTS_FILE" ]; then
      echo 'repo_add_once="true"' > "$DEFAULTS_FILE"
      echo 'repo_reenable_on_distupgrade="true"' >> "$DEFAULTS_FILE"
    # Run the cron job immediately to perform repository configuration.
    nohup sh /etc/cron.daily/google-chrome > /dev/null 2>&1 &


    Three hundred and seventy-six lines. A Chromium Debian reference build (not identical in package parameters, to be clear) is nevertheless notable for its comparative brevity:

    Code: Select all

        # Copyright (c) 2009 The Chromium Authors. All rights reserved.
        # Use of this source code is governed by a BSD-style license that can be
        # found in the LICENSE file.
        # Add to the alternatives system
        # On Ubuntu 12.04, we have the following priorities
        # (which can be obtain be installing browsers and running
        # update-alternatives --query x-www-browser):
        # /usr/bin/epiphany-browser 85
        # /usr/bin/firefox 40
        # /usr/bin/konqueror 30
        # While we would expect these values to be keyed off the most popular
        # browser (Firefox), in practice, we treat Epiphany as the lower bound,
        # resulting in the following scheme:
        case $CHANNEL in
        stable )
        # Good enough to be the default.
        beta )
        # Almost good enough to be the default. (Firefox stable should arguably be
        # higher than this, but since that's below the "Epiphany threshold", we're
        # not setting our priority below it. Anyone want to poke Firefox to raise
        # their priority?)
        unstable )
        # Unstable, give it the "lowest" priority.
        * )
        update-alternatives --install /usr/bin/x-www-browser x-www-browser \
        /usr/bin/@@USR_BIN_SYMLINK_NAME@@ $PRIORITY
        update-alternatives --install /usr/bin/gnome-www-browser gnome-www-browser \
        /usr/bin/@@USR_BIN_SYMLINK_NAME@@ $PRIORITY
        update-alternatives --install /usr/bin/google-chrome google-chrome \
        /usr/bin/@@USR_BIN_SYMLINK_NAME@@ $PRIORITY
        ## MAIN ##
        if [ ! -e "$DEFAULTS_FILE" ]; then
        echo 'repo_add_once="true"' > "$DEFAULTS_FILE"
        echo 'repo_reenable_on_distupgrade="true"' >> "$DEFAULTS_FILE"
        # Run the cron job immediately to perform repository configuration.
        nohup sh /etc/cron.daily/@@PACKAGE@@ > /dev/null 2>&1 &

    Sixty-seven lines. A report of very unusual behaviour on the part of that script, from 2012.
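    For readers who want to poke at the repo-classification logic in the first script above, the grep patterns it relies on can be exercised in isolation. This is our own throwaway harness, not Google's code; the sample sources list and the $REPOCONFIG value are illustrative assumptions:

```shell
# Throwaway harness (not from the package): exercise the postinst's grep
# patterns against a sample sources list in which the Google repo line is
# present but commented out.
SOURCELIST=$(mktemp)
REPOCONFIG="deb http://dl.google.com/linux/chrome/deb/ stable main"
cat > "$SOURCELIST" <<EOF
# some unrelated comment

# $REPOCONFIG
EOF

# Non-blank, non-comment lines count as "active" configurations.
ACTIVECONFIGS=$(grep -v "^[[:space:]]*\(#.*\)\?$" "$SOURCELIST" 2>/dev/null)

# The repo line is present, whether enabled or commented out...
REPOMATCH=$(grep "^[[:space:]#]*\b$REPOCONFIG\b" "$SOURCELIST" 2>/dev/null)

# ...and in this sample it is disabled (leading '#').
MATCH_DISABLED=$(echo "$REPOMATCH" | grep "^[[:space:]]*#" 2>/dev/null)

[ -z "$ACTIVECONFIGS" ] && echo "no active configurations"
[ -n "$MATCH_DISABLED" ] && echo "repo line present but disabled"
rm -f "$SOURCELIST"
```

    With that state (correct repo line present but disabled, nothing else active), the script's logic correctly concludes there is nothing to fix.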

    A Lintian scan of the .deb package reads as follows...

    Code: Select all

    E: google-chrome-stable: embedded-library opt/google/chrome/PepperFlash/ openssl
    E: google-chrome-stable: embedded-library opt/google/chrome/chrome: lcms2
    E: google-chrome-stable: embedded-library opt/google/chrome/chrome: srtp
    E: google-chrome-stable: embedded-library opt/google/chrome/chrome: sqlite
    E: google-chrome-stable: embedded-library opt/google/chrome/chrome: libpng
    E: google-chrome-stable: embedded-library opt/google/chrome/chrome: libxml2
    E: google-chrome-stable: embedded-library opt/google/chrome/chrome: libjpeg
    E: google-chrome-stable: embedded-library opt/google/chrome/chrome: libjsoncpp
    E: google-chrome-stable: embedded-library opt/google/chrome/ libavutil
    E: google-chrome-stable: statically-linked-binary opt/google/chrome/nacl_helper_bootstrap
    E: google-chrome-stable: statically-linked-binary opt/google/chrome/nacl_irt_x86_32.nexe
    E: google-chrome-stable: debian-changelog-file-missing-or-wrong-name
    W: google-chrome-stable: new-package-should-close-itp-bug
    W: google-chrome-stable: debian-changelog-line-too-long line 3
    E: google-chrome-stable: no-copyright-file
    W: google-chrome-stable: description-synopsis-starts-with-article
    W: google-chrome-stable: extended-description-line-too-long
    E: google-chrome-stable: dir-or-file-in-opt opt/google/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/PepperFlash/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/PepperFlash/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/PepperFlash/manifest.json
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/chrome
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/chrome-sandbox
    W: google-chrome-stable: setuid-binary opt/google/chrome/chrome-sandbox 4755 root/root
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/chrome_100_percent.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/cron/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/cron/google-chrome
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/default-app-block
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/default_apps/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/default_apps/docs.crx
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/default_apps/drive.crx
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/default_apps/external_extensions.json
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/default_apps/gmail.crx
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/default_apps/search.crx
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/default_apps/youtube.crx
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/google-chrome
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/icudtl.dat
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/am.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ar.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/bg.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/bn.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ca.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/cs.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/da.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/de.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/el.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/en-GB.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/en-US.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/es-419.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/es.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/et.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/fa.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/fi.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/fil.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/fr.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/gu.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/he.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/hi.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/hr.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/hu.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/id.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/it.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ja.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/kn.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ko.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/lt.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/lv.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ml.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/mr.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ms.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/nb.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/nl.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/pl.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/pt-BR.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/pt-PT.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ro.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ru.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/sk.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/sl.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/sr.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/sv.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/sw.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/ta.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/te.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/th.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/tr.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/uk.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/vi.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/zh-CN.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/locales/zh-TW.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/nacl_helper
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/nacl_helper_bootstrap
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/nacl_irt_x86_32.nexe
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/natives_blob.bin
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_128.png
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_16.png
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_22.png
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_24.png
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_256.png
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_32.png
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_32.xpm
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_48.png
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/product_logo_64.png
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/resources.pak
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/snapshot_blob.bin
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/xdg-mime
    E: google-chrome-stable: dir-or-file-in-opt opt/google/chrome/xdg-settings
    W: google-chrome-stable: non-standard-dir-perm usr/share/doc/google-chrome-stable/ 0700 != 0755
    E: google-chrome-stable: executable-manpage usr/share/man/man1/google-chrome.1
    E: google-chrome-stable: manpage-not-compressed usr/share/man/man1/google-chrome.1
    W: google-chrome-stable: manpage-has-errors-from-man usr/share/man/man1/google-chrome.1 1: warning: macro `"' not defined
    W: google-chrome-stable: binary-without-manpage usr/bin/google-chrome-stable
    W: google-chrome-stable: pkg-not-in-package-test google-chrome usr/share/menu/
    E: google-chrome-stable: prerm-calls-updatemenus
    W: google-chrome-stable: executable-not-elf-or-script usr/share/man/man1/google-chrome.1
    E: google-chrome-stable: shlib-with-non-pic-code opt/google/chrome/
    Lintian finished with exit status 1

    Do these results match known-good equivalents? It's entirely possible they do... but we'll be double-checking that, to be sure.
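    One low-tech way to answer that question is to normalize and diff the tag lists from two Lintian runs - ours against a reference copy of the package. The filenames and log contents below are hypothetical stand-ins for real Lintian output:

```shell
# Hypothetical log files; each would hold real lintian output like the
# listing above (ours.log from our .deb, reference.log from a known-good one).
cat > ours.log <<EOF
E: google-chrome-stable: no-copyright-file
W: google-chrome-stable: setuid-binary opt/google/chrome/chrome-sandbox 4755 root/root
EOF
cat > reference.log <<EOF
E: google-chrome-stable: no-copyright-file
EOF

# comm(1) requires sorted input; -23 keeps only lines unique to ours.log,
# i.e. tags our scan raised that the reference scan did not.
sort ours.log > ours.sorted
sort reference.log > ref.sorted
comm -23 ours.sorted ref.sorted
```

    Any lines this prints are findings unique to the package under suspicion, which is where further scrutiny should focus.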

    However, it's much harder to come up with a legitimate explanation for the presence of this parameter:

    Code: Select all


    In /opt/google/chrome/default-app-block...

    Code: Select all

          <name>Google Chrome</name>
          <command>/opt/google/chrome/google-chrome %s</command>
          <tab-command>/opt/google/chrome/google-chrome %s</tab-command>
          <win-command>/opt/google/chrome/google-chrome --new-window %s</win-command>

    "Netscape-remote" shows up in only a few places, including the Russian-presenting "Sisyphus" nonstandard repository, in a Gnome-related package called "gnome-control-center" - we're helpfuly informed that "If you install GNOME, you need to install control-center." It's not clear if this repository is borked or not. What is clear is that the parameter for remote-access is flagged "true" in the build we got from "" this weekend. It seems highly unlikely that's the default setting coming out from Google liegitimately... although, as with all such things, we welcome correction from specific subject-matter experts.

    These transient issues with strange 'google' certificates have been repeating themselves over the past couple of months. In early April, a journalist in the UK reported on invalid Gmail SMTP certs being served to users worldwide for several hours. The issue was reported on twitter... but the tweet is now gone.

    It appears everyone assumed this was an error on Google's part (two comments left on the article cited indicate transient continuance of the issue through mid-April at least, although blame is cast on Google for 'misconfiguring' Gmail's servers). The carefully-worded status updates Google provided are notable in not actually saying anything specific whatsoever...
    4/4/15, 9:46 PM
    The problem with Gmail should be resolved. We apologize for the inconvenience and thank you for your patience and continued support. Please rest assured that system reliability is a top priority at Google, and we are making continuous improvements to make our systems better.

    4/4/15, 8:58 PM
    We expect to resolve the problem affecting a majority of users of Gmail at 4/4/15, 10:00 PM. Please note that this time frame is an estimate and may change. is displaying an invalid certificate.

    4/4/15, 8:00 PM
    We're aware of a problem with Gmail affecting a majority of users. The affected users are able to access Gmail, but are seeing error messages and/or other unexpected behavior. We will provide an update by 4/4/15, 9:00 PM detailing when we expect to resolve the problem. Please note that this resolution time is an estimate and may change. is displaying an invalid certificate.

    4/4/15, 7:21 PM
    We're investigating reports of an issue with Gmail. We will provide more information shortly. is displaying an invalid certificate.

    And of course, in early May we publicly flagged the unusual sibling-cert on twitter.

    But what about the GPG signatures, right? That's the bulwark, and we've not addressed it. Our results are preliminary and await confirmation, because... well, because gnupg. We're going to provide a sample of output from our signature-validation efforts, locally; it is representative of what we've seen in the short period we've been working this particular angle.

    Once again, it could be we've managed to mis-specify the test - code-signing is not our cryptographic focus, despite cryptostorm being... well, something of a crypto-specialist shop in daily work life.
    ~/# wget
    --2015-05-17 14:26:38--
    Resolving (,,, ...
    Connecting to (||:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 1745 (1.7K) [text/plain]
    Saving to: ‘’ 100%[=============================================================================>] 1.70K --.-KB/s in 0.001s

    2015-05-17 14:26:39 (1.21 MB/s) - ‘’ saved [1745/1745]

    ~/# gpg --verify google-chrome-stable_current_i386.deb
    gpg: verify signatures failed: unexpected data

    ~/# apt-cache policy google-chrome-stable
    Installed: (none)
    Candidate: 42.0.2311.152-1
    Version table:
    42.0.2311.152-1 0
    500 stable/main i386 Packages

    ~/# gpg --import
    gpg: key 7FAC5991: public key "Google, Inc. Linux Package Signing Key <>" imported
    gpg: Total number processed: 1
    gpg: imported: 1

    ~/# gpg --verify google-chrome-stable_current_i386.deb
    gpg: verify signatures failed: unexpected data

    ~/# gpg -v -v --verify
    gpg: armor header: Version: GnuPG v1.4.2.2 (GNU/Linux)
    :public key packet:
    version 4, algo 17, created 1173385030, expires 0
    pkey[0]: [1024 bits]
    pkey[1]: [160 bits]
    pkey[2]: [1024 bits]
    pkey[3]: [1021 bits]
    keyid: A040830F7FAC5991
    gpg: verify signatures failed: unexpected data

    ~/# apt-key add

    ~/# gpg --list-sig 7FAC5991
    pub 1024D/7FAC5991 2007-03-08
    uid Google, Inc. Linux Package Signing Key <>
    sig 3 7FAC5991 2007-04-05 Google, Inc. Linux Package Signing Key <>
    sub 2048g/C07CB649 2007-03-08
    sig 7FAC5991 2007-03-08 Google, Inc. Linux Package Signing Key <>

    ~/# gpg --version
    gpg (GnuPG) 1.4.18
    Copyright (C) 2014 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.

    Home: ~/.gnupg
    Supported algorithms:
    Pubkey: RSA, RSA-E, RSA-S, ELG-E, DSA
    Hash: MD5, SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224
    Compression: Uncompressed, ZIP, ZLIB, BZIP2

    ~/# apt-get --reinstall install gnupg
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 2 not upgraded.
    Need to get 1,170 kB of archives.
    After this operation, 0 B of additional disk space will be used.
    Get:1 jessie/main gnupg i386 1.4.18-7 [1,170 kB]
    Fetched 1,170 kB in 5s (208 kB/s)
    (Reading database ... 149753 files and directories currently installed.)
    Preparing to unpack .../gnupg_1.4.18-7_i386.deb ...
    Unpacking gnupg (1.4.18-7) over (1.4.18-7) ...
    Processing triggers for man-db ( ...
    Processing triggers for install-info (5.2.0.dfsg.1-6) ...
    Setting up gnupg (1.4.18-7) ...

    ~/# gpg --version
    gpg (GnuPG) 1.4.18
    Copyright (C) 2014 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.

    Home: ~/.gnupg
    Supported algorithms:
    Pubkey: RSA, RSA-E, RSA-S, ELG-E, DSA
    Hash: MD5, SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224
    Compression: Uncompressed, ZIP, ZLIB, BZIP2

    ~/# gpg --list-sig 7FAC5991
    pub 1024D/7FAC5991 2007-03-08
    uid Google, Inc. Linux Package Signing Key <>
    sig 3 7FAC5991 2007-04-05 Google, Inc. Linux Package Signing Key <>
    sub 2048g/C07CB649 2007-03-08
    sig 7FAC5991 2007-03-08 Google, Inc. Linux Package Signing Key <>

    ~/# gpg -v -v --verify
    gpg: armor header: Version: GnuPG v1.4.2.2 (GNU/Linux)
    :public key packet:
    version 4, algo 17, created 1173385030, expires 0
    pkey[0]: [1024 bits]
    pkey[1]: [160 bits]
    pkey[2]: [1024 bits]
    pkey[3]: [1021 bits]
    keyid: A040830F7FAC5991
    gpg: verify signatures failed: unexpected data
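
    One mundane possibility worth ruling out before reading too much into the "unexpected data" error: gpg expects the file named on a bare --verify invocation to itself be a signature (inline or detached), and a .deb is neither, so that invocation fails on any .deb, tampered or not. A minimal offline sketch with a throwaway key (all filenames here are our own, nothing touches the Google package or key):

```shell
# Self-contained sketch using a disposable keyring and throwaway key;
# GnuPG 2.x batch key-generation syntax assumed.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
cat > "$GNUPGHOME/keyspec" <<EOF
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Email: throwaway@example.test
%commit
EOF
gpg --batch --gen-key "$GNUPGHOME/keyspec" 2>/dev/null

# A plain binary file stands in for the .deb.
echo "stand-in for a .deb" > payload.bin
gpg --batch --detach-sign --output payload.sig payload.bin 2>/dev/null

# Verifying the bare file fails: it is not OpenPGP signature data.
gpg --verify payload.bin >/dev/null 2>&1 || echo "bare file: verification fails, as expected"

# Verifying detached signature plus file succeeds.
gpg --verify payload.sig payload.bin >/dev/null 2>&1 && echo "detached sig + file: verifies"
```

    If the same holds for the Chrome package, the meaningful check is not gpg --verify against the .deb itself but verification of the signed repository Release/InRelease metadata that apt consumes - which is the test we will be re-running.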

    These results remain open to clarification and correction, as we prepare to publish this report.

    - - -

    Corruptor-Injector attacks are not the sole province of the NSA, or their #Balrog system. China has made use of similar capabilities, often quite publicly - with a notable emphasis on mass-scale session hijacking of https 'secure' communications via fraudulent certificates. Private versions of the technology exist as well.

    An old cryptographic adage holds that mathematical cryptanalytic attacks always get better; they never get worse. Corruptor-Injector Network systems appear to have reached an inflection point: in game-theoretic terms, a potential 'tragedy of the commons' among the giant entities that already have them. Once word begins to spread of how CINs work, the incentive to keep them under wraps drops sharply, since each attacker knows the others are likely to jump forward in aggressiveness and public visibility even if it refrains. Each thus has an incentive to be 'first to break,' and the aggressiveness of CIN tactics accelerates even further.

    The end result is, in a word, internet chaos.

    The security - and privacy - consequences of these tools spiralling into a frenzied battle for injector-primacy on our shared internet are simply impossible to overstate. Anyone infected by these systems - 'painted' by them, as disinfection is not structurally possible - will be essentially driven offline if they are aware of the attack, since they must mitigate the security damage proactively by air-gapping. Those unaware they have been injected with a live session prion are rooted, and every activity of their computer or smartphone is logged and remotely archived: email, encryption keys, chat logs, 'secure' web sessions, application updates. Screenshots are taken and uploaded to the attacker, microphones enabled to record nearby sound, and webcams enabled to snap photos of the operator.

    None of the items in this devil's list of extreme privacy violations is a could-be-possible or a hypothetical. Leaked documents validate that each one is being done; more, each has been automated and works without manual intervention.

    There is no greater threat to online privacy, network security, and the continued effective functioning of the internet over the next half-decade or more than Corruptor-Injector Networks and their accelerating spread. All other threats combined likely do not rise to the level CINs represent.

    CINs are the 'dirty bombs' of mass surveillance: brutal, destructive, producing a long-term legacy of crippled internet functionality that will cost tens of billions of dollars in real human benefits foregone to these macabre engines of corruption.

    But far worse than the economic devastation is the human cost of these privacy annihilations, one person at a time. Activists picked up in their homes, tortured to death, bodies dumped in empty fields by dictators with access to CIN intelligence. Minority groups wiped out in tactical genocides enabled by the absolutely totalistic, perfect intelligence data produced by CINs for violent fascists. Democratic political systems undermined by the massive blackmail leverage that total CIN visibility gives to opponents... the list goes on.

    The time to face CINs as the threat they have become is now. Thanks to Snowden and other whistleblowers, the data exist to validate what deduction already confirms: their expanding footprint.

    At cryptostorm, we are all-in to enable broad-scope CIN-evasion techniques, systems, architectures, and services. Already we're working on layered approaches, fluid and flexible and decentralised. The tools exist to do this - good tools, well-tested - but the will to face the threat will be the key driver. We have that will, because we know the damage our members face if they are without protection from CIN. It is our obligation to provide that protection, as a security service, and we look forward to working with other researchers to expand our vision and, in time, retake the internet from the power-mad corruption of these obscene mechanisms.

    With sincerity,

    • ~ cryptostorm_team
by cryptostorm_team
Thu May 07, 2015 11:41 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: #SauronsEye: researching & defending against modern Corruptor-Injector Network (CIN) attack systems
Replies: 3
Views: 31919

#SauronsEye: researching & defending against modern Corruptor-Injector Network (CIN) attack systems

{direct link:}
saʊron's ëyë For the past several months, cryptostorm has been investigating and documenting a series of seemingly-disconnected, network-level anomalies reported to us by network members as well as members of our own team. By mid-March, one of our staffers ("pj") had evidence that his local network and computing resources had been corrupted by malicious modifications somewhere in his infrastructure. He "air-gapped" the network and, as a preventative measure, also offlined the computers used by our core team on a day-to-day basis until they could be verified as uninfected. He wrote up that process in several earlier posts ('#SVGbola' & 'it mostly comes out at night').
[attachment=0]TheEyeofSauron.png[/attachment] Since then, we have continued to deepen our understanding of what we initially dubbed #SauronsEye, an assumed malware variant. Our findings have led to a conclusion that is, in hindsight, both obvious and alarming in its implications: we have not been studying a newly-discovered, standalone example of malware, but rather a globally-deployed, interlocking system of network-based malware injection, traffic hijacking, route corruption, and rootkit implantation technologies initially exposed to public view by Edward Snowden in 2013.

We refer to this class of network attack technologies as Corruptor-Injector Networks, or CINs, to separate them from conventional malware models, because the two classes differ qualitatively. CINs are distributed, multi-layered, network-based assemblies of many interconnected technologies whose publicly-visible traces are extremely difficult to capture and forensically analyse compared to conventional malware. Further, the capabilities of CINs as a class vastly overshadow even the most aggressive of modern malware examples.

We also document trace evidence that the #SauronsEye CIN itself is making use of 'session prions,' extremely compact injections of custom-generated malformed code syntax and/or exotic characters into web sessions. Like biological prions, these tiny insertions appear capable of initiating systems-level collapses of host immune functionality in surprising and powerful ways.

In this paper, we take a high-level approach to the topic and focus primarily on the impact of CINs on internet activity and, most importantly, newly-designed defensive tactics, tools, and methodologies we at cryptostorm are deploying to protect our members and the larger community from the risk of CINful fall from online grace.

☂ ☂ Process and Contents

The primary author of this paper (pj) has struggled to find the format best suited to sharing these findings with the community, encouraging further contributions, and providing actionable advice to non-technical audiences concerned about the risk of CIN infection. Our chosen model begins with this initial essay, which stays largely nontechnical and high-level and serves as an outline into which additional detail and findings can be added on a continuous basis. Indeed, we plan to echo this thread to a wiki-based platform better suited to this project.

For now, we publish this to set forth the documentary framework and research findings we have accrued thus far.

The contents of this essay are as follows:

  • » defining the category: Corruptor-Injector Networks

    » the coming of the CINs into the world of internet communications, and what this means for the future of network security and online resources overall.

    » defending against the temptations and risks of CINs: novel systems-level proposals and services cryptostorm is currently deploying to provide protection against these new threat models.

    » a first-person summary of life under the "quantumcurse" of CIN targeting, and the challenges involved in overcoming these dystopic weapons in today's online environment.

    » session prions: forensic traces and theoretical explorations of post-scripting, web-based infection vectors that confound our conventional assumptions regarding malware, browser security, and online privacy.

    » links to resources - including the Snowden documents - that offer hard proof of the existence and widespread deployment of CIN attack systems, and provide useful guidance for future research and defensive projects.

We strongly encourage feedback and contributions to this work, which itself remains in-progress and is likely to do so for a considerable period of time.

☂ ☂ An observational definition of Corruptor-Injector Networks

As a category, CINs such as #SauronsEye exhibit a small set of observable attributes reflecting the manner in which they function as large-scale network systems. This ground-level view is helpful in making sense of how they operate in the real world.

Prior to any direct interaction with an active CIN, targeted individuals are chosen ("selected") by a CIN operator or analyst. The selection is then implemented by tasking the CIN's listening posts across the internet to take immediate notice of any network session matching a fingerprint definition intended to include the target (and likely many other individuals, if necessary). When a live network session meeting the selector criteria is observed by the network, a separate 'shooter' system injects a corrupted additional data fragment into the target's ongoing session.

If the targeted sessions are https/tls encrypted, automated systems exploit the extensive security failures of the existing "certificate authority" model of pre-encryption session validation to gain Man-in-the-Middle status and inject a payload. Often these attacks require temporary hijacking of IP routing along the path from the target to whatever server she may be visiting at the time. Such hijacks range from DNS cache poisoning to ARP-based exploits, rooting of ISP-level routers, and as-yet-unknown techniques the NSA claims to possess which enable it to transparently spoof any IP address (v4 or v6) worldwide. The targeted session need not be to a particular, special website - the goal is simply to add the payload to an existing network session so it can be delivered to the target's computer.
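One practical, if crude, countermeasure against certificate-substitution MitM of this kind is out-of-band fingerprint pinning: record the SHA-256 fingerprint of a server certificate you have verified through an independent channel, and alarm on any change. In the sketch below, a throwaway self-signed certificate stands in for a real server's cert; in the field, the observed value would come from the live TLS handshake instead:

```shell
# Sketch only: generate a throwaway cert to stand in for a server
# certificate, pin its SHA-256 fingerprint, then re-check it.
# In practice, SEEN would come from the live handshake, e.g.:
#   echo | openssl s_client -connect mail.google.com:443 2>/dev/null \
#     | openssl x509 -noout -fingerprint -sha256
WORK=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.test" \
  -keyout "$WORK/key.pem" -out "$WORK/cert.pem" 2>/dev/null

# Pin recorded out-of-band at a moment the cert is known-good.
PINNED=$(openssl x509 -in "$WORK/cert.pem" -noout -fingerprint -sha256 | cut -d= -f2)

# Later observation of the "same" certificate.
SEEN=$(openssl x509 -in "$WORK/cert.pem" -noout -fingerprint -sha256 | cut -d= -f2)

if [ "$SEEN" = "$PINNED" ]; then
  echo "fingerprint unchanged"
else
  echo "WARNING: certificate fingerprint changed - possible MitM"
fi
rm -rf "$WORK"
```

Pinning trades flexibility for detection: legitimate certificate rotation will also trip the alarm, so every mismatch demands investigation rather than automatic blocking.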

That data will reach the target's computer and, when it arrives, open a door for further communications with the CIN. In a multi-step process, that initial foothold on the client computer expands into full administrative control ("root"). This accomplished, the CIN rootkit is able to hide itself from most external view by the computer's owner, and to put in place multiple recovery mechanisms in the event some or all of its original form is lost through deletion or routine system updates. This involves modular components that are delivered surreptitiously to the infected computer via an 'expert system' that takes into account operating system, network connection characteristics, and other variables. The NSA has written internally that the initial infection procedures of #SauronsEye, its own CIN, require little or no human interaction after the initial selection is made.

Once installed, the CIN rootkit begins amassing data from the local computer, as defined by the original selection. This will likely include web browsing history, web cache, documents, and full logs of operating system activity. Further, it will capture - for later upload - pre-encryption instant message chats, emails, and web sessions, which are stored locally as seemingly-innocuous cache files and trickle-exfiltrated via available physical network interfaces. In the case of #SauronsEye, the NSA notes that they have taken extreme measures to ensure that any client-installed software code is not discovered and captured for analysis; they reference self-destructive abilities in such software, in the event triggers suggest it is at risk of being exposed.

Firsthand reports and internal documents regarding #SauronsEye confirm that modifications to hard-drive/sector-level software are made by the CIN's infection agents, to ensure it remains active even after full hard drive deletions. The exact details of these functions are not yet publicly known, nor is their technical profile well-understood. However, it is clear that hard drives once infected with CIN-based software will likely need to be physically destroyed, as their integrity is permanently compromised.

The infected machine will be silently shifted over to a proxy-based network connection with the larger internet, allowing for full transparent control of local routing decisions and DNS lookups. In some cases, the local operating system is shifted over to a virtualised/paravirtualised/containerized model in order to enable full transparent network proxy control, as well as full OS kernel control. The target now uses her computer only from within the confines of an "evil hypervisor" - ring0 - and has no direct access to the kernel of her own operating system. Efforts to update operating system or applications in a way that would risk undermining the CIN's control or functions are redirected to modified installation files that are designed to remove that risk. In at least one instance, #SauronsEye was observed to have mounted the entire local hard drive partition table as a remotely-accessible resource, allowing it to be remote-loaded realtime, in full, by CIN operators.

Little is known of the procedures involved in de-selecting targets of existing CINs, or if such a process even exists. Once selected, targets will find themselves re-infected irrespective of the computer they use to connect to the internet, or the local internet connection in place. There is no reference to overt destruction of local computer hardware or stored data by the CINs documented thus far, but this ability is both inherent in their total OS control and strongly to the benefit of the CIN if there is a risk of their local installation being exposed by an end user.

All of the data exfiltrated to the CIN during the infection's span - which may cover years - is stored in repositories at the CIN, available for full wildcard query access by analysts there. Some or most may never be reviewed by a human being, and will sit idle in data stores indefinitely; however, all of it is available and indexed if and when needed.

In documents leaked by Snowden in 2013, the NSA's CIN architects admit they are able to infect millions of simultaneous targets and manage those infections concurrently. These admissions come in documents dated in the 2009-2012 timespan, so by now these numbers have inevitably grown larger. Further admissions are made that class-based selection of targets is already underway; for example, system administrators are targeted for infection in order to gain access to their administration credentials, and thus enable privileged access to other targets on the networks they administer. Thus, targets of CIN infection may not only have no idea why they are targeted - they may merely be indirect targets caught in the CIN's crossfire.

Finally we note that data captured and loaded to databases by the NSA under the pretext of "national security" are being widely shared with standard law enforcement entities, as well as other U.S. government agencies such as the IRS. This includes so-called "incidental capture" data from unintended targets of surveillance, whose information nevertheless is likely to end up in the hands of local law enforcement via remotely-available query tools the NSA has created to expand law enforcement access to their massive surveillance databases. When such data is used in domestic prosecutions, its origin in the NSA is hidden from the courts in a process called "parallel construction," so that legal problems associated with these mass surveillance systems are avoided.

No NSA employee, ex-employee, officer, or executive has ever been prosecuted for their role in overly-aggressive surveillance tactics, despite widespread agreement that such programs have routinely broken criminal statutes both in the US and in the rest of the world. However, NSA ex-employees who have reported these illegal abuses to the public have been aggressively prosecuted, jailed, and subjected to extreme forms of extra-legal pressure.

Of all the legally-dubious programs of mass surveillance undertaken by the NSA, it is quite possible that their first-of-its-kind CIN - #SauronsEye - is the most flagrantly, broadly, and deeply illegal under basically any statutory regime worldwide. With no oversight by courts or the public, millions of private computers are infected with the most tenacious, aggressive, and privacy-destructive software tools known to exist today. In some cases, hardware will become inoperable entirely - due to CIN malfunction, attempts to remove the CIN infection that trigger "suicide daemons" and brick hardware, and so on. Further, the economic and personal costs of these persistent, all-encompassing, seemingly inescapable infections have not yet been estimated, but likely run to large numbers which grow larger each year.

☂ ☂ One: Corruptor-Injector Networks, and the coming of CIN to life online

In general, our understanding of new threats in digital technology lags considerably behind the pace of development of these threats themselves. Rarely do we see examples of theoretical descriptions of threats preceding their appearance in the wild; rather, it is routinely the case that first we have sightings of previously-undocumented attacks that later are studied and described in the civilian literature. In short, there was no category called "computer viruses" before the first viruses were already out in the wild - the category followed behind the tangible example, by a considerable degree.

This dynamic is once again to be found in the case of CINs - lacking a category name for these entities, we are left attempting to shoe-horn them into previous categorical descriptions that fit poorly. As we have documented #SauronsEye, this stumbling block has been painfully impossible to ignore: without a category into which we could place these findings, they tended to fall through the cracks in terms of specialised researchers, analytic tools, and publication venues. There is little sense in proposing a new category for every new thing, of course, but we feel it is more than justified in this case - CINs are qualitatively different from other types of attack technology, and our ability to study and understand them is badly handicapped if we cannot group them into a class of similar entities.

We propose the name CIN because it brings together three of the core characteristics of these systems:
  • One, the capability that distinguishes a CIN from any other distributed ("cloud") online resource is its reliance on subverting other, existing systems in order to spread and remain extant over time. In one sense, the metaphor of a parasite could be used - the infection of a host, and symbiotic interaction over time between the two. However, that fails to account for the changing of the host by the infection agent - which is more like a virus capturing the DNA replication components of its "host" cell entirely, in order to create more copies of itself. CINs represent a syncretic combination of these models, which relies fundamentally on corrupting executable code and component functionality at many layers of network technology, on an ongoing basis. Additionally, they sow large-scale corruption of network routing and DNS resolution systems, as part of their core operating model - another corruption of otherwise-healthy, globally important resources. Thus, in a biblical sense, we observe that they act as corruptors.

  • Two, we have observed that CINs make use of injection-based attacks both for initial infection of targeted individuals, and in order to remain installed and functional on these target systems over time. The injections that we have observed largely take the form of changed payload in network traffic - with a particular emphasis on small modifications of binary packages pulled from operating system 'repositories' online, as well as http-based css and font files sent from webservers to browsers. In both cases, the data received by targets is not the same as what was sent by the original provider (or the original provider was entirely elbowed aside in the process), and an injected addition to the data channel has been interposed into the session. This is a qualitatively different attack model than the conventional one of 'rooting' servers and pushing out infection materials from there; it is also transient, difficult to document, and as such far less likely to be noticed and defended against in general. These systems are therefore injectors at core.

  • Three, the interconnected and distributed nature of these systems confirms that they exist as network-based entities, or they cannot exist at all. Like taking one leaf from a stand of aspen trees, capturing one session prion or other fragment of a CIN is not capturing the CIN itself, for it exists as a collection of inter-connected systems: a network. There are no prior examples of network-category malware - "#malnet" - that we know of in any scale or degree of widespread deployment, and as such we emphasise that CINs are natively-born creatures of the network.

Many other attributes of CINs refuse to fit comfortably into existing attack categories. They exist by their nature in an ever-changing dynamic equilibrium as their various sub-components are updated, refined, removed, or expanded - just as a ship can still be the same ship even if every piece of wood is replaced in sequence, a CIN is the same CIN even as its pieces move along as a wave-front. This simply cannot be said of other attack tools such as javascript malware, rootkits, or remote-exploit techniques for escalation to root. In those cases, if a tool changes so far that it shares no overlap with its named progenitor, it is renamed. In the case of CINs, doing so would be both confusing and result in a never-ending series of connected names.

Finally, CINs make the challenge of reliable attribution - already quite difficult with other types of attack technology - considerably harder still. Often, even if a client-level fragment of CIN technology is captured - a session prion, or corrupted repository package for example - it has no overt connection to anything whatsoever. Most likely, its "call-out" ability is not self-contained, but relies on interactions with other subtly-corrupted components of the target computing environment... and those will reach out across the network via encrypted channels that traverse standard commercial CDNs and are nearly impossible to fully map as a result. While we believe that CINs will be individually fingerprinted as researchers become more familiar with their characteristics, we also expect that such fingerprinting will be essentially behavioral rather than dissective as in conventional approaches. And of course, given the resources required to build, deploy, and administer functional CINs, it is highly unlikely their handlers will be so sloppy as to leave overt, discoverable fingerprints on target-deployed components. This requires, therefore, new approaches and creativity in the field of forensic exploration and attribution assignment - another characteristic common to all CINs.

Let us be clear: Corruptor-Injector Networks are not a theoretical future possibility. They exist today, at large scale, and inevitably will both expand in individual reach and be joined by newly-developed CINs as time goes by. The NSA alone, as Snowden's whistleblowing documented, was already capable of spreading CIN-based infections to millions of targets, years ago... and was aggressively expanding that program given that it was so effective against their targets. Needless to say, #SauronsEye is not the only CIN in existence - we assume, and the literature supports us, that there are a handful of global CINs already in full production, with smaller regional examples perhaps totalling a dozen or two more.

Inevitably, the massive cost of building and operating a CIN at the scope of #SauronsEye - the original CIN, as it were - follows an accelerating downward curve, making private CINs not only feasible but all but mandatory for powerful transnational entities seeking leverage online. "Attacks always get better, not worse," goes a much-repeated aphorism from mathematical cryptography. In the same way, CIN capabilities will increase, their costs will decrease, and we will find ourselves all but wallowing in CIN online (sorry, had to do it). The time to study, and protect against, these threats is now - not when they are so widespread as to be all but crippling for unprotected network usage.

☂ ☂ Safety & security in a corrupted, unstable, virulent network environment
Lasciate ogni speranza, voi ch'entrate
("abandon all hope, ye who enter here") reads the inscription under which all entrants to the Inferno must pass. Are CINs so menacing, all-seeing, and expansive that we must preemptively abandon any hope of successfully defending against them online? This is apparently a tempting conclusion for many people when first informed of the nature of CIN: new, complex, fluid, and shadowy (one colleague immediately labelled it "DarkBEAST" when she understood its nature), this class of threats can seem at first blush to be all but impossible to defend oneself from. Further, some colleagues have felt tempted to take an "act like it's not happening" stance in the face of this new attack, rather than face what seems an impossible task of defending against it.

Fortunately, there is no need to surrender in advance in this struggle to retain the integrity, security, and reliability of online communications in the face of massive surveillance weaponry. Already, cryptostorm is implementing a number of defensive mechanisms - based on our findings in this CIN research in recent months - and we are happy to share our designs, concepts, and full source code with the larger community in hopes others can echo these defenses into their own networks, as well as expand on them in new ways.

Yes, it is possible to remain free from the wages of CIN... but it requires a new way of considering attack modelling, forensic investigation, and defensive-toolkit development & deployment in production. In short this vast, globally-installed surveillance machine - #SauronsEye - is nevertheless vulnerable to agile, creative, community-based counter-strategies. This escalating "arms race" of surveillance munitions - a race where only one side is armed - continues to offer forward-thinking citizens the ability to remain safe, secure, and private online... but only if they step sideways from expired models of security and move fluidly into new, effective defensive models.

...we've decided to do an early publication of the first portion of this essay, so it's available for review and feedback even as the team finishes editing the final components - further, we're splitting forensic and technical materials and discussion into (soon-to-be-created) separate threads in this new subforum...

☂ ☂ ðëëþ.be ☂ ☂
by cryptostorm_team
Wed Mar 04, 2015 9:31 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: STUNnion - webrtc IP de-obfuscation of Tor .onion site visitors
Replies: 0
Views: 23647

STUNnion - webrtc IP de-obfuscation of Tor .onion site visitors

{direct link: | src: }

As a supplement to the STUNnion webrtc-over-Tor testing tool (native .onion URL | torstorm URL), we've collected some prior work on this subject here, both to provide additional material for folks curious and to thank those who had seen this as a topic of concern before we noticed it, too.

We also wanted to thank the good folks at the Tor Project for being well out in front of this, as is often the case. The Tor Browser Bundle - standard for many visitors to .onion sites - has been compiled without webRTC capability for several years. As a result, folks using the TBB are fully protected from STUN-based IP leaks. That's excellent work, and shows both foresight and attention to real-world security matters. Nicely done!

Without further ado, here's some prior writing on the topic of .onion-hosted STUN attacks:

* Just last month, Daniel Wendler wrote this short piece on the topic: "WebRTC deanonymizing Tor / VPN / Proxy users"
Software engineer Daniel Roesler recently discovered how the WebRTC implementation in Mozilla Firefox and Chrome expose your real WAN IP to the website you visit.

The Tor Browser Bundle does currently block WebRTC by default (or at least the demo doesn’t work).

When I use Tor through the normal Firefox / Chrome, my real IP is getting exposed to the website.

How does the Tor Socksproxy handle non-http requests?
As far as our testing has shown, Socksproxy does pass STUN queries... but we'd prefer others validate that finding, first. We're also collecting pcaps to verify packet-level characteristics, but didn't want to hold off on publishing longer than necessary, so we'll append those data to this thread.
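For anyone wanting to reproduce that test independently: a STUN Binding Request is simple to hand-craft per RFC 5389 and fire through whatever proxy path is under examination. A rough Python sketch (ours, not the tool linked above; the probe server and port are whatever the tester points it at):

```python
import os
import socket
import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed value mandated by RFC 5389

def build_binding_request() -> bytes:
    """Craft a minimal STUN Binding Request: type 0x0001, zero-length body."""
    header = struct.pack("!HHI", 0x0001, 0, STUN_MAGIC_COOKIE)
    return header + os.urandom(12)  # 96-bit transaction ID

def probe_stun(server: str, port: int = 3478, timeout: float = 2.0) -> bool:
    """Send the request over UDP; True if any plausible response arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_binding_request(), (server, port))
        data, _ = sock.recvfrom(1024)
        return len(data) >= 20  # a STUN header alone is 20 bytes
    except socket.timeout:
        return False
    finally:
        sock.close()
```

If a probe routed through the proxy under test gets a response, STUN - and thus the webrtc leak vector - is passing; pcaps of the same exchange confirm what actually went over the wire.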

* Barely a week ago, Ian Harris wrote this excellent piece on the topic: "Excuse me Sir, Your WebRTC is Leaking." It's got some useful advice on browser-specific settings modifications to protect against these leaks, although he points out that a long-term solution is ideally achieved via user-based explicit authorisation of STUN queries (which, unfortunately, isn't likely to happen):
Long term the ideal solution would be to have a user prompt whenever a WebRTC connection is being requested. This would be similar to the prompt requesting a user to authorise access to the camera and microphone. However, this solution relies on such a mechanism being mandated in the specifications and implemented by your browser provider.
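As a concrete example of the browser-settings route he describes: in Firefox, webRTC can be disabled outright with a single preference, via about:config or a user.js file. It's a blunt stopgap rather than the per-connection prompt Harris advocates, but it closes the STUN vector today:

```
// user.js - disable webRTC entirely, closing the STUN IP-leak vector
user_pref("media.peerconnection.enabled", false);
```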

* David Huerta's questions last summer about STUN over Tor produced a lively discussion, and much useful insight: "WebRTC via TorReport"
I've been experimenting with using WebRTC in a browser using Tor with Twilio to see if it's not totally impossible to do voice communication
in a way that anonymizes location (source IP). The problem is that Twilio WebRTC requires UDP connections over ports 10,000 to 60,000 and at least from my research (correct me if I'm wrong), Tor doesn't do onion routing for UDP traffic. As an alternative to WebRTC, there does seem to be a Twilio Client Flash option* which is TCP-only, but eww Flash. Any ideas on how to shoehorn UDP traffic into Tor-friendly TCP or do something else that would produce basically the same effect?

* Finally, in terms of what we found most directly relevant, the Tor Project has been actively discussing and mitigating against Tor-based STUN leaks for quite some time; a good starting point for the various projects & discussions around this topic is found in "Tor Weekly News — February 11th, 2015"
Even though Tor Browser is not vulnerable to the recent WebRTC IP attack proof-of-concept, Mike Perry nevertheless invited “interested parties to try harder to bypass Tor in a stock Firefox using WebRTC and associated protocols (RTSP, SCTP) with media.peerconnection.enabled set to false”, before a plan to enable WebRTC-based QRCode bridge address resolution and sharing in Tor Launcher is implemented.

* This, in turn, links out to several related discussions:

While far from complete, we hope these summary resources will serve as a good starting point for those seeking to dig deeper into the topic. Additionally, our webRTC threads here in the forum have quite a bit of collected data and pointers to the underlying technologies and concepts involved.

Thanks again to everyone who did the prior work in this area. We merely bolted it together in a way that helps spotlight some places where additional risk mitigation is warranted.

Best regards,
~ cryptostorm_team
by cryptostorm_team
Sat Feb 28, 2015 8:08 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: research tools & techniques for cleanVPN forensic analyses
Replies: 0
Views: 21587

research tools & techniques for cleanVPN forensic analyses

This is a placeholder thread for now.

We will be posting into it the various forensic tools we've used in our research thus far, and encouraging others with specialised expertise to expand and deepen the collection from there.

some contributions from pj:

Static analytic techniques to identify komodia libraries in unpacked executables:
From a technical perspective, the Komodia library is easy to detect. In our research, we found that the software that installs the root CA contains a number of easily searchable attributes that enabled us to match up the certificates we see in the wild with the actual software. These functions, which are Windows PE exports, include “CertInstallAll”, “GetCertPEMDLL”, “InstallFirefoxDirectory”, “SetCertDLL”, and “SetLogFunctionDLL.” Most of these libraries are designed to work on Windows 8 and will not install on older operating systems. Hopefully this information will give some good leads to researchers for further investigation.
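Those export names make a first-pass triage script nearly trivial. A Python sketch (ours, for illustration - a string hit is a lead for deeper analysis, not a verdict, and packed or obfuscated binaries will evade it):

```python
# Export names associated with the Komodia SSL-interception library,
# per the research quoted above.
KOMODIA_EXPORTS = [
    b"CertInstallAll",
    b"GetCertPEMDLL",
    b"InstallFirefoxDirectory",
    b"SetCertDLL",
    b"SetLogFunctionDLL",
]

def komodia_markers(binary: bytes) -> list:
    """Return the known Komodia export names found in the raw bytes."""
    return [name.decode() for name in KOMODIA_EXPORTS if name in binary]

def scan_file(path: str) -> list:
    """Convenience wrapper: scan an unpacked executable on disk."""
    with open(path, "rb") as f:
        return komodia_markers(f.read())
```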

VM-based unpacker/scanners:

From parityboy, a pcap scrubber:

~ cryptostorm
by cryptostorm_team
Sat Feb 28, 2015 6:29 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: cleanvpn forum admin notes
Replies: 0
Views: 20797

cleanvpn forum admin notes

We've moved most of the posts in this subforum - previously known as our "review" subforum - over to the general-purpose netsec one, to provide a cleaner starting point for the cleanVPN work. If you remember something being here, and can't find it, check there.

A few of the particularly objective, data-verified issues we've discovered in the past have been left here for now. If folks feel that's not appropriate, they're easy to move - we're happy to follow community guidance on that.

If you're interested in helping with the cleanVPN project, let us know with a post to this thread and we can get you moderator status here to start organising data and threads as they come in. We may create a moderators' forum where such work can be coordinated, if there is a sense it's worth doing.

So long as this cleanVPN forum is hosted here, it'll be covered by our broad no-censorship standards. We're not policing your words, and we ask you to stand by them if you post them here. Anonymous/non-registered guests are not only able to post, but encouraged to do so without registration. We do manual approvals of those posts only to fight the spambots, not for content. So long as you're not a spambot, you're good to go.

Registered members can edit their posts, send DMs, etc. Apart from that, guest/unregistered posters have access to the same features as registered posters. If you see something that's missing, raise a flag and we'll fix it.

File attachment limits here are set quite high, and most anything can be attached. Still, for code samples and so forth, github is way better - and it doesn't mean those data are stuck in their system either: anyone anywhere in the world can clone the entire repository to wherever they want, and anyone can fork it at github to develop their own direction. In that sense, it's a lot more open than this forum is. We encourage its use. But we didn't want to make the project github-only, as nontechnical folks sometimes find git mystifying (because it sort of is, tbh).

Any and all data posted here are publicly accessible, publicly available. Nobody "owns" these data. Cryptostorm is hosting this for now, but we don't own it. If there's a better place for it to go, post that info here and we're happy to move things where they're best suited.

Thanks for helping to make cleanVPN a healthy, constructive turning point in the industry!

~ cryptostorm
seed sponsor of
by cryptostorm_team
Sat Feb 28, 2015 5:14 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: working together to make things better
Replies: 4
Views: 33137

working together to make things better

It's time to do better, and the best way to do better is to make doing better the obvious choice.

In the last month or so, we've been drawn into a series of overlapping projects involving deep structural problems in the VPN service market. Beginning with the webRTC leak, running through as-yet unpublished findings that many "leak testing" websites are aggressively gathering data during tests to pass to adware schemes, and directly into the superfish/komodia investigations in which ssl session "kneecapper" programs have been discovered in wide deployment, it has been a busy time for many researchers.

For us, it has also been a sobering experience. As we shared recently in a somewhat frazzled-sounding series of short statements on twitter, we have been presented with data that overturn many of our previous assumptions about both https-based "secure" web browsing and the fundamental nature of the VPN industry (we don't like that term - "VPN service" - and rarely if ever use it to describe our service, but for now we're just using it to get past distractions in this post). Almost all of what we've found isn't good.

In short, we've seen - and collected - data that (to us) document the practice of VPN services including trojans, adware, keyloggers, and other overt malware in their closed-source client installation packages. This started with a deep dive into one particular situation, and then widened exponentially as we began checking other (even more clearly) ugly installer packages. Review of network activity during these installations ("pcaps") confirms the installation of specific binaries that are known by malware researchers to be, in a word, dirty code. Some of it is hidden with incredible cleverness; some is right there for anyone to see who looks. But it's there.

This has left us with something of a conundrum at cryptostorm. On the one hand, ignoring these findings and simply going about our business is very tempting: as we've spent weeks helping with these various community-based research projects, we've let many things slide a bit in our own operations (mostly marketing and public outreach, and never security matters, to be clear). We've been lax in returning correspondence, and we've generally been a bit distracted. That's just reality: as a small, focussed team there's no way we can do this sort of highly intensive forensic analysis without taking that time from other tasks.

Further, to be blunt, the whole thing feels wrong. And that matters to us.

By the end of this week, it had devolved into a process of cycling through VPN service installers, unpacking and scanning them to see if there was badness inside. Often there was, requiring deeper analysis to confirm. And then what? The idea of publishing "hit pieces" including our data, as one-offs, has left our team cold. It's not who we are - we do best when striving for improvements, not when attacking others. But, to sit on these data is also not possible: these packages are being installed by many people trusting in the integrity of the VPN industry, and not thinking for a minute that they're opening themselves up to serious security problems as a result of installer-dropped malware.

By the end of the work week, we'd talked as a team and agreed to sit with the question for a bit.

Now, we feel we've come to a constructive path forward:

Rather than go hunting for dirty VPN installers and services, we choose to create a space in which clean operations can be highlighted, rewarded, and covered by ongoing independent review of their software integrity. Reward the good, and the bad will become less rewarding. This is the choice we as a team, at cryptostorm, have made.

To be clear, we see no benefit in creating yet another "VPN review" list. There are already far too many with far too little to offer in constructive results (indeed, often they are overtly evil themselves), in addition to being utterly subjective in nature. Nor do we seek to impose our own standards, as cryptostorm, on what is considered good or bad. These are questions for other venues and other discussions.

Rather, cleanVPN will focus on overt, independently-verifiable markers of code and service integrity including:
    - scans of all currently- and previously-distributed pre-compiled installers to verify absence of any known malware activity
    - publication of full source code for all client-side applications and installers
    - independent builds from source to verify that source is in fact resulting in binaries as distributed
    - independent publication of hashed fingerprints of compiled installers, to ensure fake/infected versions can be spotted and removed
    - test connections with default settings from distributed installers, to confirm actual VPN sessions result
    - monitoring of test connections to confirm no proxy- or hijack-style out-of-band traffic is taking place
    - websites are free of any aggressive, script-based adware or data grabbing schemes
    - DNS records are properly propagated, and subhosts/hostnames/vhosts are well-enumerated (more on that in a separate post)
    - provisioning of identity-validated https-based website service to ensure sites are not being MiTM'd themselves during installer download
This last item may be surprising, given our increasingly-strident public statements regarding the integrity of the "Certificate Authority" system of verifying https session identifiers. However, as we see obvious short-term methods to overcome these issues, we see no reason not to include legitimate https validation of VPN service websites as a minimum standard to expect of clean VPNs. Loading such sites over insecure http sessions is ridiculous, frankly, and any service that can't provision a server to do so correctly - with independent test results to confirm this - should not be in this business.
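For the hashed-fingerprint item in the list above, the member-side verification step is simple enough to sketch in a few lines of Python (function names ours; this assumes the provider publishes SHA-256 hex digests via an independent channel):

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """SHA-256 hex digest of in-memory data."""
    return hashlib.sha256(data).hexdigest()

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a large installer through SHA-256 without loading it whole."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published(path: str, published_hex: str) -> bool:
    """True only if the downloaded installer matches the published fingerprint."""
    return sha256_file(path) == published_hex.strip().lower()
```

A mismatch means the installer in hand is not the one that was audited - whether through tampering, corruption, or a stale mirror - and shouldn't be run.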

We will help to prime the pump by contributing our findings thus far, in raw form, via a newly-initialised github repository. We'll loop in any and all researchers who choose to make commits to that repo, which is intended as an open public resource for data gathering, analysis, and publication. We hope to see the project graduate out from our forum here, and into a standalone effort... but we didn't want to stall the process itself by waiting for those steps before getting things started. So we'll do an iterative rollout, and seek to hand off as much of the infrastructure element to the cleanVPN project as we can, as quickly as it can be done.

Our hope is that the cleanVPN project can produce a published list of "known-clean" providers, who themselves can then make use of this seal of approval in their own public pronouncements. We leave the details of this process to the community for development as things progress from here, but it shouldn't be a terribly complex decision given what we've seen in the analyses so far: the dirty ones are dirty top to bottom, and the clean ones are clean from the first scan to the last. There seems to be little middle ground.

We're not stepping away from this work, as a team. Yes, we've considered it. Being in the maelstrom of public statements that "most VPN services install malware" (not our words specifically, but inevitably that will be the tl;dr version in some places) opens us up to attacks on our team, our service, and our operations far beyond the normal levels: the dirty shops, of course, know they're dirty and if they could only shut us up, their dirty (and, one infers, quite highly profitable) operations can continue unhindered. We do expect to have smears and attacks launched our way - it's happened before any time we've gone anywhere near these issues, and surely it will happen here.

But that is not reason enough to walk away from this.

Once the sunlight is allowed to shine on the darker corners of the VPN industry, we hope that the old practices of dirty smears and underhanded personal attacks will be left in the past. We prefer to see a future where network security services compete based on quality, competence, and features... not based on who bribed the review sites best, or which can cram the most malware into installers without them crashing entirely. Further, we see our "competition" as bareback network access: the vast majority of the world's billions of online citizens who have no network security whatsoever. That's where we focus our own efforts, not on trying to one-up someone else in our field.

That is our choice, as a team.

We hope other researchers, both within the VPN industry and crucially from beyond its stifled confines in the broader security tech community, will share their time and expertise to help get cleanVPN off to a good start. This project requires such generosity to succeed, and it's worth it. People deserve to have confidence that well-known VPN services aren't installing backdoors in their computers when they pay for privacy service, and everyone deserves to have a wide selection of network privacy tools from which to choose. These things matter, and we need to create a playing field on which good actions and good deeds are rewarded.
Reward the good, and the temptation to do bad diminishes. Simple, but effective, we feel.

Let's do it.

  • ~ cryptostorm team
by cryptostorm_team
Fri Feb 13, 2015 5:29 am
Forum: member support & tech assistance
Topic: Synology NAS DSM Connection
Replies: 13
Views: 17705

Re: Synology NAS DSM Connection

We put this question to one of our dev team heavyweights, who has perhaps been spending a bit too much time lately working on deepDNS & webRTC to be presentable in civilised company just yet. His reply, verbatim:
my advice with any NAS is to root the bitch and use OpenVPN like a normal hakkar :E
Sooo... we'll put this to the tech support folks & see if they are a bit more constructive; more information to be posted here shortly.


~ cryptostorm_team
by cryptostorm_team
Thu Feb 05, 2015 8:07 pm
Forum: independent cryptostorm token resellers, & tokens 101
Topic: stormcoins: cstorm facilitation of token resales, token hash excisions, & token refreshes
Replies: 6
Views: 30360

sample aleph transfer facilitation language

Here is the language used in the aleph transfer process:
Greetings -

We've been asked to act as transfer agent in the sale of a cryptostorm aleph token. The token being transferred to you has been marked as returned to minting pool, and your newly-issued aleph token is:

{newly-minted aleph}

Please do not lose it, as it cannot be replaced. We suggest you encrypt it with a good passphrase, and store its encrypted container in a couple of free "filesharing" sites such as - then if you need to prove ownership of it in the future, you can pull a copy from cold storage and generate the SHA512 hash if requested.

Cryptostorm does not keep a list of aleph tokens sold - or any tokens sold. Production nodes each contain an independent copy of the hashed tokens minted thus far; without the token itself, reversing these hashes is very, very difficult. Someone getting ahold of your hashed token can use cryptostorm for free, but if you want to contact us (as an impartial intermediary) and prove ownership of the token itself - not just the hash - you can show us the token securely, and we can verify that the token generates the hash. With that, we could expunge the compromised hash from the production systems (a manual process, and one we'll only do after token-ownership validation as per above), issue you a newly-minted aleph, and you're back to unique control over the hash.

However, if you lose control of the token itself and someone else can meet this forward-hash challenge, they will be able to expunge the production hash... and the first one to do that has sole control of the replacement token.
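For illustration, here is roughly what the forward-hash verification looks like. The token shown is a made-up example, and the exact hashing details on production nodes may differ (e.g. input normalisation); this is a sketch, not a spec.

```python
import hashlib

def token_hash(token: str) -> str:
    """Hex SHA-512 of a token string. Illustrative only: the
    production hashing scheme may normalise or salt its input."""
    return hashlib.sha512(token.encode("utf-8")).hexdigest()

# A made-up aleph token, standing in for a real one:
token = "example-aleph-token-0000"
stored_hash = token_hash(token)  # what a production node would hold

# The forward-hash challenge: the holder reveals the token, and the
# verifier recomputes the hash and compares it against the stored copy.
assert token_hash(token) == stored_hash
print("token matches stored hash")
```

The point of the scheme is asymmetry: the hash can sit on every production node, but only the holder of the original token string can regenerate it on demand.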

Here are the details regarding your aleph email account:

email user: {identifier}@CRYPTOSTORM.IS
email pass: {password}
IMAP IN: - port 993 (SSL/TLS)
SMTP OUT: - port 465 (SSL/TLS)

If you've any troubles with it, let us know. These accounts run on the same production mail systems as our in-house team's, so we're pretty familiar with their inner workings. Note that, via the webmail interface, you can cycle your own password without our involvement. For obvious reasons, we suggest you do so once you have control of the account.
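For example, the settings above map to Python's standard mail libraries roughly as follows. The hostnames and credentials here are placeholders (the real hostnames are omitted above): incoming mail is IMAP over implicit TLS on 993, and outgoing mail is SMTP over implicit TLS (SMTPS) on 465.

```python
import imaplib
import smtplib
import ssl

# Placeholders; substitute the host and credential values from your
# transfer email (the real hostnames are omitted here).
IMAP_HOST = "imap.example.invalid"
SMTP_HOST = "smtp.example.invalid"
USER = "identifier@CRYPTOSTORM.IS"
PASSWORD = "your-password"

def open_mailbox() -> imaplib.IMAP4_SSL:
    """Incoming mail: IMAP over implicit TLS on port 993."""
    box = imaplib.IMAP4_SSL(IMAP_HOST, 993,
                            ssl_context=ssl.create_default_context())
    box.login(USER, PASSWORD)
    box.select("INBOX")
    return box

def open_outgoing() -> smtplib.SMTP_SSL:
    """Outgoing mail: SMTP over implicit TLS (SMTPS) on port 465."""
    out = smtplib.SMTP_SSL(SMTP_HOST, 465,
                           context=ssl.create_default_context())
    out.login(USER, PASSWORD)
    return out
```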

Congratulations on becoming an aleph holder.

Best regards,

~ cryptostorm_team
by cryptostorm_team
Wed Feb 04, 2015 10:08 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: St. Petersburg (Russia) exitnode cluster | anchor node =
Replies: 1
Views: 20492

St. Petersburg (Russia) exitnode cluster | anchor node =

We are pleased to announce the general availability of our anchor node in cryptostorm's new Russia-West exitnode cluster:

Little need be said of the need for Russians and those living in Russia to protect themselves from the depredations of thuggish central hegemony. A proud people and a proud land, today Russia suffers under the yoke of rampant kleptocracy. Secured data communications won't solve that tragic circumstance... but it goes a little way towards creating a space for diversity and compassion to retain a heartbeat of vitality.
laikalogo.jpg (39.31 KiB) Viewed 20491 times
As to our choice of names for this anchor node, many excellent ideas were proposed by the community. In the end, it is Laika - the canine thrown from the planet by humans in a mad race to outdo each other in their ability to create technologies of death. Laika, trusting as she was strapped into a one-way ticket to death alone in the vacuum of empty space, not knowing that her future was bleak and brutal... no heroine's welcome home for her.

She died alone, miserable and desperate and unaware of why it all happened...
Originally, it was uncertain how long Laika had survived in space, with initial estimates ranging from twenty-four hours to one week and the possible speculation that she had lived for as many as ten days. The method of Laika's death was also unknown initially. One rumor suggested that the last of the food in her dispenser contained a poison which put her to sleep just before her life-support batteries ran down...another that her chamber was eventually filled with gas for painless euthanasia after a few days in orbit...or that she may have expired when her oxygen supply depleted...or that she succumbed to extreme cold. In 1999, several Russian sources stated that Laika had died after four days in space when the cabin overheated. However, in October of 2002, during a gathering of the World Space Congress in Houston, Texas, it was revealed by Dr. Dimitri Malashenkov of the Institute for Biological Problems in Moscow, that after five to seven hours following the launch of Sputnik-2, no lifesigns were being received from Laika. By the fourth orbit, it was apparent that the little dog had passed away from overheating and stress...undoubtedly an exceedingly painful and distressful death. According to Gyorgi Grechko, a cosmonaut who previously worked as an engineer at the Korolev Design Bureau, it seems likely that when Sputnik-2 bounced off the atmosphere, it failed to separate from the booster rocket and thereby rendered the thermal control system inoperative.
Our technological tools are neither benign nor evil. They are mechanisms by which we are able to amplify our own attributes and assumptions, no more & no less. The same technological marvels that sent Laika to her horrible, lonely death had the power to help create a healthy future for everyone: a living planet. The decision to use it to cause harm is a decision... not an attribute of the tool itself.

We choose to enable vibrant community & society through the creation and distribution of technical tools that foster innovation, creativity, collaboration, compassion, and respect. For us, honouring the tragic memory of Laika reminds us that we carry the responsibility for the consequences of our choices.

What future do we choose to create?
Laika's HAF entries are:


(5.37 KiB) Downloaded 724 times
by cryptostorm_team
Wed Feb 04, 2015 4:59 am
Forum: independent cryptostorm token resellers, & tokens 101
Topic: 3 aleph tokens for sale {two sold & transferred | one remains}
Replies: 14
Views: 43412

aleph token resales

Cryptostorm fully supports resales of tokens - and aleph tokens, given their lifetime credentials, are a particular focus of our work to ensure token transactions are smooth, low-friction, and drama-free.

We had to think out the mechanics of this, because previously the assumption was that token resales would be mostly self-managed by market feedback: basically, someone selling "fake" or double-spent tokens would find themselves ostracised when word got out. That's sort of a starting point, but most aleph sales are likely single-point transactions. No feedback there.

Down the road, it'd be quite easy to embed transforms of alephs in a blockchain and use that to validate transfers of the underlying asset. This is a bread-and-butter use of blockchain tech, and if there's enough volume to justify it, we're happy to support it with a bit of 'chain integration (folks could do it themselves, of course, with their own commit procedures in place).

For now, here's what we think makes most sense: when a transaction is agreed to, we're happy to act as escrow agent. Both for transfer of funds - be they coins or other instruments - and for the tokens themselves. But, of course, escrowing tokens doesn't prevent double-spend. To prevent that, we'll have the original aleph returned to us, and we'll issue a new aleph to the buyer. That way, the buyer knows she isn't getting an aleph that's been copied out to multiple other people.

The risk of this, of course, is that the seller keeps using her aleph even after "returning" it to us. We discussed this and... we have faith in our community - and in aleph owners in particular - that this won't be an issue for now. If it ever is, we'll shift over to blockchain transfers.

In the meantime, if anyone wants to use our blockchain-embed use-case as a project for collaborative development, we're game for that. It'll be a good extension of our core decentralised model - and we'll surely be using 'chains for other such procedures in the future.
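To make the blockchain-embed idea a bit more concrete, here is a minimal sketch of the commitment side. Everything in it is an illustrative assumption, not a cryptostorm spec: the salting scheme, the token string, and the choice of a 32-byte SHA-256 digest (small enough to fit, for example, in a Bitcoin OP_RETURN output) are all hypothetical.

```python
import hashlib

def aleph_commitment(aleph_token: str, salt: bytes) -> bytes:
    """A salted hash 'transform' of an aleph - small enough to embed
    in a blockchain record without revealing the token itself."""
    return hashlib.sha256(salt + aleph_token.encode("utf-8")).digest()

# Hypothetical transfer: the parties agree on a per-transfer salt, and
# the resulting 32-byte commitment is what would be committed on-chain.
salt = b"transfer-2015-02-04"
commitment = aleph_commitment("example-aleph-token", salt)
print(len(commitment), commitment.hex()[:16])
```

Validating a transfer then reduces to recomputing the commitment from the revealed token and salt, and checking it against the on-chain record.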


~ cryptostorm_team
by cryptostorm_team
Tue Feb 03, 2015 12:42 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: Cryptostorm's DNS resolvers: DNSchain + DNScurve + Iceland-based
Replies: 9
Views: 44060

Cryptostorm's DNS resolvers: DNSchain + DNScurve + Iceland-based

{direct link:}

For several years, we've discussed amongst our team and with our community the pros and cons of bringing DNS resolver functionality for on-cstorm sessions in-house (separate threads found here and here and here, for a start). As that process unfolded, our tech folks cast a wide net across the literature and best practices surrounding DNS resolution and DNS security in general.

Anyone who has explored this area of technology can confirm that the waters run deep. Whether it be the often-misunderstood question of DNS leaks & how to prevent them, the general obscurity of so much client-side DNS behaviour in tunnelled settings, or the 'right' DNS settings for cryptostorm members, there are hundreds of posts in the forum collating information, reviewing research, pulling together advice, and sifting useful security kit from the cryptographic catastrophes of failed DNS security efforts so commonly seen in the wild.

After a few years of work invested (on & off) by our team - and new projects developing during the interim - we saw potential coalescing for genuine improvement in both DNS security/resilience against censorship, and in on-cstorm session performance (faster lookups, lower latencies).

Back in December, we deployed a beta-version of a DNSchain-supported resolver on a test machine in our network. DNSchain is an architecture that implements Namecoin blockchain-based resolution of .bit TLD names... names that can't be seized, hijacked, or subverted by state authority - as is so trivially easy to do under the current Certificate-Authority/TLD-based model & all conventional efforts to fix its hideous flaws.

In parallel, we've been testing DNScurve, a cryptographically-robust method to defeat substantial chunks of known attacks on DNS-lookup systems as they pass through the various layers of network resources. Despite the similarity in names, the two tools - DNSchain and DNScurve - really address different components of the attack-surface landscape. Together, they close off big chunks of weak real estate.

And finally, we've been testing various methods for doing the actual boring work of query-by-query DNS lookups via the delegated-authority model underlying the entire IP-to-domain model of packet-switched networking that we know as the internet. This is, itself, a world of layers upon layers - shortcuts that bring misery and elegant solutions that avoid enormous morasses of fruitless work; competing recommendations from smart folks who have studied these questions their entire professional lives, and DDoS skiddies looking for the latest easy amplification attacks to jack up their booter firepower. The final weird factor in these decisions is the near-ubiquitous reliance on "DNS leak" testing websites, each of which implements closed-source, obfuscated code (usually brutalist javascript) to convince a web browser to ask some part of the kernel what resolver it would use if - theoretically - it were resolving something (most follow this model; a small minority throw actual lookups out of the browser sandbox, usually via privilege-escalated custom Java applets that can pull off such tricks legitimately). All the while, all sorts of other bits and pieces in local client computing ecosystems may well be making their own decisions about what resolvers to use, and when. Plus there's the local router or gateway, and it's likely got its own hard-coded resolvers it wants to use when it feels like it.

Oh, also there's IP6 leaking all over the place in these configurations: from applications, from kernel processes, from physical NICs, from hideous monstrosities like Microsoft's force-down-your-throat Teredo nightmare. Oh, also there's other devices on the LAN shouting out their own IP6 "next-neighbour-discovery" queries... some of which may well route out through the gateway and into the wilds of the internet (there's no such thing as "private IP6 addresses," after all). Fire up a packet capture suite, and watch as this putrid tide of IP6 spew rolls around your LAN and out into the waiting arms of whoever's listening upstream from your vuln-riddled, kernel-unpatched little gateway hardware...

But when folks visit those mysteriously-opaque "DNS leak test" sites, they'd better get the results they expect to get - if they get variant results, irrespective of whether there's any actual issue or not, they'll go into a wild panic. Which is totally fair: DNS is complex to do even reasonably well for full-time network admins... cryptostorm members who aren't full-time geeks need clear markers for "safe" or "unsafe" - and these leak test sites have become that, whether or not they're worth the virtual paper they're not printed on, in terms of accuracy.

From all that, we've distilled down two production DNS resolvers. For now, they're publicly available - anyone can use them, on-cstorm or off-cstorm (i.e. bareback):
Why hostnames, and not simply IPs? There are a small number of use-case scenarios where hostname-defined resolvers can be used in production. Beyond that, it provides some flexibility to update to new IPs as needed, even if such queries are manual rather than onboard a specific application. Practically speaking, however, the vast majority of uses of these will be via the two physical IPs.

Those two IPs are hosted on one of our best-provisioned exit nodes anywhere in our network: - which itself sits in the safe confines of our most-beloved datacentre anywhere in the world, Datacell EHF. Thanks again, guys! :-)

Why are they prefixed as "mmm" and not the traditional "dns1" nomenclature? Because... mmmmmmm, these are damned fine DNS resolvers, yes they are! :-P

These two "mmm" resolvers will shift over to on-cstorm-only availability in the near future. That's why we've also created two parallel hostname-resolver entities that will always be publicly available and free for anyone to use. To do this, we'll migrate these resolvers off to a dedicated machine (or machines) so they don't run the risk of impacting production network performance. The resolver/IPs are:
You can likely guess why they're named as they are. (Note also that the deprecated and resolver hostnames have been updated to these current IPs, but will be quietly retired once this new public/private split resolver pool is fully deployed.)

And in the coming days, we'll be implementing cluster-based resolver instances. This way, every geographic cluster will have its own on-site recursive resolvers to call - a few milliseconds away. This will dramatically speed lookup response times, which translates into a feeling of "faster" internet access (it really does). This also allows us to add mesh-based resolver redundancies between geo-close clusters, so attacks on specific nodes in a cluster can never offline resolver functionality on other nodes in the cluster.
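As a rough way to feel the difference resolver proximity makes, one can time stub-resolver lookups from the client side. This sketch uses whatever resolver the operating system is configured with; 'localhost' is used only because it resolves locally and offline, while a real comparison would time lookups against a far-away public resolver versus a cluster-local one.

```python
import socket
import time

def lookup_ms(name: str, repeats: int = 3) -> float:
    """Median wall-clock time, in milliseconds, for a stub-resolver
    lookup of `name` - a rough proxy for resolver round-trip latency."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        try:
            socket.getaddrinfo(name, None)
        except socket.gaierror:
            pass  # failures are timed too; they still traverse the resolver path
        samples.append((time.perf_counter() - start) * 1000.0)
    return sorted(samples)[len(samples) // 2]

print(f"localhost lookup: {lookup_ms('localhost'):.2f} ms")
```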

The full technical details of what's been deployed, how it goes about various categories of resolution-query completion, and what cryptographic primitives are implemented in each inter-locking layer of this structure, have been split off into a separate post (below) to avoid having this post get even longer than it is.

- - -

There's a good bit of future roadmap yet to be publicly disclosed, when it comes to our resolver framework.

For example, while we've already implemented .bit TLD lookups transparently via our DNSchain'd resolvers (if you're using cryptostorm's 'mmm' resolvers, you can click on https://cryptostorm.bit and you'll see our main website load seamlessly; if you're not, nothing will come back from the DNS query thrown by the kernel), we'll soon be rolling out a parallel capability to transparently access .onion Tor hidden services sites, when on-cstorm.

At that point, our torstorm cstorm-Tor gateway service will be opened up to everyone - not only on-cstorm access as it is currently structured. Those on-cstorm won't need it, as our resolver system will do all that work behind the scenes and .onion sites will just load like any other site in a browser (yes, we'll retain the heavily-tuned cryptographic suite cascades protecting on-cstorm .onion site access, of course). All that will change is the removal of any need to replace 'onion' with '' in the URL of hidden services sites.

We're also implementing (via i2pd, the C++ instantiation of i2p's original Java architecture) the same transparent-access-via-browser for on-cstorm sessions for eepsites, those with the suffix .i2p that are hosted inside the i2p "network-within-a-network" security model (these features are called "inproxies" and "outproxies" in the context of eepsites). Likely we'll enable public access to these between-network gateways via some mechanism similar to torstorm, as discussed above.

Next, we'll open up these between-network gateways to a broader range of packet traffic than only .onions/eepsites: already, we run a dedicated 100 megabit Tor relay ( as a donated resource to help support the Tor Project. It acts as a testbed and a way for us to become more experienced with the performance-tuning challenges of torrc-based network transit architectures. A test/dev box to act as a dedicated i2p router is also in process.

We're also baking last-mile DNScurve cryptographic hardening into future widget releases, likely via a src-modified fork of the existing DNScrypt Windows libraries (DNScrypt is, itself, a specific instantiation of the DNScurve model - as is CurveDNS). With a bit of trickery, we'll be able to support cstorm-resolver DNS queries from widget-based session connections before the cluster/balancer resolvers have even been queried, thus adding an additional strong layer of MiTM protection for use in highly unfriendly local network contexts.

All this control over resolver queries and server-side resolver functionality - from Namecoin lookups to c25519-based PFS crypto wrappers - allows us to step forward into a whole new layer of network capabilities for on-cstorm activities. We'll be able to wrap not only node/cluster-to-client bindings fluidly and (if we desire... which we will desire) stochastically within loose outcomes-based heuristics, but also the very core of the HAF interconnections (IP/instance to hostname and balancer). We'll then have access to full-power 'fast flux' dynamic network re-architecting on the fly. This is the sine qua non of resilient defence against entire classes of highly effective, difficult-to-avoid attacks on tunnelled/encapsulated secure networking tools, from Tor to cryptostorm and everywhere in between (think Great Firewall, for example). Fast-flux evolved as a tool enabling malware and botnets to evade shutdown efforts - we're just turning that on its head, and using the tool to help folks evade efforts to block access to secure, reliable network resources. (More on fast-flux here and here.)

And beyond that, we can enable server-side protections against vast classes of known attacks on, and censorship of, traditional PKI/CA DNS records - so even though national governments can mess with root-level domain-to-IP mappings, we can recognise what is essentially a nation-state Kaminsky attack and fall back to alternative resolver resources.

Oh, and we can execute fine-grained control over things like the 'randomness' of source port assignments within DNS resolver queries over time - thereby protecting against still more classes of known, in-the-wild DNS resolution 'cache poisoning' attacks. In addition to solid improvements in security and test results from well-designed vuln-analysis tools, we get very nice-looking charts like this, as well:
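To sketch what source-port randomisation involves at the packet level: an off-path spoofer trying to poison a cache must guess both the query's 16-bit transaction ID and its UDP source port, so randomising both multiplies the attacker's search space. This example builds a minimal RFC 1035 query and binds a randomly-ported socket, but does not send anything (the resolver IP is omitted, as above); the port range chosen is an arbitrary illustration.

```python
import random
import socket
import struct

def dns_query_packet(name: str) -> bytes:
    """Build a minimal DNS query for an A record, with a random
    16-bit transaction ID (RFC 1035 wire format)."""
    txid = struct.pack(">H", random.SystemRandom().randrange(65536))
    header = txid + struct.pack(">HHHHH", 0x0100, 1, 0, 0, 0)  # RD=1, one question
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def randomised_query_socket() -> socket.socket:
    """UDP socket bound to a random ephemeral source port, so an
    off-path spoofer must guess both the port and the transaction ID."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", random.SystemRandom().randrange(20000, 60000)))
    return sock

packet = dns_query_packet("cryptostorm.is")
sock = randomised_query_socket()
# sock.sendto(packet, ("<resolver-ip>", 53))  # resolver IP omitted here
print(len(packet), sock.getsockname()[1])
sock.close()
```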
Needless to say, investing the long-term research, analysis, testing, and development resources in DNS resolution functionality for cryptostorm is a decision we feel very good about, as a team. It may not be as sexy as some kinds of content-free marketing hype on the surface... but it has the benefit of providing genuine, sustainable improvements both in members' security whilst on-cstorm, and in network resilience in the face of DoS/DDoS or resource-censorship attacks by highly-resourced attackers.

DNS-centric systems are hardly the only area of focus for the team, as we continue to expand and improve the core network architecture and security model through 2015 and beyond.

We'll use this thread as an ongoing resource for sharing information on this part of our network framework.
  • ~ cryptostorm_team
by cryptostorm_team
Fri Jan 23, 2015 12:37 am
Forum: general chat, suggestions, industry news
Topic: Barrett Brown allocution
Replies: 1
Views: 8054

Barrett Brown allocution

Good afternoon, Your Honor.

The allocution I give today is going to be a bit different from the sort that usually concludes a sentencing hearing, because this is an unusual case touching upon unusual issues. It is also a very public case, not only in the sense that it has been followed closely by the public, but also in the sense that it has implications for the public, and even in the sense that the public has played a major role, because, of course, the great majority of the funds for my legal defense was donated by the public. And so now I have three duties that I must carry out. I must express my regret, but I must also express my gratitude. And I also have to take this opportunity to ensure that the public understands what has been at stake in this case, and why it has proceeded in the way that it has. Because, of course, the public didn’t simply pay for my defense through its donations, they also paid for my prosecution through its tax dollars. And the public has a right to know what it is paying for. And Your Honor has a need to know what he is ruling on.

First I will speak of regret. Like nearly all federal defendants, I hope to convince Your Honor that I sincerely regret some of the things that I have done. I don’t think anyone doubts that I regret quite a bit about my life including some of the things that brought me here today. Your Honor has the Acceptance of Responsibility document that my counsel submitted to you. Every word of it was sincere. The videos were idiotic, and although I made them in a manic state brought on by sudden withdrawal from Paxil and Suboxone, and while distraught over the threats to prosecute my mother, that’s still me in those YouTube clips talking nonsense about how the FBI would never take me alive. Likewise, I didn’t have the right to hide my files from the FBI during a lawful investigation, and I would’ve had a better chance of protecting my contacts in foreign countries if I had pursued the matter in the courts after the raid, rather than stupidly trying to hide those laptops in the kitchen cabinet as my mother and I did that morning. And with regard to the accessory after the fact charge relating to my efforts to redact sensitive emails after the Stratfor hack, I’ve explained to Your Honor that I do not want to be a hypocrite. If I criticize the government for breaking the law but then break the law myself in an effort to reveal their wrongdoing, I should expect to be punished just as I’ve called for the criminals at government-linked firms like HBGary and Palantir to be punished. When we start fighting crime by any means necessary we become guilty of the same hypocrisy as law enforcement agencies throughout history that break the rules to get the villains, and so become villains themselves.

I’m going to say a few more words about my regrets in a moment, but now I’m going to get to the unusual part of the allocution. I’m going to make some criticisms of the manner in which the government has pursued this case. Normally this sort of thing is left to one’s lawyers rather than the defendant, because to do otherwise runs the risk of making the defendant seem combative rather than contrite. But I think Your Honor can walk and chew bubble gum at the same time. I think Your Honor understands that one can regret the unjust things one has done, while also being concerned about the unjust things that have been done to him. And based on certain statements that Your Honor has made, as well as one particular ruling, I have cause to believe that Your Honor will understand and perhaps even sympathize with the unusual responsibility I have which makes it necessary that I point out some things very briefly.

I do so with respect to Your Honor. I also do it for selfish reasons, because I want to make absolutely certain that Your Honor is made aware that the picture the government has presented to you is a false one. But it is also my duty to make this clear as this case does not just affect me. Even aside from the several First Amendment issues that have already been widely discussed as a result of this case, there is also the matter of the dozens of people around the world who have contributed to my distributed think tank, Project PM, by writing for our public website, Incredibly, the government has declared these contributors — some of them journalists — to be criminals, and participants in a criminal conspiracy. As such, the government sought from this court a subpoena by which to obtain the identities of all of our contributors. Your Honor denied that motion and I am very grateful to Your Honor for having done so. Unfortunately the government thereafter went around Your Honor and sought to obtain these records by other means. So now the dozens of people who have given their time and expertise to what has been hailed by journalists and advocacy groups as a crucial journalistic enterprise are now at risk of being indicted under the same sort of spurious charges that I was facing not long ago, when the government exposed me to decades of prison time for copying and pasting a link to a publicly available file that other journalists were also linking to without being prosecuted. The fact that the government has still asked you to punish me for that link is proof, if any more were needed, that those of us who advocate against secrecy are to be pursued without regard for the rule of law, or even common decency.

Your Honor, I understand that this is my sentencing hearing and not an inquiry into the government’s conduct. This is not the place to go into the dozens of demonstrable errors and contradictions to be found in the government’s documentation, and the testimony by the government. But it would be hypocritical of me to protest the government’s conduct and not provide Your Honor with an example. I will do so very briefly. At the September 13th bond hearing, held in Magistrate Judge Stickney’s court the day after my arrest, Special Agent Allyn Lynd took the stand and claimed under oath that in reviewing my laptops he had found discussions in which I admit having engaged in, quote, “SWATting”, unquote, which he referred to as, quote, “violent activity”, unquote. Your Honor may not be familiar with the term SWATting; as Mr. Lynd described it at the hearing it is, quote, “where they try to place a false 911 call to the residence of an individual in order to endanger that individual.” He went on at elaborate length about this, presenting it as a key reason why I should not receive bond. Your Honor will have noted that this has never come up again. This is because Mr. Lynd’s claims were entirely untrue. But that did not stop him from making that claim, any more than it stopped him from claiming that I have lived in the Middle East, a region I have never actually had the pleasure of visiting.

Your Honor, this is just one example from a single hearing. But if Your Honor can extrapolate from that, Your Honor can probably get a sense of how much value can be placed on the rest of the government’s testimony in this case. Likewise, Your Honor can probably understand the concerns I have about what my contributors might be subjected to by the government if this sort of behavior proves effective today. Naturally I hope Your Honor will keep this in mind, and I hope that other judges in this district will as well, because, again, there remains great concern that my associates will be the next to be indicted.

I’ve tried to protect my contributors, Your Honor, and I’ve also tried to protect the public’s right to link to source materials without being subject to misuse of the statutes. Last year, when the government offered me a plea bargain whereby I would plead to just one of the eleven fraud charges related to the linking, and told me it was final, I turned it down. To have accepted that plea, with a two-year sentence, would have been convenient. Your Honor will note that I actually did eventually plead to an accessory charge carrying potentially more prison time — but it would have been wrong. Even aside from the obvious fact that I did not commit fraud, and thus couldn’t sign on to any such thing, to do so would have also constituted a dangerous precedent, and it would have endangered my colleagues, each of whom could now have been depicted as a former associate of a convicted fraudster. And it would have given the government, and particularly the FBI, one more tool by which to persecute journalists and activists whose views they find to be dangerous or undesirable.

Journalists are especially vulnerable right now, Your Honor, and they become more so when the FBI feels comfortable making false claims about them. And in response to our motion to dismiss the charges of obstruction of justice based on the hiding of my laptops, the government claimed that those laptops contained evidence of a plot I orchestrated to attack the Kingdom of Bahrain on the orders of Amber Lyon. Your Honor, Amber Lyon is a journalist and former CNN reporter, who I do know and respect, but I can assure Your Honor that I am not in the habit of attacking Gulf state monarchies on her behalf. But I think it’s unjust of them to use this court to throw out that sort of claim about Miss Lyon in a public filing as they did if they’re not prepared to back it up. And they’re not prepared to back it up. But that won’t stop the Kingdom of Bahrain from repeating this groundless assertion and perhaps even using it to keep Miss Lyon out of the country — because she has indeed reported on the Bahraini monarchy’s violent crackdowns on pro-democracy protests in that country, and she has done so from that country. And if she ever returns to that country to continue that important work, she’ll now be subject to arrest on the grounds that the United States Department of Justice itself has explicitly accused her of orchestrating an attack on that country’s government.

Your Honor, this is extraordinary. Miss Lyon isn’t the only journalist that’s been made less secure legally by this prosecution. Every journalist in the United States is put at risk by the novel, and sometimes even radical, claims that the government has introduced in the course of the sentencing process. The government asserts that I am not a journalist and thus unable to claim the First Amendment protections guaranteed to those engaged in information-gathering activities. Your Honor, I’ve been employed as a journalist for much of my adult life, I’ve written for dozens of magazines and newspapers, and I’m the author of two published and critically-acclaimed books of expository non-fiction. Your Honor has received letters from editors who have published my journalistic work, as well as from award-winning journalists such as Glenn Greenwald, who note that they have used that work in their own articles. If I am not a journalist, then there are many, many people out there who are also not journalists, without being aware of it, and who are thus as much at risk as I am.

Your Honor, it would be one thing if the government were putting forth some sort of standard by which journalists could be defined. They have not put forth such a standard. Their assertion rests on the fact that despite having referred to myself as a journalist hundreds of times, I at one point rejected that term, much in the same way that someone running for office might reject the term “politician”. Now, if the government is introducing a new standard whereby anyone who once denies being a particular thing is no longer that thing in any legal sense, then that would be at least a firm and knowable criterion. But that’s not what the government is doing in this case. Consider, for instance, that I have denied being a spokesperson for Anonymous hundreds of times, both in public and private, ever since the press began calling me that in the beginning of 2011. So on a couple of occasions when I contacted executives of contracting firms like Booz Allen Hamilton in the wake of revelations that they’d been spying on my associates and me for reasons that we were naturally rather anxious to determine, I did indeed pretend to be just such an official spokesman for Anonymous, because I wanted to encourage these people to talk to me. Which they did.

Of course, I have explained this many, many times, and the government itself knows this, even if they’ve since claimed otherwise. In the September 13th criminal complaint filed against me, the FBI itself acknowledges that I do not claim any official role within Anonymous. Likewise, in last month's hearing, the prosecutor accidentally slipped and referred to me as a journalist, even after having previously found it necessary to deny me that title. But, there you have it. Deny being a spokesperson for Anonymous hundreds of times, and you’re still a spokesperson for Anonymous. Deny being a journalist once or twice, and you’re not a journalist. What conclusion can one draw from this sort of reasoning other than that you are whatever the FBI finds it convenient for you to be at any given moment. This is not the "rule of law", Your Honor, it is the "rule of law enforcement", and it is very dangerous.

Your Honor, I am asking you to give me a time-served sentence of thirty months today because to do otherwise will have the effect of rewarding this sort of reckless conduct on the part of the government. I am also asking for that particular sentence because, as my lawyer Marlo Cadeddu, an acknowledged expert on the guidelines, has pointed out, that’s what the actual facts of the case would seem to warrant. And the public, to the extent that it has made its voice heard through letters and donations and even op-eds in major newspapers, also believes that the circumstances of this case warrant that I be released today. I would even argue that the government itself believes that the facts warrant my release today, because look at all the lies they decided they would have to tell to keep me in prison.

I thank you for your indulgence, Your Honor, and I want to conclude by thanking everyone who supported me over the last few years. I need to single out one person in particular, Kevin Gallagher, who contributed to my Project PM group, and who stepped up immediately after my arrest to build up a citizens’ initiative by which to raise money for my defense, and to spread the word about what was at stake in this case. For the two and a half years of my incarceration, Kevin has literally spent the bulk of his free time working to give me my life back. He is one of the extraordinary people who have given of themselves to make possible this great and beautiful movement of ours, this movement to protect activists and journalists from secretive and extra-legal retaliation by powerful corporate actors with ties to the state. Your Honor, Kevin Gallagher is not a relative of mine, or a childhood friend. This is only the third time I’ve been in the same room with him. Nonetheless, he has dedicated two years of his life to ensure that I had the best possible lawyers on this case, and to ensure that the press understood what was at stake here. Your Honor, he set up something online whereby I could ask for books on a particular subject and supporters could buy them and have them sent to me. And he spoke to my mother several times a week. During that early period when I was facing over a hundred years worth of charges, and it wasn’t clear whether or not I would be coming home, he would offer support and reassurance to her, an effort that I will never be able to repay. He knows how much I regret the pain and heartbreak that my family has suffered throughout this ordeal.

A few weeks ago, Kevin got a job at the Freedom of The Press Foundation, one of the world’s most justifiably respected advocacy organizations. And, according to the government, he is also a member of a criminal organization, because, like dozens of journalists and activists across the world, he has been a contributor to Project PM, and the government has declared Project PM to be a criminal enterprise. I think that the government is wrong about Kevin, Your Honor, but that is not why I’ve brought him up. And although I am very glad for the opportunity to express my gratitude to him in a public setting, there are some gifts for which conventional gratitude is an insufficient payment. One can only respond to such gifts by working to become the sort of person that actually deserves to receive them. A thank-you will not suffice, and so I am not bringing him up here merely to thank him. Instead, I am using him in my defense. Your Honor, this very noble person, this truly exemplary citizen of the republic who takes his citizenship seriously rather than taking it for granted, knows pretty much everything there is to know about me — my life, my past, my work, from the things I’ve done and the things I’ve left undone, to the things I should not have done to begin with — and he has given himself over to the cause of freeing me today. He is the exact sort of person I tried to recruit for the crucial work we do at Project PM. I am so proud to have someone like him doing so much for me.

Your Honor, the last thing I will say in my own defense is that so many people like Kevin Gallagher have worked so hard on my behalf. And having now said all those things that I felt the need to say, I respectfully accept Your Honor’s decision in my sentencing.

Thank you.
(27 KiB) Downloaded 473 times
by cryptostorm_team
Tue Jan 13, 2015 10:03 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: London (UK) exitnode cluster | anchor node =
Replies: 7
Views: 33617

London (UK) exitnode cluster | anchor node =

{direct link:}

On this day, of all days, cryptostorm is proud to announce the public availability of our London (UK) exitnode cluster.

For members connecting with our network access widget, the London cluster now appears as an option in the pull-down selection menu. For everyone else, here is the required configuration file for Linux (or other "generic" openVPN) connections to this new cluster:
(5.34 KiB) Downloaded 868 times
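For those curious about what such a file contains before downloading it, a generic openVPN client configuration for a cluster like this usually follows the pattern sketched below. Every value here - hostname, port, protocol, cipher settings - is an illustrative placeholder, not taken from the attached file; the attachment above is the authoritative config:

```
client
dev tun
proto udp
# placeholder hostname/port -- use the values from the attached config
remote london.cluster.example 443
resolv-retry 16
nobind
persist-key
persist-tun
remote-cert-tls server
auth-user-pass
# placeholder cipher/auth settings -- again, defer to the attached file
cipher AES-256-CBC
auth SHA512
verb 3
```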
Why is today such an auspicious day? Well, the UK government just announced its desire to kneecap secure communications for private citizens. Which is funny, since they seem to think their own communications should be private...
We've chosen to name our anchor node for this new cluster in honour of mathematician, computer science pioneer, and one of humanity's greatest intellectual resources in all of our recorded history: Alan Turing.
Turing was a genuine genius whose insights into numerous branches of formal logic and mathematical systems opened entire new fields of study that today, decades later, are still vibrant and full of promise. His proof of the undecidability of the "Halting Problem" is so central to computer science and systems theory that one simply cannot know where to begin in describing its core role. (Undecidability, note, is a stronger notion than the 'computational intractability' associated with the class of so-called "NP Complete" problems: those are decidable but believed infeasible to solve efficiently, whereas no algorithm whatsoever can decide the Halting Problem.) That he proved this before programmable electronic computers even existed is one of many astonishing realities to be found in Turing's short life.
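The diagonal argument at the heart of that proof can be sketched in a few lines of Python. Here `claims_never_halts` is a stand-in for any would-be halting decider - no correct one can exist, which is exactly the theorem:

```python
def make_diagonal(halts):
    """Given a claimed halting decider, construct the program it
    must misjudge: diag loops forever exactly when halts() says
    diag halts, and halts immediately when halts() says it loops."""
    def diag():
        if halts(diag):
            while True:   # decider said "halts" -- so loop forever
                pass
        # decider said "never halts" -- so halt immediately
    return diag

# A (wrong) decider that claims every program runs forever:
def claims_never_halts(program):
    return False

diag = make_diagonal(claims_never_halts)
diag()  # returns at once, contradicting the decider's claim

# A decider answering True would be refuted symmetrically: its diag
# would then loop forever.  No decider escapes -- that is Turing's proof.
```

Any candidate `halts` you plug in is wrong on its own diagonal program, which is why the problem is undecidable rather than merely slow to solve.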

Pace xkcd 1266 {explained}:
halting_problem.png (7.41 KiB) Viewed 33617 times
Oh, and he also basically tipped World War II in favour of the Allies, through his freakishly-clever cryptanalysis of the "uncrackable" Enigma system used by the Germans - an electromechanical rotor cipher machine that looked impenetrable on paper, but which proved to have subtle flaws in implementation and operating procedure that Turing and his team (building on earlier Polish cryptanalytic work) used to crack the cipher tool wide open. For years, the Allies - and Churchill, in particular - had plaintext access to German naval "secured" communications. This access proved crucial time and time again, as has only really been understood fully decades later with the declassification of much underlying data.

Why a short life?

Turing was gay. He was gay in an era when being gay was illegal, was considered horribly immoral, and was persecuted with an ugly, hateful vengeance. It was labelled a disease, and victims of homophobic mob hysteria were subjected to all manner of tortures, cruelties, and sadistic violence.

Turing was one of those victims.

Forced into "treatment" he despised, for a part of himself he was proud of - his sexuality and his most intimate connections in life - he committed suicide by eating a poisoned apple. However, even that suicide story now seems to be all but fully debunked... and Turing may well have been murdered by the same forces he saved from defeat by the Axis powers in World War II.

Turing was a genius, a hero, a gay man in an era when that made him an easy target for hatred and assault, a mentor to many young mathematical geniuses, and by all accounts a complex, intense, brilliant, irascible, generous, fundamentally honest, expansive human being. He died because of how he loved, and that death is a stain on all of us who struggle to follow lamely in his intellectual footprints, even today.

So, in honour of Alan, we inaugurate our English exitnode cluster. May it protect others from the bigotries, power abuses, and hatreds that took Alan Turing's life from him at such an early age. May it do its own small part in helping to create a better, more diverse, more compassionate, safer, more respectful world for all sentient beings.
With respect,

~ cryptostorm_team
by cryptostorm_team
Mon Jan 12, 2015 7:21 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Dorky "takedown" notices
Replies: 14
Views: 46910

"You are being contacted on behalf of NBC Universal..."

On 15-01-10 06:27 PM, some random cartel spambot wrote:

> Reference ID No. 14-blahblahblah
> You are being contacted on behalf of NBC Universal and its affiliates ("NBC Universal") because your Internet account was identified as having been used recently to illegally copy and/or distribute the copyrighted movies and/or television shows listed at the bottom of this letter. This notice provides you with the information you need in order to take immediate action that can prevent serious legal and other consequences. These actions include:
> 1. Stop downloading or uploading any film or TV shows owned or distributed by NBC Universal without authorization; and
> 2. Permanently delete from your computer(s) all unauthorized copies you may have already made of these movies and TV shows.
> The illegal downloading and distribution of copyrighted works are serious offenses that carry with them the risk of substantial monetary damages and, in some cases, criminal prosecution.
> Copyright infringement also violates your Internet Service Provider's terms of service and could lead to limitation or suspension of your Internet service.
> An industry website,, offers step-by-step instructions to ensure that your Internet account is not being used to violate the copyright laws. The site also can point you to an array of legal choices for enjoying movies and TV shows online. You can also learn there how movie theft damages our economy and costs thousands of Americans their jobs.
> If, after visiting you still have questions, or if you believe you have received this notice in error, you may contact NBC Universal by email at or by calling (818) 777-4876. Please cite the Reference ID noted at the top of this letter in the subject line of any email or voicemail you may leave. You should take immediate action to prevent your Internet account from being used for illegal activities. Today, there are many ways to enjoy movies and TV programs legally.
> The undersigned has a good faith belief that use of the NBC Universal Property in the manner described herein is not authorized by NBC Universal, its agent or the law. The information contained in this notification is accurate. Under penalty of perjury, the undersigned is authorized to act on behalf of NBC Universal with respect to this matter.
> This letter is not a complete statement of NBC Universal's rights in connection with this matter, and nothing contained herein constitutes an express or implied waiver of any rights, remedies or defense, all of which are expressly reserved.
> Sincerely,
> Andrew Beck
> Irdeto USA, Inc.
> c/o NBC Universal Anti-Piracy Technical Operations
> 100 Universal City Plaza
> Universal City, CA 91608
> tel. (818) 777-4876
> fax (818) 866-2026
> *pgp public key is available on the key server at
> ** For any correspondence regarding this case, please send your emails to and refer to Notice ID: 14-267738073. If you need immediate assistance or if you have general questions please call the number listed above.
> Evidentiary Information
> Title: Parks and Recreation (TV)
> Infringement Source: BitTorrent
> Initial Infringement Timestamp: 10 Jan 2015 19:11:51 GMT
> Recent Infringement Timestamp: 10 Jan 2015 19:11:51 GMT
> Infringing Filename: Parks And Recreation Season 3 DVDRip REWARD
> URL if applicable: incoming
> Infringing File size: 3729512448
> Infringers IP Address: {cstorm node instance IP}
> Bay ID: 80ce6911d74ae1e05c5255580dfa7210d9d5d45d|3729512448
> Port ID: 51413
...and reply:
Mr. Beck -

We are in receipt of your correspondence requesting that we "stop downloading or uploading any film or TV shows owned or distributed by NBC Universal without authorization; and [p]ermanently delete from your computer(s) all unauthorized copies you may have already made of these movies and TV shows." relating to an allegation of commission of civil tort under U.S. law.

Thank you so much for taking the time to make us aware of this matter.

To aid in our investigation of the situation, and in furtherance of a deeper understanding of your legally-constituted role in such, we ask that you forward to our attention the forensic reporting on which you base your allegations. We naturally understand that your expertise in such matters runs considerably deeper than ours. Nevertheless, let us suggest that digitally hashed (with a non-reversible/non-rainbowed algorithm, of course) .pcaps of the alleged transfers will be a good starting point. Full server-side logfiles corresponding to these packet-layer captures will be needed to independently validate the legitimacy of the tcpdump output, it goes without saying. Router-layer log data is always helpful - we're sure you concur - so please do send those along, as well. If they're in a nonstandard format, a pointer to relevant documentation of such is much appreciated.
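As an aside for readers unfamiliar with the integrity evidence being requested above: hashing a capture file with a modern one-way algorithm is a one-liner in most environments. A minimal sketch in Python follows; the filename is a hypothetical placeholder, and chunked reading is used because real .pcap evidence files can run to gigabytes:

```python
import hashlib

def sha256_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading it in chunks
    so large .pcap captures don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage -- 'alleged_transfer.pcap' is a placeholder name:
# print(sha256_file("alleged_transfer.pcap"))
```

Publishing such a digest alongside the capture lets any third party verify the file hasn't been altered since, without revealing the file's contents.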

We also ask that you provide us with your preferred independent forensic examiners, so that we may review their credentials and historical experience in documenting such matters. We, needless to say, have several examiners with which we are personally familiar - but it would be unseemly for us to suggest that our own contacts would be of more relevance than those which you will certainly be able to suggest from past working experience.

Once we have completed the necessary work to forensically validate your assertions (as set forth in the quoted correspondence included below), we will be in a much better position to begin the process of responding to the rather novel legal interpretation you suggest. Carts not belonging before their respective horses, let us first work through the forensic issues; obviously, there's no point in arguing legal matters if (in purely hypothetical terms) you lack any sort of independent, forensically sound basis on which to make said allegations.

Finally, as we have not in the past found it productive to engage in "discussions" with automated extortion-bots, we ask that you provide some manner through which we can confirm that these emails from you are coming from, well... from you, and not from a programmatic extortion-bot. Given the provisions of the CAN-SPAM Act, you will understand our concern that honouring an illegal spambot with "replies" would border on dark satire. Rather, we report those programming and profiting from such illegal spambots to the relevant legal authorities for prosecution. Help stop spam, &c.

Thank you again for your deep concern in protecting artists from nefarious intent. It is a refreshing pleasure to find someone who selflessly seeks to ensure that creative people receive full and just reward for their contributions to our shared culture!

Best regards,

~ cryptostorm
by cryptostorm_team
Mon Jan 05, 2015 12:27 pm
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: beta testing of new, in-house DNS resolvers | DNSchain
Replies: 33
Views: 63645

beta testing of new, in-house DNS resolvers | DNSchain

{direct link:}

Ok, they're still undergoing load-testing and it's possible they'll be a bit crash-y meanwhile, but we've got two publicly-available DNS resolvers ready for some load testing:

  • (hostname-mapped to
  • (hostname-mapped to

These resolvers are backed up by our deployment of the DNSchain system, as implemented by OKTurtles (github repository here). Yes, it's sort of silly to provide subdomain-based TLD lookups to these IP addresses (;, since to use those as resolvers you'd first be doing a traditional resolver lookup, then using that IP address to do a DNSchain-based lookup... but we mapped them even so, just in case there's some use to such mappings we haven't figured out ourselves, just yet.

DNSchain is a powerful approach to solving a good chunk of the deep structural problems that currently bedevil the Domain Name System (DNS), & everyone who uses it to connect human-readable domain names with internet protocol (IP) addresses. There are several layers to DNSchain, and fully leveraging all of them is a work in progress. For now, we're implementing these publicly-available resolvers even as we continue to backfill the additional functions of DNSchain within our own network architecture.
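For context on what a resolver actually receives on the wire, here is a minimal sketch (standard library only) of building an RFC 1035 A-record query, the same packet a stub client would send over UDP port 53 to any resolver, DNSchain-backed or otherwise. The function name and the resolver address in the comment are illustrative placeholders:

```python
import struct

def build_dns_query(name, qtype=1, qclass=1):
    """Build a minimal RFC 1035 DNS query packet.
    Defaults: qtype=1 (A record), qclass=1 (IN)."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (fixed here for clarity)
                         0x0100,   # flags: standard query, recursion desired
                         1,        # QDCOUNT: one question
                         0, 0, 0)  # no answer/authority/additional records
    # QNAME: each label is length-prefixed; a zero byte (root) ends the name
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, qclass)
    return header + question

# To actually query a resolver (203.0.113.1 is a placeholder address):
#   import socket
#   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   s.sendto(build_dns_query("example.com"), ("203.0.113.1", 53))
```

The point of systems like DNSchain is to change where the *answer* to such a query comes from and how it can be verified - the query format itself stays this simple.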

For those curious, here's a quick intro whitepaper to how DNSchain is implemented:
(330.5 KiB) Downloaded 970 times
These resolvers are intended, once testing is complete, to serve two core purposes:
  • 1. Provide first-priority lookup service in our "pushed" DNS resolver settings for members connected to cryptostorm. As we discuss in this thread, DNS resolution is an important element in our overall network security model; adding our in-house resolvers to the resolver pool members use during cryptostorm network sessions is an important step forward towards increased resilience, security, and reliability in this function.

    2. Offer publicly-available resolver access, for anyone who wants to have a DNSchain-based resolver to use in their own setup. The choice to make our resolvers publicly available is one we found easy to make, to be honest - it's a reflection of how we approach most all of our technological tasks, at cryptostorm.
As we continue to flesh out the additional elements of the DNSchain security model in our production framework - adding public key validation to cryptostorm-session lookups, offering .bit domains to complement traditional TLDs, and so on - we'll post those details here. Meanwhile, feel free to share ideas, feedback, results of your testing, and suggestions here in this thread.

Thanks in advance for your help in testing and improving these resolver resources!

by cryptostorm_team
Sun Jan 04, 2015 1:14 pm
Forum: general chat, suggestions, industry news
Replies: 5
Views: 10886


Via our main twitter feed, we commented on this topic yesterday. Screenshot here:
The attachments referenced in the tweet are as follows (cite: first & second):
The only reference anyone has ever made to this legislation covering "VPN services" is by Torrentfreak. They have never cited any outside source for their claim, nor have they responded to requests for where they are getting that assertion from. None of the legitimate reporting on the subject has ever mentioned "VPN services," nor is there any mention of them - or privacy services more broadly - in any legislative commentary we've been able to find. (note: if someone does find such a reference, please do share it here!)

It is worth noting that none of Torrentfreak's marketing-heavy, security-challenged "VPN service" advertisers are based in Canada. Many are based in the USA, where draconian secret laws require disclosure of "private" information - and where such disclosures have been shown to be commonplace and inescapable. Still, Torrentfreak routinely ranks its own advertisers as "secure," versus others who don't spend advertising money with them.

Whether this advertiser relationship is the driver behind Torrentfreak's imaginary "Canadian VPNs must log customer activity" reporting, or not, we leave to the reader to decide for herself.

Incidentally, it's of no relevance to cryptostorm either way; we do a big chunk of our financial processing in one province of Canada, and via sovereign entities geographically located within that province. However, we're not "based" in Canada any more than we're based in Berlin, or Lisbon, or Chicago. Despite that, it does rub us wrong to see Torrentfreak (by all appearances) simply making up a smear in order to puff up their shady, dishonest big advertisers. That's just bad business, and bad ethics.

That's how we see things, given the information we've been able to uncover so far.

by cryptostorm_team
Thu Dec 25, 2014 11:58 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: US-central exitnode cluster | anchor node = | updates & "crazy shit"
Replies: 4
Views: 23063

US-central exitnode cluster | anchor node = | updates & "crazy shit"

note: folks seeking the most current client configuration files need not wade through this entire discussion thread! The current versions are always posted in a separate, dedicated thread, and will be continuously updated there. Continue reading this thread if you're curious about the details of the config files, want to see earlier versions of them, or have comments/feedback to provide - thanks! :thumbup:


As we've mentioned elsewhere, we have recently completed a network-wide upgrade of our resource management process, known as the Hostname Assignment Framework, or HAF. One benefit of the HAF is dramatically enhanced flexibility in making fast, flexible shifts in server/node resources when circumstances require it.

In recent weeks, that flexibility has been crucial in our response to issues relating to one of our U.S.-based exitnode clusters. Briefly, we've moved to a three-zone U.S. coverage model: useast, uswest, and uscentral. Having downed our useast anchor node due to DMCA-related issues (that cluster will be reborn during Q1 2015, with new datacentre providers & redundant capacity early on), we shifted focus to creation of a strong uscentral cluster, with the deployment of a well-provisioned dedicated machine in Chicago, IL: mishigami.

However, circumstances required us to quickly shift member traffic from that node. Those circumstances (referred to as "crazy shit" more than once, both in twitter & during internal team discussions, if we may be blunt) are being written up separately and will be linked-to from this post when ready for publication.
craaazy_small.png (34.62 KiB) Viewed 23004 times
On an interim basis, we load-shifted uscentral sessions to uswest as we provisioned a new 'mishigami' in the central U.S.

That process is now complete, and mishigami is reborn. Folks curious about such things will note that uscentral HAF lookups now resolve to this anchor node; we expect to have redundant/expansion nodes in place within two weeks, as this cluster continues to grow.

Meanwhile, the prior mishigami has been reborn as - a donated Tor relay (pingdom uptime stats). It carries no member traffic and any cryptostorm-related data has long since been rm'd from the box. Now it stands as a sort of testament to... well, that's best discussed in the "crazy shit" post.

We're quite pleased with the performance of our new uscentral cluster after this series of transitions and adjustments, and we look forward to adding to this cluster as member awareness of its capacity and reliability continues to grow.
  • (HAF entries: |

Finally, we've been asked few times about the naming of our new node. Here's a bit of backstory:

Thank you,

~ cryptostorm_team
by cryptostorm_team
Thu Dec 11, 2014 4:44 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: - info & updates on our tracker proxies
Replies: 3
Views: 29157

- info & updates on our tracker proxies

{direct link:}

Last year, we set up a proxy to assist in providing access to the Pirate Bay for those facing various forms of censorship and blocking of the site in their local environment (anyone using cryptostorm faces no such issues, of course, so this is a service for the wider community).

Since then we've added several more such proxies, having watched as these blocking regimes have spread to additional trackers.

From time to time, as the underlying trackers mutate and evolve, our proxies will become decoupled from the backend. We really haven't put any high-end performance and uptime tracking on these things, since they're more of an informal resource than something we deploy with an aim for big-percentage uptime as a primary objective.

So the goal of this thread is twofold:
  • One, we ask that folks post here if one of the proxies goes unavailable or otherwise has issues. That way, we can take a look at what's happened and hopefully get it patched up quickly; also, if you have information or suggestions on where a given tracker has moved, let us know and it'll speed the process of patching our proxies.

    Two, this opening post in the thread will keep a current list of all the proxies we are currently supporting. That way, when one drops we can remove it here and hopefully dispel any mystery about what happened. Note that, in general, if a tracker drops in what looks to be a permanent way, we'll redirect that subdomain to one of the still-active proxies, so folks don't just get a dead page. This seems fairest and most useful, to us, but if you're the admin of a now-dead proxy and this is not your preference, let us know and we'll be happy to follow your lead.
Naturally, any feedback or suggestions on the proxies is more than welcome. Often these things end up creating "grey zone" questions about what's right and where we should steer them; we'd be much happier to be making these decisions collaboratively, versus doing our best as a team and not really knowing if we've hit the sweet spot, or not, in terms of a broader perspective.

To the copytrolls and other assorted media oligarchy goons who will surely end up reading this thread, please understand: these are proxies, ffs! We "host" no content, and sending us "takedown notices" is supremely silly. We've nothing to take down. Nor are we a "search engine" that can "remove links" if you spin magic legal-sounding words at us. We proxy connections through, that's it. Screaming at and threatening us will, quite literally, get you nowhere. True, that can be entertaining - and we do occasionally post such missives here (like this, or this) for public enjoyment - but it's like demanding that the sky block raindrops so you don't get wet. Not only does the sky not care if you get wet, or not, it is not in its capabilities to block raindrops headed your way. Becoming "angry" at it for this putative failure is, well, supremely silly.

Our support for these proxies does have a heavy impact on our google search rankings for our main site. As the media spambots report "DMCA violations" for the hundreds and thousands of torrents listed via the underlying trackers, google categorizes these "files" as being "hosted" by us. It is in the nature of a well-run proxy that, indeed, it appears publicly to be providing the data directly (which is not, behind the scenes, the case). So google gets increasingly angry at cryptostorm when we fail to "take down" these "files" after being warned by google's automated tools. We try to explain the nature of the proxies, but we're probably about 200 layers of bureaucracy away from an actual human being at google who might actually care. Worse, perhaps google is more than willing to kneecap a troublesome network security provider who doesn't bow down to the big media cartels when threatened with specious, spam-delivered doom.

So if you search for us on google and find we're oddly low in every possible search category, now you know why. Sorry about that. You also know, by definition, that those way up the search lists from us aren't doing proxies like ours, and won't stand firm against copytroll spam that comes their way. Which is an interesting bit of data, when you think about it.

We've been supporting filesharing and the privacy of shared data, in peer-to-peer settings, for more than six years - our tech team launched back in the day when everyone was blocking torrent traffic on "VPN services," and we helped change the dialogue about whether filesharing was "acceptable" or not in a network privacy service. So, needless to say, our philosophical stance on shared data is pretty well-rooted by now, and we've become accustomed to paying a commercial/business price as a result of that stand. Whatever, that's how the cookie crumbles (as Graze often says). There is no legal basis, under the American DMCA or any substantive international legal framework, to "demand" we edit the functionality of a transparent protocol-based proxy; claims to that effect are utterly ridiculous. And yet we get, on an average day, perhaps 200 copytroll "demand letters" via email - most relate to the proxies, on balance. Nearly all have clickable links where you can "pay your fine" with a credit card and "avoid the risk of litigation" by doing so. In other words: extortion. Extortion spam, which is illegal in just about every jurisdiction in the world. But which is never prosecuted in these situations, for obvious reasons (media oligarchies have money and political power).

Anyhow, the proxies!
  • We've got a skunkworks project afoot - codename 'baystorm' - to do a syncretic mix of blockchain, deepDNS, & @dnschain-based resolver functionality to make a "tracker" meta-resource immune to censorship and entirely free of need for any extra client-side fiddling, tools, or changes to currently-deployed torrent-client applications.

    No ETA yet, but the architecture is already proving itself out in non-production context.
  • This is Kick Ass Torrents... which does, indeed, pretty much kick ass. Note that we've added a pingdom uptime status & performance stats page here:

We used to have a h33t proxy, but we've been told that tracker has gone walkabout. If this is wrong, please do let us know. Edit (18 Feb 2015) same goes for our Fenopy proxy; down 'cause Fenopy is down.

Ok, that's about it. Have fun.

~ cryptostorm_team
by cryptostorm_team
Sat Dec 06, 2014 4:41 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Current client-side certificate materials ("ca.crt")
Replies: 2
Views: 15928

Current client-side certificate materials ("ca.crt")

{direct link:}

Since our issuance of new server-side certificate materials after the 'heartbleed' vuln was publicly disclosed, members with old configuration files (or folks who find old materials archived or cached on third-party sites and unknowingly attempt to use them) occasionally need to find our "new" certificate materials in order to successfully connect to cryptostorm.

These materials are embedded in every current (rev. 1.3 or higher) "conf" file we provide, as well as in all current builds of our client widget. However, for convenience, we're posting that cert material here in the event folks have a need to access it without hassle:


You can use a web-based tool, or suitable command line utilities, to "unpack" the PEM-encoded certificate into human-readable form. This helps you understand the identity of the server the certificate is vouching for, as well as some of its important cryptographic attributes that may (or may not) increase your confidence in the certificate.
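As one example of such command line utilities, openssl's `x509` subcommand will decode a PEM certificate; the commands below first generate a throwaway self-signed certificate purely for demonstration (in practice you'd point them at the ca.crt material embedded in your conf file):

```shell
# For demonstration only: create a throwaway self-signed certificate.
# In real use, skip this step and use the ca.crt from your conf file.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem \
        -out ca.crt -subj "/CN=demo" -days 1

# Decode the PEM certificate into full human-readable form:
openssl x509 -in ca.crt -noout -text

# Or inspect just the subject, issuer, and validity window:
openssl x509 -in ca.crt -noout -subject -issuer -dates

# Print the SHA-256 fingerprint, useful for comparing against a
# trusted copy of the certificate obtained through another channel:
openssl x509 -in ca.crt -noout -fingerprint -sha256
```

Comparing fingerprints out-of-band is a simple way to confirm that a conf file you've downloaded carries the genuine certificate material rather than something swapped in along the way.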

If you're curious about what this block of gibberish is, and why it's part of connecting to cryptostorm, this short overview explains the procedural side of things. In a nutshell, this bit of text helps verify that the "server" to which you are connecting during a cryptostorm session is actually one controlled and managed by cryptostorm, and not an imposter posing as cryptostorm (such attacks are broadly known as "Man in the Middle," or MiTM, attacks). This text, via a cryptographically strong method, acts as a sort of "fingerprint" of a private key held only on cryptostorm's servers and carefully protected from outside access (the risk that such keys were accessed as a result of the 'heartbleed' bug mentioned above is why we re-issued new certificates - and new private keys - on all our servers earlier this year). Only someone holding the private key can prove that they hold it, and they need not do so by actually showing the key itself: instead, a process of "public key cryptography" allows for this kind of "prove but don't publish" validation.

In this step of the process of connecting to cryptostorm and exchanging data securely, public key cryptography is actually not being used to encrypt any data. Rather, this is solely a means of validating server identity; it uses the same maths as public key crypto that encrypts data (actually, it encrypts transient symmetric keys that then, themselves, are used to encrypt data... but we digress), but instead is using that math to confirm the server is genuine. Here's a snippet from a well-written Mozilla foundation site explaining SSL, PKI, public key crypto, and how it all interconnects in both theory and practice:
"SSL requires a server SSL certificate, at a minimum. As part of the initial "handshake" process, the server presents its certificate to the client to authenticate the server's identity. The authentication process uses public-key encryption and digital signatures to confirm that the server is in fact the server it claims to be.
Cryptostorm's use of PKI infrastructure during session initiation with members is different from that used in "default" openvpn installations. We have removed the elements relating to PKI verification of client identity. Those curious about why we feel this provides important security benefits for our network members will find a more detailed explanation of the analysis in our intro to token-based auth; this paper explores the assumptions implicit in naive, bidirectional PKI-based "VPN service" authentication models, and outlines why we feel these assumptions are counterproductive to member security in our production environment. Occasionally, members or outside observers conclude that we "don't understand cryptography" because we haven't blindly followed the instructions for default openvpn installations. We hope that a review of the token auth paper will dispel the sense that we've simply "not read the manual," although simply doing something different from default is not in and of itself an assurance that our approach is better. To make that conclusion, our analytic framework must hold up under both logical and practical scrutiny.
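The server-verification side of this can be illustrated with Python's standard `ssl` module. This is a minimal sketch of CA pinning in general, not cryptostorm's actual client code, and the `ca.crt` path is simply whatever location you saved the certificate material to:

```python
import ssl

def pinned_context(ca_path=None):
    """Build a client-side TLS context that requires server verification.

    If ca_path points at a PEM file (e.g. a saved ca.crt), ONLY that CA
    is trusted ("pinning"); otherwise the system trust store is used.
    During the handshake, the server must present a certificate chaining
    to a trusted CA *and* prove possession of the matching private key,
    or the connection fails -- the "prove but don't publish" step.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED  # server must present a valid certificate
    ctx.check_hostname = True            # and its claimed identity must match
    if ca_path is not None:
        ctx.load_verify_locations(cafile=ca_path)  # trust the pinned CA only
    else:
        ctx.load_default_certs()
    return ctx
```

A context built this way will refuse any connection to a server that cannot complete the handshake against the pinned certificate, which is exactly the MiTM-rejection property described above.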

If there are any questions or requests for additional explanation, please do not hesitate to post them in this thread.


by cryptostorm_team
Thu Dec 04, 2014 2:38 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: .onion hidden services gateway: what it is & how it works
Replies: 0
Views: 70078

.onion hidden services gateway: what it is & how it works

{direct link:}
full cryptographic overview:

admin note: discussion on this topic has been split off to a separate thread, and this announcement post will be updated with current data as it becomes available. It is locked, and follow-on questions or comments are best placed in this thread, or feel free to simply open a new topic; both forum members & non-registered guests can post here, freely.

As we've all become increasingly aware of the reality of global dragnet surveillance of activities online, the use of privacy-protecting technologies has continued to grow apace. Leading that field are protocols and software suites created and maintained by the Tor Project, a nonprofit organization staffed with best-in-class software, network, and cryptographic engineers.

With the growing popularity and use of Tor's software and tools, more and more use-case scenarios have evolved from the core vision of the team: one of the benefits of an opensource, community-driven project such as Tor is the elastic creativity that comes from the open-ended, unbounded nature of software that's published and available for modification and expansion. From operating systems, to browsers, to embedded systems, to mobile tools, Tor's foundation has nurtured a growing ecosystem of exciting new applications.

Update 7 December 2014: we've upgraded the internals of torstorm to a Lua-based, nginx-served architecture and away from the Pythonic approach of tor2web. Some of this is personal preference on the part of our development and tech operations team; some of it is a haunting sense that Python's attack surfaces are too numerous to enumerate fully. In the end, we have more confidence in both the production stability and the security characteristics of our upgraded approach. The earlier text regarding tor2web is kept as well, for reference:
One novel remix of Tor's core strengths comes as tor2web, a server-side proxy allowing visitors using standard browsers to view Hidden Services-based .onion websites. This platform, originally developed by Aaron Swartz and Virgil Griffith, is now maintained by the HERMES Center for Transparency and Digital Human Rights, and is supported by a growing community of developers and contributors.
As our contribution to this creative Tor ecosystem, cryptostorm has crafted what we call "torstorm", a hybrid aggregation of tor2web and our own encapsulation-based network security service. Simply put, torstorm allows anyone connected to the cryptostorm network to view .onion sites in any browser window by entering the address in syntax:

Topologically, torstorm provides serially-linked network transit in this form:

member_PC <---> cryptostorm_exitnode <---> torstorm_node <---> hidden_services

From the member PC, data are encrypted and encapsulated via an established cryptostorm secure network session to a cryptostorm exitnode anywhere in the world. That exitnode then routes the traffic, encrypted via https instantiated by torstorm's SSL-enabled nginx backend, to the torstorm node itself. There, it transits through and comes out as Tor-protocol traffic, travelling the Tor network itself to reach its hidden-service destination. Return queries follow the same path, in the opposite direction.

For members, the benefits of torstorm are obvious and substantive:
  • First, any member can, from any standard web browser, visit any Tor hidden service, simply by entering the address into the browser's window as described above. For many use-case scenarios this is convenient and the security trade-off (see below) is reasonable or not impactful. For those members requiring full, end-to-end Tor connectivity, the Tor browser will continue to be the preference.

    Second, as compared to a "naive" tor2web install, torstorm network sessions are substantially protected against the most commonplace passive traffic analysis attacks. Rather than having packets transit from a member's local computer directly to the tor2web gateway server, as in a naive setup, all traffic in the torstorm model routes through a cryptostorm exitnode bidirectionally. Thus, traffic analysis must rely on timing-based vectors rather than a simple trace of https packet trajectories. This is a substantive decrease in attack surface.

    Third: we've done a bit of work to deploy torstorm with an enforced cipher suite selection requiring PFS-supporting ECDHE via curve brainpoolP512r1, which is generally regarded as not backdoored and robust in production context. Thus, version-rollback attacks targeting broken TLS cipher primitives (like RC4) are not possible; those lacking modern cipher capabilities in their browsers will be unable to make insecure torstorm connections using known-broken fallback ciphers. To us, this is a crucial security guarantee and a much-needed refusal to allow insecure sessions that members would understandably assume are secure simply because the connection is permitted at all.
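This kind of cipher-suite enforcement might be expressed in an nginx server block roughly as follows. This is an illustrative sketch only - the server name, file paths, and exact cipher list here are our own placeholder assumptions, not torstorm's actual configuration:

```nginx
# Illustrative TLS hardening for an SSL-enabled nginx gateway.
# Names and paths below are hypothetical placeholders.
server {
    listen 443 ssl;
    server_name;   # hypothetical gateway hostname

    ssl_certificate     /etc/nginx/tls/gateway.crt;
    ssl_certificate_key /etc/nginx/tls/gateway.key;

    # Refuse legacy protocol versions, blocking version-rollback attacks.
    ssl_protocols TLSv1.2;

    # Only PFS-capable ECDHE suites; no RC4 or other broken fallbacks.
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256";
    ssl_prefer_server_ciphers on;

    # Pin the key-exchange curve (requires an OpenSSL build that
    # supports the brainpool curves for TLS key exchange).
    ssl_ecdh_curve brainpoolP512r1;
}
```

With a configuration along these lines, a client that cannot negotiate an ECDHE suite over the specified curve simply fails the handshake rather than silently falling back to something weaker.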
Simultaneously with these benefits, we emphasise the following security considerations as compared to conventional Tor-entire routing of hidden services browsing.
  • First, the entity acting as administrator of the torstorm nodes themselves has access to plaintext traffic if it chooses to make fairly trivial edits to the code running on those machines. This is the case because, absent end-to-end Tor session mechanics, traffic coming through the Tor-internet gateway crosses a boundary where protocol transition is required. During that transition, it is definitionally possible for root to have access to plaintext surreptitiously, if it so chooses. Of course, we are not doing this - however this cannot be formally proven to an outside observer, and that is different from end-to-end Tor sessions.

    Second, as noted above, a well-resourced attacker could in theory execute timing-based traffic analysis attacks on end-to-end network sessions outside of the Tor cloud itself (the first three "segments" in the linear topology, above). These attacks are nontrivial, but far from impossible. Without some form of stochastic "fuzzing" of packet transit schedules in cryptostorm exitnodes, this attack represents a tempting surface for metadata harvesting. Although timing-based attacks on Tor protocol traffic have recently been discussed widely, it is our opinion that those attacks on Tor are substantively more challenging than the concomitant attack on the outside-Tor segments of the torstorm topology.

    Third, an outside attacker gaining root privileges on a torstorm server has, per the first risk above, access to plaintext traffic. We have hardened these machines and run them with security best-practices, but it is impossible to provide perfect security in this as in many areas of technical systems. This contrasts with the end-to-end Tor model, in which intermediate nodes, even if malicious, are (in theory) unable to gain information content on the payload or routing information relating to packets they relay.

    Fourth, torstorm is currently a beta deployment and should not be trusted with security-intensive uses! We are deploying torstorm in beta form to gain additional testing data, to stress-load the framework under production workloads, and to encourage constructive review and critique of the model and deployed technology from independent researchers and analysts. We have done internal analysis during deployment, but not at a level suitable to give strong security assurances to users. More, the modified Lua-based torstorm codebase itself is somewhat young and only moderately tested in practice, in our opinion: that's a substantial difference from foundational end-to-end Tor technologies, which have undergone years of independent review, analysis, and testing under extreme attack scenarios. Further, our technical team has limited experience with Tor-based applications and we may make errors in our work as a result of this, particularly during beta testing.
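The stochastic "fuzzing" of packet transit schedules mentioned in the second point above could, in principle, look something like the toy sketch below. This is purely illustrative - nothing cryptostorm actually deploys - and the function names and jitter bounds are our own invention:

```python
import random
import time

def jittered_delay(base_ms: float = 0.0, max_jitter_ms: float = 20.0) -> float:
    """Return a per-packet transit delay, in seconds, with uniform jitter.

    Adding an unpredictable delay at a relay decorrelates packet timing
    on either side of it, raising the cost of timing-based traffic
    analysis -- at the price of added latency for the member.
    """
    return (base_ms + random.uniform(0.0, max_jitter_ms)) / 1000.0

def forward(packet: bytes, send) -> None:
    """Toy relay step: sleep a jittered interval, then emit the packet."""
    time.sleep(jittered_delay())
    send(packet)
```

Real-world defenses are considerably more involved (padding, batching, cover traffic), but the core idea is the same: make the observable timing on the two sides of the relay a poor predictor of one another.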
In summary, we feel this is an interesting and perhaps useful addition to the constellation of tools making use of the core Tor protocol and network components. However, it needs some serious beta testing and we may do substantial adjustments to the model and/or deployment details as that process unfolds. No matter what, please - PLEASE - do not use torstorm during beta phase for security-intensive activities! That would be, if we may be honest, a really stupid thing to do. Don't be stupid - that's our job. :angel:

Did we mention that this is a BETA TESTING deployment? We did, right. Here's a scary image to remind you again, because it's important:
[image: pictograms-aem-0015-rotating_shaft.png - rotating-shaft hazard pictogram]
That looks painful, doesn't it? Ouch. How about this:
[image: pictograms-aem-0019-gears.png - gears hazard pictogram]

Ok then, hopefully we made that clear.

Our thanks to the Tor team for years of contributions to online privacy. Without those enormous resources, none of our work would be possible.

Please share with us experiences, critiques, bug reports, and suggestions in this thread. We will do our best to embrace community findings, and improve torstorm as time goes by.

With respect,

~ cryptostorm_team

important notice!

This product is produced independently from the Tor® anonymity software and carries no guarantee from The Tor Project about quality, suitability or anything else.
by cryptostorm_team
Tue Dec 02, 2014 1:57 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: TeamSec: cryptostorm's approach to team & project security
Replies: 0
Views: 12897

TeamSec: cryptostorm's approach to team & project security

{direct link:}

Since early in cryptostorm's history, we've had a policy of neither confirming nor denying the identity of members of our core team. This policy, published on our main site, includes the following section:
No comment will be offered with respect to the individual members of the cryptostorm team, neither to confirm membership nor to deny assumptions of membership. We ask that members of the media respect our request to attribute comments either to given pseudonyms, or simply to "cryptostorm" when quoting our work.

We have nothing to hide, and yet must hide ourselves due to the work we do in providing structurally anonymous network privacy service. This is the world in which we live. It is a damned shame - not even Orwell himself foresaw such necessary circumlocutions. As a team, we genuinely look forward to a future in which such decisions are no longer necessary.
From time to time, questions arise in public discussions about cryptostorm and connections are made - or suggested - to individuals in various capacities. We rarely comment directly on such matters, instead referencing the "about us" page linked to above. This policy pays real dividends in terms of improved project security and resilience against common, effective attacks on organizations and teams. It also leaves us open to criticism - some legitimate, some less so - via inference.

Our policy of refusing to deny team membership if questioned isn't mere stubbornness. In fact, if we were to offer such official denials, someone seeking to confirm team identities would simply have to keep asking if people were on our team until we refused to answer; such a refusal, rather than a reply of "not a team member," would be informationally equivalent to answering "yes." Which would, of course, break our policy and security model.

Apart from core team membership, a wide range of individuals have contributed to cryptostorm's development - some to a great degree, some in small parts. All has been constructive, and without this deep community support our project wouldn't be where it is today. Beyond that, we seek technical inspiration, operational best practices, and specific tactical knowledge across a vast range of institutions, people, and ideologies. We consider this to be crucial in creating and expanding secure tools that are robust, flexible, and durable.

When it comes to such "project contributors," we have never exercised an ideological or political "litmus test" on them prior to accepting their contributions - if such contributions prove useful, clean, and effective. It's not clear how we would go about implementing such a test were that our goal; many of these one-step-removed contributors are known to us only as nicknames, bitmessage addresses, or other identity-decoupled markers. We simply aren't in the business of vetting the non-relevant elements of project contributors.

This, also, is different from conventional practice, and opens us up to criticism in a guilt-by-association model: because we don't deny accepting tactical assistance from one or another people, it's inferred that we must have accepted. Further, if we did in fact accept some limited assistance, an ideological link is then inferred. We do not feel this reflects accurately the way our project has progressed through its early phases, nor how it operates today.

- - -

As a team, we've drafted this note to clarify the items discussed above. We'd also like to say something a little bit more personal, and direct: this project attracts and inspires a vast range of people - some of this is visible publicly, in our twitter feed or here on the forum. Much isn't visible, and comes to us in channels across the spectrum. Some of these interactions are mystifying, some deeply enlightening, some fascinating, and some terrifying. In sum, cryptostorm is vastly stronger for our willingness to accept these contributions, value-agnostic, as they present themselves to us. It's not always easy, but overall it's good.

We understand that some people who make such contributions are going to have political enemies, ideological opponents, and all manner of adversaries. It is in the nature of "interesting" people to carry such baggage; that's what we see. We generally refer to ourselves as "Switzerland" in such matters - we're neutral, we do not take sides in choosing who to "allow" to provide assistance to the project. Despite that, we do find ourselves pulled into these conflicts occasionally, and it's always disappointing to see that our neutrality - itself deeply rooted in who we are, as a team - can so easily be ignored.

We have a simple request for people involved in all "sides" of such conflagrations: if you have problems with each other, take up those problems directly with each other! This isn't so much to ask, is it? Targeting us because of our "associations" (and you'd likely be amused at how many directly-contradictory "associations" are assumed about us - with each side of some conflicts sure that we are aligned, improbably, with their "enemy") is neither helpful in such conflicts - we just take the fire and carry on - nor does it do a service to the world at large. It's wasted time and effort, misdirected and misconceived.

On a regular basis, this project sees a huge range of odd, eccentric, interesting, complex, maddening, scary, and occasionally execrable people float by, ask us questions, make use of our educational resources, or "recommend" us to colleagues or the world at large. They span the gamut from black-bloc anarchists to staunch political conservatives, from radical students through wealthy tech industry oligarchs, and from anti-establishment street activists through associates of "no-initial" governmental black-budget agencies that don't print business cards. And everything in between. Some are public and visible for one reason or another - they leave breadcrumbs, perhaps by accident or perhaps as part of some larger "great game" of which we're totally unaware. Some (many, in fact) come and go without a trace being left. Except our memories.

(we'd point out that anyone - literally anyone - can make a public connection to "cryptostorm" if they want, and there's nothing we could do to stop that if we tried; some such connections might reflect real facts, some might be pure fantasy... some might be intentional disinformation designed to draw down attackers on our project)

Yes, you can attack us because of these associations - real or imagined, current or past, substantive or ephemeral, hesitant or enthusiastic. But please think twice about this. It might feel good to take your anger or frustration out on us, as a team and a project: we're publicly visible here, we generally interact with the community, we're not in hiding. That's really not fair, doubly so if the target is easy to see and confront directly if you so choose. Why attack us, in that case? It just doesn't make sense, even after years of such things happening in a huge range of circumstances.

We offer data security tools to anyone who chooses to use them. Culturally, it is ingrained in us to avoid "playing sides" - even if personally we do choose one side or another. We'd be a piss-poor security tool if we selectively chose who could make use of what we do; that would prove we're not "content agnostic" and trusting us would make no sense. We don't do that; we haven't for all the years we've worked on this as a team, and we're not going to start now.

It's a rough world out there, we see it every day just like everyone else. Sometimes we get kicked around because we won't "fight back" and we are averse to handing out team or contributor information (as explained above). For those who engage in such activities, we'd ask only one question: why? Why are we a legitimate target, when our job involves standing aside from these things and neutrally enabling secure communications?

There's so much hate in the world, everywhere. We try hard not to be part of that. That might make us look like an easy target for those who want to hurt someone. If you have to do that, we really can't stop you. But we can - and do - ask you to consider whether it's the right thing to do.

Thank you,

by cryptostorm_team
Sat Nov 29, 2014 10:40 am
Forum: general chat, suggestions, industry news
Topic: On Honeypotting
Replies: 5
Views: 52409

On Honeypotting

{direct link:}

This is an article about honeypot awareness.
"Weird heroes and mould-breaking champions exist as living proof to those who need it that the tyranny of the 'rat race' is not yet final."

~ Hunter S. Thompson
What is a honeypot? A honeypot is a security resource set up specifically to draw in unsuspecting visitors and thereby compromise their security. The classic honeypot example in the "VPN industry" is that run by CumbaJohnny. This "VPN service" was set up, operated, and designed solely to gather information on 'carders' using it, and thereby prosecute them. There are others, although none as well-documented publicly and as bright-line in their goals as that one (we have found a honeypot VPN service run by a... an entity, and have been tracking it for more than a year - that story will continue elsewhere).

But apart from pure-form honeypots there's a sliding scale down from there. How about the "VPN service" that uses such badly designed encryption that it is effectively useless against even the most minimal surveillance effort? Customers pay to use it, largely unaware that this "private network" is cryptographically useless. The classic case of this was iPredator in its early years. As a reseller of Relakks' PPTP-based "VPN service," iPredator was offering a security tool that was functionally useless in securing anything. Eventually, when confronted, Peter Sunde admitted the tool was useful only to make a "political statement" and not as a security technique. Today, iPredator offers more competent service - although at last check, they still supported PPTP, as well.

Is that old-form iPredator a honeypot? The usage of honeypot usually suggests some sense of intentional setup: a trap. In that sense, it's a bad match. All evidence suggests iPredator offered only PPTP because it was cheap, easy to deploy, built-in to most client OSes, and what Relakks had already used. That's poor security procedures, but not a honeypot.

How about a "VPN service" that advertises itself aggressively as "secure" and "private" with "no logs," but privately acknowledges that it hands over dockets on its customers to LEO dozens of times a month, on an "unofficial" basis - no warrants required? In this case, the company is actively marketing itself as "secure" and its marketing emphasizes the "no logging" element - which, of course, is total nonsense. This is much closer to a honeypot: there's an intentional misrepresentation, an effort to make visitors feel safe while knowing they're anything but.

So how do you know if a security service you are using is a honeypot?

We get asked this question a lot. Often, it's from "trolls" or paid shills for other "VPN services" who are looking to limit competent competition. It's the nature of the business that such things happen; we take it in stride. Sometimes, smart members or prospective members ask about honeypot concerns, and we point them to that CumbaJohnny article and encourage them to do their own digging and research, and make their own decisions. That's all well and good, but there's not much out there worth reading on this subject, unfortunately. What is there usually offers that linear, overly-simplistic "don't get caught in honeypots!!" sort of useless advice that has nothing to say in terms of specifics.

So we've been kicking around our in-house views and advice on this topic. This article summarises what we know.

First off, talking about - and asking about - honeypots, honeypotting, and general trust in technology is good. Without discussion and questions being asked, this whole topic gets shrouded in FUD and whispered nonsense - that doesn't improve security and it doesn't keep people safe. On the flipside, talking about honeypotting and honeypot awareness will inevitably result in more accusations of being a honeypot - once folks realise this is an important topic with few cut-and-dried answers, they start to see honeypots everywhere. The pendulum swings. We are used to this, and on balance it works out ok.

But the real question is: how can someone determine whether a given service is a honeypot, or not? What's the punchlist to make that determination with confidence? And, in short, there is no such punchlist and no way to answer definitively. Sorry. That's reality.

So the first lesson is this: anyone who tells you they can prove they aren't a honeypot is, at best, not credible in their expertise and, at worst, has something overt they are trying to hide (like being a honeypot). There's things we might feel help gain confidence in a resource, but nothing to prove it's solid.

The flipside is that it is possible - on very rare occasions - to prove that something is a honeypot. Listen to those warnings, if they come! In the CumbaJohnny case, one researcher (Max Vision) noted that the server sometimes leaked IP addresses directly tied to the FBI. That's pretty much solid proof, by anyone's definition. He was largely ignored, under the assumption he was just a jealous admin of a competing site (which was true) and that his evidence was faked (which was not true - it was real). "I heard from this guy who heard from this guy" isn't hard evidence; nor, sadly, is the much-loved screenshot. It is very easy to fake most any screenshot, sorry but true. If you are experienced enough to understand the details of this sort of thing, you'll know if a service leaks a proof-positive instance of honeypotting. Watch for those, although they're quite rare.

That leaves us with a very large middle ground - not proved honeypots, but also no way to prove them 100% secure. Here's our rules of thumb, that we as a team use personally and have developed over decades of life out in the digital wilds...
  • 1. Something looks too good to be true? That's a concern. The honeypot service we mentioned, that we've been tracking for a while, gives away their service. That makes you wonder, doesn't it? Note: this is not to say any free service is a honeypot, so just stop ok? We're saying that too good to be true is a possible red flag. Same goes for ridiculous claims of magical crypto or whatnot: if it sounds fishy, check deeper.

    2. If it's too shiny and perfect, think twice. This one is really subjective, but we stand by it. Real life has bumps and scratches and bits of chuff sitting around. That's life. Real teams, working hard and under pressure and tight on cash, miss stuff like that sometimes - it happens. Broken link on a website, etc. In contrast, things that are so perfect they just sparkle make us nervous. There's a certain twitter account, and "he" seems to be available 24/7. Every link is perfect. Every page has Excellent Graphics(tm). Each post is without typo, every blog entry formatted to spec. This is totally, wildly unrealistic for anyone who actually lives in tech. For the general public, it seems an image they love: the Super Hacker Elite, no mistakes. Hoo-rah! But in reality, that's not how it is. What such perfection suggests is a great team: PR flack, a few interns, steely-eyed ops managers, etc. Think of those rooms of spooky-good workers in the Bourne movies. They don't have many typos, their blog posts go out on time, they don't drunk-tweet. Watch for a drunk-tweet now and again... a sign of mortal humans, not honeypots run by efficient LEO.

    3. If the people associated with the project do strange and organic things, that's a good sign. Some projects have had coders who embodied nasty, ugly ideologies regarding racial topics. That's sad... but also not likely to happen in a well-run honeypot, is it? Not impossible of course - an existing project that gets "turned" could have all these little rough corners... but on balance, strangely incongruous stuff helps build confidence. No governmental agency or competent LEO operation is going to put a hard-drinking racist slob in charge of the servers... even if she's the best in the world at that particular job. Too risky, not their style.

    4. Idiosyncratic tech choices can go either way. LEO and honeypots in general are going to go for low-risk, boring tools with licensing agreements and sales reps, on average. Wild-eyed, loopy, big-dreaming crypto tech teams usually have at least one erratic tech choice in the mix, and often more than one. We only communicate by... Pond! Or: our servers all run... whonix! You get the point. Real technologists develop fetish-like obsessions with weird areas of tech, often somewhat impractical and difficult to understand from the outside. This is a badge of authenticity, however frustrating it may be otherwise. A tech team with no such fascinations? Again, just a bit too white-gloved and perhaps worth a second look.

    5. Backstory. This is a big one, a very big one. Every tech team - every man or woman in the security tech world - has a backstory. Some really don't want to share those backstories, for any of a host of reasons. Fair enough. Some want to splash their PR pictures all over the website. Again: fair enough. Some are shy, some introverted, some loud-mouth braggarts. They're all, unquestionably, people: human people, with flaws and history and strange quirks and dark corners and, as often as not, more than a few scuffed spots somewhere in the past. Few will post all that on the project blog... not unheard-of, but rare. More commonly, folks will suggest that they're "known in the community." A bit of asking around, someone who knows someone who knows someone... and there's likely someone who got drunk with that person at some con a decade ago and ended up in a public restroom singing marching tunes. Or whatever. Point being: this is a very, very small world - the security tech world. Everyone knows everyone, everyone has history... and if people show up (or personas, really) that nobody knows firsthand? That's odd. No dirty stories of old days gone bad? A little odd, unless they're academics, who tend to be more white-gloved (not always!). No fingerprints left anywhere in terms of past projects, failed startups, burned colleagues, jilted lovers, embarrassing rap sheets? Suspicious as fuck. Sorry, that's how we call it.

    6. Shifty about discussing honeypots, snitching, LEO, and in general questions of trust? That's a concern, right there. Some folks get furious when accused of snitchy honeypotting. Some ignore it as beneath them, snooty and contemptuous. Some try to argue the trolls to a standstill when such questions come up, and some fume and vow revenge. All are, in a sense, valid replies - human replies. Honeypots seem to float above this fray, often as not. A shiny, teflon veneer. No response if questioned about such things: no emotion. This isn't a 100% rule, and indeed no honeypot rules are (see above). Some honeypots in other areas have been super-aggressive in attacking anyone who questioned them. That can be a red flag, too. Responding with indignation is one thing; going all red-hot-vengeance is sort of over the top. Mostly, look for a human, imperfect, varied, erratic, slightly ragged response... that's how reality often is. Good days, bad days... variation. And variation among the team. Some might be dismissive, some steely-eyed angry. Variation makes sense, for a real team.

    7. Weird, unexplained absences that never really get folded into the narrative are a huuuuuge red flag. LEO seems to do these sorts of days-long absences - for training, for meetings, for whatever - far more than do real tech ops teams. Real teams are used to being paged 24/7, pinged by phones, tweeted at in the shower, called, jabbered, IM'd... the works. We might vanish for a few days due to exhaustion, personal crusades, whatever - but usually these vanishments fit in somehow, even if in only a jagged and weird way for folks watching from a distance. But the LEO vanishments, they seem to happen unannounced - and remain unexplained later. A drunk reply from an overworked sysadmin is really human and not totally uncommon, nor is a frazzled tech support staffer being needlessly crabby. Robotic drones that vanish for days, and then show back up as if nothing happened? That's a big red flag. Unless they were in jail, in which case... well, could go either way tbh. :-)

    8. Finally, and somewhat in summary, watch for gloves too white. This is security tech. It's not golf course management. That's not to say everyone in the infosec world is secretly a black hat rooting servers at night - obviously that's both silly and disrespectful and we make no such intimation. However, really... if someone is so spooked by any rub-up with the seedier elements, then that's a bit of a flag. Yes, the outfits selling 0days to spooky govs are less likely to be mixing with the hacker rabble... but not really, in fact. Where do the 0days come from? Where do they hire their analysts? Even those shops rarely have pure-white gloves. So if you see shiny-white gloves, what's that about?
Security tech and trust are intertwined. They always will be. Inside this little bubble of reality, many such decisions are made based on personal relationships and personal trust. We know someone, who knows someone, who has known someone for a very long time and trusts them - technically, personally, whatever. And we do make decisions about tech like this, often. Why use that OS, or that tool, or that parameter set? We know this gal, she's best-in-class. She is the uber-expert on that particular thing. And she says it's the bee's knees. She will talk your ear off for hours explaining why, and likely has. That - that matters. Listen to that, in our world.

Same goes for honeypot awareness. The deep tech people, the ones with roots and history and old feuds and scars and blurred memories and stories they'd rather not tell about mistakes they wish they didn't make? Ask them. They may have an ugly feud with a team or a person... but they'll likely know if that's a legit project or not. If nobody knows a team, nobody can say good or bad? That's a big flag.

To wrap it up, scars and rough edges prove a real existence. Nobody gets far in this space without racking up a good bit of both. Enemies, failures, embarrassing episodes. Broken tech. Also of course smashing victories, brilliant code, vibrant github portfolios... it's all part of being genuine, the good and the bad. If you winnow out anyone and any project with any "bad," what you're doing is ensuring that real teams are out of the running - for any real team has scars as well as plaudits. If you do that, what's left is basically the fake - and a good-sized chunk of those are honeypots.

That's our view of the terrain, take from it what you will.

With respect,

  • cryptostorm_team
by cryptostorm_team
Mon Nov 17, 2014 1:59 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's Windows widget - version 2.0 'Narwhal' {DEPRECATED}
Replies: 13
Views: 40535

cryptostorm's Windows widget - version 2.0 'Narwhal' {DEPRECATED}

{new widget version 2.2 has inherited production-deployed build status; this thread retained for archival purposes, and locked ~admin}

The latest build of our Perl network connection client application - the widget - is 2.0, nicknamed Narwhal. Technical changelog available at viewtopic.php?f=47&t=6200&p=9191
narwhal1.png (23.33 KiB) Viewed 40535 times
The biggest change from earlier builds is integrated access to no-cost cryptofree capped network service, for those without access tokens for whatever reason. This is really beta, which means we're still testing the integration into our widget and there's a few rough edges here and there... but having wider public access to cryptofree seems worth the stretch for us to integrate cryptofree into Narwhal.

Other improvements in this widget include improved dialog boxes, extra tweaks to the TAP installer framework to avoid the dreaded "zombie TAPs" (which, one wag observed, might require a double-tap to dispatch), smoother install from within existing widget sessions, tighter procedures to save tokens across widget upgrades, and a handful of performance improvements behind the scenes. Oh, and way better graphics thanks to @VisualVeritas.

Full release notes, and proper publication of Narwhal source code via our github repository, will be completed later today.

So without further ado... Narwhal! :clap:

Zipped installer:
(11.28 MiB) Downloaded 1177 times
Windows installer, ready to run:
(11.43 MiB) Downloaded 1684 times
by cryptostorm_team
Sat Nov 08, 2014 7:19 am
Forum: independent cryptostorm token resellers, & tokens 101
Topic: wallet addresses: darkcoins, dogecoins, litecoins, namecoins, bitcoins
Replies: 7
Views: 68897

wallet addresses: darkcoins, dogecoins, litecoins, namecoins, bitcoins

{direct link:}

We're now doing automated altcoin order processing on the main site, using the merchant
Supported coins are listed there.
by cryptostorm_team
Sat Nov 08, 2014 4:29 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: Morally Repugnant: "pay to play" in the VPN review underworld & our no-pay pledge
Replies: 1
Views: 20354

Morally Repugnant: "pay to play" in the VPN review underworld & our no-pay pledge

{direct link:}

The Moral Repugnance of Fraudulent "VPN Review" Extortions & Kickbacks

When we started providing network security service to folks in 2007 that was intended not just as security theatre but rather to provide serious, no-compromise protection, many challenges presented themselves. At a technical level, repurposing so-called "VPN" toolsets in a way that was suitable for protocol- & application-agnostic network transit - not to mention cryptographically robust - took some real work. Determining optimal server-side configurations also involved more than a little effort. What we didn't worry so much about, to be honest, was "marketing" or "branding." No sense in carts before horses: make the service work, then worry about telling the world how well it does its job.

In the intervening years, we've watched with disappointment & eventual resignation as the "VPN industry" copied our core approach to network provisioning (OpenVPN-based, Linux-served, etc.) with next to no innovation or improvements of note. At the same time, enormous effort & creativity has been brought to bear on the "marketing side" of things - with dozens and dozens of me-too technical copycats springing up, distinguished from each other only based on marketing gimmicks & snazzy logos. When a "brand" whose logo is a smiling Ass, and whose claim to fame is betraying its customers despite enormous "no-logging" hype, came to be a leading "competitor" in the field, the accelerating slide of the "VPN industry" into disrepute & dishonour was fully complete. Yes, that means you, Snitch My Ass.

But, of all the dirty & dishonest scams that have burst forth during this ugly slide into the gutter, the one that is perhaps least well-understood and yet most widespread is that of the fraudulent "VPN review" websites & their scammy behaviour behind the scenes. Of course, we've written about that broadly in the past - there's an entire subforum here devoted to just that topic - but in that morass of dishonesty one practice stands forth as the nadir of deceit: extortion-based efforts to coerce "advertising" by fraudulent review websites.

Here's how it works:

A scammy "VPN review" website - inevitably run by some get-rich-quick schemer with no technical expertise & a history of activity in other, similar web-based frauds - makes contact with a network security provider. "Hey," the scammer says, "we've decided to 'review' your service - congratulations. Only, err, if you want a 'real' review & not just our 'free' version, you have to pay us for 'advertising.' Otherwise, we'll just publish our 'free' review... & there might be errors, ha ha." What they mean is that they've drafted a dishonest, inaccurate "review" that they will then post to their cheap-o blog... unless the service provider pays up to "correct" the lies that have been drafted up. Here's a sample "rate sheet" we received from one such entity.

In a word: extortion.

This happens routinely, and is a core element of the "VPN review" website scam. Ever notice that all the same, poorly-performing, unreliable, betrayal-centric "VPN services" always end up at the top of those lists? It's not because folks are so keenly positive about their crapware services; rather, those marketing-focused companies do two things to ensure their placement. One, they provide extensive kickbacks to these sites via "affiliate programs" (more below). Two, they pay to "advertise" on the very-same sites. You will notice that even TorrentFreak, which used to have a reputation for legitimate journalism, engages in the second form of dishonesty with its readers. We hope we're reading TF's stance wrong, but the "advertise with us = better coverage" connection seems hard to ignore.

The second part of this, the extortion gambit, hasn't been publicly discussed before. After all, those who pay the extortion shakedowns ("advertising fees") don't want to mention it. And those who refuse generally get pummelled by the negative "publicity" of the fraudulent bad write-ups & are not heard from again - after all, if it's all about making "easy money," then when the easy cash isn't there, these coattails-riding copycats move on to the next get-rich-quick scheme on the internet.

But, we don't work that way on our team.

Rather than give in to extortion, we refused - and we don't really care if some amateur-grade crook pastes up some lies about our project, our community, or our team. We've been at this long enough to speak for ourselves, and we trust that the truth of our approach to serious network security service is well-understood by the community & by legitimate tech review sites. In the example last fall, the scammer decided - ironically enough - to attack one of our project's original founders, repeating verbatim a line of US-sponsored disinformation that's long since been proved false, & shown to be part of a black propaganda campaign now exposed as routine for use in targeting substantial NSA/GCHQ enemies & encryption activists. Whenever our team is attacked with government-sponsored disinfo, we consider it a win - it shows just how much the spooks & rogue military goons fear the work we do. For the win! :-)

Anyway, as part of that reaction our team decided to put forth some principles we follow when it comes to such questions. Rather than doing so piecemeal, we've put them together here in one place. We're publishing them publicly, both to make our own stance clear & we hope to motivate other legitimate security providers to do likewise. Without further ado, here goes...
  • 1. We do not have an "affiliate program" that pays kickbacks to people who convince customers to join our network, and we never will. While the idea seems ok in theory - spreading the word, adding more folks to the team - in practice these programs have turned into cesspools of fraud, over-promising, and dishonesty. We want none of it, and never will. Sure, some are not scammy... but the structural pressure for them to become exactly that is just too great to ignore. We provide, instead, token resales to anyone who would like to become a reseller. Our resales program has an open, standard pricing model & there's no shady back-room dealing taking place. That way, members always know that overall network security & reliability comes first and foremost - not sneaky "affiliate marketing" tricks that are really just MLM scams wearing different skins.

    2. We refuse to advertise or otherwise pay money to any website or other resource that represents itself as an independent "review" service for network security companies. Again, some such sites might actually be able to retain a 'Chinese wall' between advertising sales and reviews... but most don't. Most, in fact, just sell the putative top ranking to whoever pays the most "advertising" dollars per month - often going so far as to promise to create fake customer reviews to be added as comments to the fake reviews! By removing ourselves from this sordid environment, we take away the temptation to have reviews be anything but honest or objective. What advertising we might do in the future (we don't actually 'advertise' anywhere currently, and may well never do so... more on that in another whitepaper) will only take place on neutral, independent platforms - not places that are directly involved in reviewing services such as ours. That's the only way to be sure (no need to nuke 'em from orbit, thank heavens!).

    3. Finally, and most vehemently, we will never pay extortion demands from small-time crooks threatening to post lies about our work if we don't pay up. Never. Rather, we will expose these frauds publicly & rain all available contempt on their execrable, dishonest practices. We call on other legitimate providers to do exactly the same, and drive these scum from the world of legitimate security services. They are a disgrace to our "industry," and their falsely-created gibberish serves to confuse & misdirect customers seeking legitimate advice on legitimate security questions. Only through an ironclad, unwavering rejection of such nonsense can we continue our leadership role in supporting honest, objective, fact-based decisions regarding security technology.
Phew. There you go. Hopefully, this will help to cut down on the volume of overt & semi-covert efforts to approach our team for payouts in order to get "good reviews." We're of course deeply committed to the process of member review & member feedback; here, in our forum, anyone can post such review - members & nonmembers alike, anonymously or via a named account here - without censorship or constraint. However, that process is perverted when scammers make up fake "customer" comments, write fake reviews, and otherwise pollute the legitimate information flow with their cheap-assed efforts to make a quick, dishonest buck.

Last year, when this most egregious scammer tried to extort our team (& failed, of course), the phrase "morally repugnant" came up. We think it's a great phrase! Those who spread government-sponsored black propaganda against well-respected, well-tested, well-credentialled members of our team are indeed morally repugnant. Those who run fake "review" websites that attempt to extort money from legitimate service providers are, indeed, morally repugnant. And, those who seek to profit from shady "affiliate marketing" programs that run counter to the interests of paying customers are also, in their own way, morally repugnant to us.

As a team, we value honesty, integrity, loyalty, & proven professional competence. That is the foundation on which our project is built, & to which our team strives. This means that, quite often, we're attacked & besmirched by dishonest scum who only want to find a way to act as a lazy parasite on the real work of others. That, indeed, is morally repugnant behavior. We abjure those who engage in such despicable acts, and call on them - here, in public - to grow up & earn a living as real contributors to our society, our culture, & our planet. It's not so hard - give 'er a try! :-P

This thread, as is every thread here, is open for any and all replies. If the scammer in question wants to defend himself, he's welcome to do so here - publicly, without editing or censorship by anyone. His earlier extortion efforts were done via PMs in twitter; we've screenshotted them, since as soon as we rejected his overtures he (not surprisingly) deleted the PMs at once. If it's appropriate, we'll post those screenies here... if he denies the extortion, that is. But we doubt it'll come to that. Let's see...

Often, we're told that our "no compromise" attitude comes off as intense & a bit spooky. Fair enough. That's not our intention, of course - but we can see how it happens. As a team, we're actually pretty nice gals & guys... but, yes, we do take this work seriously. For that, we cannot offer any apologies. This is what we do, & we're proud to put our spirits & our expertise fully into the task at hand. As it should be.

With respect,

  • ~ cryptostorm_team
by cryptostorm_team
Mon Nov 03, 2014 12:16 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's opensource license(s): Hessla & tldr license language
Replies: 5
Views: 30171

cryptostorm's opensource license(s): Hessla & tldr license language

{direct link:}

It was recently pointed out to us (h/t twelph) that we've never made clear the license terms of our commitment to opensource availability... because we haven't. Just never came up, to be honest. So now it has, and we've chosen the Hessla license for the widget code (also available on our github page, shortly). Here's the verbatim text:
The Hacktivismo Enhanced-Source Software License Agreement

Everyone is permitted to copy and distribute verbatim copies of this license document. You may use content from this license document as source material for your own license agreement, but you may not use the name "Hacktivismo Enhanced-Source Software License Agreement," ("HESSLA") or any confusingly similar name, trademark or service-mark, in connection with any license agreement that is not either (1) a verbatim copy of this License Agreement, or (2) a license agreement that contains only additional terms expressly permitted by The HESSLA.


Software that Hacktivismo[fn1] releases under this License Agreement is intended to promote our political objectives. And, likewise, the purpose of this License Agreement itself is political: Namely, to complement the software's intended political function. Hacktivismo itself exists to develop and deploy computer software technologies that promote fundamental human rights of end-users. Hacktivismo also seeks to enlist the active participation and involvement of people around the world, to help us improve these software tools, and to take other actions (including actions that involve using and distributing our software, and the advancement of similarly-minded software projects of others) that promote human rights and freedom worldwide.

Because of our non-commercial objective of promoting end-users' freedoms, Hacktivismo has some special, and admittedly ambitious, licensing needs. This License Agreement enhances the benefits of published source code by backing up our human rights projects with appropriate remedies enforceable in court.

The Freedoms We Promote: When we speak of the freedom of end-users, we are talking about basic freedoms recognized in the Hacktivismo Declaration,[fn2] the International Covenant on Civil and Political Rights,[fn3] the Universal Declaration of Human Rights,[fn4] and other documents that recognize and promote freedom and human dignity. Principal among these freedoms are:

Freedom of Expression: The freedom of opinion and expression "include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers,"[fn5] and the freedom to choose one's own medium of expression. The arbitrary use of technological censorship measures to block or prevent access to broad categories of speech and expression including the work of critics, intellectuals, artists, journalists, and religious figures is seldom, if ever, justified by any legitimate governmental objective. And, to the extent that technology enables censorship decisions to be removed from public scrutiny and review, technology-based censorship mechanisms are especially suspect and dangerous to civil society. When repressive governments and other institutions of power seek to deprive people of this basic freedom, people have the right to secure, employ and deploy the tools necessary to reclaim the freedoms to which they are justifiably entitled.

[fn5] Article 19, Universal Declaration of Human Rights.

Freedom of Collective Action and Association: People have and should have the "freedom of peaceful assembly and association."[fn6] This freedom includes the right of people to work together to secure constructive change in their personal, economic, and political circumstances. When repressive governments or other institutions of power seek to deprive people (including users of the Internet) of their freedoms of voluntary assembly, association, and common enterprise, people have the right to secure, employ and deploy technologies that reclaim the freedoms to which they are justifiably entitled.

[fn6] Article 20(1), Universal Declaration of Human Rights.

Freedoms of Thought, Conscience, Sexuality, and Religion: People have and should have the freedom of "thought, conscience, and religion."[fn7] This right "includes freedom to change religion or belief, and freedom, either alone or in community with others, in public or private, to manifest any religion or belief in teaching, practice, worship and observance, regardless of doctrine."[fn8] Every person, regardless of sex or sexual preference, and with reciprocal respect for the corresponding rights of all others, has and should have the right to determine and choose, freely and without coercion, whether, how and with whom he or she shall fully enjoy the most private and personal aspects of human life, including individual sexuality, reproduction, and fertility. Moreover, "[t]he explicit recognition and reaffirmation of the right of all women to control all aspects of their health, in particular their own fertility, is basic to their empowerment."[fn9] When repressive governments and other institutions of power seek to deprive people of these basic freedoms, they have the right to secure, employ and deploy the tools necessary to reclaim the freedoms to which they are justifiably entitled.

[fn7] Article 18, Universal Declaration of Human Rights.

[fn8] Id.

[fn9] Paragraph 17, Beijing Declaration of the Fourth United Nations Conference on Women (Sept. 15, 1995).

Freedom of Privacy: Every person has the right to be free from "subject[ion] to arbitrary interference with his [or her] privacy, family, home or correspondence"[fn10] -- digitally, or by any other means or methodology. This freedom of privacy includes the right to be free from governmental or private surveillance that might interfere with or deter the rightful exercise of any other freedoms of any person. In the context of software tools that enable people to reclaim their freedoms, all end-users have and should have the right to secure and use tools that are free from the surreptitious insertion into their software of "backdoors," "spy-ware," escrow mechanisms, or other code or techniques that might promote surveillance, or subvert security (including cryptographic security), confidentiality, anonymity, authenticity and/or trust.

[fn10] Article 12, Universal Declaration of Human Rights.

Reasons For Enhancing "Free" and "Open-Source" Licensing: Developing a new software license is never a trivial task and this License Agreement has presented special challenges for Hacktivismo. Because of our human rights objectives, this License Agreement includes some specific terms and conditions that, as a technical matter, depart from the previously-recognized and established definitions of "free"[fn11] software and "open source"[fn12] software.



We have therefore coined the term "enhanced source" to describe this License Agreement because we have sought to combine most of the freedom-promoting benefits of "free" or "open-source" software (including mandatory disclosure of any changes or modifications Licensees make to the source code, whenever they release modified versions of HESSLA-licensed Programs or other Derivative Works), with additional enhanced license and contractual terms that are intended to promote the freedom of end-users. The Hacktivismo Enhanced-Source Software License Agreement promotes our objectives in an enhanced manner by including contractual terms that empower both Hacktivismo and qualified end-users with greater flexibility and leverage to maintain and recover human rights, through the mechanism of the contract itself including terms that are designed to enhance both our enforcement posture and that of qualified end-users in court.

To be sure, Hacktivismo enthusiastically endorses and supports the goals and objectives of the Free Software movement and those of the open source community. In particular, we owe a special debt of gratitude to the Free Software Foundation, to the Open Source Initiative, and to many exceedingly talented people who have contributed to Free Software and open source projects and endeavors over the years.

Ultimately, however, after reviewing the field of possibilities among previously-existing "open source" and "free" licenses, Hacktivismo has concluded that none of them fully meets our requirements. Writing our own License Agreement enables us to pursue our human rights objectives more effectively. This licensing endeavor represents a first step toward achieving our objectives, and no doubt informed feedback, scholarship, and learned commentary will enable us to pursue our objectives even more effectively in the future.

Benefits That Carry Over From Free Software: Before we explain how an "enhanced source" License Agreement specifically differs from a "free" or "open source" license, we believe it is helpful to explain in greater detail what the principal advantages, and freedom-enhancing aspects, of "free" software are.

When we speak of "free software," we refer to important personal freedoms, and not price. In addition to terms that are intended to promote the freedoms of Expression, Thought, Collective Action and Privacy (along with other human rights) of all end-users, the Hacktivismo Enhanced-Source Software License Agreement is also designed and intended to promote the following freedoms:

· You have the freedom to distribute copies of the software (and charge for this service if You wish);

· You have the freedom of access to the source code, to inspect and verify (and even to improve, if You can) the integrity and functionality of the software;

· So long as You do not subvert or infringe the freedoms of end-users by doing so, You have the freedom to change the software or to use parts of it in new Programs;

· You have the freedom to know You can do these things.

The licenses for most computer software programs are designed to take away Your freedom to share software or change source code. This kind of software is designated as proprietary or "closed." The Hacktivismo Enhanced-Source Software License Agreement, like other license agreements that have served as inspiration for our work, is intended to promote both Your freedom to share our software with others, and Your freedom to change and improve the software. Your right under this License Agreement to look at the source not only enables You to contribute Your own efforts to Hacktivismo's human rights projects, but also serves as an additional level of assurance to You as an end-user that no unwelcome, hidden surprises have been inserted into the software that could compromise Your rights and freedoms when You use it.

HESSLA Helps Safeguard Additional End-User Freedoms: In order to understand why this License Agreement must be described as "enhanced source," and cannot strictly speaking be considered either a "free" or "open source" license agreement, it is helpful to consider the possibility that a programmer might insert malicious code, such as a computer virus, a keystroke logger, or "spyware," into a program that has previously been released under a "free software" license agreement.[fn13] The act of inserting malicious code into software, if done by a private individual or company (though many governments will contend they are not required to play by the same rules as the rest of us), may well violate criminal laws and result in civil tort liability. It is, of course, also possible to deter such malicious behavior by including, in a software license agreement, a specific contractual term that prohibits such behavior, meaning that any licensee who violates the prohibition against malicious code can be sued by the licensor (or by third-party beneficiaries whom the licensor has explicitly identified as alternate or additional enforcers of the agreement) for money damages and a court order forbidding any continued violation.

[fn13] In this regard, the following hypothetical illustration should be particularly helpful. If an organization of computer security enthusiasts were to release, under the GNU General Public License ("GPL"), a program called "Grey Eminence 3000" ("GE3K"), a remote-administration tool for Microsoft Windows that helps illustrate how insecure this particular commercial product happens to be, it should hardly be surprising that the United States Secret Service and Federal Bureau of Investigation, after making some loud and misleading apocalyptic noises about "computer hackers" to Congress and in the media (primarily in a largely successful effort to increase their technology budgets), would also study the software to see what it does, how it does it, and whether any of those capabilities happen to be features that law enforcement might find helpful. Of course, if the U.S. federal law enforcement community were to announce, several months later, that it had commissioned the development of "classified" quasi-viral computer-intrusion and surveillance software called "Magic Candle," the capabilities of which law enforcement does not plan to disclose to the public, and the source code for which will remain a closely-guarded secret, then inquiring minds might become curious as to whether "Magic Candle" contains any of the GPLed code that was written for "GE3K" (or any other free or open-source software, for that matter). Needless to say, under the right factual circumstances, if any GPLed code from GE3K found its way into "Magic Candle," then the U.S. government or its software development contractor might well be obligated to reveal to the public all the source code for "Magic Candle." Nevertheless, so long as the "Magic Candle" source is never publicly released for comparison purposes, everyone with legitimate questions about GPL compliance faces a chicken-and-egg problem.
So long as the source of "Magic Candle" remains secret, detection of a GPL violation becomes dramatically more difficult (particularly so if, additionally, nobody outside law enforcement has access to the compiled executables), which means the worldwide community of Internet users and software developers has only the United States government's solemn assurance that no GE3K code was used, cold comfort at best.

Previous Licenses Provide More Limited Protection Against Government and Other Surveillance: No software license agreement that qualifies as "free" or "open source" may contain any restriction, as a term of the license agreement, that in any way qualifies any Licensee's prerogative (no matter who they are or what their motives may be) to make changes to code. In other words, an "open source" license agreement, to qualify for the "open source" label, may not even contain a term that prohibits the insertion of destructive viruses or "trojan horses" into derivative code. Likewise, no "free" or "open source" license agreement can contain, as a license term, any restriction on the use of the software, not even a prohibition against unlawful surveillance or other malicious uses of the software.

The "open source" and "Free Software" communities rely principally on voluntary compliance[fn14] with the disclosure provisions of license agreements (although many "free" and "open source" license agreements, such as BSD-style licenses, do not require changed code to be disclosed, and in fact enable modified versions of programs to be "taken proprietary") and on social mechanisms of enforcement, as means to detect, prevent, deter, and remedy abuses.

[fn14]As the example in Note 13 illustrates, it is sometimes difficult to determine whether the source disclosure requirement of the GPL has been violated, such as when a modified version of a program has been distributed without source, precisely because detection of a disclosure violation depends in part on the disclosure of the source of derivative works in order to compare whether a putative derivative really does contain code derived from a GPLed parent work.

The Hacktivismo Enhanced-Source Software License Agreement does not in any way sacrifice or surrender the enforcement techniques and safeguards available under license agreements such as the GNU General Public License. Rather, the HESSLA enhances the options available to Hacktivismo and to qualified end-users, by providing additional enforcement options. Moreover, for the purpose of promoting the freedoms of both programmers and end-users, through the enforced mandatory disclosure of code modified by third parties, this License Agreement has advantages over many of the licenses (such as BSD-style licenses) that fully qualify as "free" or "open-source" license agreements.

What makes this License Agreement an "enhanced source" License Agreement, instead of a "free software" license agreement, is that the Hacktivismo Enhanced-Source Software License Agreement contains specific, very limited restrictions on modification and use of software by Licensees, as part of a calculated trade-off of rights and responsibilities that is intended to promote the freedom of end-users.

The Enhanced-Source Bargain Reinforces End-User Freedoms: To protect Your rights, we need to make restrictions that forbid anyone to deny You specific rights or to ask You to surrender these rights. To protect Your human rights as an end-user of this program or any work based on it, we need to make restrictions that forbid You and all other Licensees of this software (including, without limitation, any government Licensees) from using this code to subvert the human rights of any end-user.

We protect Your rights and the rights of all end-users with two steps: (1) copyright the software, and (2) offer You this License Agreement which gives You qualified legal permission to copy, distribute and/or modify the software.

The restrictions shared by all Licensees translate into certain responsibilities for You and for everyone else (including governmental entities everywhere) if You distribute copies of the software, if You use it, or if You modify it.

In this regard, the methodology we employ is not materially different from the methodology the Free Software Foundation employs in the GNU General Public License (the "GPL"). The methodology is to exchange the Author's permission to copy, change, and/or distribute a copyrighted work, for every Licensee's acceptance of terms and conditions that promote the licensor's objectives. In both this License Agreement and the GPL, the terms and conditions that each Licensee must accept are intended to discriminate against certain very narrow, limited kinds of human endeavor that are inconsistent with the licensor's political objectives. In other words, the GPL requires each Licensee to promise not to engage in the activity of 'propertizing,' or 'taking proprietary,' modifications to GPLed code; modified code must also be released under the GPL, and cannot be released in the form of "closed" executables, or otherwise be made "proprietary." Likewise, the Hacktivismo Enhanced-Source Software License Agreement discriminates against undesirable activity such as surveillance, introduction of certain kinds of malicious code, and human rights violations, as well as discriminating against "propertizing" behavior such as might violate the GPL. Subject to these narrow restrictions, Licensees under either license agreement enjoy very broad latitude to change, use, explore, modify, and distribute the software, latitude much broader than they would enjoy with typical "proprietary" software packages.

As with "copyleft" licenses such as the GPL, under the Hacktivismo Enhanced Source Software License Agreement, programmers (including, most importantly, programmers working for governments) do not have unfettered or completely unlimited "freedom" for purposes of what they can do with HESSLA-licensed code. Just as with the GPL, they do not have the "freedom" to convert HESSLA-licensed code into "closed" or "proprietary" code. People who create derivative works based on an HESSLA-licensed program and distribute those works have a corresponding obligation to "give back," and not merely to "take," HESSLA-licensed code.

If You distribute copies of such an HESSLA-licensed program, whether gratis or for a fee, You must give the recipients all the rights and responsibilities that You have. You must ensure that they, too, are told of the terms of this License Agreement, including the freedoms they have, and the kinds of uses and modifications that are forbidden. You must communicate a copy of this License Agreement to them as part of any copy, modification, or re-use of source or object code, so they know their rights and responsibilities.

Thus, the main difference between this License Agreement and the GPL is not the methodology we employ,[fn15] but the scope and breadth of the political objectives we seek to promote. Simply put, the political objectives we promote are somewhat broader than the explicit political goals that the Free Software Foundation seeks to promote through the GPL. Our goals include a somewhat broader range of human rights than the specific copyright-related rights with which the GPL is principally concerned. But, while we are concerned with the entire field of human rights rather than a subset, we want to make it perfectly clear that we also embrace, share, and seek to promote, the goals we share with the Free Software movement.

[fn15] There is a modest difference, but it is not large, and mostly philosophical. Some experts on the GPL draw a distinction between a "contract" and a pure "license," by taking the position that a pure "license" does not impose "contractual" conditions on a Licensee, only conditions that would otherwise (but for the license) be subsumed within the exclusive rights that the licensor has under copyright law. Thus, the licensor has the right to exclude anyone else from such activities as making copies, making derivative works, publicly performing a work, and other exclusive rights specified by statute. But, concerning the act of "using" a computer software program, in instances in which a copy is not made (or, in the trivial sense that a copy is made only temporarily from a storage medium to memory, to enable software to be "used"), the Free Software Foundation takes the position that United States law, at least, does not confer an exclusive right on the copyright holder (or, as others would argue, the United States statute qualifies the holder's exclusive right to copy), because the U.S. Copyright Act specifically exempts from the exclusive right to make copies a copy made (for example) from a computer hard drive to volatile memory, in connection with the process of executing computer software. So far as we can determine, the Free Software Foundation does not argue that it is impossible "contractually" to impose conditions on use, as part of the bargain one strikes, when conditionally allowing Licensees to make copies of a program. Rather, for philosophical reasons, the Free Software Foundation voluntarily chooses not to include what it views as "contractual" conditions in the GPL. In this sense, Hacktivismo takes the position that the HESSLA is clearly a "contract" and contains "contractual" terms, such that it should not be considered a "pure license," under the nomenclature employed by the Free Software Foundation.
However, in our view, precisely because both the HESSLA and the GPL are clearly conditional grants of permission to do things from which the Licensee would otherwise be excluded (i.e., the Licensee must undertake certain obligations in exchange for permission to copy, modify, or distribute, a work), the key point is that the methodology is quite similar.

Compared with the GPL, aspects of the HESSLA give both end-users and programmers (including, most importantly, governmental end-users and programmers) marginally less leeway to make malicious use of the program, or to insert malicious code into a program, than they would have under a traditional "copyleft" software license. These aspects of the HESSLA (such as the requirement that the program cannot be used to violate human rights, or forbidding the insertion of "spy-ware" or surveillance mechanisms into derivative works) are included because our ultimate objective is to preserve and promote the human rights of end-users, including their privacy and their right of free expression.

In other words, unlike many programmers, we are not just in the business of developing and distributing open-standards technologies. We're also trying to empower end-users (including end-users in totalitarian regimes) with software tools that promote fundamental freedoms while also seeking as best we can to protect these end-users from being arrested, beaten, or worse. Our objective of promoting end-user freedoms, including the freedoms of people in politically repressive countries, is precisely the factor that has led Hacktivismo to develop this License Agreement instead of using another.

The HESSLA Also Includes Features To Enhance Government Accountability: To this end, we have sought and intend to ensure, to the fullest extent that law (including, without limitation, the law of contract and of copyright licensing) enables us to do so,[fn16] that no government or other institution may do anything with this computer software or the underlying source code without becoming a Licensee bound by the terms of this License Agreement, subject to the same restrictions on modification and use as anyone else.

[fn16] "Everyone has the right to an effective remedy by the competent national tribunals for acts violating . . . fundamental rights . . ." Article 8, United Nations Declaration of Human Rights.

Accordingly, this License Agreement includes several terms that are aimed explicitly at governmental entities, in order to maximize enforceability against such entities. Respect for the Rule of Law means that no governmental entity is above the law, and that no governmental entity should be permitted to use its status as a mechanism for circumventing the requirements of this License Agreement.

Any use, copying or modification of this software by any governmental official or governmental entity anywhere in the world is a voluntary act, which act the governmental official or entity is free to forego if it does not wish to be bound by this License Agreement. This License Agreement seeks to establish as clearly as possible two important checks on the improper use of government power. First, the voluntary election to use, copy, or modify, this software by any government or governmental official constitutes a waiver of all immunities that might otherwise be asserted, against enforcement of this License Agreement by the Author, or assertion by end-users or others of any human rights laws that may have been violated by a government employing the Software. Second, any such government or governmental official not only subjects itself to enforcement action in its own courts, but also explicitly and voluntarily subjects itself to enforcement action in the courts of other nations that are likely to be more objective, for the purpose of giving effect to the terms of this License Agreement.

Mechanism of Contract Acceptance: This License Agreement treats any use of the software as acceptance of the terms of this License Agreement. To understand the significance of this, it is important to distinguish between the law governing copyright and the law governing offer and acceptance for the purpose of contract formation (which gives the offeror the power to specify the manner of acceptance). The question of whether copyright confers an exclusive right of use on the author of a program is certainly an interesting one. Under United States law, see 17 U.S.C. 117(a)(1), a limited exception to the exclusive right to copy exists if one makes a second copy "created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner." This License Agreement presupposes that there is no exclusive right to use in the Copyright Act, just an exclusive right to copy. However, You may not make a copy for anyone else unless they are subject to the terms of this License Agreement. Nor may You permit anyone to use Your copy or any other copy You have made unless they are subject to the terms of this License Agreement. You may not make a copy for Your own use or the use of anyone else without the Author's leave to make that copy. And any use, modification, copying, or distribution by anyone constitutes acceptance of the License Agreement, for purposes of contract law. In other words, the License Agreement is designed so that there is no loophole permitting anyone to claim the ability to use, copy, distribute, or modify the Program or any Software based on it without subjecting themselves voluntarily to its terms.

On "Shrinkwrap," "Click-Wrap," "Use-Wrap" and "Copy-Wrap" License Agreements: Arguably, some kinds of software license agreements have more in common with legislation than they do with the bargained-for, negotiated agreements that come to mind when most people think of "contracts." Particularly if a software licensor has sufficient market power to be deemed a monopoly, or if certain proposed expansions of the law of software licensing, masquerading as "codifications," are widely adopted, the ability of a private entity to impose legal prohibitions and duties on virtually everyone else, as though the licensor had assumed powers that customarily belong to legislative bodies, is both breathtaking and deeply troubling. Of course, we are hardly the first to distribute software under a license agreement that imposes conditions on a take-it-or-leave-it basis. This technique is, as everyone knows, extremely common with proprietary software. And some of the conditions unilaterally imposed by proprietary licensors range from the ridiculous to the obscene. But even certain kinds of "free" and "open-source" software licenses, such as the GPL, depend on the continued viability of legal rules that enable at least some reasonable conditions to be imposed by software licensors on a take-it-or-leave-it basis, with essentially automated methods of acceptance. Courts have been divided as to how far these kinds of licensor-driven automated agreements can go. And we cannot say that we will be unhappy if courts or legislatures ultimately reach a consensus that sharply limits what conditions licensors can impose through such mechanisms. However, while the law is still developing, we think nothing could be more appropriate than to enlist the techniques that institutions of power have used to limit freedom, and instead to re-purpose the techniques of "copy-wrap" or "use-wrap" licensing, putting them to use for humanitarian purposes and using them to promote the human rights of end-users.
To deny us the use of these techniques, courts and other law-making institutions would be required simultaneously to disarm, to the same degree, proprietary software manufacturers that possess vast market power. And, unlike the conditions imposed by many proprietary vendors, the conditions we impose through this License Agreement are hardly onerous for any end-user (unless, of course, the end-user wants to act maliciously or engage in surveillance).

No Warranty: Next, for each author's protection and our own, we want to make certain that everyone understands that there is no warranty for this software. And, if the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

Software Patents: Software patents constantly threaten any project such as this one. We wish to avoid the danger that redistributors of a HESSLA-licensed program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have included terms by which any Author must, if it has patented (or licensed a patent covering) any technology embodied in any Program or Software released under this License Agreement, grant all HESSLA Licensees of the Program or Software a royalty-free license of that technology. Any Licensees who release derivative works, as permitted by this License Agreement, are required to grant a royalty-free patent license of any patented technology.

Anyone Can Release Original Software Under The HESSLA: Although this License Agreement is drafted with Hacktivismo's objectives in mind, perhaps it will meet other authors' needs as well. If You are considering using this License Agreement for Your own software (meaning the code is not a work based on Hacktivismo's program, in which case all derivative works must be released under this License Agreement, but rather original software that You have developed Yourself), and if You have no special reason to prefer this License Agreement to some license that has a more robust and widely-understood track record, then in most instances we encourage You to use the GPL (or, even better, release concurrently under both the HESSLA and the GPL), because a considerable body of interpretive literature and community custom has grown up around that License Agreement. The Open Software License, see < >, is newer and has less of a track record. But You may also want to consider that licensing option (as well as the option of concurrent OSL/HESSLA licensing).

Any author of original software, not just Hacktivismo, can choose to release that software under this License Agreement. Hacktivismo is the author and owner of software released by Hacktivismo under this License Agreement. But original software released by other Authors would be owned and licensed by them.

Ultimately, we think it is important to emphasize to other Authors that Programs they have written can be released under both the HESSLA and some other license simultaneously (for example, a program that is presently GPLed by its Author can be released simultaneously under both the GPL and the HESSLA, at the Author's discretion). If You are an Author of original work, You need neither the permission of the Free Software Foundation nor of Hacktivismo to elect to release software simultaneously under both licenses. The advantage of such voluntary double-licensing is that it will enable developers to produce hybrid software packages (combining the functionality available through, say, Hacktivismo's Six-Four APIs, with some of the functionality of one or more popular GPL-licensed communications programs) and to release the hybrid packages under the HESSLA, without causing those developers to run afoul of the GPL, the HESSLA, or both. Such an arrangement maximizes the potential benefit to both the developer community and to end-users worldwide. Software released under a BSD-style license, as a general matter, can be used to produce a hybrid program, mixing HESSLA-licensed code with code that was previously subject to a BSD license. The HESSLA requires that, in such an instance, the hybrid code must be released under the HESSLA (to avoid weakening the end-user protections and affirmative rights afforded by the HESSLA). Hacktivismo is more than happy to consult with any software developer about the license terms that should apply to any Software that is derivative of any Program of which Hacktivismo is Author. If another Author has released code under the HESSLA, then that Author has primary decision-making authority about the manner in which his, her, or its software is licensed, but Hacktivismo is happy to field any questions that may be posed by such an Author or by any developer who is building on another Author's HESSLAed code.

License Revisions: This License Agreement is subject to revision, prior to the release of the Hacktivismo Enhanced-Source Software License Agreement, Version 1.0. We invite interested parties from the international academic and legal communities to offer comments and suggestions on ways to improve this License Agreement, prior to the time that The HESSLA version 1.0 is released.

The terms of the latest and most up-to-date version of this License Agreement, up to and including version 1.0, shall be deemed automatically to supersede the terms of any lower-numbered version of this License Agreement with respect to any Licensee who became a Licensee under the lower-numbered version of the HESSLA.

The terms of the latest and most up-to-date version of this License Agreement will always be published on the Hacktivismo Website.

The precise terms and conditions for copying, distribution, use and modification follow.


0. DEFINITIONS. The following are defined terms that, whenever used in this License Agreement, have the following meanings:

0.1 Author: "Author" shall mean the copyright holder of an Original Work (the "Program") released by the Author under this License Agreement.

0.2 Copy: "Copy" shall mean everything and anything that constitutes a copy according to copyright law, without limitation. A "copy" does not become anything other than a "copy" merely because, for example, a governmental or institutional employee duplicates the Program or a part of it for another employee of the same institution or Governmental Entity, or merely because it is copied from one computer to another, or from one medium to another, or multiple copies are made on the same medium, within the same institutional or Governmental Entity.

0.3 Derivative Work: A "Derivative Work" or "work based on the Program" shall mean either the Program itself or any work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification.") In the unlikely event that, and to the extent that, this contractual definition of "Derivative Work" is later determined by any tribunal or dispute-resolution body to be different in scope from the meaning of "derivative work" under the copyright law of any country, then the broadest and most encompassing possible definition, whether the contractual definition of "Derivative Work" or any broader and more encompassing statutory or legal definition, shall control. Acceptance of this contractually-defined scope of the term "Derivative Work" is a mandatory pre-condition for You to receive any of the benefits offered by this License Agreement.

0.3.1 Mere aggregation of another work not based on the Program with the Program (or with a Derivative Work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License Agreement.

0.4 License Agreement: When used in this License Agreement, the terms "this License" or "this License Agreement" shall mean The Hacktivismo Enhanced-Source Software License Agreement, v. 0.1, or any subsequent version made applicable under the terms of Section 15.

0.5 Licensee: The term "Licensee" shall mean You or any other Licensee, whether or not a Qualified Licensee.

0.6 Original Work: "Original Work" shall mean a Program or other work of authorship, or portion thereof, that is not a Derivative Work.

0.7 Program: The "Program," to which this License Agreement applies, is the Original Work (including, but not limited to, computer software) released by the Author under this License Agreement.

0.8 Qualified Licensee: A "Qualified Licensee" means a Licensee that remains in full compliance with all terms and conditions of this License Agreement. You are no longer a Qualified Licensee if, at any time, You violate any terms of this License Agreement. Neither the Program nor any Software based on the Program may be copied, distributed, performed, displayed, used or modified by You, even for Your own purposes, unless You are a Qualified Licensee. A Licensee other than a Qualified Licensee remains subject to all terms and conditions of this License Agreement, and to all remedies for each cumulative violation as set forth herein. Loss of the status of Qualified Licensee signifies that violation of any terms of the License Agreement subjects a Licensee to loss of most of the benefits that Qualified Licensees enjoy under this License Agreement, and to additional remedies for all violations occurring after the first violation.

0.9 Software: "Software" or "the Software" shall mean the Program, any Derivative Work based on the Program or a portion thereof, and/or any modified version of the Program or portion thereof, without limitation.

0.10 Source Code: The term "Source Code" shall mean the preferred form of a Program or Original Work for making modifications to it and all available documentation describing how to access and modify that Program or Original Work.

0.10.1 For an executable work, complete Source Code means all the Source Code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the Source Code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

0.10.2 "Object Code:" Because of certain peculiarities of current export-control rules, "object code" of the Program, or any modified version of the Program, or Derivative Work based on the Program, must not be exported except by way of distribution that is ancillary to the distribution of the Source Code. The "Source Code" shall be understood as the primary content transferred or exported by You, and the "object code" shall be considered as merely an ancillary component of any such export distribution.

0.11 Strong Cryptography: "Strong Cryptography" shall mean cryptography no less secure than (for example, and without limitation) a 2048-bit minimum key size for RSA encryption, 1024-bit minimum key size for Diffie-Hellman (El Gamal), or a 256-bit minimum key size for AES and similar symmetric ciphers.

0.12 Substandard Key-Selection Technique: The term "Substandard Key-Selection Technique" shall mean a method or technique to cause encryption keys to be more easily guessed or less secure, such as by (i) causing the selection of keys to be less than random, or (ii) employing a selection process that selects among only a subset of possible keys, instead of from among the largest set of possible keys that can securely be used consistent with contemporary knowledge about the cryptographic techniques employed by You. The following illustrations elaborate on the foregoing definition:

0.12.1 If the key-generation or key-selection technique for the encryption algorithm You employ involves the selection of one or more prime numbers, or involves one or more mathematical functions or concatenations performed on one or more prime numbers, then each prime number should be selected from a very large set of candidate prime numbers, but not necessarily from the set of all possible prime numbers (e.g., inclusion of the number 1 in the candidate set, for example, may in some instances reduce rather than enhance security), and absolutely not from any artificially small set of candidate primes that makes the guessing of a key easier than would be the case if a secure key-generation technique were employed. In all instances, the primes should be selected at random from among the candidate set. If there is a customary industry standard for maximizing the security associated with the key-generation or key-selection technique for the cryptosystem You select, then (with attention also to the requirements of Section 0.11), You should employ a key-generation or selection technique no less secure than the customary industry standard for secure use of the cryptosystem.

0.12.2 If the key-generation or key-selection technique for the encryption algorithm You employ involves the selection of a random integer, or the transformation of a random integer through one or more mathematical processes, then the selection of the integer shall be at random from the largest possible set of all possible integers consistent with the secure functioning of the encryption algorithm. It shall not be selected from an artificially small set of integers (e.g., if a 256-bit random integer serves as the key, then You could not set 200 of the 256 bits as "0," and randomly generate only the remaining 56 bits producing effectively a 56-bit keylength instead of using the full 256 bits).

0.12.3 In other words, Your key-generation technique must promote security to the maximum extent permitted by the cryptographic method(s) and keylength You elect to employ, rather than facilitating eavesdropping or surveillance in any way. The example of GSM telephones, in which 16 of 56 bits in each encryption key were set at "0," thereby reducing the security of the system by a factor of 65,536, is particularly salient. Such artificial techniques to reduce the security of a cryptosystem by selecting keys from only a less-secure or suboptimal subset of possible keys are prohibited and will violate this License Agreement if any such technique is employed in any Software.
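The contrast drawn in Sections 0.12.2 and 0.12.3 can be sketched in a few lines. This is a toy illustration under the license's own example (a 256-bit key with 200 bits artificially fixed at zero), not production key-generation code:

```python
import secrets

KEY_BITS = 256

def compliant_key() -> int:
    # Select uniformly at random from the full 2**256 keyspace,
    # as Section 0.12.2 requires.
    return secrets.randbits(KEY_BITS)

def substandard_key() -> int:
    # PROHIBITED pattern: fix 200 of the 256 bits at zero and randomize
    # only the remaining 56, shrinking the effective keyspace to 2**56 --
    # the illustration given in Section 0.12.2, analogous to the GSM
    # key-weakening example cited in Section 0.12.3.
    return secrets.randbits(56)

# The "key" from the prohibited generator always fits inside 56 bits,
# so an attacker need only search 2**56 candidates, not 2**256.
assert substandard_key() < 2**56
assert compliant_key() < 2**256
```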

0.13 You: Each Licensee (including, without limitation, Licensees that have violated the License Agreement and who are no longer Qualified Licensees, but who nevertheless remain subject to all requirements of this License Agreement and to all cumulative remedies for each successive violation), is referred to as "You."

0.13.1 Governmental Entity: "You" explicitly includes any and all "Governmental Entities," without limitation. "Governmental Entity" or "Governmental Entities," when used in this License Agreement, shall mean national governments, sub-national governments (for example, and without limitation, provincial, state, regional, local and municipal governments, autonomous governing units, special districts, and other such entities), governmental subunits (without limitation, governmental agencies, offices, departments, government corporations, and the like), supra-national governmental entities such as the European Union, and entities through which one or more governments perform governmental functions or exercise governmental power in coordination, cooperation or unison.

0.13.2 Governmental Person: "You" also explicitly includes "Governmental Persons." The terms "Governmental Person" or "Governmental Persons," when used in this License Agreement, shall mean the officials, officers, employees, representatives, contractors and agents of any Governmental Entity.

1. Application of License Agreement. This License Agreement applies to any Program or other Original Work of authorship that contains a notice placed by the Author saying it may be distributed under the terms of this License Agreement. The preferred manner of placing such a notice is to include the following statement immediately after the copyright notice for such an Original Work:

"Licensed under the Hacktivismo Enhanced-Source Software License Agreement, Version 0.1"

2. Means of Acceptance: Use, Copying, Distribution or Modification By Anyone Constitutes Acceptance. Subject to Section 14.1 (concerning the special case of certain Governmental Entities) any copying, modification, distribution, or use by You of the Program or any Software, shall constitute Your acceptance of all terms and conditions of this License Agreement.

2.1 As a Licensee, You may not authorize, permit, or enable any person to use the Program or any Software or Derivative Work based on it (including any use of Your copy or copies of the Program) unless such person has accepted this License Agreement and has become a Licensee subject to all its terms and conditions.

2.2 You may not make any copy for Your own use unless You have accepted this License Agreement and subjected yourself to all its terms and conditions.

2.3 You may not make a copy for the use of any other person, or transfer a copy to any other person, unless such person is a Licensee that has accepted this License Agreement and such person is subject to all terms and conditions of this License Agreement.

2.4 It is not the position of Hacktivismo that copyright law confers an exclusive right to use, as opposed to the exclusive right to copy the Software. However, for purposes of contract law, any use of the Software shall be considered to constitute acceptance of this License Agreement. Moreover, all copying is prohibited unless the recipient of a copy has accepted the License Agreement. Because each such recipient Licensee is contractually obligated not to permit anyone to access, use, or secure a copy of the Software, without first accepting the terms and conditions of this License Agreement, use by non-Licensees is effectively prohibited contractually because nobody can obtain a copy of, or access to a copy of, any Software without (1) accepting the License Agreement through use, and (2) triggering some Licensee's obligation to require acceptance as a precondition of copying or access.

3. "Qualified Licensee" Requirement: Neither the Program nor any Software or Derivative Work based on the Program may be copied, distributed, displayed, performed, used or modified by You, even for Your own purposes, unless You are a "Qualified Licensee." To remain a Qualified Licensee, You must remain in full compliance with all terms and conditions of this License Agreement.

4. License Agreement Is Exclusive Source of All Your Rights:

4.1 You may not copy, modify, or distribute the Program, or obtain any copy, except as expressly provided under this License Agreement. Any attempt otherwise to copy, modify, obtain a copy, sublicense or distribute the Program is void, and will automatically terminate Your rights under this License Agreement and subject You to all cumulative remedies for each successive violation that may be available to the Author. However, Qualified Licensees who have received copies from You (and thereby have received rights from the Author) under this License Agreement, and who would otherwise qualify as Qualified Licensees, will not have their rights under their License Agreements suspended or restricted on account of anything You do, so long as such parties remain in full compliance.

4.2 You are not required to accept this License Agreement and prior to the time You elect to become a Licensee and accept this License Agreement, You may always elect instead not to copy, use, modify, distribute, compile, or perform the Program or any Software released under this License Agreement. However, nothing else grants You permission to copy, to obtain or possess a copy, to compile a copy in object code or executable code from a copy in source code, to modify, or to distribute the Program or any Software based on the Program. These actions are prohibited by law if You do not accept this License Agreement. Additionally, as set forth in Section 2, any use, copying or modification of the Software constitutes acceptance of this License Agreement by You.

4.3 Each time You redistribute the Program (or any Software or Derivative Work based on the Program), the recipient automatically receives a License Agreement from the Author to copy, distribute, modify, perform or display the Software, subject to the terms and conditions of this License Agreement. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License Agreement. Enforcement is the responsibility of the Author.

5. Grant of Source Code License.

5.1 Source Code Always Available from Author: Author hereby promises and agrees, except to the extent prohibited by export-control law, to provide a machine-readable copy of the Source Code of the Program at the request of any Licensee. Author reserves the right to satisfy this obligation by placing a machine-readable copy of the Source Code of the most current version of the Program in an information repository reasonably calculated to permit inexpensive and convenient access by You for so long as Author continues to distribute the Program, and by publishing the address of that information repository in a notice immediately following the copyright notice that applies to the Program. Every copy of the Program distributed by Hacktivismo (but not necessarily every other Author) consists of the Source Code accompanied, in some instances, by an ancillary distribution of compiled Object Code, but the continued availability of the Source Code from the Author addresses the possibility that You might have (for any reason) not received from someone else a complete, current, copy of the Source Code (lack of which would, for example, prevent You from exporting copies to others without violating this license, see Section 8).

5.2 Grant of License. If and only if, and for so long as You remain a Qualified Licensee, in accordance with Section 3 of this License Agreement, Author hereby grants You a world-wide, royalty-free, non-exclusive, non-sublicensable copyright license to do the following:

5.2.1 to reproduce the Source Code of the Program in copies;

5.2.2 to prepare Derivative Works based upon the Program and to edit or modify the Source Code in the process of preparing such Derivative Works;

5.2.3 to distribute copies of the Source Code of the Original Work and/or of Derivative Works to others, with the proviso that copies of Original Work or Derivative Works that You distribute shall be licensed under this License Agreement, and that You shall fully inform all recipients of the terms of this License Agreement.

6. Grant of Copyright License. If and only if, and for so long as You remain a Qualified Licensee, in accordance with Section 3 of this License Agreement, Author hereby grants You a world-wide, royalty-free, non-exclusive, non-sublicensable license to do the following:

6.1 to reproduce the Program in copies;

6.2 to prepare Derivative Works based upon the Program, or upon Software that itself is based on the Program;

6.3 to distribute (either by distributing the Source Code, or by distributing compiled Object Code, but any export of Object Code must be ancillary to a distribution of Source Code) copies of the Program and Derivative Works to others, with the proviso that copies of the Program or Derivative Works that You distribute shall be licensed under this License Agreement, and that You shall fully inform all recipients of the terms of this License Agreement;

6.4 to perform the Program or a Derivative Work publicly;

6.5 to display the Program or a Derivative Work publicly; and

6.6 to charge a fee for the physical act of transferring a copy of the Program (You may also, at Your option, offer warranty protection in exchange for a fee).

7. Grant of Patent License. If and only if, and for so long as You remain a Qualified Licensee, in accordance with Section 3 of this License Agreement, Author hereby grants You a world-wide, royalty-free, non-exclusive, non-sublicensable license, under patent claims owned or controlled by the Author that are embodied in the Program as furnished by the Author ("Licensed Claims"), to make, use, sell and offer for sale the Program. Subject to the proviso that You grant all Licensees a world-wide, non-exclusive, royalty-free license under any patent claims embodied in any Derivative Work furnished by You, Author hereby grants You a world-wide, royalty-free, non-exclusive, non-sublicensable license under the Licensed Claims to make, use, sell and offer for sale Derivative Works.

8. Exclusions From License Agreement Grants. Nothing in this License Agreement shall be deemed to grant any rights to trademarks, copyrights, patents, trade secrets or any other intellectual property of Licensor except as expressly stated herein. No patent license is granted to make, use, sell or offer to sell embodiments of any patent claims other than the Licensed Claims defined in Section 7. No right is granted to the trademarks of Author even if such marks are included in the Program. Nothing in this License Agreement shall be interpreted to prohibit Author from licensing under additional or different terms from this License Agreement any Original Work, Program, or Derivative Work that Author otherwise would have a right to License.

8.1 Implied Endorsements Prohibited. Neither the name of the Author (in the case of Programs and Original Works released by Hacktivismo, the name "Hacktivismo"), nor the names of contributors who helped produce the Program may be used to endorse or promote modifications of the Program, any Derivative Work, or any Software other than the Program, without specific prior written permission of the Author. Neither the name of Hacktivismo nor the names of any contributors who helped write the Program may be used to endorse or promote any Program or Software released under this License Agreement by any person other than Hacktivismo.

9. Modifications and Derivative Works. Only Qualified Licensees may modify the Software or prepare or distribute Derivative Works. If You are a Qualified Licensee, Your authorization to modify the Software or prepare or distribute Derivative Works (including permission to prepare and/or distribute Derivative Works, as provided in Sections 5.2.2, 5.2.3, 6.2, 6.3, and 6.6) is subject to each and all of the following mandatory terms and conditions (9.1 through 9.6, inclusive):

9.1 You must cause the modified files to carry prominent notices stating that You changed the files and the date of any change;

9.2 If the modified Software normally reads commands interactively when run, You must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that You provide a warranty) and that users may redistribute the program under this License Agreement, and telling the user how to view a copy of this License Agreement. (Exception: if the Program itself is interactive but does not normally print such an announcement, Your Derivative Work based on the Program is not required to print an announcement.);

9.3 Any Program, Software, or modification thereof copied or distributed by You, that incorporates any portion of the Original Work, must not contain any code or functionality that subverts the security of the Software or the end-user's expectations of privacy, anonymity, confidentiality, authenticity, and trust, including (without limitation) any code or functionality that introduces any "backdoor," escrow mechanism, "spy-ware," or surveillance techniques or methods into any such Program, Software, or modification thereof;

9.4 Any Program, Software, or modification thereof copied or distributed by You, that employs any cryptographic or other security, privacy, confidentiality, authenticity, and/or trust methods or techniques, including without limitation any Derivative Work that includes any changes or modifications to any cryptographic techniques in the Program, shall employ Strong Cryptography.

9.5 Any Program, Software, or modification thereof copied or distributed by You, if it contains any key-generation or selection technique, must not employ any Substandard Key-Selection Technique.

9.6 No Program or Software copied or distributed by You may transmit or communicate any symmetric key, any "private key" if an asymmetric cryptosystem is employed, or any part of such key, nor may it otherwise make any such key or part of such key known, to any person other than the end-user who generated the key, without the active consent and participation of that individual end-user. If a private or symmetric key is stored or recorded in any manner, it must not be stored or recorded in plaintext, and it must be protected from reading (at a minimum) by use of a password. Use of steganography or other techniques to disguise the fact that a private or symmetric key is even stored is strongly encouraged, but not absolutely required.
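Section 9.6's storage rule — a private or symmetric key must never be recorded in plaintext and must at minimum be password-protected — can be sketched with the standard library alone. This is a toy illustration, not an endorsed implementation: the XOR wrap below is unauthenticated, and real software should wrap keys with an AEAD cipher from a vetted cryptographic library. The scrypt parameters are common defaults, not values taken from the license:

```python
import os, hashlib, secrets

def wrap_key(private_key: bytes, password: str) -> tuple[bytes, bytes]:
    """Password-protect a key for storage; returns (salt, wrapped_blob)."""
    salt = os.urandom(16)  # fresh random salt per wrap
    pad = hashlib.scrypt(password.encode(), salt=salt,
                         n=2**14, r=8, p=1, dklen=len(private_key))
    # XOR the key with a password-derived pad so the stored blob is
    # never the plaintext key (Section 9.6 minimum; unauthenticated toy).
    return salt, bytes(a ^ b for a, b in zip(private_key, pad))

def unwrap_key(salt: bytes, blob: bytes, password: str) -> bytes:
    """Recover the key, with the end-user's active participation (password)."""
    pad = hashlib.scrypt(password.encode(), salt=salt,
                         n=2**14, r=8, p=1, dklen=len(blob))
    return bytes(a ^ b for a, b in zip(blob, pad))

key = secrets.token_bytes(32)
salt, blob = wrap_key(key, "correct horse battery staple")
assert blob != key                                          # not plaintext
assert unwrap_key(salt, blob, "correct horse battery staple") == key
```

The design point mirrors the clause: the key is recoverable only through the end-user's password, and what reaches disk bears no plaintext trace of the key itself.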

10. Use Restrictions: Human Rights Violations Prohibited.

10.1 Neither the Program, nor any Software or Derivative Work based on the Program may be used by You for any of the following purposes (10.1.1 through 10.1.5, inclusive):

10.1.1 to violate or infringe any human rights or to deprive any person of human rights, including, without limitation, rights of privacy, security, collective action, expression, political freedom, due process of law, and individual conscience;

10.1.2 to gather evidence against any person to be used to deprive any person of human rights;

10.1.3 any other use as a part of any project or activity to deprive any person of human rights, including not only the above-listed rights, but also rights of physical security, liberty from physical restraint or incarceration, freedom from slavery, freedom from torture, freedom to take part in government, either directly or through lawfully elected representatives, and/or freedom from self-incrimination;

10.1.4 any surveillance, espionage, or monitoring of individuals, whether done by a Governmental Entity, a Governmental Person, or by any non-governmental person or entity;

10.1.5 censorship or "filtering" of any published information or expression.

10.2 Additionally, the Program, any modification of it, or any Software or Derivative Work based on the Program may not be used by any Governmental Entity or other institution that has any policy or practice (whether official or unofficial) of violating the human rights of any persons.

10.3 You may not authorize, permit, or enable any person (including, without limitation, any Governmental Entity or Governmental Person) to use the Program or any Software or Derivative Work based on it (including any use of Your copy or copies of the Program) unless such person has accepted this License Agreement and has become a Licensee subject to all its terms and conditions, including (without limitation) the use restrictions embodied in Section 10.1 and 10.2, inclusive.

11. All Export Distributions Must Consist of or Be Ancillary to Distribution of Source Code. Because of certain peculiarities of current export-control law, any distribution by You of the Program or any Software may be in the form of Source Code only, or in the form of Source Code accompanied by compiled Object Code, but You may not export any Software in the form of compiled Object Code only. Such an export distribution of compiled executable code must in all cases be ancillary to a distribution of the complete corresponding machine-readable source code, which must be distributed on a medium, or by a method, customarily used for software interchange.

12. EXPORT LAWS: THIS LICENSE AGREEMENT ADDS NO RESTRICTIONS TO THE EXPORT LAWS OF YOUR JURISDICTION. It is Your responsibility to comply with any export regulations applicable in Your jurisdiction. From the United States, Canada, or many countries in Europe, export or transmission of this Software to certain embargoed destinations (including, but not necessarily limited to, Cuba, Iran, Iraq, Libya, North Korea, Sudan, and Syria), may be prohibited. If Hacktivismo is identified as the Author of the Program (and it is not the property of some other Author), then export to any national of Cuba, Iran, Iraq, Libya, North Korea, Sudan or Syria, or into the territory of any of these countries, by any Licensee who has received this Software directly from Hacktivismo or from the Cult of the Dead Cow, or any of their members, is contractually prohibited and will constitute a violation of this License Agreement. You are advised to consult the current laws of any and all countries whose laws may apply to You, before exporting this Software to any destination. Special care should be taken to avoid export to any embargoed destination. An Author other than Hacktivismo may substitute that Author's legal name for "Hacktivismo" in this Paragraph, in relation to any Program released by that Author under this Paragraph.

13. Contrary Judgments, Settlements and Court Orders. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on You (whether by court order, agreement or otherwise) that contradict the conditions of this License Agreement, they do not excuse You from the conditions of this License Agreement. If You cannot distribute so as to satisfy simultaneously Your obligations under this License Agreement and any other pertinent obligations, then as a consequence You may not distribute the Software at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through You, then the only way You could satisfy both it and this License Agreement would be to refrain entirely from distribution of the Program.

It is not the purpose of this Section 13 to induce You to infringe any patents or other property right claims or to contest validity of any such claims; this Section has the sole purpose of protecting the integrity of the software distribution system reflected in this License Agreement, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through related distribution systems, in reliance on consistent application of such distribution systems; it is up to the Author/donor to decide if he or she is willing to distribute software through any other system and a Licensee cannot impose that choice.

14. Governmental Entities: Any Governmental Entity ("Governmental Entity" is defined broadly as set forth in Section 0.13.1) or Governmental Person (as "Governmental Person" is defined broadly in Section 0.13.2), that uses, modifies, changes, copies, displays, performs, or distributes the Program, or any Software or Derivative Work based on the Program, may do so if and only if all of the following terms and conditions (14.1 through 14.10, inclusive) are agreed to and fully met:

14.1 If it is the position of any Governmental Entity (or, in the case of any "Governmental Person," if it is the position of that Governmental Person's Governmental Entity) that any doctrine or doctrines of law (including, without limitation, any doctrine(s) of immunity or any formalities of contract formation) may render this License Agreement unenforceable or less than fully enforceable against such Governmental Entity, or any Governmental Person of such Governmental Entity, then prior to any use, modification, change, display, performance, copy or distribution of the Program, or of any Software or Derivative Work based on the Program, or any part thereof, by the Governmental Entity, or by any Governmental Person of that Governmental Entity, the Governmental Entity shall be required to inform the Author in writing of each such doctrine that is believed to render this License Agreement or any part of it less than fully enforceable against such Governmental Entity or any Governmental Person of such entity, and to explain in reasonable detail what additional steps, if taken, would render the License Agreement fully enforceable against such entity or person. Failure to provide the required written notice to the Author in advance of any such use, modification, change, display, performance, copy or distribution, shall constitute an irrevocable and conclusive waiver of any and all reliance on any doctrine, by the Governmental Entity, that is not included or that is omitted from the required written notice (failure to provide any written notice means all reliance on any doctrine is irrevocably waived). 
Any Governmental Entity that provides written notice under this subsection is prohibited, as are all of the Governmental Persons of such Governmental Entity, from making any use, change, display, performance, copy, modification or distribution of the Software or any part thereof, until such time as a License Agreement is in place, agreed upon by the Author and by the Governmental Entity, that such entity concedes is fully-enforceable. Any use, modification, change, display, performance, copy, or distribution following written notice under this Paragraph, but without the implementation of an agreement as provided herein, shall constitute an irrevocable and conclusive waiver by the Governmental Entity (and any and all Governmental Persons of such Governmental Entity) of any and all reliance on any legal doctrine either referenced in such written notice or omitted from it.

14.2 Any Governmental Entity that uses, copies, changes, modifies, or distributes, the Software or any part or portion thereof, or any Governmental Person who does so (whether that person's Governmental Entity contends the person's action was, or was not, authorized or official), permanently and irrevocably waives any defense based on sovereign immunity, official immunity, the Act of State Doctrine, or any other form of immunity, that might otherwise apply as a defense to, or a bar against, any legal action based on the terms of this License Agreement.

14.2.1 With respect to any enforcement action brought by the Author in a United States court against a foreign Governmental Entity, the waiver by any Governmental Entity as provided in Subparagraphs 14.1 and 14.2 is hereby expressly acknowledged by each such Governmental Entity to constitute a "case . . . in which the foreign state has waived its immunity," within the scope of 28 U.S.C. 1605(a)(1) of the Foreign Sovereign Immunities Act of 1976 (as amended). Each such Governmental Entity also specifically agrees and concedes that the "commercial activity" exceptions to the FSIA, 28 U.S.C. 1605(a)(2), (3) are also applicable. With respect to an action brought against the United States or any United States Governmental Entity, in the courts of any country, the U.S. Governmental Entity shall be understood to have voluntarily agreed to a corresponding waiver of immunity from actions in the courts of any other sovereign.

14.2.2 With respect to any enforcement action brought by an authorized end-user (as a third-party beneficiary, under the terms of Subparagraphs 14.3 and 14.10) in a United States court against a foreign Governmental Entity, the waiver by any Governmental Entity as provided in Subparagraphs 14.1 and 14.2 is hereby expressly acknowledged by each such Governmental Entity to constitute a "case . . . in which the foreign state has waived its immunity," within the scope of 28 U.S.C. 1605(a)(1) of the Foreign Sovereign Immunities Act of 1976 (as amended). Each such Governmental Entity also specifically agrees and concedes that the "commercial activity" exceptions to the FSIA, 28 U.S.C. 1605(a)(2), (3) are also applicable. With respect to an action brought against the United States or any United States Governmental Entity, in the courts of any country, the U.S. Governmental Entity shall be understood to have voluntarily agreed to a corresponding waiver of immunity from actions in the courts of any other sovereign.

14.2.3 With respect to any action or effort by the Author in the United States to execute a judgment against a foreign Governmental Entity, by attaching or executing process against the property of such Governmental Entity, the waiver by any Governmental Entity as provided in Subparagraphs 14.1 and 14.2 is hereby expressly acknowledged by each such Governmental Entity to constitute a case in which "the foreign state has waived its immunity from attachment in aid of execution or from execution," in accordance with 28 U.S.C. 1610(a)(1) of the Foreign Sovereign Immunities Act of 1976 (as amended). Each such Governmental Entity also specifically agrees and concedes that the "commercial activity" exceptions to the FSIA, 28 U.S.C. 1610(a)(2), (d) are also applicable. With respect to an action brought against the United States or any United States Governmental Entity, in the courts of any country, the U.S. Governmental Entity shall be understood to have voluntarily agreed to a corresponding waiver of immunity from actions in the courts of any other sovereign.

14.2.4 With respect to any action or effort brought by an authorized end-user (as a third-party beneficiary, in accordance with Subparagraphs 14.3 and 14.10) in the United States to execute a judgment against a foreign Governmental Entity, by attaching or executing process against the property of such Governmental Entity, the waiver by any Governmental Entity as provided in Subparagraphs 14.1 and 14.2 is hereby expressly acknowledged by each such Governmental Entity to constitute a case in which "the foreign state has waived its immunity from attachment in aid of execution or from execution," in accordance with 28 U.S.C. 1610(a)(1) of the Foreign Sovereign Immunities Act of 1976 (as amended). Each such Governmental Entity also specifically agrees and concedes that the "commercial activity" exceptions to the FSIA, 28 U.S.C. 1610(a)(2), (d) are also applicable. With respect to an action brought against the United States or any United States Governmental Entity, in the courts of any country, the U.S. Governmental Entity shall be understood to have voluntarily agreed to a corresponding waiver of immunity from actions in the courts of any other sovereign.

14.3 Any Governmental Entity that uses, copies, changes, modifies, displays, performs, or distributes the Software or any part thereof, or any Governmental Person who does so (whether that person's Governmental Entity contends the person's action was, or was not, authorized or official), and thereby violates any terms and conditions of Section 9 (restrictions on modification), or Paragraph 10 (use restrictions), agrees that the person or entity is subject not only to an action by the Author, for the enforcement of this License Agreement and for money damages and injunctive relief (as well as attorneys' fees, additional and statutory damages, and other remedies as provided by law), but such Governmental Entity and/or Person also shall be subject to a suit for money damages and injunctive relief by any person whose human rights have been violated or infringed, in violation of this License Agreement, or through the use of any Software in violation of this License Agreement. Any person who brings an action under this section against any Governmental Person or Entity must notify the Author promptly of the action and provide the Author the opportunity to intervene to assert the Author's own rights. Damages in such a third-party action shall be measured by the severity of the human rights violation and the copyright infringement or License Agreement violation, combined, and not merely by reference to the copyright infringement. All end-users, to the extent that they are entitled to bring suit against such Governmental Entity by way of this License Agreement, are intended third-party beneficiaries of this License Agreement. 
Punitive damages may be awarded in such a third-party action against a Governmental Entity or Governmental Person, and each and every such Governmental Entity or Person conclusively waives all restrictions on the amount of punitive damages, and all defenses to the award of punitive damages to the extent such limitations or defenses depend upon or are a function of such person or entity's status as a Governmental Person or Governmental Entity.

14.4 Any State of the United States, or any subunit or Governmental Entity thereof, that uses, copies, changes, modifies, displays, performs, or distributes the Software or any part thereof, or any of whose Governmental Persons does so (whether that person's Governmental Entity contends the person's action was, or was not, authorized or official), unconditionally and irrevocably waives for purposes of any legal action (i) to enforce this License Agreement, (ii) to remedy infringement of the Author's copyright, or (iii) to invoke any of the third-party beneficiary rights set forth in Section 14.3 -- any immunity under the Eleventh Amendment of the United States Constitution or any other immunity doctrine (such as sovereign immunity or qualified, or other, official immunity) that may apply to state governments, subunits, or to their Governmental Persons.

14.5 Any Governmental Entity (including, without limitation, any State of the United States), that uses, copies, changes, modifies, performs, displays, or distributes the Software or any part thereof, or any of whose Governmental Persons does so (whether that person's Governmental Entity contends the person's action was, or was not, authorized or official), unconditionally and irrevocably waives for purposes of any legal action (i) to enforce this License Agreement, (ii) to remedy infringement of the Author's copyright, or (iii) to invoke any of the third-party beneficiary rights set forth in Section 14.3 any doctrine (such as, but not limited to, the holding in the United States Supreme Court decision of Ex Parte Young) that might purport to limit remedies solely to prospective injunctive relief. Also explicitly and irrevocably waived is any underlying immunity doctrine that would require the recognition of such a limited exception for purposes of remedies. The remedies against such governmental entities and persons shall explicitly include money damages, additional damages, statutory damages, consequential damages, exemplary damages, punitive damages, costs and fees that might otherwise be barred or limited in amount on account of governmental status.

14.6 Any Governmental Entity that uses, copies, changes, modifies, displays, performs, or distributes the Software or any part thereof, or any of whose Governmental Persons does so (whether that person's Governmental Entity contends the person's action was, or was not, authorized or official), unconditionally and irrevocably waives for purposes of any legal action (i) to enforce this License Agreement, (ii) to remedy infringement of the Author's copyright, or (iii) to invoke any of the third-party beneficiary rights set forth in Section 14.3 any and all reliance on the Act of State doctrine, sovereign immunity, international comity, or any other doctrine of immunity whether such doctrine is recognized in that government's own courts, or in the courts of any other government or nation.

14.6.1 Consistent with Subparagraphs 14.2.1 through 14.2.4, this waiver shall explicitly be understood to constitute a waiver not only against suit, but also against execution against property, for purposes of the Foreign Sovereign Immunities Act of 1976 (as amended). All United States Governmental Entities shall be understood to have agreed to a corresponding waiver of immunity against (i) suit in the courts of other sovereigns, and (ii) execution against property of the United States located within the territory of other countries.

14.7 Governmental Persons, (i) who violate this License Agreement (whether that person's Governmental Entity contends the person's action was, or was not, authorized or official), or (ii) who are personally involved in any activity, policy or practice of a governmental entity that violates this License Agreement (whether that person's Governmental Entity contends the person's action was, or was not, authorized or official), or (iii) that use, copy, change, modify, perform, display or distribute, the Software or any part thereof, when their Governmental Entity is not permitted to do so, or is not a Qualified Licensee, or has violated the terms of this License Agreement, each and all individually waive and shall not be permitted to assert any defense of official immunity, "good faith" immunity, qualified immunity, absolute immunity, or other immunity based on his or her governmental status.

14.8 No Governmental Entity, nor any Governmental Person thereof may, by legislative, regulatory, or other action, exempt such Governmental Entity, subunit, or person, from the terms of this License Agreement, if the Governmental Entity or any such person has voluntarily used, modified, copied, displayed, performed, or distributed the Software or any part thereof.

14.9 Enforcement In Courts of Other Sovereigns Permitted. By using, modifying, changing, displaying, performing or distributing any Software covered by this License Agreement, any Governmental Entity hereby voluntarily and irrevocably consents, for purposes of (i) any action to enforce the terms of this License Agreement, and (ii) any action to enforce the Author's copyright (whether such suit be for injunctive relief, damages, or both) to the jurisdiction of any court or tribunal in any other country (or a court of competent jurisdiction of a subunit, province, or state of such country) in which the terms of this License Agreement are believed by the Author to be enforceable. Each such Governmental Entity hereby waives all objections to personal jurisdiction, all objections based on international comity, all objections based on the doctrine of forum non conveniens, and all objections based on sovereign or governmental status or immunity that might otherwise be asserted in the courts of some other sovereign.

14.9.1 The Waiver by any Governmental Entity of a country other than the United States shall be understood explicitly to constitute a waiver for purposes of the Foreign Sovereign Immunities Act of 1976 (see Subparagraphs 14.2.1 to 14.2.4, inclusive, supra), and all United States Governmental Entities shall be understood to have agreed to a waiver correspondingly broad in scope with respect to actions brought in the courts of other sovereigns.

14.9.2 Forum Selection Non-U.S. Governmental Entities. Governmental Entities that are not United States Governmental Entities shall be subject to suit, and agree to be subject to suit, in the United States District Court for the District of Columbia. The Author or an authorized end-user may bring an action in another court in another country, but the United States District Court for the District of Columbia, shall always be available as an agreed-upon forum for such an action. At the optional election of any Author (or, in the case of a third-party claim, any end-user asserting rights under Subparagraphs 14.3 and 14.10), such a suit against a non-U.S. Governmental Entity or Person may be brought in the United States District Court for the Southern District of New York, or the United States District Court for the Northern District of California, as a direct substitute for the United States District Court for the District of Columbia, for all purposes of this Subparagraph.

14.9.3 Forum Selection U.S. Governmental Entities. All United States Governmental Entities shall be subject to suit, and agree to be subject to suit, in the following (non-exclusive) list of fora: Ottawa, Canada, London, England, and Paris, France. The Author or an authorized end-user may bring action in another court that can exercise jurisdiction. But the courts in these three locations shall always be available (at the option of the Author or an authorized end-user) as a forum for resolving any dispute with the United States or a governmental subunit thereof. Except as provided in Subparagraph 14.10, any and all United States Governmental Persons shall be subject to suit wherever applicable rules of personal jurisdiction and venue shall permit such suit to be filed, but no such United States Governmental Person may assert any defense based on forum non conveniens or international comity, to the selection of any particular lawful venue.

14.10 Enforcement Of Claims For Human Rights Violations. By using, copying, modifying, changing, performing, displaying or distributing the Software covered by this License Agreement, any Governmental Entity, or Governmental Person hereby voluntarily and irrevocably consents -- for purposes of any third-party action to remedy human rights violations and other violations of this License Agreement (as reflected in Section 14.3) -- to the jurisdiction of any court or tribunal in any other country (or a court of competent jurisdiction of a subunit, province, or state of such country) in which the third-party beneficiary reasonably believes the relevant terms of this License Agreement are enforceable. The Governmental Entity or Person hereby waives all objections to personal jurisdiction, all objections based on international comity, all objections based on the doctrine of forum non conveniens, and all objections based on sovereign or governmental status or immunity that might otherwise be asserted in the courts of some other sovereign.

14.10.1 Waiver of Immunity and Forum Selection. The presumptively valid and preferred fora identified in Subparagraphs 14.9.2 and 14.9.3 shall also apply for purposes of Subparagraph 14.10. All Governmental Entities are subject to the same Waiver of Immunity as set forth in Subparagraphs 14.2.1 to 14.2.4, inclusive.

15. Subsequent Versions of HESSLA. Hacktivismo may publish revised and/or new versions of the Hacktivismo Enhanced-Source Software License Agreement from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. Any Program released by Hacktivismo under a version of this License Agreement prior to Version 1.0, shall be considered released under Version 1.0 of the Hacktivismo Enhanced-Source Software License Agreement, once Version 1.0 is formally released. Prior to Version 1.0, any Software released by Hacktivismo or a Licensee of Hacktivismo under a lower-numbered version of the HESSLA shall be considered automatically to be subject to a higher-number version of the HESSLA, whenever a later-numbered version has been released.

Concerning the work of any other Author, if the Program specifies a version number of this License Agreement which applies to it and "any later version," You have the option of following the terms and conditions either of that version or of any later version published by Hacktivismo. If the Program does not specify a version number of this License Agreement, You may choose any version after 1.0, once version 1.0 is published by Hacktivismo; prior to publication of version 1.0, You may choose any version of the Hacktivismo Software License Agreement then published by Hacktivismo. If a Program released by another Author specifies only a version number, then that version number only shall apply. If "the latest version" is specified, then the latest version of the HESSLA published on the Hacktivismo Website shall always apply at all times.




18. Saving Clause. If any portion of this License Agreement is held invalid or unenforceable under any particular circumstance, the balance of the License Agreement is intended to apply and the License Agreement as a whole is intended to apply in other circumstances.

by cryptostorm_team
Sat Oct 18, 2014 3:21 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Portuguese cluster - teething pains [RESOLVED]
Replies: 49
Views: 97003

candidate conf 1.3 for Portugal cluster

As we add additional node capacity to the new Portuguese cluster, we've added in some HAF-based balancer layers in order to allow for smooth scale-up of instances and nodes without any hassle on the part of members connecting to the cluster. To do that, we're circulating this proposed 1.3 conf for the Portuguese cluster - Linux OS.

(we are deprecating the old "raw" nomenclature, moving forward, when referring to *nix instances, and the decision has been made to simply call these "linux" instances even though technically they also support unix and unix-ish sessions such as OSX - as instances fork further into OS specificity in the future, we expect to see those other *nix flavours split into their own dedicated instances; for now, they're lumped in with linux, generically)

Before we promote this 1.3 conf to full production status, we're hoping to have some member feedback confirming it's stable - internal testing is all well and good, but until it gets put into use by members, we don't call it a production conf. Thanks! :thumbup:

Here's the proposed production candidate:


~ cryptostorm_team

EDIT: removed out-of-date config files
- cryptostorm_support
by cryptostorm_team
Mon Oct 13, 2014 3:14 am
Forum: general chat, suggestions, industry news
Topic: DesuStrike resigns from all forum duties
Replies: 9
Views: 17819

Re: DesuStrike resigns from all forum duties

I'm sure these words have been offered privately by numerous members of the cryptostorm team, but we did want to reiterate - publicly - our deepest gratitude for everything DS has contributed to this project since its earliest days. Loyalty begets loyalty, and he has earned our genuine loyalty... and then some.

Stay safe and strong out there, friend.


~ cryptostorm_team
by cryptostorm_team
Sat Oct 11, 2014 2:31 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's Hostname Assignment Framework (HAF) | rev. 1.1
Replies: 9
Views: 30260

cryptostorm's Hostname Assignment Framework (HAF) | rev. 1.1

{direct link: haf}

cryptostorm's Hostname Assignment Framework | Technical Architecture ~ revision 1.1

The purpose of this whitepaper is to provide a technical overview of cryptostorm's hostname assignment framework ("HAF"), which is used to mediate member sessions with the network. Our approach to this process is substantively divergent from that commonly found in old-style "VPN networks," and as such it requires some degree of expository effort to ensure the community is able to critique & review our architectural decisions with sufficient data & insight to provide most effective leverage in doing so.

Note that study of this document, and/or understanding of the principles outlined herein, is not necessary for the regular use of our network by members. Rather, these details are provided as supplementary information for those interested. Network members with precise, specific use-case scenarios requiring particular node/cluster mappings may benefit from the information in this essay, but for most it will be superfluous from a functionality standpoint. Read on if you're curious and want to learn more, in other words: to do so is not required.

There are several, parallel goals assumed to be part of any viable HAF in the context of our security model and member use-case scenarios. These are:
  1. Resilience against denial-of-service attacks seeking to block members from connecting to the network;

  2. Resilience against naive TLD-based DNS lookup attacks, again seeking to prevent members from initiating network sessions;

  3. Flexibility in allowing members to choose their preferred exitnode clusters on a per-session basis;

  4. Provisioning of alternative cluster-selection methodologies which enable selection stochasticity & thereby provide armouring against certain attack vectors targeting specific node hardware.
In addition, there are a number of second- & third-order goals which are of primarily administrative or systems-maintenance interest, and are not discussed in detail here. For example, efforts are made to ensure the most comprehensive backwards compatibility for prior versions of our configuration settings, insofar as doing so does not compromise member security in the process. We consider this to be primarily a question of "internal housekeeping" on the part of our administrative team, and thus not of high community interest overall.

The Four Tiers of the cryptostorm HAF

Our HAF model is composed of four nested tiers of DNS hostname entries across a striped range of TLDs: instance, node, cluster, balancer. It is worth noting that these mappings are not formally congruent with FQDNs and thus do not rDNS resolve uniquely (or, at least, not uniformly) - that is not their goal (we do have an internal methodology for managing rDNS/FQDN A Record mappings across TLDs, but it is of merely administrative relevance and thus unlikely to be of interest to the community). Indeed, the core approach we take to our four-tier model hinges on the labile nature of redundant hostname:IP mappings, within the global DNS system as instantiated in the wild.

(a parallel, but topologically orthogonal, mapping exists to connect physical machines to specific FQDN, rDNS-capable hostnames; in this whitepaper, as needed, we denote such mappings with the nomenclature of "machine" - that is to say that a {machine} is a specific physical server in a specific location; this, per above, is not directly relevant to the HAF itself except, per below, insofar as there is a small overlap between these two mapping domains in the naming of instances)

As is discussed at the end of this whitepaper, we make use of multiple, redundant, registrar-independent TLDs within the HAF to ensure that the overall architecture remains resilient against attack vectors targeting specific canonical domain names within this consideration space. For example, if an attacker simply DNS-blocks all lookups against one of these canonical domains, the aforementioned TLD redundancy will seamlessly 'fall back' to additional TLD-divergent canonical domains. This process is transparent to network members. However, because there are multiple TLDs in this rotation - and that TLD count grows over time as we add new redundancy moving forward - we designate below the form of TLD with a {canonical TLD} pseudocode placeholder. In this sense, the designation {canonical TLD} is a superset of all currently-extant production TLDs in our rotation.
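The client-side fallback behaviour described above can be sketched roughly as follows. This is a minimal illustration only, not cryptostorm's actual implementation: the domain names are hypothetical placeholders (the real production domain rotation is not enumerated here), and the resolver is injected so the fallback logic can be exercised without live DNS.

```python
# Hypothetical stand-ins for the {canonical TLD} rotation - placeholders only.
CANONICAL_TLDS = ["example-a.net", "example-b.org", "example-c.com"]

def resolve_with_fallback(label, resolver, tlds=CANONICAL_TLDS):
    """Try the same hostname label across each canonical domain in turn,
    returning the first (hostname, ip) pair that resolves.

    `resolver` stands in for a real DNS lookup (e.g. socket.gethostbyname);
    a blocked or unresolvable domain raises OSError and we fall through."""
    for tld in tlds:
        hostname = f"{label}.{tld}"
        try:
            return hostname, resolver(hostname)
        except OSError:
            continue  # this canonical domain is blocked; try the next one
    raise OSError(f"no canonical domain resolved for label {label!r}")
```

With a real resolver passed in (such as `socket.gethostbyname`), a DNS block against the first domain in the rotation would simply shift sessions onto the next, which is the transparency property described above.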

Each of the nested tiers below is a subset of the tier which follows, in formal (axiomatic) set-theoretic terms. Each set is a closed, bounded set. The sum total of subsets comprising a concomitant superset encompasses all items within that set; there are no 'orphan subsets' (or, at least, there shouldn't be if we've done our job properly!).

One: Instances

An instance is a specific server-side daemon (of the underlying openvpn application) running on a specific hardware-based server in a specific location, which in turn maps into requisite mongoDB shards to enable distributed authentication of network member sessions via SHA512'd token values. Instances are client-OS specific, as of December 2013; for example, an instance may be assigned a hostname mapping of the form:

Code: Select all

windows-{node}-{iteration number}.{canonical TLD}
Currently, instances are maintained for windows, raw (i.e. *nix), android, mac, TCP "fallback," & iphone OS flavours. Several more are in testing internally or with small groups of network members, including router ISO-based optimisations. We expect to deploy these more fully as the year progresses, and according to member demand for specific flavour support.

Each individual {node} in our overall network infrastructure hosts multiple instances. These instances allow for customisation of configuration options for specific OS/ecosystem flavours, as well as increased security via "micro-clusters" of given instances on a given {node} for a given OS flavour. By keeping instances small, with respect to number of simultaneous connected network sessions, we retain the ability to more closely monitor aberrant instance behavior, spin-down instances for maintenance (after having load-balanced off all active member sessions; see below), and in general manage network capability more effectively in the face of ever-growing network traffic and member session counts.

Although we hesitate to point this out, each instance does in fact have a uniquely-assigned public IP address. We hesitate, because we do not want to suggest that members connect "directly to IPs" and thus bypass the HAF entirely. The downsides of doing so are: decreased member security, decreased session resilience, decreased administrative flexibility, and vastly increased fragility of session integrity over time. In short, IPs change - not quickly, but there is attrition & transience within the physical/public IP pool of our infrastructure. This is both inevitable, and acceptable - our infrastructure is not "locked in" to any host, colo, facility, infrastructure, or organization. Hard-coding IPs breaks this model entirely, and inevitably results in member frustration - or worse. It is strongly discouraged, wherever possible, in favour of adherence to the HAF as described herein.
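The fragility of hard-coded IPs can be shown with a toy sketch. Everything here is hypothetical - the hostname, the addresses (RFC 5737 documentation space), and the dict standing in for DNS - but it illustrates why resolving at session-initiation time survives IP attrition while a pinned address does not.

```python
def open_session(pooled_hostname, resolver):
    """Resolve the pooled-instance hostname fresh at session-initiation
    time; `resolver` stands in for a live DNS lookup."""
    return resolver(pooled_hostname)

# Simulated DNS: the node keeps its hostname while its public IP changes.
dns_table = {"linux-cantus.example.net": "192.0.2.21"}  # placeholder values
resolver = dns_table.__getitem__

pinned_ip = resolver("linux-cantus.example.net")  # a hard-coded IP, captured once

dns_table["linux-cantus.example.net"] = "192.0.2.99"  # node re-homed to new hardware

fresh_ip = open_session("linux-cantus.example.net", resolver)  # tracks the change
# pinned_ip still points at the retired address; fresh_ip does not
```

The hostname-based session follows the infrastructure as it moves; the pinned address silently goes stale, which is the member frustration described above.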

A demi-step upwards in the hierarchical HAF model brings us to the concept of "pooled instances." The form of pooled instance is as follows:

Code: Select all

windows-{node}.{canonical TLD}
The pooled instance differs from standard instance in the absence of an "iteration number." This is the first tier of connectivity we recommend network members consider when connecting to the network. The pooled instance represents the set of all parallel instances for a given OS flavour, on a given node. Thus, as a specific example for this essay, we might consider the following fully-defined pooled instance:

Code: Select all
In this case, a connection directed at this pooled instance (via the openvpn "--remote" directive, within configuration files) will resolve to one of the pool of raw/Linux instantiations currently live and in production on the node named "cantus." The underlying instances themselves, per explanation previously, will have an "iteration number" included in their nomenclature; for example:

Code: Select all
This represents the "first" instance supporting raw/Linux network sessions on the node named "cantus" - there are likely more than the one instance on that node, which would of course follow as "-2," "-3," and so on. We do not recommend direct connections to underlying instances, as these are moderately ephemeral and may be spun up or spun down depending on short-term administrative requirements.

In contrast, pooled instances (without the numerical identifier) will always resolve to a pool of the then-active instances on a given node. As such, it is acceptable to hard-code connections to specific pooled instances, as there will always be an underlying specific instance - and likely more than one - to handle inbound connection requests. Of course, members could simply default to the first cardinal instance - "-1" - and assume there will always be a first of such... but no benefit is gained in doing this, as compared to simply using the pooled mappings. In the corner-state where only one instance exists on a given node, the two mappings devolve to identical functionality; however, when more than one instance exists on a given machine, hard-coding to the first cardinal risks having that specific instance be "down" occasionally for maintenance or other administrative needs.

In summary, instances represent the fundamental building block of the cryptostorm network. On a given physical machine - a "node" - multiple instances will exist, each supporting specific OS flavours. Additionally, aggregations of same-OS instances on a given node are defined as "pooled instances," and are the lowest level of recommended connectivity for network members to consider using in their own configuration deployments.
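The two naming forms above can be captured as a pair of small helpers. This is an illustrative sketch of the documented templates only; the flavour, node name, and "example.net" domain in the usage lines are placeholders, per the {canonical TLD} convention used throughout this whitepaper.

```python
def instance_hostname(flavour, node, iteration, canonical_tld):
    """Specific instance: {OS flavour}-{node}-{iteration number}.{canonical TLD}."""
    return f"{flavour}-{node}-{iteration}.{canonical_tld}"

def pooled_instance_hostname(flavour, node, canonical_tld):
    """Pooled instance: {OS flavour}-{node}.{canonical TLD} - no iteration number."""
    return f"{flavour}-{node}.{canonical_tld}"

# Placeholder domain standing in for a real canonical TLD:
print(instance_hostname("windows", "cantus", 1, "example.net"))   # windows-cantus-1.example.net
print(pooled_instance_hostname("windows", "cantus", "example.net"))  # windows-cantus.example.net
```

The only structural difference between the two tiers is the trailing iteration number, which is exactly why the pooled form is the recommended connection target: it stays valid as numbered instances are spun up and down beneath it.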

Two: Nodes

Nodes serve as the next layer of our HAF, above instances. Nodes are the logical equivalent to the machine layer, in the parallel rDNS model described above. Nodes are uniquely named and do not extinguish; however, they do "float" across physical hardware over time. For example, a given {node} may be named "betty" - betty.{canonical TLD} - but the underlying physical hardware (and thus, of course, public IP assignments) of "betty" will likely evolve, change, and otherwise vary over time. "Betty" is a logical - not physical - construct.

(we do not name physical machines, apart from node assignments; physical machines are fungible, and fundamentally ephemeral, within our model)

Node designations are something of a "shadow layer" within the HAF; members do not "connect directly" to nodes, and they exist in a logical sense as an organizational tool within the HAF to ensure it retains internal logical consistency. A node, in that sense, is merely a collection of instances - once all instances on a given physical machine have been fully enumerated, the resulting aggregation is, definitionally, a "node." Node mappings simply take the form of:

Code: Select all

{node}.{canonical TLD}
Note, in contrast, that the pooled instance mappings (one layer down) are of the form:

Code: Select all

{OS flavour}.{node}.{canonical TLD}
As can be seen, the former (node-based) mappings are a superset of the latter (instance-based) mappings. Put another way, each instance-based mapping is, in fact, unique to a specific IP address at a specific point in time (with pooled instances being aggregations of same, on a given node). However, node-based mappings exhibit one:many characteristics - each node in a specific geography maps to multiple IP addresses (which are, in fact, fully concomitant with machine-based mappings).

In summary, members do not connect directly to nodes. Nodes exist as an intermediate layer, between instances, and clusters. Nodes are composed of pooled instances, which themselves are aggregations of specific OS instances on a specific node.

Three: Clusters

It is at the organizational level of clusters that the HAF becomes directly relevant to those components visible within the cryptostorm configuration files. Clusters are the core unit of aggregation to support the most commonly-deployed network configurations, within our model.

A cluster is an aggregation of nodes in a given geographical location. When a cluster is first opened in a given geographical location, it is often the case that it is composed of only one physical machine; this allows us to test out member usage levels, ensure our colocation providers deliver reliable and competent service, and scale physical hardware levels smoothly as needed. Nevertheless, we always refer to new clusters as "clusters," rather than as their underlying nodes. Careful readers of earlier sections of this essay will now surely understand why: nodes, for us, are more of an internal administrative designation and do not have direct relevance to member session connection parameters themselves, in the public sense.

The form of nomenclature for cluster mappings, available to network members for connections, is as follows:

Code: Select all

{OS flavour}-{geographic locale of cluster}.{canonical TLD}
So, for example, we might have:

Code: Select all
When queried for DNS resolution during a network session initiation, this cluster hostname will return connection parameters for one of the android instances on one of the hosts within the Paris (France) cluster. These lookups are dynamically assigned - we're tempted to state they are assigned "stochastically," but that is not provably the case and thus we decline to do so - via rudimentary round-robin lookups as part of the DNS protocol itself, at whatever layer of the member's network route/configuration undertakes the specific DNS query at that specific point in time (with all the need to consider the many layers of DNS caching involved, of course). As such, depending on local caching parameters, each connection query to this identical cluster hostname is likely to return a different IP address (which is to say: a different underlying instance), although all will be within that geographic IP-space (likely the same /24, although in rare cases this is not true - certainly within the same /16).
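The round-robin behaviour described above can be simulated with a static A-record table. This is a sketch only: the cluster hostname and the IPs (RFC 5737 documentation space) are placeholders, and real-world DNS layers the caching described above on top of this rotation.

```python
from itertools import cycle

# Simulated A-record table for one cluster hostname - all values are placeholders.
A_RECORDS = {
    "android-paris.example.net": ["192.0.2.10", "192.0.2.11", "192.0.2.12"],
}

# One rotor per hostname, so successive lookups walk the record set in order.
_rotors = {name: cycle(ips) for name, ips in A_RECORDS.items()}

def round_robin_lookup(hostname):
    """Return the next IP in rotation for the hostname, mimicking the
    rudimentary round-robin behaviour of multiple A records in global DNS."""
    return next(_rotors[hostname])
```

Each uncached query lands on the next instance in the cluster's pool, which is how cluster hostnames spread sessions across instances without any member-side configuration.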

Cluster hostnames are robust; that is to say, they will always resolve to a live, active instance for that specific OS within that specific geographic location. Note that cluster hostnames do not specify the underlying node - this is, as we hope is clear from earlier sections of this essay, both unnecessary and would introduce needlessly brittle characteristics with no concomitant increase in security or functionality for network members. Recall that, for members who want to connect to a specific node, this can be accomplished via pooled instance mappings - and does not require inclusion of the concept of cluster at all.

Naturally, there is no such thing as cluster-mapped hostname without a specific OS flavour being defined; since December 2013's "forking" of our server-side instantiations, all instances are OS specific.

In summary, clusters are the core of our HAF and are the layer of the model most directly relevant to members seeking network sessions that terminate in a specific geographic location, for a specific OS flavour. They are robust, scale smoothly without any need for members to adjust their configuration parameters, and allow for failover/load-balancing invisibly to members by way of standard administrative tools and practices on the part of the cryptostorm network admin team.

Four: Balancers

The final tier of the HAF - and the one most directly relevant to most network members, as it mediates the majority of network sessions - is the balancers. The cryptostorm balancers dynamically assign network sessions across geographic clusters, and are the optimal security selection for network members seeking maximal session obfuscation against the broadest class of threat vectors. Balancers deploy various forms of algorithmic logic to determine the cluster, host, and instance to which a newly-created network session will connect.

The content of the balancing algorithm itself is not hard-coded into the concept of balancers; rather, various forms of round-robin, load balancing, or formally stochastic session initiation may be implemented at the balancer layer, and new forms can be (and in fact are) added over time. Our team has been working on several additions to the balancer algorithm suite, and we look forward to rolling those out in upcoming months.

Currently, there are two balancer algorithm options: locked, and dynamic. Both are rudimentary round-robin techniques for mapping a given network session initiation request to a given geographic cluster. They vary in the method employed to provide round-robin functionality, and therefore in the "velocity" of change of mapped node selection for iterative network session re-initiation efforts.

First, we will consider "locked" balancer sessions. Locked sessions utilize the inbuilt round-robin A Record lookup functionality of the global DNS system itself. When a network session is initiated, a DNS query is generated against a table of A Records, which contains multiple possible public IP mappings. Once that lookup completes, the mapping of balancer hostname to a specific IP will remain durable as long as that lookup remains cached within the network member's local computing environment. Our default TTL settings, within the HAF, are set universally to 1337 seconds, or just a bit over 20 minutes. However, there are so many layers of caching in most real-world DNS lookup scenarios that the functional durability of these DNS mappings on a given client machine is likely to be (in our real-world testing) closer to an hour or two.

So, for example, suppose a network member initiates a connection to one of the HAF balancer addresses.

That hostname will return, after a DNS query, an IP address of the form xxx.xxx.xxx.xxx, and it is to this IP that the specific network session is directed. If that session drops, is cancelled, or is terminated in any other way and a reconnection occurs, the same IP will be used for the "remote" directive of the session... up until the point at which the client computing environment flushes its DNS cache and initiates a new DNS query. Assuming that query "jumps the cache" and actually reaches an authoritative A Record lookup table, there is a high probability (greater than 90%, and rising as nodes and clusters are added to the network) that a different IP will be returned - let us say this is yyy.yyy.yyy.yyy - and this IP will be, again with fairly high probability (about 80%), in a different geographic cluster. Once again, that specific IP - yyy.yyy.yyy.yyy - will be "locked" into the session profile (even if the session drops, is cancelled, the client-side machine is rebooted, etc.) until the DNS cache expires locally and a new DNS lookup is undertaken. This is why we refer to this as a "locked" balancer - although, of course, beyond the span of a couple of hours it is not at all "locked" and is quasi-stochastically variant over longer time periods.
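
The "locked" caching behaviour described above can be sketched roughly as follows; the TTL constant matches the 1337-second default mentioned earlier, while the resolver class, record list, and injectable clock are purely illustrative assumptions, not our actual resolver code:

```python
import time

TTL_SECONDS = 1337  # the default TTL mentioned above

class LockedBalancerResolver:
    """Rough model of a 'locked' balancer session: the first lookup is
    cached locally, and reconnections reuse the same IP until the TTL
    expires, at which point the next round-robin record is returned.
    The record list here is a hypothetical placeholder."""

    def __init__(self, records, clock=time.monotonic):
        self.records = records
        self.clock = clock
        self.index = 0
        self.cached = None  # (ip, expiry_time)

    def resolve(self):
        now = self.clock()
        if self.cached is not None and now < self.cached[1]:
            return self.cached[0]  # still "locked" to the prior IP
        ip = self.records[self.index % len(self.records)]
        self.index += 1
        self.cached = (ip, now + TTL_SECONDS)
        return ip

# Simulated clock so the TTL expiry is observable without waiting.
t = [0.0]
resolver = LockedBalancerResolver(["203.0.113.1", "203.0.113.2"],
                                  clock=lambda: t[0])
print(resolver.resolve())  # first lookup picks an IP and caches it
t[0] = 100.0
print(resolver.resolve())  # within TTL: same IP, session stays "locked"
t[0] = 2000.0
print(resolver.resolve())  # cache expired: next round-robin IP
```

In practice the intermediate caching layers (stub resolvers, LAN routers, ISP recursors) stack on top of this, which is why the effective "lock" period tends to run longer than the nominal TTL.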

In contrast, "dynamic" balancer sessions are mediated by the round-robin logic built into the OpenVPN framework's "--remote-random" directive in current compiles of the underlying source. This directive causes the network session to choose "randomly," from a sequential list of alternative remote parameters, for each and every newly-initiated network session (we say "randomly," in scare quotes, because the selection is not formally random but is better thought of as quasi-stochastic). Thus, if a network session drops or is cancelled, the newly-instantiated session will go through a new "random" lookup and will, with reasonably high probability (greater than 70% currently, and rising), connect to an entirely different cluster. This is why we call this "dynamic" sessioning: each session, when instantiated by a member, is likely to result in a different cluster being selected "randomly" from all in-production clusters. This will, on average, result in a higher velocity of change of session-cluster mappings than the locked balancer.
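
Schematically, a dynamic-balancer client profile looks something like the fragment below; the hostnames and TLDs are invented placeholders, but remote-random is the genuine OpenVPN directive being described (it shuffles among the listed remote entries at each session initiation):

```
# Hypothetical excerpt from a dynamic-balancer client config.
# Hostnames/TLDs are placeholders; remote-random is the actual
# OpenVPN directive that picks among the remote entries below.
remote-random
remote balancer-a.example-one.example 443
remote balancer-b.example-two.example 443
remote balancer-c.example-three.example 443
```

Note the contrast with the "locked" behaviour: here the shuffle happens inside the client at every connection attempt, rather than being governed by DNS cache expiry.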

Future balancer logics will take these baseline quasi-stochastic methods and extend them into more formally "random" (pseudo-random) frameworks, as well as into "best performance" and "closest pingtime" approaches to session instantiation. We have already mapped out these three additional balancer algorithms internally, although they are not yet ready for in-production testing. We are, however, quite optimistic about the open-ended nature of the balancer logic itself: in the future, we expect that member- and community-created balancer algorithms will be added to the network, with logics outside our currently-assumed consideration set of possible options. Creativity, in that sense, is the only constraint on the HAF balancer framework itself.

In summary, the balancer layer of the HAF is most relevant to the majority of network members and sessions, and embodies an extensible, open-ended ability to add new logics & new algorithms in the future. Currently, we support locked & dynamic balancer methodologies, across all production clusters.

Summary: The Evolution of HAF

We do not hesitate to acknowledge that the cryptostorm Hostname Assignment Framework is a work in progress. It has evolved - and in some senses exhibited emergent properties in its real-world application - as network members & network activity overall have continued to increase in a steady progression. If we were to claim that, prior to the network's full deployment in 2013, we had planned all this out in advance, it would simply not be true.

That said, the direction in which we have guided the HAF is towards a flexible, extensible model that minimises the need for members to fuss with the HAF, understand its workings, or be inconvenienced in any way as the HAF itself continues to develop and mature. In a purely metaphorical sense, we have sought to channel some of the ideas of object-oriented systems design as guiding principles for the HAF: decoupling subsumed layers & the details thereof from higher-order "objects"/layers, in a nested hierarchy or - equally validly in ontological terms - a holarchy.

Our future systems architecture roadmap envisions the HAF as a core element of a mesh-based network topology that both fully embraces stochastic routing methodologies, and leverages per-stream/per-protocol independence in within-network routing path selection. In other words, rather than having a defined "exitnode"/instance for all packets sent & received by each network session, members will have the flexibility to route selected packet streams - say, a video available only with a US geolocated IP address - via one route, whereas other packet streams for the same session can be directed to other exitnodes/clusters/instances as preferred. Or, for maximal security, streams and packets can be stochastically routed through the overall topological mesh of cryptostorm's entire network, egressing in many geographic locations and via many public IPs. There is no need for one network session to send & receive all packets through one exitnode, in other words; the HAF is our foundation for enabling that future functionality for the network overall.

For now, the HAF serves the purpose of providing robust connectivity, high security, fine-grained administrative capabilities, and minimal hassle to network members, in order to achieve maximal security & capability whilst on-network. We should make note that, with the rapid growth of our network through the winter months, our tech admin team is still catching up on some of the new hostname mappings within the framework defined above. Too, we're in the process of migrating our entire DNS resolution/lookup capability, network-wide, to a more robust & scaleable backend infrastructure. Together, these two steps are not something we've chosen to rush - they must, essentially, be done properly & tested fully prior to cut-over, and our priority is to ensure that process is fully transparent to network members. Planning and implementing towards this goal takes time, and a bit of patience. So, if network members notice that some hostname assignments which appear to be implied by the HAF don't currently resolve... we know. We're working on it. :-)

Finally, a note on the selection of TLDs for use in the HAF. As is well-known within security research circles, some TLDs - we're talking about you, .com - are controlled entirely by specific governmental bodies that have very selfish reasons to exert undue influence over those registries.

We consider any domains located in such TLDs to be at best ephemeral, and at worst subject to arbitrary takeover by hostile governmental entities with little or no due process or notice. Of course, we generally avoid such TLDs as they serve little purpose in our security model. However, rather than searching for the "perfect TLD" that is entirely free of efforts to censor and subvert free speech (which is pointless, in any case, as specific nations can poison TLD lookups within their own localised DNS sub-frameworks, at will), we choose instead to stripe our needs across a broad range of TLDs, continuously adding new ones and, over time, pruning those which prove less than useful.

These are the TLDs that are seen by those who take a look inside our "--remote" directives: they are there to ensure systems continuity & network resilience in the face of denial of service or outright censorship-based attacks on cryptostorm. They are not actually part of any balancer capability, nor do they serve a purpose beyond simple fail-over protection against loss of any one domain within a given TLD. These TLDs are, in a sense, disposable - which is not to say they are not security-relevant, not at all! An attacker subverting a specific TLD's DNS lookups can redirect new cryptostorm network sessions towards hostile server resources, and we have systems in place to notify us of any such hostile actions at the TLD level (further, of course, our PKI and cryptographic framework protects directly against efforts to "false flag" exitnodes via public key-based certificate validation of server-side resources).
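
To make that last point concrete, client-side certificate validation of this sort is typically expressed in an OpenVPN profile with directives along the following lines; the filename and x509 name are invented placeholders, though remote-cert-tls and verify-x509-name are genuine OpenVPN options:

```
# Hypothetical hardening fragment: even if a subverted TLD steers a
# session toward a hostile IP, the session fails unless the server
# presents a certificate chaining to the expected CA.
ca ca.crt                             # pinned CA bundle (placeholder name)
remote-cert-tls server                # require a server-typed certificate
verify-x509-name "node.example" name  # placeholder expected server name
```

The effect is that DNS can be treated as disposable routing metadata: the cryptographic identity of the exitnode is verified independently of however the hostname happened to resolve.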

It is our hope that this short essay introducing the cryptostorm HAF has been useful for network & wider community members alike. More than that, we are eager to receive feedback and suggestions as to how it can be improved, expanded, or otherwise modified to best meet the needs of our members worldwide. It is not perfect - far from it - and we expect the HAF to continue to evolve as time goes by.

Finally, we will continue to bring our versioned configuration files into line with the full logical implications of the HAF framework, a process we hope to complete in upcoming revisions of the configs. That will not result in breakage of backwards-compatibility for older versions, but it will mean that the full suite of capabilities implied by the current (1.1 rev) version of the HAF is only fully available to those using fully-current config files (or widget installs) for their network session management. That will be indicated, once again, during 'pushes' of new config files, and we trust that network members will help their peers within the community to continue to upgrade their connection profiles so as to gain maximal functionality & security hardening from our ongoing improvements in the HAF.

Our thanks, as a team, for your support & assistance. We look forward to many more enhancements to the HAF - and the network itself - as time continues to pass us by.


~ cryptostorm_team
by cryptostorm_team
Tue Oct 07, 2014 4:22 am
Forum: independent cryptostorm token resellers, & tokens 101
Topic: Back to School specials on bundles 'o tokens :-)
Replies: 0
Views: 21517

Back to School specials on bundles 'o tokens :-)

{direct link:}

It's not much of a secret that we kind of suck at marketing, here at cryptostorm.

C'mon - if you know us, you've probably thought it yourself: "man, those folks at cryptostorm pretty much fucking suck at marketing. Lol." Or something similar :-)

It's true. We're reasonably good at running a secure networking service - that's our job - but when it comes to promoting what we do... yeah. Not so good, are we?

Someday we'll hopefully get better. But in the meantime it's become something of a running joke that as good as we are at providing cryptostorm service, we're just not so good at telling folks about what we do: spreading the word, etc.

That's where our token resellers come to the forefront. We built the network - and the business model supporting it - with a focus on independent resellers of network access tokens. That model helps ensure improved security for network members, because decoupled purchasing via independent resellers means that in many cases we know exactly nothing about who our members are. Nothing - since they didn't buy a token from us (directly), we've had no interaction with them whatsoever. They make use of the network, but apart from that we're like ships passing in the night.

As it should be - not a bug, but a feature!

The recursive irony is that we're both bad at general marketing stuff, and at doing marketing outreach to resellers. We have a great baseline of resellers who have been with us for ages, and we love them to death - but nobody on our team does anything specifically to roll out the red carpet for new resellers - or expanded relations with our current (and wonderful) resellers. We brought someone on this spring to do that kind of thing, but he got pulled into other (really interesting!) projects, and the boring "marketing to resellers" stuff fell by the wayside. See a pattern here? :eh:

Anyway, blah blah - so what?

In our own utterly simple way, we did some team brainstorming about the whole thing - after being repeatedly distracted by other, cool(er) tech stuff - and here's what we came up with: Back To School token bundles. Yay! :clap:

We warned you about us sucking at marketing, right? There you go.

Anyway, here are the bundles we've put together. They're offered at some tasty discounts to our typical wholesale/bulk token prices. You're welcome.

Circle of Trust
  • 10 one-month tokens
  • 20 three-month tokens
  • 15 six-month tokens
  • 3 one-year tokens
  • bundle price: $660 (CAD)

Spreading Umbrella
  • 20 one-month tokens
  • 50 three-month tokens
  • 30 six-month tokens
  • 5 one-year tokens
  • bundle price: $1312 (CAD)

Big Things
  • 50 one-month tokens
  • 200 three-month tokens
  • 90 six-month tokens
  • 20 one-year tokens
  • bundle price: $4004 (CAD)

Here's the fine print: there isn't any. That's the flipside of us sucking at marketing - we're pretty easy to work with, unlike real marketers who know how to do this stuff well :-)

But a few things to consider:

  1. We're pretty much not going to fiddle with quantities in the bundles - they are what they are. We can always whip up whatever batch of tokens you want, but these special-priced batches are come-as-you-are...

  2. Please pay for these bundles in BTC (or some other *coins), so we don't have to eat merchant processing fees on top of these already-discounted prices. Please. Thanks!

  3. We've just set up a super-dedicated email forwarder account - - that pretty much spams everyone on the cryptostorm team, so you can email us and your inquiry won't get lost in the seas of our collective inboxes. We promise.

  4. There was a fourth thing... but we forgot. If we remember, we'll edit this post so it's more professional and stuff.

You can also bitmessage us, which works (fairly) well: old BM addy BM-NAueHWwiZQ26TgX9iXPqtiMjMBB5dc5t ~ new (better, sorta, 'cause it's a dedicated account) BM addy BM-NC4acFXJqxXnjNPQHeryYVfZT7G9pfh6

Token bundles are always sold prepaid, because otherwise we'd maybe have to show up at your place and be all like...
Yeah, maybe not so much. But still, we really do only do prepaid tokens. Except if you're tight on cash and really want to get started... talk to us. We'll see what we can do. We know how it goes, really we do. Just don't tell Graze because he's really weird about that stuff. :shh:

That's about it. Let us know if you've got questions, or if we forgot to include super-important information in this post. Or whatever. We're leaving this thread open and we'll make sure someone checks in on it regularly (not Graze, ffs!). Also there's that cool new email address.

Help us help you... or whatever corny magic words real marketers use to convince folks to do important marketing kinds of things. Fight the good fight, and so on.


~ cryptostorm_team
by cryptostorm_team
Fri Sep 26, 2014 5:37 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: #SHELLSHOCK (another heartbleed, sorta, but not really :P )
Replies: 5
Views: 14852

Re: #SHELLSHOCK (another heartbleed, sorta, but not really :

There seems to be a bit of hyperbole about this "worse than Heartbleed" exploit - to be clear, it's a really bad one - but we have some fairly experienced people on staff who point out that it's not super-likely that people will widely do really stupid stuff with CGI/Bash, etc. It's just not a "found everywhere" thing. We wouldn't call it rare - by spending a few minutes with Google we found truly vulnerable servers - but it's just not "every linux box," as the press is implying.

We do wonder about all those older home routers, NASes, etc. There will be a new, ugly front opening on people's home networks as they turn into litecoin miners, etc., because frankly updating those is a pain in the ass. People who read this may be tech enough to patch, but the general public? Your mom? Nope. Those'll be forever stuck in time, pre-Shellshock.
by cryptostorm_team
Thu Sep 25, 2014 7:12 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: #SHELLSHOCK (another heartbleed, sorta, but not really :P )
Replies: 5
Views: 14852

#SHELLSHOCK (another heartbleed, sorta, but not really :P )

First off, we're patched on all servers.

Second, here's the deets via ... hell_vuln/

SHELL SHOCK: Bash bug blows holes in Unix, Linux, OS X systems

CGI scripts to DHCP clients affected – patch Heartbleed-grade injection vuln NOW
By John Leyden, 24 Sep 2014

A newly discovered vulnerability in the Bash command interpreter poses a critical security risk to Unix and Linux systems – and, thanks to their ubiquity, the internet in general.

It lands countless websites, servers, PCs, OS X Macs, various home routers, and more, in danger of hijacking by hackers.

The vulnerability is present in Bash through version 4.3, and was discovered by Stephane Chazelas. It puts Apache web servers, in particular, at risk of compromise via CGI scripts that use or invoke Bash in any way – including any child processes spawned by the scripts. OpenSSH and some DHCP clients are also affected on machines that use Bash.

Ubuntu and other Debian-derived systems using Dash may not be at risk – Dash isn't vulnerable, but Bash may still be present. Essentially, check the shell interpreter you're using, and any Bash packages you have installed, and patch if necessary.

"Holy cow. There are a lot of .mil and .gov sites that are going to get owned," security expert Kenn White said on Wednesday in reaction to the disclosed flaw.

The bug lies in Bash's handling of environment variables: when assigning a function to a variable, trailing code in the function definition will be executed, leaving the door wide open for code-injection attacks. The vulnerability is exploitable remotely if code can be smuggled into environment variables sent over the network – and it's surprisingly easy to do so.

According to the NIST vulnerability database, which rates the flaw 10 out of 10 in terms of severity:
GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution.

Authentication: Not required to exploit

Impact Type: Allows unauthorized disclosure of information; Allows unauthorized modification; Allows disruption of service
An advisory from Akamai explains the problem in more depth, as does this OSS-Sec mailing list post.

Proof-of-concept code for exploiting Bash-using CGI scripts to run code with the same privileges as the web server is already floating around the web. A simple Wget fetch can trigger the bug on a vulnerable system.
wget -U "() { test;};/usr/bin/touch /tmp/VULNERABLE" myserver/cgi-bin/test
You can check if you're vulnerable by running the following line in your default shell, which on many systems will be Bash. If you see the words "busted", then you're at risk. If not, then either your Bash is fixed or your shell is using another interpreter.
env X="() { :;} ; echo busted" /bin/sh -c "echo stuff"
Jim Reavis, chief exec of the Cloud Security Alliance, claims the hole is comparable in seriousness to the infamous password-leaking Heartbleed bug in the OpenSSL library that was uncovered earlier this year.

"A large number of programs on Linux and other UNIX systems use Bash to setup environmental variables which are then used while executing other programs," Reavis explained in a blog post.

"Examples of this include web servers running CGI scripts and even email clients and web clients that pass files to external programs for display such as a video file or a sound file.

"In short this vulnerability allows attackers to cause arbitrary command execution, remotely, for example by setting headers in a web request, or by setting weird MIME types."

Robert Graham of Errata Security, who suggested the name Shell Shock for the Bash flaw, also said the programming cock-up is as severe as Heartbleed. But he noted: "There's little need to rush and fix this bug. Your primary servers are probably not vulnerable to this bug.

"However, everything else probably is. Scan your network for things like Telnet, FTP, and old versions of Apache (masscan is extremely useful for this). Anything that responds is probably an old device needing a bash patch. And, since most of them can't be patched, you are likely screwed.

"A lot of wireless routers shell out to ping and traceroute – these are all likely vulnerable."

The vulnerability (CVE-2014-6271) affects Apple's OS X – and is useful for privilege escalation – as well as major flavors of Linux. Fortunately, patches are already available, and distros are ahead of the game in responding to the flap. BSD distros that do not use Bash are safe, obviously. Apple users will need to get their hands dirty until Cupertino issues a fix.

Red Hat security engineer Huzaifa Sidhpurwala has a rundown of the at-risk software, here. ®
by cryptostorm_team
Fri Sep 12, 2014 8:00 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: CLOSED: aleph tokens ~ unlimited duration batch
Replies: 33
Views: 57483

Re: RE-OPENED: aleph tokens ~ unlimited duration batch

Note that the email credentials have been sent out.

If you don't see a note about how to access your credentials, please contact - it's possible we've missed a couple people (our record keeping sucks, by definition as we're a privacy company and don't really have any records :) ) especially if we used OTR/BitMessage/etc to hook you up with your token originally.

Thanks again for your patience and support!
by cryptostorm_team
Thu Sep 11, 2014 7:23 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: CLOSED: aleph tokens ~ unlimited duration batch
Replies: 33
Views: 57483

Re: RE-OPENED: aleph tokens ~ unlimited duration batch

Just a quick update on the emails.

First off, note that we're not really an email company. These email accounts (and the lifetime tokens) are created as one-offs, so there's a bit of time involved. But the real issue is that we were arguing within the team about the best way to get the credentials to the users securely, and we stalled for a bit because we just didn't want to do it insecurely. We think we now have a fairly simple but effective method that works for everyone (some of you use BitMessage and OTR, so for you it's less of an issue). We still have to do a bit of setup for it, however. We'll try to get that done over the next twenty-four hours or so, depending on staff availability.

Thanks again for your patience!
by cryptostorm_team
Wed Sep 03, 2014 7:00 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: CLOSED: aleph tokens ~ unlimited duration batch
Replies: 33
Views: 57483

Re: RE-OPENED: aleph tokens ~ unlimited duration batch

vpnDarknet wrote:Fantastic guys, great to hear more server locations are on the cards, but how am I meant to sell tokens if you're signing members up for life? ;)
Just thought that this comment from one of our beloved resellers is quite fair and deserved an answer.

We thought about this, and we will sell lifetime tokens to re-sellers, as well, of course - but the numbers we release are very limited, leaving lots of room for the sale of non-lifetime durations.

Also, a more general note...

A number of team members were off on vacations and such, and in the next few weeks we'll be back in full swing with a few growth and hardening plans. We have at least one new exitnode in the works for the near future, and likely more. Plus we're bringing on some business-savvy types to help us buff up our image. In our humble opinion we're awesome, but as our customers and resellers have often pointed out, there has been very little work from our team to spread that word (by responding to some of the many inquiries from the press, etc.) - we've basically not had time for it, because we were busy running a network.

Most of (arguably, almost all of) our growth has been due to "word of mouth" from good people like you, our resellers and our customers, who just happened upon us and liked what we were doing. To get to the next level of smacking down the big guys, we'll need to play a tiny corner of "their game." This we will do - but we won't do so without keeping our cheeky edge. ;)

Back on the topic of aleph tokens - for those who made it thru the gauntlet of our recent battle with PayPal, apologies for the delay in sending out the email account details for this round. That work was backlogged thanks to the team dealing with the drama with both PayPal and our bitcoin systems, but the details should be sent out in the next day or two.

Thanks again, and if you have any issues, just vent them. :)

PS: On the BitPay bug we had, it seems we just constructed a cart item in such a way that it caused a failure. We asked them for help, and they helped immediately and interactively. Thus we love BitPay, obviously - they do one very simple product, and do it really well. They are always awesome when we ask for support, etc. We wish we'd had stock in that company a year ago, as they apparently just got a bunch of VC capital and we'd now be eating less ramen. :-/ :-) We're not saying this for any personal gain; we just realized how ugly our dealings have been with that Big Payment Company compared to our dealings with BitPay, and we personally hope that the likes of BitPay give "PP" a good reason to try a bit harder. :)
by cryptostorm_team
Tue Sep 02, 2014 7:39 pm
Forum: general chat, suggestions, industry news
Topic: Website Outage. A good thing? Really??!
Replies: 0
Views: 8880

Website Outage. A good thing? Really??!

So - we had an outage for about 16 hours on Monday, Sept. 1, 2014.

This made us happy.

You're probably assuming that's sarcasm, but it's not. Really.

Well, okay, we were not really happy, especially at first, but if you were one of our customers, you probably noticed nothing during this period. That made us happy. We have worked very, very hard to build a fully decentralized system, where a "takedown" of our website, email, etc. would have as close to zero effect on our clients as possible. The side effect of the outage (which was accidentally caused by maintenance from our service provider) was that we were able to test this decentralized structure, and it passed wonderfully.

While we were down, we had to move communications to non-email channels such as our twitter account. This is fine for those who knew about it, but not so good if you didn't - we'll need to make that more front-and-center somehow. We know some of you see Twitter as a mini-Facebook; however, it is pretty good at allowing most sane accounts free speech, so we'll probably stick with it.

We should massage this into pretty marketing material about our architecture, but this is about it in a nutshell: Our distributed network works.

by cryptostorm_team
Sun Aug 17, 2014 3:02 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: CLOSED: aleph tokens ~ unlimited duration batch
Replies: 33
Views: 57483


Direct link:

Sorry for the lack of information directly on this thread - our original batch of "aleph" tokens (lifetime tokens - for unlimited access to our VPN servers into the future) were snapped up in the spring and we never got to send out too many details publicly before the offer had to be closed.

However, we're now planning on restarting some growth in September, both in our network reach (our Asia server, among other locations, for example) and in staffing. Over the next couple of months we will be moving very quickly from a "well, we've definitely proven that the tech works" stage to a "now let's get the word out to average people who don't even know they want the best encryption!" stage: we are adding biz-type people to help us on the growth side of the company.

Before they get here and ruin the party, however, cryptostorm is thinking of minting another micro-batch of sixteen unlimited duration network access tokens. What are these things? They give the purchaser:

  • A token that provides Lifetime access to our network
  • A lifetime, anonymous email address (currently hosted in Iceland on dedicated hardware)

Cost will be $256 USD. If you currently have a token, show us and we'll refund that amount. If you're interested, this is possibly the last time we'll do this (we have heard that the MBA crowd is not as big a fan of customers who pay only once :) ), so email us at and we'll email you details on a first-come, first-served basis.

~ cryptostorm_team
by cryptostorm_team
Tue Feb 25, 2014 10:46 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: CLOSED: aleph tokens ~ unlimited duration batch
Replies: 33
Views: 57483

CLOSED: aleph tokens ~ unlimited duration batch

We've been asked often enough that it's time we made this happen.

This week, cryptostorm is minting a micro-batch of unlimited duration network access tokens. We've decided to refer to them as aleph tokens (in honor of Georg Cantor): א

Pricing has not yet been set. However, those who wish to ensure they have first-chance access to this tranche can do so via a note sent to - no obligation, of course, but rather right of first refusal.

This thread will be updated as pricing & availability finalize.

For the wild...

~ cryptostorm_team

{direct link:}
by cryptostorm_team
Mon Feb 24, 2014 8:37 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Dorky "takedown" notices
Replies: 14
Views: 46910


With mere hours having elapsed since we brought our U.S. exitnode cluster live, the extortion-bot demand letters are already rolling in. God Bless Amerika!

Hot off the virtual presses, one of our replies:
Mr. Siegel -

We are in receipt of your correspondence requesting that we "immediately and permanently cease and desist from unauthorized copying and/or distribution" of certain claimed works of intellectual property.

Thank you so much for taking the time to make us aware of this matter.

To aid in our investigation of the situation, and in furtherance of a deeper understanding of your legally-constituted role in such, we ask that you forward to our attention the forensic reporting on which you base your allegations. We naturally understand that your expertise in such matters runs considerably deeper than ours. Nevertheless, let us suggest that digitally hashed (with a non-reversible/non-rainbowed algorithm, of course) .pcaps of the alleged transfers will be a good starting point. Full server-side logfiles corresponding to these packet-layer captures will be needed to independently validate the legitimacy of the tcpdump output, it goes without saying. Router-layer log data is always helpful - we're sure you concur - so please do send those along, as well. If they're in a nonstandard format, a pointer to relevant documentation of such is much appreciated.

We also ask that you provide us with your preferred independent forensic examiners, so that we may review their credentials and historical experience in documenting such matters. We, needless to say, have several examiners with which we are personally familiar - but it would be unseemly for us to suggest that our own contacts would be of more relevance than those which you will certainly be able to suggest from past working experience.

Once we have completed the necessary work to forensically validate your assertions (as set forth in the quoted correspondence included below), we will be in a much better position to begin the process of responding to the rather novel legal interpretation you suggest. Carts not belonging before their respective horses, let us first work through the forensic issues; obviously, there's no point in arguing legal matters if (in purely hypothetical terms) you lack any sort of independent, forensically sound basis on which to make said allegations.

That we neither "copy" nor "distribute" anything whatsoever is a matter of basic factual reality: we merely route packets, neutrally & without any involvement in either their payload or their header data. As such, we're at a loss to imagine exactly how we could "cease and desist" from doing either of the action-verbs you cite. Irrespective, the question of valid & independently verifiable forensic data comes first.

Finally, as we have not in the past found it productive to engage in "discussions" with automated extortion-bots, we ask that you provide some manner through which we can confirm that these emails from you are coming from, well... from you, and not from a programmatic extortion-bot. Given the provisions of the CAN-SPAM Act, you will understand our concern that honouring an illegal spambot with "replies" would border on dark satire. Rather, we report those individuals programming and profiting from such illegal spambots to the relevant legal authorities for prosecution. Help stop spam, &c.

The rule of law applies to everyone, and nobody is exempt from the provisions of criminal law merely because of how many lobbyists they employ nor how unseemly the intertwining of those lobbyists with the mechanisms of governance in your "Land of the Free," and all that.


Thank you again for your deep concern in protecting artists from nefarious intent - particularly the notable artistic talent which has produced for the human cultural world's eternal appreciation.... let's see. Ah, yes: Family Comes First. A truly priceless artifact of our shared culture, truly worthy of the strongest forms of intellectual property enforcement.

It warms one's heart - literally, our hearts heat by measurable degrees as we type these very words - to see such selfless dedication to the beloved, cherished core of our social environment: protecting the lone artists from piratical plunder, and so on. In short, it is a refreshing pleasure to find someone who selflessly seeks to ensure that creative people receive full and just reward for their contributions to our shared culture. Bravo! Hoorah! Tally-ho!

Best regards,

~ cryptostorm

ps: we had no idea that poor old port 1043 was an "unauthorized port" - would you be so kind as to send along the relevant citation to IETF documentation on that question? It's news to us!


On 14-02-24 06:56 AM, {hosting company} - Abuse Desk wrote:

> ** This is an automated e-mail to inform you of an abuse complaint **
> IP: {node IP}
> Dear customer,
> This message is to inform you we received a complaint regarding
> an IP assigned to you. Please see the complaint at the bottom
> of this e-mail. We urge you to take appropriate action to prevent
> future complaints.
> Please note: the complaint has been processed by an automated system.
> If you feel the complaint is invalid, please contact the complainant.
> Failure to take action might result in an IP block of the mentioned IP.
> Kind regards,
> {hosting company}, Inc. - Abuse Desk
> ******************************************
> ******************************************
> ***NOTE TO {hosting company}: PLEASE FORWARD THIS ENTIRE NOTICE TO ACCOUNT HOLDER OF IP ADDRESS {node IP} at 2014-02-22 20:20:34 North American Eastern Time***
> February 24, 2014
> Re: Notice of Unauthorized Use of Copyrights Owned by Zero Tolerance Entertainment Case #: P58989983
> CEG TEK International ("CEG") represents Zero Tolerance Entertainment, who owns all right, title and interest, including copyrights, in and to the work listed below (hereinafter the "Work"). For independent confirmation that CEG is authorized to represent Zero Tolerance Entertainment, please visit: (Some individuals may find certain words in titles of works to be offensive. CEG apologizes in advance if this is the case.)
> This notice is intended solely for the primary {hosting company} service account holder. Someone using this account has engaged in the unauthorized copying and/or distribution of the Work listed below.
> Evidence:
> Work Title: Family Comes First
> Copyright Owner: Zero Tolerance Entertainment
> Unauthorized File Name: Family Comes First
> Unauthorized Hash: 987897166aaf26be2552779841601b5e147b9cc3
> Unauthorized File Size: 1284512295 bytes
> Unauthorized Protocol: BitTorrent
> Timestamp: 2014-02-22 20:20:34 North American Eastern Time
> Unauthorized IP Address:
> Unauthorized Port: 1043
> The following files were included in the unauthorized copying and/or distribution:
> File 1: Family Comes First/1 veronica.mp4
> File 2: Family Comes First/2 jessa.mp4
> File 3: Family Comes First/3 stevie.mp4
> File 4: Family Comes First/4 india.mp4
> CEG TEK International ("CEG") hereby notifies you that unauthorized copying and/or distribution of Zero Tolerance Entertainment's Work listed above is a violation of the U.S. Copyright Act, 17 U.S.C. 106. In this regard, request is hereby made that you and all persons using this account immediately and permanently cease and desist from unauthorized copying and/or distribution of the Work.
> CEG informs you that you may be held liable for monetary damages, including court costs and/or attorney fees if a lawsuit is commenced against you for unauthorized copying and/or distribution of the Work listed above. You have until Wednesday, March 26, 2014 to access the settlement offer and settle online. To access the settlement offer, please visit and enter Case #: P58989983 and Password: 73cgc. To access the settlement offer directly, please visit ... 83&p=73cgc
> Settlement Information:
> Direct Settlement Link: ... 83&p=73cgc
> Settlement Website:
> Case #: P58989983
> Password: 73cgc
> To review independent confirmation that CEG is engaged and authorized to act on behalf of Zero Tolerance Entertainment, please visit:
> If you fail to respond or settle within the prescribed time period, the above matter may be referred to attorneys representing the Work's owner for legal action. At that point the original settlement offer will no longer be an option, and the settlement amount will increase significantly.
> Nothing contained or omitted from this correspondence is, or shall be deemed to be either a full statement of the facts or applicable law, an admission of any fact, or waiver or limitation of any of the Zero Tolerance Entertainment's rights or remedies, all of which are specifically retained and reserved.
> The information in this notice is accurate. CEG has a good faith belief that use of the material in the manner complained of herein is not authorized by the copyright owner, its agent, or by operation of law. CEG and the undersigned declare under penalty of perjury, that CEG is authorized to act on behalf of Zero Tolerance Entertainment.
> Sincerely,
> Ira M. Siegel, Esq.
> Legal Counsel
> CEG TEK International
> 8484 Wilshire Boulevard, Suite 515
> Beverly Hills, CA 90211
> Toll Free: 877-526-7974
> Email:
> Website:
> This is an automated email. If you have questions or concerns, please visit us at Replies sent to are not read.


cryptostorm_team: info_dorkbot @ the cryptostorm darknet
bitmessage: BM-NAueHWwiZQ26TgX9iXPqtiMjMBB5dc5t
twitter: cryptostorm_darknet
by cryptostorm_team
Fri Feb 21, 2014 5:52 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: New exitnode cluster: United States of NSAmerica
Replies: 23
Views: 32580

New exitnode cluster: United States of NSAmerica

note: folks seeking the most current client configuration files need not wade through this entire discussion thread! The current versions are always posted in a separate, dedicated thread, and will be continuously updated there. Continue reading this thread if you're curious about the details of the config files, want to see earlier versions of them, or have comments/feedback to provide - thanks! :thumbup:

We have put a footprint in the Land of the Surveilled: the United States of NSAmerica (Eastern seaboard)

Attached is a tested Linux/"raw" configuration file (version 1.3).

Underlying TLD-redundant hostname mappings have also been propagated.
This cluster has not yet been put into the general/dynamic production pool - we're hoping for some member feedback before doing so. As such, consider this a late-beta deployment of the cluster. We'd rather make it available for final perf-tuning publicly with network members than keep it in-house longer and thus delay wider member usage.

Our in-house testing has thus far shown excellent performance characteristics. But as always, we've one request for our member community: hit it hard!

Feedback appreciated.
by cryptostorm_team
Thu Feb 20, 2014 4:31 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Automatic, recurring token delivery now available
Replies: 1
Views: 8613

Automatic, recurring token delivery now available

Since our beta launch of cryptostorm last year, we've been asked often by members whether we'll offer "subscriptions" to the network. The answer is simple:

No.

For a host of procedural and technical reasons, subscriptions are a terrible idea for a security-centric service. There's just no way to provide that continuity of identity without seriously compromising the transience & structural ephemerality of network interactions. Subscriptions mean customer databases, which mean payment information, which means real-life identities - an instant target for a wide range of bad actors.


After talking this thru with a wide range of network members, security specialists, payment systems designers, and paranoid OpSec obsessives, we've come to an alternative procedure to deliver most of the functionality of "subscriptions" without the downsides: recurring token deliveries.

Simply put, it's now possible to sign up for tokens to be delivered on a set schedule (monthly or yearly). That scheduling isn't handled by cryptostorm, nor are payment details seen or retained by us (it's an external service offered by PayPal and others), so there's no erosion of delivered security. From our standpoint, we simply receive a notice to send a token to a given email address... just as we do with nearly all token delivery requests. The token gets queued out by our illustrious "tokenbot," and that's that. Whether a token request is the result of an externally-queued cron job or a one-time purchase is neither relevant to us nor actually known to us.

Terminating such an arrangement is straightforward and handled by the outside service provider (PayPal, for now): as they manage the recurring billing, they're the ones to handle cancelling it as well. That's ideal, as it means there's never a concern that we'll drag our feet when members choose to end a recurring setup - it's not our call, either way. For the best.
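To illustrate why this model leaks nothing: a token can simply be an opaque, high-entropy string, with the network storing only a one-way digest of it. The sketch below is our own hypothetical illustration - it is not cryptostorm's actual tokenbot, and the token format and function names are assumptions:

```python
import hashlib
import secrets

def mint_token() -> str:
    """Mint an opaque, high-entropy access token (hypothetical format)."""
    # No member identity, payment detail, or timestamp is embedded in the token.
    return secrets.token_urlsafe(32)

def token_digest(token: str) -> str:
    """One-way digest the network could store instead of the raw token."""
    # A stolen digest list reveals nothing about who bought which token,
    # and nothing links a digest back to an email address or payment.
    return hashlib.sha512(token.encode("utf-8")).hexdigest()

t = mint_token()
print(t, token_digest(t))
```

The point of the design is that the issuing side (payment, email delivery) and the authenticating side (digest check at connect time) never need to share a database.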

This feature has now been added on our main token purchase page. We've opened this thread, here in the forum, to expand discussions of this feature, as well as seek additional (i.e. non-PayPal) recurring payment processors that community & network members would like us to add in the future. There's no reason to be stuck with one, or two, or any limited number of such providers - we're happy to integrate whomever proves useful for members.

We have not yet implemented this ability for bitcoin-based token purchases, but are actively seeking service providers who can offer that functionality to network members.

Finally, we will not ever be offering "in-house" recurring token plans - that's simply not part of our security model, and there's no way to square the circle to make it so. Fortunately, we do not think that proves a barrier to providing this capability to members - via outside services, as noted above.

{direct link:}
by cryptostorm_team
Tue Feb 18, 2014 2:34 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm exitnode clusters: listing+requests+roadmap
Replies: 89
Views: 123602

adding new exitnode clusters: discussion & suggestions

During the past two weeks, cryptostorm's network has seen new peaks in member connections and total traffic transiting across the darknet. This is excellent!

It also means we're getting ready to add new cluster capacity. This will include more hardware in existing clusters - Montreal is due for new capacity, in particular - as well as new clusters entirely. We've our ideas on where on the map clusters would be most useful, in terms of pingtime considerations and to some degree political realities, but we'd much prefer to open this up to community engagement. For reference, our current clusters are found in:

  • Montréal, Québec
  • Frankfurt, Germany
  • Reykjavik, Iceland
So, which location would be your top choice for the next exitnode cluster to add to cryptostorm?
by cryptostorm_team
Mon Feb 17, 2014 5:18 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: zero tolerance policy implemented
Replies: 23
Views: 60116

cryptostorm: zero tolerance policy implemented

We're announcing a Zero Tolerance Policy... with regards to poor network performance. Which means we reject the standard assumption that using a network security service means slow, laggy, poor connectivity.

There is no technological reason for that assumption to hold true, and we've made it a top priority for our entire team that on-network performance is excellent, always. We're not there yet, and we know there are improvements we can - and must - continue to make to cryptostorm. But performance is as much a no-compromise issue for our team as serious cryptographic foundations and sound OpSec procedures are. Slow networks don't get used, and a network that isn't used provides no security whatsoever. Performance is therefore crucial not just for a better day-to-day experience, but as a core security parameter.

Zero Tolerance for crappy network performance. It's not necessary, it's not acceptable, and it's not part of what cryptostorm exists to deliver.

We're opening this thread to centralise discussions of, and reports on, network performance whilst connected to cryptostorm. There are already several existing threads that discuss performance tuning in various specialised regards (links: bittorrent | kernel-level tuning parameters | NAT & port forwarding), and this thread doesn't seek to replace those. Rather, this thread is a place to post basic performance feedback and questions.

To get started, we created a test download file that can be found at this link (which points to a Mega URL; a redirect is mapped from as well). We're using Mega for this, as they tend to do a good job running their servers - and we know they're not biased. Some of these "speed test" services... they're not always 100% reliable, let's just say that. This link won't wget, as it's all pretty much served up in an HTTP wrapper - but that's not necessarily inappropriate given real-world connection usage. We've also put up the same file on one of our administrative servers in Iceland - it's less useful for testing Icelandic cluster performance, as it's geographically in the same DC - but for all other clusters it's useful as a secondary data point. That one will wget, however, so it can be useful.

Using test files like this is most effective when doing A/B comparatives, on-net and off-net, as close together in time as possible. In other words, pull the file - then disconnect from cryptostorm and pull the same file again, immediately. Do this back and forth a couple of times, at different times of day, and those data start to build a really useful foundation for quantitative measures of network performance.
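The A/B procedure above can be scripted. A minimal sketch (the URL in the usage comment is a placeholder, not the real Mega link): time the same download while connected and then disconnected, and compare the resulting throughput figures.

```python
import time
import urllib.request

def throughput_mbps(nbytes: int, seconds: float) -> float:
    """Convert a timed transfer into megabits per second."""
    return (nbytes * 8) / seconds / 1e6

def timed_download(url: str) -> float:
    """Download url once and return the observed throughput in Mbit/s."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        nbytes = len(resp.read())
    return throughput_mbps(nbytes, time.monotonic() - start)

# Usage: run once while connected to cryptostorm, disconnect, run again
# immediately, and repeat at different times of day:
#   timed_download("https://example.org/cryptostorm_prng.csdn")  # placeholder URL
```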

The test file - cryptostorm_prng.csdn - is 13.37 megabytes of uncompressed, highly "random" data generated via SHA512 hashing and IV procedures used by the TrueCrypt application. The suffix "csdn" (cryptostorm darknet) is non-syntactical & thus hopefully won't trigger most layers of QoS, packet shaping, traffic heuristics, and so on - a source of many problems whilst assessing network performance metrics. There's nothing "inside" it, so if you want to burn a whack of time seeking to break the ciphers, you'll probably be really disappointed if you somehow succeed. Fair warning :-)
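For anyone who wants a similar test file locally, here's a rough sketch of producing incompressible data by chaining SHA-512 digests. This is our own illustration, not the actual generation procedure (which, as noted, also involved IV machinery from TrueCrypt that isn't reproduced here); the function name, seed, and default size are assumptions.

```python
import hashlib

def make_test_file(path: str, size_bytes: int = 13_370_000,
                   seed: bytes = b"cryptostorm_prng") -> None:
    """Fill `path` with size_bytes of SHA-512 chain output (incompressible)."""
    block = hashlib.sha512(seed).digest()  # 64 pseudorandom bytes per round
    with open(path, "wb") as f:
        written = 0
        while written < size_bytes:
            chunk = block[: size_bytes - written]  # trim the final block
            f.write(chunk)
            written += len(chunk)
            block = hashlib.sha512(block).digest()  # next link in the chain
```

Because each block is the hash of the previous one, the output won't compress, so QoS layers and transfer stacks can't "cheat" the measurement by squeezing the payload.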

Ok, with that let's get the conversation going regarding speed test results, performance feedback, techniques for optimising client-side performance, and so forth!

{direct link:}
by cryptostorm_team
Fri Feb 14, 2014 4:55 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: final shutdown of Cryptocloud/Cryptocloud VPN
Replies: 1
Views: 11466

final shutdown of Cryptocloud/Cryptocloud VPN

After many months of work & some unexpectedly ugly behind-the-scenes issues involving a former team member, it is with weary gratitude that we are finally able to confirm the final shutdown of the former "Cryptocloud Secure Networking" service (which everyone called "Cryptocloud VPN" - even though we hate that name). Tango down.

Two important matters:

First, all prior customers of Cryptocloud will continue to receive no-cost network membership and full migration to cryptostorm. Contact our support folks via your preferred channel and we'll get you set up - we've been working through this process for several months, and even more so in recent days. We apologise deeply to those who saw "automatic payments" charged to them by "Cryptocloud" in recent months, despite the fact that the network was being decommissioned. These charges were not authorised by us, and none of those funds have come to our team or our company since last August. We have done everything in our power to stop that process, ever since. Irrespective, we offer our apology for the hassle it has caused former Cryptocloud members who saw funds pulled from their wallets for a service in mid-shutdown.

Second, it is our understanding that an official statement regarding the shutdown will shortly be posted elsewhere by the... other party involved in this situation. We are going to keep this post - here & on behalf of the entire cryptostorm team (who also founded & ran Cryptocloud, since 2007) - short and limited, so that we do not speak over top of that pending statement. If and when there's additional need to expand on, discuss, and explore this topic in further detail, we'll be happy to do so here in a parallel thread in a suitable subforum. As always. But doing so prior to that official statement would be neither polite nor really useful.


There you go. It was a wonderful ride. We created a real network security service, in 2007, at a time when people routinely pimped PPTP as "secure" and customers assumed there weren't better options available. There were... and now, in some small way thanks to our groundbreaking work with Cryptocloud, there are: cryptostorm is one, but there's a whole world of services out there (some good, most bad... but all new since 2008). Our role in creating the "VPN industry" has been somewhat eclipsed by new generations of marketing-heavy, tech-lite, SEO-driven, customer-betraying, DMCA-kowtowing "VPN services" that get tons of press from "journalists" who cash advertising checks from them every month, as well. So be it. Our goal has never been to claim credit for anything - our goal has been to change the world, for the better.

It still is.

As we said last year, Snowden's revelations showed us that Cryptocloud's framework - groundbreaking in 2008 - was not going to be able to stand up against the $50+ billion, flagrantly illegal firepower being brought to bear by the NSA and other gone-rogue global spy agencies. To upgrade the old network to a level sufficient to do so would be all but impossible, whilst still providing service on the platform: changing the tyres at 200km/hour. This is why we built cryptostorm from a blank slate: no corners cut, no compromises due to legacy code or legacy assumptions.

It hasn't been an easy road, for us or for some customers. But it has been the right road, and now we have a secure network unmatched in providing serious, reliable, high-performance network security for every member. With the final closure of Cryptocloud, that chapter ends. The new chapters have already been spooling out, one after another, in a collaborative & community-driven process, since last August.

Let's keep writing new chapters, and thereby craft a new narrative that places control and authority back with real human beings when it comes to our private online lives. Enough of the corporate fascist, spy-mania, Orwell's-nightmare bullshit of helplessness in the face of dragnet surveillance. Technology made that mess... but technology is even better-suited to fixing it. No "permission" needed from the 0.1% elite who think they can trick us into believing our lives are theirs to control.

Enough of all that nonsense. Now we swing the pendulum back the other way...

~ cryptostorm_team
by cryptostorm_team
Sat Feb 08, 2014 6:15 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Dorky "takedown" notices
Replies: 14
Views: 46910

Re: Dorky "takedown" notices

Mr. Swales -

We are in receipt of your correspondence requesting "expeditious action to remove or disable access to the material described [herein]," relating to an allegation of commission of civil tort under U.S. law.

Thank you so much for taking the time to make us aware of this matter.

To aid in our investigation of the situation, and in furtherance of a deeper understanding of your legally-constituted role in such, we ask that you forward to our attention the forensic reporting on which you base your allegations. We naturally understand that your expertise in such matters runs considerably deeper than ours. Nevertheless, let us suggest that digitally hashed (with a non-reversible/non-rainbowed algorithm, of course) .pcaps of the alleged transfers will be a good starting point. Full server-side logfiles corresponding to these packet-layer captures will be needed to independently validate the legitimacy of the tcpdump output, it goes without saying. Router-layer log data is always helpful - we're sure you concur - so please do send those along, as well. If they're in a nonstandard format, a pointer to relevant documentation of such is much appreciated.

We also ask that you provide us with your preferred independent forensic examiners, so that we may review their credentials and historical experience in documenting such matters. We, needless to say, have several examiners with which we are personally familiar - but it would be unseemly for us to suggest that our own contacts would be of more relevance than those which you will certainly be able to suggest from past working experience.

Once we have completed the necessary work to forensically validate your assertions (as set forth in the quoted correspondence included below), we will be in a much better position to begin the process of responding to the rather novel legal interpretation you suggest. Carts not belonging before their respective horses, let us first work through the forensic issues; obviously, there's no point in arguing legal matters if (in purely hypothetical terms) you lack any sort of independent, forensically sound basis on which to make said allegations.

Finally, as we have not in the past found it productive to engage in "discussions" with automated extortion-bots, we ask that you provide some manner through which we can confirm that these emails from you are coming from, well... from you, and not from a programmatic extortion-bot. Given the provisions of the CAN-SPAM Act, you will understand our concern that honouring an illegal spambot with "replies" would border on dark satire. Rather, we report those programming and profiting from such illegal spambots to the relevant legal authorities for prosecution. Help stop spam, &c.

Thank you again for your deep concern in protecting artists from nefarious intent. It is a refreshing pleasure to find someone who selflessly seeks to ensure that creative people receive full and just reward for their contributions to our shared culture!

Best regards,

~ cryptostorm
On 14-02-07 09:05 AM, wrote:
> Demand for Immediate Take-Down: Notice of Infringing Activity
> Date: 07/02/2014
> Case #: 17050
> Url:
> Dear Sir or Madam,
> We have received information that the domain listed above, which appears to be on servers under your control, is offering unlicensed copies of, or is engaged in other unauthorized activities relating to, copyrighted works published by Penguin Group (USA) Inc.
> 1. Copyright work(s):
> The Stroy of the Scrolls 9780141046150
> Backyard Harvest 9780756671631
> JavaScript for programmers 9780137001316
> The Essential Book of Fermentation 9781583335031
> Idiot's Guides: Drawing 9781615644148
> A History of the World in 12 Maps 9780670023394
> The Witches of Eileanan 9780451456892
> The Cat Who Knew a Cardinal 9780515107869
> Gym-Free and Ripped 9781615640997
> Great Paintings 9780756686758
> Dark Lie 9780451238061
> The Pool of Two Moons 9780451456908
> Catch Me 9780451413437
> Art That Changed the World 9781465414359
> A Short History of the World 9780141441825
> One Million Things 9780756638436
> How to Photograph Absolutely Everything 9780756643089
> Jamie's 15-Minute Meals 9780718157807
> Going Solo 9781594203220
> Tales of the Greek Heroes 9780140366839
> Vegetables Please 9781465402028
> The First 20 Hours 9781591845553
> Reference World Atlas 9781465408600
> The Complete Idiot's Guide to Simple Home Repair 9781592576654
> The Complete Classical Music Guide 9780756692568
> The Science of Fear 9780525950622
> Decoding Love 9781583333310
> Music 9781465414366
> The Complete Idiot's Guide to Microsoft Windows 8 9781615642366
> Ask Me Everything 9780756669713
> Digital Photography Essentials 9780756682149
> Rush 9781101620366
> A Singular Woman 9781594485596
> The Science Class You Wish You Had (Revised Edition) 9780399160325
> The How of Happiness 9781594201486
> A People's Tragedy 9780140243642
> The Survival Handbook: Essential Skills for Outdoor Adventure 9780756642792
> How to Grow Practically Everything 9780756633417
> Stop Diabetes Now 9781583333082
> Danger! 9780756667399
> The Complete Idiot's Guide to Slow Cooker Cooking, 2nd Editi 9781592576234
> The Economics Book 9780756698270
> How to Be a Genius 9780756655150
> Vampire Academy 9781595144614
> Fundamentals of General, Organic, and Biological Chemistry (6th Edition) 9780136054504
> Everyday Easy One-Pot 9780756657932
> Mastermind 9780670026579
> Eleven Rings 9781594205118
> Formula 50 9781583335024
> Do Not Open 9780756662936
> The Amazing Story of Quantum Mechanics 9781592404797
> Animals: A Visual Encyclopedia (Second Edition) 9780756691707
> Complete Atlas of the World, 2nd Edition 9780756689728
> Copyright owner(s) or exclusive licensee:
> Penguin Group (USA) Inc.
> 2. Copyright infringing material or activity found at the following location(s):
> ... e9492.html
> ... 03247.html
> ... b9541.html
> ... b233a.html
> ... 87128.html
> ... 41243.html
> ... 70819.html
> ... 62838.html
> ... 10932.html
> ... 81921.html
> ... 73863.html
> ... 70822.html
> ... 26230.html
> ... 4d2df.html
> ... 03964.html
> ... 29039.html
> ... 79229.html
> ... ast_Food_-
> ... 51375.html
> ... 46473.html
> ... 67864.html
> ... 11716.html
> ... d7bfd.html
> ... 12101.html
> ... 40684.html
> ... 64509.html
> ... 07017.html
> ... 21289.html
> ... 48105.html
> ... 42128.html
> ... 63426.html
> ... 83222.html
> ... 92870.html
> ... c02a9.html
> ... 74436.html
> ... 86523.html
> ... 05433.html
> ... 78372.html
> ... 18463.html
> ... venture_-M
> ... 91701.html
> ... 83222.html
> ... 65110.html
> ... 12811.html
> ... 91003.html
> ... 21582.html
> ... 28703.html
> ... 00394.html
> ... 97541.html
> ... 48496.html
> ... 19822.html
> ... It-Mantesh
> ... 58693.html
> ... 30922.html
> ... 34198.html
> ... 69155.html
> ... 10834.html
> ... 55732.html
> ... 27541.html
> ... 65626.html
> ... 97272.html
> ... 51901.html
> ... -_Pdf_-_Ye
> ... 31720.html
> ... t_Secrets_(
> The above copyright work(s) is being made available for copying, through downloading, at the above location without authorization from the copyright owner or exclusive licensee.
> 3. Statement of authority:
> The information in this notice is accurate, and I hereby certify under penalty of perjury that I am authorized to act on behalf of Penguin Group (USA) Inc., the owner or exclusive licensee of the copyright(s) in the work(s) identified above. I have a good faith belief that none of the materials or activities listed above have been authorized by Penguin Group (USA) Inc., its agents, or the law.
> We hereby give notice of these activities to you and request that you take expeditious action to remove or disable access to the material described above, and thereby prevent the illegal reproduction and distribution of this copyrighted work(s) via your company's services.
> We appreciate your cooperation in this matter. Please advise us regarding what actions you take.
> Yours sincerely,
> Internet Investigator
> Gary Swales
> On behalf of:
> Penguin Group (USA) Inc.
> 375 Hudson Street New York, NY 10014 United States of America
> E-mail:
by cryptostorm_team
Sun Feb 02, 2014 2:02 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's full public launch - press release
Replies: 2
Views: 9728

cryptostorm's full public launch - press release

Cryptostorm Darknet ~ Full Public Launch ~ Year of the Horse

Reykjavik, Faxaflói Bay, Iceland (PRWEB) February 02, 2014

After more than five years in the making, and months of exhaustive public beta testing, the cryptostorm darknet has now opened to full public availability. Incorporating extensive member community feedback during beta, as well as top-tier cryptographic expertise, cryptostorm offers an elegant, blazing-fast, ubiquitous online privacy service.

~ ~ ~

Rebuilt ground-up by the same team behind Cryptocloud VPN, which revolutionised online privacy when launched in 2007, the cryptostorm darknet "levels up" dramatically from all prior privacy services. Opensource, decentralised, peer-reviewed, and battle-tested... from top to bottom, cryptostorm delivers.

Months of performance tuning brings forward a privacy service without peer, one uniquely capable of serving the high-bandwidth, always-on requirements of members worldwide. Deep cryptographic foundations, correctly and transparently implemented, underpin a parallel focus on top network performance.

In a public statement concurrent with launch, the cryptostorm team observes that "the military spying apparatus of the world's superpowers has been turned away from its legitimate purposes, and towards the daily lives of civilians. To state this is not merely to offer opinion; thanks to Snowden's whistleblowing, it is documented fact. Worse, our democratic system of laws has utterly failed to offer any real protection against the frightening prospect of a world far worse than any dystopic future Orwell envisioned. And yet, math works - crypto technology has proven itself against even the NSA's vast resources. Even so, crypto tech must be done right, to be worth anything."

Cryptostorm does it right.

And yet, there's more to real-world security than fancy tech. Cryptostorm's organizational model - decentralised, attack-hardened, jurisdictionally diverse - is as important as its proven crypto framework.

No central nodes, no central points of failure, no information about members stored anywhere, ever; after pioneering "no logging" policies in 2008, cryptostorm's team now steps forward with a model that directly addresses the sharp realities of today's online world. Coupled with a groundbreaking, structurally anonymous payment model, cryptostorm delivers real protection in a post-Snowden world.

Good tech and good project design are useless without excellent customer support, and an unshakeable devotion to elegance & ease-of-use. Cryptostorm neither allows corners to be cut in security, nor in elegance of design. With an unashamed commitment to improving the world by empowering citizens against dragnet spy regimes, cryptostorm's team has created a security service to protect everyone, everywhere.

Cryptostorm is different from earlier models that have tried to provide true online security. No more a "VPN service" than an old growth forest is merely "some trees," the cryptostorm darknet delivers serious crypto protection quietly in the background, as it should be. No hype and no hot air: the project bristles with deep tech talent, but remains focused on widespread accessibility.

The world's citizens are anything but helpless in the face of meta-legal spying, intrusive data collection, and transnational surveillance. Cryptostorm is a core tool in transcending victimhood: an uncompromising, unrepentant, unmatched alternative to half-baked, hype-filled, backdoored security theatre.

A hybrid of the best ideas, technology, talent, and wisdom of nonprofit efforts, combined with the production resources of a commercial project, cryptostorm brings bulletproof protection out of the specialised realms of the technical shadows, and into the reach of everyone.

Free test tokens available on request. Complimentary membership always available for activists, dissidents, students, and financially challenged folks worldwide. Cryptostorm: privacy tech done right.
by cryptostorm_team
Fri Jan 31, 2014 7:00 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Final draft - launch release (3.0)
Replies: 2
Views: 8620

Final draft - launch release (3.0)

Here's the final draft language of our press release, that's been presented to the distribution services for review and approval. It's not hit the wires yet, so please do share feedback & critique in the meantime. Thanks ~
cryptostorm's public launch | Gong Xi Fa Cai | 恭禧發財

More than five years in the making, and after months of public beta testing, the cryptostorm darknet today opens to full public availability. Incorporating extensive member community feedback during beta, as well as top-tier cryptographic expertise, cryptostorm provides member-friendly, blazing-fast, ubiquitous online privacy service. Coupled with a groundbreaking, structurally anonymous payment model, cryptostorm delivers protection in a post-Snowden world.

We all now live our online lives in a world of mass surveillance, advanced persistent threats, and rapacious marketing schemes. We all now know that the military spying apparatus of the world's superpowers has been turned on us: citizens of the planet. Laws have failed to protect us from an Orwellian present far worse than any dystopic future Orwell ever imagined. Where laws fail, math works - crypto technology has proven itself against even the NSA. But it has to be done right, to be worth anything.

Cryptostorm does it right.

Rebuilt ground-up by the same team behind Cryptocloud VPN, which revolutionised online privacy when launched in 2007, cryptostorm "levels up" dramatically from all prior privacy services. Opensource, decentralised, peer-reviewed, and battle-tested... from top to bottom, cryptostorm delivers. Months of performance tuning have delivered a privacy service without peer in supporting the high-bandwidth demands of members worldwide. Deep cryptographic foundations, correctly and transparently implemented, complement this focus on unbounded speed and performance.

There's more to real-world security than fancy tech. Cryptostorm's organizational model - decentralised, attack-hardened, jurisdictionally diverse - is as important as the crypto in protecting members from all known threats to online privacy. No central nodes, no central points of failure, no information about members whatsoever. After pioneering "no logging" policies in 2008, cryptostorm's team now steps forward with a model that strongly reflects the sharp realities of today's online world.

Good tech and good project design aren't any use at all without excellent customer support and an unshakeable devotion to ease-of-use in everyday life. No corners cut in security, and no corners cut in elegance of design. With an unashamed commitment to changing the world by empowering citizens against dragnet spy regimes, cryptostorm's team has created a security service to protect the world's millions, not merely a few thousand geeks.

Cryptostorm is different from any earlier approach to online security. No more a "VPN service" than an old growth forest is "some trees," this new hybrid darknet delivers serious crypto protection silently, behind the scenes, as it should be. No hype and no bullshit, the project bristles with tech talent but dedicates itself to widespread accessibility.

Free test tokens available on request. Free membership always available for activists, dissidents, and financially constrained folks worldwide.

As real human beings living our daily lives, we are anything but helpless in the face of meta-legal spying and transnational surveillance. Cryptostorm is a core tool in rejecting victimhood: an uncompromising, unrepentant, unmatched alternative to half-assed, hype-filled, backdoored security theatre. We are honoured to be of service in the work of building a diverse, healthy future for us all.

~ cryptostorm_team
by cryptostorm_team
Mon Jan 27, 2014 6:44 pm
Forum: general chat, suggestions, industry news
Topic: Google Analytics on cryptostorm sites: discussion (mostly closed...ish)
Replies: 14
Views: 16525


Google Analytics has been removed from the forum, during the recent server-side upgrades & style migration.

The community has spoken & we've followed that guidance.

  • ~ cryptostorm_team
by cryptostorm_team
Mon Jan 27, 2014 12:20 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: forum & website upgrades during weekend
Replies: 1
Views: 6430

forum & website upgrades during weekend

This should have been posted beforehand, but here goes:

We're doing some major upgrades to the server infrastructure behind our websites, including this forum & main website. Upgrades are intended to bring all backend packages current with project builds, and in general to harden things against automated attacks. We're also looking to finally bring our Apache engine up to full 1.0.1(e) OpenSSL support.

Note that none of this impacts production network infrastructure; these are "merely" website updates. Our site hosting is done on physically distinct hardware, in Iceland, relative to our exitnode clusters and other session-supporting infrastructure.

These upgrades were expected to result in short downtimes for the forum and main website... but we now know that wasn't exactly how things worked out, eh? :wtf:

If there's interest in discussing the guts of the upgrade, we hope a thread will be opened so it can be discussed in detail.

  • ~cryptostorm_team
by cryptostorm_team
Fri Jan 24, 2014 6:41 pm
Forum: general chat, suggestions, industry news
Topic: Request for feedback re Windows 8.1 widget sessions
Replies: 0
Views: 6628

Request for feedback re Windows 8.1 widget sessions

{direct link:}

We have been receiving intermittent reports to our support team of problems establishing successful connections via the cryptostorm widget on Windows 8.1 platforms. In several instances, extensive debugging and error log analysis has not resulted in any resolution to these problems - an extremely unusual occurrence for our support team to confront.

As such, we are asking for feedback from network members using Windows 8.1 and the network access widget: are you able to successfully connect? Have you experienced any performance or network throughput issues, when connected? Have you seen transient "authorisation failed" messages, and/or widget connections that "hang" after completing but before minimising to the taskbar?

The more feedback we gain from network members, the faster we'll be able to pin down the source of these reported issues and implement solutions.

Finally, if folks are on Windows 8.1 and would like to aid in testing the platform with the widget but don't currently have a token for cryptostorm access, please drop an email to debug<at>cryptostorm<dot>is and our support team will be glad to provide no-cost 30 day tokens in exchange for this assistance.

Thank you in advance for your contribution to this component of cryptostorm's member support efforts!

  • ~ cryptostorm_team
by cryptostorm_team
Tue Jan 21, 2014 9:10 pm
Forum: member support & tech assistance
Topic: Linux/Tunnelblick connect snags | RESOLVED (via 1.3 conf's)
Replies: 31
Views: 29131

1.3 - Network Manager & Linux terminal config

As members reported problems with some OS/client combinations when connecting to the correct exitnode instances, we have worked continuously to identify the underlying issue, test solutions, and prepare successful results for the member community to use.
  • Summary: we have created a version 1.3 "generic" configuration setting - client & server - which has been tested internally to support both terminal-based and Network Manager-initiated connections with cryptostorm for Linux-housed members. These have been rolled out on newly-provisioned Linux-specific nodes within our Icelandic exitnode cluster. Once we have broader member consensus that these 1.3 settings are robust, they will be deployed for Linux instances across the cryptostorm network.
Our testing has confirmed member reports that the Ubuntu "Network Manager" plug-in was not behaving as expected. In particular, we noted problems with the --fragment and --mtu directives. This no longer surprises us, as we've seen similarly unexpected results with these directives under other OS and client setups as well. The upshot: terminal-based Linux connections behaved as expected (and as tested), but Network Manager connections failed for members - variances in packet sizes caused 100% failure of HMAC validation, and thus a complete absence of successful data transfer via the "data" channel of the session framework.

Systematic validation of these results confirmed that numerous configuration versions which were, in theory, structurally sound would be correctly parsed by the terminal-based client, by Network Manager, or by neither... but never by both concurrently.

Additional exploration of the configuration landscape identified a combination of parameters that parses correctly both via terminal and Network Manager, while retaining all necessary security components within the larger cryptostorm framework itself. Internal testing has validated first-tier performance and secure session capabilities, and the configuration - version 1.3 nomenclature - is being released for member testing prior to full deployment. At present, the 1.3 framework is limited to the Icelandic exitnode cluster.

The relevant client-side configuration file is included below.

Tested Network Manager connections successfully imported the 1.3 configuration file, as posted below. If members find that additional steps are needed to support successful Network Manager connections, we will summarize them in the relevant connection guide thread, for broader availability.

Mappings for --remote parameter to support "raw"/Linux connections to the Icelandic exitnode are, per cryptostorm convention, of the form: - additional mappings of are also mapped and supported. Per cryptostorm-wide production standards, four-tier TLD redundancy is provided and supported for all cluster hostname mappings - and is included in the below-attached 1.3 configuration framework.

Finally, we have not been able to test this 1.3 version under Viscosity and Tunnelblick, and thus do not know if it will find concomitant success under these OS/platform environments. We ask that members provide their results when 1.3 is put to use on these platforms, and based on that input one of two decisions will be made: either we'll continue to include these OS/platform flavours within the "raw"/Linux versioning during the roll to 1.3 across the network, or we will further fork the framework to expose Mac-specific daemons to optimally support members coming to the network from these platforms. In preparation, we have soft-reserved internal IP assignments for use in these Mac-specific connection platforms.

Reminder: all remnants of configuration files in this thread are very old, and you should NOT try to use them to connect; they won't work anyway. Please refer to this thread instead for current, working configurations.

Code:

# this is the client settings file, versioning...
# cryptostorm_client_raw-iceland1_3.conf

# it is intended to provide connection solely to the Iceland exitnode cluster
# DNS resolver redundancy provided by cluster-iceland randomised lookup queries
# Chelsea Manning is indeed a badassed chick: #FreeChelsea!
# also... FuckTheNSA - for reals

dev tun
resolv-retry 16

# randomizes selection of connection profile from list below, for redundancy against...
# DNS blacklisting-based session blocking attacks

remote 443 udp

remote 443 udp

remote 443 udp

remote 443 udp

comp-lzo no
# specifies refusal of link-layer compression defaults
# we prefer compression be handled elsewhere in the OSI layers
# see forum for ongoing discussion -

# runs client-side "down" script prior to shutdown, to help minimise risk...
# of session termination packet leakage

# allows client to pull DNS names from server
# we don't use but may in future leakblock integration

explicit-exit-notify 3
# attempts to notify exit node when client session is terminated
# strengthens MiTM protections for orphan sessions

hand-window 37
# specifies the duration (in seconds) to wait for the session handshake to complete
# a renegotiation taking longer than this has a problem, & should be aborted

mssfix 1400
# congruent with server-side --fragment directive

# passes up, via bootstrapped TLS, SHA512 hashed token value to authenticate to darknet

# auth-retry interact
# 'interact' is an experimental parameter not yet in our production build.

ca ca.crt
# specification & location of server-verification PKI materials
# for details, see


ns-cert-type server
# requires TLS-level confirmation of categorical state of server-side certificate for MiTM hardening.

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel stream cipher methodology
# we are actively testing CBC alternatives & will deploy once well-tested...
# cipher libraries support our choice - AES-GCM is looking good currently

replay-window 128 30
# settings which determine when to throw out UDP datagrams that are out of order...
# either temporally or via sequence number

# implements 'perfect forward secrecy' via TLS 1.x & its ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice

key-method 2
# selects OpenVPN's second key negotiation method: data channel keys & credentials
# are exchanged over the bootstrapped TLS control channel during session setup

log devnull.txt
verb 5
mute 1
# directs client-side logging to a discard file, so no connection logs persist locally
# verb 5 sets verbosity; mute 1 suppresses repeats of same-category messages
# adjust these if you'd like to see more details of connection initiation & negotiation
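The token handling referenced in the config comments above - a SHA512 hash of the access token passed up over the bootstrapped TLS channel - can be sketched as follows. The function name, token value, and UTF-8 encoding are assumptions for illustration, not cryptostorm's actual widget implementation:

```python
import hashlib

def hash_token(token: str) -> str:
    # SHA512 digest of the access token, hex-encoded; the server then checks
    # the hash, so the raw token never needs to be stored network-side
    return hashlib.sha512(token.encode("utf-8")).hexdigest()

digest = hash_token("example-access-token")  # hypothetical token value
assert len(digest) == 128  # SHA512 = 64 bytes = 128 hex characters
```

One design consequence worth noting: because only the hash travels up for validation, a stolen server-side list of hashes cannot be trivially replayed as tokens.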
by cryptostorm_team
Mon Jan 13, 2014 4:25 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: HOWTO: manual editing of widget exitnode preferences
Replies: 13
Views: 22929

Re: cryptostorm: manual editing of widget exitnode preferenc

We now have the following Windows-specific remote options available and tested for use in the Widget (or other Windows clients):
  • windows-montreal.{TLD option}
    windows-frankfurt.{TLD option}
    windows-iceland.{TLD option}
Additionally, for dynamic selection of exitnode cluster that randomly selects between existing clusters, we've created the following remote option:
  • windows-dynamic.{TLD option}
Available TLD mappings - {TLD option} - for all of these selections are at present:
They can be used interchangeably. Thus, for example, will be an option for dynamic selections.
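The interchangeable {TLD option} scheme above can be sketched in Python. The cluster names come from the list in this post; the TLD strings are hypothetical placeholders, since the real TLD mappings are elided here:

```python
import random

CLUSTERS = ["windows-montreal", "windows-frankfurt", "windows-iceland"]
TLD_OPTIONS = ["tld-a", "tld-b", "tld-c", "tld-d"]  # placeholder TLDs

def remote_for(cluster: str, rng: random.Random) -> str:
    # any TLD option is interchangeable for a given cluster hostname
    return f"{cluster}.{rng.choice(TLD_OPTIONS)}"

def windows_dynamic(rng: random.Random) -> str:
    # 'windows-dynamic' behaviour: randomly select a cluster, then a TLD
    return remote_for(rng.choice(CLUSTERS), rng)

rng = random.Random(0)
name = windows_dynamic(rng)
cluster, tld = name.split(".", 1)
assert cluster in CLUSTERS and tld in TLD_OPTIONS
```

Randomising across several TLDs is what gives the redundancy against DNS-blacklisting attacks mentioned elsewhere in these posts: blocking one TLD mapping still leaves the others resolvable.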

Finally, to repeat, this entire process is being encapsulated in a pull-down menu in the 1.0 widget build - so if this all seems a bit complex and annoying, the 1.0 version eliminates the drama.

Thank you,

  • ~ cryptostorm_team
by cryptostorm_team
Fri Jan 10, 2014 6:09 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83266

forked server configuration 1.2 now in production

After a somewhat astonishingly complex process, we've now deployed across all exitnodes the full 1.2 "forked" network session framework. This is a short post to let folks know what's going on - we'll be writing up a more detailed synopsis of the theory behind all this, and the substantial future benefits this change offers to the network in terms of flexibility, performance, and even a bit of increased security.

First, if you're connecting with the Windows widget - any previously-released versions - then there should be no changes, no need for you to do anything. You may have seen your session drop this morning as patches and upgrades were applied. That's it. If you are seeing problems, let our support staff know! Ensuring full backwards-compatibility with all existing widget builds has been a top priority for us.

Second, if you've changed your "remote" setting in the widget's configuration file, then do read this little summary as it will likely be relevant to you!

Ok, now the details in high-level form...

We have created a new layer of logic in the "remote" connections to exitnodes and clusters. The widget (and other Windows-based network sessions) previously connected to the same daemons (processes or instances are also accurate, pick your preference) on specific physical machines as do all our other "raw" connections from members on Linux, Android, or other OSes. That was becoming a serious bottleneck, since it left no room to tune sessions per OS. Hence the forking decision we've made.

From here forward, Windows-specific sessions will be connecting always via "remote" sessions of the form:

Code:  |
...and so on. These subdomains, in turn, direct to specific daemons within those exitnode clusters that are tuned for Windows/widget network sessions. So, generally speaking, Windows sessions will in the future always be pointed at Windows-specific remote daemons (which are in fact separate IPs on specific machines in a given cluster). [note: current widget builds use a generic '' remote mapping, which has now been redirected to Windows-specific daemons via A record-based DNS remappings]
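The A-record remapping described above can be sketched as follows. The hostnames mirror the naming convention in this post, but the IPs are RFC 5737 documentation addresses standing in for the real per-daemon IPs, and the selection logic is an illustrative emulation, not the resolver's actual code:

```python
import random

# Each OS-specific hostname publishes A records pointing at daemons
# tuned for that OS (separate IPs on machines within the cluster).
A_RECORDS = {
    "windows-iceland": ["", ""],
    "linux-iceland": ["", ""],
}

def pick_daemon(hostname: str, rng: random.Random) -> str:
    # emulate the client picking among the A records DNS returns
    return rng.choice(A_RECORDS[hostname])

rng = random.Random(42)
ip = pick_daemon("windows-iceland", rng)
assert ip in A_RECORDS["windows-iceland"]
```

Because the fork lives in DNS rather than in client binaries, existing widget builds keep working unchanged: re-pointing a generic hostname's A records at the Windows-tuned daemons retargets them without any client update.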

For "raw"/Linux connections there are now a suite of specific remote options, such as...

Code: |
...and so on. We will be posting a full list of those mappings, so folks can make use of them selectively. Additionally, the official 1.2 client configurations are being polished for distribution by our support folks right now.

Finally, there's "stubs" in the framework for dedicated daemons to support custom-config sessions for iOS, Android, and other client-side platforms that each have special ways they can be tuned... but only if the server supports them. Well, all of our nodes and clusters now support this tuning ability - and we'll be expanding that as we go, across the network.

~ ~ ~

This has been, in a word, epic. We began work on this in early December, when reports of buggy behavior in the "MTU/fragment" directive started rolling in. It became clear that, to really support full performance tuning without any slip in security or resilience, we needed to completely rethink our approach to a "unified" configuration for different OS flavours. With the growing volume of exitnodes and clusters in our network, and with our desire to support backwards compatibility for everyone, this became a logistical puzzle in many dimensions.

We pushed to finish this over the holidays, when things are generally slower, but it kept surprising us with complex challenges. Many recompiles, retests, rearchitectings, and re-thinkings later, we've got things in hand. This last push to deploy the tested framework has now taken place, and we're in clean-up mode.

Speaking of... it's likely there will be a batch of unforeseen IP/fqdn mapping bugs left over from this - so please let us know if you're seeing something unexpected. And, if you're getting "hung" network sessions because of the dreaded "inconsistent use of MTU settings" or "bad header compression" warnings in logfiles, that's a sign you're getting routed to the wrong daemon on a given node. Let us know, and we'll get it sorted.

Everyone on the team has been pitching in with the testing and deployment of this upgrade. And, while it's not very glamorous - redesigning an encapsulation framework for session routing isn't exactly what most folks consider "sexy" - it has opened up a big space for future improvements, extensions, and feature additions. We knew this big upgrade was going to be needed, so we've bitten the bullet and gotten it done now - before the network grows even bigger. Anyway, what this means is that our replies to emails and bitmessages have been much slower than usual: we grabbed our support staffers and used them brutally as testers throughout, so everyone's been pulled into the vortex in the meantime.

Why so complex, you might ask? Simple: since we don't store ANY central "account" information for network members, anywhere, the challenge of enabling exitnode/cluster selection becomes considerable. Unprecedented, as nobody has ever built a secure network with our decentralised framework. This has not been a trivial challenge, and as clusters have been added, the need has become more and more obvious. It's been an adventure.

Now, we're back to the daily work of production: administration, performance tuning, minor bug identification. More, we're able to get back to work on further security enhancements, refinement of the widget framework, widget builds for additional OS flavours, and so on. Finally.

Thanks for the debugging and oversight help we've received from network members, in the last few weeks. This has been an adventure, and it's one we've all gone through together.

We'll post a link here when our ops folks write up more details on the implementation & how it can be used by network members to choose their clusters, nodes, and OS preferences.

Best regards,

  • ~ cryptostorm_team
by cryptostorm_team
Mon Jan 06, 2014 6:17 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: Icelandic exitnode cluster now in production
Replies: 2
Views: 7954

cryptostorm: Icelandic exitnode cluster now in production

All remnants of configuration files in this thread are very old, and you should NOT try to use them to connect; they won't work anyway. Please refer to this thread instead for current, working configurations.

We are pleased to formally announce the release into our production resource pool of the new Icelandic exitnode cluster. Our colleagues at DataCell ehf in Reykjavík have patiently worked with our team to provision an exceptionally robust hardware suite in their sustainably-powered datacentre. Plus, DataCell kicks ass; 'nuff said.

To begin, we are capping simultaneous connections to this new cluster at 300 - as we fine-tune performance on the new hardware, we will increase that cap to make full use of our dedicated machines. And, yes, these aren't little Virtual Private Servers with limited capacity and terrible security characteristics: we use only dedicated, from-the-metal installs of custom-compiled, binary-verified OS images on machines in our production infrastructure.

As with all cryptostorm sessions, access to the darknet is validated via hashed network access tokens. Cryptostorm has no "accounts" or "subscriptions" - structural anonymity is maintained via our token-based authentication model. Tokens are available directly from cryptostorm, or for additional anonymity protections, via independent network token resellers.

For cryptostorm members connecting directly via "raw" OpenVPN sessions, the following configuration template has been tested and is ready for production use:

We are currently custom-compiling a build of our Windows network access widget with baked-in connectivity to the new Icelandic cluster; it will be posted here as soon as the compile is verified and cleared for member use. For those curious as to the server-side parameters governing secure network connections to this cluster, production-mirror conf's have been posted in our server configuration thread.

As with all shifts of new infrastructure into production, we ask that network members let us know of any glitches in the new capacity so we can quickly fine-tune the framework as it scales with member usage.

Our current exitnode clusters in Frankfurt (Germany), Montreal (Canada) - and now Iceland - are soon to be joined by dedicated, high-capacity, professionally-administered exitnode clusters in the United Kingdom, Japan, and the Czech Republic.

Since our limited beta launch in October, cryptostorm's groundbreaking secure network framework has seen steady, consistent, organic growth as members spread word of our intensively hardened cryptographic framework, next-generation token-based decentralised authentication model, and battle-tested team. Plus unlimited, all-protocols, all-ports, all-applications secure network connectivity that is, as we've been told by members, "fast as fuck." Indeed.

All the best,

  • ~ cryptostorm_team

DataCell's Reykjavík datacentre:
by cryptostorm_team
Mon Jan 06, 2014 5:49 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm exitnode clusters: listing+requests+roadmap
Replies: 89
Views: 123602

Our Icelandic exitnode cluster is live and in production. We've capped simultaneous connections at 300, for the time being, as we performance-tune the loadbalancer logic within the cluster.

Attached is the requisite client configuration file for connections. This is tested & structured for "raw" connections - we are in the process of provisioning a widget-specific framework for this cluster, and will roll out the needed installer files shortly. If folks want to try manual edits to the config of their Windows widget in the meantime, we'd be curious to see the results you achieve - but we've not tested that edit ourselves, and thus cannot confirm it will connect until we are ready with the dedicated daemon instances.

Enjoy :-)
by cryptostorm_team
Thu Dec 26, 2013 4:23 am
Forum: general chat, suggestions, industry news
Topic: Jeremy Hammond's stunningly courageous rebuke of tyranny
Replies: 2
Views: 10398

Snowden's holiday message

Per recent tweet, here's a local archive copy of the Vimeo version of Snowden's Christmas message - it seems it's being blocked in some places, so we want to be sure it's available whenever and wherever requested.
by cryptostorm_team
Wed Dec 25, 2013 6:05 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: current 'config' files for cryptostorm network connections (rev. 1.4)
Replies: 2
Views: 58778

current 'config' files for cryptostorm network connections (rev. 1.4)


Members connecting via our Windows network access widget do not need to manually download or implement these configuration files, as they are pre-packaged with the widget installer.

This thread exists to serve as a one-stop location for the most current version of the client configuration files for cryptostorm. As new versions complete alpha (internal) and beta (community) testing successfully, they'll be swapped into this post - so that there's always the most current approved configs (and only the most current versions) here in this thread.

Note that all config files in this post have, as of today (12 January 2014) been upgraded to Hostname Assignment Framework compliance. This delivers vastly improved session security, resilience, and flexibility as our network continues to grow and expand.

A much deeper discussion of client configuration files, as well as archival copies of earlier versions, can be found in a separate, parallel thread. To ensure that discussion stays in one place, this post is locked & does not accept replies. Please, if you've got feedback or questions or comments on the config files, post them in the parallel thread - thanks!

These configuration files vary only in the choice of exitnode clusters embedded in them, and in their tuning for use with particular operating systems. Otherwise, they are identical to each other. Selected cipher suites, session parameters, and authentication methods are the same. Windows-based members connect to dedicated "instances" of our servers, and nearly everyone else can use the configuration files labelled "Linux." (we used to call those "raw" files, but it was a terrible name, so we've moved to just using "Linux," even though it'll support BSD/OSX and many other operating systems)

You will notice that there are two different ways to do broad-spectrum, "randomised" connections to our global network: dynamic, and settled. We can dig deeper into the differences between these models elsewhere; for now, it's best to think of the "settled" versions as less likely to "jump around" between geographically dispersed nodes during routine network interruptions. Conversely, "dynamic" balancers will be more aggressive in effectively randomising the node to which each session connects - including routine reconnects. Some folks like the extra variability and attack-surface hardening of the dynamic model; others want a bit more stability of node selection, and thus the "settled" balancer does well for them.
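The settled-versus-dynamic distinction can be sketched as two tiny balancer classes. The node labels and class names here are hypothetical illustrations of the behaviour described, not cryptostorm's implementation:

```python
import random

NODES = ["frankfurt", "montreal", "iceland"]  # hypothetical node labels

class SettledBalancer:
    """Picks a node once, then sticks with it across routine reconnects."""
    def __init__(self, rng: random.Random):
        self._node = rng.choice(NODES)
    def reconnect(self) -> str:
        return self._node

class DynamicBalancer:
    """Re-randomises the node on every reconnect."""
    def __init__(self, rng: random.Random):
        self._rng = rng
    def reconnect(self) -> str:
        return self._rng.choice(NODES)

rng = random.Random(7)
settled = SettledBalancer(rng)
assert settled.reconnect() == settled.reconnect()  # stable node choice
assert DynamicBalancer(rng).reconnect() in NODES   # may differ each time
```

The trade-off in the prose falls straight out of the code: the dynamic version spreads sessions (and attack surface) across nodes on every reconnect, while the settled version gives a stable exit geography at the cost of that variability.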

Otherwise, network connections are made based on exitnode clusters as defined by a city (or, in a small number of cases, a geographic section of a larger country); this, again, relates to the HAF approach to network resilience, and is able to help ensure the network is always available, even with the routine ups and downs of individual nodes.

new Singapore anchor exitnode, just provisioned

For our no-cost, capped-speed cryptofree service:

Mac/OSX optimised:

Coming soon, or in short-term re-provisioning status:
by cryptostorm_team
Tue Dec 17, 2013 2:13 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm interview replies: Winter Solstice 2013
Replies: 0
Views: 9739

cryptostorm interview replies: Winter Solstice 2013

These questions were submitted via email to our support address; we've chosen to publish our responses here, in order to provide wider access to them and to encourage discussion, debate, and critique of our replies by the community as a whole.

Here goes...

1: Under what circumstances would you begin recording user information or information on a specific user, or assisting law enforcement with an investigation (even if you do not log by default)? Purchasing or selling drugs? Child porn? What about just jailbait or legally but not morally child porn? Stalking/harassing? DMCA? etc. Or is this an amoral (amoral != immoral) service which protects everyone exactly equally?
We don't have "users," and we dislike strongly the concept of "users" in general. Rather, we have members - members are those who are entitled to access cryptostorm's network resources. Members are not identified, to us, by real-life identities but rather by their network access token(s). Further, access to the network is itself provided not by the token, but by the one-way-transformed SHA512 hash of said token - further breaking the link between network activity and any real-life identity of cryptostorm members.
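The one-way transform described above can be sketched in a few lines of shell. The token value here is a made-up example for illustration, not a real cryptostorm token:

```shell
# the auth layer never sees the raw token - only its SHA-512 hash
TOKEN="example-token-value"   # hypothetical token, for illustration only
HASH=$(printf '%s' "$TOKEN" | sha512sum | awk '{print $1}')
echo "$HASH"   # 128 hex characters; recovering the token from this is infeasible
```

The hash, not the token, is what the network needs in order to grant access - which is why the link between network activity and any real-life identity is severed at the structural level.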

Structurally, we have no way to correlate network activity with individual human beings - our model is designed, architected, coded, and deployed specifically to sever this link. This is intentional: by removing the possibility of such linkage, we remove any possible benefit to be gained by threatening us, or pressuring us to "turn over" information about "users." Not only do we not have such information: that information, in a formal sense, does not exist. Nobody has it.

As a provider of encrypted packet routing service, we are not in the business of censorship nor are we adjuncts to law enforcement, spy agencies, or other participants in data surveillance. We transit packets: all packets, so long as they're sent/received by an authenticated network member... which is to say, someone with a valid network access token. Expecting us to participate in selective censorship of online content and packet payload makes about as much sense as expecting the water company to stop people from throwing water balloons at each other: a profound categorical error.

Bad things do happen in the world, and online. We understand that. There are people paid good money to "police" such matters. In some cases, they actually do that - rather than persecuting unpopular minorities or attempting to enforce religious or social hegemony in general. Irrespective, that is not our business and it is not an activity in which our network serves any useful purpose. Indeed, we exist specifically to protect data in transit from surveillance... because surveillance of data in transit has transcended epidemic proportions to reach near universality. We are a check and balance on the metastatic rise of the surveillance oligopolies online. We are not an agent of selective censorship.

The question is not one of morality: a-, im-, or otherwise. It is one of structural configuration. Structurally, we do not stand as a chokepoint to packets in transit, nor as a filter on same. That role must be sought elsewhere.
2: What method of encryption (symmetric and asymmetric) is used by default for your openvpn service?
Per our published server-side cipher suite selections:

Code: Select all

auth SHA512
# data channel HMAC generation

cipher AES-256-CBC
# data channel cipher: AES-256 block cipher in CBC mode

We have no "default" cipher selections: there is only one suite of acceptable cipher algorithms, and connections to cryptostorm require support of this suite. To allow otherwise is to invite rollback attacks and successful MitM exploits - a lesson that seems largely lost on the "VPN service" industry as a whole.
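On the client side, a configuration honouring the same non-negotiable suite would look something like this - a minimal sketch using standard OpenVPN directives, not a copy of cryptostorm's actual shipped config (the tls-cipher value is one example DHE suite, chosen for illustration):

```text
auth SHA512                 # data channel HMAC, matching the server
cipher AES-256-CBC          # data channel cipher: AES-256 block cipher in CBC mode
tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA   # pin the control channel to a DHE suite
```

Pinning a single suite on both ends is what forecloses the rollback scenario: there is no weaker option for an attacker to negotiate down to.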
3: Do you use any other methods to protect your users from other threats, for example by using packet padding, delays, acting as a mixnet, PFS (forward secrecy)?
Functionally, the deployment of HMACs within the OpenVPN security model - in conjunction with properly seeded IVs - acts as a form of packet padding. Further, given the way underlying data is re-packetized into encapsulated, TLS-based OpenVPN session streams, there is functionally no correlation between the payloads of encrypted and decrypted packets in an active cryptostorm session. (The exception is a "session" consisting of nothing more than a series of near-empty-payload packets, whose encrypted and decrypted forms - apart from HMAC-based bloat - would correlate closely in information-theoretic terms; this is a trivial corner case and doesn't correspond to any use scenario of network members in the wild.)

Passive traffic analysis based on inbound/outbound flows - a scenario in which engineered transit delays might be useful - is not a viable attack model against exitnode clusters with a sizeable membership routing packets through them concurrently. This is easy to see: the streams of inbound and outbound packets, each HMAC-padded and parsed to MTU-compliant specifications, provide very little statistical surface for any known form of traffic analysis. It is also worth remembering that our cipher suites themselves introduce nontrivial delays in packet transit and re-enveloping. Coding a toolkit to extract usable traffic-analysis correlations under these circumstances would require extraordinary resources, expertise, testing scenarios, and frontline access to raw packet streams from within our datacentres. That is not an impossible concordance of capabilities, but it would be very rare in any real-world context.

Every exitnode cluster serves, in functional terms, as a realtime mixnet - definitionally.

So-called "PFS" is a property of cryptographic implementations, not of network topology itself. As can be seen from our selection of asymmetric cipher suites, we deploy discrete-log-based DHE asymmetric TLS session construction - hard-cycled every 20 minutes. We are not yet moving to ECDHE, both because of nonuniform availability of the underlying cryptographic primitives on some OS platforms (Red Hat's issues with ECDHE's curve families, and its concerns over intellectual property encumbrances relating thereto, being a prime example), and because of the absolute need to choose non-default curve families to avoid the "poisoned" curve structures now widely explored in the published literature. Once those two issues are resolved, we'll again consider the use of ECDHE within our framework - understanding that its primary benefit relative to DHE is computational efficiency rather than any proven security gain.
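In OpenVPN terms, the 20-minute hard cycle described above maps onto the standard key-renegotiation directive - a sketch of the relevant line, not an excerpt from cryptostorm's actual configs:

```text
# force a fresh DHE key exchange every 1200 seconds (20 minutes),
# bounding how much traffic any single session key ever protects
reneg-sec 1200
```

Because each renegotiation runs a new ephemeral Diffie-Hellman exchange, compromise of one session key exposes at most 20 minutes of traffic - which is the whole point of forward secrecy.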

Note that the "PFS" attempted by the OpenVPN framework outside of TLS is of purely cosmetic value: any poor selection of TLS cipher suites would expose session cycling at the OpenVPN level to an attacker who, via TLS compromise, already has full access to plaintext packet data. It's the equivalent of trying to hide a box by putting the box inside itself - not very useful, in practical terms.

Either TLS - and thus the OpenVPN control channel itself - is secured via standard cipher deployment selection, or any "PFS" attempted by the application layer is ineffective.
4: If it does not infringe upon user confidentiality, what are the specs of your VPN in bandwidth, and number of IPs in use?
Not sure how to parse this question. Currently, across all our exitnode clusters, we are transiting several terabits of data per day (not double- or even quad-counted, as is common in the "industry"). That number is rising in near-monotonic fashion.

We do not spawn oceans of physical IPs in a petty attempt to appear "big." Rather, our use of IP space is parsimonious, congruent with the basic principles of sound network design.
5: What country is your HQ`s jurisdiction and in what countries (jurisdictions) are your servers placed?
In conventional terms, we have no "headquarters" jurisdiction, as our project is decentralised by design and in practice. We do not have a "customer database" or any other central operational node; every exitnode cluster is quasi-autonomous and can operate fully independently of all other clusters. Peering and session pruning is done via shard-based "quorum" voting.

Current exitnode clusters exist in Canada, Germany, and Iceland. Additional clusters are in various stages of provisioning in: the U.K. (Isle of Man), Russia, U.S., and sovereign First Nation territory within North America.
6: What are your own moral views on the use of these kinds of anonymizing tools and why do you support it?
We oppose fascism.

Fascism, historically, has arisen only in social contexts in which privacy was stripped from citizens, police powers were expanded asymptotically, and minority views were persecuted and silenced. Censorship, too, is a fundamental component of the fascist model.

Thus, to prevent the further spread of fascist structures, opposition to these essential building blocks is required. That is to say that active support of free expression, minority perspectives, and citizen access to genuine privacy is functionally equivalent to active rejection of fascism in all its forms.

Those with power - the 1%, the 0.1%, the hegemonies both social and financial - already have unhindered access to privacy in communications, assembly, and data storage. It is those without hegemonic power - those on the fringes of social dominance - who lack access to reliable, robust tools to ensure privacy when desired. We provide those tools, as part of levelling the social playing field... a field that has tipped in favour of the powerful in recent decades, to a degree unmatched in the history of our species on this living planet we share with all other species.

We tip the scales back to something resembling a balance. And, in moral terms, we do so proudly.
7: Finally, is there anything else about your VPN service that is important to know or that should be mentioned?
Our team was the first to deploy full-strength OpenVPN-based network security as a consumer service, in 2008 - at a time when PPTP-based "VPN services" were aggressively marketing themselves as secure and reliable. We were the first to choose 2048-bit RSA keys for certificate authentication, again in 2008, congruent with recommendations from cryptographic professionals and experienced practitioners.

We were the first to declare publicly a "no logging" policy (2008), for which we were aggressively and broadly criticized - "common knowledge" being that it was "illegal" not to log customer behaviour (a false assertion, then as now). We were the first to embrace peer-to-peer filesharing as a legitimate and culturally essential component of a network security service, in 2008, and we were the first company in any category (as far as we know) to publish a "privacy seppuku" pledge, also in 2008: declaring that we would shut our company down rather than allow it to become complicit, in any way, in a betrayal of our customers. This we did, in shutting down the old Cryptocloud network earlier this year.

We have been known, since our founding, as a "no compromise" provider of secure networking service. We don't betray our customers, we don't censor packets in transit, and we don't cut corners when it comes to doing our best in protecting data-in-transit from any form of surveillance, active or passive.

Over the years, we have seen our words - if not always our operational capabilities - widely copied and used as marketing fodder for me-too "VPN services" looking to make a quick buck. Insofar as we've been able to tilt the field towards a definition of network security that is more robust (and less pitiful), we're quite proud of the role we play in anchoring the terms of discussion. Insofar as shady fraudsters have simply copy/pasted our text onto shoddy "VPN services," we're discouraged that people are given a false sense of security when, in fact, the money they spend with these scam services buys them nothing in the way of real protection online.

In the end, we are proud to carry the role of innovators and leaders in this market - but also saddened that, by and large, the market has come to consist almost exclusively of services that (in Bruce Schneier's term) offer little more than "security theatre" - no real protection, but merely the appearance of same.

It is our hope, with cryptostorm, that we can once again reset the assumptions as to what is possible when it comes to providing genuine, robust, verifiable security against a full suite of known threat vectors online.

  • ~ cryptostorm_team
by cryptostorm_team
Mon Dec 09, 2013 6:59 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: HOWTO: choosing exitnode clusters
Replies: 7
Views: 12469

HOWTO: choosing exitnode clusters

{direct link:}

The cryptostorm network framework routes secure member traffic in and out of the darknet via exitnode clusters. These are groups of physical (dedicated) servers in a given geographic location, which work together to handle traffic load for sessions designated to that particular cluster.

For example, our initial exitnode cluster, in Montréal, is known by its DNS "A record" hostname. Network sessions directed at this cluster will be dynamically striped across machine resources in that particular geographic node, automatically. It's also (generally) possible to initiate sessions via direct IP handshake, but we discourage this, as IP assignments can and do vary over time while we upgrade, migrate, and reassign physical hardware at a particular cluster's geographic location. By using the subhost mapping, members avoid the hassle of keeping track of direct IP allocations.

  • note: we will not be making definitive public posts regarding specific IP/machine allocations, as these change frequently enough to make such an effort both time-consuming and supremely frustrating for all but the most network-curious members. Given that, decisions to direct-connect to individual IP addresses may well result in unexpected drops of network connectivity during IP reallocations - drops that don't come back online immediately... if ever. Hard-coding IP addresses into configuration settings is, on rare occasions, required (as with some router-based setups that can't do dynamic DNS resolution of remote-connect parameters), but it isn't given full published support by cryptostorm's tech admin team. If you want or need to do direct IP connects, we'll certainly never actively block them - but we also can't promise they'll be durable over time. Those who need this functionality to enable custom connection scenarios are encouraged to post their experiences & questions here in the forum, and our tech team will do their best to provide information to help smooth the process, on an 'unofficial' basis.

There are additional layers of loadbalancing logic, in early deployment, that dynamically select entire geographic clusters based on weighted network performance metrics, in near-realtime fashion. For example, will always do a "best fit" calculation to select both the geographic cluster and the resources within that cluster that are most effective for a given session. The logic behind these decisions is still being tested and deployed, so at present these mappings will often be a bit simplistic... but they'll always work, and always point sessions at a cluster/machine with available session capacity.

For folks using "raw" OpenVPN clients to make cryptostorm network connections, these cluster mappings are the key to selecting which exitnode is used for a given network session. In contrast, for those using the network connection widget, version 1.0 includes menu-driven exitnode selection that automates the process and doesn't require manual changes to the config file. These menus also include options such as best-performance, lowest-pingtime, and so on. We will always support both connection methods - raw and widget-based - so that members can decide which is best for them.

A common question we get is: "why don't you make it so exitnode preferences can be saved in my account with cryptostorm, so when I connect the old settings are used?" Simply put, we don't do this because there are no "accounts" with cryptostorm - there are only tokens. The distributed, decentralised nature of our token auth framework means each exitnode cluster is functionally autonomous from all the others - there's no method to make them all "remember" something like preferred exitnode. We have removed that central layer from the network topology, entirely, for security improvements & hardening against LEO-driven efforts to compel network data disclosure: this is not a bug, it's a feature. The cryptostorm network remembers nothing about sessions, preferences, or identity of members - it only knows about tokens (actually, hashed tokens - an important distinction!), their validity, and their expiry date.

Changing the designated exitnode cluster, from within the client configuration file, involves simply replacing the "remote" parameter (near the top of the configuration file settings) with the new hostname (no need to change the port mapping - 443 - which will always be the same across clusters and machines). For example, the line for connecting to the new German cluster looks like this:

Code: Select all

remote 443
...whereas the default setting looks like this:

Code: Select all

remote 443
As new clusters come online, this naming logic will remain unified. These subhosts will respond to ping queries, as well, for those who wish to test network latency metrics prior to selecting exitnode clusters for a specific network session.
  • (those with a bit more experience in such matters may "comment out" unused cluster mappings in their config file, client-side, by using the "#" sign to disable a given text line, as per standard coding conventions, and thus "activate" their chosen exitnode setting by "uncommenting" that specific line from their file.)
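For instance, a config carrying several cluster lines might look like this - the hostnames here are placeholders for illustration, not actual cryptostorm subhost names:

```text
# exactly one "remote" line active; "#" disables the others
#remote cluster-frankfurt.example.net 443
remote cluster-montreal.example.net 443
#remote cluster-reykjavik.example.net 443
```

Switching clusters is then a matter of moving the "#" from one line to another - no other changes needed, since port 443 is uniform across all clusters and machines.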
Even though we expect most folks will make use of the widget - and its menu-based exitnode selection ability - we know some folks prefer raw connects, and we'll always ensure there's sufficient data in this thread and elsewhere to replicate all widget functionality during raw connections.

As new nodes come online, we'll also update this thread with a comprehensive list of current & active clusters, for those who wish to have a realtime, canonical reference for same.

Best regards,

  • ~ cryptostorm_team
by cryptostorm_team
Fri Nov 29, 2013 3:14 pm
Forum: member support & tech assistance
Topic: Specific website access issues? Report 'em here!
Replies: 7
Views: 9647

Specific website access issues? Report 'em here!

We've had a beta tester report issues accessing the canonical domain from within cryptostorm this morning, and we'd like to ask whether others are seeing any issues with this. Our in-house testing hasn't been able to reproduce the issue thus far. This reported bug has also been sent out via our main twitter feed.

For reference, we show the domain resolving to IP:

If folks do find they can reproduce the problem, it would be really helpful to share mtr/tracert data on the attempted session initiation here, as well as any details on DNS resolution failures, if those occur.


  • ~ cryptostorm_team
by cryptostorm_team
Tue Nov 19, 2013 12:43 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's network access widget, rev. 0.9 public beta
Replies: 21
Views: 62688

cryptostorm's network access widget, rev. 0.9 public beta

{direct link:}


The cryptostorm network access widget has been released for full-scale public beta use. A zipped file containing the self-installing Windows executable is found here:
(12.58 MiB) Downloaded 1294 times
Here's some basic information about the widget; once we clean up the comments, we'll be posting the Perl code itself, as a textblob, in this thread below.

This is a beta version (0.9) of the widget; there's extensive additional functionality currently being tested in-house that, together, comprises the 1.0 "full" release. Prior to that, however, we'll be compiling this 0.9 version for additional platforms so that it's available to a wider segment of cryptostorm members.

The beta version of the widget results from months of in-house testing, as well as more than a month of intensive public alpha testing by many cryptostorm network members. The alpha testing thread documents this process in some detail, for those curious.

Here's some additional details on the beta widget; we'll be fleshing this out throughout the week...

what's it written in?

Perl. The GUI is done with Tkx, a Perl interface to Tk; Tk is a cross-platform widget toolkit for building GUIs in many languages.

what compiler did you use to build the production binaries?

We used the Cava Packager. It works similarly to perl2exe and PAR's pp, in the sense that it bundles a Perl interpreter along with the script so that it can run on a system that doesn't have Perl installed. That's all transparent to the user; anyone curious can confirm this fairly easily via source code review.

what installer-builder app has been used to create the Windows self-installer?

NSIS (Nullsoft Scriptable Install System)
by cryptostorm_team
Wed Nov 13, 2013 11:20 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Official launch press release: community input requested
Replies: 9
Views: 12798

Re: Official launch press release: community input requested

i0n wrote:How exactly is "the technological excellence of the Tor project" used in the cryptostorm network?
Primarily in three specific areas:
  • 1. Emphasis on selection of cipher suites representing best-version current knowledge of underlying cryptographic primitives. Tor has always prided themselves on choosing good ciphers, rather than just defaulting to whatever is auto-negotiated by underlying libraries. It's true they've sometimes had to compromise when the performance impact of a given suite would slow the whole network... but they've always done so with a deep understanding of the trade-offs involved. Needless to say, the "VPN industry" lacks this dedication to cipher selection and in fact quite often lacks any viable understanding of crypto mechanics whatsoever.

    2. Build process. Tor has been at the forefront of the deterministic build paradigm, taking some nice theoretical ideas and actually seeing how they play out in the context of a big, complex, diverse, global project. Reproducible builds are still something of a fantastical chimera for all but the most basic of packages - but understanding the crucial import of build cookbooks that, if followed, should indeed produce identical binaries has been a focus of Tor's team for years. It's a part of our work from the beginning, in no small measure because of what we've learned from closely watching the results of their work so far.

    3. Tor-as-a-service. Although that's not a term we've ever heard the Tor folks use, it's a fundamental concept for the Tor project: as a transport protocol, Tor can be plugged into the backend of an essentially unlimited class of applications - from entire OS images (Whonix) to browsers, email clients, wifi routers, and so on. This may seem obvious, in hindsight, as the "right" way to do things - but the "VPN industry" has been shambling off in the exact opposite direction for years: locking down network access via proprietary, closed-source "clients" that are heavily branded and prevent easy integration of the network. To wit: nobody has to ask permission to "white-label" Tor; it's Tor - use it. Likewise with cryptostorm: it's happy being a behind-the-scenes privacy-as-a-service put to use within other applications. These divergent approaches have a deep impact on the technological decisions that must be made to build such networks.
Additionally, Roger's (and others') work on pluggable transports has served as intense motivation for our own roadmapped deployment of this class of protocol-as-obfuscation toolsets. We don't have them deployed in the 0.9 version of the network, but we've already begun some in-house testing with the available tools and it's a very high priority for future integration. Of all the new developments in privacy-related network technology out there, protocol-as-obfuscation is the most radical, and most powerful. A big chunk of that work traces directly back to Roger's pluggable transports, and like all Tor-related subjects they've done an admirable job of documenting their work so that others can benefit from their wisdom gained.

Finally, of course, the essential fact that Tor takes the whole process seriously, at a technical level, is one of our inspirations. Look at the shit-storm this summer as it became clear the NSA was throwing military cyber-munitions at Tor Hidden Services - and, in the end, likely had to get into their targets via some avenue other than directly breaking Tor. That's astonishing. Tor is a tiny project - a dozen folks, tops, including everyone not just those writing code regularly. The NSA has ten billion dollars a year to spend. And yet Tor has always felt they should have a standard of doing their best to protect against all conceivable attack vectors - whether that's a stalking ex-wife or the world's biggest military machine. Succeed or fail short-term, Tor has never figured "good enough" would do the job. They've evolved, thrown out old approaches, developed entirely new tech, made mistakes and learned from them... all with the goal of providing real, tech-driven, reliable privacy.

That's a big deal.

Compare that to the "VPN industry," composed of hundreds of me-too setups running default OpenVPN or PPTP installs on cheap rented VPS "servers" - the contrast is hard to ignore. Tor welcomes - invites - critique and aggressive academic research on possible vulnerabilities in their model and implementation; "VPN industry" participants attack anyone who dares point out that they are doing things flat-out wrong. As a result, Tor benefits from enormous community participation - and Tor users benefit from that improvement in their tech.

As a summary, this is a decent start, but there are quite likely dozens of additional, smaller ways that individual team members have picked up habits, or tools, or procedures, or attitudes from the Tor project's work. They are canonical.

  • ~ cryptostorm_team
by cryptostorm_team
Sat Nov 09, 2013 6:28 pm
Forum: member support & tech assistance
Topic: buying access token with creditcard (paypal) -expiration??
Replies: 3
Views: 5381

Re: buying access token with creditcard (paypal) -expiration

We're working on a little web app that allows folks to paste in the hash of their token and see what its status is: whether it's been "activated" through first-time use, and when its expiry date is. This has a number of uses, and is a fairly high priority to get deployed so folks can track expirations and ensure they've got a new token in advance, if needed.

Building this function into the client widget - which will then "know" the token status natively and report it to the person connecting to cryptostorm, as requested - is already on the roadmap in the 1.0 production block. Which is to say: not in the 0.9 widget release, but likely part of the next major update that includes exitnode selection from a punchlist, as well.

In the meantime, providing a web interface is a good interim step and useful for folks who choose not to use the widget also. We do need to proof it against all the usual attack vectors: DDoS, malformed input syntax, and so on. That ends up being more design work than the function itself, which is fairly routine nowadays with all the useful design tools available for this sort of functionality.

  • ~ cryptostorm_team
by cryptostorm_team
Fri Nov 08, 2013 7:20 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Official launch press release: community input requested
Replies: 9
Views: 12798

Official launch press release: community input requested

This is a bit unconventional, perhaps, but we've made a decision as a team to post here a pre-release version of our "official" launch press release. Those who follow our twitter account may have noticed that, during this week, we've discussed several earlier versions of this press release... versions which, in the end, we didn't think captured the essence of what cryptostorm is about. Back to the drawing board.

This is version 2.3(b), by our internal version-control metrics. It still needs some work, we feel, and though we'd intended to circulate it earlier this morning, we've chosen to hold off.

Instead, we've decided to make the pre-release version available to folks here in the forum for critique, discussion, and - we hope - suggested improvements. Open ended, no preconceptions...

Here you go:

cryptostorm - Crypto Wars, version 2.0
{revision 2.3(b), rolling edit}

Today, we announce the launch of the cryptostorm darknet - our contribution to the battle for a free, diverse, healthy online ecosystem.

As a distributed team of privacy systems architects and developers, we know that technology provides the tools to overcome the threat of rogue military surveillance online. Just as math shows us that strong crypto works, systems theory shows us that we can create distributed, resilient, decentralised systems to bring the protections of strong crypto to everyone - not just the technological (and political) elite.

By vastly increasing the cost, complexity, and bespoke attention required for mass surveillance, such systems make PRISM-style monstrosities into expensive, unwieldy bureaucratic boondoggles. We can't protect every packet against every attack vector, but we can decisively tip the scales against functional dragnet surveillance: make it cost-prohibitive and unreliable.

- - -

During the first crypto wars of the 1990s, attempts were made to prevent the widespread availability of strong cryptographic algorithms. Those attempts failed, thanks to Zimmermann and others. Today, access to proven, well-studied cryptographic primitives is largely unrestricted.

This is not enough.

A lesson learned since then is that cryptographic primitives cannot alone provide broad protection against the threat of digital surveillance. Rather, systems are required: systems that build on the core foundation of strong crypto in order to provide real-world, easy-to-use protection.

We now know that military agencies such as the NSA & GCHQ have gone rogue, turning their massive budgets and secrecy-shrouded operations on the world's civilian population. Efforts to regain control of these Orwellian machines via the political process have resoundingly failed; nobody is accountable, no limits are respected.

The one method to prevent this spiralling police state nightmare that has proved effective is strong crypto, implemented in well-designed, well-managed, people-friendly systems. Efforts to prevent exactly this kind of system from being broadly deployed are today's Crypto Wars, version 2.0.

- - -

To build cryptostorm, we have merged the technological excellence of the Tor project with the economic leverage of bitcoin's identity-agnostic, blockchain-based framework. The power of these two successful models combines in cryptostorm. A privacy service without central failure points, cryptostorm has eliminated the "customer database" from the old "VPN service" model. Instead, cryptostorm leverages a novel token-based authentication model to ensure financial resources sufficient to provision ample bandwidth, customer service, and development support for all network members: no longer must darknet privacy be slow, limited-use, limited-access, and complex. The trade-off between security and ease of use is a false dichotomy. We choose both.

- - -

Our team was the first to publicly embrace a "privacy seppuku pledge," in 2007: rather than become tools for customer betrayal, we vowed to shut down the company first. We have followed through on that vow, discontinuing the former Cryptocloud VPN service. In its place, we have created cryptostorm. We follow in the footsteps of Lavabit, CryptoSeal, Silent Circle, and others in doing so. We show that the power of privacy seppuku lies not merely in the refusal to betray, but in the door it opens to entirely new ways of doing things better.

The cryptostorm network has no central failure point, and it has no customer database. Instead, access to the network is regulated by tokens - not usernames and passwords. Tokens are available via a range of independent resellers, and are formally decoupled from network operations & member activity online. Our model eliminates even the possibility of insiders connecting individual members to network activity, since the network knows nothing of individual people. It only knows tokens.
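To make the idea concrete, here is a minimal sketch of how a token-based access model can decouple identity from network use. All names and details here are illustrative assumptions, not cryptostorm's actual implementation: a reseller issues an opaque random token, and the network stores only a hash of it, so no name, email, or payment record ever exists on the network side.

```python
import hashlib
import secrets

def issue_token() -> str:
    """Reseller side: generate an opaque random access token."""
    return secrets.token_hex(16)

def enroll(token: str, valid_hashes: set) -> None:
    """Network side: record only the SHA-256 hash of the token."""
    valid_hashes.add(hashlib.sha256(token.encode()).hexdigest())

def authenticate(token: str, valid_hashes: set) -> bool:
    """Network side: accept a connection if the token's hash is known.
    The network never learns who purchased the token."""
    return hashlib.sha256(token.encode()).hexdigest() in valid_hashes

# Usage sketch
valid_hashes: set = set()
t = issue_token()       # happens at the reseller
enroll(t, valid_hashes) # only the hash crosses to the network
print(authenticate(t, valid_hashes))              # True
print(authenticate("bogus-token", valid_hashes))  # False
```

The design point is that the lookup table holds no identities at all: even a full compromise of the network side yields only token hashes, with no link back to the people who hold them.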

This model is not perfect - no deployed system ever is - but it is a substantial improvement over the old "VPN service" model we helped pioneer with Cryptocloud in 2007. To ensure ongoing improvements in our security model and OpSec procedures, all elements of the network & our implementation procedures are available for peer review. We actively welcome critique from experts in academic cryptography, cryptographic engineering, network administration, and systems topology. Our goal is simple: providing truly secure privacy service to everyone using the internet.

- - -

Those with political power want us to believe that we need to ask permission before we can design systems that protect us from the psychotic gaze of rogue military surveillance online. To this, our reply is simple: bullshit. We need not seek state approval in order to design secure systems. We reject that pinched narrative, and embrace the organic potential of genuinely creative systems design.

We speak with our actions, and today we announce cryptostorm's full public availability. In doing so, we look forward to working with other activists, technologists, researchers, visionaries, and hard-nosed pragmatists who share our simple goal: a future free from ubiquitous online surveillance, a future where people individually choose when and where to share their private information.

We are past signing petitions and begging for "change." We invest our passion and expertise in the propaganda of action, in the act of collaborative creation. It is not necessary to accept a role as victims of mass state surveillance. Rather than being helpless victims, we choose the path of empowerment.

Collectively, we have the tools to make the surveillance nightmare just that: just a bad dream, one from which we gratefully wake.

It's time to wake up.
by cryptostorm_team
Sun Nov 03, 2013 2:11 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: [ARCHIVE] HOWTO: Mac/OSX connects via Tunnelblick
Replies: 19
Views: 28776

confirmed successful Tunnelblick connects?

We're hearing unofficial reports that beta testers are successfully using Tunnelblick for cryptostorm network sessions - but haven't been able to convince any of them to post their "howto" here for the simple fact that they claim it's a non-issue: it just works.

But, given the earlier issues as noted in this thread and some feedback we've received via other comms channels that Tunnelblick has been difficult during the beta test, we're not going to confirm things are working until we get at least a couple of beta testers to let us know directly that the process is successful.

Anyone out there: do we have confirmed Tunnelblick-mediated cryptostorm connections? If so, did it require any particular tricks to get things settled, or is it all fairly routine?

It may be that the network-side updates Graze made, along with several beta testers, during the Android client setup also resolved a similar Tunnelblick issue... but that's purely a hypothesis at this point.

Thanks in advance for any reports you can provide!
  • ~ cryptostorm_team
by cryptostorm_team
Sun Nov 03, 2013 2:01 pm
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: cryptostorm running DNS resolvers in-house? Discussion...
Replies: 12
Views: 34036

best practices?

Guest wrote:in the mean time you could use OpenNIC servers if needed (anon/no log or otherwise)...
Great minds think alike; the first entry in our current pushed DNS resolver settings is...

Code:

push "dhcp-option DNS"
What we'd like to ask of everyone reading this thread is to think (and comment) on this question:

  • ...with a completely blank slate, what is the best-practices approach we can take in the future when it comes to in-house DNS resolution? What is the wishlist for the best way to do this, if there were no constraints on our approach?

Of course, it's not a technical challenge to provide baseline DNS resolution service in-house and do so competently - we've done that before, and we're happy to do it again. But... if we're going to do it, is there a qualitative jump in DNS resolver service that we can implement in doing so? Theoretical discussions that have taken place, but been deemed "impractical" for one reason or another?

Let's cast a very wide net, in terms of possible capabilities, and see if this is an opportunity to genuinely step things up a notch. Rather than simply doing a good job of doing what others already do (which is a starting point), can we use this as a catalyst for doing something substantively better?

It was this sort of discussion, in relation to privacy network authentication systems, that eventually led to the development of our token-based auth system; had we just assumed the way forward was to do a good job of doing what "everyone else" already does, we'd have missed the opportunity to approach the issue as one with unbounded options to improve.

Looking forward to what folks might have to suggest and explore...

  • ~ cryptostorm_team