Search found 612 matches

by Pattern_Juggled
Thu Mar 31, 2016 4:08 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: Linux...entry points/nodes/whatevers
Replies: 3
Views: 24000

Re: Linux...entry points/nodes/whatevers

Oliver wrote:Loving the voodoo concept, but very slow for me on account of both options being on the wrong side of the pond (I presume). I also presume there'll be some this side of the pond soon, but I just wanted to register my enthusiastic pester. MOAR!!! :D
More voodoo paths are in the process of being provisioned already. Our first voodoo-dedicated 'supernode,' in Romania, has more than 60 IPs assigned to it for use with voodoo. Each IP is a different voodoo pathway, so we'll have some room to expand.

We're looking for a procedure for announcing new voodoo paths; that's one of the tactical issues yet to be resolved fully. Any suggestions are most welcome.

Cheers.
by Pattern_Juggled
Wed Mar 30, 2016 9:18 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: BeyondVPN: voodoo, multi-layered security - throughout cryptostorm
Replies: 15
Views: 58966

Re: voodoo.network in... not so many words, please :-)

hashtable wrote:I completely agree, and I hadn't heard of 'GRE tunnels' before reading the 'stream of consciousness' README on voodoo's github. It's fundamentally simple - without the bs every single other VPN provider 'claims' to provide. You (or whoever wrote it) did so transparently - open source - so cryptostorm has my trust and respect.
I actually did not author that particular piece of work; trust that, had I done so, it'd be orders-of-magnitude longer, with oceans more words... and not in the least bit helpful in understanding how voodoo works. That's a df tech-outline at its finest.
Then I read the blog posts and slowly put together how this came to be - :wtf: :clap:
That is (mostly) my work: note the (aforementioned) oceans of words... always a dead giveaway. :mrgreen:

Voodoo is actually a lot less "complex" than our descriptions thus far make it sound. It's not that we're being intentionally opaque; rather, I suspect, all of us who have worked on the guts of voodoo on and off since last fall are too close to the core to have perspective on the bigger picture. So we're tangled in the details - because we had to actually make it work - whereas the broad-stroke explanation need not be tangled in that level of precision.
CemkKUwW4AA_3U5.jpg
My hope is that someone will come along and explain what voodoo is in an elegant and memorable paragraph or two. In fact, as we've been asked on twitter to do a nontechnical "this is what voodoo is" post, my hope that some savior will appear and solve that problem grows riper by the day!

Help me, Obi-wan... you're my only hope...
voodoo_tokensB.jpg
by Pattern_Juggled
Sun Mar 27, 2016 12:13 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: BeyondVPN: voodoo, multi-layered security - throughout cryptostorm
Replies: 15
Views: 58966

Re: speaking of "multi-hop"... did someone say "multi-hop"..? :-P

hashtable wrote:The voodoo network is unique / insane ? I can't explain it verbally, but something below the threshold of my consciousness understands the topology of the network.
My sense is that, thus far, we've done a suboptimal job of explaining what voodoo really is. Not for lack of trying, mind you... I suspect instead that we're too close to the operational side of it, within the team, to step back and put it into words in a way that's really useful.

At a topological level, it's not hideously complex - for me, visualising what's going on is obvious... in hindsight! :-P

One of the challenges is that the usual "multi-hop" snake-oil diagrams out there make it all look simple and easy to do. Of course, the only reason that's the case is that they're not actually implementing anything: it's just a diagram! For example...
connection-map.png
Ok sure, looks good. But when I try to figure out what's going on, at a routing and at a cryptographic level, with this kind of thing... it just isn't at all clear to me how it's actually working, and what runs where. This is typical. The technical explanation accompanying this diagram...
When connecting to a normal singlehop VPN service your Internet traffic is routed through a single VPN server. With a multihop VPN service it is routed through 2 or more VPN servers in different jurisdictions. This technology has been carefully incorporated into the IVPN network using the same 256 bit OpenVPN encryption as the singlehop VPN servers.
...srsly? Well, I feel better knowing that "this technology" (whatever that means) has been "carefully incorporated" into the network in question - hate to think what a sloppy, careless, devil-may-care attitude might bring into the equation on a project like this. :roll:

And, despite entire threads on the subject in all sorts of enthusiasm-laden security discussion boards (here's one), it's rare to get a technically cogent explanation of what these are supposed to be doing, in detail, at a systems-admin or network-design level. No, it's not your imagination - the explanations really are incoherent, basically to the point of gibberish. One reads a lot of "I'm using sixteen hops and three loops and a double-twist of SSH tunnelling as a cherry on top" kinds of fanciful, boastful statements - and surely some of those folks actually know how to make such monstrosities pass packets, perhaps... but mostly I question whether any of that stuff has gone beyond the, shall we say, highly theoretical stage and entered production.

One more example, and I promise I'll stop (for now): this suspiciously vague multi-hop marketing page describes itself this way:
Quad VPN works on the basis of daemon that simultaneously connects a separate VPN server with all other VPN servers. Each VPN server can simultaneously be a backend server, a frontend server or a transit server. Thus, there's formed a global server network with no chances to track the traffic route.
I'm not saying this is impossible - each server acting in any of the various roles in a four-tier network topology - but... I've no idea what administrative tools would be used to manage such an exponentially complex, emergent network topology on the fly. And I have no idea what a "daemon that simultaneously connects a separate VPN server with all other VPN servers" means, in context: an openvpn daemon? :wtf:

Anyhow, over the years I've invested a good chunk of time in studying how these multi-hop things work, and after much frustration and the conclusion that (not surprisingly) I was probably just not smart enough to make sense of them, I came to the slow realisation that most of the ways they are described make absolutely no sense whatsoever - not even at an overview/summary level. No wonder they seemed like gibberish to me; they are.

It's a subject I could keep pouring time into forever, basically - because it's fascinating, and I am sure that mixed in with the gibberish and puffery there are some actually interesting tidbits of brilliance out there, somewhere - but it isn't relevant to cryptostorm, or voodoo, or actually offering a functional service to our full membership that actually does what it says.

In fact, doing multi-layered, tiered network topologies - as any CNE or similar can say she learned early in her training - requires clear thinking, good design skills, and a fundamental awareness of what one is looking to build. These aren't the places to "make it up as you go," and debugging them when they don't work is all but impossible if that clarity of mind is absent.

(as our voodoo alpha testers already know, and as others will be discovering in their own voodoo adventures shortly, traceroute results whilst running across voodoo trajectories are... entertainingly surprising, even to us. Again, it's the OSI layer thing: one needs to be really precise about what traceroute - or any other network-path-discovery tool - is actually doing within the context of a voodoo route, in order to make sense of the results that come from doing these tests in the voodoo context. Good times!)

To me, what's a bit of a challenge is keeping the (of course, falsely-reified) OSI layers straight in my mind as I work out what's happening where. Of course, we can skip over that stuff in explaining voodoo... but, in doing so, there's a fundamental loss of coherence as to how the network has been restructured.

Anyhow, I suspect someone will come along and in an elegant paragraph, nail an explanation that is both accurate and compact... but that someone, for obvious reasons, is unlikely to be yours truly. ;-)
It worked during testing - it was slow - but I expected that.
We made no effort whatsoever to perf-tune voodoo during the alpha test. Despite that, I've absolute confidence - more than a hunch, but still not based on empirical results, just to be clear - that we'll see no performance hit from two-tiered voodoo (apart from longer pingtimes through the segments as a whole, for physically longer voodoo trajectories). If anything, and given the time and opportunity to do some intensive testing in a production context, I predict we'll see better overall throughput metrics once we learn how to really make the voodoo segments sing.
The test only used one exit node - but the whitepaper talks about creating vps endpoints on the fly? Does that mean the ip address resolved will change in flux as needed, even though the same cores are being used?
Sorta.

The IP address visible to the outside internet is that of the exitnode, not the supernode. So, yes, whenever an exitnode is changed, the IP address of the underlying session will change. And, yes, in full deployment we'll have supernodes with a constellation of exitnode options spoking out from them, to be chosen by members as they prefer. The single-exitnode nature of the alpha-test voodoo trajectories was simply to keep the test in a small enough box for us to be able to get good empirical results out of the experiment.

The fun really starts, by the way, when we put jumpnodes ("entry nodes" is another label for them, but doesn't seem to be sticking as compared to jumpnodes) into the mix and have a three-tier voodoo topology. That enables, for example, a local jumpnode within the PRC, and we're able to GRE tunnel traffic out through the Great Firewall as we choose - which may well be very useful when dealing with GF-based blocks of anti-censorship tools used by those in mainland China.

And so forth.
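For the curious, the plumbing underneath a single GRE segment is standard linux kit. A minimal sketch of one leg, using made-up documentation addresses (198.51.100.5 for a jumpnode, 203.0.113.10 for a supernode - not real cryptostorm IPs), might look like this:

```shell
# on the jumpnode (198.51.100.5): build a GRE tunnel toward the
# supernode and give it a private point-to-point address (needs root)
ip tunnel add vd0 mode gre local 198.51.100.5 remote 203.0.113.10 ttl 64
ip addr add 10.99.0.1/30 dev vd0
ip link set vd0 up

# the supernode runs the mirror image (local/remote swapped, and
# 10.99.0.2/30 on its end); after that, anything routed into vd0
# rides GRE between the two tiers:
ip route add 192.0.2.0/24 dev vd0
```

A sketch only: bare GRE is neither encrypted nor authenticated, so in a voodoo-style topology the encrypted session rides inside (or alongside) it - which is also part of why traceroute across such segments looks so strange.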

Cheers.
by Pattern_Juggled
Sun Mar 27, 2016 1:32 am
Forum: general chat, suggestions, industry news
Topic: feedback request: jitsi, and Ostel.co
Replies: 4
Views: 25867

Re: feedback request: jitsi, and Ostel.co

In case you'd not seen already, there's some feedback on your questions coming in via a pointer to this thread from twitter, which someone took the liberty of creating this morning:
Screenshot (68).png
Cheers :-)
by Pattern_Juggled
Sun Mar 20, 2016 7:57 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: #cryptostorm IRC cert needs an update
Replies: 5
Views: 28969

Re: #cryptostorm IRC cert needs an update

Khariz wrote:I now know twice as much as I used to know about certs and realize that I know nothing about them at all.
I've been messing with x.509 certs as something more than merely a sideline - as more of an admittedly unhealthy obsession - for a few years now... and your statement ("I now know twice as much as I used to know about certs and realize that I know nothing about them at all") actually works just as well for me as it does for you.

Much wisdom contained therein, there is :-P

Cheers.
by Pattern_Juggled
Sun Mar 20, 2016 7:53 am
Forum: general chat, suggestions, industry news
Topic: Twitter Feed
Replies: 4
Views: 25996

Re: Twitter Feed

SCREEN NAME! wrote:Consider me learned good. :lol:
+1 good sport, & appreciate the chatter... it's been fun! :D

Cheers.
by Pattern_Juggled
Sat Mar 19, 2016 3:23 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: #cryptostorm IRC cert needs an update
Replies: 5
Views: 28969

Re: Keychain All The Certz

Also: I still want to KeyChain-cert this.

Badly.

It Shall Be Done. (but prolly not today, alas)

Cheers!
by Pattern_Juggled
Sat Mar 19, 2016 2:24 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: #cryptostorm IRC cert needs an update
Replies: 5
Views: 28969

Re: wildcards, x.509, and the death of cool (or whatever)

Khariz wrote:AirVPN kindly pointed out that the cert at: https://resellers.cryptostorm.ch is expired/broken as well.
Without calling into question the profound - one might even go so far as to say, moving - kindness to be found in such an unstintingly selfless gesture, it does kind of leave one - even a kind one, such as I - with a nagging sense that possibly... just possibly, mind you, such a statement kind of hints at a lack of understanding of how certs actually work.

Kinda. :-P

You see, the cryptostorm.ch cert throws those Scary Browser Warnings for subdomains of the underlying domain (cryptostorm.ch), such as resellers.cryptostorm.ch. Is that because subdomains are inherently a source of profound cryptographic instability, and thus should be flagged nine ways 'til Sunday in hopes that any prospective visitors will, with utmost alacrity, run away - run away fast?

Err, actually no.

The only reason that specific subdomain - indeed, any of the subdomains some idiot has (against all sound advice) littered throughout this forum (that idiot is I) - throws that warning is because we're too cheap to buy the much-more-expensive "wildcard" cert from whatever CA is peddling cheap crypto credentials on the nearest virtual street corner of late. Wildcard certs cover {whateversubdomainyouwanttouse}.yourdomain.{coolTLD}, so resellers.cryptostorm.ch would show up as perfectly safe with a wildcard cert.

With our bare-assed, cheapo cert we can only use cryptostorm.ch (and any subdirectories we want, which have nothing to do with DNS or cert stuff whatsoever; see below for relevance). And usually one can sneak some protocol-ish prefixes into a non-wildcard cert, such as www.cryptostorm.ch, by listing them in the "Subject Alternative Name" section of the certificate (which is part of the wild, wild west that is the "extensions" part of the x.509 standard, and not to put too fine a point on it, but ahoy... here be vulns!)... but not always, depending on the phase of the moon and whether the CA in question is really busy issuing rogue root certificates to national intelligence agencies and pretending they were "stolen" by lone-cub teenage skiddies - in which case you might be able to sneak in the full & lustily uncensored text of James Joyce's Finnegans Wake (good stuff, btw) as a SAN entry, with not a moment's protest from the super-busy CA in question.
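To see the SAN field in action, nothing stops you from putting subdomain entries there yourself - here's a throwaway demo using a self-signed cert (openssl 1.1.1+ for the `-addext` flag; example.org and /tmp paths are stand-ins, not real infrastructure). The commercial restriction described below is the CA's, not the format's:

```shell
# generate a one-day self-signed cert whose SAN covers the bare
# domain plus a subdomain -- the same field a CA would have to
# populate to cover resellers.cryptostorm.ch without a wildcard
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=example.org" \
  -addext "subjectAltName=DNS:example.org,DNS:resellers.example.org"

# print the SAN section back out of the finished cert
openssl x509 -in /tmp/demo.crt -noout -text | grep -A1 'Subject Alternative Name'
```

Any client trusting this cert would accept both names; the trick, of course, is getting a public CA to sign such a thing at non-wildcard prices.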

But you can't just stick subdomains in the SAN field, because that would lower CA rent-seeking oligopolistic profits. And also something something security - which is total nonsense, but obligatory to make it look like this isn't all just some ginned-up confidence game making a bunch of rich white guys noticeably richer without doing a damned thing to actually improve anyone's security. Well, it might improve their security since they can afford to buy whole armies of private mercenaries to fend off the starving masses of cryptographically desperate 99.9%ers. So there!

Err, I think things got away from me just a wee bit; apologies, sore subject - obviously.

(Incidentally, clever folks have engineered certificate party tricks like embedding entire mp4 video files in the extensions fields of "legitimate" certificates. Also: embedding whole certificates in the extensions field... all but requiring the standard "turtles all the way down" recursive reference. Insert memepic here. :-P ...basically as long as you can beg, borrow, or steal an OID from someone then you can blob into your cert's extensions fields whatever digital tomfoolery you can with a straight face claim is contained within the set of stuff defined by your OID... which oh by the way can be recursive, as well. Ouch.)

Right, anyway the point was supposed to be that wildcard certs cost, more or less, ten times as much as boring-assed standard certs (which nowadays you can get for free if you want; if memory serves, we paid about three-fiddy for the cert used here on this forum - not joking; that's the price. Certs are lol).

So, order of magnitude, a wildcard cert would cost about a hundred bucks. And, yes, for years we've (ok, pretty much just me cock-blocking this one, tbh) refused to pony up that hundred bucks and make the bogus Scary Browser Warnings go away forever (or at least until the bloody cert expires, and one must pay all over again) - choosing instead to burn lots of hours replying to understandably-nervous folks who visit the forum here and think (understandably) "wtf, these folks say they're all 'crypto 1337' and their ssl cert is fantastically broken; very n00b indeed." Not so! Or at least, not proven so merely by the scary warnings associated with subdomains of this particular discussion forum. Technically.

But it's not just because we are (read: I am) annoyingly stubborn and lack the grace to admit that I was stupid and that we just should have spent the damned hundred bucks and been done with it. Well ok, mostly it's that. A lot of it, anyway.

But wait... there's more! You see - and indeed, you do see if you've waded through this post all the way to this almost-ending part without giving up - the existence of these totally cryptographically bogus warnings provides me with the cherished opportunity to natter on about certificates, and x.509, and SAN fields, and OIDs, and such gibberish. Which, when you dig into it all, is both really important to actual security as people experience it on the actual internet... and also totally fascinating and strange and a source of never-ending surprise and disgust and pure helpless rage. Not so often things like confidence and insight and deeper understanding... because all that was intentionally excised from the original x.509 specs back in the stone ages - and any new outbreaks of such healthy disorders in this particular dark corner of applied cryptography are ruthlessly and joylessly crushed to nonexistence. And fast.

Also: nowadays we just use subdirectory mappings - cryptostorm.ch/resellers - instead of subdomains - resellers.cryptostorm.ch (which, sure enough, if you click it, will throw a browser warning) - and I think df even has some sort of script that hangs out somewhere and tries to make sure that the latter are auto-mapped to the former, in case some idiot (me) forgot to do so manually. Which happens. Too bad I didn't listen to him years ago, right?

Almost finally, we're pretty sure there's a clever way to use the convoluted syntax of the land of .htaccess to do on-the-fly redirects of subdomain URIs directly onto logically aligned subdirectories (or even specific, individual posts as specified by full URLs)... without giving browsers the time to realise what's going on and spurt out that ransomware-style "pay someone more munnies for no real benefit and you'll be so much safer - lol" message that makes this whole thing so demeaning and soul-deadening. Maybe someone smart in such fuckery will read this and grace us with her tl;dr solution to the problem - in which case, bless you, kind lady! - but speaking personally I read through some full-bore htaccess fuckery guides back whenever this whole thing started becoming A Thing of Concern... and those are days I will never, ever get back. And that is not a process I'm willing to repeat. Nope. Done with that, thanks. You can have your scary subdomain browser warnings - I'm not going back to that unhappy place, not without serious physical coercion being threatened. And even then, prolly not tbh. :shifty:
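For anyone braver than I am with such fuckery, the general shape of the thing is mod_rewrite in .htaccess - an untested sketch (assuming the host allows AllowOverride for rewrites), with the honest caveat that for https:// URLs the certificate is presented before any HTTP redirect can fire, so the scary warning still wins on the very first request:

```apache
# .htaccess sketch: send resellers.cryptostorm.ch/foo (and any other
# non-www subdomain) over to cryptostorm.ch/resellers/foo
RewriteEngine On
RewriteCond %{HTTP_HOST} ^([^.]+)\.cryptostorm\.ch$ [NC]
RewriteCond %1 !^www$ [NC]
RewriteRule ^(.*)$ https://cryptostorm.ch/%1/$1 [R=301,L]
```

Corrections from someone smarter in rewrite-land are, as noted, most welcome.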

So it's all because I'm a dork, right? Looks that way.

No! It's not my fault - it's all sad now because x.509 sucks dinosaur balls (do such things even exist - or did they exist back when there were living, extant, dinosaur-y critters from which they could enticingly hang?) and also Certification Authorities are a disgusting, money-grubbing, security-destroying racket. Because Comodo. That is all: because Comodo.

Sigh.

Also it's fun to talk about this stuff... but, as likely every reader punished by wading through this post has already long since realised - don't get me started! Really, you don't want to; I can go on, and on... and bloody on when it comes to cert sadness. Just ask my colleague Graze. Or anyone who has ever met me, basically. Yah, it's that bad. :mrgreen:

Funny aside: someday, someone is going to be so psychologically damaged by hearing me say this same thing for the 1x10^(xxx)th time that she'll probably decide to fork out the hundred bucks for the wildcard cert and thereby finally make it stahp!

Which: fair enough. Except I might not install it, because stubborn.

j/k

(maybe)

Cheers.
by Pattern_Juggled
Sat Mar 19, 2016 12:50 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: Honeypot/Airvpn/Giganerd/Trolling/Sheivoko
Replies: 11
Views: 51168

Re: a bit of clarification on what "no logging" actually means

LoveTheStorm wrote:We use ram disks...
I'm at a loss as to the relevance of "ram {sic} disks" regarding logging policies. A RAM "disk" is just another kind of physical storage media; in many respects, it's not dissimilar from SSD "hard disks"... though of course a RAM disk is instantiated in "real" Random Access Memory (thus: RAM), and SSD hardware is based on different physics - with the latter remaining "stateful" upon power-off and the former (more or less, but not immediately and in some cases not irreversibly) being erased to a blank state without power.

Which is to say, the difference relates to what happens when a machine is hard power-cycled and left off for a nontrivial period of time. Since any reasonably well-run "VPN service server" (we call them nodes, which has now been copied by lots of copycat services, tbh) will not be powered down regularly - if at all - in the course of months or years of production usage, the distinction as to how data are handled during powerdown is... of very little functional relevance in the context of logging.

Plus, of course, it's both possible and fairly common to have RAM disk configurations write out snapshots of their contents to a "real" hard drive so that - you guessed it - if the machine powers down unexpectedly, those data are not lost once a successful reboot takes place. So, simply assuming that "RAM disk" means "logs magically vanish" is sort of, well, silly.

(yes, one can build a machine with only a RAM disk - though it'll have to boot up from some other physical media to get its kernel and packages loaded, obviously - or with a RAM disk that does not flush/mirror to physical disk, and thus can potentially avoid retaining copies of data stored within its arrays after a hard reboot. But neither is, afaik, a default configuration in any mainstream linux distro - or unix distro, or Solaris, or NextOS, or whatever. :-P )
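To make the point concrete: on linux, a "RAM disk" is usually nothing more than a tmpfs mount, and mirroring it out to persistent storage is a one-liner any operator could cron. A sketch with hypothetical paths (needs root):

```shell
# carve out a 64 MB RAM-backed filesystem
mkdir -p /mnt/ramlog
mount -t tmpfs -o size=64m tmpfs /mnt/ramlog

# ...and nothing stops an operator from periodically flushing its
# contents to real disk, which is why "we use RAM disks" says
# nothing, by itself, about what actually happens to logs:
rsync -a /mnt/ramlog/ /var/log-archive/
```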

and we send OpenVPN logs to /dev/null/
That kind of sounds clever - just pipe logs to /dev/null and done... but it's deeply flawed. The process of piping the logs does, in fact, pass the logs - which exist, and are real - through the kernel and over to that nonexistent place of storage we all know and love so well. If something goes wrong along the way, or if the kernel does clever stuff to make sure that, in case crashes happen, data aren't gone forever (which most computational platforms consider to be a really, really bad thing, and thus work hard behind the scenes to avoid... doing a lot more work on that task than most folks sitting at the terminal ever realise), those "dev-null'd" logs suddenly exist and are easily accessible.

Derp.

Since that's A Problem, and Problems are best avoided rather than being apologised for down the road (the "oops gee really sorry" approach, one might call it), we modded the source code of the OpenVPN application itself, to entirely eliminate the process of retaining any logs in the first place.

Now, it's not like that modification is tens of thousands of lines of amazingly clever, achingly beautiful software poetry; it's a fairly small tweak, though to be fair it took some testing and fine-tuning to make sure that, in production, it doesn't break other stuff that really has to work for the network to function effectively. Even so (and it's a published edit, so anyone can apply the diff to their own ovpn src before they compile runtimes - no secrets here, no wheels needing to be reinvented), it's a nicely-done bit of coding, if I may humbly say so, and in doing so credit the inestimable df who... pretty much did all the actual "making software that actually works when compiled and executed," as opposed to me, who merely writes about such endeavors after the fact.

Irrespective, eliminating the actual code - source, and concomitant binary snippets at runtime - is a vastly more elegant, and vastly less fail-prone, way to do away with unwanted logs than is the approach of just letting openvpn make the logs, letting it pass them to the kernel, and then letting the kernel and all its various brilliant bits and subsystems, in turn, pass it to the file manager, which then passes it to a directory that doesn't actually exist... which then makes the file actually not-exist, by whatever means that happens in the tiny little pocket universe that is /dev/null in linuxville (sorry, too lazy to look up the wikipedia link on that - and not quite dishonest enough to pretend I don't need to ;-) ).

Anyhow, that's how we did it. We are pretty confident that our approach doesn't suck. Which makes us feel pretty good, right? Right. :-)
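A toy illustration of the difference, in plain Python (nothing to do with the actual OpenVPN patch): bytes "sent to /dev/null" are still generated, buffered, and handed to the kernel through a real write(); suppressing them at the source means they never exist at all.

```python
import logging

# discard-at-the-sink: the log line is fully built and written --
# a genuine syscall carries it all the way to /dev/null, which
# merely drops it at the final step
line = b"user 1.2.3.4 connected at 04:08\n"
with open("/dev/null", "wb") as sink:
    written = sink.write(line)   # real data crossing a real boundary
assert written == len(line)      # every byte made the trip

# suppress-at-the-source: a no-op handler means the message is
# never even formatted, let alone written anywhere
log = logging.getLogger("demo")
log.addHandler(logging.NullHandler())
log.info("user %s connected", "1.2.3.4")  # produces nothing
```

The IP address and timestamp here are made up, of course; the point is only the shape of the two approaches.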
It is totally unnecessary to keep logs to show you the data you can see in your control panel while a connection to a VPN server is still active.
As I've no idea what kind of data their "control panel" shows during an active session, I don't know whether this statement, prima facie, makes logical sense or not. It might come down to ontology/semantics... in that the data in the control panel itself form, themselves, a kind of "log"... but that's purely speculative.

We don't have a control panel with spinny dials and gizmos and blinking text and such... when the cstorm widget connects, it's connected. It, speaking precisely, removes itself from the Windows UI entirely - going back to lurk in the taskbar. As there's no actual benefit from all of that blinking-spinning eye-candy distraction-ware, we simply decided to do away with it. All of it.

That probably says a lot not only about our design philosophy - don't clutter stuff up just because "everyone else does it," or because it makes a piece of user-facing software look "cool" or "fancy" - but also about where we prioritise our efforts: the widget is a way to get connected to the cryptostorm network, not the reverse. The best thing it can do is do its job - connect, and stay connected - at which point it's irrelevant to the real work of securing traffic transiting the network bidirectionally.

Anyhow that's how we see things. :angel:

It's easy to write about "no logging" - it's not so easy to actually do it, and do it in a demonstrably effective way. That's what we've learned since our team first introduced a "no logging" policy... back in 2009, which at the time caused us to be excoriated as "irresponsible" and "illegal" because we weren't retaining usage logs.

Oh the times, how they are a-changing...
IMG_20160317_034838.jpg
by Pattern_Juggled
Sat Mar 19, 2016 7:34 am
Forum: general chat, suggestions, industry news
Topic: Twitter Feed
Replies: 4
Views: 25996

Re: Twitter Feed

SCREEN NAME! wrote:....appears to be being ran by...
Now, see... that's not really a proper conjugation (in any known tense or mood) within the confines of conventional English grammar. Were I an obnoxious grammar nerd - which, fortunately for you, I'm most assuredly not - I'd unpack that as, err: present participle ("being run") shifted mystically into a mutated form of past-but-sorta-one-foot-still-in-the-present-somehow participle ("to be being ran").

Then there's the "appears to be" thing, which c'mon - it's a twitter feed right? It's an as-such categorical entity and really doesn't "appear" to be anything - any more than, say, water "appears to be" wet. Twitter feeds, one might argue from an ontological perspective, are pure appearance. And, as one cannot (for example) "pretend" to be able to snowboard (for one can either snowboard, or not - the doing is the thing, and pretending isn't a coherent action in such a context) - a twitter feed cannot be said to "appear" to be anything other than what it, in fact, is.

(The surprisingly elegant proper term for such cognitive - or existential, depending on one's ontology and so forth - entities is noumenon. Which sounds a lot cooler and, well, less insufferably & unpardonably German than the Kantian "Ding an sich"... but means basically the same thing. Which is sort of funny - if you're an ontology geek. Which I am. Anyhow...)

The proper construction of that conjugation of "to appear" ("aparecer" in Spanish, fwiw) is:
"reads like a 14-year-old-girl... a girl with totally awesome good taste, a brilliant sense of humour, profound technical wisdom, and compassion enough to qualify as a reborn Buddha in a pinch... cryptostorm's main twitter feed reads like that totally badassed girl is gracing us with her bounteous spirit by means of sharing her tweeted thoughts in a public timeline like this. And, fuck yeah, is it great."
See, that's a lot better. Also grammatically correct. Don't believe me? Sheesh, look it up in the OED; it's literally the textbook-correct present participle conjugation of "to appear" in English. Duh.

Also, it's less depressingly negative and obliquely self-hating (who disses on 14-year-old girls, anyhow? What's next - making fun of puppies because they're cute? ;-) ). And there's no extra, err... editorial baggage loaded into the conjugated form I've offered here - so don't even start trying to throw that particular smear around. You 14-year-old-girl-and-also-prolly-cute-puppies-hating meanie!

. . .

Just sayin' :angel:

Cheers, ~ pj


ps: fun having fun is fun - also healthy. And fun.. ermigawd I already said that - I'm such a 14-year-old-girl at heart!!!1! (and proudly so, btw)
by Pattern_Juggled
Fri Mar 18, 2016 10:16 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: TrackerSmacker: adware/crapware-blocking done right
Replies: 67
Views: 375550

Re: win10 tap-dancing

Tealc wrote:So.... after a little troubleshooting with fermi on IRC I removed OpenVPN program and Windows TAP driver and installed everything once again (i deleted the app data also and all folders) and now it works, it's once again that problem with the tun/tap driver in windows 10, after some updates it gets broken for some reason :-D (actually I made a topic about this and now I didn't think it was that)
I also ran into that on a test/dev win10 box I've been working with... and just assumed it was something I broke, tbh (I break stuff - especially Windows stuff... what can I say, it's a gift). I didn't know it was "A Thing."

That it doesn't manifest until a few cycles in is... unsettling. To my old-fashioned mind, anyhow. Probably it has to do with amazing new code polymorphism and docker containers and agile development and stuff (which is to say: things I'm too slow to understand fully). Also I blame Plesk for it, just because.

Anyhow, glad that particular hiccup is resolved.

Cheers!


(ps: it wasn't TrackerSmacker - woot!)
by Pattern_Juggled
Fri Mar 18, 2016 10:05 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: Honeypot/Airvpn/Giganerd/Trolling/Sheivoko
Replies: 11
Views: 51168

Re: Honeypot awareness

Khariz wrote:There we go. I gave a long-winded post. It's worth a read. Just follow the OP's link.
I might suggest you echo a copy into here... just in case it, you know, gets "accidentally deleted." (not saying that's inevitable, or saying Air specifically has a history of that - I'm actually just making a general observation based on a few decades' time posting stuff on the internet :-) ).

It's always a bit strange for me to read the "honeypot" silliness, wrt cstorm. This would be the most bizarre, most unlikely, most unproductive honeypot in the history of honeypots. That doesn't stop some folks from attempts at walking us down that road, in terms of accusations... but really. It's a bit of a strange one, right? Unlikely in the extreme.

Not to mention, of course, that the team here has quite a few years of very public honeypot awareness advocacy we've done - which has brought more than a little criticism and hot air our way, to be blunt. However, it's an important topic and it's something we've been keen to discuss, explore, and study for longer than most of these "VPN services" have been taking money from "users."

There's real honeypots out there (not imaginary ones, like cstorm - because seriously). There's real "VPN service" honeypots, ffs! Some are already exposed; I'd wager very good money that most are not yet exposed (because there's little incentive to put the time and effort into exposing them - I speak from experience on this, frankly). Those unexposed ones aren't very hard to pick out, once one studies the patterns created by those which have been exposed in the past; hence our "honeypot awareness" thread, and community education efforts, dating back quite a few years.

It's a bit of a shame, because in technical terms Air isn't the worst VPN service out there - not even close. We've crossed swords with them before, here and there, over the years (those who have some miles under their wheels will likely remember such dust-ups quite well). All I'd say, speaking personally and not on behalf of the cstorm core team, on that is that... well, they sure don't have any substantive technical criticisms of cstorm to offer, do they?

Indeed, not so much.

It's a strange kind of honour to have the work one's team does quite clearly transform an industry (even a little "industry" like the VPN world) - often as not, such transformative impact isn't explicitly credited to those who make it happen; it's the nature of paradigm shifts that, ex ante, they don't seem to have had proximate causes (h/t Thomas Kuhn). That said, we on this team have watched over nearly a decade's time as our far-fetched ideas, decisions, and no-compromise willingness to set a higher expectation for what can be done in this space have gone from outsider status to de rigueur default for hundreds of me-too "VPN services" (who, not uncommonly, have simply copy-pasted chunks of our stuff, unattributed, into their own marketingspeak... which is creepy).

So, yeah. We mostly let our leadership by action speak for itself - when it matters most. And, well... in terms of priorities, we've got our focus squarely on our work on behalf of cstorm's members and the community overall. The childish, personal-attack, ED-style nonsense just isn't as exciting, to me anyhow, as it was back in the 1980s when I first saw trolls doing such things on pre-internet BBSes, and whatnot.

I grew up, I suppose. Unlikely as that may have seemed, a few decades past. :angel:

...not everyone has. So it goes.

Cheers.
by Pattern_Juggled
Fri Mar 18, 2016 9:28 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: #cryptostorm IRC cert needs an update
Replies: 5
Views: 28969

#cryptostorm IRC cert needs an update

Ohai, it has come to our attention that the current SSL certificate for our IRC chatroom has outlived its expiry date.

Specifically, here's the PEM-encoded version of the current cert:

Code: Select all

-----BEGIN CERTIFICATE-----
MIIFUzCCBDugAwIBAgIRAMQhOpL810Yv5/Zpo8tWLEkwDQYJKoZIhvcNAQELBQAw
gZAxCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO
BgNVBAcTB1NhbGZvcmQxGjAYBgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMTYwNAYD
VQQDEy1DT01PRE8gUlNBIERvbWFpbiBWYWxpZGF0aW9uIFNlY3VyZSBTZXJ2ZXIg
Q0EwHhcNMTUwMTIwMDAwMDAwWhcNMTYwMTIwMjM1OTU5WjBWMSEwHwYDVQQLExhE
b21haW4gQ29udHJvbCBWYWxpZGF0ZWQxFDASBgNVBAsTC1Bvc2l0aXZlU1NMMRsw
GQYDVQQDExJ3d3cuY3J5cHRvc3Rvcm0uaXMwggEiMA0GCSqGSIb3DQEBAQUAA4IB
DwAwggEKAoIBAQDEL3wURN59oW8NW8PSYiWZyJbXqodys9rvhkuCRkGRt7/K/laI
INqx5VK+koLp+iqW22SOdvejYYL9tpcjt4DZZ2aGF/x0kmKfw9iu61+VCJx1WYRG
VhAGxCx5kHebkDZUvINIjm0MIP/NeL/76bsG8OUmuZQ0YBdJ8Cvc6b2OVEkGU99z
FWdkTm6xEpTfS9defs7OVBLrP08PUaGErj3KUT7cvpT5wqXo0/v2S9Cux59WpXRb
5jW4VYmnRqJ8nX2+Yv84+QPy6AAjumIZVTfW5vRRpFe3LsKefxyPdeelrWjF565H
p/RZAkbq54AuKkbyaPAi8NYhNEmkrROfVH/1AgMBAAGjggHfMIIB2zAfBgNVHSME
GDAWgBSQr2o6lFoL2JDqElZz30O0Oija5zAdBgNVHQ4EFgQUZHMCJ7O3N16EkAH1
NvWgTRpdo1UwDgYDVR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHQYDVR0lBBYw
FAYIKwYBBQUHAwEGCCsGAQUFBwMCME8GA1UdIARIMEYwOgYLKwYBBAGyMQECAgcw
KzApBggrBgEFBQcCARYdaHR0cHM6Ly9zZWN1cmUuY29tb2RvLmNvbS9DUFMwCAYG
Z4EMAQIBMFQGA1UdHwRNMEswSaBHoEWGQ2h0dHA6Ly9jcmwuY29tb2RvY2EuY29t
L0NPTU9ET1JTQURvbWFpblZhbGlkYXRpb25TZWN1cmVTZXJ2ZXJDQS5jcmwwgYUG
CCsGAQUFBwEBBHkwdzBPBggrBgEFBQcwAoZDaHR0cDovL2NydC5jb21vZG9jYS5j
b20vQ09NT0RPUlNBRG9tYWluVmFsaWRhdGlvblNlY3VyZVNlcnZlckNBLmNydDAk
BggrBgEFBQcwAYYYaHR0cDovL29jc3AuY29tb2RvY2EuY29tMC0GA1UdEQQmMCSC
End3dy5jcnlwdG9zdG9ybS5pc4IOY3J5cHRvc3Rvcm0uaXMwDQYJKoZIhvcNAQEL
BQADggEBABY+7Su6jV9/1oV+RfrYwRVWyM3Dt0a5O5QMF1GqeJ/XagfDKwpJR4OU
KgDNABKS2j809ztiWfsKL+PAIxRpK4RmCfiAjfSRKWNKBvrM+vbzqKDA+h00lBcp
mZlavX/9IgJmsIruWL/P1KaSl0ebhX3jjYbw8qMKEzRkCHoIZK52Oh9MmzJU7t03
Fg9u9Ci8JgicvODK7jQTwri8IdSCorBNHhmU4xjwqKelwt6lDKV604FBUZdzZp2U
TbCA03+jejfb9dNKlAUgEFYrXH/UMzZCwgrInzXiScaQUxn4JGpJpI7ltfJA821J
qNt64AKoQe53hDyuoHdKCdSXeBtWGtE=
-----END CERTIFICATE-----

That encoding expands, more or less (depending on the parser used, and so on, because x.509 is endlessly entertaining), to this:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            c4:21:3a:92:fc:d7:46:2f:e7:f6:69:a3:cb:56:2c:49
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO RSA Domain Validation Secure Server CA
        Validity
            Not Before: Jan 20 00:00:00 2015 GMT
            Not After : Jan 20 23:59:59 2016 GMT
        Subject: OU=Domain Control Validated, OU=PositiveSSL, CN=www.cryptostorm.is
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c4:2f:7c:14:44:de:7d:a1:6f:0d:5b:c3:d2:62:
                    25:99:c8:96:d7:aa:87:72:b3:da:ef:86:4b:82:46:
                    41:91:b7:bf:ca:fe:56:88:20:da:b1:e5:52:be:92:
                    82:e9:fa:2a:96:db:64:8e:76:f7:a3:61:82:fd:b6:
                    97:23:b7:80:d9:67:66:86:17:fc:74:92:62:9f:c3:
                    d8:ae:eb:5f:95:08:9c:75:59:84:46:56:10:06:c4:
                    2c:79:90:77:9b:90:36:54:bc:83:48:8e:6d:0c:20:
                    ff:cd:78:bf:fb:e9:bb:06:f0:e5:26:b9:94:34:60:
                    17:49:f0:2b:dc:e9:bd:8e:54:49:06:53:df:73:15:
                    67:64:4e:6e:b1:12:94:df:4b:d7:5e:7e:ce:ce:54:
                    12:eb:3f:4f:0f:51:a1:84:ae:3d:ca:51:3e:dc:be:
                    94:f9:c2:a5:e8:d3:fb:f6:4b:d0:ae:c7:9f:56:a5:
                    74:5b:e6:35:b8:55:89:a7:46:a2:7c:9d:7d:be:62:
                    ff:38:f9:03:f2:e8:00:23:ba:62:19:55:37:d6:e6:
                    f4:51:a4:57:b7:2e:c2:9e:7f:1c:8f:75:e7:a5:ad:
                    68:c5:e7:ae:47:a7:f4:59:02:46:ea:e7:80:2e:2a:
                    46:f2:68:f0:22:f0:d6:21:34:49:a4:ad:13:9f:54:
                    7f:f5
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:
                keyid:90:AF:6A:3A:94:5A:0B:D8:90:EA:12:56:73:DF:43:B4:3A:28:DA:E7
            X509v3 Subject Key Identifier:
                64:73:02:27:B3:B7:37:5E:84:90:01:F5:36:F5:A0:4D:1A:5D:A3:55
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Certificate Policies:
                Policy: 1.3.6.1.4.1.6449.1.2.2.7
                  CPS: https://secure.comodo.com/CPS
                Policy: 2.23.140.1.2.1
            X509v3 CRL Distribution Points:
                Full Name:
                  URI:http://crl.comodoca.com/COMODORSADomain ... rverCA.crl
            Authority Information Access:
                CA Issuers - URI:http://crt.comodoca.com/COMODORSADomain ... rverCA.crt
                OCSP - URI:http://ocsp.comodoca.com
            X509v3 Subject Alternative Name:
                DNS:www.cryptostorm.is, DNS:cryptostorm.is
    Signature Algorithm: sha256WithRSAEncryption
         16:3e:ed:2b:ba:8d:5f:7f:d6:85:7e:45:fa:d8:c1:15:56:c8:
         cd:c3:b7:46:b9:3b:94:0c:17:51:aa:78:9f:d7:6a:07:c3:2b:
         0a:49:47:83:94:2a:00:cd:00:12:92:da:3f:34:f7:3b:62:59:
         fb:0a:2f:e3:c0:23:14:69:2b:84:66:09:f8:80:8d:f4:91:29:
         63:4a:06:fa:cc:fa:f6:f3:a8:a0:c0:fa:1d:34:94:17:29:99:
         99:5a:bd:7f:fd:22:02:66:b0:8a:ee:58:bf:cf:d4:a6:92:97:
         47:9b:85:7d:e3:8d:86:f0:f2:a3:0a:13:34:64:08:7a:08:64:
         ae:76:3a:1f:4c:9b:32:54:ee:dd:37:16:0f:6e:f4:28:bc:26:
         08:9c:bc:e0:ca:ee:34:13:c2:b8:bc:21:d4:82:a2:b0:4d:1e:
         19:94:e3:18:f0:a8:a7:a5:c2:de:a5:0c:a5:7a:d3:81:41:51:
         97:73:66:9d:94:4d:b0:80:d3:7f:a3:7a:37:db:f5:d3:4a:94:
         05:20:10:56:2b:5c:7f:d4:33:36:42:c2:0a:c8:9f:35:e2:49:
         c6:90:53:19:f8:24:6a:49:a4:8e:e5:b5:f2:40:f3:6d:49:a8:
         db:7a:e0:02:a8:41:ee:77:84:3c:ae:a0:77:4a:09:d4:97:78:
         1b:56:1a:d1
The juicy bits (in current context) are:
Validity
Not Before: Jan 20 00:00:00 2015 GMT
Not After : Jan 20 23:59:59 2016 GMT
Whoops. So, we'll get a new cert spun up. Likely it'll be conventional... though I'd love to make a keychain'd one, since it's something folks will want to do manual verification of (more often than when certs are used in, say, web browsing sessions, for example). Perhaps we'll do the conventional one asap, then loop back and get a fully keychained replacement ready as time allows.
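For anyone who wants to check a cert's validity window themselves, here's a quick sketch - the date string is the "Not After" field from the cert above, parsed as openssl prints it (the helper names are just illustrative, not any official tooling):

```python
from datetime import datetime

def parse_openssl_date(s: str) -> datetime:
    # openssl prints validity dates like "Jan 20 23:59:59 2016 GMT";
    # strip the trailing timezone name (always GMT) before parsing
    return datetime.strptime(s.replace(" GMT", ""), "%b %d %H:%M:%S %Y")

def is_expired(not_after: str, now: datetime) -> bool:
    # expired if "now" is past the cert's Not After timestamp
    return now > parse_openssl_date(not_after)

# The cert above, checked as of this post's date (2016-03-18):
print(is_expired("Jan 20 23:59:59 2016 GMT", datetime(2016, 3, 18)))  # True: whoops
```

Shell-side, `openssl x509 -in cert.pem -noout -enddate` prints the same field, and `openssl x509 -in cert.pem -noout -checkend 0` exits nonzero if the cert has already expired - handy for cron-driven nagging so this doesn't sneak up on us again.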

Apologies for the oversight - certs are derpy enough, we don't need to add to that with stale derpy certs!

Cheers.

ps: yah, it'd be fun to make one of df's patented pem-tastic magic certs (like the katstorm.party one)... but perhaps not the top priority for the team, in business terms. :lolno:
by Pattern_Juggled
Fri Mar 18, 2016 8:57 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: widget v3
Replies: 278
Views: 1659676

Re: widget v3 ("Black Dolphin") - the decision fork

edit by df: THIS IS NOT ALPHA! this is pre-alpha i.e. not finished yet.
Khariz wrote:Alpha is out and ready to be downloaded. I'll find the link. PJ posted it.

Here it is: https://b.unni.es/setup.exe
Note that this version linked to above is really, really alpha... nothing wrong with alpha - but remember that it's not even officially beta yet!

(having a wider member pool playing around with the alpha version can - in theory - help to highlight areas where we might not have even been watching too closely so that we can rethink those decisions, more so than actually pinning down pre-production bugs or fine-tuning which will still require substantial beta testing; so, for those alpha testers giving things a try, please keep your view broad and wide, in terms of feeling confident in providing feedback that expands past the usual "here's a specific bug report, let's get it squashed"... if you have such comments to make, of course)

There's an enormous amount of expertise and experience that df has invested in this new widget version - and, without beating around the bush, the widget has really become an expression of his dedication to cstorm and to creating a Windows-based application that's far beyond basic expectations of what a "VPN service" client program does.

As with many great designs, much of what he's given to us in the new widget comes in the way of things that aren't there, and especially things that aren't visible. Knowing what not to do can be the ultimate expression of mastery, and he's avoided whole classes of dead-ends, faulty assumptions, broken design models, and coding errors overall. On top, he's provided a lifetime's knowledge of how real attackers work in the real world - using that knowledge to harden this code in a way few consumer-grade security technologies will ever match.

When we released the version 2 widget "Sexy Narwhal" a few years back, the whole team went into an obsessive, iterative cycle of fine-tuning and bughunting that went on for weeks. We had community help, of course... but back then cstorm was a smaller network, and all of us could much more comfortably shun other obligations to eat, breathe, sleep, and dream the widget until it was ready for release. That obsessive attention shows in the Narwhal; it's stood the test of time enormously well, and remains a brilliant piece of security engineering (in my humble - but not disinterested - opinion). It's been polished here and there since then, as can be seen in the Narwhal thread here... but at core, the Narwhal has held its own and still does great work.

This presents a conundrum with the new widget3 (which has been codenamed, internally, "Black Dolphin" since ages ago - but may well end up with a more conventional name upon release... or not?). Do we seek to carve out that kind of obsessive focus-time on the part of the team, to take the Dolphin into its final beta version? Or do we do more of a "rolling beta" kind of process, reaching deeper into the community to help with final refinements and bughunting? The ultimate decision on that philosophical issue is really df's - this is his artistry at play here, and it's his to steer at the deeper levels.

Perhaps that's for the best, as I'm not personally sure of the right path forward on this. Ever the obsessive, I lust for that total-focus phase to get this Dolphin widget to its finest form before release... but I know that kind of focus isn't really aligned with cstorm today - which has lots of moving parts, a broader and more diverse team, and... just a lot more of everything, basically :-)

There are also two big elephants standing in the corner of the room, in terms of the Dolphin widget: voodoo, and "tokens2" (which has no link, as it's managed to remain firmly behind the scenes thus far). Both are big, hairy, important, fascinating extensions and improvements to cstorm's core model. Both are far along in the development phase - but far from being ready for production at full-network scale. Both will take a bit more refinement, technologically, before they go into widespread deployment... and both involve thorny, challenging questions wrt business model, revenue, reverse compatibility, and overall direction for the project moving forward.

Which is to say: getting all that stuff pinned down quickly, so that the Black Dolphin widget (that's how I think of it, anyhow) can get into widespread deployment, isn't going to happen. Not quickly, in the sense of "days, not weeks" - it's just not reasonable. So if a decision is made to integrate voodoo and tokens2 into the new widget (the two components also intertwine with each other, so they basically come as a basket, together), it'll delay the release of the widget. That's reality.

However... if they aren't included, it's likely voodoo/tokens2 will get pushed back, time-wise, before they're really publicly available. Which would be a shame, as they are (in my humble but strongly-felt opinion) the future of cstorm. Delaying them has costs, and misses opportunity windows, and means they aren't themselves being improved and fine-tuned as can only really happen once a technology gets into wider distribution and gets kicked around by a passionate community like that surrounding cryptostorm.

So... blah. It's a challenging situation.

In business terms, we'd be far better off pushing out the Dolphin widget without tokens2 or voodoo - heck, in business terms both are probably not worth doing (in conventional business terms, anyhow)... at least, not in the short term. We'd be better off just spouting random stuff about security-this and fastest-that and strongest-everything and whatever - and smiling all the way to the bank. But... we've this thing, as a team and as a project, about leading with our actions and not just with empty words (and yes, I have a surfeit of words, as always :mrgreen: ...but I like to at least tell myself that even my text-bricks mostly relate to actions we've done or are doing, and not just to hot-air "wouldn't it be cool if" hypothesizing). That means we don't do so much of the heavy hype-cycle, even for things we've long been doing well. Which means we pretty much suck at (conventional) marketing, admittedly. Nobody's really arguing that point, eh?
splash.png
Anyhow.

Feedback and creative thinking appreciated, wrt the above. There's no question df has already produced a brilliant piece of software engineering, in the alpha Dolphin widget. And there's no question he can move it to full production with his characteristically understated professionalism, competence, and integrity. That's all taken as a given. What's open is whether we, as a community, lobby him to hold off on code freeze (metaphorically) until voodoo/tokens2 is ready to bake into it... or whether we set those aside, get the Dolphin into the seas of beta release, and then loop back to - as a community, and as a team - define how the Dolphin widget evolves forward to embrace voodoo/tokens2.

There's more, but this post is already too long. (mostly the "more" relates to cross-platform widget support... which is a big, important, open-ended subject in its own right)

Thanks, to everyone, for being the heart and soul of cryptostorm. This project has developed and evolved into something bigger than any one person, indeed to something bigger than the founding team ourselves. It's as much community, and the passion of our community, as it is a business - which is beautiful, and scary, and wonderful to see. It's something new under the sun: something good, something important. Thank you for the honour of allowing us on core team to work with such a beautiful thing.
quote-free-your-mind-and-your-ass-will-follow-george-clinton-54-39-92.jpg
Cheers.
by Pattern_Juggled
Fri Mar 18, 2016 6:41 am
Forum: general chat, suggestions, industry news
Topic: From the datacentre perspective: cartel spambot extortion
Replies: 8
Views: 27422

Re: Do It Nao!

marzametal wrote:Lurky lurky, huh? Much like Rambo in First Blood, covered in mud, stuck to a small cliff-face... eyes open, and BANG. pwned!!!
hqdefault.jpg

No no... nothing like that, not at all!

(but they did draw first blood!!!11! :twisted: )
IMG_2639.JPG

...'twas more like this, of course!
630x341px-80d7a86b_predator06.jpeg
:mrgreen:

Cheers
by Pattern_Juggled
Wed Mar 16, 2016 11:27 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: TrackerSmacker: adware/crapware-blocking done right
Replies: 67
Views: 375550

Re: All Hail Cthulu (the dark god of DNS)

Khariz wrote:I find that especially odd since its on the whitelist already.
All things DNS are black magic, at core... so the oddness is (partly) expected. Speaking metaphorically, of course!

Cheers.
by Pattern_Juggled
Wed Mar 16, 2016 9:40 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: torstorm cipher suite selection
Replies: 28
Views: 104284

Re: torstorm cipher suite selection

DesuStrike wrote:Did a quick client check on SSL Labs with my Windows 7 VM and IE11.
I won't do any testing with this VM though. I don't trust this OS even half as far as I can throw Satya Nadella. Sry... :sick:
Selection_118.png
Heh, nice to hear your voice in here, my old friend!

Cheers :-)
by Pattern_Juggled
Wed Mar 16, 2016 9:39 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: torstorm cipher suite selection
Replies: 28
Views: 104284

Re: torstorm cipher suite selection

Heya PB - we're getting reports of some cipher mismatches on some browsers. I'm not yet opening the task to formally review these cipher primitives... but I suspect it'll need to be done sooner rather than later. Because c25519, maybe? One can always dream, eh? :-)

Any help in pinning down such reports is appreciated; we've pruned supported ciphers so tight that any fudge and sessions will proactively abort (as was intended, on our part). But it does lessen the usefulness of torstorm when it won't actually load in modern browsers! :-P
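To illustrate the "abort rather than downgrade" behaviour described above, here's a hedged sketch using Python's ssl module - the cipher string is purely illustrative, not torstorm's actual configuration:

```python
import ssl

# A deliberately strict TLS cipher policy, in the spirit of torstorm's pruned
# suite list (this cipher string is an example, not cryptostorm's real config).
# A client offering nothing in this set fails the handshake outright instead
# of silently negotiating something weaker.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM")  # only forward-secret AES-GCM suites (TLS <= 1.2)

allowed = [c["name"] for c in ctx.get_ciphers()]
print(allowed)  # e.g. ECDHE-RSA-AES256-GCM-SHA384, plus any built-in TLS 1.3 suites
```

Note that `set_ciphers` raises `ssl.SSLError` immediately if the string matches no ciphers at all - the same fail-closed philosophy, applied at config time rather than handshake time.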

Cheers.
by Pattern_Juggled
Wed Mar 16, 2016 5:04 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: TrackerSmacker: adware/crapware-blocking done right
Replies: 67
Views: 375550

https://github.com/deepDNS/TrackerSmacker/blob/master/whitelist.txt

df wrote:In the mean time, I think the best course of action (for stuff like wtvy.com and v0cdn.net) is a github repo of ours that contains a whitelist. People submit something they need whitelisted, and once staff manually verify that the host isn't evil.com, the server-side scripts automagically update /etc/hosts.
Seconded.

In fact, here we go:

Code: Select all

https://github.com/cryptostorm/cstorm_deepDNS/blob/master/TrackerSmacker/whitelist.txt
Screenshot (44).png

Anyone that would like to help maintain, approve pull requests/merges, etc. - drop a note & we'll make it so. Here's the public announce (as it were).

Cheers
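df's description - upstream blacklist in, community whitelist subtracted, /etc/hosts entries out - boils down to something like this hedged sketch (the 0.0.0.0 sinkhole convention and the example domain are assumptions; wtvy.com and v0cdn.net are the hosts mentioned above):

```python
# Sketch of the server-side merge df describes: take the upstream blocklist,
# subtract the whitelist maintained in the github repo, and emit
# /etc/hosts-style sinkhole entries for what remains.
def build_hosts_entries(blacklist, whitelist):
    allowed = set(whitelist)
    return "\n".join(f"0.0.0.0 {d}" for d in blacklist if d not in allowed)

blocklist = ["tracker.example.com", "wtvy.com", "v0cdn.net"]  # tracker.example.com is invented
whitelist = ["wtvy.com", "v0cdn.net"]  # the member-requested exceptions above
print(build_hosts_entries(blocklist, whitelist))  # only tracker.example.com is sinkholed
```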
by Pattern_Juggled
Wed Mar 16, 2016 3:57 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: TrackerSmacker: adware/crapware-blocking done right
Replies: 67
Views: 375550

Re: TrackerSmacker: adware/crapware-blocking done right (Copied from support topic)

crptomon wrote:This issue only started last week, but has caused me all sorts of headaches having no access and now a week of wasted work time. Consequently I'm not a fan of any blocking feature you may have. Blocking webpages is a show stopper for VPN usefulness if this is the cause.
Gah - apologies for the delayed approval of your post (not sure wtf, but will check). (also I did a bit of formatting cleanup)

On more substantive matters: we can't see how what you are reporting is TrackerSmacker-related... but we're taking a look, to be sure. It's certainly not intentional on our part. We'll post analysis results here, shortly.

EDIT: I stand corrected; fixing now:
Screenshot (43).png
Cheers.
by Pattern_Juggled
Tue Mar 15, 2016 7:48 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: TrackerSmacker: adware/crapware-blocking done right
Replies: 67
Views: 375550

Re: TrackerSmacker: adware/crapware-blocking done right

twelph wrote:It might just be pertinent to wait it out and see if it actually affects users in the long run. Maybe the list will be maintained well enough that it won't be an issue. He did say that it was enabled for a whole week without anyone even having any trouble, maybe we are making too big of deal out of this?
One of the ways to see if something like this is causing obvious problems is to implement it and see if problems arise (which, of course, we would not ever do in a security-intensive situation or with new cipher primitives, &c.) - on the one hand that seems inelegant and reckless; on the other hand, all the pre-implementation testing in the world doesn't add up to actual results from actual implementation.

The philosophical issues brought up in posts here are solid, and important. I look forward to discussing those - and likely they will result in some sort of dual- or hybrid-option arising from this. Meanwhile, we've implemented TrackerSmacker everywhere in the network because, simply put, it works really well.

This advertising bloatware on so many websites is a serious security issue - and it means we can't (or choose not to) simply ignore it, if there's a way to make real improvements on behalf of members. That has to balance with our packet-agnostic roots... which go back nearly a decade into the project's earliest days. So it's an important, open topic.

I'd say - speaking from my personal (not official team) view of things - that the implementation has been seriously low on problems or complaints from members. The network, to keep things in order-of-magnitude generality, handles thousands of web browsing sessions per day (likely a lot more, but that's a very conservative lower bound)... so if we see, say, a dozen complaints that's likely a really low percentage. (which ignores the very important issue of folks being frustrated and not actually complaining... which is not to be ignored, as it's the frustration that is the hidden variable we're seeking to measure, rather than the complaints)

One reason we lean towards production-based testing of things like TrackerSmacker is there's no installed codebase on member machines, to consider: it's all done network-side, and all easily modified by our admin team as-needed. So, for example, if the whole thing proved to be a disaster... we could just rm it from nodes, and everything's back to a clean slate. That kind of flexibility to tinker and fine-tune on-the-fly means the experimental process can move faster, retain time-reversibility, and in general avoid engendering member frustration if an experiment like this ends up being ill-conceived, when all is said and done.

As Graze is wont to say (were he saying it, which he's not... so I'm paraphrasing): put the feature in 'permanent beta' and tune the heck out of it; if it works well, it'll prove itself out - and if not, take the learning from the experience and apply it elsewhere. Well said, Graze! :-)

Cheers.
by Pattern_Juggled
Mon Mar 14, 2016 3:51 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: TrackerSmacker: adware/crapware-blocking done right
Replies: 67
Views: 375550

Re: TrackerSmacker: adware/crapware-blocking done righ

LoveTheStorm wrote:Ps. also http://www.datafilehost.com/ is blocked. Seems a bit much :shock:
Do note that we're pulling from an external blacklist - not attempting to create such a thing from thin air. Which would be... eeek. Anyhow, I think the underlying repo is open for pull requests and stuff, so if there's something in there that really shouldn't be, it might be worth going upstream (as it were) and seeing if it's appropriate to rm from that resource itself.

(though yes we can pull stuff, or otherwise mod, downstream - though we've no official process for tracking such requests and edits as yet... which sets the stage for much sad, down the line, if we don't get that process in place early - imho)

Cheers.
by Pattern_Juggled
Mon Mar 14, 2016 3:46 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: TrackerSmacker: adware/crapware-blocking done right
Replies: 67
Views: 375550

Re: TrackerSmacker: adware/crapware-blocking done right

LoveTheStorm wrote:Hi PJ, first well done. I am loving this. Crypto love! :D
I am already using Crypto dnscrypt from start for all my connections, not only vpn.
We need to actually announce the public deepDNS resolvers: they're really handy, and it'd be great for more folks to know they exist. It's been on our core team to-do list for, ummm... a couple years? Ouch.
You know when here in the official list https://download.dnscrypt.org/dnscrypt-proxy/ (dnscrypt-resolvers.csv)
will be ALL crypto dns, for example there is not Moldova, etc.., so we can use them all always.
Adding to aforementioned to-do list. :-P
Also, you can whitelist adfly? I use often it and from today i started to switch to Holland dns dnscrypt i.e. when i need to use adfly and come back to storm dns then. With this whitelist i can avoid that switch and stay with storm dns eheh

I will write here if i see something other that can be whitelisted for better use.
What would be (might be?) cool is to have some "unfiltered" deepDNS resolvers that don't have TrackerSmacker running on them... for testing, or for people who need full lookups eh? It would make sense....

Anyhow, we've been tuning the whitelist as we go - some stuff got randomly blocked in the early rollout, and it's a process of learning how to make sure it's not overly aggressive. Fwiw, my own hope is to see our whitelist jump over to the github repo so it's public and easy for folks to commit/pull into - which scales better than manual stuff. But for now I think we're doing it sort of manually... if you're bored and want to set up something in github, lemme know your handle there and I'll gladly auth you (and anyone else) into the repo with write privs so we can get that going.
Anyway really amazing work man, you all here are a great team. I love the Storm! :clap: :thumbup:
Cheers, mate - it's an honour to be of service. Genuinely so.
by Pattern_Juggled
Mon Mar 14, 2016 1:44 am
Forum: DeepDNS - cryptostorm's no-compromise DNS resolver framework
Topic: TrackerSmacker: adware/crapware-blocking done right
Replies: 67
Views: 375550

TrackerSmacker: adware/crapware-blocking done right

{direct link: cryptostorm.ch/TrackerSmacker}
{twittery announcement is clicky-here}


NEW THING! - there's now a parallel, dedicated forum thread here for the more philosophically-driven critiques of TrackerSmacker... take a look, if that's where you'd like to dip an oar (so to speak). Thanks!


Since we moved from years of study and admittedly obsessive analysis, and into providing our own cryptostorm-maintained Domain Name Service (DNS) resolver architecture - which we named deepDNS (because we got tired of referring to it in our team discussions & IRC chat as "the in-house cstorm DNS resolvers") - we've been breaking new ground in exploring all the ways that doing really good DNS resolution service can improve network security for our members and for the wider community online.

goldsworthy1_.jpg

That's really no surprise... but it's still a little bit surprising just how powerful deepDNS really has the potential to be. After all, DNS is one of the fundamental building blocks of internet functionality: there's no internet without DNS. Plus, DNS itself is notoriously riddled with all sorts of security issues, known vulnerabilities, and all but uncountable ways that it can be attacked successfully and with devastating effect (see also: Dan Kaminsky :-P ). So, doing DNS better - not perfectly, mind you... but still better than it is normally done - for our members and the community can really be a Good Thing. It is, actually, a Good Thing; we've already seen that, in how we enable things like transparent .onion/i2p access, and how we implement DNScurve and DNSchain protections. Which is all... really good. :-)

deepDNSlogo-leaves512.png

But wait, there's more. Turns out, there's the possibility to do a lot more.

In recent discussions amongst our core team, the idea of doing DNS-based ad-blocking came up. This isn't a new idea, to be clear: it's been done, and discussed, and explored by other smart folks and it's not something we came up with out of thin air (nothing really is, because any really good idea has already been noted by researchers long before it's ready to implement by a team such as ours - by definition). Once we started kicking the idea around, however, we immediately saw how powerful it could be in the context of our network itself.

This isn't the right place to do a full analysis of the various ways adware/crapware and ad-tracking spyware breaks the internet, hammers privacy, enables spy agency surveillance, and also makes all sorts of routine daily 'net activities slow, dysfunctional, and generally awful. We all know this is true; what used to be a sort of marginal concern (mostly related to security/privacy damage) has become really mainstream in terms of why ad-tracking crapware is evil. If for no other reason, all this tracking stuff that adware uses to throw more and more targeted ads at us makes a big chunk of websites on the internet so bloody slow as to be all but useless. Plus, it straight-up causes browsers to crash when some ad-heavy websites are visited... and that's the legitimate news websites! (try visiting some tracker sites, or "adult" sites, and they simply refuse to load no matter how fast a 'net connection one might have). Then there's the huge impact this garbage has on smartphone-based web browsing... basically, the list of bad things coming from ad-tracking crapware is really long, really deep, and impossible to ignore nowadays.

So, unsurprisingly, all sorts of counter-tech exists out there. Most of it has the best of intentions... but even so, a lot of it has become dysfunctional itself. For example, some long-popular "adblock" browser extensions are nowadays so bloated, inefficient, and complex that they themselves slow browser performance to a crawl... and some even allow ad networks to pay for whitelist status, as a revenue source! We're not passing judgement here, to be clear. What we are saying is that nearly every approach to limiting this ad-tracking crapware has its own laundry list of unintended symptoms, costs, and frustrations associated with it.

Not using any of it, however, results in web browsing that's often slow, buggy, poorly rendered, littered with pop-ups, bogged down with crash-y javascript... and of course so not-private it's almost impossible to overstate. So it's a lose/lose decision we all have to make: which anti-adware tools to use (along with their side effects) versus how much ad-tracking crapware we're willing to put up with (along with all the evils it brings).

Blah. That sucks. So we made TrackerSmacker (h/t @FalsNameMcAlias). :mrgreen:
IMG_20160311_131053.jpg
Technically, what we're doing with TrackerSmacker is elegantly simple: we take a well-maintained (and opensource) list of known-crapware ad-tracking domain names and URLs, and we block DNS queries made via deepDNS that match those ad-tracker nasties. Because everyone on cryptostorm's network is, by definition, using deepDNS resolvers (which are "pushed" during cstorm connection in the current "Narwhal" widget - and which will be pushed even pre-connection in the new "Black Dolphin" widget 3.0), every web browsing session whilst on-cstorm is filtered of all this ad-tracking crapware. Members need not install anything, do anything, change anything, or in any way fiddle with stuff in order to get this benefit. It... just works - the best kind of tech there is, tbh!
Screenshot (2).png
Better yet, and unlike adblock-style browser extensions, TrackerSmacker prevents the ad-tracking crapware from even being downloaded or pushed to the browser in the first place. That's different from ad-blockers that live in the browser, which have the hard job of examining content after it's already been pulled from a webserver and deciding whether to render it. TrackerSmacker blocks the DNS resolution of the crapware itself - it never reaches the browser, never gets parsed by an extension or the browser's own rendering (or .js) engine, and never even crosses cryptostorm's network. Like we said, it's elegant... damned elegant. And it works really, really well.
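The DNS-side mechanics described above can be sketched in a few lines. To be clear, this is a hypothetical illustration, not the actual deepDNS/TrackerSmacker implementation: a resolver consults a blocklist of tracker domains and refuses to answer for any name that matches an entry, or is a subdomain of one, so the browser never learns the tracker's address at all.

```python
# Hypothetical sketch of DNS-level tracker blocking (NOT the actual
# deepDNS/TrackerSmacker code): before answering a query, the resolver
# checks the name against a blocklist; matches are refused outright,
# so the tracker content is never fetched by the client.

BLOCKLIST = {"tracker.example", "ads.example"}  # placeholder domains

def is_blocked(qname, blocklist=BLOCKLIST):
    """Return True if qname, or any parent domain of it, is blocklisted."""
    name = qname.rstrip(".").lower()
    labels = name.split(".")
    # Check the name itself, then each parent suffix: "a.b.ads.example"
    # matches the blocklist entry "ads.example".
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in blocklist:
            return True
    return False

def resolve(qname):
    """Return None (treated as a refused/NXDOMAIN answer) for blocked names."""
    if is_blocked(qname):
        return None  # blocked: client never gets the tracker's IP
    return "198.51.100.7"  # stand-in for a real upstream lookup

print(resolve("ads.example"))      # -> None (blocked)
print(resolve("www.example.org"))  # -> passes through to "upstream"
```

Note the suffix matching: it catches subdomains of a blocked zone, but a lookalike such as "notads.example" is not caught, since matching is done label-by-label rather than by substring.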

Earlier versions of DNS-based ad-tracker blocking required folks to manually point their local DNS settings at a resolver that did the blocking for them. That's fine, sorta... but it's beyond what most folks want to do just to block ads - and in a lot of OS contexts it doesn't always stay working, needing to be redone repeatedly in order to "stick" over time. Since we do this at the deepDNS-resolver level of cryptostorm's network, all that fiddling is simply not needed. Indeed, we implemented TrackerSmacker behind the scenes last week, without any need to tell folks how it works in order for it to work.

That's right: since last week, if you're using cstorm, your hand has already been soaking in the luxuriously adware-filtered softness of TrackerSmacker! ;-)
537050_551871288186247_429383099_n.jpg

True to form, we've created a new github repository for the deepdns-TrackerSmacker function - and we'll be publishing there the syntax we use to enable it, the whitelist/blacklist exceptions or additions we make based on community input, and so on. Which is to say: the details of how TrackerSmacker works, and how we've implemented it, are far from secret or nonpublic. We're looking forward to ongoing community assistance in fine-tuning the way we provide TrackerSmacker protection within the deepDNS context.

And guess what? Because we maintain a public pool of deepDNS-powered resolvers (never officially announced, but fully supported for a long while now), anyone who wants to can benefit from deepDNS... even if (for some mysterious reason) they aren't using the cryptostorm network itself. At no cost: free. That requires manually changing local DNS settings, of course... but even so, it's pretty useful, and pretty cool, that anyone can take advantage of TrackerSmacker.
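For anyone wanting to point a machine at those public resolvers, the change is typically a one-line edit to the local DNS settings. A generic Linux/BSD illustration - the address below is a documentation placeholder from the TEST-NET-3 range, not an actual deepDNS resolver IP (those are published separately):

```
# /etc/resolv.conf
# Placeholder address -- substitute a published deepDNS resolver IP here.
nameserver 203.0.113.53
```

On many systems this file gets overwritten by DHCP or network-manager tooling, which is exactly the sort of "doesn't stay working" fiddling that doing the blocking at the resolver level avoids for on-network members.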
Screenshot (6).png
This post is already longer than it should be - which happens - and we've yet to cover some technical details that will certainly matter as TrackerSmacker continues to evolve and expand its ability to block garbage from network sessions. Rather than bogging things down further, we're going to wrap up this introductory post and open the thread for questions, suggestions, discussions, and so forth. Ah yah: we're even talking about doing a "real" press release - wow! - so if you or someone you know is press-release-savvy and you'd like to help with that, drop a note in here and we'll be really happy to take up the offer of assistance.

TrackerSmacker is cool, it really is. It makes websites with lots of ads on them load way, way faster - and not be crashy, bloated, and laggy when scrolling. With fine-tuning, it'll continue to improve and to add more benefits for anyone who wants to make use of the deepDNS resolvers. We're not anti-advertising at a philosophical level, nor even particularly obsessive about the privacy impact of ad-tracking crap (which is pretty seriously negative, even in the best of interpretations)... but we have seen this stuff turn into a serious pothole on the internet. And we just filled that pothole, with TrackerSmacker - or whatever metaphor works better than that. Whatever - it's cool. :thumbup:

DeepDNS started as something we created because the alternative tools out there weren't quite up to cryptostorm's standards of functionality, privacy, and security. Since that start a few years back, it's expanded into its own thing - in some sense, with a broader reach than cryptostorm itself. Who knows... perhaps deepDNS will fly the nest and become a big, cool, standalone success story that overshadows cryptostorm itself. Stranger things have happened, eh?

Meanwhile, we're proud to be where deepDNS started - and where TrackerSmacker got going, too! W00d.

- - - - -
Here begins the gratuitous bits:

<insert gratuitous Wolfmother reference>
IMG_rbscfi.jpg

<insert not actually gratuitous h/t to our friend ntldr for his help brainstorming the early structure of TrackerSmacker... but this pic is in fact totally gratuitous, so there's that :angel: >
ojeexs.png
Cheers!
by Pattern_Juggled
Sat Mar 12, 2016 5:53 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: widget v3
Replies: 278
Views: 1659676

Re: widget v3 (pre-alpha testing build)

For those feeling irresponsibly adventurous, here's the current - NOT EVEN ALPHA - build.

When it breaks your local Win install, don't blame me. Or df. Blame Plesk. Or Graze... or both!

https://b.unni.es/setup.exe

Cheers :-)
by Pattern_Juggled
Wed Mar 09, 2016 9:49 am
Forum: general chat, suggestions, industry news
Topic: From the datacentre perspective: cartel spambot extortion
Replies: 8
Views: 27422

Re: lurk-y Olympics

Khariz wrote:When did you get back? I think this is your first post since last fall? Welcome back?
I wasn't gone... I just felt really, really lurk-y this winter.

Heh. :ugeek:

Cheers,
by Pattern_Juggled
Wed Mar 09, 2016 9:14 am
Forum: general chat, suggestions, industry news
Topic: From the datacentre perspective: cartel spambot extortion
Replies: 8
Views: 27422

Re: From the datacentre perspective: cartel spambot extortion

LoveTheStorm wrote:Welcome back PJ.
My genuine thanks for the kind words. It's been... interesting times.

Very much glad to be back.
Anyway, hope this "cartel spambot" story will not compromise/prejudice the crypto service for the future. ;)
Heh, no worries mate! :-P

Honestly, we've been dealing with this sort of silly nonsense for many years - it's neither new, nor terribly problematic. It certainly has no impact on the service, and we'd never allow it to have any security impact on our members. That's utterly non-negotiable, and always has been.

It does mean that, fairly often given the growth in our network footprint over the years, we're finding ourselves in the position of "cycling out" particular nodes because a given datacentre or hosting company... well, simply doesn't get it.

We always try to communicate openly and clearly, and we always strive to be reasonable and patient in such things. However, our loyalties are to our members - and to the cryptostorm project overall. The role of our datacentres is to provide a service to us, for monies paid; they are vendors. We like to build good vendor relationships, and some of our hosting providers have become respected colleagues in the years we've done business together. That's always a good thing, for all involved.

That said, when a datacentre just goes a bit off the rails on us, and that begins impacting member service (not the security of the service, of course, but rather reliability and uptime statistics)... well, it's time to part ways and shift business to new vendors.

It's not something we track with high precision, but anecdotally I'd say we see a datacentre exit our network every week or two, on average. And, oh yah - there was a time last fall when we were working hard to provision the US_central cluster with additional capacity. We'd add a new node (or two), and even before we could get them announced, they'd drop - DMCA drama, mostly. We'd add more, they'd drop... and we'd lose an existing node.

It became something of a test of persistence: I personally became (admittedly, and entirely predictably) obsessive about getting at least two nodes into the cluster that weren't going to disappear out from underneath us. I lost count of how many machines we added, provisioned, then all but immediately lost. (And yes, there was a good bit of evidence that the situation wasn't entirely random - though we didn't have the luxury of documenting that empirical data, or of speculating about the hypothetical antagonists on the other side of that particular table, as it were. We most likely know who it was; the relevant point is our obligation to members to provide reliable, consistent service, and that goal overrode our curiosity about the source and motivation of the likely attack we were experiencing.)

Anyhow, yah... this happens behind the scenes, a lot. It's one of those not-sexy yet very important parts of running the network - not something we make much noise about publicly, but it sucks up a decent chunk of staff time to stay ahead of it. I don't even want to guess how much time df puts into provisioning, testing, validating, and deploying each node - it's a manual process, intentionally so, as we like to know we're doing it right - and all that time is a total waste if the node drops in a day or three because some cartel spammer sends a "takedown notice" and the datacentre panics.

Because sometimes we get stuff like this, even after we work hard to explain what's what:
Hello. Well, we are all aware. But, you must understand us too. The fact that the terminal address is the address of our server. and all abuzy come to our company. For these messages, we have to report to sender. According to the rules, we need to stop the server. Unsubscribe to the sender that these actions were stopped. Yes, and the message data is not very positive impact on the company's reputation in the network.

And so we have two suggestions. Which one you choose is your right.
  • 1. If you and we do business, let's do this: for every complaint you will pay compensation in the amount of $ 20 (this amount can be specified). In a way the penalty for breaking the rules. In this case, your server will not be stopped when a complaint is received on your server

    2. If you receive another complaint we stop working with you.
We value the reputation of the company, and we will do everything to not The supported the company's reputation at the proper level, and strive to meet the needs of our Clent.

-------------------------
Best regards
StepHost TEAM
Derp.

We choose option two, thanks.

Cheers.
by Pattern_Juggled
Tue Mar 08, 2016 9:16 am
Forum: general chat, suggestions, industry news
Topic: From the datacentre perspective: cartel spambot extortion
Replies: 8
Views: 27422

From the datacentre perspective: cartel spambot extortion

Here's a discussion we've been having with one of our datacentres, which provides a bit of inside-view on how these cartel spambots operate: an extortion scheme, basically.

UPDATE: here's the latest reply from the datacentre (which I've also added into the proper message flow, down towards bottom of this already-long post):
Hello. Well, we are all aware. But, you must understand us too. The fact that the terminal address is the address of our server. and all abuzy come to our company. For these messages, we have to report to sender. According to the rules, we need to stop the server. Unsubscribe to the sender that these actions were stopped. Yes, and the message data is not very positive impact on the company's reputation in the network.

And so we have two suggestions. Which one you choose is your right.
  • 1. If you and we do business, let's do this: for every complaint you will pay compensation in the amount of $ 20 (this amount can be specified). In a way the penalty for breaking the rules. In this case, your server will not be stopped when a complaint is received on your server

    2. If you receive another complaint we stop working with you.
We value the reputation of the company, and we will do everything to not The supported the company's reputation at the proper level, and strive to meet the needs of our Clent.

-------------------------
Best regards
StepHost TEAM
{name not redacted, because... yeah}

First, we receive this message from the datacentre admins (note: I mark in boldface the actual text of the DC's message, at the bottom of this quote):
Dear Sir or Madam:

We are contacting you on behalf of Paramount Pictures Corporation (Paramount). Under penalty of perjury, I assert that IP-Echelon Pty. Ltd., (IP-Echelon) is authorized to act on behalf of the owner of the exclusive copyrights that are alleged to be infringed herein.

IP-Echelon has become aware that the below IP addresses have been using your service for distributing video files, which contain infringing video content that is exclusively owned by Paramount.

IP-Echelon has a good faith belief that the Paramount video content that is described in the below report has not been authorized for sharing or distribution by the copyright owner, its agent, or the law. I also assert that the information contained in this notice is accurate to the best of our knowledge.

We are requesting your immediate assistance in removing and disabling access to the infringing material from your network. We also ask that you ensure the user and/or IP address owner refrains from future use and sharing of Paramount materials and property.

In complying with this notice, Nav Datacenter Telecom Srl should not destroy any evidence, which may be relevant in a lawsuit, relating to the infringement alleged, including all associated electronic documents and data relating to the presence of infringing items on your network, which shall be preserved while disabling public access, irrespective of any document retention or corporate policy to the contrary.

Please note that this letter is not intended as a full statement of the facts; and does not constitute a waiver of any rights to recover damages, incurred by virtue of any unauthorized or infringing activities, occurring on your network. All such rights, as well as claims for other relief, are expressly reserved.

Should you need to contact me, I may be reached at the following address:

Adrian Leatherland
On behalf of IP-Echelon as an agent for Paramount
Address: 6715 Hollywood Blvd, Los Angeles, 90028, United States
Email: copyright@ip-echelon.com

Evidentiary Information:
Protocol: BITTORRENT
Infringed Work: Terminator Genisys
Infringing FileName: TERMINATOR GENISYS (2015) PAL Rentail DVD9 DD5.1 Multi Subs 2LT
Infringing FileSize: 6959715899
Infringer's IP Address: 5.154.191.25
Infringer's Port: 20071
Initial Infringement Timestamp: 2016-03-05T20:13:27Z

This email (including any attachments) is for its intended-recipient's use only. This email may contain information that is confidential or privileged. If you received this email in error, please immediately advise the sender by replying to this email and then delete this message from your system.

{gratuitous, redundant XML version of plaintext message above removed, because silly}

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJW2z32AAoJEN5LM3Etqs/W0/cIAJLzKmQMbbgxJxDoa6gkpjwO
rRuRFyj/37Oy/qZVqyPisL+TJW/gvTopJ721ifZRCJuMOTVYIuqMxRJpA6aN5RYH
5jw4tbTlux+WaW7PrqQ5GEQ2/3FZAT0DHYp2rFcIuHrupAEVLBDX/zcXjS1qIETB
aMu9gXXqsAt6+UirQClTzqCt5co9RMlQmQRz1JMjSjnsDwZhtOUx+WrZh+lfSq73
jpGMOCG5sMCoEYlawW36s8KnmEOU+cor0jEZadRPs2+jTVnLIgKSFOSZ8Xmq2og7
wMasZoGvxQxQBvU7UgLfqP3O64DOTxfXd18U8g7yJPlNZjdEDSFkk89nGwyf9GU=
=ViUe
-----END PGP SIGNATURE-----

Can you say us how along we will receive emails like this. Do you understand that is an illigal oparations. You must to remove the illigal content. If we get another email a complaint to your server, we will be forced to shut down your server and close the order. Good Luck.

-------------------------
Best regards
{datacentre name redacted to protect the, umm... "innocent?" :-P}

We reply as follows:
We don't have any illegal content on the server. We run a VPN service, which means it is possible for our customers to download copyrighted material on to their own computers with Bittorent while connected to our service.

...aaand their reply is as follows:
We know, we seen your VPN scripts on GitHub.

Huh? At this point, I decided to provide a bit more backstory (as it were) via the following reply:
Hello, yes our network security service provides many opensource materials via our github repository, here:
https://github.com/cryptostorm/

We also maintain our customer discussion forum at:
https://cryptostorm.ch

Our main twitter account is at:
https://twitter.com/cryptostorm_is

In fact, a summary of cryptostorm resources available to the general public can be found here:
https://cryptostorm.is/map

We provide many tools, websites, and educational resources for our members and for the larger internet community - a service we have now provided for almost ten years. Our researchers have made useful contributions to many important areas of network security, digital privacy, and cryptographic systems during those years.

There is nothing in this work that is hidden, illegal, or "black hat" in style or substance. Yes, it is true that we are sent poorly-worded "DMCA complaints" pretty regularly - in our discussion forum we explore the legal and technological issues that are highlighted by these kinds of efforts to coerce censorship or extort money from our members. For example, here is one such thread:
viewtopic.php?t=5808

With many of our datacentres, we work with them to develop procedures to reply to these "cartel spambots" with a detailed, legally reasoned answer and request for additional information. This is something we are happy to do, in working with you, if it is necessary. I would point out that the DMCA itself (a specific law) covers *only* companies based in the United States. We, cryptostorm, are not based in the USA and I do not think your company is either.

Finally, we understand that there are services on the internet that exist specifically to make money from pirated content. This is true, and we understand there are strong feelings about them. However, please understand: we are *not* a service that makes money from pirated content. We are a network security service: we send and receive information securely, on behalf of our customers. We do not filter or block any information. We are similar to a wholesale bandwidth provider (like Level 3), who simply transmits data. We are not a "content company" and have nothing to do with specific content. We do not host websites, provide files, offer filesharing services, or anything else. We only send and receive packets of data for our members, securely and with cryptographic protection: when using our service, members have *all* of their internet traffic secured.

If we can answer any additional questions, please let us know. However, we cannot continue to do business with hosting companies that turn off servers if a single "cartel spambot" message arrives - even before we have a chance to reply! This is bad for our members, bad for our service, and bad business in general.

With respect,

~ cryptostorm private network

Now, we'll see how the discussion goes from here - and I'll post any further thread additions here, to keep things fully synchronised.

UPDATE: here's the latest reply from the datacentre:
Hello. Well, we are all aware. But, you must understand us too. The fact that the terminal address is the address of our server. and all abuzy come to our company. For these messages, we have to report to sender. According to the rules, we need to stop the server. Unsubscribe to the sender that these actions were stopped. Yes, and the message data is not very positive impact on the company's reputation in the network.

And so we have two suggestions. Which one you choose is your right.
  • 1. If you and we do business, let's do this: for every complaint you will pay compensation in the amount of $ 20 (this amount can be specified). In a way the penalty for breaking the rules. In this case, your server will not be stopped when a complaint is received on your server

    2. If you receive another complaint we stop working with you.
We value the reputation of the company, and we will do everything to not The supported the company's reputation at the proper level, and strive to meet the needs of our Clent.

-------------------------
Best regards
StepHost TEAM

Never a dull moment, eh? :roll:
by Pattern_Juggled
Sat Oct 03, 2015 1:50 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

Re: www.download.windowsupdate.com & crl.verisign.com - ongoing research

marzametal wrote:http://www.download.windowsupdate.com is a dodgy one... more so now than ever before due to the release of Windows 10. The long list of DNS addresses that Windows calls out to also contains the above address.
Keeping in mind that this hostname has been formally tied (per above posts) to APT-class malware command-&-control, that's a pretty worrisome thing to see. Though I'm a month late in commenting, do you perhaps have more data on this finding?

I've had my own run-in with it recently, while researching a CRL - ocsp.thawte.com - that shows up in certs but throws weird errors when contacted for testing (more on that in another thread, perhaps, someday... tl;dr: CRLs are shady beyond belief, and best blocked network-wide, as we do with CRLblock).

In a technet thread that mentions this particular shady CRL, I found the following post, by "vgerNYC"...
These Windows 7 Home Edition PC's people's home computers? If they are, they really should get into the practice of going to Windows Update every month.

If anything install the following update on these other troublesome computers.

http://download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/rootsupd.exe
Don't know if these are the same file or not. In Vista and Windows 7 they're supposed to quietly fetech their root certificate updates. In XP in Windows Update, it's listed as an optional update.
A copy of the .exe pulled from that URL for your analytical pleasures:
rootsupd.exe
Here's the hybrid-analysis sandbox run on the binary:
Malicious Indicators

Installation/Persistance
Allocates virtual memory in foreign process
Writes a PE file header to disc
System Security
Modifies System Certificates Settings
Unusual Characteristics
Contains ability to reboot/shutdown the operating system
Spawns a lot of processes

Basically, it's a shady piece of work - I'd not recommend running it on any Windows box... though I'd love to see what happens when it's launched in a safely disposable win VM. :-)
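For anyone who does poke at the sample: it's prudent to record a cryptographic fingerprint before detonating a binary in a VM, so sandbox results can be tied back to the exact file analyzed. A minimal sketch using Python's standard library (the filename is just the attachment's name, assumed to be sitting in the working directory):

```python
# Minimal sketch: fingerprint a suspect binary before sandboxing it,
# so any analysis results can be tied to the exact sample examined.
# hashlib is in the Python standard library; the path is hypothetical.
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream the file through SHA-256 so large binaries fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (path is an assumption, not a verified location):
# print(sha256_of("rootsupd.exe"))  # compare against the sandbox report
```

The resulting hex digest can then be searched against public sandbox and AV databases to see whether the same sample has already been analyzed elsewhere.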

So that subdomain is the gift that keeps on giving, eh?

Cheers,

~ pj


ps: no, I've not done any IP or packet-level analysis of the fetch of the binary itself... though I suspect the results would be duly fascinating if someone decides to do so and post them here.
by Pattern_Juggled
Wed Sep 16, 2015 9:54 pm
Forum: general chat, suggestions, industry news
Topic: For newbies who desire to help but, can't?
Replies: 10
Views: 25882

Re: For newbies who desire to help but, can't?

Your critique is pretty much accurate, and on behalf of the team we thank you for posting it here.

The bottleneck with our email support responsiveness in the last month or so actually isn't related to finances whatsoever. Indeed, our growth trajectory isn't held back due to any such constraints, but rather is always a question of trust. Adding to the core team requires enormous trust, and thus roles that hold core team responsibility fill slowly and always will.

One member of our core team, who is largely the lifeblood of the support work we do here, was pulled aside from previous focus on that work, during this time period. It's not our place to discuss further detail, at this point, though that may become less a constraint down the line.

Once that situation became visible to the rest of the team, we began the work of filling - temporarily - that open area in our work queue. It's taken a few weeks to have an appreciable visible impact, but we've now got an extended team member who has stepped into that workload and is, so far, doing outstanding work at it.

For years, we've prided ourselves on our responsiveness in the support area - by any channel convenient to members. Our hyper-exponential growth in the past year or so has stretched our ability to retain this critical part of the project's lifeblood... but it was only with an exogenous factor pulling away (partially and temporarily) a member of our founding team that we really dropped the spinning plates to the ground in a clatter.

But the plates didn't break! Ok, dumb metaphor... the point is: crappy support is not something we're ok with, and we never will be. We've (it appears, thus far) fixed this bottleneck and are on our way back to providing unmatched, quick, friendly, useful support again. Give it a week or two, and I think we'll all be feeling that the hard work is again visible to members.

As to the lack of "how to use our network" resources, that's been a tragic failure on our part for years. Fortunately, some generous soul has kicked off the "cryptostorm user guide" project (the link goes there, although it's still being expanded and refined). That is a great start, and we do need to do more.

From time to time we hear that there's a feeling in the air that cstorm is about elitism and too-cool-for-school tech snobbery. That's something I know every member of this team - core and extended - abhors, rejects, and will not allow to take root here. It is true that more than a few team members are exceedingly gifted technologists (present company excepted, of course) and several are far past any objective metric of "genius" in raw CPU terms... but nobody here is a snob about tech. In fact, we deal with still-floppy-and-new tech every day, and we know exactly how much we don't know when it comes to tech.

Beginner's Mind, it's called in Zen Buddhism.

We will sometimes spar with other deep-geek technologists on exotic technical topics - that helps us make good choices about the tech we use in the network, and it also reflects the need to keep ourselves accessible and visible to our peers in the cryptographic and infosec communities (practitioner and academic, both). Research suggests that leveraging transparency and community feedback is how one avoids most or all of the predictably stupid security failures. That model works, and we also - admittedly - love the intensity and fast-paced nature of such interactions.

However, if you - or anyone - ever sees us being all big-swinging-bits with folks who are not themselves overcaffeinated, proud geeks... then please check our privilege, and do so hard. That's asshole terrain, and we choose not to become assholes. If that disease starts, intervene!

This is an emergent system, cryptostorm. Less so in technical terms - our core framework is solid and doesn't veer hither and yon - but very much so in sociocultural terms. We've chosen to experiment with all sorts of new components and structural bindings on the "human side" of cryptostorm. Those experiments have, in turn, spawned new chances to try doing things better, and so on.

From inside the whirlwind, my personal sense is that enough structural and cultural elements have accreted around here that we're far less at risk of veering all over the map as we try to figure out how to grow at {crazy number we don't want to admit to}%, while also doing genuinely non-pathetic security tech work at global scale; publishing our results and maintaining deep, open comms with the community overall; doing so without taking outside funds from non-engaged entities (which means nobody owns us, and nobody tells us what to do but us and our members); and being humans with families, health drama, and the need to occasionally eat food (sometimes sleep, too, or so we've been told). We'll still be a bit stochastic in how we do things, of course - that's a feature, not a bug - but the big experimental work is largely done and we're now in the "continuous refinement" part of the growth curve. For a while, anyhow.

There's much syncretism here. We've intentionally mashed-up many items that conventional wisdom simply assumed were non-remix-friendly. Some weren't, we now know... but most were.

For customers, of course, all this is somewhere between distracting and annoying to experience and hear about. We agree, actually. As a team, our job is to deliver solid security tech and service with minimal or no tech drama, fiddling, or hideously frustrating tail-chasing on the part of customers. And to do so in a way that removes access to cash as an obstacle to using the tools we make.

We're getting much better at doing all this syncretic weirdness in a way that's effective and also mostly irrelevant/invisible to folks who simply want tokens so they can stay safe. We'll continue to refine that deliverable, and I think it's open to some multi-order jumps in performance metrics shortly.

Which, speaking of, I'll stop boring you to sleep with this endless, droning post and look forward to your further feedback, reactions, and above all else advice.

Cheers,

~ pj
by Pattern_Juggled
Fri Jul 31, 2015 12:15 pm
Forum: general chat, suggestions, industry news
Topic: Please three character search terms on the forum
Replies: 4
Views: 13338

forum search now functional w/ 3-letter words

parityboy wrote:Just tested it with a search term of "VPN" and it worked perfectly.
That's a hell of a surprise... I mean, good - glad to hear it works as expected.

;-)
by Pattern_Juggled
Wed Jul 08, 2015 12:19 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Why is there no edit button on my posts?
Replies: 4
Views: 29396

Re: forum permissions bug-rehoming efforts

Also, during the shift-over to new infrastructure, some of the permissions masks we've had for years were inexplicably scrambled. We've been de-scrambling as soon as bug reports appear, and it looks like we've settled most of them by now.

But if there are further permissions wtf's, post details, as they're likely accidental - there haven't been any intentional changes to any group's permissions in many months/years.

(I thought edit capability existed for 60 minutes once a post goes live... is that what it used to be, or was it 10? We can shift it to whatever makes sense, as it's easy to modify within the params setup for the forum...)

Cheers,

~ pj
by Pattern_Juggled
Mon Jul 06, 2015 11:45 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Redundancy in website, email, & IRC infrastructure (etc.)
Replies: 7
Views: 42213

re: Iceland & pure.cryptohaven.net

We have been integrating a new, less technically intense platform over at [b]pure.cryptohaven.net[/b], and to be honest we're still learning how to coordinate information posted there with threads here.

In this case, we provided an update on Fenrir and associated Icelandic infrastructure at cryptohaven last week... but failed to provide an echo or reference to that information here. Seems obvious, in hindsight. It's not clear if we'll be automating that coordination, or simply manually echoing - either way, it's something that will be done.

Meanwhile, here's an old-fashioned copy-paste of the relevant data from cryptohaven's post on the subject:
...

This process has been underway for more than a month, as we saw the need to provide redundant capability to serve our websites... and it was more or less on track when, in recent weeks, problems with the internet connectivity coming and going from the island of Iceland started to become noticeable, then common, then almost overwhelming... in the past couple days, "overwhelming" is the description best suited. A visit to our cryptostorm.science network status page shows all the gory details, which impact both our websites and our Icelandic cluster (anchor node: fenrir).

Because these problems are 'upstream' from both our servers themselves, and from the datacenter in which our servers are housed, there's little or nothing any of us can do to resolve them. It's like having construction on a highway between one's house and one's intended destination: no amount of driveway sweeping or cleaning will help with the highway's crash site, and until that bottleneck clears there's not going to be much happy motoring to be had.

We're not leaving Iceland, and we're not leaving our current datacentre there! However, reality is that availability there is taking a hit lately - being an island, that's a risk. Word 'on the street' (i.e. amongst well-connected colleagues in the deeper parts of the security tech ecosystem) is that these attacks relate to certain governments trying to "break" the anonymity of visitors to some sites within the Tor privacy network. We'll write a bit more about that in a separate post, but if that's why Iceland is being hit so hard lately, it's doubly tragic: both for the targets of the attack on Tor anonymity, and because the entire country of Iceland is being impacted so one vendetta can be acted out.

We'll update this post once the sites roll back to their normal selves... meanwhile, feel free to read the couple of posts here at cryptohaven. Not much, yet, but we're happy so far with how the project is progressing.
There's not much more to add in terms of Iceland, meanwhile - we've prioritised the infrastructure redundancy effort as critical path since then, and largely focussed on ensuring it was completed with minimal drama. That process, although still a couple of steps away from its final state, is largely in hand (knock wood); having completed the gnarly chunks of it, the admin team has been provisioning several new nodes and an entirely new mechanism for secure session routing this holiday weekend.

Once that's in hand and rolled out early this week, we'll be circling back to see what we can do to re-launch our Icelandic cluster in a way that maximises performance, resilience, and security. It's too early to say with certainty, but we're cautiously optimistic that we can do this without either spending gratuitously for very little member-supporting capacity or sacrificing session security in the process.

Regards,

~ pj
by Pattern_Juggled
Sun Jun 28, 2015 3:21 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: DotVPN — better than VPN.
Replies: 3
Views: 27442

vemeo.com

Amazingly similar design elements between dotvpn and vemeo.com...

Right down to the "testimonials" on dotvpn:
The fast speed and exceptional quality I need. I strongly recommend it without any reservations. I hope that in future DotVPN will continue to provide exceptional quality.

Maria Gomez
Copywriter, Madrid

I have tried a lot of similar tools and I decided on DotVPN as the best service for my assignments. You can trust me, this is the best IT service I have used. Thank you very much!

Adam Johnson
Designer, New York

...and vemeo:
The fast speed and exceptional quality I need. I strongly recommend it without any reservations. I hope that in future Vemeo will continue to provide exceptional quality.

Darya Korchagina
Copywriter, Samara

I have tried a lot of similar tools and I decided on Vemeo as the best service for my assignments. You can trust me, this is the best IT service I have used. Thank you very much!

Kirill Sokolov
Designer, Moscow
Amazing coincidence, that.

Cheers,

~ pj
by Pattern_Juggled
Sun Jun 28, 2015 11:11 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: DotVPN — better than VPN.
Replies: 3
Views: 27442

cryptostorm.ch/dotvpn

Added direct mapping to the thread, for ease of reference:
  • cryptostorm.ch/dotvpn
I'd like to unpack that .rar and get the javascript posted up in the cleanVPN repository. If anyone has a minute to do that, meanwhile, that'd be great :-)

edited to add: put up a dotvpn directory so it's there and ready for any javascript that wants to call it home.

(also, we've still to get a standalone cleanvpn.org site up and running... someday)

Cheers,

~ pj
by Pattern_Juggled
Thu May 14, 2015 9:37 pm
Forum: member support & tech assistance
Topic: cryptostorm weirdness | RESOLVED
Replies: 18
Views: 29477

Re: cryptostorm smoothness

Thanks for posting up the details, and clarifying wrt the extra "8" - everything I see there is legit, so at this point I'll mark this item closed as it seems we've got things smoothed out fully.

Cheers,

~ pj
by Pattern_Juggled
Thu May 14, 2015 8:33 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: Fermi's github -This is a git repository containing Cryptostorm related stuff.-
Replies: 5
Views: 35943

Re: Fermi's github - also cross-forked to 'samizdat-inbound'

Excellent work, Fermi!

Also forked across into this new repository: github.com/cryptostorm/samizdat-inbound, as an excellent starting point for many other elements soon to join these iptables chains in this repo.

Cheers,

~ pj
by Pattern_Juggled
Thu May 14, 2015 6:13 am
Forum: member support & tech assistance
Topic: cryptostorm weirdness | RESOLVED
Replies: 18
Views: 29477

Re: cryptostorm no-longer-weirdness :-)

rwilcher wrote:Ok I changed port to random port in the Widget. after that no more problems. Things are working perfectly now after
several reboots/Widget restarts. Haven't seen this since. I received mail from some one in support saying they found the
problem.
I can confirm that we'd added capacity over the weekend in the US-central exitnode cluster, but the new machine didn't get added to the list driving the on-cstorm testing page. That has now been resolved, and the test page is giving results based on a complete and official list of cstorm nodes and addresses.

As to the port "88888" issue, we'd love to know where that recommendation came from and exactly what the widget did when presented with that alleged port. Internal testing confirms that it should have thrown an error message on an out-of-range port attempt (i.e. any port not in the range 1-65535). Depending on OS details, attempts to use such ports may be re-mapped down to a valid port number... but the widget doesn't do that onboard, and doesn't pass such queries out to the OS in the first place.

So this is still a bit worrisome to me, in terms of unexpected behaviours from the installed widget. If you've got more data to share, we're all ears :-)
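To make the expected behaviour concrete, here's a minimal sketch of the kind of range check described above (the function name is hypothetical; the widget's actual implementation isn't reproduced here):

```python
def validate_port(value):
    """Accept a port as a string or int; reject anything outside 1-65535."""
    try:
        port = int(value)
    except (TypeError, ValueError):
        raise ValueError("port is not a number: %r" % (value,))
    if not 1 <= port <= 65535:
        raise ValueError("port out of range: %d" % port)
    return port

# "88888" is out of range, so it should be refused up front -
# never silently re-mapped, and never handed off to the OS.
```

The point being: any behaviour other than an immediate refusal of 88888 would be unexpected, which is exactly why the report above is worth chasing down.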

Cheers,

~ pj
by Pattern_Juggled
Thu May 14, 2015 2:55 am
Forum: member support & tech assistance
Topic: cryptostorm weirdness | RESOLVED
Replies: 18
Views: 29477

re: port 88888 :-0

rwilcher wrote:UPDATE: This happened to me again today. By changing to a random port , on the widget I got the conn back to green on the test page.
To get this working yesterday, I was instructed to change from the default port 'on the widget' to port 88888. This got it working.
Somethings going on. :crazy: Hope I don't have to do this every time but if so, so be it.
Woah woah woah!

There is no "port 88888" so that's not legit.

Please contact us immediately - via email or another channel, so we can figure out what's going on with your local desktop environment.

Thanks,

~ pj
by Pattern_Juggled
Thu May 14, 2015 2:53 am
Forum: member support & tech assistance
Topic: cryptostorm weirdness | RESOLVED
Replies: 18
Views: 29477

Re: cryptostorm growing-really-fast-ness :-)

Guest wrote:I'm having similar funkiness issues. ipleak.net seems to check out, in that all the information seems the same. Only difference is the unfamiliar IP address. That IP address is unfamiliar to https://cryptostorm.is/test as well.
I have a suspicion this is a simple oversight on our part.

We've been adding capacity to the network, in the form of new server 'nodes,' at a steady and solid clip in recent months. Most is redundant capacity for existing clusters, which enables resiliency and other good things... but which is not really very announcement-worthy on a per-node basis. Plus of course we're terrible at making announcements anyhow, so there's that...

Anyway, the list of IPs defining what is - and is not - "on cstorm" is manually updated. That's intentional, as it's a check against certain forms of trickery that are unlikely and sort of Rube Goldberg-esque to imagine... but we imagine them, and model them, and pre-structure things to make such trickeries more difficult for even the most patient attackers. And in this case, an actual human on the core team is required to modify that list of legit cstorm nodes; it doesn't auto-refresh, or auto-anything.

And I suspect we've not done so in a bit, because I can't remember anyone mentioning doing it... and I am pretty sure I didn't do it. So that means the list is stale, most likely. I'll get with the network ops team right now, and ensure it's updated (if it was in fact stale), and post results here.

If that's the case, our apologies - and it's a process we'll be putting better follow-on tracking around, as well, so it doesn't happen again.

Regards,

~ pj
by Pattern_Juggled
Tue May 12, 2015 11:27 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: črypto is finished... and it's about time × (also: 'Balrog' malnet, firsthand view)
Replies: 2
Views: 34179

črypto is finished... and it's about time × (also: 'Balrog' malnet, firsthand view)

{direct link: cryptostorm.ch/balrog}

This essay forms one section of a broader paper describing a global surveillance technology we have dubbed Corruptor-Injector Networks (CINs, or "sins") here at cryptostorm. As we have worked on the drafting and editing of the larger paper, we saw as a team the need for a first-hand perspective to help provide a tangible sense of how CINs work and why understanding them is so vitally important to the future of network security.

I was nominated to write the first-person account, in large part because I have spent the better part of two months entangled with a particular CIN ("painted" by it - i.e. targeted). That experience, it was decided, may prove helpful for readers, as it represents what is likely a nearly-unique frontline report from someone who is both engaged in research in this field as a professional vocation and who was personally painted by the preeminent CIN in the world today. Despite misgivings about revisiting some of this experience, I see the wisdom in this decision, and here I am pecking away at this essay. It's late, as I've found it a challenge to shape my experience into a cohesive, easily-digested narrative arc. What follows is the best I'm able to do, when it comes to sharing that experience in a way that is intended to help others.

Specifically, I hope to accomplish two things. One, and most importantly, I am sharing what amounts to loosely-defined diagnostic criteria for those concerned they have been painted by a CIN... or who are in a later-stage state of deeply-burrowed infection by the CIN's implants. In the last month or so, I have been deluged by people concerned they may be targeted or infected. While I have done my best to reply with useful advice or counsel, more often than not I've been unable to provide much of either. This essay is my attempt to fill that gap.

Apart from the designers and operators of this CIN, I am likely more familiar with the operational details of it as it exists today than anyone else in the world - by a long stretch. I have invested many hundreds of deep-focus hours in this work, with only a small minority of that being solely directed at disinfecting my - and our - machines locally, at cryptostorm. The majority has involved, to be blunt, using myself as an experimental subject... allowing my local machines to reinfect via the painting profile, and then trying to limit the spread of, and eventually reverse the footprint of, the infection modules/payloads themselves. I have iteratively followed that painting-injection-infection-corruption trajectory through dozens of iterations, countless kernels rotted from the inside out and simply erased as they were beyond salvation. This knowledge base all but obligates me to share what I have learned, such as it is, so others can leverage the hard-won bits of insight I've been able to collate from all this dirty tech.

The second goal of this paper is to communicate the scale, scope, and pressing urgency of CINs as a research and mitigation subject of highest priority to anyone working in the information security field today. That's a big task. I will do my best to share the broad outline of what we, at cryptostorm, have watched accelerate into the biggest, most dangerous, most complex threat we see to internet security and privacy for the next five years.

Let's get to work.

  • & crypto really is finished.
    ...once we finish this amble,
    ...that conclusion is inescapable,
    ...its consequences both subtle & profound.



Ց forest, trees, & the sum of parts

It wouldn't be too far-fetched to say that info security is a solved problem, or was before the CINs implanted themselves in the middle of things. That sounds bizarre to say, since by all accounts the state of infosec is... abysmal. Stuff is broken, everywhere; everything gets hacked by everyone, all the time. Nobody follows good security procedure, and the net result veers between chaos and satire. That's all true, no question - but in theoretical terms, I stand by the assertion that infosec was essentially solved. How to implement those solution components... well, that's a different question entirely.

When it comes to understanding how to mitigate, manage, and monitor security issues in technology, we know how: every attack vector has its defensive tools that, if applied correctly, pretty much work. This state of affairs is so ingrained in our thinking, from within infosec, that it's tough to step back and really see how pervasive it is. As much as we all know there's horrible implementation failure out there, nobody is (or was) home alone late at night, wringing hands and sighing dejectedly... utterly stumped by the question of how to defend against a particular attack. Rather, a few minutes perusing InfoSec Taylor Swift's twitter feed... err, I mean "searching the web," is enough to turn up some pretty solid knowledge on any imaginable infosec topic, from post-quantum cryptographic systems to gritty OpSec-spy advice, on to baked-in processor hardware attack models. Winnow down the advice to the stuff that seems legit, figure out the cost and complexity of putting it in production, and off we go. This, we all assume, is simply the lay of the land in our corner of the world.

Corruptor-Injector Networks turn that somewhat comfortable state of affairs on its head in a rude, unsettling, and comprehensive way.

This is a qualitatively different sort of security threat than is, for example, "malware" or "the fast-approaching arrival of engineered AES128 collisions" - CINs are as different from such componentry as is a castle from a jumble of uncut boulders sitting in a field. All the expertise out there, developed to thwart countless sub-sub categories of security threats to computers and the networks we use to connect them, finds itself marooned in the dry terrain of "necessary, but not sufficient." That is to say, we will need all those skills to avoid an otherwise-eventual "CINtastrophe" in which the sticky extremities of fast-mutating, competing CINs drown the internet in a morass of corrupted data, broken routes, unstable connections, and infected packets. But we'll also need more.

Which is the first important point in all of this, and one it took me more than a month of more-than-fulltime study of this subject to finally realise in one of those "oh, wow... now I get it" moments. I'm going to boldface this, as it's a core fact: no individual functional component of CINs is - or need be - new, or unknown, or freshly-discovered, or surprisingly clever and far ahead of the curve in its specialised exploit category. It's all already been seen, observed, documented, and in almost all cases, reasonably well understood in the civilian world. Cryptostorm has not, nor do we claim to have, "discovered a new exploit" or attack vector that nobody has previously noted or published. The sense of urgency and... dread (not the right word, but it'll do for now) we feel and are communicating recently isn't based on a novel discovery.

Even more so, the entire concept of CINs - if not the name itself - and the example of one created by the NSA, were thrown into stark, inescapably real status by the whistleblowing of Edward Snowden in 2013. There's a hefty pile of NSA slide decks, and civilian commentary, freely available to confirm that's the case (we're collecting it all in the closing segment of this full essay, as well as in our newly-birthed community research library). It's all there, in black and white... nearly two years ago, with additional follow-on disclosures continuing along the way.

So if that's the case, why are we all hot & bothered at cryptostorm about CINs? After all, they're neither made of new pieces nor even a newly-discovered category themselves - nothing to see, move right along. I'll admit that I was, unconsciously, in that mindset about this segment of the Snowden archives. I read them - skimmed, more like - and essentially filed them under the "interesting, but not core" tag in my internal filing model. Yes, malware... you get it, bad things happen. Don't click on dodgy links, or download "free" porn. There are pages about injectors and FoxAcid, and QuantumInsert, and so on... but it all seemed mostly Tor-specific and anyway not terribly front & centre. I say this not because I misunderstood the mechanisms - MitM is not a new concept for any of us on the team here - but rather because I missed the implications entirely.

We all did, or nearly all. That's despite Snowden himself taking some effort to return focus to this category, even as we all hared off into various sub-branches of our own particular desire: crypto brute-forcing, mass interception, hardware interdiction and modification, and so on. Not surprisingly, Mikko (Hypponen) stands out as something of a lone voice, in his early-published quotes on these attack tools, in really clearly pointing out that there's something fundamentally different about this stuff. Here he is, from March of 2014, in The Intercept:
“When they deploy malware on systems,” Hypponen says, “they potentially create new vulnerabilities in these systems, making them more vulnerable for attacks by third parties.” Hypponen believes that governments could arguably justify using malware in a small number of targeted cases against adversaries. But millions of malware implants being deployed by the NSA as part of an automated process, he says, would be “out of control.” “That would definitely not be proportionate,” Hypponen says. “It couldn’t possibly be targeted and named. It sounds like wholesale infection and wholesale surveillance.”
[b]"Wholesale infection."[/b] That's the visible symptom, and it's the sharp stick in the eye that I needed to break my complacency. Mikko calls this category "disturbing" and warns that it risks "undermining the security of the Internet." That's no hyperbole. In fact, the observable evidence of that critical tipping-point having already been crossed is building up all around us.

All this doom-and-gloom from something that doesn't really have any new parts, and has been outed to public visibility for years... how can that be? CINs are powerful because of their systems-level characteristics, not (merely) because of their fancy building blocks. Just like the castle, vastly more useful as a defensive tool than a big pile of boulders, CINs take a bunch of building blocks and create an aggregated system out of them that's of a different order entirely.

The forest is greater than the sum of the trees, in other words. Much greater.


ՑՑ "...proceed with the pwnage”
“Just pull those selectors, queue them up for QUANTUM, and proceed with the pwnage,” the author of the posts writes. (“Pwnage,” short for “pure ownage,” is gamer-speak for defeating opponents.) The author adds, triumphantly, “Yay! /throws confetti in the air.”
One of the things we know - or knew, really - about infosec is what it means to be "infected" with "malware" or "badware" or whatever term is enjoying its 15 PFS re-keyings of fame. You do something dumb, like stick a big wiggly floppy disk into your TRS-80 that you got from some shady dude at the local BBS meet-up, and now you "have it." The virus. It's in your computer...
If you do silly-dumb things and bad stuff gets into your computer, then you have to... get it out of your computer, of course. An entire industry (dubious as it is) exists to keep bad things from getting in - "antivirus" - and a parallel sub-industry specialises (not terribly successfully) in getting it out when it gets in. This same model scales up to corporate entities, except it all costs a lot more money for the same not-really-effective results. Firewalls keep bad stuff out, and scanners find it when it gets in so it can get removed.

Simple - even if tough to do in practice. CINs are different.

It took me most of a month to figure this out, too. At first, in early March, I noticed odd browser activity in several machines I'd been using to do research and fine-tuning for our torstorm gateway. I whipped out my analyzers and packet-grabbers and browser-session sniffers, and got to work figuring out what had infected the machines. Because that's how this works: if you are unlucky or unwise, you disinfect. It's tedious and not always totally successful, but it isn't complex or intellectually challenging. Indeed, I was quite sure I knew with some precision what vector had infected me - and I had (still have) the forensics to demonstrate it. Feeling a bit smug, I took the weekend to collate data, write up some findings, clean the local network, and prepare to pat myself on the back for being such an InfoSec Professional.

Then the weird stuff started happening again, on the computer I'd somewhat meticulously "cleaned" of any odd tidbits. Hmm, ok. I suck at hardware, as everyone knows, so clearly I just didn't do a good job of disinfecting - this is not unusual. Back to the salt mines, to disinfect again. This time I roped in most all of the rest of the cryptostorm staff computers, to disinfect those... a security precaution in case I'd given what I had to others on the team, somehow. I still didn't really know what it was doing ("it") in the browser, specifically... but who cares? Wipe the browser to the bare earth, or if needed reinstall the entire OS image ground-up. Problem gone. Done.

I took the opportunity of this extravagant downtime - nearly a whole week without being on the computer for academic or cryptostorm work, amazing! - to pick up a new laptop. Actually new, in the box - something odd for me, as I tend towards ragged conglomerates of old machines. Once again feeling smug, I laid out some elegant UEFI partitions - tri-boot, look at me being all tech! Packages updated, repositories lovingly pruned and preened with bonsai attention. I left the drives from the old infected machines, in my local network, off in a pile for later analysis and file removal. Safety first, right? No way this nasty stuff will jump onto the new, "clean" boxes I've spent days setting up.

Then the new box went weird, all at once. Not just one partition, either: I'd boot into Win and sure enough the browser would get baulky and jagged and cache-bloated if I hopped around to a few sites... not even the same sites I'd visited when I was in the lenny partition. That matters, because we assume - unconsciously - that we get infected from a specific site. It's got bad files on the server, you visit the server, and those files come down to your machine via your browser. Maybe it's a creepy flash file making use of the endless deluge of flash 0days, or whatever. The file comes from a server.

But I hadn't visited any of the same sites on these different operating systems I'd just used on my new laptop... not an intentional choice, but looking back I knew it was a clean split between the two groups of sites. And yet now I certainly seemed to have the same problem on a brand-new, well-tightened (as much as one can, because Windows) OS instance - with no overlap in sites visited. That's sort of weird, isn't it?

Well, ok... thinking... hmmm. And as I'm thinking, the Windows partition locks up tight. No surprise there, it happens... though with only a couple plain-jane websites loaded in Firefox? On a brand-new laptop? Odd, but whatever: Windows. Reboot, and it'll be happy once again.

I push the power button to reboot the laptop. It powers off, by all appearances... and then simply sits like a turd in the hot sun. It's a new-fangled laptop, no way to do anything to it but push the power button. Heck, even the battery is locked inside tight. I push, and push, and push... nothing. And my mind is repeating two words: fucking hardware. Hardware is the bane of my existence. Two days old, and a new laptop won't even power up. Hardware and I have a fraught relationship. I go through the grief stages, sort of... first is denial - it can't be broken, no way! - and then the next one is anger - damned piece of garbage, amazing how shoddy things are!

...I think there's three more stages, but I don't remember them because I was so pissed off.

Also the laptop got a bit dented-up along the way. I was frustrated: a week's worth of fiddling with hardware and kernels, and I was one step backwards from where I'd begun. No stable partition. No stable local machines, known-clean. No real idea of the infection vector, as my assumed model wasn't doing well as new data arrived. Plus now I'd just had an angry shouting match with a laptop that won't boot (not much shouting from that side of things)... this is really, really not me at all. But I'm feeling, at that point, a powerlessness... a sense of non-confidence in my own ability to run a computer. This might be like a truck driver who suddenly forgets how to operate the transmission in her daily driver: really humiliating, and self-eroding, isn't it?

In the dozen or two cases of people I've talked to who also have been painted by this CIN, that powerlessness feeling is a universal marker. Many are high-level tech notables, and the concept of not being able to make a computer run cleanly is... utterly foreign. As a group, we're the kids who built computers from blurry blueprints published in Byte magazine, metaphorically speaking. We not only fix computers for friends and family when they won't work; we're the ones people come to when whoever first tried to fix them couldn't. It's been like that all our lives. It's sort of who we are, at some level.

And then there's these computers sitting in front of us that don't work. Or, they work for a while - a few days, maybe - and then they start sliding downhill. Browser slows, then gets GPU/CPU intensive. Lots of activity from it, even when no page loads are happening visibly - or maybe only a tab or two are open. Bidirectional traffic, noted by most of us who ifconfig'd or nload'd or iptraf'd the boxes when things took a strange turn.

Next, graphical irregularities that go beyond the browser. Fonts aren't rendering quite right... or if they do, they render well but have these "slips" where they get a bit pixellated... but only for a minute or ten, and then they come back. Those of us attuned to such things note strange tls/ssl errors spinning up: mismatched certs - subtle, but if one's browser is a bit snooty about credentials, they appear. Maybe a certificate for a site that doesn't match the site's URL... well, ok, not uncommon, except in these cases it's for sites that we know have matching certs, to the character. But they're transient.

Wireshark it. But... wireshark crashes. Update wireshark... and suddenly you find yourself downloading a really big package relative to what you are pretty sure a basic wireshark binary should be. You google that, to confirm... and as you do, you notice that there's a bunch of other packages hitching a ride on that wireshark update... how'd that happen? More googling, but as you do, your machine is doing stuff. Htop and... wtf? Lots of new processes, not stuff you are used to seeing. Bluetooth? You disabled it ages ago. Avahi... what the hell is that? Cups? I don't even own a printer.

You google each one, and they're legit packages... but packages you've never intentionally installed or configured. And no big version upgrades lately to the kernel, either... hmmm. Look at the config files for these unexpected arrivals - eeek! Ports open, remote debugging activated... those aren't default settings, and you sure as heck didn't set them, did you? Meanwhile the CPU is hot, the hard disk platters are spinning continuously, and the blinkenlight on the NIC is a solid LED.

Those who are reading this and have experienced some or all of that, you know what I'm describing. You can feel your OS eroding out from underneath you... but how to stop it? And how did it get in, since that's a new machine with no hardware in common with the old (infected) ones? Perhaps you go on a config jihad, like I did (many times): manually reviewing every config file of every bloody package on the bloody machine, and manually resetting to values you think sound legit... because who can google them all? Packages crash because you didn't set values right. Reading, googling, page 7 of the search results, and still nobody will just post the syntax that made the damned whatever-it-is do its thing without barfing!
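One way to make that kind of config review tractable, rather than eyeballing every file: snapshot a digest of the whole config tree while the box is believed clean, then diff later. A rough sketch of the idea (function names are mine, not from any particular tool; on a real box `root` would be `/etc`):

```python
import hashlib
import os

def hash_tree(root):
    """Map each file path under root to its SHA-256 hex digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable files are skipped, not fatal
    return digests

def drift(baseline, current):
    """Files that changed, vanished, or newly appeared since the baseline."""
    changed = {p for p in baseline if current.get(p) != baseline[p]}
    added = set(current) - set(baseline)
    return changed | added
```

Take `hash_tree("/etc")` on day one, stash the result somewhere offline, and `drift()` against a fresh snapshot when the machine "goes weird" - unexpected arrivals like the avahi and cups configs above show up immediately, instead of after a week of manual review.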

...what did you see??!?
wisdom_of_the_ancients.jpg
Ah, yes, now you're feeling the burn. If you look in cache (or Cache, or Media Cache - wtf? - or .cache, or...) you see gigabytes of weirdly symmetrical, hard-symmetric-encrypted blobs overflowing, in all directions. Purge cache, and it builds back up. Plug the NIC in, and traffic screams out... you didn't even up the adapter yet! And is that your wifi adapter chattering away? That was disabled, too...
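A quick, hedged sketch of how one might log that purge-and-regrow cycle over time - the helper and directory names here are mine, illustrative, not the exact paths from the original machines:

```shell
#!/bin/sh
# Illustrative sketch: report sizes of suspect cache directories, so the
# purge-and-regrow cycle can be logged over time (cron it, diff the logs).
cache_sizes() {
    for d in "$@"; do
        [ -d "$d" ] && du -sh "$d"
    done
    return 0
}

cache_sizes "$HOME/.cache" "$HOME/.mozilla" "$HOME/.config/chromium"
```

Purge, log, wait, log again: if the gigabytes come back while the box is supposedly idle, you've got your answer without staring at a file manager.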

Eventually you reboot yet one more time, and the grub menu is... not the same. You run grub2/pc, and this is old-skool grub, or whatever. Is your kernel image listed differently? No way... that's not possible. You mention these odd things to colleagues or friends, and they rib you about it: "stop clicking on porn, and you won't get infected again!" But you actually didn't... which is troubling in all sorts of ways.

Read boot logs closely, and you might see paravirtualisation come up. And/or KVM. If you run windows, the equivalent there. But you didn't install a virtualised kernel. Maybe you are like me, and you get downright obsessive about this: iterate through possible infection mechanisms, between boxes. Calculate RF ranges for NFC devices you know are disabled, but who knows..? Consider that air-gapped ultrasonic infection magic that at first seemed legit, then got pissed all over, but is almost certainly legit and was all along... do you need to actually find a Faraday cage to put your computer in?

Unplug from the network entirely, hard-down adapters at the BIOS. Machine is stable. OK. But... useless, right? Disable IPv6, wreck bluetooth physically with a screwdriver, read up on WiMax and all that weird packet-radio stuff (there goes a weekend of your life you'll never get back). Start manually setting kernel flags, pre-compile... only to see the "new" initrd image hash-match to the infected one. Learn about config-overrides, and config-backups, and dpkg-reconfigure, and apt-cache, and... there's a few more weeks.
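That hash-match check is worth doing systematically, by the way: record a baseline digest from trusted media before the box ever touches a network, then compare after every rebuild. A minimal sketch - helper name and paths are mine, purely illustrative:

```shell
#!/bin/sh
# Illustrative sketch: compare an image against a baseline sha256 digest
# recorded from trusted media, before the box ever touched the network.
check_image() {
    img="$1"; baseline="$2"
    cur=$(sha256sum "$img" | awk '{ print $1 }')
    if [ "$cur" = "$(cat "$baseline")" ]; then
        echo "MATCH"
    else
        echo "MISMATCH"
    fi
}
```

Usage would be something like `check_image /boot/initrd.img-$(uname -r) /root/initrd.sha256` - with the baseline file written out from read-only media, since a hash stored on a compromised disk proves nothing.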

Plug back into the internet after all that - static IP on a baseline wired ip4 NIC, no DHCP packages even installed, ffs! - first packet goes to cstorm to initiate a secure session. Rkhunter at the ready, unhide(s) spooled up... iptraf running, tcpdump dumpin'... an hour later, having logged in to a couple sites to check a week's worth of backlogged correspondence, the browser starts slowing. Task manager shows big caches of javascript and CSS and images and... oh, no. Check your browser config files, manually - the ones you manually edited for hours last night and set chattr +i. They're reverted somehow. There's a proxy enabled, and silent extensions with no names and no information when you look for matches by their thumbprints.
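One cheap way to catch those silent reverts: re-grep the proxy-related prefs on every launch and compare against what you actually set. A rough sketch against a Firefox-style prefs.js - the pref keys are the standard Mozilla ones, the helper is illustrative:

```shell
#!/bin/sh
# Illustrative sketch: surface any proxy directives in a Firefox-style
# prefs.js. network.proxy.type 0 means "direct"; anything else appearing
# here that you didn't set yourself is the tell.
# (Also worth verifying the immutable bit survived: lsattr prefs.js)
proxy_prefs() {
    grep -E '"network\.proxy\.(type|http|ssl|socks)"' "$1" || true
}
```

Empty output means no proxy prefs are set at all; any output on a profile you configured for direct connections is exactly the reversion described above.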

Kill your browser with pkill -9... but the browser in your window is still there. htop.... is that legit, or is that a remote xterm session? Why is sshd running? Who enabled Atari filesystem, ffs!

So it goes...



ՑՑՑ “Owning the Net”

In the first week or two after I got painted, I stuck the name "SVGbola" on the malware I had captured... because .svg-format font files are one of the mechanisms used for the initial inject of targeted network sessions, and because ebola, ofc. But quickly I saw that there were other vectors, and they seemed to evolve over time. I'd block or disable or find a way to mitigate one clever ingress tactic, and a few hours later I'd see the telltale cache-and-traffic stats begin climbing... not again. Two or three days of frantic battle later, and I'd learned about a couple more attack/inject tactics, but still had no damned idea what tied them together.

I'd intentionally been avoiding reading those old NSA slide decks, as I didn't want to taint my perceptions with a "one holds a hammer, and the world becomes a nail" dynamic. But it was time to dig into the literature (using a borrowed touchpad... I'd borrowed a few laptops along the way, from friends and colleagues, to use for some simple email and web tasks... and managed to brick the hard drives on every single one), and refresh my memory on this whole "weird NSA MiTM malware" cul-de-sac.

It didn't take long at all...
The NSA began rapidly escalating its hacking efforts a decade ago. In 2004, according to secret internal records, the agency was managing a small network of only 100 to 150 implants. But over the next six to eight years, as an elite unit called Tailored Access Operations (TAO) recruited new hackers and developed new malware tools, the number of implants soared to tens of thousands. {article date: March 2014}
I had been assuming Stuxnet, in terms of initial infection vector... you know, a USB stick with sharpie writing on the side that says: PR0N, DO NOT OPEN!!! <-- that is how you get malware, right? (speaking metaphorically, sort of)

But this isn't what the NSA is doing with these programs, not at all.

They're selecting targets for injection of malware into live network sessions - apparently http/https overwhelmingly - on the fly, at "choke points" where they know the targets' sessions will go by the hundreds of machines that comprise these NSA 'malnets.' Custom-sculpted network injections (we call them 'session prions') are forced in, seething with 0days. An analyst in some post-Snowden NSA office tomb clicks a few GUI elements on her display, and the selector logic she was fed by her bosses primes the Quantum and Foxacid malnets worldwide, waiting for that signature'd session to show up on their targeting radar.

You've been CIN-painted.

Now, whenever your sessions match that profile, you will get more Foxacid Alien-implant session payloads coming back from your routine internet activities. The selectors can be anything that identifies you as a general profile... the slide decks mention things like Facebook tracking fingerprints, DoubleClick leech-cookies, twitter oauth header snippets, and so forth. Physical IP is entirely unnecessary, as is your name or any other identifier.

Perhaps the NSA (or its clients in the civilian law enforcement world, in dozens of countries) wants to find out who runs a particular website... say, a .onion website like agorahooawayyfoe.onion...
l_ff525d308ba173b66cd3d533cc092237.jpg
This isn't a small-scale effort any more, either. That's what I think I had unconsciously assumed: that it was a couple hundred people on the Amerikan drone-list, or whatever. Not making light of such things, but for me as a technologist, if an attack is bespoke and requires expertise, it's limited to a tiny, tiny percentage of defensive threat-modelling scenarios. And for those on the drone-lists? Well, good luck is what I'd generally say.

However, these CIN malnets are scaling/scaled to millions of concurrent painted-chumps. And growing.
The implants being deployed were once reserved for a few hundred hard-to-reach targets, whose communications could not be monitored through traditional wiretaps. But the documents analyzed by The Intercept show how the NSA has aggressively accelerated its hacking initiatives in the past decade by computerizing some processes previously handled by humans. The automated system – codenamed TURBINE – is designed to “allow the current implant network to scale to large size (millions of implants) by creating a system that does automated control implants by groups instead of individually.”

In a top-secret presentation, dated August 2009, the NSA describes a pre-programmed part of the covert infrastructure called the “Expert System,” which is designed to operate “like the brain.” The system manages the applications and functions of the implants and “decides” what tools they need to best extract data from infected machines. {ibid.}
Or for another way of saying it in the NSA's own words, dating from 2009...
intelligent-command-and-control.jpg


ՑՑՑՑ ņame your poison

Once I realised this was about quite a bit more than simply borked svg's (which is still a pretty interesting vector, imho), I pulled out the name #SauronsEye for what I was experiencing: a totalising, all-seeing, ever-present, burning glare from a height. I was being surveilled, by some entity somewhere, for some reason. The pressure of the eye was almost physical, for those middle weeks.

But the name doesn't seem to fit, now that we've been able to fit the scrambled, jagged mess of data-pieces together into a more or less fully-coherent understanding of what the system is. Because this stuff isn't passive; it doesn't simply sit there and watch. Rather, it's 'all up in your shit,' as they say... every time you get online, however innocuous and carefully-constrained your activities are, you run the risk of this happening to your browser once those prions spread through your network session and shoot right into your local kernel:
12.jpg
A colleague, overhearing us discussing this amongst the team, blurted out "Balrog." And that's the fit, just so. Yes, it's LoTR and that's drifted twee of late - but at core Tolkien isn't twee, and he knew his evil as only an Oxford professor of decrepit languages can know evil.

Balrogs, for the less painfully geeky amongst the readership, are described by JRR as "they can change their shape at will, and move unclad in the raiment of the world, meaning invisible and without form" (cite), which gets it spot-on for our CIN-naming task here. He goes on, waxing a bit more poetical...
His enemy halted again, facing him, and the shadow about it reached out like two vast wings… suddenly it drew itself up to a great height, and its wings were spread from wall to wall…
Shadowy? Check. Great height, and wide (metaphorical) wingspan? Check. But it's the imagery of the Balrog that seared the name into the very souls of Tolkien-reading boys such as I. Imagery that quite hits the nail on the head:
1826732-balrog.jpg
Balrog500ppx.png
That's something of what it feels like to face down this stuff as it repeatedly pierces one's local perimeter and turns one's root-level kernel sanctuary into a mutating, unreliable, dishonest, corrupted mess... right in front of one's eyes. (and yes, I know that computers behaving badly are very much First World Problems of the most Platonic sort, and hyperbole aside I remain aware that starvation trumps Cronenberg-transgressed computational resources when it comes to real problems to have in one's life)

The final point, for this spot of writing, is this: there is no "disinfecting" once you are painted as a target by Balrog (or any CIN). The infection exists ephemerally in the fabric of the internet itself; it's not something you can simply remove from your computer with antivirus software (or manually). Trust me on this: even if you are successful in disinfecting (and that'll require expertise in grub, Xen, containers, obscure filesystem formats, font encoding, archaic network protocols down the OSI stack, and on and on and on), dare to actually use the computer to communicate with others online, and you'll be right back to the alien-bursting-from-stomach place in short order.

Neither cryptostorm, nor cryptography, can protect you from Balrog, or from CINs. The session prions come in via legitimate (-ish) web or network activity. You can't blacklist the websites serving dirty files... because they aren't coming from websites, these prions. They're phantom-present everywhere in the internet that's a couple hops from a Foxacid shooter... which means everywhere, more or less. You can blacklist the internet, I suppose - offline yourself to stay pure... but that in and of itself reflects a successful DoS attack by the NSA: they downed you, forever...

I can hear the grumbling from the stalwarts already: "BUT WHAT ABOUT HTTPS??!?! IT'S SUPER-SECURE AND INVINCIBLE AND SO NSA CAN SUCK EGGS I'M SAFE BECAUSE HTTPS EVERYWHERE WHOOOOOOO!!"

...

Https - as deployed, in the real world, based on tls & thus x509 & Certification Authorities & Digicert & ASN.1 & parsing errors & engineered 'print-collisions & DigiNotar & #superfish & all the rest - is so badly, widely, deeply, permanently, irrecoverably broken on every relevant level that it merely acts as a tool to filter out dumb or lazy attackers. And those aren't the attackers we worry about much, are they?

I mean, if we put a lock on our door that would be totally effective in keeping out newborn babies, caterpillars, and midsized aggregations of Spanish Moss - but was useless against some dude who just hits the door with his shoulder to pop it open - then it'd be less than wise to go cavorting about the neighbourhood, crowing to all who can hear that you left 500 pounds sterling on the kitchen table and too bad suckers, no mewling infant will ever find her way in to steal that currency... wouldn't it?

That's https.

Indeed, I have a... something between a theory, and a strangely intense fantasy... concept that PEM-encoded certs themselves are being used as an implant vector by Balrog :-P Or, as my colleague graze prefers to (more reasonably) suspect, strangely-formatted packets for use in transporting data between Balrog-sickened victims and the MalCloud of Balrog's control architecture, globally. Or maybe they're used as meta-fingerprints... beyond-unicode control characters embedded in obscure fields nobody even decodes client-side but which can be sniffed cross-site to identify sessions over time...

Anyway, https. Were we to discover (or read the work of others who discovered, more likely) super-exotic cert-vectored exploit pathways, we would not be surprised in the least; it's not that it's 'only' marginally useful in securing actual data (and network sessions) against CIN-level active attackers, but rather it's a question of how destructive it is, on balance. A lot, a little, or somewhere in the middle? That's an open question, but it's the only one when it comes to https and security.

But remember, many keystrokes ago, we discussed "necessary but not sufficient?" This is where it folds back in, like an origami crane tucked in one's pocket...

The defensive techniques that can - and will - protect us from Balrog and other CINs (there will be others, likely already are... that's a given), systems-level infected-cloud virulence, must also act as integrated, coherent, cohesive, outcomes-defined systems as well. Cryptography (symmetric & asymmetric primitives alike) is a piece of that, a crucial piece without which overall systems success would likely be impossible.

But crypto alone is no more protection from Balrog than a single thick mitten would be from a month in the Arctic during coldest wintertime. There's more, and more importantly it all needs to fit together as a sum far greater than its parts: a big pile of right-handed mittens won't substitute for a proper Inuit snow suit.

Funny thing is, we know how to do that - the systems stuff, the integrated functionality. It's been where we've headed since last fall, perhaps reflecting a team-wide intuition that our membership's needs were pulling us that way. Too, we've been seeing the weirdness out there - fractal weirdness on the network - for many months: borked routes, fishy certs, dodgy packets, shifty CDNs, https being https, etc. Little fragments of mysterious code piggybacking on "VPN service" installers (pretty sure we know where some of that comes from now, eh?), microsoftupdate.com hostnames used as C&C for... something? Repository pulls showing up weird-shaped, with signed hashes to back their dubious claims to legitimacy... it goes on and on.



“La semplicità è la massima raffinatezza” (Łeonardo da Vinci)

CINs work by corrupting network integrity at the most fundamental levels: routing, packet integrity, DNS resolution, asymmetric session identity validation. They use the trust we all have in those various systems more or less working as they were designed to work, and as their maintainers strive to enable them to work... they use that trust as a weapon against everyone who uses the internet to communicate, from a father in Ghana texting the family to find out what they'd like for dinner from town, to the Chilean wind-farmer planning future blade geometries with meteorological data available online, to the post-quantum information theory doctoral student in Taiwan who runs her latest research results up the flagpole with colleagues around the world, to see who salutes... all get leeched, individually, so CINs can frolic about & implant malware as their whims dictate.

Balrog, and CINs generally, will prove to be our era's smallpox-infested blankets dropped on trusting First Nation welcoming parties by white guys behaving badly. We trust the internet to more or less inter-network, and CINs use that trust as an ideal attack channel because who would really think?

Well, Balrog - this Balrog, not Tolkien's - is real. Funding is on the order of $100 million USD a year and growing. It's been up and running a decade or so, long since out of beta. There's other CINs in the works, surely... if not deployed already, regionally or in limited scale. When more than one is shooting filth into whatever network sessions catch its fancy, attribution will be hopeless. It's not like one checks ARIN for Foxacid records, eh? As to C&C, all evidence suggests Balrog piggybacks on the incomprehensible route-hostname complexity of the mega-CDNs - cloudflare, akamai, others so shady and insubstantial it's likely they'll be gone before this post comes out of final-round edits: you can't blacklist those, and their hostnames cycle so frequently you can't even do subhost nullroutes.

So if you are painted, and Balrog is whipping at your NICs, you'll likely never 'prove' to anyone whose whip made those scars. But the scars are real, eh? They burn. And it'd be a heck of a lot better to avoid the whip, rather than burn endless spans of time in Quixotic attempts to prove whodunit when whodunit dun moved to the cloud, address uncertain and changing by day.

So that's our job now, at cryptostorm: post-crypto network security. Crypto, Reloaded. Crypto... but wait, there's more! Protection from an ugly blanket of festering sickness already grown into the fabric of the internet itself, and sinking its violation deeper every day. Assurance that sessions go where intended, get there without fuckery, and come back timely, valid, & clean.

One cannot simply 'clean' Balrog off, as the infection is entwined with the internet itself.

Within that spreading rot, there exists the latent possibility of clean secret pathways, reliable protected networks delivering assured transit and deep-hardened privacy for every session, every packet, every bit... an underground railroad of peaceful packets. Identifying and alerting to network level threats is all well and good, but useless compared to threat transcendence.

Done right, that kind of service delivery creates a network-within-the-network, a sanctuary for people to talk and share and live their lives with meaning, confidence, and peace.

º¯º

º¯¯º

...cryptostorm's sanctuary comes now ±

  • ~ pj
by Pattern_Juggled
Sun Apr 26, 2015 6:04 pm
Forum: member support & tech assistance
Topic: France node, websites don't load
Replies: 2
Views: 8371

Re: France node - let's look into this asap

PoisonIvy wrote:This is what I meant in my previous post about constant disconnections. You have to reload a page constantly - this happens with all nodes. This means that the connection is either slow or unstable and constantly disconnecting
Hey there, what you're describing is absolutely not something you should be seeing from any of the machines in the Paris cluster... or anywhere on the network. It's not something we're seeing systematically, either - I've confirmed that with some test sessions personally, just to be sure.

Over this weekend we did get some reports of strange behaviour upstream from our London cluster, and admin-list announcements from Leaseweb indicate they've had a host of hardware swap-out tasks going on in their own French DC (which is not where we locate our Paris cluster, but may still have spill-over impact on routing stability in that network region, generally).

That all said, I'm going to be really surprised if this isn't something more specific to your browsing sessions - and more troublesome - than a simple cluster-specific transient routing hiccup. Can you drop me an email (pj@cryptostorm.is) and I'll provide more details there - or if you're comfortable working through it here in this thread, I'm happy to do that as it allows for a broader community integration along the way.

There's something going on here, and whatever it is you should never just assume it's "something going on with cryptostorm" when things run poorly. Rarely do we have such issues afoot in our clusters or nodes for more than a handful of minutes before someone's getting them resolved. More importantly, something is causing the problem - and quite a few of the "somethings" which are prime culprits are not friendly, and not handled wisely by letting them go un-resolved.

Cheers,

  • ~ pj
by Pattern_Juggled
Wed Apr 15, 2015 1:16 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: [Status] Cantus Non-Operational?
Replies: 5
Views: 29945

Re: cantus under investigation

I'm still a bit out of the operational loop, but I did overhear df discussing this issue yesterday and I know there's been some testing work going on meanwhile.

That datacentre does get quite a bit of packet shrapnel from DDoS attacks running across the backbone interconnects in Frankfurt, but normally that's transient - a few hours here and there. This seems longer-term.

I'll open another staff ticket to be sure there's a resolution quickly... but I suspect that's already well underway if not nearly complete.

Cheers,

~ pj
by Pattern_Juggled
Wed Apr 15, 2015 11:37 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: #SauronsEye: it mostly only comes out at night... mostly
Replies: 9
Views: 57623

#SauronsEye: it mostly only comes out at night... mostly

{direct link: cryptostorm.ch/#sauronseye}
(note: this post continues discussion started in a parallel thread, which provides useful backstory ~pj)


I've sat down to write up this summary of recent investigative and sanitization work I've undertaken after identifying a form of polymorphic, browser-based malware (that I unilaterally started calling "Sauron's Eye" because it seems apt, and also that blazing eye really does capture the creepy element to all this), and subsequently deleted a small pile of draft versions of the report after trying and failing to edit them into a form that worked. They all veered deep into technical terrain and quickly lost the thread of why it all matters in the first place. Technical data is important of course, and I've got a few hundred gigs of captured stuff that needs to be analysed, published, and reviewed. But, it is perhaps best to keep that work separate from a shorter, more relevant attempt to explain in plain language what's been going on and what it means.

So, this post is steering (mostly) away from technical analysis in favour of the "what and why" focus - which, it turns out, is a larger challenge to write than is the technical detail. Anyhow, let's get on with it.

- - -

I've chosen to wrap this essay in the narrative fabric of the Aliens movie, for two reasons. One, the movie's thematic arc really does help pull forward the important parts of this topic and thus provides a nice outline to help keep my writing focussed. Two, because it provides the opportunity for a bit of humour injection, and that matters. Humour helps us retain a good sense of perspective, and reminds us that even "Serious Business" is often not quite as serious as we sometimes feel like it is when we're in the middle of it. It's good to chuckle a bit, helps keep us sane. Mostly.
esVm336.jpg
What happened is easy to summarise: for the past couple months, I've taken it upon myself to commandeer bits and pieces of cryptostorm staff time and expertise in several research projects involving what we usually call "malware." Some of it's been related to malware compiled into and/or packaged along with "VPN service" installers, and some of it's related to the "superfish" SSL-interception toolkit Lenovo was including on newly-purchased laptops since last fall. In both cases, I felt the issues were relevant enough to cryptostorm to justify some research effort and resources investment.

Then, in mid-March, as we were fine-tuning the torstorm conventional-to-Tor .onion gateway service we provide, I noted some highly unusual behaviours on the part of browser installs being used for the testing itself. Curiosity piqued, I started saving off the source code being received as I loaded landing pages for a small group of .onion-based websites - as well as .pcap packet capture sessions. I quickly became confident that something was coming across the wire and into the browsers I was using during this testing, although I wasn't entirely sure what. This conclusion I reached based on the behaviour of the browsers themselves: cache metrics, stack snapshots, and monotonically increasing instability that was persistent across application and underlying OS restarts.

My suspicions were apparently confirmed when, within 10 days of my publication of these conclusions, a number of high-severity bugs were patched by the Mozilla browser team - bugs directly relating to the attack vector I'd concluded was most likely causing the behaviour I was documenting. Then, less than a week later, a series of mysterious but powerful attacks on all of the largest "dark markets" hosted on .onion websites became publicly known. The second-largest such market, Evolution, vanished from the web in an alleged "exit scam," leaving no trace. The largest - Agora Marketplace, which I'd used in much of my load-testing - became even more consistently unavailable than it had historically been... and when it did appear, it was often serving strange, exotic fragments of gibberish. Word spread (and I was made aware of it thanks to several generous community members) of one or several 0day attacks targeting Tor relay nodes, that allowed for injection of payload code into active .onion web-browsing sessions.

In sum, all appearances were that I'd noted a confluence of trends that resulted in a powerful, widespread, and ongoing attack on Tor hidden-service .onion websites - the "dark markets," in particular - and indeed had successfully captured forensic data and actual samples of whatever it was that was being deployed. Yay for me, I suppose.

What's funny, looking back, is that I never considered the obvious likelihood of being infected with exactly what I was studying. Why didn't I consider this? Honestly, I have no good answer: at some subconscious level I suppose I considered myself "exempt" from infection since I'm merely a researcher documenting events, not a participant... a bit like a journalist in a war zone, who has some safety in knowing she's not the intentional target of the belligerents.

Except of course, that's silly. This is code - it doesn't decide who is or is not a legitimate target.

- - -

To make a long story short(er), I got several of my research machines infected with SauronsEye. Despite being naive about the likelihood of infection, I was immediately aware something was afoot on these machines... whatever other flaws I have, I'm fairly good at noticing unusual local machine behaviours. I immediately air-gapped the machines in question, and cheerfully began looking into what had come to reside in their digital innards. This initial cheerfulness faded, as I realised both that I had no idea what was going on with the machines, and more importantly that I had no good handle on how it spread.

Without knowing how it spread, I had no idea whether it had spread to other machines on my local network... or to the machines of other cryptostorm staff. That's a big deal.

Once I realised that, I made the decision to shift cryptostorm's non-sysadmin staff computers to lockdown mode: every machine was assumed compromised, and any data being removed from them was assumed to carry an infection vector. This is a procedure we have in place for such situations, although it's the first time we've made use of it. Is it overkill? Likely yes, but the point is that by the time we know for sure it's far too late to go back and redo things if it's not overkill. Is this frustrating for staff? Oh, hell yes - trust me, I have heard those frustrations loud and clear.

It's also profoundly frustrating for our members, partners, and vendors. As is quite obvious, staff availability suddenly dropped - and stayed dropped for nearly two weeks. Email unanswered, twitter questions hanging, threads here in the forum not getting replies. What we've now learned the hard way is that locking down support staff computers basically locks them away from the resources needed to provide support. And this has direct, tangible impact on members and on everyone with whom we do business, as a project. To be honest, I imagined we'd simply lock down machines and staff could continue on with new hardware... which is naive, in hindsight.

Initially, we shifted hard drives to new computers and that seemed a reasonable approach - but almost immediately I shut that down, as of course anything on the hard drives would still be there on the new machines. Then we tried new machines, bringing over files selectively... but I saw test data myself that suggested the drive media itself might be a transmission vector - so I shut that down, too. Eventually we got to newly-purchased hardware, kept air-gap separated from existing hardware, with needed passwords copied over manually, via pencil and paper. Which, sadly, doesn't work well for things like PGP keys and other long-form credentials. Nor for staff geographically distributed, and often unavailable via physical postal service for opsec reasons.

In sum, the lockdown was (I believe) successful in ensuring the infection didn't spread within the staff or compromise production systems... but at the cost of making a hash of our ability to provide quality services as a team. That's a big lesson learned.

As I was the one who called the lockdown and enforced unreasonable demands on its implementation, I bear the responsibility for the disruptions it caused. I also, of course, was the one who got myself infected doing research without adequate protections - so I'm doubly at fault, here.

- - -

Infected with what, exactly?

Technical details set aside for a separate post, it's a modular exploit kit whose initial vector is browser-based and that shortly thereafter jumps out of the browser, and which then takes up residence on dedicated hard drive partitions. Once installed, it implements a series of proxy hijackings of all ip4 network traffic - including "encrypted" https/tls sessions. It can generate fake certificates if needed, and it can redirect traffic to alternative destinations without leaving an obvious footprint in ip4 routing and traffic statistics. Finally, it's able to hijack Debian-based repository updates and thus deepen its roots within the operating system itself (I assume the same is true of Windows, but my competence in that OS is so low as to be all but useless).

Once it's got its claws into the kernel, it creates an early boot-stage, modified Xen hypervisor that captures subsequent OS instantiations within its virtualised environment - thus taking the role of dom0 itself, and leaving the other OS instantiations as unknowing "guest sessions," i.e. domU. Getting out of that bind, from within domU, is all but impossible - I pulled it off once, on a test machine, but the slightest fumble and it all unwound back down to the evil hypervisor being in control.
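For anyone wanting a quick domU tell-tale: on x86 Linux the CPU advertises a "hypervisor" flag to virtualised guest kernels. A positive result proves nothing by itself - plenty of people virtualise on purpose - but on hardware you never virtualised, it's exactly the wrong kind of surprise. A sketch (mine, illustrative):

```shell
#!/bin/sh
# Illustrative sketch: cheap guest-kernel tell-tale. On x86 Linux the CPU
# advertises a "hypervisor" flag to virtualised guests; seeing it on a box
# you never virtualised is the red flag described above.
virt_check() {
    if grep -qw hypervisor /proc/cpuinfo 2>/dev/null; then
        echo "virtualised"
    else
        echo "bare-metal"
    fi
}

virt_check
```

Boot logs ("Hypervisor detected" lines) and dmesg give a second, independent signal; a hostile hypervisor could of course lie about all of this, which is rather the point of the whole essay.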

The hard-drive based foundations are quite resilient, and quite virulent. I managed to spread it across OS variants, across Linux distributions, and across "full-wipe" deletions of the partitions in question. I think I managed to eradicate it with full-disk/all-sectors formatting... but I'm still not entirely confident in that. I did note some Android anomalies on devices that were in NFC/bluetooth/wired LAN connection reach of the infected machines - but I have no idea if that's just the usual tragic Android "security" or if it's related. I'd suspect it's unrelated - but I'd not bet heavily on that conclusion without further testing.
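For the curious, that all-sectors pass reduces to two steps: overwrite every byte, then independently verify that nothing non-zero survived. The toy sketch below works on a file image; pointed at a real block device (with the byte count taken from blockdev --getsize64) it is, obviously, destructive. Helper names are mine, illustrative only:

```shell
#!/bin/sh
# Illustrative toy: zero out the first $2 bytes of target $1, then verify.
# Against a real block device this destroys everything -- which is the point.
wipe() {
    head -c "$2" /dev/zero | dd of="$1" conv=notrunc 2>/dev/null
}
nonzero_bytes() {
    # the "all-sectors verify" step: strip the NULs, count whatever remains
    tr -d '\000' < "$1" | wc -c
}
```

The verify step is the part partition-level "full wipes" skip, and it's the only way to gain any confidence that controller-level or reserved-sector residue isn't lurking below the filesystem.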

I don't know who runs it, where it's from, who is targeted, or why it has been created. I can speculate, but no more than can anyone else - I have no special insight into any of those questions. That said, I have no sense it targeted me - or cryptostorm - specifically. Indeed, it seems likely we picked it up merely as a side-effect of the research I cited above. The fact that we're all, as a team, suspicious of ip6 and largely have it disabled on local machines likely helped prevent its ability to "phone home" successfully, even before machines were air-gapped.

There's more, but that's for a separate post.

- - -

On a personal level, the experience of seeing several local machines become infected, working to understand the infection, ensuring it didn't spread within cryptostorm, and eventually giving a green light for staff to begin using full infrastructure once again has been... difficult. I am by no means a malware analysis specialist, and in the early days of this cycle I vastly underestimated the complexity and capability of what I'd invited into my local network. The word I'm looking for here is "hubris" - I assumed I could not only clean the infection, but do a nice tight job of documenting the process along the way: pithy, concise blog post to follow.

I tackled the work in a rush of enthusiastic confidence, and I pushed myself hard to get it done fast so the team could green-light and I could get back to 'real' cryptostorm work. I didn't sleep enough, assuming I'd push through the process and catch up on rest later... I went into that fugue state known to many technical folks, where hours and days blurred as I reviewed code and built my mental model of what was going on. This, of course, is not sustainable over the long term - although it can be quite useful for short-term productivity.

Within several days, I was strung-out and coming to the realisation that I'd underestimated the task badly. At the same time, the rest of the staff was waiting on me to clear them and their computers to get back to work - frustration grew. I could only say that I still didn't know what was happening but that I knew something was happening... hardly confidence-inspiring forensic analysis, on which to base the partial shutdown of a high-functioning team. As the pressure grew on me to, speaking bluntly, shit or get off the pot - come up with malware samples that could be used to clean and re-approve our computers - I became more and more stubborn in my insistence that the threat was real and that it had to be enumerated before anyone could start connecting to our in-house systems again.

At the same time, I was routinely seeing things happen on my test machines that absolutely could not happen, period. To someone like me with a rather rigid, formalistic turn of mind, this opens up a yawning chasm of epistemological vertigo. Terror, in a sense. One feels one has a general grasp of "how computers work" after decades in the trenches... and then over a period of a week or two one sees these assumptions seemingly ripped away by cold, hard, stubborn facts on the ground. I had moments of severe despondency - no sense in denying it. I questioned even my questions, which can become pretty self-destructive in no time at all.

That happened.

Fortunately I have great colleagues, a supportive family, and several outside researchers who were at once supportive and also not intrusive in their questioning about what I was getting up to with all this. It's not that I am in the least bit protective of or "grabby" about owning these data - quite the reverse! However, before I could articulate even a loose theory of what was going on, I was loath to dump the whole mess on others' doorsteps. Call it pride, or call it hard experience, but that's not a path I will go down - and in the early days it meant I was largely isolated, trying to make sense of things from down at the bottom of a pit of self-doubt and intellectual uncertainty.

Once I confirmed the presence of non-intended virtualisation, the pieces began to come back together for me and from there I knew how to clear our machines and get back to full production status.

And, yes... I cursed computers. I yelled at hard drives. I took long walks in the middle of the night, muttering to myself and undoubtedly causing the local fauna to wonder about my sanity. In daylight hours when sharing polite company, I slipped into the wrong spoken language many times - a social gaffe to which I'm prone when overly tired. I missed appointments, and I let my lagging academic duties lag further. I took jangled notes in the heat of battle that, read later, made no sense and seemed more like graffiti than research data.

Also, I decided early on not to rely heavily on Google, or on searching the literature in general. I did this so I would not carry preconceptions about what was happening - a hard-edged decision to make, but in the end I'm glad I did. When I came back to present my unofficial conclusions to better-qualified colleagues than I, we were able to see whether my imputed conclusions matched up with published research others have done. They did, almost to a perfect match. That helped me be confident enough to green-light cryptostorm's machines for full use, once again.

Overall, these two weeks left me with several distinct byproducts: fatigue, morbid curiosity, a certain paranoia, humility, embarrassment, and - yes - fascination. They only come out at night, right? Well, yeah... mostly.
7ae210382a99cc53a8b9f8251b03d69b39f518ebf67909b03f7e60207a97787f.jpg
- - -

Blah blah blah... what does it all mean, eh?

For nonspecialists such as myself who read the work done by front-line experts on state-level/APT malware technology, it's all fascinating but seemingly disconnected from our daily life. Regin, or Stuxnet, or DarkHotel target other people - not us. That's a comforting assumption, as it assumes both that targeting is rational (someone really does choose who gets targeted, and who doesn't) and that the logic of the targeting is obvious to us.

Neither assumption, of course, is true.

Inevitably, some kit designed to target one group will "jump free" and run wild - that's how Stuxnet was discovered, after all. Worse, there's every incentive for spy agencies to essentially infect everyone and then only activate modules when they want to pick someone out of that vast ocean of candidates - much more reliable than trying to infect targets after they become targets, right? Is this happening, today? I'd turn the question around and ask it this way: is there any reasonable scenario under which it is not happening? No, there isn't. Ergo, it's happening. Q.E.D.

Further, who really knows who is targeted - or why. This isn't LEO (law enforcement organisations), with court orders and documents available via Freedom of Information requests. These toolkits are run by spy shops, largely off the books and designed to be plausibly deniable. Who knows why they target some people - perhaps they seek to leapfrog through them to their "real" target? No idea.

If the Tao wants your noods, the Tao gets your noods.

Mostly I suspect this happens quite often, but most folks don't notice it most of the time. Sometimes, however, someone picks up a corner of the carpet and sees the wild exuberance of what's squirming underfoot... the illusion of a stable foundation is shattered. I think that's what happened here - attuned to such matters, I heard the scratching at the cellar door and I opened the door.

Here's where I do the overly-broad, unsupportable, if-you-ask-me-on-a-dark-night style of "I can't prove it but if my life depends on it this is how I jump" meta-analysis:
  • If you are running a Windows-based machine, and you connect routinely to the internet, you are compromised by at least one such APT rootkit - and likely several. If you run a mainstream Linux distro, rely on routine repository synch to keep your OS and packages current, and don't routinely self-compile and fingerprint-validate binaries on your machine, you're compromised. Compromise takes the form of performance anomalies, transient network hiccups, ssl validation problems, and difficulties with some encryption packages - all unintentional side-effects of the technical tools being used to maintain a toehold on your system, for future use.

    You are compromised not by a specific "virus" with a specific name and code signature, but rather by these meta-frameworks that stitch together dozens of components - many of which are legitimate packages being used for nefarious purposes. The local mix of such sub-components on your machine, your router, your Android phone is going to be all but unique to you - and it'll vary over days and weeks, as each component self-updates, dies off, is replaced by other pieces, or is obviated by an OS update or whatnot. You exist in stable equilibrium, more or less, with this local micro-ecosystem of code that answers not to you as "root" superuser on your local machine, but to remote puppetmasters... or to nobody at all, if the c&c infrastructure has gone down but the orphaned bits are left out there to fend for themselves.

    There's a real risk that those open backdoors to your - to our - machines will be used to harm us or do evil... a risk impossible to quantify given the vast uncertainties of the whole affair. But it's not a zero risk, this is clear. And, knowing that we're not 100% secure in our own local network and machines, we know that we must be careful what we say, who we say it to, and what we do. This fear is the real cost of such a scenario - it's unhealthy, but it's also based in reality.

    Finally, I do know of techniques that can eliminate this thin smear of digital bacteria from our local machines - but they aren't the usual techniques we grew up being taught were "good security practice." Running antivirus apps perhaps doesn't do much harm, but it won't provide a robust defence against this sort of meta-threat. Antivirus apps are the modern Maginot line - and just as effective in protecting France from aggression, sadly.
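The "fingerprint-validate" habit mentioned above boils down to something simple: hash what you downloaded or built, and compare it against a fingerprint published over a separate channel. A minimal sketch (the expected hash is whatever the upstream publishes - out-of-band, ideally):

```python
import hashlib


def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large binaries don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path: str, expected_hex: str) -> bool:
    """Compare a local binary against a published fingerprint."""
    return file_sha256(path) == expected_hex.strip().lower()
```

It proves nothing about *where* the fingerprint came from, of course - which is why the channel publishing the hash matters as much as the hash itself.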
I've been warned by a number of well-meaning colleagues that I should leave this topic aside, and write no further on it. It's demoralising, they say, and it certainly doesn't help cryptostorm sell tokens or succeed as a network security service - reminding people that endpoint security is an unmitigated disaster just makes them feel like a network security service is useless, anyway. They also warn that I sound shrill, paranoid, and frankly not-fun when I write on these topics - and who really wants to read that sort of thing?

These are all fair points, but I'm writing anyway - for two reasons.

One, what I am writing is true. This is reality, this is objectively accurate as a description of the universe in which we live. Whether it is what we want to be true, or not, is not relevant to whether it is true or not. Rarely do I import much of my Buddhist viewpoint into technical work (not overtly), but here I do: reality transcends meaning, transcends explanation, transcends theory. Reality simply is.

Two, I don't feel these are gloomy, tragic, hopeless topics - genuinely I don't. I may sound paranoid and Enemy of the State creepy about "Them" and how "They" are tracking us, etc... but my strange obsessions also help illuminate a path forward that's largely immune from these in-the-dark creepy-crawlies. The bearer of bad tidings may be bad news, as the old Quebec aphorism goes... but if the bad news brings with it the seeds for future good news, I think we've more than balanced things out along the way.

It's a complicated world out there. Big players with big power are battling over prizes and issues us mere mortals will most likely never understand. And yet, we're still at risk in the crossfire - so it behooves us to know the lay of the land, a bit, so we can stay out of the worst areas of danger. I knew all this, in theory... but the last few weeks have made it real, tangible, and personal to me.

I've felt the gaze of Sauron's Eye on me, and I've turned to look back at it. I don't recommend it to others - do as I say, not as I do! - but I also wasn't burned to a crisp merely by that gaze. And, having looked back, now I know how to slip further from view in the future. Even Sauron has blind spots... once we know where they are, we know where to go so that we can live free from that nauseous feeling we're always being watched.

Because, yeah, we all know what happens at the end of Aliens, right? ;-)
getawayfromheryoubitch.jpg
Regards,

  • ~ pj



ps: just because :mrgreen:
IMAG01023-1024x613.jpg
by Pattern_Juggled
Wed Apr 15, 2015 9:39 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: #svgbola: thoughts on operations security, browser vulns, & endpoint awareness
Replies: 3
Views: 49165

#SauronsEye: protecting technical security in a complex, dangerous world

Ok, well it's been a week since I posted my pre-summary summary note above on what I was then referring to as "svgbola" in recognition of the .svg-based 0day exploits recently patched by Mozilla, and used against visitors to Tor hidden services. At the time, I felt I'd largely gotten to the bottom of the issue and I was cheerily making my way towards approving the rest of the team's full return to systems connectivity, after completing a review of our infrastructure in light of this new class of in-the-wild attacks on internet users.

You know those scenes in old horror movies, where the kids who survived the massacre at {insert rural location} at the hands of {insert name of boogeyman} are relaxing afterwards, thankful to be safe? Or, better yet, Ripley and Bishop on the transport ship after having escaped the bitchy ministrations of Alien 1.0? Even Newt is almost relaxing...
Aliens-A3S5-RipleyNewtBishop.png
So heartwarming! So tragically naive, too :-P

We all know what comes next... ;-)
bishoppain.png
Pretty much that sums up what's been going on during the intervening week. Without the milky-white android blood, fortunately.

I'm splitting off the reply to a separate thread so it's not buried in here, and is easier to access.

Cheers,

  • ~ pj
by Pattern_Juggled
Tue Apr 07, 2015 10:44 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: #svgbola: thoughts on operations security, browser vulns, & endpoint awareness
Replies: 3
Views: 49165

#svgbola: thoughts on operations security, browser vulns, & endpoint awareness

{direct link: cryptostorm.ch/svgbola}



As I've been settling back into things after a few days of largely afk time with the family on an out-of-town trip, I've had a tab open waiting for this post to write itself... and the tab's still largely devoid of text.

This suggests to me that there's a need to carve the topic into a few smaller pieces; the alternative, broad-brush approach seems a big, chunky pill for even the most enthusiastic to swallow. In fact, nearly two weeks ago I began work on a post covering much the same ground as this one. Things came up, attention was diverted, and that too-long post has sat in draft form long enough to grow some moss. It's likely destined for rm -rf, because any post that fights that hard to avoid being published most likely sucks at a fundamental level.

To avoid repeating that mistake, perhaps I can provide something useful in the way of introduction and framing, with this post, and then dig in more deeply in the follow-on.

It's about operational security, and Tor .onion attacks, and TLS injection exploits, and beyond-javascript/post-Stuxnet modular polymorphic rootkit architectures. And vacation. So we'll go back-to-front and hope the narrative arc coheres as things unfold along the way...

Over the just-completed long holiday weekend, I took advantage of the lull in the calendar to hoof it up into the mountains surrounding our hanging-valley city here, in the lands where winter is still keeping a good grip on events. More accurately, I acquiesced to well-crafted plans others put in place, seeing both the wisdom of the timing and the chance to buffer my absent-minded professor failings with the crisp competence of my loved ones. As is our way on the team (cryptostorm.ch/teamsec), we made no public statement on this and only a few core team were briefed in advance - that's an attack surface we keep minimised as a team, a priori, because doing so is trivially easy, has insubstantial costs, and generates a small but measurable improvement in overall project security.

Clearly I was in line for some off-terminal time, away from the fishycerts and browser injects and beyond-unicode character sets festooning SSL certificates like a garland of Cthulhu's own joyless Mardi Gras beads. In fact, reading that last sentence, I can see that the need for a vacation had long since arrived before last weekend. Go too far down those dark, paranoid pathways and one risks fitting Hunter Thompson's legendary character summation of his Samoan lawyer colleague:
"There he goes. One of God's own prototypes. Some kind of high powered mutant never even considered for mass production. Too weird to live, and too rare to die."
~ Hunter S. Thompson
There was only the small matter of the infection I'd picked up on an old laptop I use almost exclusively for research coordination.

☂ ☂ ☂

I noticed the machine going a bit off its feed, as it were, several weeks ago. Nothing unusual there - show me a personal machine that's not traversing the monotonic downward curve towards utter dysfunction, and I'll show you a machine still in bubble wrap. They all get blasted by the vagaries of everyday 'net connected life, and no it's not just Windows machines. Most all the really brilliant technical maestros I know simply treat local machines as disposable, ephemeral cannon fodder: time spent making them durable is like time spent making a sandcastle strong against the incoming tide.

Still, this particular machine - an old laptop running an enviably-stable Linux distro - was far from the flaky edge of that world. So when odd things went from zero to more numerous than I could count with my fingers, my interest was piqued.

Naturally, the first assumption amongst my peers at cryptostorm was that I'd screwed up the hardware, somehow. This is entirely fair. I am... not a hardware specialist. It's all mysterious and vaguely unsettling to me: atoms, not bits. Weird terms like "shearage" and "stiction" that are not in my universe of formally undefined supersets, recursive topology, and information decoherence are a sign I've gone too far down the OSI layers and ended up mired in... stuff. Which I am no good at: stuff. It rebels against me, it's mysterious and stubborn and at once boringly predictable and constantly surprising in how it can go bad.

The problem this time is that it didn't seem obvious how I'd caused these strange tidings. Reading papers on topics too boring to even type here? Using a command-line text editor to take notes on technical projects? Even I might not have enough hardware fail to pull those kinds of failure off - although I have a tendency to surprise, in that regard.

By the time we'd prepared for the weekend trip, the laptop in question was essentially nonfunctional. Like a well-spruced zombie, its outer veneer of anodyne functionality had fallen away at an accelerating rate. The GUI over-layer - which, come to think of it, had shown odd moments of inexplicable slippage for a week or so - simply fell apart. Passwords for accounts would work... then stop working... then work again. Aggressively laggy network performance, and glimpses of wireshark data that looked like black-swan outliers. The network slippage is what finally hooked my curiosity - that's my world, to a degree, and I know enough to know when I've broken something with my fiddling... or hadn't broken something.

By the time the weekend was over, and I'd ambled back into cryptostorm's world of daily ops admin and weaponised fonts (" ") as exotic remote attack vectors, I'd reinstalled a handful of Linux distros a dozen or so times on that machine, with similarly unsettling efforts invested in more or less every machine touching our local/home network. The nickname "typhoid pj" took root, got old, and became meta-clever - and I found myself vivisecting a perfectly-healthy (computer, non-biological) mouse late at night, muttering about BadBIOS and RFID-carried firmware evils.

I was that guy, the paranoid one.

Easy enough to walk away from, really. Write it all off as the usual local machine slide into chaos. Get some late-spring skiing in, enjoy the unaccustomed sunshine - healthy choices. But this stuff was doing interesting local network tricks, and in the end that's what kept me curious enough to keep fiddling.

☂ ☂ ☂

No, it's not the next DarkHotel - I don't think so anyway, but what do I know in such matters? It's something, that I am confident of - we'll be posting disk images of a newly-infected, baseline Debian OS install so others may review firsthand, if they're curious enough to look. I don't have pcaps - getting those OS snapshots was a challenge enough for me, with my lack of forensic malware streetfighter experience. In short, I took a middle road: neither throwing a few months into learning the skills needed to do it right, nor ignoring it entirely because it's (apparently) just PC (Linux, actually) malware & thus not cryptostorm's job.

A few things about the experience sparked some really interesting lines of analysis, for me... relating directly to cryptostorm's network service. First, I learned that moving quickly and unpredictably makes it hard for an opponent (digital or otherwise) to pin one down. Sun Tzu said it better, but it sank into my bones during those long hours spent trying to out-fox the various clever tools used to hijack my network sessions mid-stream. To me, this speaks to a phase of our project work that embraces fluid, ephemeral network and endpoint behaviour as the one mechanism both known to work against vastly over-resourced adversaries, and practical in terms of real-world usage by folks who have jobs and families and lives outside of exotic font-based attack models. Dynamic network models, and dynamic OS instances, resonate as offering a path through the minefield of the NSA's "the internet is our LAN" position globally.

Also - and here I become self-parodic, beating once more a drum even I'm tired of hearing me beat - there are bogus SSL certificates all over this stuff. It's a first step, the sine qua non of exploitation and pwnage: get the fishy root cert installed in the trust store, and the attack blossoms from there. We - as cryptostorm, let alone as the global internetworked community of several billion humans - must either deploy methods to plug this gaping hole in network security, or stop pretending to "encrypt" data that's functionally plaintext to the very attackers we intend to protect against. To frame it as that either/or is neither exaggeration nor false reification.

Therefore: keychain.tools, which is a handy, surface-minimised, dynamically flexible, successful response to the wholesale and years-long undermining of CA-based network security by the NSA and pals. It's not the only tool, and it's got its own attack surfaces - but as I struggled all weekend to make sure the repository pulls I was doing were actually going to a repository & not some weird package-mutating workshop in a secure undisclosed location, I realised how utterly powerless we are if we are using all this great crypto on top of a platform made of toothpicks.
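For the repository-pull worry specifically: apt already verifies a chain from the gpg-signed Release file, through the Packages index, down to each .deb - the kit described earlier subverts that chain rather than bypassing it, which is why independent re-checking is worth the bother. A sketch of the last hop only (the stanza format follows the Debian Packages index; the package names here are hypothetical):

```python
import hashlib
import re
from typing import Optional


def sha256_for(packages_text: str, package: str) -> Optional[str]:
    """Pull the SHA256 field for one package out of a Packages index.

    Real apt verifies a longer chain (signed Release -> Packages -> .deb);
    this sketch covers only the final .deb-against-index comparison.
    """
    for stanza in packages_text.split("\n\n"):
        if re.search(rf"^Package: {re.escape(package)}$", stanza, re.M):
            m = re.search(r"^SHA256: ([0-9a-f]{64})$", stanza, re.M)
            return m.group(1) if m else None
    return None


def deb_matches(deb_bytes: bytes, packages_text: str, package: str) -> bool:
    """True if the downloaded .deb hashes to the value the index promises."""
    expected = sha256_for(packages_text, package)
    return (expected is not None
            and hashlib.sha256(deb_bytes).hexdigest() == expected)
```

Doing this against an index fetched over a *different* network path than the .deb itself is the cheap way to make a package-mutating middlebox show its hand.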

Once one stares down that particular rabbit hole long enough, the frame of reference shifts and one is looking up into the dark-version sky-analogue rather than down towards our planetary core.

And the one surprising thing to come out of this weekend wrestling with a clever collection of scripts and rpm injects is that I am profoundly empowered by that look down the rabbit hole. As a child, I'd have fears of monsters under the bed - a common thing, yes? At once I wanted to see it, whatever "it" was - to know it would be to categorise it, plan for it, and either overcome it or avoid it. Not-knowing has always been, for me, a slow torture. In this case, the knowing opens up inviting - charming, even - pathways through the dual minefield of dragnet surveillance & global rooting that's how things are out there today.

☂ ☂ ☂

Yes, in fact, these client-side nasties are "cryptostorm's job" - we might not be able to single-handedly "fix" them, but we'd be utterly remiss in our duty to members if we didn't realistically model them as part of our overall security analysis. A colleague recently chivvied us with regards to our slow deployment of "real" cryptostorm Android and iOS apps. As a team, we've held off on those for a combination of reasons; one core reason is that we don't yet know how to provide any modicum of legitimate security assurance on these platforms: we can protect data-in-transit, sure... but a gang-rooted handset with a "secure" connection to cryptostorm is hardly a paragon of reliable security assurance. Whether it's "cryptostorm's fault," or not.

In conclusion, my little dilettante's foray into the world of big-game rootkit meta-toolkits emphasised for me that these "endpoint" attacks are all network-based at core. Obviously - remote exploits are remote. Harden up enough in the right spots, and one can undermine the whole kit's capabilities... make cert fuckery no longer easy for anyone with a few bucks to spend, and a really big chunk of these attacks simply stop working. That's high-leverage insight, to me - and it's something I think our members will benefit from for a long while indeed.

Along the way I managed to get one brand-new laptop to refuse to boot or otherwise acknowledge the visible world. One is missing a few keyboard keys because I "slipped" and hit it too hard. A few hard drives are simply no longer coherent objects, having been pulled apart bit by bit until nothing is left.

My whitepaper-authoring laptop, that started it all off? Oddly it came back from the dead, and seems quite happy to be free of the parasite it's been carrying for who-knows-how-long. Ironic, indeed.

Cheers,

~ pj
by Pattern_Juggled
Tue Apr 07, 2015 8:23 pm
Forum: general chat, suggestions, industry news
Topic: Let's do this: a library of technical security papers
Replies: 2
Views: 12803

Let's do this: a library of technical security papers

{direct link: cryptostorm.ch/paperchase}


Last week, some of our friends in twitter provided an excellent suggestion: why don't we put together a collection of academic papers on network security & cryptography? Having pondered that over the holiday weekend, I concur 100%.

As is true for every cog in the wheels of academia, I struggle with the flow of papers - papers I'm working on myself and with colleagues, papers I've been asked to formally review prior to publication, papers I really want to read as they are hot stuff, papers I need to read because currency, papers I know I should read and, if stuck on a desert island for a few years, surely would "catch up" on, papers I feel obligated to have a copy of around despite knowing that I'll never get past the abstract (if that), papers sent to me randomly (often the best ones, by far)... and so on. Some folks reading this will be chuckling now, as it's a universal issue in academia. We all have different strategies for handling it; most of those strategies fail, in the sense of not keeping the papers well-prioritised, well-organised, and accessible - and we take that failure as a fact of life, more or less. So it goes.

I'm in an odd role of carrying (part of) that academic load whilst also engaging in real-life work on the team with cryptostorm. Not surprisingly, I tend to veer towards theoretical areas on this team - and thus I collect an entirely new set of papers that I duly manage poorly as I have for decades elsewhere. And I bore dinner guests to death with references to "the {insert lead author name}" paper - such usage serving as a permanent scar of academic hard time far better than any formal credentials ever could. ;-)

Anyway, the paper-management issue with cstorm is less of a micro-scale private Woody Allen skit, and more relevant because many of the papers we see on the team here are really important in practical terms - life and death, in more than a few cases - for cryptostorm members, and the community more broadly. Research library denizens such as myself develop a creepy ability to recall and digest a flow of papers spanning decades - actual humans who see the light of day aren't stuffing their minds with obscure paper cites that may or may not (likely not) ever prove useful in any practical sense. So having this big pile of papers sitting around, inaccessible, sucks. To put it bluntly.

I personally have hundreds of wonderful netsec papers squirreled away in repositories here and there. In the past, I tried to post them here in forum threads - but it's tedious, monotonous, hard-to-automate work to do so in volume. Because, yes, almost all papers are .pdf even in today's day and age. Sure, we stick DOIs on our output and yes DOIs are handy in their own way... but they don't in themselves do anything to solve the archiving problem.

Which, in fact, is largely a solved problem.

There's half a dozen software toolsets of reasonably high repute that automate various parts of the keeping track of papers task. I've a favourite, but it's really better aligned with small-scale collaboration (with co-authors, basically, and journal editors/reviewers) than to this task... although I think a couple of the others look quite strong as candidates. So my tendency is to lean into that space - not surprising, given that it's home terrain for me.

Conversely, there's been suggestions to do a simple wiki: flexible, extensible, de-structured & thus encouraging structuring, and so on. The collaborative tech side of me thinks this all sounds like catnip for info-ecstatic rapture... but the academic in me quails at the disorganised, floppy, student-style sense of it. (I could claim otherwise, but I'd be lying - why do that?)

Thus, opening the question to the community.

Ideas? Advice? Feedback? Suggestions? Tools to recommend? Tools to avoid at all costs & burn with waves of zero-g fire if possible?

Worst-case, I'll just import in an academic tool and use it to essentially open a public portal into my personal paper-queuing methodology... so at least the papers are there and can be found in some sort of taxonomic structuring, for those seeking them. And also pagerank, of course ;-)

Best-case, we can seed the creation of a resource for paper archiving, access, and commentary that will vastly improve the accessibility of - and thus the real-world impact achieved by - these excellent research write-ups.

As an example to stir the pot, here's a great piece that I'd not even heard of until it was pointed out to me recently by a colleague in twitter DMs. That might seem banal, but this is a topic that's something of an obsession of mine (one of many, admittedly) in terms of cstorm's security roadmap... but I'd never seen this one. Blame me for being lax in staying abreast of the literature, or see it as a harbinger of what's not really working in current form:
ILOM.png
ipmi-woot13.pdf
Cheers,

~ pj
by Pattern_Juggled
Wed Apr 01, 2015 8:35 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: cleanvpn.org/airvpn - information & team process discussions (a great, positive example!)
Replies: 1
Views: 23968

reply to AirVPN's contribution: suggestions & appreciation

Clodo wrote:We are available to provide any information you need.
We'd missed this post, until a member was kind enough to point us towards it. Our apologies for the delay in reply, no disrespect intended.
Under GitHub we release ALL the source code related to our client: https://github.com/AirVPN/airvpn-client
This include also an additional project used to generate (compilation, packaging, signing) binary deploy files (.zip, .dmg, .tar.gz etc).
This is a nice standard, and we have indeed reviewed your repository as a benchmark and example of source publication.

That we take a much more slimmed-down approach to this process should not be surprising: it's entirely congruent not only with our project's general "less is more" tendency, but also with a vastly slimmed-down client application itself. The lack of extensive second-order helper and deployment components is not indicative of a failure to publish them, but rather of their absence from our build process.

Our version-specific binary releases have traditionally been published here in our forum, along with all relevant hash fingerprints, build details, changelog, and whatnot. We're not averse to migrating some of that to github, but frankly there are security implications that have kept us from doing so. Simply put, we don't control github (obviously), and using it as a binary-verification platform creates a single point of subversion failure - much less the case here in our forum, which we administer ourselves on a server we maintain ourselves.

That said, we've long been moving towards a reproducible build framework for our version deployments - following along behind the excellent work in this space being done by the Tor Project. As our client is minimally bogged-down with extraneous components, this is a much less challenging task than that faced by Tor... even so, we've not yet got it to where it's ready for public presentation. That's something we need to do better at, and we're appreciative of others in the industry chivvying us in that regard.

We differ somewhat from others in the mechanisms we use as benchmarks for code signing and code integrity verification. Frankly, using CA-based code signing resources strikes us as close to parody, given how badly subverted that entire process is. At the same time, raw OpenPGP signatures are close to impossible for 99% of folks to actually verify as genuine given the requirement for command line competence. That's unfortunate, perhaps, but 100% true.
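To make the fingerprint-checking step concrete, here's a minimal sketch of verifying a downloaded release binary against a published SHA-256 digest. Filenames and fingerprint values are hypothetical, and this is not our actual release process - it only illustrates the shape of the check:

```python
# Hedged sketch: verify a downloaded release binary against a published
# SHA-256 fingerprint. Filenames and digests here are hypothetical; a
# signature over the fingerprint list is still needed to authenticate it.
import hashlib

def sha256_file(path: str, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path: str, published_hex: str) -> bool:
    # Fingerprints are often posted in uppercase; normalise before comparing.
    return sha256_file(path) == published_hex.strip().lower()
```

The comparison only means something if the published fingerprint arrives over a channel you trust - which is exactly why hosting fingerprints on the same third-party platform as the binaries is a single point of subversion.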

As this subsumes under our KeyChain decentralised authenticity verification framework, it's something we'll publish in more detail via that channel. Suffice to say that we'd like to see binary verification that is not fiendishly difficult for people to use, and that is blockchain-based in terms of posting of validation primitives.
In your GitHub, in our section: https://github.com/cryptostorm/cleanvpn ... ter/airvpn there are 3 files.

7za.exe is used by the deploy project to generate .zip files for Windows. Included in GitHub. Never included in final build.
Program.cs is the main source-code of the deploy project. Included in GitHub. Never included in final build.
github-d967f968a967d73050b6f00df5ceb05917ff8f3c7f3803e832bee5eda8037365.js is an unknown file by us. Anyway, our client doesn't use javascript.
That repository is open to public editing & commits, and is not intended nor administered as "our" repository in any meaningful sense. We hope you'll revise, expand upon, and remove files from the stub AirVPN subfolder I created there as a placeholder. If you prefer us to clone over your repository, we could do so... but that seems a little bit heavy for the cleanVPN project flow overall.

Basically, our hope is that you'll use the space in the cleanVPN repository in what ways you prefer to use it so that there's a diversity of approaches and presentations taking place as time goes by. For example, if I've inadvertently cloned in some javascript that has no relevance, by all means rm it & annotate the changelog with exactly that information! The reason I've not published on any of those three files is that they're utterly incomplete, not well-reviewed by anyone for cleanVPN, and basically stuck there as a reminder that the subdirectory could use some fleshing out.
As a side note, I would like to underline that few competitors release their client software under GPL.
This is undeniably true, and perhaps we can create some momentum towards change in that regard. In part, I suspect, some simply aren't familiar with the tools for source publication... it can seem daunting to those new to the process. By providing examples - diverse examples, as I mentioned above - perhaps we can do some "leadership through engaged mentorship" in this regard, and thus encourage constructive evolution in the industry overall.
Can we know why you report the aforementioned files?
Hopefully I've touched on that in sufficient detail above; there's no malicious intent nor intimation in the selection of files there, nor has that ever been suggested by us in anything we've published independently or as part of the cleanVPN process. And to reiterate: the repository is publicly edit-permission set and has been all along. Edit it into something that is useful, and by all means we'll gladly use that as a constructive example to provide to others.
It's important to us to block in a timely manner such insinuations.
Here's a screenshot of the tweet to which you've linked, above:
AirVPNreddit.png
Your concern over insinuation is understandable, given that. We'd neither seen it, nor been made aware of it previously.

I'll submit this reply to our twitter-manning staffer, so there's another direct link into this thread connected to that twitter conversation. And we're happy to aggressively publicise any materials or analytic supplement you choose to provide in the repository, or here, or anywhere else to be honest - the best way to overcome whispering innuendo, our experience has suggested, is to shine a bright light of factual data on it.

Again, my personal apology for the delay in seeing this post - and thus in replying. I've set some triggers in these threads to ensure such doesn't take place again.

Regards,

~ pj



ps: I hesitate to point this out, as it sounds a bit disingenuous to do so, but you'll find if you check our public statements as shared on twitter and elsewhere that we routinely cite AirVPN as a VPN service that is clearly doing good work and doing it without any sense of fraudulent undertone. This may seem like faint praise, but there's only one other service we mention in similar terms (Mullvad) - out of the vast seas of other entities now littering the VPN landscape. If it's of any benefit, I'll gather up citations to those public mentions to back up this parenthetical comment. Cryptostorm approaches many areas of secure networking from a different direction - and answers resulting questions differently - than does AirVPN (or Mullvad), so it's not that we're aligned in that way. However, we have high confidence that AirVPN isn't a scam nor ineptly managed... that confidence is all too scarce in the industry nowadays, as I am sure you well understand.
by Pattern_Juggled
Wed Mar 25, 2015 12:47 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: KeyChain: cryptostorm's #CAfree, direct-key tls/ssl w/ https .onion PoC
Replies: 4
Views: 44520

KeyChain - community support

parityboy wrote:So the next obvious question is also a rather pertinent one. How can we network members support this initiative? Bitcoin and Namecoin server instances? Keyserver instances? Hidden versions of the above? Other things?
One of the cool things about what is now known by the much more marketing-friendly name of "KeyChain Validation" is that there's no extra or new infrastructure required. It leverages what's already working, and brings together components that already have widespread support.

Our view of what it needs most at this point is two things:
  • 1. some TLC for the github repository, as that's the core place for the project to really find its footing.
  • 2. framework wrangling
With regards to #2: what KeyChain really is, when we step back and take a broad view, is a framework. That word has terrible connotations nowadays, and for good reason - those who can't code, framework - but the core value of a framework remains. KeyChain will power cryptostorm's ssl/tls verification irrespective, as it solves a bucket of thorny problems with one well-defined, well-tested, minimally complex procedure that we can harden intensively.

But for the KeyChain model to expand and evolve, it needs to be able to integrate with other use-cases and toolsets. Hence a framework.

How about this: I'll write up a vastly more concise KeyChain model overview than the above how-we-got-here post, if you'll get the KeyChain github repository ready to host the KeyChain wiki via their cool onboard basic wiki tool. Sound fair?

Let me know your github ID, and I'll get you pulled in to the project team admin group.
Side Note
I have read that many Tor site operators do not bother with SSL because the network is already encrypted, so there is "no need". Actually, that has sparked off another thought. For metanetworks where the data transport is already encrypted, could a similar solution be made to work without SSL? Or "switch off" the encryption and keep the authenticity? Or would that require Tor to develop its own Hidden Service authentication protocol?
Our summary of why .onion sites rarely implement https is threefold. One, it's a bit of a pain in the ass to get webservers and torrc comfortable with the hocus-pocus nonsense of x.509 PKI given the realities of onion addressing - not horrible, just the usual silly CA, cross-signing, and varying-encoding shenanigans of any such CA-related task - made all the more so by the utterly arbitrary nature of "trust" decisions on the part of the CAs themselves, and how those impact .onion sites.

Second, it's assumed that Tor itself makes the encryption and authentication of https redundant, or even a potential security risk. This critique is basically true in many cases, but once .onion content starts to come out of Tor and into the conventional internet, that assumption falls apart & plaintext .onion content is running across a public wire. That alone makes https worth doing, just to be sure it does not happen.

Third, there's a general misunderstanding of how all this crypto crap works together in such a model, and admins (entirely correctly!) decide to winnow out some complexity by excluding the parts they aren't 100% sure add actual security benefits. I applaud that decision rationale: until she's feeling confident she understands how the pieces fit together - or until the pieces fit together in a considerably more elegant & drama-free manner - a .onion admin is far better off staying with the Tor layer exclusively.

And with regards to your question about splitting crypto and authenticity, I'll boil down a very interesting route of discovery and simply point out this: encryption and authenticity are two sides of one coin, not two components that can be chosen or discarded as one prefers.
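A toy demonstration of that coin: unauthenticated Diffie-Hellman encrypts beautifully, yet an active attacker who substitutes public values on the wire ends up sharing a key with each side. This is a deliberately miniature sketch (toy-sized parameters, illustrative only), not a real key-exchange implementation:

```python
# Toy unauthenticated Diffie-Hellman MITM (illustrative parameters only --
# a 127-bit Mersenne prime is far too small for real use).
import secrets

P = 2**127 - 1   # prime modulus (toy size)
G = 3            # generator

def keypair():
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob
m_priv, m_pub = keypair()   # Mallory, sitting on the wire between them

# Mallory swaps her own public value into both directions of the exchange.
alice_key = pow(m_pub, a_priv, P)   # Alice thinks this is shared with Bob
bob_key   = pow(m_pub, b_priv, P)   # Bob thinks this is shared with Alice
mallory_a = pow(a_pub, m_priv, P)   # Mallory's key with Alice
mallory_b = pow(b_pub, m_priv, P)   # Mallory's key with Bob

# Each endpoint has in fact keyed up with Mallory, who can now decrypt,
# read, and re-encrypt everything flowing in both directions.
assert alice_key == mallory_a and bob_key == mallory_b
```

Nothing in the arithmetic failed; what failed is that neither side could verify whose public value they actually received. That verification step is authentication.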

Cheers,

~ pj
by Pattern_Juggled
Tue Mar 24, 2015 11:08 pm
Forum: member support & tech assistance
Topic: Turing Down?
Replies: 1
Views: 6356

turing.cryptostorm.net status update

Hardware replacement underway - bad hard drive. Details here.

Cheers,

~ pj

ps: you might want to compile your openVPN build against the current openssl libraries...
by Pattern_Juggled
Sun Mar 22, 2015 7:15 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: KeyChain: cryptostorm's #CAfree, direct-key tls/ssl w/ https .onion PoC
Replies: 4
Views: 44520

KeyChain: cryptostorm's #CAfree, direct-key tls/ssl w/ https .onion PoC

{direct link: cryptostorm.ch/keychain}

github repository: github.com/cryptostorm/KeyChain


Late last week, I made use of the opportunity to lay out some of the ground-level work we as a team have been doing since last fall, via a post at our crypto.cricket blog. As I was "volunteered" for this duty by the rest of the team, I wrote long... as I tend to do. For some, that sort of writing is painfully boring to read. I concur, for the most part. However, our decision as a team was to allow my worst long-form impulses to assert themselves, in order to provide some framework for the real heart of the story.

The heart of the story is delivering network security - real, reliable, consistent, comprehensive network security - to our members. So, today, my job is to share that part of our work in as concise a form as I can. To make that possible, here are several citations of previous items published by our team on topics related to this; those who want to know why we're doing what we're doing are encouraged to start with these. Those who prefer to 'cut to the chase' and see the plan as it turns tangible can simply continue forth from here.
Yesterday, in Part 1 of this piece, I laid out a series of flexibly-connected, global-scale, complex, systems-level threats that, when seen in full perspective, constitute an ontological threat to the security of cryptostorm's network and its members. Actually, I left quite a few pieces out - hardware-based attacks, client-side rootkits, side-channel weakness ubiquity, and so on - as the laundry list can start to seem overwhelming in full roar. But I hope the point has been made: big issues are relevant and require attention.

Today, we're tactical. And a tactical example helps set the stage for what can go really wrong in the "think local" side of things. This little vignette begins with an exchange that took place recently on twitter:
HMAoops.png
We at cryptostorm - and I personally - aren't interested in overstating this issue, but there's no nicer way to say this than: this is what happens when bad crypto meets low external review. I promised to keep this reply short, and despite the temptation to veer astray, I'll stick to that. Besides, Moxie's essay on this topic is, in a word, brilliant - there's nothing I can say that'd improve on his explanation, and it's best I just point those curious for the deeper details over there.

As HMA says in their reply to our criticism:
"The PSK is for authentication, not encryption or decryption. It's used as an alternative to certificates."
it sounds reasonable to say that the "pre-shared-key" is only for authentication, not for actual encryption... and in that case, if we don't care about that weirdly abstract authentication nonsense, we can just cut to the chase and do the encrypting that counts. And actually, it is possible to do that... but only if you basically do authentication but call it something else - same difference. Or, of course, you can use pre-shared-keys... which must remain private and secure to be of any use.

In this case, HideMyAss is doing neither. They've published their PSK on the internet, so anyone can find it. It's so low-entropy in any case that a ten-year-old could guess it in a few minutes. That means that, for the specific underlying protocol on which they have based this "encryption" service, MS-CHAP is what's available (v2... as well as v1, the latter being beyond broken and into satire):
HMA-MSchap.png
Long story short, these network sessions are functionally plaintext for anyone who goes to the trouble of gathering them up - or storing them for later review. And while it seems like authentication can be carved off from crypto, in reality that's not how things work.

Incidentally, if there's any question regarding the decrypt, we'd gladly receive captured traffic on a session or two and turn around plaintext from it. This is not much of a challenge, given that Moxie automated the entire process (much of that automation exists to figure out the PSK or equivalent passphrase - which isn't needed here, since it's published). This really is as simple a case of useless crypto as can be imagined. Or, as Moxie put it a few years ago...
"In many cases, larger enterprises have opted to use IPSEC-PSK over PPTP. While PPTP is now clearly broken, IPSEC-PSK is arguably worse than PPTP ever was for a dictionary-based attack vector. PPTP at least requires an attacker to obtain an active network capture in order to employ an offline dictionary attack, while IPSEC-PSK VPNs in aggressive mode will actually hand out hashes to any connecting attacker."
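The aggressive-mode attack Moxie describes reduces to an offline dictionary search. The sketch below is schematic - the HMAC construction is a stand-in, not the actual IKE computation - but the shape is faithful: the attacker holds a captured keyed value and simply replays candidate passphrases against it until one matches:

```python
# Schematic offline dictionary attack against a low-entropy PSK. The HMAC
# here is a stand-in for the keyed hash IKE aggressive mode hands out; the
# real computation differs, but the attack shape is the same.
import hashlib, hmac

def keyed_value(psk: bytes, exchange: bytes) -> bytes:
    return hmac.new(psk, exchange, hashlib.sha1).digest()

def dictionary_attack(captured: bytes, exchange: bytes, wordlist):
    for candidate in wordlist:
        if hmac.compare_digest(keyed_value(candidate.encode(), exchange), captured):
            return candidate
    return None

exchange = b"nonces||identities||DH-values"    # all visible to a passive observer
captured = keyed_value(b"MyVPN", exchange)     # keyed on a weak (published!) PSK
recovered = dictionary_attack(captured, exchange, ["letmein", "vpn123", "MyVPN"])
print(recovered)  # -> MyVPN
```

When the PSK is published outright, as in this case, even the dictionary step is unnecessary.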
tl;dr - authentication matters

☯ ☯ ☯

Which is something of a problem, because authentication for effectively all network cryptography in the civilian world is structurally crippled. That's the bad news.

The good news is that the cryptographic primitives - the underlying mathematics of real auth models that work across untrusted network pathways - work just fine. I'd say they're genuine marvels, these asymmetric key exchange algorithms - close on to magic. The failure (intentional or not) lies in the way it's all put into practice, out in the world.

Out in the world, authentication hinges on chains of trust - only, these chains aren't trust like most of us would use the term. Instead, they're rigidly hierarchical - anyone at "root" trust level can vouch for anyone else in the entire system, including themselves. This is a parody of rigid hierarchy, and unsurprisingly it shows all the failures such rigid models are known to produce.

In practical terms, when you decide you want to create a secure communications channel with a particular news website, for example, the way you know the website you're visiting is really the website you want to visit is that you are implicitly trusting the entire, broken, dysfunctional edifice of Certification Authority-based session verification to make sure you're pointed right. Further, the same question of knowing who you're connecting to is at the root of protecting against Man-in-The-Middle attacks - so that's a fail, as well.

There's a whole giant edifice that's grown up around this admittedly broken way of making sure our network traffic is secure. And there's enormous effort expended by smart, talented, motivated folks to fix the CA system so we can make the internet secure again. It's all a hopeless waste, and worse it's all completely unnecessary.

Cryptostorm has found a path out of this mess that doesn't require "fixing" an un-fixable mess. Nor does it envision throwing that entire system out and starting, tabula rasa, with some idealised new system of perfection, whole-cloth. Instead, we've gone through the convoluted inner workings of the existing CA model, highlighted the bits that actually do a decent job of their tasks, pushed aside the unnecessary complexity and baroque filigree of silly absurdity, stayed firmly based on proven cryptographic primitives, and sought out iterative steps to deploy instead of big-bang, all-at-once pipe dreams.

We call it Decentralised Attestation. And it works.

We'll show you, right now.

☯ ☯ ☯

Asymmetric key exchange, the way we get a secure channel going across an insecure internet, has a couple of absolute requirements in order for it to work. Basically, each party in the discussion needs to have a public key they've verifiably received from the other. Those keys aren't secret - unlike HideMyAss's PSK, these keys are meant to be public. If the two sides can get their hands on known-good public keys for each other, we're off to the races.

We get a lot closer to good answers when we remind ourselves that "certificates" are just public keys wrapped in some extra "I vouch for you" stuff tacked on by Certification Authorities (and not wrapped well, or securely, at all). The public key is right there, in the certificate. As a thought experiment, take the certificate fluff out and you've got two parties, trying to communicate securely, who need to be sure they've got each other's public keys. That's the crux of the whole thing.
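One minimal way to act on that insight is to pin the fingerprint of a known-good public key obtained out-of-band, and compare it against whatever key the wire presents. A sketch - the key bytes here are placeholders standing in for a real DER-encoded public key:

```python
# Minimal public-key pinning sketch: trust is anchored in a fingerprint
# obtained out-of-band, not in a CA's say-so. Key bytes are placeholders.
import hashlib, hmac

def fingerprint(pubkey_der: bytes) -> str:
    return hashlib.sha256(pubkey_der).hexdigest()

def key_matches_pin(pubkey_der: bytes, pinned: str) -> bool:
    # Constant-time comparison is overkill for public data, but harmless.
    return hmac.compare_digest(fingerprint(pubkey_der), pinned)

pinned = fingerprint(b"--known-good public key, DER bytes--")
assert key_matches_pin(b"--known-good public key, DER bytes--", pinned)
assert not key_matches_pin(b"--attacker's swapped-in key bytes--", pinned)
```

The hard problem, of course, is the out-of-band step itself: getting that first fingerprint into the right hands without trusting the insecure channel - which is exactly what the rest of this post is about.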

Which can be seen in either of two ways, basically. On the one hand, we might say "sure, no problem - they send each other their public keys and that's that." Or, we might say "there's no way to do that across an insecure network that isn't itself subject to MiTM, and thus it's already a failure." They're both wrong: yes, the public keys can't just be emailed back and forth before the secure channel exists... obviously anyone who wants to can grab the keys off the wire, switch them for different keys, and generally blur the channel so badly that there's no way to even get started.

On the other hand, the assumption that it's impossible to get keys into the right hands - provably do this - is also wrong. It seems like it might be pessimistically true... but that's only if we don't notice all the useful new techniques & technologies around that can crack that nut.

Incidentally, the way this is done for https "secure" web sessions is a mix; certificates do get sent back and forth plaintext, but they also have this shambling "trust store" idea that at core tries to get certs to end-users without going across insecure networks. This is done by having trusted companies like Comodo and DigiCert "vouch" for certs... which in practice often means checking if certs are legit by plaintext network sessions - exactly what we know is not going to be secure. Total failure, fractally broken top to bottom. Let's just put that aside.

What we have is the need to get public keys into the hands of the people who need them, in a way that's reliable and robust. We also need to be sure that when a public key is no longer controlled by the person who used to control it, the people relying on it can find out it's been "revoked." Two sides of the same coin.

This isn't just https web browsing, either. Cryptostorm makes use of asymmetric authentication to create our secure cryptographic connections with our members. And, yes, that system has lots of places it can fail - not as many as CA-based https, fortunately... but still too many to stay as it is.

We're deploying DA-based authentication for all cryptostorm sessions - that's already in the works - but meanwhile we thought a tangible example, a proof of concept (PoC) in security tech speak, would help cut through the sea of my boring words, and demonstrate what's actually happening.

Here goes...

☯ ☯ ☯


We have made our main websites - cryptostorm.is & cryptostorm.ch - available as .onion Tor hidden services sites since last fall. Mostly this helps us understand the tech required to do this, and keeps us current in such matters. Given torstorm's role in our team's work, and cross-network access to .onion and .i2p sites we already provide for on-storm members, it's important that we're hands-on with these tools.

Naturally, once we'd made those sites available, we wondered about https versions. Not because we're fanatics about https; if anything we're deep into pessimistic terrain (in that, we're hardly alone). But rather, it's an obvious question for anyone of a crypto-tech frame of mind. Yes, facebook did it (<fake cheer>), and more recently blockchain.info (their shallot-forced address is vastly cooler, we think). How, specifically? We read all the reports in the press, but that's not brass-tacks.

So we pulled the server-issued certs themselves to take a look at them firsthand, and we de-PEM'd them. Here they are:
Skipping over details, we tried replicating their approach and got shot down by the CAs. Apparently there's a lot of begging involved if you want an "official" ssl cert for an onion site. Some on our team were keen to find a workaround - I'm sure they're possible; cut me loose with some outside-UTF8 glyphs and it's a simple spoof, I suspect - but as we got more clever about it, eventually we realised we were off the path entirely.

Because, wtf? I'm sorry, but CAs "vouching" for hidden service websites in Tor is just an astonishingly, brazenly, sneeringly horrific plan. And it's already well along the way to becoming real - they want to bring "trust" to .onion hosted content... with CA-based session validation!! Specifically, here's the rationalization offered...
– Powerful web platform features are restricted to secure origins, which are currently not available to onion names (in part, because of the lack of IANA registration). Permitting EV certs for onion names will help provide a secure origin for the service, moving onion towards use of powerful web platform features.

– Currently, access to .onion names over https from a standard browser results in the standard existing ‘Invalid Certificate’ warning. Training users to click through security warnings lowers the value of these warnings and will cause users to miss important security information. Removing these warnings for the user, through use of a digital certificate, will help users recognize and avoid real MITM attacks.

– The public needs attribution of ownership of the .onion address to differentiate onion services, including potential phishing services. Because onion names are not easily recognizable strings, providing the public with additional information about the operator has significant security improvements, especially in regions where use of the incorrect name could have lethal consequences.
I boldfaced the particularly cringe-inducing parts. Anyone who can sign a document claiming that .onion sites need CA-controlled https in order to "help users recognize and avoid real MITM attacks" really does have a possible second career in acting. Impressively done! Painting CAs as harbingers of improved security, stable attribution, and generally well-run secure network sessions really is over the top, however.

So why do we need CAs to help make sure .onion-land enjoys all the putative benefits of crippled, CA-controlled validation as it's witnessed on the conventional web already?

Simple answer: we don't.


There's no improvement, indeed an anti-benefit, to be found in having CAs involved in onion session security or onion identity authentication. This is utterly the case, since .onion websites are routed via the coordinates that actually make up their address. How do you know an .onion site is what it says it is? Send packets to it - definitionally, they get to the address that is encoded in the address itself. This is really basic stuff.
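For the (then-current) v2 hidden-service scheme, that identity claim is literal: the address is derived from the service's own public key - the first 80 bits of the SHA-1 of the DER-encoded key, base32-encoded. A sketch with placeholder key bytes (not a real DER-encoded RSA key):

```python
# Sketch of v2 .onion address derivation: the address *is* a truncated hash
# of the service's public key, so reaching the address authenticates the
# key. Input bytes here are a placeholder, not a real DER-encoded RSA key.
import base64, hashlib

def onion_v2_address(pubkey_der: bytes) -> str:
    digest = hashlib.sha1(pubkey_der).digest()
    # First 80 bits of the digest, base32-encoded: exactly 16 characters.
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

addr = onion_v2_address(b"placeholder DER public key bytes")
assert len(addr) == len("xxxxxxxxxxxxxxxx.onion")
assert all(c in "abcdefghijklmnopqrstuvwxyz234567" for c in addr[:16])
```

There's no room for a CA to "vouch" in that scheme: change the key and you change the address, so a visitor who reaches the address has already, by construction, verified the key.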

Rewind back to the public and private keys. I already know an onion site is who it "says" it is, because its address is its identity. This is basic ontology. However, as we debated this issue in team meetings over the last couple of months, we eventually came to something of a consensus that an additional layer within the Tor model, when visiting onion sites, has essentially no drawback and in some cases can add genuine security benefits. We're fans of layered topologies - tunnelled tunnels - on the team, so it's our default assumption that they're worth considering. Technically, they work fine - https is all TCP by definition, so there are no UDP issues coming across Tor.

Indeed, technically this proved easy to do: it's only getting a "real" certificate that's a bottleneck.

A real certificate? We decided to demonstrate what that means...


☯ ☯ ☯


Take 100 megabytes of quantum-generated, high-entropy, almost-not-pseudo "random" source material. Mix in some customisation of the obscure parameters involved in generating RSA-based keypairs, pull out all the useless crap of corrupt CA "validation," and this is what you get...

Code: Select all

Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number: 17006882260368345458 (0xec04954f3a25c972)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=IS, ST=H\xC3\x83\xC2\xB6fu\xC3\x83\xC2\xB0borgarsv\xC3\x83\xC2\xA6\xC3\x83\xC2\xB0i, L=Reykjavik, O=\xC3\x83\xC2\xA7r\xC3\x83\xC2\xBF\xC3\x83\xC2\xBEt\xC3\x83\xC2\xB8st\xC3\x83\xC2\xB6rm\xC3\x83\xC2\xB0\xC3\x83\xC2\xA5rk\xC3\x85\xC2\x8B\xC3\x83\xC2\xAAt, OU=decentralised_attribution, CN=\xC3\x83\xC2\xB0\xC3\x83\xC2\xB8rk\xC3\x83\xC2\x9F\xC3\x83\xC2\xB6t/emailAddress=DAkeypair-onionHTTPS@cryptostorm.is
        Validity
            Not Before: Mar 22 09:56:13 2015 GMT
            Not After : Mar 19 09:56:13 2025 GMT
        Subject: C=IS, ST=H\xC3\x83\xC2\xB6fu\xC3\x83\xC2\xB0borgarsv\xC3\x83\xC2\xA6\xC3\x83\xC2\xB0i, L=Reykjavik, O=\xC3\x83\xC2\xA7r\xC3\x83\xC2\xBF\xC3\x83\xC2\xBEt\xC3\x83\xC2\xB8st\xC3\x83\xC2\xB6rm\xC3\x83\xC2\xB0\xC3\x83\xC2\xA5rk\xC3\x85\xC2\x8B\xC3\x83\xC2\xAAt, OU=decentralised_attribution, CN=\xC3\x83\xC2\xB0\xC3\x83\xC2\xB8rk\xC3\x83\xC2\x9F\xC3\x83\xC2\xB6t/emailAddress=DAkeypair-onionHTTPS@cryptostorm.is
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:bb:e4:b5:31:e8:b7:81:1c:5f:40:d0:03:09:3a:
                    66:b3:3f:36:19:92:73:53:12:e6:e4:cd:7f:56:e5:
                    07:94:27:db:ee:61:b6:0f:2d:53:2d:b1:05:9a:63:
                    5c:02:cf:f4:ed:ee:dd:81:3c:89:71:29:c2:a2:7c:
                    82:06:fe:0e:6f:96:20:9d:e0:1a:e3:ff:06:c7:fd:
                    a2:a1:30:3b:45:3a:6f:6d:85:d7:ba:30:81:87:60:
                    0e:76:3d:d8:6c:00:61:e1:cb:00:07:89:b0:19:f0:
                    e4:98:c2:8e:11:4d:8f:78:55:12:ef:55:30:99:68:
                    18:b5:b0:a0:19:1c:44:54:ae:67:6f:ca:3e:c3:85:
                    9d:6b:0c:d6:56:29:af:8d:eb:58:f7:97:a0:63:06:
                    37:83:25:c4:11:61:cd:3b:9a:7f:66:51:cb:4d:4d:
                    13:1e:3d:b0:30:b6:ad:bc:e7:31:a9:70:75:eb:18:
                    15:dc:17:80:e7:b2:08:10:31:c3:65:ed:7b:32:73:
                    33:d4:74:62:82:d2:c9:d7:bc:61:fb:a5:1a:cb:35:
                    fc:b4:ff:6f:6b:7d:db:a9:d0:0e:b5:59:db:91:05:
                    60:fe:56:02:04:dd:cf:bb:ef:a9:5d:0f:a4:60:3d:
                    9f:f0:11:5e:7d:c5:b0:88:d4:3a:be:e8:5c:e3:9d:
                    d6:78:15:27:5f:89:9f:ec:53:5f:d3:6c:fb:33:3a:
                    6b:03:12:19:c4:18:33:ff:32:a3:8d:9c:b9:ce:1e:
                    32:8e:33:7c:45:bf:e2:1f:38:27:b0:be:dc:9f:67:
                    a5:04:5d:46:11:98:e2:f8:62:c7:3d:09:9c:c7:ec:
                    c9:1f:b6:b9:17:8c:ff:5a:c0:37:2f:fa:64:12:2c:
                    06:75:a5:a2:7c:66:09:c0:5b:75:86:99:c1:cc:1e:
                    09:8c:eb:7f:5e:94:2e:05:41:6b:b3:57:3f:98:fa:
                    b8:79:30:50:4f:d6:94:17:5b:78:37:d8:5b:da:22:
                    e8:b6:62:98:82:b5:98:a8:f0:90:5a:b8:cd:ac:88:
                    6f:c8:7a:5d:1c:62:be:73:0e:16:c1:30:df:6e:51:
                    6e:21:b3:af:82:e9:11:29:34:a3:e7:35:db:82:5d:
                    1c:60:33:e9:09:ed:e9:e7:0e:64:74:ba:16:7c:e0:
                    8e:54:2a:43:a9:af:9d:ef:51:0c:5c:85:87:03:78:
                    68:3f:f7:c6:19:36:4d:4c:de:d9:08:74:46:38:b0:
                    7e:86:d6:5c:90:61:26:9e:4f:c5:87:4f:ac:c1:aa:
                    05:b3:2a:b1:bc:6c:59:b7:6f:79:e6:d2:11:b4:66:
                    b1:ac:2d:61:d0:66:20:a0:d4:00:c9:4f:3c:fd:ec:
                    82:39:ef
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha1WithRSAEncryption
         23:b3:34:8e:8e:13:b9:3f:97:b5:47:19:27:41:8b:bd:dc:26:
         79:09:a3:07:f4:f1:52:5e:3c:0e:63:bf:a1:24:cf:7e:62:3a:
         99:50:3c:15:20:0f:44:3a:c1:ea:c0:16:17:af:40:6f:bf:97:
         b2:83:08:2a:03:34:dd:04:fb:3f:70:52:b9:3d:7b:e1:bc:30:
         e8:7d:d3:19:ad:ef:e9:08:3a:51:f1:7f:b2:a3:33:72:1a:01:
         b0:b4:7b:12:62:41:8b:eb:3d:32:a1:1f:81:fe:e3:de:e2:c6:
         da:5a:87:88:8f:42:e8:ed:17:2a:ea:56:36:20:d1:22:2f:6d:
         3b:ec:64:4f:9e:17:bd:36:10:b9:48:af:3b:21:94:e7:45:af:
         17:17:c0:ef:03:6a:b8:67:6d:7d:6d:a6:3f:5b:33:7c:96:fc:
         9a:a8:33:c3:59:b5:fa:2f:13:8a:01:75:d0:ea:83:d4:96:74:
         c2:3e:7a:d9:6f:35:53:79:59:a5:97:70:bc:86:f3:3f:a9:89:
         9b:0f:f4:eb:d7:71:31:23:b3:49:40:92:ab:7b:05:6f:34:05:
         3a:75:81:a5:3a:14:ce:f9:40:b8:5f:b6:82:3b:97:0b:c3:b3:
         db:b5:58:eb:27:1d:24:c8:e3:e0:29:87:66:95:21:65:60:04:
         fe:9a:9c:1c:9e:e1:67:ea:a7:e3:67:77:db:4a:4a:1e:d6:d2:
         32:b2:ca:c0:2d:2a:1e:5c:d2:94:57:66:18:6c:b9:cb:dd:13:
         de:dc:88:b6:c5:7c:8e:32:8c:1f:85:b1:90:7a:f5:7c:9e:e7:
         ff:1b:51:65:17:a6:44:20:e2:df:e4:f9:9c:f7:80:ac:a8:42:
         a6:90:3e:9a:88:1b:c6:e7:24:65:85:a9:59:b5:c6:c5:6b:e4:
         64:80:76:c6:16:93:e2:72:d0:4b:41:e6:21:f6:27:f4:0b:df:
         08:1f:29:4b:2a:38:a6:86:f4:4c:88:7d:91:56:06:d8:67:55:
         ea:04:a9:91:42:21:e8:6f:d9:0d:bd:34:b8:e0:a8:bd:a1:24:
         4a:37:66:a4:10:6b:e6:c5:4f:10:50:87:91:99:9c:df:21:ec:
         6b:59:06:23:dd:2d:d3:81:0f:dc:5f:a6:a8:e4:64:6d:29:76:
         45:ad:f8:fb:ee:db:31:ce:94:67:81:f1:1a:a2:96:a1:b1:c9:
         82:85:96:80:45:ee:f8:90:db:88:ab:d6:78:f3:f3:e3:c3:57:
         33:cd:81:0c:28:d9:19:5e:75:28:8c:e9:c3:1a:a3:7a:8c:f8:
         ed:f2:b1:dc:51:8f:69:25:b4:be:f2:0b:7f:cc:75:54:37:b9:
         ec:b7:f8:23:f4:65:69:d2
There's still some fiddling we've yet to do with it. For one thing, we wanted to experiment firsthand with the encoding & parsing of extended Unicode that x.509 actually produces in the wild, so we fed these parameters into the certificate signing request (CSR) generation process:
  • Organization Name: çrÿþtøstörmðårkŋêt
    State or Province Name: Höfuðborgarsvæði
    City: Reykjavik
    Common Name: Reykjavik
    Email Address: DAkeypair-onionHTTPS@cryptostorm.is
    Comment: “May we be fortunate in our endeavours and able to say looking back on these times that we did, in fact, succeed in doing it right ~ みんな ~ çrÿþtøstörmðårkŋêt.xyx”
...as you'll see below, we did bend the bidirectional encoding transform pretty well out of shape - which is what we expected. Whether one can use this to inject unintended behaviours into the entire process, I leave it to curious readers to confirm for themselves.

So yes, it's a bit of a tweaked-out certificate since we got up to our usual cryptostorm unicode silliness with it.... but it'll do for now - and a far sight better than the crap handed out by CAs, to be blunt.

What purpose does this certificate serve? In plain language, it can be used to encrypt stuff (with the public key part of it) that only the server holding the corresponding private key can decrypt. That feature is almost always used to share some initial data that, in turn, primes the pump of the rest of the crypto process for the session. So, to do that well and reliably, this cert (i.e. key) has to have genuinely ergodic ("random"), high-entropy foundations. It needs to be a nice long key so it's really hard to brute force break, and it needs to have a good algorithm used to create it.

We've got all that, in spades.
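For the curious, that pump-priming step can be mimicked by hand with openssl. This is a rough sketch only - all filenames are invented, and a throwaway keypair stands in for the server's:

```shell
# Sketch only: filenames invented, keypair stands in for the server's.
# 1. a keypair, like the one behind the certificate:
openssl genrsa -out server_private.pem 2048
openssl rsa -in server_private.pem -pubout -out server_public.pem 2>/dev/null

# 2. the "browser" encrypts a dollop of startup entropy to the public key:
head -c 32 /dev/urandom > entropy.bin
openssl pkeyutl -encrypt -pubin -inkey server_public.pem \
    -in entropy.bin -out entropy.enc

# 3. only the private-key holder can recover that dollop:
openssl pkeyutl -decrypt -inkey server_private.pem \
    -in entropy.enc -out entropy.dec
cmp entropy.bin entropy.dec && echo "roundtrip OK"
```

The recovered entropy then seeds the symmetric machinery for the rest of the session - which is why the key's randomness and length matter so much.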

So... what doesn't it have? It doesn't have a "chain of trust" in the form of a bunch of extra certificates, each signing for the one below it in the chain, confirming that they are... um, they are valid? That's an open issue, sort of. Anyway, it has an "issuer" listed on it - cryptostorm_darknet, Decentralised_Attribution department. That's us. Because we issued it. We made the keypair, on one of our servers. We then signed the public key with the private key, which is how you create a certificate ("and when two keys love each other very much, honey, sometimes they come together and their love makes something beautiful: a certificate!").

Ahem.

We'd just as soon skip the certificate mumbo-jumbo and work with public keys. That's what we do for PGP-encrypted email, after all. Publish the public keys at MIT, or keybase, or onename, or wherever. But web browsers want certificates because... blah. Just because. So feed them certificates. However, they only accept as "legitimate" certificates that are pre-loaded in their "trust store" already. Why, and who decides? Don't get me started. Trust me, you would regret it :-)

So here's where things start getting fun...

Visit the .onion version of our main site over https - https://stormgm7blbk7odd.onion/ - and sure enough your browser will throw a scary warning at you: not trusted! invalid certificate! However, if you pay Digicert a whack of money, they'll issue you a much less cryptographically robust certificate. They "vouch" for it, which means they have a root cert in the browsers, so the browsers will show a green lock. That's about it.

Certificate Revocation Lists, OCSP, and other mechanisms to revoke "bad" certificates? No root certificate has ever been revoked. None. CRLs are now officially abandonware, repurposed over the years as malware depots, spooky spy dead drops, or whatever else - who knows? OCSP fails open: block the OCSP check, and the cert validates anyway. On and on...

Ok, so how can you know that our certificate is "legitimate" and not, umm, like those fake Microsoft certs you still see around, years after they were supposedly "cancelled?" Here's where our first principles of crypto come in, and here's where we start building a DA-based validation system everyone can use.

☯ ☯ ☯


If you check over on this keybase page, you'll see the following blob of text come up:

Code: Select all

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQINBFUOi3UBEAC75LUx6LeBHF9A0AMJOmazPzYZknNTEubkzX9W5QeUJ9vuYbYP
LVMtsQWaY1wCz/Tt7t2BPIlxKcKifIIG/g5vliCd4Brj/wbH/aKhMDtFOm9thde6
MIGHYA52PdhsAGHhywAHibAZ8OSYwo4RTY94VRLvVTCZaBi1sKAZHERUrmdvyj7D
hZ1rDNZWKa+N61j3l6BjBjeDJcQRYc07mn9mUctNTRMePbAwtq285zGpcHXrGBXc
F4DnsggQMcNl7XsyczPUdGKC0snXvGH7pRrLNfy0/29rfdup0A61WduRBWD+VgIE
3c+776ldD6RgPZ/wEV59xbCI1Dq+6FzjndZ4FSdfiZ/sU1/TbPszOmsDEhnEGDP/
MqONnLnOHjKOM3xFv+IfOCewvtyfZ6UEXUYRmOL4Ysc9CZzH7MkftrkXjP9awDcv
+mQSLAZ1paJ8ZgnAW3WGmcHMHgmM639elC4FQWuzVz+Y+rh5MFBP1pQXW3g32Fva
Iui2YpiCtZio8JBauM2siG/Iel0cYr5zDhbBMN9uUW4hs6+C6REpNKPnNduCXRxg
M+kJ7ennDmR0uhZ84I5UKkOpr53vUQxchYcDeGg/98YZNk1M3tkIdEY4sH6G1lyQ
YSaeT8WHT6zBqgWzKrG8bFm3b3nm0hG0ZrGsLWHQZiCg1ADJTzz97II57wARAQAB
tPRzdG9ybWdtN2JsYms3b2RkLm9uaW9uICjigJxNYXkgd2UgYmUgZm9ydHVuYXRl
IGluIG91ciBlbmRlYXZvdXJzIGFuZCBhYmxlIHRvIHNheSBsb29raW5nIGJhY2sg
b24gdGhlc2UgdGltZXMgdGhhdCB3ZSBkaWQsIGluIGZhY3QsIHN1Y2NlZWQgaW4g
ZG9pbmcgaXQgcmlnaHQgfiDjgb/jgpPjgaogfiDDp3LDv8O+dMO4c3TDtnJtw7DD
pXJrxYvDqnQueHl44oCdKSA8REFrZXlwYWlyLW9uaW9uSFRUUFNAY3J5cHRvc3Rv
cm0uaXM+iQI+BBMBAgAoBQJVDot1AhsPBQkG4qWABgsJCAcDAgYVCAIJCgsEFgID
AQIeAQIXgAAKCRAnQ+yhxEwTeTzoEACYrIyVC9A27QKl8HrfxqMGwiwLzOzFbvPN
w2iircuR4y0KDxch8VJ6E2O+y8kYtZkLzXrwDNUbLtHlGtFboNHdY6wyc0j8rMjy
MuqElhs3OtymtzhzqubojvxaoxlGmiw6x/hlAJN3k8I6ymyBYlEDHehBLRugFEOQ
O2xUgzMXm18neIQafw+xWqGrardztALRIjcMP17SG0ep8bTvxOFPfmozsAH6E4On
vJ4jfA++GhOGCDevvhF2OpkQfBmrDH19iL1FwidMrc1UpK5F8Vx47YvpGvff58Qz
4aZqdrkE+1K4ikloB6Jg0mPmY0LBeZlc5etBee7jRWYeJMXRSpMt3IKXp0OLv0Uf
q8kQqkri/LzM6SdwYSTaJDeEbcvrm7pR7UL4Xz/wNdNaaaZgutwKsLH2a3mGQcy9
IiA5C+uzXKvdCXX3/DjtChOENTsG3VobYRSLlOP7qpGw7xaNLHqba9AyOClBA2b/
eJSYxftLtpyZghTGzjHZRsG8BDW3OVGCZdVw6k/95p23JzSIDQiKAs4phiZwFwxp
vBpe9NUQsEij5RNYamION4ZqB4YL0YhHXZo1l2zX9sTtnHBU+1+GxVs2VGlCScz1
Or3JUaz24gvQVULEVLiNsugDyRKKPa8Do0sho+K/gvOqSeseoq60u1l8/he9Sx3h
yoyFRWvJgg==
=Xnnh
-----END PGP PUBLIC KEY BLOCK-----
(yes, that keybase account is named 'superfish' - an oblique homage to the piscine progenitor of much of our work on DA-validated network sessions)

That's the public key of the keypair that sits behind the certificate presented to visitors to https://stormgm7blbk7odd.onion/. Put another way, anyone who encrypts a message with that public key can be very, very confident that only the holder of the corresponding private key will be able to decrypt it - and any signature that verifies against that public key can only have been produced by the holder of that same private key.

We've also posted that public key over at [url=https://pgp.mit.edu/pks/lookup?op=vindex&fingerprint=on&search=0x2743ECA1C44C1379]MIT's old-school PGP keyserver[/url]. It's known as key "0x2743eca1c44c1379" over there, and is mapped to our .onion site's identity as embedded in the key's outside wrapper. Of course, that public key is now posted here, in this thread - which is another place to verify it. We can post it a dozen more places - on a different .onion site, accessible via torstorm, hosted inside an i2p-housed 'eepsite,' and all over whatever cryptocoin blockchain we want to use. Pretty much, it's everywhere we want it to be. For someone to either remove or subtly alter all of those public key copies, without us realising they'd been changed, would be extremely difficult to accomplish.

This is a unification of the signing keypair with the authentication keypair - which traditionally (for reasons we're not sure we can articulate effectively, because we're not sure they make any sense when you look closely) are not only created and stored separately but - despite using the exact same underlying cryptographic primitives (RSA, SHA1, etc.) - are encoded and presented in formats just different enough to make them fail at interoperation. This sectarian divide was created out of thin air, and can be de-created the same way. (Note, we're not the first to recognise this, obviously - others developed nice toolsets for, and pointed out the value of, asymmetric key unification years ago... but, prior to public blockchains, we'd conclude they were a bit ahead of the times, and ahead of the internet foundations required to really make the concept bloom.)

Anyone who has ever, in a flash of obvious-in-hindsight intuition, realised that the keypairs underlying HTTPS/SSL are the same sort of crypto gadgets as the keypairs used in PGP email or SSH logins, and decided to confirm that via experimentation, can speak to the astonishingly frustrating, opaque, bizarre parsing wasteland that exists between the two. Despite a few tutorials written up by patient, brilliant, calm-minded pioneers on how to transform one format of a key into another format of the same key, at some point in that process mere mortals will find themselves cursing the fates for having allowed such a horror to be born.

I'm not actually exaggerating. Try it. You'll end up there - that cursing place - if you do. Fair warning.

But that's just procedural nonsense. We've automated it via a series of open scripts, enabling fluid, bidirectional mappings of keysets and certificates without needing to fight through the dead zone manually. Of course, anyone can confirm the code is doing what it says; we'll have it up in the Decentralised Attribution repository shortly. However, that procedural complexity shouldn't pull us away from the core value of asymmetric keypairs: one key proves the other (private to public), whereas going in the "other direction" (public to private) gets you nothing.
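As a taste of what those mapping scripts do, here's a minimal sketch of one hop across that dead zone - pulling the bare public key back out of an x.509 certificate, and confirming it's byte-identical to the one derived straight from the private key. Filenames are invented, and a throwaway self-signed cert stands in for ours:

```shell
# Sketch only: a throwaway self-signed cert stands in for the real one.
openssl req -x509 -newkey rsa:2048 -nodes -keyout priv.pem \
    -subj "/CN=format-demo" -days 1 -out cert.pem 2>/dev/null

# certificate -> bare public key (SPKI PEM)
openssl x509 -in cert.pem -pubkey -noout > pub_from_cert.pem

# private key -> bare public key (same SPKI PEM)
openssl rsa -in priv.pem -pubout -out pub_from_priv.pem 2>/dev/null

# same keypair, different wrappers: the two extractions are byte-identical
diff pub_from_cert.pem pub_from_priv.pem && echo "same key"
```

Same math underneath; only the wrappers differ.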

In other words, that public key "vouches" for the identity of the holder of the private key... which is the exact same private key that spawned the ssl certificate being presented by our website. That public key is basically a blob of high-entropy ("random") data - mated with the private key on the server, it's the best validator of identity that human beings have been able to develop thus far.


☯ ☯ ☯


So here's where all the words and digressions start clearing away, and the core of the system itself comes forth. The ssl certificate presented by our onion site is simply the public key - the same public key as posted on keybase - paired with a private key we control. When our server presents that public key (in the form of a certificate), and a browser uses it to encrypt a bit of startup entropy for a crypto session with our server, and that dollop of encrypted entropy comes back to us, we know - in formal mathematical terms - that the encrypted dollop reached us without being edited along the way, nor decrypted on the fly.

Concomitantly the web browser knows that only the holder of the corresponding private key can receive and decrypt that dollop of entropy - entropy that kicks off the entire cascade of ephemeral cryptographic wonder that keeps all the rest of our session secure. Given that, what's the reason identity verification has proved so troubling for https, thus far?

Simple: how do we know that public key that shows up in our web browser when we head to https://stormgm7blbk7odd.onion/ is the public part of a keypair that's really controlled by cryptostorm? Couldn't the NSA just sit on the wire (holding aside Tor, for argument's sake) and feed us their own public key? Our visitors would think they're coming to cryptostorm, but in fact it's the NSA - the classical MiTM attack.

But we've really taken a good step towards making that attack unproductive, because now all we have to do is compare the public key that comes from the server with the public key posted over at keybase. They should be - they must be - identical. If someone swaps out the https-site public key on the fly, the keybase one won't match any more, and we know something bad is going on.
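That comparison can be sketched at the command line. This is a local simulation with throwaway keys and invented filenames - in the real case, the "presented" cert would come off the wire (e.g. via openssl s_client, over Tor) and the "published" key from keybase:

```shell
# Local simulation: throwaway keys, invented filenames.
# "presented.pem" plays the cert handed to the browser by the server;
# "published_pub.pem" plays the public key posted at keybase.
openssl req -x509 -newkey rsa:2048 -nodes -keyout site.key \
    -subj "/CN=demo-site" -days 1 -out presented.pem 2>/dev/null
openssl x509 -in presented.pem -pubkey -noout > published_pub.pem

# "swapped.pem" plays an MiTM cert built from a different keypair:
openssl req -x509 -newkey rsa:2048 -nodes -keyout mitm.key \
    -subj "/CN=demo-site" -days 1 -out swapped.pem 2>/dev/null

# compare by modulus - identical means the key really is the published one:
mod_presented=$(openssl x509 -in presented.pem -noout -modulus)
mod_published=$(openssl rsa -pubin -in published_pub.pem -noout -modulus 2>/dev/null)
mod_swapped=$(openssl x509 -in swapped.pem -noout -modulus)

[ "$mod_presented" = "$mod_published" ] && echo "match: session looks clean"
[ "$mod_swapped" = "$mod_published" ] || echo "mismatch: walk away"
```

Note that the MiTM cert can copy the site's name perfectly; what it cannot copy is the modulus, because that would require the private key.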

Seems simple, doesn't it? Too simple to really work, perhaps. Well, there's more to the full model - parallel redundancy, write-outs to other public authenticators, and so on. But for a start, keybase does pretty well. And, no, it's not that we "trust" keybase not to swap out our public key for one of the NSA's if the pressure gets too heavy... they write out records of key-posting transactions to the bitcoin blockchain (thus leveraging Merkle-root validation), which makes it very hard for someone to go back and "un-write" them secretively. Once that key is up there, it's up there - verifiably so.

Of course, there are second-order attacks possible. We've got mitigations. The crux of the DA solution - as the label decentralised emphasises - is resilience through distributed redundancy. Rather than having one monolithic, centralised, hierarchical mechanism of enforcement and control - the CA model, in short - DA spreads attestation duties out, across a diverse spectrum of channels and technologies. With that diversity, an attacker would find herself facing a complex and resource-intensive task if she wants to intercept that session verification successfully: instead of one CA to subvert, she's got a welter of weird-tech parallel, decentralised mechanisms going off all at once. Not so simple, that.

No Certification Authorities. No trust chains. No complex, bug-prone layers of parsing syntax. A private key. A public key. The math that binds them. That's the core, and as we move forward we'll winnow down to more and more of a minimal-complexity deployment of DA auth. To start, we're piggybacking on the existing browser-certificate foundation... even though it's buggy and sad. And even though we really have no use for certs - we're doing keypairs, and that's it.

We do that because we can leverage what's there, iteratively improve it, build momentum, and deliver genuine security benefits right away rather than in some utopian future with a New System that never arrives.

☯ ☯ ☯

Brass tacks, how does it work?

Crawl, walk, run.

To begin, verifying the certificate in the browser is simple enough to do manually. Click on it and, stripping out the fluff & the fractal complexity of encoding formats, look at the mathematical guts of the embedded public key. Look over at keybase, and confirm the guts of the public key posted over there match (we're leaving lots of weird format-spanning curlicues out of this explanation, because it's easy enough to script them into submission via web-based widgets, right off the bat). If they don't, walk away from the session... or route it another path to see if you can get a clear line of sight to connect clean.

Or, to check our .onion server's DA-cert authenticity directly, take the cert as it shows up in your browser ("PEM encoded"), and use any standard unpacking tool to expand it into readable form. You'll see a chunk of text, up near the top, labelled as the "modulus" of the key (in RSA's case, the big number produced by multiplying together the two secret primes at the heart of the key). In the case of our DA-cert, the full modulus is...

Code: Select all

                 00:bb:e4:b5:31:e8:b7:81:1c:5f:40:d0:03:09:3a:
                    66:b3:3f:36:19:92:73:53:12:e6:e4:cd:7f:56:e5:
                    07:94:27:db:ee:61:b6:0f:2d:53:2d:b1:05:9a:63:
                    5c:02:cf:f4:ed:ee:dd:81:3c:89:71:29:c2:a2:7c:
                    82:06:fe:0e:6f:96:20:9d:e0:1a:e3:ff:06:c7:fd:
                    a2:a1:30:3b:45:3a:6f:6d:85:d7:ba:30:81:87:60:
                    0e:76:3d:d8:6c:00:61:e1:cb:00:07:89:b0:19:f0:
                    e4:98:c2:8e:11:4d:8f:78:55:12:ef:55:30:99:68:
                    18:b5:b0:a0:19:1c:44:54:ae:67:6f:ca:3e:c3:85:
                    9d:6b:0c:d6:56:29:af:8d:eb:58:f7:97:a0:63:06:
                    37:83:25:c4:11:61:cd:3b:9a:7f:66:51:cb:4d:4d:
                    13:1e:3d:b0:30:b6:ad:bc:e7:31:a9:70:75:eb:18:
                    15:dc:17:80:e7:b2:08:10:31:c3:65:ed:7b:32:73:
                    33:d4:74:62:82:d2:c9:d7:bc:61:fb:a5:1a:cb:35:
                    fc:b4:ff:6f:6b:7d:db:a9:d0:0e:b5:59:db:91:05:
                    60:fe:56:02:04:dd:cf:bb:ef:a9:5d:0f:a4:60:3d:
                    9f:f0:11:5e:7d:c5:b0:88:d4:3a:be:e8:5c:e3:9d:
                    d6:78:15:27:5f:89:9f:ec:53:5f:d3:6c:fb:33:3a:
                    6b:03:12:19:c4:18:33:ff:32:a3:8d:9c:b9:ce:1e:
                    32:8e:33:7c:45:bf:e2:1f:38:27:b0:be:dc:9f:67:
                    a5:04:5d:46:11:98:e2:f8:62:c7:3d:09:9c:c7:ec:
                    c9:1f:b6:b9:17:8c:ff:5a:c0:37:2f:fa:64:12:2c:
                    06:75:a5:a2:7c:66:09:c0:5b:75:86:99:c1:cc:1e:
                    09:8c:eb:7f:5e:94:2e:05:41:6b:b3:57:3f:98:fa:
                    b8:79:30:50:4f:d6:94:17:5b:78:37:d8:5b:da:22:
                    e8:b6:62:98:82:b5:98:a8:f0:90:5a:b8:cd:ac:88:
                    6f:c8:7a:5d:1c:62:be:73:0e:16:c1:30:df:6e:51:
                    6e:21:b3:af:82:e9:11:29:34:a3:e7:35:db:82:5d:
                    1c:60:33:e9:09:ed:e9:e7:0e:64:74:ba:16:7c:e0:
                    8e:54:2a:43:a9:af:9d:ef:51:0c:5c:85:87:03:78:
                    68:3f:f7:c6:19:36:4d:4c:de:d9:08:74:46:38:b0:
                    7e:86:d6:5c:90:61:26:9e:4f:c5:87:4f:ac:c1:aa:
                    05:b3:2a:b1:bc:6c:59:b7:6f:79:e6:d2:11:b4:66:
                    b1:ac:2d:61:d0:66:20:a0:d4:00:c9:4f:3c:fd:ec:
                    82:39:ef

Now, take a look at that public key posted over at keybase, and MIT, and everywhere else. It's also encoded, so we want to run it through a process to extract its modulus value as well (here we work from our local copy of the keypair; a reader starting from the published PGP key would first convert it into PEM form). The quick way to do so at the command line reads like this...

Code: Select all

# openssl rsa -in cstorm_onion_private.key -noout -modulus > modulus.txt
The content of the file that command produces is...

Code: Select all

Modulus=BBE4B531E8B7811C5F40D003093A66B33F361992735312E6E4CD7F56E5079427DBEE61B60F2D532DB1059A635C02CFF4EDEEDD813C897129C2A27C8206FE0E6F96209DE01AE3FF06C7FDA2A1303B453A6F6D85D7BA308187600E763DD86C0061E1CB000789B019F0E498C28E114D8F785512EF5530996818B5B0A0191C4454AE676FCA3EC3859D6B0CD65629AF8DEB58F797A06306378325C41161CD3B9A7F6651CB4D4D131E3DB030B6ADBCE731A97075EB1815DC1780E7B2081031C365ED7B327333D4746282D2C9D7BC61FBA51ACB35FCB4FF6F6B7DDBA9D00EB559DB910560FE560204DDCFBBEFA95D0FA4603D9FF0115E7DC5B088D43ABEE85CE39DD67815275F899FEC535FD36CFB333A6B031219C41833FF32A38D9CB9CE1E328E337C45BFE21F3827B0BEDC9F67A5045D461198E2F862C73D099CC7ECC91FB6B9178CFF5AC0372FFA64122C0675A5A27C6609C05B758699C1CC1E098CEB7F5E942E05416BB3573F98FAB87930504FD694175B7837D85BDA22E8B6629882B598A8F0905AB8CDAC886FC87A5D1C62BE730E16C130DF6E516E21B3AF82E9112934A3E735DB825D1C6033E909EDE9E70E6474BA167CE08E542A43A9AF9DEF510C5C85870378683FF7C619364D4CDED908744638B07E86D65C9061269E4FC5874FACC1AA05B32AB1BC6C59B76F79E6D211B466B1AC2D61D06620A0D400C94F3CFDEC8239EF

There they are. The same public key sits behind both... and anyone can verify that, with any open tool, and know that the website is legitimately the one being published by cryptostorm... the same cryptostorm who has control over that public key that's been posted hither and yon on the internet. If those two moduli don't match, something's wrong - and depending on circumstances, that might mean anything from not visiting a website to trying a few different avenues of transit to get there without being ambushed by digital MiTM banditry along the way.
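One wrinkle worth noting: the browser's certificate viewer shows the modulus as colon-separated hex with a leading 00 byte, while the openssl command emits flat uppercase hex. A tiny hypothetical normaliser bridges the two so they can be string-compared directly:

```shell
# Hypothetical helper: strip colons/whitespace, uppercase, and drop the
# leading 00 byte so the cert-viewer form matches openssl's flat hex.
normalize() { tr -d ': \n' | tr 'a-f' 'A-F' | sed 's/^00//'; }

# the first chunk of the modulus as a browser displays it...
echo "00:bb:e4:b5:31:e8:b7:81" | normalize
# ...is now directly comparable against "BBE4B531E8B781..." from openssl
```

That's the sort of curlicue a verification widget would paper over automatically.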

Of course, some automation will go a long way towards making things easy and low-friction; doing this manually for every website and every visit would get really distracting, really quickly. How about an opensource browser extension that does the check for you, so you can look at the two thumbprints and OK them... or let it OK them automatically if they match? Those exist already, for different projects - simple to do, not a major security issue, and a big step forward.

Better yet, enable client-side logic to do the test transparently for any local application needing a cert ok (openssl has hooks to enable such things, with a bit of dusting-off of disuse)... similar in concept to the way DNScrypt carries resolver questions up to the nameservers securely via its own special channel. Make it a simple API, a framework callable by anything that needs to check and be sure a fingerprint comes up clean.
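One common shape for such a check - and we'd stress this is an illustrative sketch, not our deployed code - is to hash the DER-encoded public key (the SPKI blob) inside a certificate and compare it against a pinned fingerprint published out-of-band. Filenames and the throwaway cert are invented:

```shell
# Illustrative sketch, not deployed code: pin-check by SPKI hash.
# A throwaway self-signed cert stands in for the one a server presents.
openssl req -x509 -newkey rsa:2048 -nodes -keyout k.pem \
    -subj "/CN=pin-demo" -days 1 -out c.pem 2>/dev/null

# hash of the DER-encoded public key inside a certificate:
spki_hash() {
    openssl x509 -in "$1" -pubkey -noout \
      | openssl pkey -pubin -outform DER \
      | openssl dgst -sha256
}

pinned=$(spki_hash c.pem)   # in real use: fetched from keybase & friends
live=$(spki_hash c.pem)     # in real use: computed from the live session

[ "$live" = "$pinned" ] && echo "fingerprint clean" || echo "mismatch: abort"
```

Wrap that comparison in an API callable by any local application, and the manual squinting disappears.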

Note that there's no "trust cryptostorm" in this, generally speaking - the validation happens between the member, keybase, and the blockchain. We route packets, but we don't control that process. Decentralised.

In fact, there's no trust in this model (in any meaningful sense of the word). We trust that the math works, and we trust that we can create enough redundant paths to posted public keys that we won't get tricked by a few forgeries. But we don't need to trust Certificate Authorities, or the government, or keybase, or cryptostorm, or anyone else (apart from compiler designers and hardware chip-fabs and other important issues we're not going to bog down discussing here). There's no risk of someone being bribed into subverting this system, as there's nobody with the power to unilaterally make it misfire and spit out corrupted authentications.

That's kind of a big deal, because it means we can start building trust back up in HTTPS and SSL and everything that depends on them: we have a firm foundation from which to do that rebuilding, having stripped out all the booby-trapped snarls of ostentatious complexity that were clogging the view of the core machinery of public key crypto.

And, obviously, beyond this little proof of concept the immediate prize is doing exactly this sort of authentication for cryptostorm network sessions. We don't use CAs in our PKI model and never have. Indeed, we auth asymmetrically - clients verify server identity with asymmetric RSA crypto but the server does no such thing for clients. We've already carved out a bit of useless complexity, years ago.

If we want to up the crypto engine to start hardening our security models against quantum attacks, we can swap in new-age asymmetric algorithms, from (non-NIST) ECC through lattice crypto and everything else. It's modular: the structure embraces whatever sub-tools make the most sense and have the most going for them. We're not tied into specific bits of code that we'd just as soon get away from.

But we can do more, and we can bind cstorm network sessions more cleanly to the guts of key-based session initiation - toolsets like DJB's NaCl (which, famously, fits into 100 tweets) do asymmetric crypto tight, clean, and fast... enabling us to strip out the entire goofy apparatus of x.509 itself. Replicate that improvement across the network, and we drop cruft and open the way for substantially more focus on the parts that matter - and matter a great deal, indeed.

Key-validation lookups routed through Tor, touching namecoind instances running on hidden-service .onion servers. Validation queries pushed out via clever ICMP-encapsulated proto-tunnels... invisibly blending into the pingstream. Etc. and so on. Each one has weak spots, exposed attack surfaces, opportunities for DoS-based blocks. But, instantiated in parallel, they prove to be a willowy, flexible, evasive bundle of slippery threads to grab all at once.

That's the ideal.

And yes, website admins will need to take the trouble of posting a public key in places it can be touched by DA-based agents, to verify its thumbprint. This is not an unreasonable hope, given that - for onion sites in particular - the alternative is hegemonic despotism enforced by the same CAs that broke security on the conventional internet. Also: DA costs nothing. So there's that.

We've done a DA-based PoC with our onion site as a demonstration: rough-cut, manually deployed, still a bit loose, and evolving toward something usable in a daily production context. It's a pressing need - proper https for onion sites, without becoming infected with CA-ware misery along the way - and we're happy to do it ourselves, to be our own crash-test dummies. Our .onion site isn't high-security in any case: it's brochureware. Gorgeous, unique, and utterly fascinating... but nobody's going to be tortured to death if it's found out they've visited our glitched-out little gem.

We'll continue to publish functional components of the DA auth system in our github repository, and we'll continue to develop the details of its evolution here in-forum, with the community. This is most certainly an emergent structure: we've not planned it down to the tiny details, as we know it will benefit from some space to find its feet, and from the contributions of the community along the way.

Good tech evolves and moves with the flow of the times. Bad tech digs in, and demands the world bend to its rigid needs. We like good tech, fwiw.

☯ ☯ ☯

This isn't "sexy" stuff, is it? It's all a bit.. conceptual, hardening against attacks that are rarely seen overtly but instead are inferred from data points scattered worldwide. We'd sell more tokens if we just followed the latest trend - lately it appears to be "warrant canaries," whatever that means in the context of tech-challenged VPN services. Or maybe this quarter it's... honestly, I don't have any idea. We just don't stay very current on the shenanigans of the VPN industry, as a team. Too much other constructive work to do, frankly.

But this stuff actually matters. SSL kneecapping, MiTM hijacks, DNS poisoning, and general ssl-based fuckery is a constant, droning background noise on the internet today. It's not even possible to keep track of it all. The funny thing about it is that it leaves little evidence it's happening, for the most part: network sessions are a bit flaky, packet retransmission rates tick up. But all those thoughts and dreams and desires and fears are being archived off to Bluffdale, or wherever else spy shops get their dirty little fingers in everyone else's little pies. And, a day or a week or a decade later, politics change and those data re-emerge from the crypt, ready to wreak havoc on the living who assumed they'd long since returned to the source...

It puts us in the role of taking forceful, focussed steps to protect members from a threat they will rarely see manifest physically in plain sight - and it means that, the more we succeed in doing so, the less likely the threat will ever become real! That's a bit of a jape the universe seems to have with us, perhaps - do things well, and what we do looks easy and perhaps needlessly paranoid. Fail, and suddenly everyone wants "protection" and tokens fly out the door.

Bah, to hell with all that :-)

One thing we've learned in the last few years is that our members have good intuition. Often they tell us as a team - and me, in particular - that my seemingly-inchoate rambling about this or that esoteric attack model is, to them, impenetrable gibberish. Often it's boring, unless one is deeply engaged with the subject. And we - me, in particular - don't do so well in converting raw tech insight into communications useful for real people who have real lives outside cryptographic minutiae. So why are we honoured with the member support that's always been the core of the project?

Folks know we're doing this for real, not just playing along to the laugh track. We're not pretending - we've poured ourselves into making this service the best it can be... and then making it better from there. We make mistakes, we get distracted, we have off days and some of our projects stall or mutate eternally... all true. That's also all part of putting oneself deeply into a process - of being present in the singularity of the now.

Whatever else good or bad can (and will, and has) be said about us, one thing is clear: we mean it, we really do.

And with that, I'll step back from my terrible habit of pontification to the point of absurdity. This is the model - this is DA auth. It's a good thing. We've already begun building it. We'll keep at that, shifting back to a more intensive deploy schedule and out of something of an introspective "what next" phase, this winter. We, as a team, are ready for that shift - I think it's in the air, and we're keen to see it play out.

We're also rock-solid in our conclusion that this DA-based transition is needed, and needed immediately. It's not window dressing; it's the core of our service and the benefits we offer our members. We do this right, or we've no claim to be in the business at all.


May we be fortunate in our endeavours and able to say looking back on these times that we did, in fact, succeed in doing it right.

With respect,

Cheers,


  • ~ pj (aka ðørkßöt), writing on behalf of cryptostorm's team, core & extended

by Pattern_Juggled
Sun Mar 22, 2015 2:13 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

"...certificates are presumed to be generated by the attacker(s)..."

Fraudulent issued certificates

The following list of Common Names in certificates are presumed to be generated by the attacker(s):
...
*.windowsupdate.com (3)
...
by Pattern_Juggled
Sun Mar 22, 2015 12:20 am
Forum: general chat, suggestions, industry news
Topic: Countermail
Replies: 2
Views: 16302

We have not rewritten SSL, that would be pretty stupid..."

ntldr wrote:
What we describe on that link I gave you is a simple protocol using asynchronous key exchange with RSA (PKCS1 padding). We have not rewritten SSL, that would be pretty stupid since is SSL had so many problems throughout its history. We are using the BouncyCastle library for the main crypto functions: http://bouncycastle.org/

Best Regards,
Countermail.com
I'm not sure I follow this explanation too well, so I was hoping for additional information if possible.

Countermail says they have developed from scratch, sui generis, a new secure network protocol (if I understand this correctly). That's a modification to, or fork of (?), not OpenSSL but rather of "SSL."

However, SSL as a protocol doesn't even exist any more; it was supplanted by TLS years ago, although TLS is obviously version-related to SSL and in many senses is "the same thing" at a general level. But we're not supposed to talk about "SSL" any more since it's deprecated, although we all do... and despite those three letters being embedded in names like OpenSSL, PolarSSL, etc. Nobody wants to change OpenSSL to OpenTLS, do they? Right.

But now they say they "have not rewritten SSL" - which is good, since it's dead and replaced by TLS - but are "using BouncyCastle library for the main crypto functions." They provide a link to BC's site, in case folks haven't heard of it before. Thanks, that's a big help - this crypto stuff is entirely unexplored terrain for me ;-)

Right, so now they've either forked BouncyCastle, or are using primitives (that's what most folks who work with such things usually call that class of algorithmic tools, rather than "crypto functions" which in mathematics would have a different connotation and really isn't ideal for this usage) from BouncyCastle in their new, not-SSL secure network protocol that is based itself on... no idea. I'm lost.

This is relevant to us, as in the near future we're likely to do a bit of careful pruning of the secure network framework within which cryptostorm network sessions take place. It's not a fork, nor even a tweak of the source code, but rather a shift in libraries used, and an explicit down-tuning of primitives we don't use so that even the potential for version downgrade attacks is excised from the codebase in our deployed binaries.

So we're really hoping to find best-practices examples of this kind of work... and by everything that Countermail says, they've done exactly that: they've written... something, some new protocol. Using BouncyCastle (?) as a primitives library. And whatever it is they've written, it can apparently talk comfortably with client-side cryptographic handlers, which one might assume would not know how to speak an entirely new secure network protocol without some protocol definition with which to work. Which maybe this new protocol somehow provides during session instantiation, via a novel form of pushed parameters, or...? I have no idea.

I'm lost, so hopefully they can help!

Cheers,

~ pj
by Pattern_Juggled
Sat Mar 21, 2015 6:02 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: HideMyAss & L2TP & MS-CHAP1/2 (sub-post)
Replies: 0
Views: 26445

HideMyAss & L2TP & MS-CHAP1/2 (sub-post)

{direct link cryptostorm.ch/HMAl2p}
{this segment of a longer thread regarding our DA-auth framework is being released here, prior to the full thread's publication, as there's ongoing pre-publication editing taking place with the full thread that's run longer than expected & we felt this information is best shared earlier ~ admin}


Yesterday, I made use of the opportunity to lay out some of the ground-level work we as a team have been doing since last fall, via a post at our crypto.cricket blog. As I was "volunteered" for this duty by the rest of the team, I wrote long... as I tend to do. For some, that sort of writing is painfully boring to read. I concur, for the most part. However, our decision as a team was to allow my worst long-form impulses to assert themselves, in order to provide some framework for the real heart of the story.

The heart of the story is delivering network security - real, reliable, consistent, comprehensive network security - to our members. So, today, my job is to share that part of our work in as concise a form as I can. To make that possible, here's several citations of previous items published by our team, on topics related to this; those who want to know why we're doing what we're doing are encouraged to start with these. Those who prefer to 'cut to the chase' and see the plan as it turns tangible, can simply continue forth from here.
Yesterday, in Part 1 of this piece, I laid out a series of flexibly-connected, global-scale, complex, systems-level threats that, when seen in full perspective, constitute an ontological threat to the security of cryptostorm's network and its members. Actually, I left quite a few pieces out - hardware-based attacks, client-side rootkits, side-channel weakness ubiquity, and so on - as the laundry list can start to seem overwhelming in full roar. But I hope the point has been made: big issues are relevant and require attention.

Today, we're tactical. And a tactical example helps set the stage for what can go really wrong in the "think local" side of things. This little vignette begins with an exchange that took place recently on twitter:
HMAoops.png
We at cryptostorm - and I personally - aren't interested in overstating this issue, but there's no nicer way to say this than: this is what happens when bad crypto meets low external review. I promised to stay short with this reply, and despite the temptation to veer astray, I'll stick it out. Besides, Moxie's essay on this topic is, in a word, brilliant - there's nothing I can say that'd improve on his explanation, and it's best I just point those curious for the deeper details over there.

As HMA says in their reply to our criticism:
"The PSK is for authentication, not encryption or decryption. It's used as an alternative to certificates."
It sounds reasonable to say that the "pre-shared-key" is only for authentication, not for actual encryption... and in that case, if we don't care about that weirdly abstract authentication nonsense, we can just cut to the chase and do the encrypting that counts. And actually, it is possible to do that... but only if you basically do authentication but call it something else - same difference. Or, of course, you can use pre-shared keys... which must remain private and secure to be of any use.

In this case, HideMyAss is doing neither. They've published their PSK on the internet, so anyone can find it. It's so low-entropy in any case that a ten-year-old could guess it in a few minutes. That means, for the specific underlying protocol on which they have based this "encryption" service, MS-CHAP is what's available (2... as well as 1, the latter being beyond broken and into satire):
HMA-MSchap.png
Long story short, these network sessions are functionally plaintext for anyone who goes to the trouble of gathering them up - or storing them for later review. And while it seems like authentication can be carved off from crypto, in reality that's not how things work.

Incidentally, if there's any question regarding the decrypt we'd gladly receive captured traffic on a session or two, and we'll turn around plaintext from them. This is not a challenge, given that Moxie did automate the entire process (much of that automation exists to figure out the PSK or equivalent passphrase - which isn't needed here, since it's published). This really is as simple a case of useless crypto as can be imagined. Or as Moxie put it a few years ago...
"In many cases, larger enterprises have opted to use IPSEC-PSK over PPTP. While PPTP is now clearly broken, IPSEC-PSK is arguably worse than PPTP ever was for a dictionary-based attack vector. PPTP at least requires an attacker to obtain an active network capture in order to employ an offline dictionary attack, while IPSEC-PSK VPNs in aggressive mode will actually hand out hashes to any connecting attacker."
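Moxie's point can be sketched in a few lines. The hash construction below is a simplified stand-in (not the actual IKE transform), and "hidemyass" is a hypothetical weak PSK for illustration, not the actual published value:

```python
# Sketch of the offline dictionary attack described above: given a
# captured PSK-keyed authentication hash, just hash candidate words
# until one matches.  Simplified stand-in construction, illustration only.
import hashlib

def auth_hash(psk: str, nonce: bytes) -> bytes:
    # stand-in for the PSK-derived hash an aggressive-mode responder hands out
    return hashlib.sha1(psk.encode() + nonce).digest()

nonce = bytes.fromhex("01020304")
captured = auth_hash("hidemyass", nonce)   # hypothetical weak PSK

wordlist = ["password", "letmein", "hidemyass", "hunter2"]
cracked = next((w for w in wordlist if auth_hash(w, nonce) == captured), None)
assert cracked == "hidemyass"
```

And when the PSK is published outright, even this trivial loop is unnecessary - the attacker just reads it.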
tl;dr - authentication matters

☯ ☯ ☯

Which is something of a problem, because authentication for effectively all network cryptography in the civilian world is structurally crippled. That's the bad news.

The good news is that the underlying mathematics - the cryptographic primitives behind real auth models that work across untrusted network pathways - works just fine. I'd say they're genuine marvels, these asymmetric key exchange algorithms - close on to magic. The failure (intentional or not) lies in the way it's put into practice, out in the world.
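For the flavour of it, here's a toy Diffie-Hellman run - parameters illustrative only. Note what's absent: nothing in the exchange proves who is on the other end, which is precisely the gap authentication has to fill:

```python
# Toy Diffie-Hellman exchange across an untrusted channel; parameters are
# illustrative (real deployments use vetted groups of 2048+ bits).
import secrets

p = 4294967291            # small prime (2**32 - 5), far too small for real use
g = 5                     # generator

a = secrets.randbelow(p - 2) + 1      # one side's private value
b = secrets.randbelow(p - 2) + 1      # other side's private value
A = pow(g, a, p)                      # sent in the clear
B = pow(g, b, p)                      # sent in the clear

# Both ends derive the same shared secret from public values + own private:
assert pow(B, a, p) == pow(A, b, p)
```

The marvel is that eavesdroppers see A and B and still can't compute the secret; the failure mode is that, without authentication, you can't tell whether "B" came from your peer or from a man in the middle running two such exchanges.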

Out in the world, authentication hinges on chains of trust - only, these chains aren't trust like most of us would use the term. Instead, they're rigidly hierarchical - anyone at "root" trust level can vouch for anyone else in the entire system, including themselves. This is a parody of rigid hierarchy, and unsurprisingly it shows all the failures such rigid models are known to produce.

In practical terms, when you decide you want to create a secure communications channel with a particular news website, for example, the way you know the website you're visiting is really the website you want to visit is that you are implicitly trusting the entire, broken, dysfunctional edifice of Certification Authority-based session verification to make sure you're pointed right. Further, the same question of knowing who you're connecting to is at the root of protecting against Man-in-The-Middle attacks - so that's a fail, as well.
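To make the scale of that implicit trust concrete, a few lines of stdlib Python show how many roots a default client context swallows whole:

```python
# A default TLS client context loads the entire system trust store; any
# single root in that list can vouch for any hostname on the internet.
import ssl

ctx = ssl.create_default_context()   # pulls in the whole system trust store
roots = ctx.get_ca_certs()
print(len(roots))                    # typically dozens to hundreds of roots
```

Every one of those entries is, in effect, a separate single point of failure for every "secure" session your machine negotiates.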

There's a whole giant edifice that's grown up around this admittedly broken way of making sure our network traffic is secure. And there's enormous effort expended by smart, talented, motivated folks to fix the CA system so we can make the internet secure again. It's all a hopeless waste, and worse it's all completely unnecessary.

Cryptostorm has found a path out of this mess that doesn't require "fixing" the un-fixable. Nor does it envision throwing that entire system out and starting, tabula rasa, with some idealised new system of perfection, whole-cloth. Instead, we've gone through the convoluted inner workings of the existing CA model, highlighted the bits that actually do a decent job of their tasks, pushed aside the unnecessary complexity and baroque filigree of silly absurdity, stayed firmly based on proven cryptographic primitives, and sought out iterative steps to deploy instead of big-bang, all-at-once pipe dreams.

We call it Decentralised Attestation. And it works.

We'll show you, right now.

☯ ☯ ☯

{continued in full post, upon completion of edit process this weekend ~admin}
by Pattern_Juggled
Tue Mar 17, 2015 9:01 pm
Forum: general chat, suggestions, industry news
Topic: Countermail
Replies: 2
Views: 16302

"Bascially a simplified SSL-protcol" <-- sounds great, tbh... not easy, but great!

ntldr wrote:So I asked the CS team's opinion about Countermail and they did reply to me, so I posted that reply to Countermail - and they didn't really explain anything, they just brushed me off by saying the following. Since they seem to refuse to give any more detailed answers, can any of the members here explain?
Bascially a simplified SSL-protcol, without the SSL-pitfalls like algorithm-downgrading, CA-trust and so on.
We look forward to reviewing the published specification and codebase underlying this "simplified SSL-protcol" [sic], as it is an area in which we also have longstanding interest. It is rare for a small company to decide to rewrite something as ungainly, complex, and frankly brittle as OpenSSL (assuming they've forked their "simplified" version of SSL from OpenSSL and not from some less common offshoot or sibling - say, for example, NaCl, which would be at once interesting and sort of inexplicable... or BoringSSL, which seems interesting but is a bit young to be fork'd, one might imagine).

In any case, this does sound like fascinating work & we're eager to get a look at the specific approach to such deep crypto questions they've chosen to implement.

Cheers,

~ pj
by Pattern_Juggled
Sat Mar 14, 2015 1:04 pm
Forum: member support & tech assistance
Topic: TLS Error
Replies: 1
Views: 5977

cryptostorm.ch/mac

Hey there, I think we just provided more or less the exact same reply in email, but you'll want to take a quick read through the Mac howto, here in the forum, if you've not done so already. This is not really a scary cryptographic error - it's just some missing step in the login process that's preventing all the crypto interactions from completing.

Almost always, those are easy fixes. Check that howto, and if that doesn't do the trick, let us know. We'll get things working, one way or another.

Cheers,

pj
by Pattern_Juggled
Sat Mar 14, 2015 12:54 pm
Forum: member support & tech assistance
Topic: HOWTO: Connect to CryptoStorm on TAILS OS??
Replies: 10
Views: 22197

version reporting in openssl / Linux

I don't even need to read the details of the above post to know what's happened, as it's one of those universally frustrating things that we have all been through - fortunately, it's much easier to get beyond than it might seem.

This is a divergence in the mechanism by which openssl reports its version status (which is not technically accurate, but if you know it's not accurate then you know enough to know I'm not getting into that because it's mostly just distracting to do so, most of the time) as compared to what's being called or compiled into production packages that rely on openssl for crypto functionality.

I basically just picked a random stackoverflow thread on the topic. Start there, and within a couple clicks, you'll have the exact info on how to confirm versioning is correct. We're a little blase about this as we've seen it on so many machines, so many times. It's just one of those "gotchas" that eventually one learns to work around.
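As one concrete cross-check: Python links against its own copy of the library, which need not match what the `openssl version` binary on the same box reports - which is exactly the divergence described above.

```python
# The OpenSSL version this Python interpreter was actually built against.
# Compare it with what `openssl version` prints at the shell; on many
# systems the two differ, because binary and library are packaged apart.
import ssl

print(ssl.OPENSSL_VERSION)        # e.g. "OpenSSL 1.0.1f 6 Jan 2014"
print(ssl.OPENSSL_VERSION_INFO)   # tuple form, handy for programmatic checks
```

Same principle applies to openvpn and anything else that compiles against the library: check what the *binary* linked, not just what the command-line tool reports.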

Anyhow it'll take you longer to read this post than to just loop back & get the proper syntax for version validation.

Cheers,

pj

edited to add: 1.0.2 is almost ready to go into full release, afaik, and has been pretty stable for us in the places we've been using it in late-beta form (webservers here and there, because it's a lot less flaky about proper ECC support with proper curve init point-pairs and so on)... but those dependency hiccups might trace back to those late-beta blues, especially on Tails, which tries to avoid unnecessary package/dependency bloat to improve security (which is excellent security practice, in fact). Mostly if you just keep iterating on the install, it'll eventually fill up its pockets with all the dep's it needs :-)
by Pattern_Juggled
Sat Mar 14, 2015 4:57 am
Forum: member support & tech assistance
Topic: "WARNING: No server certificate verification method has been enabled." in logs
Replies: 6
Views: 14565

1.0 - 1.2 & ECC & brainpool & c25519

Guest wrote:I see TLS 1.0 in that pic you posted. is that right? I kinda assumed CS was TLS 1.2 and non-backwards compatable.
Isn't TLS 1.0 vulnerable to beast and poodle?
Nah, there's nothing intrinsically terrible about 1.0. Most all the core patches for the BEAST-class stuff have been backported to 1.0 concurrently with the upgrades into 1.2, so in practical terms that's not a good reason to push for 1.2.

To me, the parts of 1.2 that matter are the ECC inclusion... which we're not using yet, because NIST. If you want that full backstory, this is the thread for immersion.

Cheers,

pj
by Pattern_Juggled
Sat Mar 14, 2015 4:52 am
Forum: member support & tech assistance
Topic: "WARNING: No server certificate verification method has been enabled." in logs
Replies: 6
Views: 14565

TLS_DHE_RSA_WITH_AES_256_CBC_SHA

I'd finished most of the research to reply to this a few days back, then managed to get pulled off the project, and now I have to gather up the data for posting. I should have that done properly, in short order.

Meanwhile, I believe the answer is that there's two closely related OpenSSL cipher suites in play here. Their full expression in the relevant syntax is as follows:

Code: Select all

{"hex":"0x0033","name":"TLS_DHE_RSA_WITH_AES_128_CBC_SHA","value":51}
{"hex":"0x0039","name":"TLS_DHE_RSA_WITH_AES_256_CBC_SHA","value":57}
I refer to them as "33" and "39," respectively, although of course that's all sorts of sloppy.
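For anyone mapping these to openssl's own naming (a frequent point of confusion, since IANA names and OpenSSL names differ), a quick reference table - the shorthand is just the low byte of the registry value, read in hex:

```python
# The two suites in play: IANA registry value, IANA name, and the name
# OpenSSL itself uses for the same suite.
suites = [
    ("0x0033", "TLS_DHE_RSA_WITH_AES_128_CBC_SHA", "DHE-RSA-AES128-SHA"),
    ("0x0039", "TLS_DHE_RSA_WITH_AES_256_CBC_SHA", "DHE-RSA-AES256-SHA"),
]
for hexval, iana_name, openssl_name in suites:
    print(f"{hexval}  {iana_name}  (openssl: {openssl_name})")
```

The OpenSSL-style names are the ones you'd hand to a cipher-string directive if pinning a config to exactly these suites.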

My preference in our initial spec was to tip towards AES256, out of an abundance of caution in such matters (given the lack of serious performance concerns in deploying symmetric ciphers nowadays) and I set the parameters for that suite, which does require full 1.2 support. However, I can't say that the same suite with AES128 substituted in makes me in the least uncomfortable, and thus I was comfortable with that suite down-cycling to 128 if needed - for platforms not able to carry full TLS1.2. Of course we could force the 1.2 issue and require the 256, but it's just not something I feel makes anyone safer... and it'll push quite a few mobile-platform folks off the network.

(we've required proper 1.2 support for torstorm and I've little hesitation to do so if circumstances require... but at the same time I've no interest in being tendentious about such matters for no credible reason)

Now, in practice what I've seen from our pcaps and test connects over the last 18 months is an essentially universal synchronisation on "33" for cstorm sessions, for reasons only openssl (and perhaps Filippo :-) ) can really understand. Were that an issue, in cryptographic terms, I'd re-parameterise and push new confs - but, again, I just can't say it's bothering me in fundamental terms.

I'm also keen to avoid wasting time niggling with AES trivialities, when the next iteration we've set for the core cipher suites on the network is a proper integration of c25519... a longstanding goal of the team, for obvious reasons. The relevant libraries are quite close, I think, and we're about ready to do some alpha testing. I'll also look to wrap a ChaCha inclusion in that upgrade, as it seems to have reached a critical mass in deployment terms, and there's certainly nothing bad I can say about its performance.

Anyhow that's the short form from memory. I've got some nice reference papers set aside to add in as links, and so on - even a diagram or two - but I figured I'd get something up now, rather than let this go further stale in the meantime.

Cheers,

pj
by Pattern_Juggled
Fri Mar 13, 2015 7:56 pm
Forum: general chat, suggestions, industry news
Topic: [CS] No Mention Of I2P Access On Website
Replies: 9
Views: 17653

Re: "the i2p gateway access thing" marketing-suck-y status report

parityboy wrote:
Pattern_Juggled wrote:
Also we have no name for it, apart from "the i2p gateway access thing"... which does, indeed, suck.
"eepstorm"? "TI2" (Truly Invisible Internet)? :)
There's been moves towards "i2pstorm" but that... well, you can imagine. Got2pstorm, etc. ;-P

It'll appear, at some point, and we'll be glad for its arrival!

edit: also helps to know that already we're transiting any Tor traffic - not just .onion sites - via the deepDNS gateways, & it seems almost certain we'll be doing the same for i2p in fairly short order. So it's not just eepsites, in terms of functionality...

Cheers,

~ pj
by Pattern_Juggled
Fri Mar 13, 2015 7:54 pm
Forum: member support & tech assistance
Topic: HOWTO: Connect to CryptoStorm on TAILS OS??
Replies: 10
Views: 22197

cstorm on TAILS?

marzametal wrote:The mods are going to authorise a post I made earlier... sent it via TAILS.
Sorry, from what I can gather, connecting to CS on TAILS is not available at the moment. After setting it all up, I saw in their FAQ they don't support VPN over TAILS... over TOR yes, over TAILS no.
Heya, apologies for coming in late here.

What version of openssl are those Tails images being distributed with?

Code: Select all

openssl version
I can't see that doing openvpn from Tails would be somehow blocked - indeed, I cannot imagine how such a block would actually be implemented.

We know a few folks close to that project team - if there's indeed some sort of overt issue that's confirmed after a bit of further digging, please post here so we can look for a constructive path beyond any such (hypothetical) snags, ok?

Cheers,

~ pj
by Pattern_Juggled
Fri Mar 13, 2015 11:51 am
Forum: general chat, suggestions, industry news
Topic: [CS] No Mention Of I2P Access On Website
Replies: 9
Views: 17653

"the i2p gateway access thing" marketing-suck-y status report

Rollout is complete, but it's sort of been waiting on an official announcement. Which in turn is waiting on some final work on torstorm's public access announcement. Which, in turn, is waiting on...

Anyway, marketing stuff - which we suck at. So it takes longer than usual for us to do it... and it's still sucky :-P

Also there's been some iterative upgrading of the load-handling side of the i2p gateway access.

Also we have no name for it, apart from "the i2p gateway access thing"... which does, indeed, suck.

Cheers,

~ pj
by Pattern_Juggled
Tue Mar 10, 2015 11:36 am
Forum: member support & tech assistance
Topic: "WARNING: No server certificate verification method has been enabled." in logs
Replies: 6
Views: 14565

cert management: security theatre v. actually understanding cryptography in practice

pants wrote:Hi, I'm just testing cryptostorm here, what's the deal with "WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info." in the logs?

I understand you're using a self signed certificate but what about this: http://openvpn.net/index.php/open-sourc ... .html#mitm ?
We're quite attentive to MiTM-style attack vectors. In fact, we've been accused - with some justification - more than once of being completely obsessed with these attacks. I'd say that's the case because MiTM attacks work, and are known to be widely deployed.

However, substantive protection against MiTM attacks takes more than reading manual pages that themselves predate modern MiTM attack techniques by many years. I've also read those pages... indeed, I remember the OpenVPN manpages before those entries were added by the team, so I'm familiar with their trajectory. They were good at the time, to explain to people with little formal experience or background in real-world network security issues why some of the functions of OpenVPN existed, and were encouraged in deployment. Remember, that was back when iPredator was deploying an all-PPTP "VPN service" that left every packet sent and received effectively plaintext to any attacker with so much as an old Windows machine at their disposal.

So I was supportive of the inclusion of these comments about MiTM attacks, although - even back then - I had some concerns that the full complexity of these attacks weren't really being communicated in these short notes. However, they were a good start and in their own way groundbreaking at the time.

Using them as guidelines for today's MiTM attack vectors would be, to be blunt, childishly naive.

Server cert verification within OpenVPN checks to see if the cert being presented has embedded in its defined characteristics that it's a server cert. The idea there is to prevent other clients on the network from presenting their certificates and having them accepted (due to horrific misconfiguration of quite a few other parameters), as proof of server status. This would then not really be a MiTM, but rather a resource substitution attack: connecting not to a server ("node," in modern terms), but rather to some random entity with merely a client certificate for the same VPN network.

That's really not an attack model we're hardening against, for a fairly simple reason: we don't use client certificates. So spoofing a client cert really wouldn't make much sense. Plus they don't actually exist. So that would be a challenge to implement.

The actual RSA-based validation of cert fingerprint and modulus is, of course, done by openssl - not openvpn. One can sit on the pcaps and watch that process unfold...
cstorm_cert.png
There's the flag in question; I've highlighted it in the screenshot.

When looking at competitor Ipredator, they use TLS auth: ssl-server: TRUE/FALSE

It'd be easy enough to implement this in our conf's (actually our current confs have an old version of the switch-param for this constraint in them, that I've not upgraded and will likely drop from the 1.5 conf entirely), but doing so would be purely for show. Legitimate security modelling builds scenarios from credible methodologies available to credible attackers via credible attack vectors. Now, if I'm modelling a MiTM attacker against cryptostorm network sessions, I'm going to model in the fact that they're smart enough to generate their hijack-certs with that parameter simply flipped to 'true'... this is obvious, right?

There's no external CA issuing these certs, no authority enforcing these extended attributes. Just like Superfish was able to set whatever conditions on their own self-signed root certs, any MiTM attacker can simply print up super-duper-special server-only certificates... right? So setting that variable would protect against what attackers, exactly? The ones too dumb to know how to change a parameter from false to true? Those aren't the ones we are defending against, either in formal theory or in our daily work.
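To make the "flip one bit" point concrete, here's a deliberately toy sketch - a hypothetical attacker's pseudo-certificate as a plain dict, not any real certificate API:

```python
# Hypothetical attacker sketch (not a real certificate library): with no
# external authority enforcing extended attributes, the "server cert"
# marker is one publicly-known bit that any forger simply sets to true.
def forge_cert(common_name: str) -> dict:
    return {
        "CN": common_name,
        "extendedKeyUsage": "serverAuth",   # the one-bit "I'm a server" flag
        "self_signed": True,                # no CA involved; none needed
    }

fake = forge_cert("some-node.example.net")
assert fake["extendedKeyUsage"] == "serverAuth"   # the check passes; attack proceeds
```

Any check that only inspects attacker-controlled attributes filters out exactly one class of adversary: the ones who didn't read the spec.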

If one added them up, there's dozens of viable, proven attacks against certificate-protected network security. Our team has been deep in studying those attack vectors for months at this point, on top of years of firsthand experience in the field with the topic. I'd say we have a good sense of the lay of the land; there may be super-obscure (or advanced) attack models out there we don't know anything about, but apart from that we are familiar with the ways this system fails.

That's the basis of substantive security assurances: a deep, long-term, engaged dedication to understanding the full range of attacks and defensive methods. From there, one can begin building new attack models not yet discovered by others, and thereby enable proactive defences for them in production. That's the stage we're at, as a team, here. I cannot speak directly to whether other "VPN services" are similarly engaged in the work, but from what I've seen I'd not bet on it.
I understand you're using a self signed certificate but what about this: http://openvpn.net/index.php/open-sourc ... .html#mitm ?
Oh no, not the "self signed certificate" thing.

Should we go pay DigiCert or Comodo or one of the more seedy of the USERtrust mutant offspring to print up a certificate for us and sign it with their root credentials? That will help this process... how? There's no CRLs in this model, right? No OCSP. No cert pinning. Without all of those pieces - which, incidentally, are fundamentally broken for https nowadays, in so many ways I couldn't begin to enumerate them - having a signature from a CA (or chain of fishy CAs, the illustrious "trust store") is worse than useless. It's delusional.

Look, the certificate thing is mostly a distraction from what's really going on here: this is basic RSA asymmetric cryptography in service of identity validation. Private key, public key. That's it. The math is interesting but not horrifically complex. The implementation mechanisms are well-understood and have few obvious crypto engineering fail-states nowadays. We know how to do RSA verifications across the wire, in other words.

The point of that is the mathematical relationship between the private key and the public key. Adding a bit of exogenous entropy to the public key that says "this is a public key for this class of private key" is pretty silly when you get to the core of the entity-relationship structure at work here. That's essentially a one-bit shared secret. Except it's a secret published in the specifications and known to everyone. So... not much of a secret.

We'll pass thanks. Ipredator might think that's a big deal, security-wise. Dunno. We're a bit beyond kindergarten in that regard.
When looking at competitor Ipredator, they use TLS auth

I'm not entirely sure whether you're making a rather nicely-structured joke about what it means to misunderstand the fundamentals of the TLS protocol, or whether this is intended to be serious. I'll at least give the assumption of the latter for as long as I can do so without sounding sarcastic - which is beneath all of us, so we'll not do that.

Using a shared secret - the eponymous "TLS key" - requires the sharing and the secret. It doesn't count as "secret" if you publish the bloody thing on the internet, ffs!
tldauth.png

Code: Select all

#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
03f7b2056b9dc67aa79c59852cb6b35a
a3a15c0ca685ca76890bbb169e298837
2bdc904116f5b66d8f7b3ea6a5ff05cb
fc4f4889d702d394710e48164b28094f
a0e1c7888d471da39918d747ca4bbc2f
285f676763b5b8bee9bc08e4b5a69315
d2ff6b9f4b38e6e2e8bcd05c8ac33c5c
56c4c44dbca35041b67e2374788f8977
7ad4ab8e06cd59e7164200dfbadb942a
351a4171ab212c23bee1920120f81205
efabaa5e34619f13adbe58b6c83536d3
0d34e6466feabdd0e63b39ad9bb1116b
37fafb95759ab9a15572842f70e7cba9
69700972a01b21229eba487745c091dd
5cd6d77bdc7a54a756ffe440789fd39e
97aa9abe2749732b7262f82e4097bee3
-----END OpenVPN Static key V1-----
echoed to cleanVPN github repository for research reference


To be clear, there's no harm done in publishing this "static key" - it's not paired with a private key in the conventional sense and having it be shared by the whole world doesn't break anything. Well, except any claim that this "shared secret" does anything useful in protecting network traffic. Because it's not secret. That's sort of the whole point: secret.

Quoting Yonan, again:
...a pre-shared key is generated and shared between both OpenVPN peers before the tunnel is started
"Pre-shared secret." That means it's transmitted out-of-band, securely, prior to the creation of a VPN session.

Further, this feature isn't even intended to be a security layer in the sense of adding cryptographic defence in the conventional sense: protection against decryption of plaintext. Rather, it's designed to mitigate certain classes of DDoS. From the 2.2 documentation:
The rationale for this feature is as follows

TLS requires a multi-packet exchange before it is able to authenticate a peer. During this time before authentication, OpenVPN is allocating resources (memory and CPU) to this potential peer. The potential peer is also exposing many parts of OpenVPN and the OpenSSL library to the packets it is sending. Most successful network attacks today seek to either exploit bugs in programs (such as buffer overflow attacks) or force a program to consume so many resources that it becomes unusable. Of course the first line of defense is always to produce clean, well-audited code. OpenVPN has been written with buffer overflow attack prevention as a top priority. But as history has shown, many of the most widely used network applications have, from time to time, fallen to buffer overflow attacks.

So as a second line of defense, OpenVPN offers this special layer of authentication on top of the TLS control channel so that every packet on the control channel is authenticated by an HMAC signature and a unique ID for replay protection. This signature will also help protect against DoS (Denial of Service) attacks. An important rule of thumb in reducing vulnerability to DoS attacks is to minimize the amount of resources a potential, but as yet unauthenticated, client is able to consume.

--tls-auth does this by signing every TLS control channel packet with an HMAC signature, including packets which are sent before the TLS level has had a chance to authenticate the peer. The result is that packets without the correct signature can be dropped immediately upon reception, before they have a chance to consume additional system resources such as by initiating a TLS handshake. --tls-auth can be strengthened by adding the --replay-persist option which will keep OpenVPN's replay protection state in a file so that it is not lost across restarts.

It should be emphasized that this feature is optional and that the passphrase/key file used with --tls-auth gives a peer nothing more than the power to initiate a TLS handshake. It is not used to encrypt or authenticate any tunnel data.
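What --tls-auth does mechanically can be sketched in a few lines - simplified (real OpenVPN also rolls a packet-ID into the HMAC for replay protection), and assuming SHA1 as the HMAC digest:

```python
# Sketch of the --tls-auth mechanism: every control-channel packet carries
# an HMAC under the pre-shared static key, and unsigned packets can be
# dropped before any TLS state is allocated.  Simplified illustration.
import hmac, hashlib

STATIC_KEY = b"pre-shared out-of-band"   # must actually be SECRET to help

def sign(packet: bytes) -> bytes:
    # prepend a 20-byte HMAC-SHA1 tag to the control-channel packet
    return hmac.new(STATIC_KEY, packet, hashlib.sha1).digest() + packet

def accept(wire: bytes) -> bool:
    # recompute and constant-time-compare before doing any TLS work
    mac, packet = wire[:20], wire[20:]
    expected = hmac.new(STATIC_KEY, packet, hashlib.sha1).digest()
    return hmac.compare_digest(mac, expected)

assert accept(sign(b"tls handshake init"))
assert not accept(b"\x00" * 20 + b"tls handshake init")   # bad MAC: dropped
```

Which makes the dependency obvious: the whole scheme stands or falls on the key being secret. Publish the key, and `sign()` is available to every attacker on earth.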

In terms of DDoS protection... yeah. Booters work fine without needing to consume TLS-layer resources. So I think this was well-intended, but not terribly effective in actual practice - this is hardly a failure or flaw in the design of a feature from more or less a decade ago; it merely reflects the evolution of attack techniques since then. Amplified, DNS-based packet floods are so broadly deployed nowadays that talking about TLS-layer, targeted, technically clever DDoS mechanisms seems rather quaint.


There's a second possible benefit to this TLS-auth key/HMAC scheme, which is nicely summarised in Yonan's documentation:
One notable security improvement that OpenVPN provides over vanilla TLS is that it gives the user the opportunity to use a pre-shared passphrase (or static key) in conjunction with the --tls-auth directive to generate an HMAC key to authenticate the packets that are themselves part of the TLS handshake sequence. This protects against buffer overflows in the OpenSSL TLS implementation, because an attacker cannot even initiate a TLS handshake without being able to generate packets with the correct HMAC signature.
Yes, well, I have never been terribly convinced this is a substantive security benefit. Turns out that was not too far off-base, because Heartbleed. So much for the theory of buffer-overflow protection from TLS "static keys."

Indeed, adding extra fiddly bits to cryptographic processes that provide minimal actual security benefit can itself be a serious security risk. All those fiddly little bits accrue bugs or are born with them outright, and those bugs are exploits waiting to be developed. How can I say this? Well, because Heartbleed. And Shellshock. And GHOST. And FREAK. And on and on...

ὅπερ ἔδει δεῖξαι άλφα

Also, obviously, the fact that these static keys (or a tiny bit of them; see below) serve as the key for this HMAC and are widely distributed in public means that an attacker who did want to pursue this TLS-layer vector can simply grab the key materials from the published "private" key and use them to remove whatever hindrance tls-auth might otherwise offer. That makes the entire process completely, totally useless... worse than useless, in fact, as it adds non-constructive complexity to a cryptographic procedure - and that, ceteris paribus, decreases overall system security.

So yay for iPredator for following Yonan's yellowed, old, well-meaning sentences about tls-auth... but maybe it's not really a reflection of their superior cryptographic awareness. Or perhaps they did a deep analytic dive into these waters, and just happened to come to exactly the same archaic conclusions Yonan did all those years ago... the same ones published in that outdated howto for making OpenVPN secure. Unlikely, but possible, I suppose.


A final bit of historical data comes from the full specification of the tls-auth key parameter in the OpenVPN documentation. I note that ipredator has proudly included the text "2048 bit OpenVPN static key" in their distributed keyfile (perhaps not "proudly" - it might just be auto-generated during the process... still, they do distribute it with that phrase embedded in the strings, which seems sort of goofy to me, fwiw). Because - WOW - that's a lot of bits! Bits are good, right?

Well, actually...
The 2048 bit static key is designed to be large enough to allow 512 bit encrypt, decrypt, HMAC send, and HMAC receive keys to be extracted from it.

However, this key size is far too large for current conventional OpenVPN usage. OpenVPN uses the 128 bit blowfish cipher by default. It also uses the 160 bit HMAC-SHA1 as a cryptographic signature on packets to protect against tampering. Since you probably didn't specify a key direction parameter, the encrypt/decrypt keys for both directions are the same and the HMAC keys for both directions are also the same.

That means that OpenVPN is only actually using 128 + 160 = 288 bits out of the file -- much less than the 2048 bits which are available. Below, I will show a sample 2048 bit OpenVPN key, bracketed to show which bits are actually used for key material, assuming default crypto settings:

#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
[eac9ae92cd73c5c2d6a2338b5a22263a] -> 128 bits for cipher
4ef4a22326d2a996e0161d25d41150c8
38bebc451ccf8ad19c7d1c7ce09742c3
2047ba60f1d97d47c88f7ab0afafb2ce
[f702cb04c7d15ff2606736c1825e830a -> 160 bits for HMAC SHA1
7e30a796] 4b82825d6767a04b3c8f4583
d4928127262c3a8603776bd6da339f69
dece3bbfee35f1dceb7cbceaef4c6933
2c2cef8ac550ed15213b216b825ab31e
49840f99ff9df3c5f31156439ed6b99c
4fc1bff417d33d77134365e38c9d71cd
e294ba6e65d51703d6d4a629d5fc618e
adddb889b8173ac79b4261328770bbbe
74294bc79e357c82af9ef53f2968be6a
007e6022da0a1a39f2ed5660f94a5926
35d72e5838dd78dd680d91f6edcf6988
-----END OpenVPN Static key V1-----

As you can see, the only lines actually used are 1, 5, and 6. And of course, that matches up perfectly with what you observed.

To verify this, run OpenVPN as follows:

Code: Select all

openvpn --dev null --verb 7 --secret key | grep 'crypt:'
where 'key' is a file containing the key shown above.
  • Static Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
    Static Encrypt: CIPHER KEY: eac9ae92 cd73c5c2 d6a2338b 5a22263a
    Static Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
    Static Encrypt: HMAC KEY: f702cb04 c7d15ff2 606736c1 825e830a 7e30a796
    Static Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
    Static Decrypt: CIPHER KEY: eac9ae92 cd73c5c2 d6a2338b 5a22263a
    Static Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
    Static Decrypt: HMAC KEY: f702cb04 c7d15ff2 606736c1 825e830a 7e30a796
Note that the keys which are shown in the OpenVPN output exactly match the bracketed section of the key source.

{snip}

So you might ask why is the OpenVPN static key file so large, if such a small percentage of the bits are currently used? The answer is to accommodate future ciphers and HMAC hashes which use large keys. Changing a file format is obviously problematic from a compatibility perspective, so 2048 bits were chosen so that two sets of 512-bit encrypt and HMAC keys could be derived for two separate key directions.

So, in actual practice, the entropy from the keyfile that's actually used by the tls-auth parameter is 160 bits: the key for the HMAC-SHA1 that authenticates each packet (I haven't gone to the source code to confirm the exact implementation details - if anyone does, and wants to post that in replies to this thread, that'd be pretty interesting to see). Per the bracketing in the sample above, those 160 bits are the fifth line of the PEM'd keyfile plus the first eight hex digits of the sixth.
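To make the byte accounting concrete, here's a small Python sketch (mine, not OpenVPN source code) that parses the sample static key quoted above and slices out the default key material at the offsets shown by the brackets in that sample:

```python
# With the default BF-CBC + HMAC-SHA1 and no key-direction, OpenVPN
# pulls its 128-bit cipher key from the start of the first 512-bit slot
# and its 160-bit HMAC key from the start of the second 512-bit slot of
# the 2048-bit (256-byte) static key file - per the bracketed sample.
SAMPLE_KEY = """\
#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
eac9ae92cd73c5c2d6a2338b5a22263a
4ef4a22326d2a996e0161d25d41150c8
38bebc451ccf8ad19c7d1c7ce09742c3
2047ba60f1d97d47c88f7ab0afafb2ce
f702cb04c7d15ff2606736c1825e830a
7e30a7964b82825d6767a04b3c8f4583
d4928127262c3a8603776bd6da339f69
dece3bbfee35f1dceb7cbceaef4c6933
2c2cef8ac550ed15213b216b825ab31e
49840f99ff9df3c5f31156439ed6b99c
4fc1bff417d33d77134365e38c9d71cd
e294ba6e65d51703d6d4a629d5fc618e
adddb889b8173ac79b4261328770bbbe
74294bc79e357c82af9ef53f2968be6a
007e6022da0a1a39f2ed5660f94a5926
35d72e5838dd78dd680d91f6edcf6988
-----END OpenVPN Static key V1-----
"""

def parse_static_key(pem_text: str) -> bytes:
    hex_lines = [l.strip() for l in pem_text.splitlines()
                 if l.strip() and not l.strip().startswith(("#", "-----"))]
    return bytes.fromhex("".join(hex_lines))

key = parse_static_key(SAMPLE_KEY)
assert len(key) == 256                # 2048 bits = 16 lines x 16 bytes

cipher_key = key[0:16]                # 128 bits: line 1
hmac_key = key[64:84]                 # 160 bits: line 5 + first 8 hex digits of line 6
print(cipher_key.hex())               # eac9ae92cd73c5c2d6a2338b5a22263a
print(hmac_key.hex())                 # f702cb04c7d15ff2606736c1825e830a7e30a796
```

The extracted values match the `CIPHER KEY:` and `HMAC KEY:` lines in the `--verb 7` output quoted above.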

So for the ipredator tls-auth key, this is the fifth line - the bulk of what's actually used in any cryptographic operation:

Code: Select all

a0e1c7888d471da39918d747ca4bbc2f
That's in hexadecimal. Let's convert to binary (as in bits, small-b), just for the fun of it. We get...

Code: Select all

10100000111000011100011110001000100011010100011100011101101000111001100100011000110101110100011111001010010010111011110000101111
Which comes out to 128 bits, not the full 160 bits of the HMAC key - the fifth line alone carries only 128 of them, with the final 32 bits coming from the start of the sixth line. But at this point we're pretty far into the weeds with all this, and it might be time to step back and wrap up this boring post.
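That conversion is easy to sanity-check (the fifth line is 32 hex digits, so 128 bits):

```python
# Sanity-checking the hex-to-binary conversion of the keyfile's fifth
# line: 32 hex digits always expand to exactly 128 bits.
h = "a0e1c7888d471da39918d747ca4bbc2f"
bits = bin(int(h, 16))[2:].zfill(len(h) * 4)
print(len(bits))   # 128
print(bits[:32])   # 10100000111000011100011110001000
```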

edited to add: on reflection, the construction is likely simpler than my first guess - in standard HMAC-SHA1, the key material is used directly to key the HMAC over the packet contents (and header), with no cipher involved at the tls-auth layer at all. But I'm not going to confirm this against the source, because I'm supposed to be wrapping up this post!


(Incidentally, leaving some extra capacity in the defined keyspace for future use is to me excellent thinking on the part of the designers of the protocol. It shows foresight and a willingness to do smart things even if it's not likely many folks will notice them until much later, if ever. One does see such decisions throughout the openvpn codebase, one reason the protocol has justifiably gained such widespread support.)


One final question was asked:
There is this thread, but this should be automated, no?
The question relates to our thread here in the forum entitled "HOWTO: confirm authenticity of cryptostorm.is & cryptostorm.ch SSL certs," an excerpt of which reads as follows...
Here are the currently-installed SSL certificates (public exponent) for our two main production websites, cryptostorm.ch & cryptostorm.is. We will also add certificate materials for secondary domains such as torstorm.org, as well as keep this post updated with current materials as we upgrade or otherwise adjust our CA credentials server-side.

Note that neither of these two identity-verifying server certificates are part of connections to cryptostorm's network; rather, they simply exist to confirm that the websites folks are visiting using TLS/SSL (https protocol) are actually the websites we run, and not a Man-In-The-Middle replacement undertaken by an attacker.
Although this thread is discussing the keying materials that back the https sessions of cryptostorm.is and cryptostorm.ch, your question is insightful and highly relevant. The process of confirming the validity of those certificate materials is currently automated for https web sessions... by web browsers, via the Certificate Authority (CA) PKI model. Which is a horror. The reason we do a non-automated validation is exactly to enable manual confirmation of those materials. It is difficult for attackers to model systems designs that include manual elements, because manual stuff is all squishy and weird, because humans.

That's a feature, not a bug, in this context. Stick a human in the mix, and an automated attack model has a much more complex challenge to face.

That said, we are in the process of automating, via out-of-band mechanisms, validation of the authenticity of public key materials for web-based (and other online) resources. That project, Decentralised Attestation, has as its Proof of Concept (PoC) the confirmation of exactly the cert/key materials backing the cryptostorm network sessions we've been discussing in this thread.

So that's a really solid question, one that touches on the core of a lot of the work the cryptostorm team has been doing this winter.

I don't know of similar work ipredator is doing in such areas, but perhaps I've just not run into their published contributions in my literature review thus far.


~ ~ ~


I apologise if I have sounded crabby in this post. It's not intended as disrespect, and in truth these conversations are always helpful - dialogue is the core of good security process. I've been, personally, buried in cert-based validation systems analysis for... a while now, as cryptostorm prepares to deploy its own 'Decentralised Attestation' (DA) model as an alternative to the broken CA model. And that research has left me more than a little crabby, for a host of reasons I'll not bore you with here.

Suffice it to say that there's oceans of legitimate CA/PKI/identity authentication issues to concern ourselves with in the real world (as opposed to the "VPN industry" which is... broken, in so many ways, even more so than the CA world is - which is amazing), as well as a crying need for deployed systems that address those issues in a substantive and credible way. This silly stuff with old advice about old understandings of old attacks that even in the old days nobody did because real attacks were faster, easier, and worked better... it's really not the most crucial parts of the dialogue, imho.

That said, it's a starting point, eh? So, again, my apologies for the crab-ness - and I thank you for the questions posed.

Cheers,
  • ~ pj

ps: ipredator is using cert signatures generated with SHA1 = "Signature Algorithm: sha1WithRSAEncryption" - which has long been recommended against in any security-intensive context - and they've issued a decade-long self-signed CA certificate, which is generally considered bad security practice. We certainly don't do either in a production context. Fwiw.
by Pattern_Juggled
Tue Mar 10, 2015 8:17 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

unpacked CTL...

{cross-posted to twitter ~admin}

marzametal wrote:Ran it in a sandbox, right clicked to install "CTL"...
rundll32.exe kicked up a fuss, wanted to talk to 23.63.99.202 (Akamai)...

According to an anti-executable...
command line switch - "C:\Windows\system32\rundll32.exe" cryptext.dll,CryptExtOpenCTL D:\Downloaded\authroot.stl
digital signature - Unknown
file publisher - Microsoft Corporation
ctl01.jpg
Ok, so if the CTL is genuine it'll have a bunch of certificates in it that Windows can grab and assume are good/trusted... so merely seeing a bunch of certs in there isn't an automatic red flag.

However, from what little I can see in those screencaps, they don't look like the trusted root certs that are legit. Edited: looked again, aaaah... that's the cert that signed the CTL itself. Yikes. That's a fishycert, for sure. Not a good sign...

Can you dump the certs out of the file? They'll be in PEM or DER format, afaik, or worst-case Windows versions of them which are easy to convert.

And yes, those IPs all resolve to Akamai blocks, which again is at least surface-level legitimate: Microsoft does use Akamai's CDN to distribute some (or all?) Windows Update files... which, as I said above, seems unwise to me, but what do I know of such things as managing Windows Update production requirements? Not much, honestly.

That said, those IPs are the destination of resolution for some decidedly non-Windows and most likely non-legitimate domain names in time windows right up against when they show up as "legitimate" resolver answers for this windowsupdate hostname... and those IPs sure do end up serving a lot of verified malware in those close time windows, too.

My working hypothesis right now, which may be total bunk, is that there's a trick using AAAA/IP6 DNS lookups that enables this redirect trick. It's come up a few times in related research, and it makes sense: IP6 resolver pathways are preferred by most modern browsers, so if you can jump the AAAA records and get traffic headed to your machine, you're in good shape.

Note that all of the analysis in this thread is solely IP4-based. That's myopic. What's happening in IP6-space? I'm running off machines - indeed, the entire cryptostorm network - that have IP6 hard-disabled, so I'm seeing at best a partial picture here. Most of the world will be IP6-active, and that presents some new angles to consider.

Cheers,

~ pj
by Pattern_Juggled
Sun Mar 08, 2015 5:52 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

TechNet on www.download.windowsupdate.com

Here's a search query on the "social" side of TechNet that turns up a vast pool of questions relating to this hostname; I've only just begun reading, but wanted to post out the full search so others have easy access meanwhile, as well:

https://social.technet.microsoft.com/Fo ... update.com

Cheers,

~ pj
by Pattern_Juggled
Sun Mar 08, 2015 5:36 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

TechNet thread re www.download.windowsupdate.com

A colleague pointed out a long thread on Microsoft's TechNet site, discussing the http://www.download.windowsupdate.com host and the files it serves. Here's one sample post, from 2009:
THIS SOLVED MY PROBLEM
downloaded & installed this file.....

http://www.download.windowsupdate.com/m ... ootstl.cab

but check the error log in your event viewer... this was my message

Failed extract of third-party root list from auto update cab at: <http://www.download.windowsupdate.com/m ... ootstl.cab> with error: The data is invalid.
.
Log Name: Application
Source: CAPI2

and the name of the file in the temp folder which caused the problem was tmp9ccc.vbs

i also faced other errors.... 513 and 1002

check your services...... the properties of each service

i guess all programs mentioned in the dependency tap should be on automatic

http://download.microsoft.com/download/ ... rvices.doc
There's ample data in the thread to dig into, for those following along. I'll be posting more of my own summary analyses, as time allows.

Cheers,

~ pj
by Pattern_Juggled
Sun Mar 08, 2015 5:28 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: Kebrum - raw data - cleanVPN, or not?
Replies: 5
Views: 31455

topic split

I've taken the liberty of splitting off the "funky CRL subdomains" topic into its own dedicated thread, as it had basically taken over this one. I may go back and pull some of the findings still in posts above, relating to the CRLs, and move to the new thread, but that seems a spot of work so I'll avoid doing so for now :-P

Do keep in mind, however, that it's the Kebrum installers calling these mysterious subdomains during their setup process - so this is directly relevant to the Kebrum analysis itself. At the same time, we've noted it's not only the Kebrum installers that make such calls; thus, having a standalone thread for that work makes sense as it can be managed and maintained independently, and referenced as needed.

Cheers,

~ pj
by Pattern_Juggled
Sat Mar 07, 2015 3:29 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

crl.comodoca.com --> Upatre trojan downloader

Looks like the two ends of the bridge are coming closer together.

Here's a confirmation from Malware Must Die that the hostname crl.comodoca.com is used to deliver a payload 'EssentialSSLCA.crl' - which then gets installed into the trust store, which then... it's quite a chain, isn't it?
012.PNG
This is a step in the process of the functioning of the Upatre trojan downloader - which in turn was (is?) a part of the P2P/Gameover family of botnet agents. So that fills in one more piece.

added: take a single step deeper into the P2P/Gameover botnet forensics and you run into this:
D11010011.png
Well, how very interesting. Black Lotus is very, very closely affiliated with a particular VPN company most folks will know by name. What are the chances, eh? All those ISPs all around the world, and two wires cross right down in Texas under the name Black Lotus. Quite a coincidence, indeed.


Still digging...

Cheers,

~ pj
by Pattern_Juggled
Fri Mar 06, 2015 5:20 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

subverting windows update abandonware for fun & profit (& ssl kneecapping)

If you open a website that Windows doesn't have a valid root cert for, that CA/root cert will be looked up from the list (which is cached locally, as far as I understood)
I'm still working to integrate the "Certificate Trust List" into this process, because that's the one that actually gets pulled during (for example) the process of executing the Kebrum installation binaries, above.

From the pcaps, here's what happens (apart from the pulling of the 'authroot.txt' file, which is outlined in full above):

1. A process within the instantiated installer initiates a DNS resolution query via...

Code: Select all

User Datagram Protocol, Src Port: blackjack (1025), Dst Port: domain (53)
2. DNS query syntax is as follows:

Code: Select all

95	15.546665	192.168.56.101	192.168.56.1	DNS	76	Standard query 0x6bfc  A crl.verisign.com
3. primary nameserver indicates a CNAME alias to:

Code: Select all

crl.ws.symantec.com.edgekey.net
4. that hostname, in turn, is a CNAME alias to:

Code: Select all

e6845.ce.akamaiedge.net
5. That, in turn, resolves via A Record to:

Code: Select all

23.5.245.163
6. HTTP request of:

Code: Select all

GET /msdownload/update/v3/static/trustedr/en/authrootstl.cab HTTP/1.1\r\n
Here's a screenshot of the underlying wireshark'd packet flow:
kebrumauthrootstl.png
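The CNAME cascade in steps 3 through 5 can be modeled offline as a tiny lookup table. This is hypothetical illustration code, not a real resolver - actual resolution obviously goes through live DNS:

```python
# Offline model of the CNAME cascade captured in the pcaps above: each
# lookup either aliases to another name (CNAME) or terminates in an
# A record carrying an IP address.
CHAIN = {
    "crl.verisign.com": ("CNAME", "crl.ws.symantec.com.edgekey.net"),
    "crl.ws.symantec.com.edgekey.net": ("CNAME", "e6845.ce.akamaiedge.net"),
    "e6845.ce.akamaiedge.net": ("A", "23.5.245.163"),
}

def resolve(name: str, chain: dict, max_hops: int = 10):
    hops = []
    for _ in range(max_hops):
        rtype, value = chain[name]
        hops.append((name, rtype, value))
        if rtype == "A":
            return value, hops
        name = value  # follow the alias
    raise RuntimeError("too many CNAME hops (loop?)")

ip, hops = resolve("crl.verisign.com", CHAIN)
print(ip)          # 23.5.245.163
print(len(hops))   # 3 lookups to reach an A record
```

Three hops before a single packet of payload can move - each one a separate zone, operated by a separate entity.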

That IP address - 23.5.245.163 - is indeed located in an Akamai-controlled block. It's also an IP address associated with a mix of Verisign-affiliated hostnames (Symantec now owns the GeoTrust, Thawte, RapidSSL, and Verisign CA businesses) that have resolved to it in the last 18 months alone...
VirusTotal's passive DNS only stores address records. The following domains resolved to the given IP address.

2015-01-29 e6845.ce.akamaiedge.net
2014-11-20 t.symcb.com
2014-07-19 st.symcb.com
2014-07-08 ga.symcb.com
2014-07-07 ica-crl.digitalcertvalidation.com
2014-07-07 orgc3-crl.verisign.com
2014-07-07 ssp-sia.verisign.com
2014-07-06 td.symcb.com
2014-07-05 svr-rapidssl-crl.rapidssl.com
2014-07-04 gb.symcb.com
2014-07-04 tb.symcb.com
2014-07-04 tc.symcb.com
2014-07-02 tj.symcb.com
2014-07-01 strato-crl.digitalcertvalidation.com
2014-06-30 th.symcb.com
2014-06-28 ica-aia.digitalcertvalidation.com
2014-06-25 crl.ws.symantec.com
2014-06-05 a23-5-245-163.deploy.static.akamaitechnologies.com
2014-05-22 a23-5-245-163.deploy.akamaitechnologies.com
2014-05-16 svrintl-t1-aia.verisign.com

An awful lot of nasty stuff has been communicating with that specific IP address recently (although it's of course quite possible that all that badware is simply doing routine background Windows cert-update work entirely unrelated to any badness, in which case this exact pattern would appear but would indicate nothing improper on the part of anyone involved):
Latest files submitted to VirusTotal that are detected by one or more antivirus solutions and communicate with the IP address provided when executed in a sandboxed environment.

13/56 2015-02-26 15:41:56 fd54cc6001a6dd809672dac15d97890a8738bcb701680210879acb79fed7f0ee
37/47 2013-05-21 23:42:34 97d56e716e29f566bd227c17f1531b11f3f66678bb53e156f7e66b66d1038c8a
3/47 2013-05-21 23:31:52 be0bb1db750c0a7c29d3db1af20b4b6fc407bf01a901a7d699006657717f8853
3/47 2013-05-21 23:31:42 ac0c23dfaf427190de25e915ad3f30da85753e60d8b91cef0dcad40854f0753b
5/47 2013-05-21 23:24:33 7e4be4daf139b3c79a532e02fae4810ea9408a3598050b94590ed67811e649dc
5/47 2013-05-21 23:22:28 617ade6180c64f8dc07ad38847f0749520f13e8e74c5a4a59489f716525b6c08
5/46 2013-05-21 23:19:51 20c0ab864a4338b714c299716fe9fc488768d01a1fad9fee6c6288e2536eec02
36/47 2013-05-21 23:17:11 ac4bffc0b2c321db9bc516acf8371cfc311d5ee3f25100af30869a5fc71c22d1
3/47 2013-05-21 23:16:29 1c096e8eb3af484fd9667bbb3c626eb1db44a0fad0fa1e67b7d1c8c7ef5aa7c4
35/46 2013-05-21 22:42:48 e2ec948c1bb9bb08f5608b6c93484bf7b26cc673c130ed1403491cf27ea5148a
36/47 2013-05-21 22:41:23 8eae3d417cd295581be4648d811cce0801e35b285f2243b9818983aa0d892d01
38/47 2013-05-21 21:58:54 1cd07c608c6b046db532f17f1f24654291fe762eea8f437b92171bbcce460da9
3/47 2013-05-21 21:55:29 b709088c6b681912a7e4a0c8e28acf51e6c410f3d9223b8b2056a1e3a0dcfef9
8/47 2013-05-21 14:06:12 636898c5358a35856e37a6f22ea1f840cd7343e71e64df98fb08260029513dc7
8/47 2013-05-19 00:36:08 469fbe016536dbb42266f38147dd07578a21e048217d84b17c9b4f1cc0fd7151
1/47 2013-05-18 15:34:09 f968f6102333afe793d8cce85d35b89a77130e182e6dc473b7e51f5971dcf228
32/46 2013-05-16 09:16:54 60a9e49e8007429a4bdd6a52d9541a38662d241f2653c2ed010f280e92658e7b
1/46 2013-05-15 04:43:19 52070c17d639df3fd921bb1d0c52c13220aa9e84fe1dc20ca7a8eab53712ac0b

I still don't understand what purpose the complex welter of subhostnames serves here: wouldn't resolving crl.verisign.com to 23.5.245.163 directly be more secure, less complex, and easier to administer? I say that with an awareness of these things, given our use of similar layered hostname-IP resolver pools (our Hostname Assignment Framework) - so I'm as much curious as anything else. Assuming it's legitimate, there's presumably some reason they add that extra layer of arbitrary subhostnames into the process flow. Of course, I do understand the benefits of putting Akamai in as one lookup layer... sorta. Only not really.

Do you really want to outsource to an amorphous cloud-based CDN the delivery of... CRLs? Or in this case CTLs (certificate trust lists)? These aren't exactly multi-gigabyte streaming video files. They don't change every 5 minutes (do they?), and they aren't subjected to enormous pressure to cut every millisecond from RTT pingtimes (are they?). They're compact, relatively static files that are extremely - extremely! - security-relevant given that they control what root certs are or are not injected silently into the windows trust store locally on countless millions of PCs.

Subvert that process, and you've got one bad-assed exploit avenue. You can now do things with those PCs unimaginable without that subversion... like, ooooh, MiTM all of their https sessions invisibly, without any hint it's going on. At massive scale.

It would seem to me, naive as I am, that Symantec/Verisign/Thawte would - given that they are a Certificate Authority - want to actually manage the process of controlling and delivering those CRLs and CTLs themselves, in-house, firsthand. Managed closely, with audit trails and accountability. Maybe a hostname like svrintl-t1-aia.verisign.com helps that, via some internal tracking process. Seems like a lot of complexity just asking to be broken, to my way of thinking - but then again my world includes a different mix than that of the folks at Symantec who likely designed this.

Finally, for this point anyhow, I'm really not sure how one would build an audit trail, given that this is Robtex's graphical representation of what happens when a Windows machine wanting a CRL (version 3, which dates back to 2006... because of course that makes sense?) asks the DNS system for an IP address to which packets can be sent:
verisignCRLsnarl.png
. . .

Anyway, so presently we see that crl.verisign.com resolves via a somewhat complex, multi-step process, to IP address 23.5.245.163.

That's not always been the case; a look back through records shows that these mappings have been noted as in effect at in the past 18 months:
2014-07-29 23.9.85.163
2014-07-19 23.7.69.163
2014-06-26 23.5.5.163
2014-05-27 23.9.117.163
2014-05-21 23.13.165.163
2014-04-17 23.5.245.163
2014-03-16 23.50.69.163
2014-02-13 23.64.165.163
2013-10-30 23.49.133.163
2013-10-19 23.61.181.163
2013-10-17 23.60.133.163
2013-10-15 23.61.69.163
2013-10-07 23.65.5.163
2013-09-26 23.4.53.163
2013-09-18 23.35.165.163
2013-08-23 23.38.85.163
2013-08-20 23.53.181.163
2013-08-17 2.22.133.163
2013-08-17 23.43.133.163
2013-08-16 23.36.149.163
Which is twenty different IPs - all ending in 163, what are the chances? - since August of 2013. So each IP lasted less than a month, on average. They are:
  • 23.9.85.163
    23.7.69.163
    23.5.5.163
    23.9.117.163
    23.13.165.163
    23.5.245.163
    23.50.69.163
    23.64.165.163
    23.49.133.163
    23.61.181.163
    23.60.133.163
    23.61.69.163
    23.65.5.163
    23.4.53.163
    23.35.165.163
    23.38.85.163
    23.53.181.163
    2.22.133.163
    23.43.133.163
    23.36.149.163
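A quick back-of-the-envelope check of the "less than a month each" claim, using the first and last dates from the passive-DNS list above:

```python
from datetime import date

# Twenty IPs mapped to this hostname between the earliest and latest
# dates in the passive-DNS record list above.
first = date(2013, 8, 16)
last = date(2014, 7, 29)
n_ips = 20

span_days = (last - first).days
print(span_days)            # 347 days total
print(span_days / n_ips)    # about 17 days per IP, on average - well under a month
```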
I haven't looked into all of these yet - they should all cleanly WHOIS to either Akamai, or Symantec... right?

I'd have more confidence - and perhaps not include the question mark - if stuff like this didn't show up in my research for this post. First, malware that explicitly calls some of these verisign.com CRL hostnames:
funktCRL.png

Second, this anomaly when checking the WHOIS records for crl.verisign.com (yes, doing WHOIS lookups for subdomains is a bit nonstandard, but it can still provide hints of interesting things):
verisigngodaddy.png
verisigngodaddy2.png

This, I don't even remember where I screenshotted it tbh, so I'm sticking it here to remind myself (or anyone else curious) to follow up and see what's in the file referenced. That version is also very old, isn't it?
crl_verisign.png

Oh, and here's the "CTL" that comes out of that cab at the "windowsupdate.com" URL discussed earlier in this thread. The URL itself cascades, via a series of DNS zone-file records returned in response to the initial resolver request, from CNAME www.download.windowsupdate.com to CNAME www.download.windowsupdate.nsatc.net (both of which are subdomains, not http-based resources... which is not supposed to be possible, is it?), then to CNAME download.windowsupdate.com.edgesuite.net, then to CNAME a767.g.akamai.net - which is actually, woohoo!, a real A Record listing IP address 184.25.56.93. That's the actual IP address that served the below-attached file during our test run of this installer, as confirmed by the full-payload pcaps gathered during the process...
authrootstl.cab
(56.24 KiB) Downloaded 1632 times

That, in turn, unpacks to this .stl:
authroot.stl
(132.73 KiB) Downloaded 1618 times

...which, and you'll get used to reading this a lot, I've not been smart enough to unpack using any tool I can find (though I'm not a Windows native, so perhaps it'll be easy for others - I hope so!). It's pretty big for a "certificate trust list," in any case. Not a CRL, remember... a trust list.

(incidentally, IP address 184.25.56.93 is a little odd in that it's been the answer to DNS resolution requests for domains nextgennbn.gov.sg, abercrombie.com, caranddriver.com, rediff.com, mashable.com, a184-25-56-93.deploy.static.akamaitechnologies.com, and nfl.com... all since December of 2014. That's quite a record! Now, Akamai is a CDN, of course, so they can remap IPs to client domains as often as they want, and perhaps they've been fast-fluxing this stuff in the last few months because, ummm... something something. Because of some obvious corporate reason I'm not smart enough to figure out by myself, obviously.)


Anyway, authroot.txt & authroot.stl both come down as confirmed by pcaps. They come from deep within Akamai, via a series of CNAME redirects that's pretty impressive. They end up at IP addresses that are impressively ecumenical in the sorts of domains (and subdomains) to which they'll answer.

And it's all related to the authority to inject root certificates into the Windows trust store arbitrarily, without any approval (or even admin access) from the user required. Which is to say: figure a way to hijack, even temporarily or ephemerally, that process and you've got the door open to root cert inserts at an unmatched scale... it makes Superfish seem a drop in the bucket, in comparison.

Has someone managed that trick? Is there evidence of it in these files? I honestly don't know. Personally, I can't rule it out based on the data I've collated thus far... nor can I say I've got the proverbial smoking gun that it has taken place (or is taking place). Additionally, it fails to pass my tech-world sniff test... admittedly not terribly objective, and far from a perfect metric: I'm a bit jaded after years of network security front lines & yes I have been known to see ghosts in the shadows when there's only shadows.

Even so, this is all... so terribly ripe for exploitation. Were it my job to figure out how to inject root certs into alot of Win machines without causing a fuss, I'd sure as heck be checking this whole complex, unstable, multi-entity, http-encoded mess really closely for any little corners to pry up and use along the way. That smells like ripe ground for exploits to me... and I've had enough time around exploits to have a half-decent gut sense on that sort of thing, albeit far from perfect.


Whatever the case, it's fascinating. It's very instructive, in terms of the layering of DNS and CA-based certification that produces certificates in the local trust store, and thus the magical "green lock in the browser" that means https is secure. Only, of course, it doesn't... not at all.

Not yet, anyhow :-)

Cheers,

~ pj


edited to add: these are recently-enumerated examples of malware that have the hostname "crl.verisign.com" embedded in their post-compile strings somewhere. As noted before, that could simply be a result of their including routine CRL administrative calls in pre-compile code, which then carries through into the binaries and thus pops up in this kind of list (indeed, on the same virustotal page one can see a list of non-flagged binaries that include this URL in their strings but haven't raised red flags on virus-scanning sites - which may or may not indicate they're clean files, tbh)... but it's something I'd be remiss not to at least note. I've not checked each one to see whether it has a legitimate reason to be calling a verisign CRL hostname as part of its compiled-in duties...
Latest files that are detected by at least one antivirus solution and embed URL pattern strings with the domain provided.

25/54 7b28837f16a9941b979c4e4031cb214ec67d29097e6d26f03fe96b0d9e101568
1/54 8d81c92d1d02e9aa0027dc18b0022b89639bb74dbef43afec398645d9d80b191
44/54 9bb200bc3259cc724009d923c2ca0845c652f71e5c50bce2894311f57434872b
49/53 5158b19ba52311bd8a888323507e3d43119eda32630d387a8cad4b3ac50a7246
48/54 451d822ddab6e1642d20c248fbbb27efcb5eb62f78e52279b51709898c765244
4/54 3a67623c9f0038feefd667bb56ce7a7ec0e6d6ba292fadca02d75105b9a8b6df
46/53 a67841ba3adbdeceb65a6538d637ba1a48e7edb0fdc5679e1f0c285a91aa314f
14/54 f3ef72da9bf5dc1752abf8ddfc8505ce52d7cf5e963783c4bb30e4f065651dd9
2/53 f97293c29716811a30852b6186c65d8c1d8632b2f223a8cf70035c8530281de3
43/54 54c259ac27a886e79ab2dd8eb9c6bad3d8879084e3e20419b09c1cdcaad6c5e2
47/53 f79ef0189485f6e24d14d9b705cb3cab0dc01417adaabca6ea9f858b84e7d8ff
45/53 ab3e371c3ad484954bd88a5ceee2ce9c957bddbc08a292517823419b571c8687
42/53 e1276ec2f6b0c18e4878b294f0e4b6fd6320599942b5e8fd9afa1f1d13f60665
43/54 b70e6130cda1b9ac32237f8f07d03434ba4e9caa7e1eacaca58a1886bb708ef9
50/54 9571818ee19cbcb2a9dd0241e0e5a1f33a539b2dd576ae55124b5466ac29ecc6
40/52 ce984a3033b9c3507ef0e15b9d9beaf6caedaa9ae9c8e802a861fa26a2a31023
47/53 33d4e518a57f1bacdb1dc8c6e22ce02e7fe62fe7e011b6ecd79151ec451d346c
48/53 c83deebaa0e4ff771046423c77133b61a7ff655a6aa6c195f813b429f6c8f6f1
1/50 67ce83ac8b3cb2792222c58cf068f4808ff4ed9b0a44a13ee8b229e50e2632e3
3/54 52634de789cc5ba63a445e186d7c888a0032dd64a4609ca3970412f5d7c270db
43/54 c0fca0132257134d0f403123cab2e2fcff07dd94eaf49a9a77f2f0b6c74a28c2
48/53 7a0ba519a5d14564c3628935fa662f7f73ea0d91546d6d34bab2c38e369addd6
43/54 943ff1eaea992e80ba99fa4283f2247464cec40398e2c239fde9b51e80af25e6
42/53 04710d9d767e6cf262cc4777d04df13b6b9375645dbb053b29faad4ad2007a03
43/54 f1419e115a60ab4a9525ebe5ae51e3e1611a8b328bc51bbf284dd4e73e2e6663
42/53 b769b22fe24bbe254787b4b50ce0a5981a7f75c4cab537d40117518b714a5120
26/54 302ae701ece4e031afa4428e86957fa56574dfdb7a496951df30cc97e64b4178
43/54 75ca5a6d0b08b31f92484bf83dd728a7487182bd353119a06115fd886834931f
43/54 30feaf83619509479e9e17707b04d8d30915b7764153821f05ce6abdcca1e339
41/54 1ddaaf94c06d4fc974e43e859e6ec6b15f30aede27454dab2b79dff2a722a8e6
48/54 9c66ab11b944b96702ded52b553685aa9cd56f1738e142f9eb1475104829391e
43/54 7ce554c9e1bf31acdf06e1b5abf5f3aeec50ea33bd96fa2538dc6e4984cca571
43/54 a480346105b3d831cb5be816de3cd6c834798757ef002e609d80330b0a6f4e59
1/54 b9f35917c3593b36c4bb8930e9c589354063b7612338022dba506d60a8a8d769
24/54 911f1f70155219e648155982b12c019a332cf21ea361aa3b84b45745c9661445
42/54 53cb74f7605efd4017258bbe8001867ef1bc4a5de063e064aef5c959dab16fac
49/54 dce782d51f265990f5ace8abd0f934b25a9389c1a7864c3ed8046a4e1e2d8fd8
43/54 054fc4368316f11bc17c998c33240afc21073f7077e478f5e6d8788b153847aa
46/54 1af8514edab296c3fc4b91c814ea9c08e2aca63c4cc60018451e59fc6ed83443
48/54 f53c9effd57e9b5a1f0b35c4614cabfe1a134ddded29d90b30d645bf74cf4211
26/54 49c64d18629e9efad31c5c9a77ce6df53c68b1998dbbdef0e2fadb08413fcc41
42/54 6a73d6fed9247d8bf343a03284ec8d3632a661ea690d9faf3a760956a0cd3fd0
27/54 6eec0856dba595b0d8454755e6b2b799dbef051e7be443ec52258d64245f53d5
48/54 49c5d6e790a7b1f17ed0554a6fa79b49e54002bc5a6a809a173ca060089376be
48/54 3b4df336d1244d84f42382b52c2cf159a5a30732cff8ab48d645ebee23b36aa9
43/54 7e374e4b1c1d31c4cf4a1da35f50cd238431e027333e88548aecc5141578e71b
50/54 b86d700d317ae62185907bbc7e1958e96db7b9d39042c34117f239bfc6f50df6
49/54 dea017e8333cb57fd9150fb1cfb81353ef701801dfe596d5fa76dba77046c9db
48/54 e66fa6d49c2e172b2d13490da56e3878b886f999f7e3478b46f77a6c4ee2cb3e
37/48 aaef1045dbd77cdbd13a11e73fe038c0733e80ec44b218dc7e8cd3958fcb98b8
47/54 c3b8ed5e5856d228c2a2c2d8eb8d8e07c225e2a0180c7723364aa66f7b8e5140
43/54 8dbd3adf01b60108be9a7e2f1c5f75e91625a6181d8064aea52a156bea8b1b5f
42/53 a77307b8499051f90a6922cce8007ad2af3c698b7d48c2ad5141d6034d478640
43/54 4e2698f38fa92732971ee9aa5634b4a89703be1b5509b5d82cb67ee6482850de
43/54 4112278766b225b7960e130cb92c7d1e5ab0197170b8fa85409fc760caeb0fb7
48/54 a36cc9fe6d358901f805d62dc955a6b3f47b71419f830fd80808acb51f9d1558
43/54 ba8d94ad5e143749d0f5c9455e9c9a1bc82a0449c61c3c65b324afd10f15110d
49/54 179f1bfea87c2dae771ebc4e59c36fae31c40df74b8b6b7c5db9072015d31a05
1/54 c48f3e169ccb553ea237901902994c685ff28e3e6b96f9d2018ac795555f5104
12/54 02f15fd9714b845e852df64dc014afca416c0c449767d3f04e597dd2c38195fc
8/54 a46b8a3e78e0103a50b7d07df02123e485f65070d39912bf5e3f231037e826c2
1/54 4b84963b7be73176d3df04b6a96c65bcd9982de1e71ceba879a8227681baac52
1/54 29a6e5fd43402f7ab9e1c2fd782524ecd6d9b2eeef84227b35b6ab7f6e312480
1/54 612dbd56d7bc19919a3a89a45377066883a074fce9de7847fdeff04e88564f95
1/54 020d79d8b6f89b6f602917cda563cd3ebe39fdf0e70a07dd222e0a976d9b9bfb
1/54 5634ac6be67187c5d6e8b5c03ed89944ecb06bf32d705db2e11cfc7c0191fadf
8/54 9aacf3df6b5227078f1b2d2da1077d59898f6e28a038e78508406ba1da9e2d97
2/53 1bd433fcc02007616709e06c4bd14ee252eaa16b945a5f3221fed0513ae2dc94
6/53 03cdfffebcb2a5861f19fb4cbeb64b522d141ec5979f438a424d004ca2bc3224
9/54 6fd94fe238e8b4187090cdc5436f483d62f898290c305f75ec7e4d92b26b0156
5/54 b8809fe358726bd7b5a0e87d93ef8fc4ba2fec9d2fdd300bd048bc684cc8dbf5
5/54 00de56d51b7be49f42126687862661f454b4141088e1193770f4bcd32e5807ef
3/54 cb20843d2ad8679739e62d020572b3ef77d69dee10a1e26a2fd426d468ebfbe7
2/54 981dffac632f107320b2df9b092833d9ce2e421e5e4731b940e5d58ff8d1007a
1/54 1352c216f06dd81768af0e8b0a99ab222465917354acd81381890f81aeea347c
6/54 c35b56bc5eea3698f0b9f37c04db8f063f02c16261b9a005500d03dad6d4a11d
2/53 885e69834195bcce39b279d4f401d51095c3dec22b6be85a6987b986dffebeb7
1/54 42c4a94bb75b0b5576e1d086f823f0e08a279d38c1b3edc537670540a802342d
5/54 bc14862f14f8dd5f6a05e19eefa21a51bbe5b5b3acb1ca610dfa783b1a2e5fdb
3/54 de4a6b5a38099888c1888c0b314fa980b1229e6f3783e65bb9d8f9c6937bc236
2/53 be7b1e4eadbba0986179efdd8e82c87d90b226cf006991d4ca3f2eab86f1bc47
5/54 5938037f046420fbaddda64e82fb370b165796ce13e52a1ca2ea7595bac655d9
5/53 a45f52e586c2e6e5700b5db2bf34fd5bf8a698367c881af2b08dac10bc146d35
1/53 92190daeff3c7d84a9038abcba3cdbebbf087d0cdc5c5415adddbb448cfbd49a
5/53 ae90488a1702b4149755432bd19865b37bb43d9bbd881c468a46e7b958405814
2/53 6243230c2fa78ef681d425c5a542cba1111f162c0b758dd77ac9b90d45fa7ac8
4/53 3fded5ee5de03c5d55226b1c141fed9997b2c1af71da23446ae6d68b8ff41ab9
5/53 d047016811cd780dfd5b22eef5c1f37f7eec836ebd909a25e63eb6b7a1dce3b0
2/53 0307b4f42bf05f5acf89f33aa0a3d3437a7fe50b26ebaea1fc4c4ee77f2ab73d
6/53 072807a177b1150e34f2a8c09a78ed5710bc05414d3614b11b44b9a488e4e6eb
4/53 8d2795552f37adeb62a9891948346b8ef9c2a9321d77223dd43a7c108186cd94
5/51 80203119554176f7c911bc79046d9de5596ee7e7c6abdca4c0ceaa7c391066ce
10/55 41f2899c34776ca8bba876e6e0f0874d9a333a4340dbaf5eef31d5fff939cbe5
12/55 8b5ef37d01165498859dd83f752a6e8cba514ace82b1477e58b0d1fa374e3891
1/55 205aae8464fc05c60c26d8711a782ee456dc3bed2da5ae93535462ae459de674
1/55 12bf51608bd95c8cc948129c7fbb9679eb917ea7bc04f49dd3dd9108c481feb8
5/54 22b8e6c5d308247c1a6300e5f53468faceee1bac87cde53925bd8d324ca188fa
3/55 b741a4dcf2bc703459652e8f5c5c4344dc7506564b8924dacfa2c4804a2da96a
21/55 5e5bf94f225bb7b4d12f6a6416ee078ef69806aa7d6d3c79d295da61b1762bd9
20/55 06334b6500d90d283d36cb1eebd3bb2ef02b37f439d3cd4ed6ab2195ac5764f3
by Pattern_Juggled
Fri Mar 06, 2015 2:27 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

CryptoAPI2, CAB, & ctldl.windowsupdate.com

This additional information regarding the authroot.stl issue has been generously provided by @wneessen (and is echoed over from pastebin):
  • - CryptoAPI2 fetches an MS-signed CAB file from ctldl.windowsupdate.com (Akamai hosted)

    - CryptoAPI2 extracts the CAB and checks the signature. The CAB file holds a list of authorized CAs/root certs that Windows will allow auto-fetching/updating for

    - If you open a website that Windows doesn't have a valid root cert for, that CA/root cert will be looked up in the list (which is cached locally, as far as I understood)

    - If the CA/root cert is in that list, CryptoAPI2 will fetch that root certificate via http:// (yes, http not https) from ctldl.windowsupdate.com; the exact URL looks like this:

    Code:

    http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/<SKI of root cert>.crt
    - If the DL is corrupt or times out (5 secs. or so), nothing happens and the process is not repeated unless you restart your browser and open that website again

    - If the DL succeeds, some validation mechanism checks the SKI and fingerprint of the certificate (I wasn't able to figure out what exactly happens, but I couldn't just present a different root certificate - Windows wouldn't accept it)

    - If validation succeeds, the root cert is installed into the local trusted store

The process can be blocked either by disabling it via GPO (on Windows 8, via a registry entry) or by pointing DNS for ctldl.windowsupdate.com at 127.0.0.1 / otherwise blocking requests to ctldl.windowsupdate.com
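To make the fetch step above concrete, here's a tiny sketch of how that download URL would be assembled from a root cert's Subject Key Identifier. The normalisation (stripping separators, uppercasing) and the example SKI value are assumptions for illustration - this is not Windows' actual code, just the URL pattern @wneessen documented:

```python
# Illustrative sketch only: build the plain-http URL that CryptoAPI2 is
# described above as fetching root certs from. The SKI normalisation
# (strip ':' and spaces, uppercase) is an assumption, not verified behaviour.

BASE = "http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en"

def cert_fetch_url(ski_hex: str) -> str:
    """Return the download URL for a root cert, given its SKI as hex."""
    ski = ski_hex.replace(":", "").replace(" ", "").upper()
    return f"{BASE}/{ski}.crt"

# Example with a made-up SKI; note the http:// scheme - no TLS on the wire.
print(cert_fetch_url("4e:0b:ef:1a:a4:40:5b:a5:17:69:87:30:ca:34:68:43"))
```

Anything fetched this way is leaning entirely on the signature-validation step for its integrity, since the transport itself offers none.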
by Pattern_Juggled
Fri Mar 06, 2015 2:01 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: Decentralised Attestation: cryptostorm's #CAfree framework for legitimate cert-based https & tls security
Replies: 9
Views: 57097

Dr Green: "tunnel traffic through some alternative (secure) protocol..."

Following up on this comment from yesterday:
Pattern_Juggled wrote:...with access to cryptostorm, as one example, one can often simply redirect sessions a different pathway to avoid the badness.
I ran into a convergent explanation of this solution path from Dr. Green this morning:
One option for Google is to find a way to deal with these issues systemically -- that is, provide an option for their browser to tunnel traffic through some alternative (secure) protocol to a proxy, where it can then go securely to its location without being molested by Superfish attackers of any flavor. This would obviously require consent by the user -- nobody wants their traffic being routed through Google otherwise. But it's at least technically feasible.
This works much better via cryptostorm than with Google attempting browser-based encapsulation - we don't need to move up to those OSI layers to address the problem, but rather are continuously moving HTTPS traffic through cryptostorm's network transit fabric the entire time.

(note that, yes, this doesn't solve the problem of hideously-subverted browsers or rootkits on member computers... I do not think there's any network-level mechanism that can do much to help in the event a member is pwned at root on their local machine)

As usual, Dr. Green's writing is much better than mine!

Cheers,

~ pj
by Pattern_Juggled
Fri Mar 06, 2015 3:28 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: Decentralised Attestation: cryptostorm's #CAfree framework for legitimate cert-based https & tls security
Replies: 9
Views: 57097

Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

One more quick little note-let...

This can work, and work with minimal drama. I know this is true because my PoC for it has been a manual process of doing gut checks of connections to websites, for the last month or so. One can often, after a bit of practice, spot problems as they happen - and with access to cryptostorm, as one example, one can often simply redirect sessions a different pathway to avoid the badness.

If that can be done with meatspace implements, it can be done better and more efficiently with a bit of scripting and the benefits of the blockchain & meta-networks. That's the ground-up approach I've taken to proofing the implementation capability. The rest is simply fine-tuning and improving efficiency...

Cheers,

~ pj
by Pattern_Juggled
Fri Mar 06, 2015 3:24 am
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: Decentralised Attestation: cryptostorm's #CAfree framework for legitimate cert-based https & tls security
Replies: 9
Views: 57097

Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

Guest wrote:How can topological routing be verified via tor/i2p pki unless 'janet' is running on tor/i2p? as I understand it- tor/i2p pki only verifies/validates routing within tor/i2p- once traffic exits to clearnet it's back to square one, vulnerability wise. or do you mean just the cert (err fingerprint?) to janet is validated via tor/i2p/blockchain somehow, and checked for consensus? Wouldn't Tor/ip2 exit nodes be a prime candidate for exactly the kind of interception your trying to avoid- ie, if most tor/i2p nodes are targeted for interception then the consensus itself might be wrong? In any case, what happens when there's a legitimate cert change- how is that transition handled?
Short reply now; more later. If one does the namecoind query inside one of the meta-networks, it becomes exponentially more difficult to reliably inject altered results into the process. The blockchain doesn't have a routing address - it replicates everywhere and can be written to or read from anywhere - so pinging it (or "it," as it's a bunch of copies of itself) can be done anywhere inside those networks.
You've done a great job explaining all the issues surrounding these standard outdated clusterfuck "security" systems- just when it comes to the proposed solution/implementation and the nuts and bolts of how it works- I'm lost; I'm either too ignorant of the underlying tech, and/or there's not enough info here to understand what you're implementing. Could you please explain more clearly on the fundamentals of how this new system works?
The fault is in my very skim-level presentation of the structure of a technical plan for how to do this. It felt like the essay would become even more over-bloated were that grafted on, so I've spawned that off to handle separately - not as a "here's the answer" but rather a "here's my model, let's kick it around & refine it down to the most elegant version of itself."

There are some things that won't translate well - although they are generally the things that are least in need of additional bulwark, in my way of seeing things. Ephemeral, multi-layered, centrally-administered CDNs don't (initially at least) translate well. Same goes for fast-flux-ish iterative domain::IP mappings - that stuff is designed to be fleeting and easy to change, and that's not really the core of what is most broken.

What is broken tends to be the "there's a server, it's got fairly stable IPs associated with it, I need to know I can spin up a good https session with it and not have a bunch of nasties bum-rushing the process every hop along the way" scenario.

That problem can be solved.
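As a rough illustration of the namecoind-style lookup mentioned above: the record layout and fingerprint values below are invented for the sketch (this is not an actual Namecoin schema), but the comparison logic is the whole trick - the pinned fingerprint comes from a blockchain record that replicates everywhere, the observed one comes off the wire.

```python
# Hypothetical sketch: verify an observed TLS fingerprint against one pinned
# in a blockchain-published identity record. The JSON structure and the
# fingerprints are made up for illustration - not a real Namecoin schema.

import json

# What a parsed blockchain identity record might look like:
NAME_RECORD = json.loads("""
{
  "name": "d/janetphysicsconsulting",
  "value": {"tls_sha256": "ab12cd34ef56ab12cd34ef56ab12cd34ef56ab12cd34ef56ab12cd34ef56ab12"}
}
""")

def fingerprint_matches(record: dict, observed_sha256: str) -> bool:
    """Compare the fingerprint seen on the wire to the one in the record."""
    pinned = record["value"]["tls_sha256"].lower()
    return pinned == observed_sha256.lower()

# A matching cert passes; a swapped-in MiTM cert does not:
print(fingerprint_matches(NAME_RECORD, "AB12CD34EF56AB12CD34EF56AB12CD34EF56AB12CD34EF56AB12CD34EF56AB12"))
print(fingerprint_matches(NAME_RECORD, "00" * 32))
```

The hard part, of course, is not this comparison but making the record lookup itself injection-resistant - which is exactly what doing the query inside a meta-network buys.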

Cheers,

~ pj
by Pattern_Juggled
Thu Mar 05, 2015 9:02 pm
Forum: cryptostorm reborn: voodoo networking, stormtokens, PostVPN exotic netsecurity
Topic: Decentralised Attestation: cryptostorm's #CAfree framework for legitimate cert-based https & tls security
Replies: 9
Views: 57097

Decentralised Attestation: cryptostorm's #CAfree framework for legitimate cert-based https & tls security

{direct link: cryptostorm.ch/cafree}


edit: framework name revised from 'root2root' to 'Decentralised Attestation' because, well, DA sucks a lot less :-)

"There are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, "Morning, boys, how's the water?" And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, "What the hell is water?" "

~ David Foster Wallace
In the 18 months since the first round of Snowden's disclosures began, it's been my pleasure to watch from the inside as cryptostorm has evolved from a starry-eyed vision of a "post-Snowden" VPN service into a globally deployed, well-administered, high-profile leader in the network security service market. That kind of a transition in such a short period of time can leave one with a sort of future-shock: the phases blur until the only real phase is one of transition. It's exciting, challenging, exhausting, exhilarating, and fascinating all at once.

One faces the very real risk of myopic blindness, a loss of situational awareness, when one becomes accustomed to living inside such a red-shifted existence: the world outside the bubble can come to seem distant, slow, and less relevant to the local frame of reference every day. That can make for exceptional focus on tactical obligations - I think cryptostorm's excellent record in deploying innovative tools quickly and consistently speaks to the value such a focus can bring - but it can also lead to a form of brittle ignorance of the flow of macro events.

In the past month or so, largely because my operational duties on the team are relatively small (hence the luxury I enjoy of being able to post here in the forum more than most anyone else on the team), I've been able to step back from some of the red-shifted intensity of cryptostorm's internal ecosystem, and consider not only the trajectory we've been on since Snowden but also the trajectory leading forward from here.

Which all sounds awfully boring and maudlin, admittedly, so let's move along to the interesting stuff, eh?

In summing up what we do, I'd say the core of cryptostorm's mission is providing a layer of genuine security around the data our members send back and forth to online resources. That layer isn't end to end, but it does protect against local-zone snooping and it provides a ubiquitous level of identity decoupling - "anonymity" - for most all routine online activities. That, in turn, frees our members from a constant fear of having the ugly snarling bits of the internet come back down the pipeline and appear on their physical front door (amongst other fears allayed). And although there are unquestionably areas in that remit where we can continue to improve - and must continue to improve - in general I'd say (with all humility) that we're pretty good at that job. That's a good thing to say; it reflects quite a bit of wisdom, experience, expertise, creativity, and bloody hard work on the part of the whole team... plus enormous support from our close colleagues and the larger community along the way.

So: yay.

But: now what?

Do we continue to iteratively improve our core "data in transit" remit as we move forward, keeping that as unitary focus? Or... is there something else that's sitting on the edge of our peripheral vision, only waiting for us to recognise it? Yes, the latter.

No need to bore you with the etiological summary of how these obvious-in-hindsight revelations have come to us as a team in recent months (there's equal bits of webRTC, torstorm, deepDNS, komodia, fishycerts, torsploit, superfish, and more fishycerts mixed in with who knows how much else); let's simply lay out some facts we've been fortunate enough to see staring us in the face, as a result:
  • 1. Data-in-transit is one part of a larger challenge our members face in staying safe and secure online, in general

    2. Doing our small part of that work really well is helpful and important... but leaves many other areas uncomfortably exposed

    3. Most of those areas are not part of our core expertise as a team... but a few, somewhat obviously, are.

    4. Of all the areas of uncomfortable exposure beyond the confines of cryptostorm's network edges, "secure" web browsing via https is unquestionably the most badly broken, most widely used, and most complex-to-mitigate security problem our members face online in their day to day activities.
Simply put, https is badly broken. Or, no that's not quite right... how about this: the cryptographic foundations of https (which are, after all, TLS) are reasonably strong and reasonably reliable even in the face of strong attack vectors. But, the model of ensuring integrity of identity upon which https is built is designed to be unreliable, opaque, inconsistently secure, and open to whole classes of successful exploitation and attack. That design means that the cryptographic solidity of https is essentially fully undermined by the horrific insecurity of centralised identity verification that exists in the form of the "CA model" (as it's generally known). Certificate Authorities - CAs - act as "root" guarantors of identity within the CA model, and these CAs (in theory) are the foundation on which confidence in network session integrity is built.

Only that's not how any of it actually works.

I'm not even going to attempt to summarise how this all came to be, nor how it actually plays out at a systems-theoretical or technological level. Many brilliant people have written on those subjects far more effectively than I ever will, and I encourage anyone interested in these matters to read those writings rather than wasting time reading any attempt of mine. But, while I may not have the ability to articulate the CA model in all its gruesomely convoluted, counter-intuitive, opaque hideousness... I do know how it works as an insider and a specialist in this field. I know it from years of frontline engagement, elbows-deep in x.509 syntax & CRL policies & countless complex details stacked in teetering layers.

I also know it as someone whose professional obligation is ensuring that our members are secure in the context of this insecure CA model... which is to say, as someone tasked with making something work that's designed not to work. Because, yes, the CA model is designed to be insecure, and unreliable, and opaque, and subject to many methods of subversion. This is intrinsic in its centralised structure; indeed, it's the raison d'être of that structure itself. What exists today is a system that guarantees the identity of both sides of an "end to end" https network connection... except when the system decides to bait-and-switch one side out for an attacker... if that attacker has the leverage, resources, or connections to have access to that capability.

The CA model also puts browser projects - Chromium, Mozilla, etc. - in the role of guardians of identity integrity, through their control over who gets in (and stays in) the "trust store" of root certs held (or recognised) by the client's browser. But of course browser vendors are in fact advertising businesses and they make their daily bread on the basis of broad coverage, broad usage, and no ruffled feathers... they are the last entities in the world with any incentive to be shutting down root certs if a CA is compromised in a way that can't be easily swept under the rug. So the browser vendors loathe the role of CRL guardians, and basically don't do it. Which means every root cert out there today is going to stay "trusted" in browsers, more or less, irrespective of whether there's any actual trust in the integrity of their vouching, or not.

Editing in [6 March] a relevant summation of this dynamic from Dr. Green. Here, he's speaking in reference to Superfish - an acknowledged distribution of a badly-broken (unauthorised - though the question of in what context a root cert can be called "authorised" quickly becomes one of ontology) root certificate and private key in the wild:
The obvious solution to fixing things at the Browser level is to have Chrome and/or Mozilla push out an update to their browsers that simply revokes the Superfish certificate. There's plenty of precedent for that, and since the private key is now out in the world, anyone can use it to build their own interception proxy. Sadly, this won't work! If Google does this, they'll instantly break every Lenovo laptop with Superfish still installed and running. That's not nice, or smart business for Google.
Not smart business for Google, indeed. This makes a mockery of the entire concept of "revocation lists" - which actually become "lists of stuff Google et al may or may not revoke, depending on their own business interests at the time... and any political pressure they receive behind the scenes" rather than any kind of objective process (not picking on Google here; indeed all appearances are that they're the least-bad of the lot).

One more aside on CRLs: they're accessed via plaintext http by just about every root certificate I've ever looked at myself. Let me repeat that, in boldface: certificate revocation lists - the lists used to revoke bunk certificates - are served over plaintext http sessions, at URLs hard-coded into the certificates themselves. Really. They are.

Here is a specific example, from the cert we all love to hate, namely StartCom's 30 year 4096 SHA1 '3e2b' root:
X509v3 CRL Distribution Points:

Full Name:
URI:http://cert.startcom.org/sfsca-crl.crl

Full Name:
URI:http://crl.startcom.org/sfsca-crl.crl
Can't imagine any problems with that, can you? I'm hardly the first person to notice this as "an issue," nor will I be the last - it's another example of structural weakness that enables those with central hegemonic authority to bend the system arbitrarily as they desire in the short term, while retaining the appearance of a "secure" infrastructure in the public mind.

After some posts about this in our twitter feed recently, @stribika let us know that this is an intentional design decision:
Publishing them over HTTPS wouldn't fix it because the cert is assumed to be good on CRL download failure.
Good point, but one can see how this spins quickly into a recursive pantomime of any legitimate sort of CRL-based assurance of root cert integrity.
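For anyone who wants to eyeball their own trust store, here's a quick sketch that scans the text dump of a certificate (as produced by `openssl x509 -text -noout`) for CRL Distribution Point URIs and flags the plaintext ones. The sample text is the StartCom excerpt quoted above:

```python
# Sketch: find CRL Distribution Point URIs in an openssl text dump and
# report any served over plaintext http. SAMPLE is the StartCom '3e2b'
# root excerpt discussed above.

import re

SAMPLE = """\
X509v3 CRL Distribution Points:
    Full Name:
      URI:http://cert.startcom.org/sfsca-crl.crl
    Full Name:
      URI:http://crl.startcom.org/sfsca-crl.crl
"""

def plaintext_crl_uris(cert_text: str) -> list:
    """Return every CRL URI in the dump that uses http:// rather than https://."""
    uris = re.findall(r"URI:(\S+)", cert_text)
    return [u for u in uris if u.startswith("http://")]

for uri in plaintext_crl_uris(SAMPLE):
    print("plaintext CRL endpoint:", uri)
```

As @stribika noted, moving these to https wouldn't fix the deeper problem - soft-fail on CRL download failure - but the scan makes the scale of plaintext distribution easy to see for yourself.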

~ ~ ~

From here, I could launch in to a foam-speckled summation of the DigiNotar Hack of 2011, now, to illustrate all these points. But I won't... or if I do, I'll do it in a separate thread so the foam doesn't speck over this one too much. But, yes... DigiNotar. One image to emphasise:
NSAdiginotar.png
The CA model serves the purpose (in structural functionalist terms) of giving the appearance of reliable identity validation to the majority of nontechnical users who see the green padlock in their web browser and think "secure," while simultaneously ensuring that the door to subversion of that security is always and forever available to those with enough access to central political power to make use of it. So: if you're Microsoft and you really want to, of course you can break any https session because you can sign root certs - short term - that browsers will swallow whole-cloth, and MiTM your way to plaintext. Same for the NSA and other such spooky entities, of course. If you do it too much, too broadly, someone might notice (certificate transparency at least might do this, sometimes, maybe)... but if they do what of it? There will be a half-baked story about a "hacker" with a ski mask on, etc... no root certs pulled from trust stores, no big heat, really not much hassle at all. Give it a bit to die down, and come right back to the trough.
IMG_20141124_190757.jpg
This is not a "failed CA model." It's the exact requirements the CA model fills. Those who seek to "fix" the CA model are trying to fix something that's doing exactly what it's supposed to do for those who make the macro decisions about how it will be managed. To say such efforts are hopeless is actually giving them more chance of success than they have. They are sink-holes for naive enthusiasm, able to sop up technological radicalism in unlimited volumes... eating entire professional lives of smart and eager activists, leaving nothing behind but impenetrable whitepapers and increasing intake of alcohol over time.

But I digress.

This all became crystal clear to many people - and was re-emphasised for those of us who already knew - via the Superfish debacle. And, personally, as I dug into that research topic, I started seeing more and more evidence of how deeply subverted the CA model is - and is designed to be. I could send many bits of foam flying talking about bunk certs and hijacked hostnames and DNS caching evils, and on and on...

I could also spend months or years documenting all that, and eventually add that pile of documentation to the mountains already in existence - more landfill fodder. But, to be blunt, I'm interested in addressing the issue - not in writing about it. I know enough firsthand to know without a quantum of uncertainty that https is unreliable as a secure transport mechanism today. That's enough - it's enough for me to move forward, knowing the facts on the ground as they exist today.

It'd be easy to say that https isn't cryptostorm's job. And it'd be basically true, in historical terms. We route packets, and if those packets carry https sessions that are themselves subverted by cert fuckery... well that's not our problem. Members should be more careful (how?), and besides we can't fix it anyhow. Well, we've debated this as a team quite a bit in recent months. I can't say we have complete consensus, to be honest... but I do feel we've got a preponderance of support for the effort I'm describing here.

Simply put, we're expanding the protection offered to on-cstorm members: we're tackling the problem of broken https at the cryptostorm level, and while we won't be able to nullify that attack surface in one step, we're already able to narrow it considerably, and our mitigation from there has ongoing room to move asymptotically towards zero viable attacks on https identity. We've started calling this mechanism for credible identity validation for https sessions "root-to-root" identity authority, as opposed to the Certificate Authority model out there today. Root-to-root doesn't replace the CA model, nor is it in a "battle" with it; it subsumes it, in a sense, in a simpler wrapping of non-mediated identity validation.

In short, we're shifting the Authority of the Certificate Authority model back to individual network members... they're the real "root authorities" in a non-compromised model, and thus root-to-root sessions are the way to ensure the model meets their needs.

~ ~ ~

Implementing r2r for on-cstorm sessions requires us to be clear about what problem we're seeking to solve. That problem - verifying identity online - is actually composed of two distinct, but deeply intertwined, sub-problems. Those problems, or questions, are...
  • 1. How can I be sure that an entity I already know is the same entity, over time, and not some other entity pretending to be they in order to gain access to communications intended for the real one?

    2. How can I be sure that when I engage in network-routed communications with a particular entity, those discussions go to that entity rather than being surreptitiously redirected through a fake transit point masquerading as that entity?
The second of these problems we usually refer to as MiTM, and the first is why we have things like PGP key signing parties. In technical terms, the first one has been considerably narrowed through the unreasonable effectiveness of public-key cryptography. It is still, however, plagued by the problem of in-band "oracular router" subversion of public key identity validators. Simply put, if an attacker can undermine the ability of those communicating to have confidence in getting public keys from each other, the effectiveness of asymmetric crypto technology drops to near zero in practical terms.

The second problem - "how can I have confidence that the network entity I am talking to is the same as the "real" entity I want to talk to?" - is presently tackled by a mongrel mix of DNS and CA model centralisation... which is to say, it's got two famously complex and insecure systems entwined in an ugly fail-dance ensuring that there's no way in hell anyone can be 100% sure - or even 95% sure - that the two systems together give a reliable answer to the question of whether I'm sending packets to "Janet" at a network (logical) address that is actually controlled by Janet. Usually, my packets will get to Janet... except when they don't. And I'll most likely never know if they don't get there, because an attacker with access to the skeleton keys of DNS and/or CA credentials can do so invisibly. I never know when I'm being screwed, nor does Janet. This uncertainty serves central power just fine.

The second problem emerges from the ontological roots of routed networking: the divergence between physical and logical network topology, as well as the distribution and dynamic evolution of "connectome"-level entity-relationship information embedded in those model layers. The first problem, in contrast, is simply a by-product of remote communications for a species of mammal evolved to know each other in physical terms, not as amorphous, disembodied conceptual categories.

Both problems must be solved, concurrently and robustly, if we are to have easy and consistent confidence that when we visit https://janetphysicsconsulting.org we are sending our latest experimental data to "the real Janet" rather than someone pretending to be Janet, and that those data are being routed to an endpoint controlled by Janet rather than some sneaky GiTM along the way...
Currently, to send those data to Janet's website with confidence they'll arrive unmolested in Janet's custody, we need two assurances: that the hostname "janetphysicsconsulting.org" will translate into instructions for our data to go to Janet's computer (DNS resolution and routing table integrity), and that janetphysicsconsulting.org is actually controlled by Janet and not some imposter pretending to be Janet (the TLD registrar system of authoritative nameservers, etc.). If either - or both - of those assurances fail, then no amount of clever crypto will prevent our data from getting fondled in a most unseemly way.
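A deliberately stripped-down model of that first assurance makes the failure mode visible (hostnames and addresses below are illustrative; the IPs come from the RFC 5737 documentation ranges): from the client's point of view, a poisoned resolver "succeeds" exactly like an honest one.

```python
# Toy model of the DNS-resolution trust dependency.
# 203.0.113.7 stands in for Janet's actual host;
# 198.51.100.9 stands in for an attacker-controlled host.
honest_resolver = {"janetphysicsconsulting.org": "203.0.113.7"}
poisoned_resolver = {"janetphysicsconsulting.org": "198.51.100.9"}

def resolve_and_route(resolver: dict, hostname: str) -> str:
    """Stand-in for DNS resolution plus routing: returns where the
    client's packets will actually be sent. Note that both tables
    return a perfectly plausible answer -- the client cannot tell,
    at this layer, which one it consulted."""
    return resolver[hostname]
```

That indistinguishability is the whole point: nothing in the lookup itself signals that the answer came from the skeleton-key holder rather than from Janet's legitimate records.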

That's the problem, in a nutshell.

The solution, most emphatically, is not to continue incrementally refining the CA model, or (merely) encrypting DNS queries. Each of those has its uses - indeed, we're supporters of DNS query security ourselves - but neither can substitute for systems-level alternative mechanisms for solving this problem. I'm repeating this point over and over, because until we accept that reality, we're self-precluded from ever seeing our way forward. Like the fish in the sea who never imagined the concept of "sea," we're swimming in waters of which we remain pathetically unaware.

We're in the water, all of us. We must see that, before we can even talk about what that means.

~ ~ ~

Oh, right, I'd mentioned something about cryptostorm solving these intertwined problems of network identity for folks on-cstorm, hadn't I? A quick sketch, so as to leave room for more technical exposition once we've rolled out a tangible proof-of-concept in the form of r2r-verified connections to cryptostorm itself (which should be done in a day or so... we'd scheduled that earlier, but pushed STUNnion up the queue given its serious opsec implications).

There are two main components to our r2r framework: one addresses routing, and one addresses public fingerprint verification. Fortunately, both problems have already been essentially solved (in technical terms) via creative, vibrant technologies that were all but nonexistent a decade ago.

Verification of the integrity of publicly-published data is a problem fundamentally solved by blockchains. Consensus validation of chain-posted data works, and has proved robust against very strong attacks thus far. It is not perfectly implemented yet, and there are still hard problems to be tackled along the way. That said, if cryptostorm wants to post something publicly in a way that anyone can access with extremely high confidence both that it was posted by cryptostorm and that it has not been modified since, blockchains work. Whether through pleasant frontends such as those offered by keybase.io or onename.io (as a class), or via direct-to-blockchain commit, this system gets data pushed into a place from which it is nearly impossible to censor, and in which it is nearly impossible to modify ex-post. This works.
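Why chain-posted data is so hard to modify ex-post can be illustrated with a minimal hash chain - a deliberately stripped-down sketch, with none of the consensus machinery a real blockchain adds on top: each entry commits to the hash of its predecessor, so altering any past entry breaks every link after it.

```python
import hashlib

GENESIS = "0" * 64

def link_hash(prev_hash: str, payload: str) -> str:
    """Hash binding an entry to everything that came before it."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    """Append-only chain: each entry stores its predecessor's hash."""
    chain, prev = [], GENESIS
    for payload in payloads:
        entry = {"payload": payload, "prev": prev,
                 "hash": link_hash(prev, payload)}
        chain.append(entry)
        prev = entry["hash"]
    return chain

def chain_intact(chain) -> bool:
    """Recompute every link; any ex-post edit to an earlier entry
    invalidates all links from that point forward."""
    prev = GENESIS
    for entry in chain:
        if entry["prev"] != prev or link_hash(prev, entry["payload"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

In a real deployment the "payloads" would be things like published key fingerprints, and the consensus layer is what prevents an attacker from simply rebuilding the whole chain from the tampered entry onward - this toy only shows the tamper-evidence half of the story.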

Successful routing of data across an unfriendly network substrate - with exceedingly high confidence that those data are not being topologically hijacked mid-stream, and that the endpoint to which the data were directed at the initiation of route setup is in fact the endpoint at which they arrive (and the reverse) - has been solved by the meta-network technologies of Tor and i2p (a form of convergent evolution across disparate architectures). Both mate packet transit with asymmetric cryptographic verification of bit-level data and route trajectory, and both work. An oracular attacker sitting on mid-route infrastructure can of course kill routing entirely by downing the network itself, but no practical or theoretical attacks give such an attacker oracular route-determination control over such sessions. These tools, also, work.
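The core idea both systems share - layered, per-hop encryption - can be sketched in miniature. This toy uses a SHA-256-derived XOR keystream as a stand-in for the real per-hop ciphers, and resembles nothing of Tor's actual cell format; it only shows why no single mid-route relay can both read the payload and see the full path: each relay holds exactly one key and can peel exactly one layer.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic toy keystream derived from a per-hop key."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply (or remove -- XOR is its own inverse) one encryption layer."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(payload: bytes, hop_keys):
    """Sender wraps once per relay: exit's layer innermost, entry's outermost."""
    for key in reversed(hop_keys):
        payload = xor_layer(payload, key)
    return payload

def unwrap(cell: bytes, hop_keys):
    """Each relay in path order peels exactly one layer."""
    for key in hop_keys:
        cell = xor_layer(cell, key)
    return cell
```

After peeling only its own layer, the entry relay still sees ciphertext - it learns the next hop, never the payload or the endpoint pair. That asymmetry is what denies a mid-route attacker oracular control over the session.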

With those two technical primitives in place, the challenge of enabling confidence in our visit to Janet's website is fundamentally met. We can verify that Janet is Janet, publicly and reliably, via blockchain commit... and we can ensure that the essential components of this process are routed reliably through the use of meta-topological tools from either Tor or i2p. Simply put, we can do blockchain lookups via topologically hardened Tor/i2p routing constructs that allow us to establish reliably secure connectivity with Janet's website. Once we have that session instantiated, in cryptographic terms we are in good shape: TLS works, once it's up and running, and we need not try to restructure TLS to fix the problem of route/identity validation and integrity assurance.

Rather, we graft exogenous tools - themselves well-proven in the field, though somewhat at a remove from "mainstream" https currently - atop the existing strengths of https. Further, this approach generalises to non-https network encryption. Once the extra superstructure is in place to bulwark against the structurally implicit weaknesses of the CA, DNS, and TLD-nameserver systems, there are no intrinsic bounds on how far it can be extended.

~ ~ ~

We're making no fundamentally new tech, at cryptostorm, in order to bring r2r to life. The tools are there, because creative and dedicated individuals and teams have invested their passion and wisdom in bringing them to life. We're using components from the DNSchain team, from the entirety of the Tor Project's work, from Drs. Bernstein & Lange's c25519 breakthroughs, and from dozens of other brilliant technologists. We're just stacking those wonderful building blocks up in a way that enables something really useful for folks seeking secure network access, via cryptostorm.

The final piece of the puzzle is our deepDNS resolver/reply system, which has emerged from our earlier work to ensure integrity of DNS queries in the micro sense. With deepDNS, we are able to deploy "active masking" at the cstorm-network level - ensuring that privacy-relevant attack surfaces are minimised for folks on-cstorm.

Once we recognised the implicit capabilities of deepDNS - once we noticed that we're swimming in the water, as it were - the jump to r2r was all but inevitable. We are able to provide robust assurances of both data-in-transit integrity and routing-trajectory integrity for the on-cstorm leg of member network sessions... and that bootstraps all the rest. It's a layered, fluid, topologically heterogeneous meta-system that makes r2r possible. And it works.

So that's that. Despite this "too many words" essay, the deploy is somewhat trivial in practice. Once we've got tangible examples of this methodology in the field, we expect to find improvements, refinements, and extensions currently not obvious to us. And we hope others will take what we're able to do, and build in turn new capabilities and technologies we don't yet imagine ourselves.

Here's to those of us who are brash enough to worship at the altar of the cult of the done...

Cheers,

~ ðørkßöt


ps: down with fishycerts! :-P
by Pattern_Juggled
Tue Mar 03, 2015 8:12 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: www.download.windowsupdate.com & crl.verisign.com - ongoing research
Replies: 15
Views: 62596

www.download.windowsupdate.com & crl.verisign.com - ongoing research

{direct link: cryptostorm.ch/strangeness}
{this thread has been split from the Kebrum analytics thread, to improve access and clarity of organization ~admin}


Here's some unpolished data relating to an odd file format I found during this analysis:

The file in question is authroot.stl

Here's one of the few references I found on this format:
Certificate Trust List (.stl)

A Certificate Trust List is generally used during the SSL/TLS handshake when Client Certificate Authentication comes into the picture. During the handshake, the server sends the client the list of distinguished CA names it supports as part of the Server Hello message. The client uses this list to draw up the set of client certificates it can offer - i.e., only those client certificates issued by a CA named in the CTL will be populated. Below is an example of what a CTL looks like:
[image: certificatetrustlist.png - example Certificate Trust List]
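The behavior that quote describes - the CTL constraining which client certificates ever get offered - reduces to a simple filter. A toy sketch with hypothetical issuer names (this does not parse the .stl format itself):

```python
def eligible_client_certs(client_certs, ctl_ca_names):
    """Return only the client certificates issued by a CA named in the
    server's Certificate Trust List; all others are never offered
    during the handshake."""
    allowed = set(ctl_ca_names)
    return [cert for cert in client_certs if cert["issuer"] in allowed]
```

Which is part of why the file size noted below seems curious: a CTL is, functionally, just a list of trusted issuer identities.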

Here's the file that came out of the unpacking of binaries:
kebrum.zip (112.74 KiB)

The pre-compressed version is nearly 140kB long. Dunno, I'm no expert but that seems a bit big given the usage framework for which .stl is designed...

Cheers,

~ pj