Search found 92 matches

by cryptostorm_ops
Tue Jul 14, 2015 1:11 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: cleanvpn.org/HideMyAss - raw data - #cleanVPN, or not?
Replies: 9
Views: 54397

HMA: Don't miss 50% off in our Summer Special!

Return-path: <bounces+914316-b587-lulz=cryptokens.ca@email.hidemyass.com>
Envelope-to: lulz@cryptokens.ca
Delivery-date: Mon, 13 Jul 2015 20:15:25 +0300
Received: from o1.email.hidemyass.com ([198.21.7.164]:29148)
by bafana.cryptostorm.net with esmtps (TLSv1.2:DHE-RSA-AES128-GCM-SHA256:128)
(Exim 4.85)
(envelope-from <bounces+914316-b587-lulz=cryptokens.ca@email.hidemyass.com>)
id 1ZEhK9-0000sX-3V
for lulz@cryptokens.ca; Mon, 13 Jul 2015 20:15:25 +0300
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=email.hidemyass.com;
h=content-type:mime-version:from:to:reply-to:subject:list-unsubscribe;
s=smtpapi; bh=tS0V6j5UMrJmUp7CqjwlcUYSiVI=; b=Pp74I8/3N51aJN5Dx5
7LDC2uZh5TyUwMcriP1rILtGlgVuZEqIe5FgjVl92jr0bZ9eSVupUr9v8s+/TbRP
UGb9G0iaV5IfTYFqnKTZW+Dvpm6LGkRCMnttB95J3TIdcnz861V1WLZhEU7Fi/M3
DaQbt4EAasPXX5JHFJXOl9Guo=
Received: by filter-337.sjc1.sendgrid.net with SMTP id filter-337.8036.55A3F207B
2015-07-13 17:14:47.370377484 +0000 UTC
Received: from OTE0MzE2 (unknown [10.42.83.122])
by ismtpd-078 (SG) with HTTP id 14e88696d89.7ab.977dd
for <hithere@hidemyass.com>; Mon, 13 Jul 2015 17:14:47 +0000 (UTC)
Content-Type: multipart/alternative;
boundary="===============0191143234228126760=="
MIME-Version: 1.0
From: HMA! Team <hithere@hidemyass.com>
To: lulz@cryptokens.ca
Reply-to: hithere@hidemyass.com
Subject: Don't miss 50% off in our Summer Special!
Message-ID: <14e88696d89.7ab.977dd@ismtpd-078>
Date: Mon, 13 Jul 2015 17:14:47 +0000 (UTC)
List-Unsubscribe: <http://email.hidemyass.com/wf/unsubscri ... dBfuBAb3pl>, <mailto:unsubscribe@email.hidemyass.com?subject=http://email.hidemyass.com/wf/unsubscri ... dBfuBAb3pl>
X-SG-EID: aXMS4AcJQ8qkYz963H8i6AT5igliZEUkQCK+Q9Ftz9vXweEF8mzE5NYcIjW8eYYTQ5oGqrAESX11v0
yK4UzL47v+tCTH8lSO1qKYAHheU4OU2OtRHN1W9cppXn9Ld3M9jGNUiUq6enpegRksMlfjnb7ov+Rb
W//EXf8e7LVaAkw=
X-SG-ID: VPWZYjw6GOzHdwkwPeoX9QiEbzQXX/gF9P8njHP5+LBmJ5dz+kbEZqGLY+HjsjSwVSI4yUI+iELeem
ZmXI8wAUXjE2lajr82WBjO+UwMLrnaQ+xfPovGrIU1OwSHLlnnc6z6piqBHvJaT7MINpIcXGeU2dx2
rrnd5eiRIscKbs2ulabvJ0bsLt6iIyhYrvkp

--===============0191143234228126760==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

Email not displaying correctly? View it ( ) in your browser

Hi twinklet0es,

The HideMyAss! Midsummer Night’s Special is bringing Dream Savings to thousands of VPN users – but you’ve still not got your 50% saving on HMA! Pro VPN!

These prices are for a limited time only, so take advantage of this great offer and order today! ( https://vpn.hidemyass.com/vpncontrol/myaccounts/directorder/3emV6BOM2A?utm_source=ss&utm_medium=email&utm_content=exp3&utm_campaign=sumspe )

( https://vpn.hidemyass.com/vpncontrol/my ... 6BOM2A?utm_source=ss&utm_medium=email&utm_content=exp3&utm_campaign=sumspe )

Have a magical summer!

The HMA! Team

To unsubscribe please click here ( http://email.hidemyass.com/wf/unsubscribe?upn=snAYkUZDhsEvWOVksOy-2BGHPOxu-2BjKdnhdGbpOWc4o5Z0hKAbWYp6WpVxAnZaSroGAwynnpMIsNI72eRRxLnFi6ohC1VkJBBVeKHynmNuo3h3tA6-2BCVcsxljMh0ojOwb929mepvfkhvzFNnqj0W-2Fe3hgkN5HjeM6dbxnilEVVB-2BsjZClrcjDjdaS62bTQf6kq2RDMpNmi0E7RHFtHzs63gnPsQitAxFtEc0IBt-2BJniKV7Mzt9AhbUSL6eK5iJiVoFzmYECY0ZbsBXgnRQB-2FW-2BWHmfrGeCXBO11uI-2FxgLW8zHAeY1Od7N91xDFpZh7IJVqHFajhWXjF4oZqatqijTMvQjD5v9VhakSKS1R3zn0rbTnD6Syw-2Fcv6zlV0uhuvMoA )

HMA! Team
7 Moor Street, London, W1

--===============0191143234228126760==
Content-Type: text/html; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

[text/html alternative part omitted: it renders the same copy as the plain-text part above as a nested-table layout, with every link wrapped in the email.hidemyass.com/wf/click redirector, the banner/button/footer images served from static.sendgrid.com/uploads/UID_914316_NL_8122912_..., and a 1x1 tracking pixel loaded from email.hidemyass.com/wf/open at the end of the message.]

--===============0191143234228126760==--
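
For anyone who wants to dig through the raw message above themselves, Python's standard-library email package will walk the headers and MIME parts and undo the quoted-printable encoding without any manual decoding. A minimal sketch, assuming the raw source has been saved locally as hma_raw.eml (the filename is just a placeholder):

    # Minimal sketch: dissect a saved copy of the raw HMA message using only
    # the Python standard library. "hma_raw.eml" is a placeholder filename.
    from email import policy
    from email.parser import BytesParser

    with open("hma_raw.eml", "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)

    # Delivery chain, topmost (most recent) hop first.
    for hop in msg.get_all("Received", []):
        print("Received:", " ".join(str(hop).split()))

    # DKIM signing domain/selector plus the SendGrid campaign headers.
    for name in ("DKIM-Signature", "X-SG-EID", "X-SG-ID", "List-Unsubscribe"):
        print(name, "->", msg.get(name))

    # The text/plain alternative, with quoted-printable already decoded.
    body = msg.get_body(preferencelist=("plain",))
    if body is not None:
        print(body.get_content())
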
by cryptostorm_ops
Fri Mar 06, 2015 7:23 pm
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: request for cyberghost review - #cleanVPN clearance?
Replies: 2
Views: 20800

post binaries!

It helps get things moving if copies of the binaries (i.e. the installers, which for Windows usually take the form randominstallername.exe) can be uploaded to a thread here or, if that's not possible, if at least a link to them is posted.

We notice some VPN companies do an amazingly good job of protecting their installers from public view. It's really impressive. :-)

cryptohippie & zorrovpn should get some kind of award for "most difficult to locate installer packages." That doesn't indicate evil intent, in and of itself... but it's certainly notable, just in general terms.
by cryptostorm_ops
Fri Mar 06, 2015 12:32 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: cryptostorm - #cleanVPN information disclosure & discussion of results
Replies: 0
Views: 21209

cryptostorm - #cleanVPN information disclosure & discussion of results

https://malwr.com/analysis/OTQ2YmZjMDg2 ... g3OTcxZTk/

https://www.virustotal.com/en/file/3212 ... 425580490/


Tested file, for reference:

MD5: 9520d5d320687f5fe162a11ed9dd9b29
SHA1: 9ede6494f03109f93022a6e145ececa4304f937f
SHA256: 32120b07be4675f5d88a562c78632f0bc027d4f0b010543aa0a65759e98ce171
setup_v2.22f.exe
(11.56 MiB) Downloaded 889 times
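
If you pull down the same installer, a few lines of Python are enough to confirm you are looking at the exact bytes analyzed above. A minimal sketch, assuming setup_v2.22f.exe sits in the current directory:

    # Minimal sketch: verify a downloaded copy against the SHA256 listed above.
    import hashlib

    EXPECTED_SHA256 = "32120b07be4675f5d88a562c78632f0bc027d4f0b010543aa0a65759e98ce171"

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = sha256_of("setup_v2.22f.exe")
    print("match" if digest == EXPECTED_SHA256 else "MISMATCH: " + digest)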


More data coming as we gather it. Of course, others are welcome to participate, as well.

~ cryptostorm
by cryptostorm_ops
Fri Jan 09, 2015 5:20 am
Forum: general chat, suggestions, industry news
Topic: full Snowden December 2014 document set - VPN & SSL/TLS crypto
Replies: 7
Views: 43639

Exclusive: Edward Snowden on Cyber Warfare

Full text, from the PBS website:
Last June, journalist James Bamford, who is working with NOVA on a new film about cyber warfare that will air in 2015, sat down with Snowden in a Moscow hotel room for a lengthy interview. In it, Snowden sheds light on the surprising frequency with which cyber attacks occur, their potential for destruction, and what, exactly, he believes is at stake as governments and rogue elements rush to exploit weaknesses found on the internet, one of the most complex systems ever built by humans. The following is an unedited transcript of their conversation.


James Bamford: Thanks very much for coming. I really appreciate this. And it’s really interesting—the very day we’re meeting with you, this article came out in The New York Times, seemed to be downplaying the potential damage, which they really seem to have hyped up in the original estimate. What did you think of this article today?

Edward Snowden: So this is really interesting. It’s the new NSA director saying that the alleged damage from the leaks was way overblown. Actually, let me do that again.

So this is really interesting. The NSA chief in this who replaced Keith Alexander, the former NSA director, is calling the alleged damage from the last year’s revelations to be much more insignificant than it was represented publicly over the last year. We were led to believe that the sky was going to fall, that the oceans were going to boil off, the atmosphere was going to ignite, the world would end as we know it. But what he’s saying is that it does not lead him to the conclusion that the sky is falling.

And that’s a significant departure from the claims of the former NSA director, Keith Alexander. And it’s sort of a pattern that we’ve seen where the only U.S. officials who claim that these revelations cause damage rather than serve the public good were the officials that were personally embarrassed by it. For example, the chairs of the oversight committees in Congress, the former NSA director himself.

But we also have, on the other hand, the officials on the White House’s independent review panels who said that these programs had never been shown to stop even a single imminent terrorist attack in the United States, and they had no value. So how could it be that these programs were so valuable that talking about them, revealing them to the public would end the world if they hadn’t stopped any attacks?

But what we’re seeing and what this article represents is that the claims of harm that we got last year were not accurate and could in fact be claimed to be misleading, and I think that’s a concern. But it is good to see that the director of NSA himself now today, with full access to classified information, is beginning to come a little bit closer to the truth, getting a little bit closer to the President’s viewpoint on that, which is this discussion that we’ve had over the last year doesn’t hurt us. It makes us stronger. So thanks for showing that.

Bamford: Thanks. One other thing that the article gets into, which is what we’re talking about here today, is the article quotes the new NSA director, who is also the commander of Cyber Command, as basically saying that it’s possible in the future that these cyber weapons will become sort of normal military weapons, and they’ll be treated sort of like guided missiles or cruise missiles and so forth.

Snowden: Cruise missiles or drones.

Bamford: What are your thoughts about that, having spent time in this whole line of work yourself?

Snowden: I think the public still isn’t aware of the frequency with which these cyber-attacks, as they’re being called in the press, are being used by governments around the world, not just the US. But it is important to highlight that we really started this trend in many ways when we launched the Stuxnet campaign against the Iranian nuclear program. It actually kicked off a response, sort of retaliatory action from Iran, where they realized they had been caught unprepared. They were far behind the technological curve as compared to the United States and most other countries. And this is happening across the world nowadays, where they realize that they’re caught out. They’re vulnerable. They have no capacity to retaliate to any sort of cyber campaign brought against them.

The Iranians targeted open commercial companies of U.S. allies. Saudi Aramco, the oil company there—they sent what’s called a wiper virus, which is actually sort of a Fisher Price, baby’s first hack kind of a cyber-campaign. It’s not sophisticated. It’s not elegant. You just send a worm, basically a self-replicating piece of malicious software, into the targeted network. It then replicates itself automatically across the internal network, and then it simply erases all of the machines. So people go into work the next day and nothing turns on. And it puts them out of business for a period of time.

But with enterprise IT capabilities, it’s not trivial, but it’s not impossible to restore a company to working order in fairly short time. You can image all of the work stations. You can restore your backups from tape. You can perform what’s called bare metal restores, where you get entirely new hardware that matches your old hardware, if the hardware itself was broken, and just basically paint it up, restore the data just like the original target was, and you’re back in the clear. You’re moving along.

Now, this is something that people don’t understand fully about cyber-attacks, which is that the majority of them are disruptive, but not necessarily destructive. One of the key differentiators with our level of sophistication and nation-level actors is they’re increasingly pursuing the capability to launch destructive cyber-attacks, as opposed to the disruptive kinds that you normally see online, through protestors, through activists, denial of service attacks, and so on. And this is a pivot that is going to be very difficult for us to navigate.

Bamford: Let me ask you about that, because that is the focus of the program here. It’s a focus because very few people have ever discussed this before, and it’s the focus because the U.S. launched their very first destructive cyber-attack, the Stuxnet attack, as you mentioned, in Iran. Can you just tell me what kind of a milestone that was for the United States to launch their very first destructive cyber-attack?

Snowden: Well, it’s hard to say it’s the first ever, because attribution is always hard with these kind of campaigns. But it is fair to say that it was the most sophisticated cyber-attack that anyone had ever seen at the time. And the fact that it was launched as part of a U.S. authorized campaign did mark a radical departure from our traditional analysis of the levels of risks we want to assume for retaliation.

When you use any kind of internet based capability, any kind of electronic capability, to cause damage to a private entity or a foreign nation or a foreign actor, these are potential acts of war. And it’s critical we bear in mind as we discuss how we want to use these programs, these capabilities, where we want to draw the line, and who should approve these programs, these decisions, and at what level, for engaging in operations that could lead us as a nation into a war.

The reality is if we sit back and allow a few officials behind closed doors to launch offensive attacks without any oversight against foreign nations, against people we don’t like, against political groups, radicals, and extremists whose ideas we may not agree with, and could be repulsive or even violent—if we let that happen without public buy-in, we won’t have any seat at the table of government to decide whether or not it’s appropriate for these officials to drag us into some kind of war activity that we don’t want, but we weren’t aware of at the time.

Bamford: And what you seem to be talking about also is the blowback effect. In other words, if we launch an attack using cyber warfare, a destructive attack, we run the risk of having been the most industrialized and electronically connected country in the world, that that’s a major problem for the US. Is that your thinking?

Snowden: I do agree that when it comes to cyber warfare, we have more to lose than any other nation on earth. The technical sector is the backbone of the American economy, and if we start engaging in these kind of behaviors, in these kind of attacks, we’re setting a standard, we’re creating a new international norm of behavior that says this is what nations do. This is what developed nations do. This is what democratic nations do. So other countries that don’t have as much respect for the rules as we do will go even further.

And the reality is when it comes to cyber conflicts between, say, America and China or even a Middle Eastern nation, an African nation, a Latin American nation, a European nation, we have more to lose. If we attack a Chinese university and steal the secrets of their research program, how likely is it that that is going to be more valuable to the United States than when the Chinese retaliate and steal secrets from a U.S. university, from a U.S. defense contractor, from a U.S. military agency?

We spend more on research and development than these other countries, so we shouldn’t be making the internet a more hostile, a more aggressive territory. We should be cooling down the tensions, making it a more trusted environment, making it a more secure environment, making it a more reliable environment, because that’s the foundation of our economy and our future. We have to be able to rely on a safe and interconnected internet in order to compete.

Bamford: Where do you see this going in terms of destruction? In Iran, for example, they destroyed the centrifuges. But what other types of things might be targeted? Power plants or dams? What do you see as the ultimate potential damage that could come from the cyber warfare attack?

Snowden: When people conceptualize a cyber-attack, they do tend to think about parts of the critical infrastructure like power plants, water supplies, and similar sort of heavy infrastructure, critical infrastructure areas. And they could be hit, as long as they’re network connected, as long as they have some kind of systems that interact with them that could be manipulated from internet connection.

However, what we overlook and has a much greater value to us as a nation is the internet itself. The internet is critical infrastructure to the United States. We use the internet for every communication that businesses rely on every day. If an adversary didn’t target our power plants but they did target the core routers, the backbones that tie our internet connections together, entire parts of the United States could be cut off. They could be shunted offline, and we would go dark in terms of our economy and our business for minutes, hours, days. That would have a tremendous impact on us as a society and it would have a policy backlash.

The solution, however, is not to give the government more secret authorities to put kill switches and monitors and snooping devices on the internet. It’s to reorder our priorities for how we deal with threats to the security of our critical infrastructure, for our electronic infrastructure. And what that means is taking bodies like the National Security Agency that have traditionally been about securing the nation and making sure that that’s their first focus.

In the last 10 years, we’ve seen—in the last 10 years, we’ve seen a departure from that traditional role of signals intelligence gathering overseas that’s related to responding to threats that are—

Bamford: Take your time.

Snowden: Right. What we’ve seen over the last decade is we’ve seen a departure from the traditional work of the National Security Agency. They’ve become sort of the national hacking agency, the national surveillance agency. And they’ve lost sight of the fact that everything they do is supposed to make us more secure as a nation and a society.

The National Security Agency has two halves, one that handles defense and one that handles offense. Michael Hayden and Keith Alexander, the former directors of NSA, they shifted those priorities, because when they went to Congress, they saw they could get more budget money if they advertised their success in attacking, because nobody is ever really interested in doing the hard work of defense.

But the problem is when you deprioritize defense, you put all of us at risk. Suddenly, policies that would have been unbelievable, incomprehensible even 20 years ago are commonplace today. You see decisions being made by these agencies that authorize them to install backdoors into our critical infrastructure, that allow them to subvert the technical security standards that keep your communication safe when you’re visiting a banking website online or emailing a friend or logging into Facebook.

And the reality is, when you make those systems vulnerable so that you can spy on other countries and you share the same standards that those countries have for their systems, you’re also making your own country more vulnerable to the same attacks. We’re opening ourselves up to attack. We’re lowering our shields to allow us to have an advantage when we attack other countries overseas, but the reality is when you compare one of our victories to one of their victories, the value of the data, the knowledge, the information gained from those attacks is far greater to them than it is to us, because we are already on top. It’s much easier to drag us down than it is to grab some incremental knowledge from them and build ourselves up.

Bamford: Are you talking about China particularly?

Snowden: I am talking about China and every country that has a robust intelligence collection program that is well-funded in the signals intelligence realm. But the bottom line is we need to put the security back in the National Security Agency. We can’t have the national surveillance agency. We’ve got to go—look, the most important thing to us is not being able to attack our adversaries, the most important thing is to be able to defend ourselves. And we can’t do that as long as we’re subverting our own security standards for the sake of surveillance.

Bamford: That is a very strange combination, where you have one half of the NSA, the Information Assurance Directorate, which is charged with protecting the country from cyber-attacks, coexisting with the Signals Intelligence Directorate and the Cyber Command, which is pretty much focused on creating weaknesses. Can you just tell me a little bit about how that works, the use of vulnerabilities and implants and exploits?

Snowden: So broadly speaking, there are a number of different terms that are used in the CNO, computer networks operations world.

Broadly speaking, there are a number of different terms that are used to define the vernacular in the computer network operations world. There’s CNA, computer network attack, which is to deny, degrade, or destroy the functioning of a system. There’s CND, computer network defense, which is protecting systems, which is noticing vulnerabilities, noticing intrusions, cutting them off, and repairing them, patching the holes. And there’s CNE, computer network exploitation, which is breaking into a system and leaving something behind, this sort of electronic ear that will allow you to monitor everything that’s happening from that point forward. CNE is typically used for espionage, for spying.

To achieve these goals, we use things like exploits, implants, vulnerabilities, and so on. A vulnerability is a weakness in a system, where a computer program has a flaw in its code that, when it thinks it’s going to execute a normal routine task, it’s actually been tricked into doing something the attacker asks it to do. For example, instead of uploading a file to display a picture online, you could be uploading a bit of code that the website will then execute.

[Image: Edward Snowden in his interview with NOVA]
Or instead of logging into a website, you could enter code into the username field or into the password field, and that would crash through the boundaries of memory—that were supposed to protect the program—into the executable space of computer instructions. Which means when the computer goes through its steps of what is supposed to occur, it goes, I’m looking for user login. This is the username. This is the password. And then when it should go, check to see that these are correct, if you put something that was too long in the password field, it actually overwrites those next instructions for the computer. So it doesn’t know it’s supposed to check for a password. Instead, it says, I’m supposed to create a new user account with the maximum privileges and open up a port for the adversary to access my network, and then so on and so forth.
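
To see the overwrite in miniature: picture a fixed-size password field sitting directly in front of the program's next instruction, with a copy routine that never checks the length of its input. A deliberately simplified Python simulation of that layout, purely illustrative and not an actual exploit:

    # Toy simulation of the overwrite described above: a 16-byte password field
    # sits directly in front of the "next instruction" in one flat memory
    # region, and the copy routine never checks how much data it was handed.
    memory = bytearray(b"\x00" * 16 + b"CHECK_PASSWORD".ljust(16, b"\x00"))

    def store_password(mem, supplied):
        # Deliberate bug: no bounds check, so anything past byte 16 spills
        # across the field boundary and clobbers the adjacent "instruction".
        mem[0:len(supplied)] = supplied

    store_password(memory, b"A" * 16 + b"ADD_ADMIN_USER".ljust(16, b"\x00"))
    print(bytes(memory[16:]).rstrip(b"\x00"))  # b'ADD_ADMIN_USER' -- the password check is gone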

Vulnerabilities are generally weaknesses that can be exploited. The exploit itself are little shims of computer code that allow you to run any sort of program you want.

Exploits are the shims of computer code that you wedge into vulnerabilities to allow you to take over a system, to gain access to them, to tell that system what you wanted to do. The payload or implant follows behind the exploit. The exploit is what wedges you into the system. The payload is the instructions that are left behind. Now, those instructions often say install an implant.

The implant is an actual program that runs—it stays behind after the exploit has occurred—and says, tell me all of the files on this machine. Make a list of all of the users. Send every new email or every new keystroke that’s been recorded on this program each day to my machine as the attacker, or really anything you can imagine. It can also tell nuclear centrifuges to spin up to the maximum RPM and then spin down quickly enough that no one notices. It can tell a power plant to go offline.

Or it could say, let me know what this dissident is doing day to day, because it lives on their cell phone and it keeps track of all their movements, who they call, who they’re associating with, what wireless device it’s nearby. Really an exploit is only limited—or not an exploit. An implant is only limited by the imagination. Anything you can program a computer to do, you can program an implant to do.

Bamford: So you have the implant, and then you have the payload, right?

Snowden: The payload includes the implant. The exploit is what basically breaks into the vulnerability. The payload is what the exploit runs, and that is basically some kind of executable code. And the implant is a payload that’s left behind long term, some kind of basically listening program, some spying program, or some kind of a destructive program.

Bamford: Interviewing you is like doing power steering. I don’t have to pull this out.

Snowden: Yeah, sorry, I get a little ramble-y on my answers, and the political answers aren’t really strong, but I’m not a politician, so I’m just trying my best on these.

Bamford: This isn’t nightly news, so we’ve got an hour.

Snowden: Yeah, I hope you guys cut this so it’s not so terrible.

Producer: We’ve got two cameras, and we can carve your words up.

Snowden: (laughter) Great.

Producer: But we won’t.

Bamford: Should mention this implant now—the implant sounds a bit like what used to be sleeper agents back in the old days of the Cold War, where you have an agent that’s sitting there that can do anything. It can do sabotage. It can do espionage. It can do whatever. And looking at one of those slides that came out, what was really fascinating was the fact that the slide was a map of the world, and they had little yellow dots on it. The little yellow dots were indicated as CNEs, computer network exploitation. And you expect to see them in North Korea, China, different places like that. But what was interesting when we looked at it was there were quite a few actually in Brazil, for example, and other places that were friendly countries. Any idea why the U.S. would want to do something like that?

Snowden: So the way the United States intelligence community operates is it doesn’t limit itself to the protection of the homeland. It doesn’t limit itself to countering terrorist threats, countering nuclear proliferation. It’s also used for economic espionage, for political spying to gain some knowledge of what other countries are doing. And over the last decade, that sort of went too far.

No one would argue that it’s in the United States’ interest to have independent knowledge of the plans and intentions of foreign countries. But we need to think about where to draw the line on these kind of operations so we’re not always attacking our allies, the people we trust, the people we need to rely on, and to have them in turn rely on us. There’s no benefit to the United States hacking Angela Merkel’s cell phone. President Obama said if he needed to know what she was thinking, he would just pick up the phone and call her. But he was apparently allegedly unaware that the NSA was doing precisely that. These are similar things we see happening in Brazil and France and Germany and all these other countries, these allied nations around the world.

And we also need to remember that when we talk about computer network exploitation, computer network attack, we’re not just talking about your home PC. We’re not just talking about a control system in a factory somewhere. We’re talking about your cell phone, and we’re also talking about internet routers themselves. The NSA and its sister agencies are attacking the critical infrastructure of the internet to try to take ownership of it. They hack the routers that connect nations to the internet itself.

And this is dangerous for a number of reasons. It does provide us a real intelligence advantage, but at the same time, it’s a serious risk. If one of these hacking operations goes wrong, and this has happened in the past, and it’s a core router that connects all of the internet service providers for an entire country to the internet, we’ve blacked out that entire nation from online access until that problem can be corrected. And these routers are not your little Linksys, D-Link routers sitting at home. We’re talking $60,000, $600,000, $6 million devices, complexes, that are not easy to fix, and they don’t have an off the shelf replacement that’s ready to swap in.

So we need to be very careful, and we need to make sure that whenever we’re engaging in a cyber-warfare campaign, a cyber-espionage campaign in the United States, that we understand the word cyber is used as a euphemism for the internet, because the American public would not be excited to hear that we’re doing internet warfare campaigns, internet espionage campaigns, because we realize that we ourselves are impacted by it. The internet is shared critical infrastructure for everyone on earth. It’s not supposed to be a domain of warfare. We’re not supposed to be putting our economy on the frontlines in the battleground. But that’s increasingly what’s happening today.

So we need to put processes, policies, and procedures in place with real laws that forbid going beyond the borders of what’s reasonable to ensure that the only time that we and other countries around the world exercise these authorities is when it is absolutely necessary, there’s no alternative means of achieving the appropriate outcome, and it’s proportionate to the threat. We shouldn’t be putting an entire nation’s infrastructure at risk to spy on one company, to spy on one person. But increasingly, we see that happening more and more today.

Bamford: You mentioned the problems, the dangers involved if you’re trying to put an exploit into some country’s central nervous system when it comes to the internet. For example in Syria, there was a time when everything went down, and that was blamed on the president of Syria, Bashar al-Assad. Did you have any particular knowledge of that?

Snowden: I don’t actually want to get into that one on camera, so I’ll have to demur on that.

Bamford: Can you talk around it somehow?

Snowden: What I would say is when you’re attacking a router on the internet, and you’re doing it remotely, it’s like trying to shoot the moon with a rifle. Everything has to happen exactly right. Every single variable has to be controlled and precisely accounted for. And that’s not possible to do when you have limited knowledge of the target you’re attacking.

So if you’ve got this gigantic router that you’re trying to hack, and you want to hack it in a way that’s undetectable by the systems administrators for that device, you have to get below the operating system level of that device, of that router. Not where it says here are the rules, here are the user accounts, here are the routes and the proper technical information that everybody who’s administering this device should have access to. Down onto the firmware level, onto the hardware control level of the device that nobody ever sees, because it’s sort of a dark place.

The problem is if you make a mistake when you’re manipulating the hardware control of a device, you can do what’s called bricking the hardware, and it turns it from a $6 million internet communications device to a $6 million paperweight that’s in the way of your entire nation’s communications. And that’s something that all I can say is has happened in the past.

Bamford: When we were in Brazil, we were shown this major internet connection facility. It was the largest internet hub in the southern hemisphere, and it’s sitting in Brazil. And the Brazilians had a lot of concern, because again, they saw the slide that showed all this malware being planted in Brazil. Is that a real concern that they should have, the fact that they’ve, number one, got this enormous internet hub sitting right in Sao Paulo, and then on the second hand, they’ve got NSA flooding the country with malware?

Snowden: The internet exchange is sort of the core points where all of the international cables come together, where all of the internet service providers come together, and they trade lines with each other, where we move from separate routes, separate highways on the internet into one coherent traffic circle where everybody can get on and off on the exit they want. These are priority one targets for any sort of espionage agency, because they provide access to so many people’s communications.

Internet exchanges and internet service providers—international fiber optic landing points—these are the key tools that governments go after in order to enable their programs of mass surveillance. If they want to be able to watch the entire population of a country instead of a single individual, you have to go after those bulk interchanges. And that’s what’s happening.

So it is a real threat, and the only way that can be accounted for is to make sure that there’s some kind of independent control and auditing, some sort of routine forensic investigations into these devices, to ensure that not only were they secure when they were installed, but they hadn’t been monitored or tampered with or changed in any way since that last audit occurred. And that requires doing things like creating mathematical proofs called hashes of the validity of the actual hardware signature and software signatures on these devices and their hardware.
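
In practice that audit step usually amounts to keeping a baseline manifest of digests taken at installation time and recomputing them on every subsequent visit. A minimal sketch of the comparison, with a placeholder baseline.json and placeholder paths standing in for whatever firmware and configuration images a real audit would cover:

    # Minimal sketch of the re-audit step: recompute digests of firmware and
    # configuration images and compare them with the baseline recorded when
    # the device was installed. Paths and baseline.json are placeholders.
    import hashlib, json

    with open("baseline.json") as fh:        # {"/path/to/image": "sha256-hex", ...}
        baseline = json.load(fh)

    for path, expected in baseline.items():
        with open(path, "rb") as img:
            current = hashlib.file_digest(img, "sha256").hexdigest()  # Python 3.11+
        print(("ok       " if current == expected else "TAMPERED?"), path)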

Bamford: Another area—you mentioned the presidential panel that looked into all these areas that are of concern now, which you’ve basically brought out these areas. And the presidential panel came out with I think 46 different recommendations. One of those recommendations dealt with restricting the use or cutting back or maybe even doing away with the idea of going after zero-day exploits. Can you tell me a little bit about your fears that you may have of the U.S. creating this market of zero-day exploits?

Snowden: So a zero-day exploit is a method of hacking a system. It’s sort of a vulnerability that has an exploit written for it, sort of a key and a lock that go together to a given software package. It could be an internet web server. It could be Microsoft Office. It could be Adobe Reader or it could be Facebook. But these zero-day exploits—they’re called zero-days because the developer of the software is completely unaware of them. They haven’t had any chance to react, respond, and try to patch that vulnerability away and close it.

The danger that we face in terms of policy of stockpiling zero-days is that we’re creating a system of incentives in our country and for other countries around the world that mimic our behavior or that see it as a tacit authorization for them to perform the same sort of operations is we’re creating a class of internet security researchers who research vulnerabilities, but then instead of disclosing them to the device manufacturers to get them fixed and to make us more secure, they sell them to secret agencies.

They sell them on the black market to criminal groups to be able to exploit these to attack targets. And that leaves us much less secure, not just on an individual level, but on a broad social level, on a broad economic level. And beyond that, it creates a new black market for computer weapons, basically digital weapons.

And there’s a little bit of a free speech issue involved in regulating this, because people have to be free to investigate computer security. People have to be free to look for these vulnerabilities and create proof of concept code to show that they are true vulnerabilities in order for us to secure our systems. But it is appropriate to regulate the use and sale of zero-day exploits for nefarious purposes, in the same way you would regulate any other military weapon.

And today, we don’t do that. And that’s why we see a growing black market with companies like Endgame, with companies like Vupen, where all they do—their entire business model is finding vulnerabilities in the most critical infrastructure software packages we have around the internet worldwide, and instead of fixing those vulnerabilities, they tear them open and let their customers walk in through them, and they try to conceal the knowledge of these zero-day exploits for as long as possible to increase their commercial value and their revenues.

Bamford: Now, of those 46 recommendations, including the one on the zero-day exploits that the panel came up with, President Obama only approved maybe five or six at the most of those 46 recommendations, and he didn’t seem to talk at all about the zero-day exploit recommendation. What do you think of that, the fact that that was sort of ignored by the President?

Snowden: I can’t comment on presidential policies. That’s a landmine for me. I would recommend you ask Chris Soghoian at the ACLU, American Civil Liberties Union, and he can get you any quote you want on that. You don’t need me to speak to that point, but you’re absolutely right that where there’s smoke, there’s fire, as far as that’s concerned.

Bamford: Well, as someone who has worked at the NSA, been there for a long time, during that time you were there, they created this entire new organization called Cyber Command. What are your thoughts on the creation of this new organization that comes just like the NSA, under the director of NSA? Again, backing up, the director of NSA for ever since the beginning was only three stars, and now he’s a four star general, or four star admiral, and he’s got this enormous largest intelligence agency in the world, the NSA, under him, and now he’s got Cyber Command. What are your thoughts on that, having seen this from the inside?

Snowden: There was a strong debate last year about whether or not the National Security Agency and Cyber Command should be split into two independent agencies, and that was what the President’s independent review board suggested was the appropriate method, because when you have an agency that’s supposed to be defensive married to an agency that’s entire purpose in life is to break things and set them on fire, you’ve got a conflict of interest that is really going to reduce the clout of the defensive agency, while the offensive branch gains more clout, they gain more budget dollars, they gain more billets and personnel assignments.

So there’s a real danger with that happening. And Cyber Command itself has always existed in a—Cyber Command itself has always been branded in a sort of misleading way from its very inception. The director of NSA, when he introduced it, when he was trying to get it approved, he said he wanted to be clear that this was not a defensive team. It was a defend the nation team. He’s saying it’s defensive and not defensive at the same time.

Now, the reason he says that is because it’s an attack agency, but going out in front of the public and asking them to approve an aggressive warfare focused agency that we don’t need is a tough sell. It’s much better if we say, hey, this is for protecting us, this is for keeping us safe, even if all it does every day is burn things down and break things in foreign countries that we aren’t at war with.

So there’s a real careful balance that needs to be struck there that hasn’t been addressed yet, but so long as the National Security Agency and Cyber Command exist under one roof, we’ll see the offensive side of their business taking priority over the defensive side of the business, which is much more important for us as a country and as a society.

Bamford: And you mentioned earlier, if we could just go back a little bit over this again, how much more money is going to the cyber offensive time than going to the cyber defensive side. Not only more money, but more personnel, more attention, more focus.

Snowden: I didn’t actually get the question on that one.

Bamford: I just wondered if you could just elaborate a little bit more on that. Again, we have Cyber Command and we have the Information Assurance Division and so forth, and there’s far more money and personnel and emphasis going on the cyber warfare side than the defensive side.

Snowden: I think the key point in analyzing the balance and where we come out in terms of offense versus defense at the National Security Agency and Cyber Command is that, more and more, what we’ve read in the newspapers and what we see debating in Congress, the fact the Senate is now trying to put forward a bill called CISPA, the Cyber Intelligence Sharing—I don’t even know what it’s called—let me take that back.

We see more and more things occurring like the Senate putting forward a bill called CISPA, which is for cyber intelligence sharing between private companies and government agencies, where they’re trying to authorize not just the total immunity, a grant of total immunity, to private companies if they share the information on all of their customers, on all the American citizens and whatnot that are using their services, with intelligence agencies, under the intent that that information be used to protect them.

Congress is also trying to immunize companies in a way that will allow them to invite groups like the National Security Agency or the FBI to voluntarily put surveillance devices on their internal networks, with the stated intent being to detect cyber-attacks as they occur and be able to respond to them. But we’re ceding a lot of authority there. We’re immunizing companies from due diligence and protecting their customers’ privacy rights.

Actually, this is a point that’s way too difficult to make in the interview. Let me dial back out of that.

What we see more and more is sort of a breakdown in the National Security Agency. It’s becoming less and less the National Security Agency and more and more the national surveillance agency. It’s gaining more offensive powers with each passing year. It’s gained this new Cyber Command that’s under the director of NSA that by any measure should be an entirely separate organization because it has an entirely separate mission. All it does is attack.

And that’s putting us, both as a nation and an economy, in a state of permanent vulnerability and permanent risk, because when we lose a National Security Agency and instead get an offensive agency, we get an attack agency in its place, all of our eyes are looking outward, but they’re not looking inward, where we have the most to lose. And this is how we miss attacks time and time again. This results in intelligence failures such as the Boston Marathon bombings or the underwear bomber, Abdul Farouk Mutallab (sic).

In recent years, the majority of terrorist attacks that have been disrupted in the United States have been disrupted due to things like the Times Square bomber, who was caught by a hotdog vendor, not a mass surveillance program, not a cyber-espionage campaign.

So when we cannibalize dollars from the defensive business of the NSA, securing our communications, protecting our systems, patching zero-day vulnerabilities, and instead we’re giving those dollars to them to be used for creating new vulnerabilities in our systems so that they can surveil us and other people abroad who use the same systems. When we give those dollars to subvert our encryption methods so we don’t have any more privacy online and we apply all of that money to attacking foreign countries, we’re increasing the state of conflict, not just in diplomatic terms, but in terms of the threat to our critical infrastructure.

When the lights go out at a power plant sometime in the future, we’re going to know that that’s a consequence of deprioritizing defense for the sake of an advantage in terms of offense.

Bamford: One other problem I think is that people think that, as you mentioned—just to sort of clarify this—people out there that don’t really follow this that closely think that the whole idea of Cyber Command was to protect the country from cyber-attacks. Is that a misconception, the fact that these people think that the whole idea of Cyber Command is to protect them from cyber-attack?

Snowden: Well, if you ask anybody at Cyber Command or look at any of the job listings for openings for their positions, you’ll see that the one thing they don’t prioritize is computer network defense. It’s all about computer network attack and computer network exploitation at Cyber Command. And you have to wonder, if these are people who are supposed to be defending our critical infrastructure at home, why are they spending so much time looking at how to attack networks, how to break systems, and how to turn things off? I don’t think it adds up as representing a defensive team.

Bamford: Now, also looking a little bit into the future, it seems like there’s a possibility that a lot of this could be automated, so that when the Cyber Command or NSA sees a potential cyber-attack coming, there could be some automatic devices that would in essence return fire. And given the fact that it’s so very difficult to—or let me back up. Given the fact that it’s so easy for a country to masquerade where an attack is coming from, do you see a problem where you’re automating systems that automatically shoot back, and they may shoot back at the wrong country, and could end up starting a war?

Snowden: Right. So I don’t want to respond to the first part of your question, but the second part there I can use, which is relating to attribution and automated response. Which is that the—it’s inherently dangerous to automate any kind of aggressive response to a detected event because of false positives.

Let’s say we have a defensive system that’s tied to a cyber-attack capability that’s used in response. For example, a system is created that’s supposed to detect cyber-attacks coming from Iran, denial of service attacks brought against a bank. They detect what appears to be an attack coming in, and instead of simply taking a defensive action, instead of simply blocking it at the firewall and dumping that traffic so it goes into the trash can and nobody ever sees it—no harm—it goes a step further and says we want to stop the source of that attack.

So we will launch an automatic cyber-attack at the source IP address of that traffic stream and try to take that system offline. We will fire a denial of service attack in response to it, to destroy, degrade, or otherwise diminish their capability to act from that.

But if that’s happening on an automated basis, what happens when the algorithms get it wrong? What happens when instead of an Iranian attack, it was simply a diagnostic message from a hospital? What happens when it was actually an attack created by an independent hacker, but you’ve taken down a government office that the hacker was operating from? That wasn’t clear.

What happens when the attack hits an office that a hacker from a third country had hacked into to launch that attack? What if it was a Chinese hacker launching an attack from an Iranian computer targeting the United States? When we retaliate against a foreign country in an aggressive manner, we the United States have stated in our own policies that’s an act of war that justifies a traditional kinetic military response.
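To make the false-positive danger concrete, here is a toy sketch in Python (the names, addresses, and threshold below are invented for illustration; this is not any agency's actual logic). The trigger keys off whatever source address appears in the packet headers, which is exactly the attribution that cannot be trusted:

Code: Select all

# Toy model of an automated "hack back" rule -- illustrative only.
# The decision keys off the apparent source IP, which may belong to a hospital,
# a hacked third-party machine, or a spoofed address, not the true attacker.
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str   # address seen in packet headers; not proof of origin
    pps: int      # observed packets per second from that address

DDOS_THRESHOLD = 50_000   # invented threshold

def defensive_response(flow: Flow) -> str:
    # Dropping traffic at the firewall is reversible; a false positive costs little.
    return f"drop traffic from {flow.src_ip}"

def automated_retaliation(flow: Flow) -> str:
    # Firing back assumes attribution the packet headers cannot provide.
    return f"launch counter-DoS against {flow.src_ip}"

def handle(flow: Flow) -> str:
    if flow.pps > DDOS_THRESHOLD:
        return automated_retaliation(flow)   # irreversible action on unproven attribution
    return defensive_response(flow)

# A diagnostic burst from, say, a hospital gateway crosses the naive threshold:
print(handle(Flow(src_ip="203.0.113.7", pps=80_000)))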

We’re opening the doors to people launching missiles and dropping bombs by taking the human out of the decision chain for deciding how we should respond to these threats. And this is something we’re seeing more and more happening in the traditional means as our methods of warfare become increasingly automated and roboticized such as through drone warfare. And this is a line that we as a society, not just in the United States but around the world, must never cross. We should never allow computers to make inherently governmental decisions in terms of the application of military force, even if that’s happening on the internet.

Bamford: And Richard Clarke has said that it’s more important for us to defend ourselves against attacks from China than to attack China using cyber tools. Do you agree with that?

Snowden: I strongly agree with that. The concept there is that there’s not much value to us attacking Chinese systems. We might take a few computers offline. We might take a factory offline. We might steal secrets from a university research program, maybe even something high-tech. But how much more does the United States spend on research and development than China does? Defending ourselves from internet-based attacks, internet-originated attacks, is much, much more important than our ability to launch attacks against similar targets in foreign countries, because when it comes to the internet, when it comes to our technical economy, we have more to lose than any other nation on earth.

Bamford: I think you said this before, but in the past, the U.S. has actually used cyber warfare to attack things like hospitals and things like that in China?

Snowden: So they’re not cyber warfare capabilities. They’re CNE, computer network exploitation.

Bamford: Yeah, if you could just explain that a little.

Snowden: I’m not going to get into that on camera. But what the stories showed, and what you can sort of voice over, is that Chinese universities—not just Chinese, actually—scratch that—is that the National Security Agency has exploited internet exchanges, internet service providers, including in Belgium—the Belgacom case—through their allies at GCHQ in the United Kingdom. They’ve attacked universities, hospitals, internet exchange points, internet service providers—the critical infrastructure that all of us around the world rely on.

And it’s important to remember, when you start doing things like attacking hospitals, when you start doing things like attacking universities, when you start attacking things like internet exchange points, that when something goes wrong, people can die. If a hospital’s infrastructure is affected, lifesaving equipment turns off. When an internet exchange point goes offline and voice over IP calls stop working—and that is the common method of communication now, because cell phone networks route through internet exchange points—people can’t call 911. Buildings burn down. All because we wanted to spy on somebody.

So we need to be very careful about where we draw the line and what is absolutely necessary and proportionate to the threat that we face at any given time. I don’t think there’s anything, any threat out there today that anyone can point to, that justifies placing an entire population under mass surveillance. I don’t think there’s any threat that we face from some terrorist in Yemen that says we need to hack a hospital in Hong Kong or Berlin or Rio de Janeiro.

Bamford: I know we’re on a time limit here, but are there questions that I haven’t—

Producer: Let’s take a two minute break here.

Bamford: One of the most interesting things about the Stuxnet attack was that the President—both President Bush and President Obama—were told, don’t worry, this won’t be detected by anybody. There’ll be no return address on this. And number two, it won’t escape from the area they’re focusing it on anyway, the centrifuges. Both of those proved wrong: the virus did escape, it was detected, and then it was traced back to the United States. So is this one of the big dangers, the fact that the President is told these things, the President doesn’t have the capability to look into every technical issue, and then these things can wind up hitting us back in the face?

Snowden: The problem is the internet is the most complex system that humans have ever invented. And with every internet-enabled operation that we’ve seen so far, all of these offensive operations, we see knock-on effects. We see unintended consequences. We see emergent behavior, where when we put the little evil virus in the big pool of all our private lives, all of our private systems around the internet, it tends to escape and go Jurassic Park on us. And as of yet, we’ve found no way to prevent that. And given the complexity of these systems, it’s very likely that we never will.

What we need to do is create new international standards of behavior—not just national laws, because this is a global problem. We can’t just fix it in the United States, because there are other countries that don’t follow U.S. laws. We have to create international standards that say these kinds of things should only ever occur when it is absolutely necessary, and that the response, the operation, is tailored to be precisely restrained and proportionate to the threat faced. And that’s something that today we don’t have, and that’s why we see these problems.

Bamford: Another problem is, back in the Cold War days—and most people are familiar with that—when there was a fairly limited number of countries that could actually develop nuclear weapons. There were a handful of countries basically that could have the expertise, take the time, find the plutonium, put a nuclear weapon together. Today, the world is completely different, and you could have a small country like Fiji with the capability of doing cyber warfare. So it isn’t limited like it was in those days to just a handful of countries. Do you see that being a major problem with this whole idea of getting into cyber warfare, where so many countries have the capability of doing cyber warfare, and the U.S. being the most technologically vulnerable country?

Snowden: Yeah, you’re right. The problem is that we’re more reliant on these technical systems. We’re more reliant on the critical infrastructure of the internet than any other nation out there. And when there’s such a low barrier to entering the domain of cyber-attacks—cyber warfare, as they like to call it to talk up the threat—we’re starting a fight that we can’t win.

Every time we walk on to the field of battle and the field of battle is the internet, it doesn’t matter if we shoot our opponents a hundred times and hit every time. As long as they’ve hit us once, we’ve lost, because we’re so much more reliant on those systems. And because of that, we need to be focusing more on creating a more secure, more reliable, more robust, and more trusted internet, not one that’s weaker, not one that relies on this systemic model of exploiting every vulnerability, every threat out there. Every time somebody on the internet sort of glances at us sideways, we launch an attack at them. That’s not going to work out for us long term, and we have to get ahead of the problem if we’re going to succeed.

Bamford: Another thing that the public doesn’t really have any concept of, I think at this point, is how organized this whole Cyber Command is, and how aggressive it is. People don’t realize there’s a Cyber Army now, a Cyber Air Force, a Cyber Navy. And the fact that the models for some of these organizations like the Cyber Navy are things like we will dominate the cyberspace the same way we dominate the sea or the same way that we dominate land and the same way we dominate space. So it’s this whole idea of creating an enormous military just for cyber warfare, and then using this whole idea of we’re going to dominate cyberspace, just like it’s the navies of centuries ago dominating the seas.

Snowden: Right. The reason they say that they want to dominate cyberspace is because it’s politically incorrect to say you want to dominate the internet. Again, it’s sort of a branding effort to get them the support they need, because we the public don’t want to authorize the internet to become a battleground. We need to do everything we can as a society to keep that a neutral zone, to keep that an economic zone that can reflect our values, both politically, socially, and economically. The internet should be a force for freedom. The internet should not be a tool for war. And for us, the United States, a champion of freedom, to be funding and encouraging the subversion of a tool for good to be a tool used for destructive ends is, I think, contrary to the principles of us as a society.

Bamford: You had a question, Scott?

Producer: It was really just a question about (inaudible) vulnerabilities going beyond operating systems that we know of, (inaudible) and preserving those vulnerabilities, that that paradox extends over into critical infrastructure as well as—

Snowden: Let me just freestyle on that for a minute, then you can record the question part whenever you want. Something we have to remember is that everything about the internet is interconnected. All of our systems are common to us not just because of the network links between them, but because of the software packages and the hardware devices that comprise them. The same router that’s deployed in the United States is deployed in China. The same software package that controls the dam floodgates in the United States is in use in Russia. The same hospital software is there in Syria and the United States.

So if we are promoting the development of exploits, of vulnerabilities, of insecurity in this critical infrastructure, and we’re not fixing critical flaws when we find them but instead putting them on the shelf so we can use them the next time we want to launch an attack against some foreign country, we’re leaving ourselves at risk. And it’s going to lead to a point where the next time a power plant goes down, the next time a dam bursts, the next time the lights go off in a hospital, it’s going to be in America, not overseas.

Bamford: Along those lines, one of the things we’re focusing on in the program is the potential extent of cyber warfare. And we show a dam, for example, in Russia, with a major power plant under it. This was a facility three times larger than the Hoover Dam, and it exploded. One of the turbines, which weighed as much as two Boeing 747s, was thrown 50 feet into the air and then crashed down and killed 75 people. And that was all because of what was originally thought to be a cyber-attack, but turned out to be a mistaken command that was sent, which made this happen. It was accidental.

But the point is this is what can happen if somebody wants to deliberately do this, and I don’t think that’s what many people in the U.S. have a concept of, that this type of warfare can be that extensive. And if you could just give me some ideas along those lines of how devastating this can be, not just in knocking off a power grid, but knocking down an entire dam or an entire power plant.

Snowden: So I don’t actually want to get in the business of enumerating the list of the horrible of horribles, because I don’t want to hype the threat. I’ve said all these things about the dangers and what can go wrong, and you’re right that there are serious risks. But at the same time, it’s important to understand that this is not an existential threat. Nobody’s going to press a key on their keyboard and bring down the government. Nobody’s going to press a key on their keyboard and wipe a nation off the face of the earth.

We have faced threats from criminal groups, from terrorists, from spies throughout our history, and we have limited our responses. We haven’t resorted to total war every time we have a conflict around the world, because that restraint is what defines us. That restraint is what gives us the moral standing to lead the world. And if we go, “there are cyber threats out there, this is a dangerous world, and we have to be safe, we have to be secure no matter the cost,” we’ve lost that standing.

We have to be able to reject disproportionate and unjustified responses in the cyber domain just as we do in the physical domain. We reject techniques like torture regardless of whether they’re effective or ineffective because they are barbaric and harmful on a broad scale. It’s the same thing with cyber warfare. We should never be attacking hospitals. We should never be taking down power plants unless that is absolutely necessary to ensure our continued existence as a free people.

Bamford: That’s fine with me. If there’s anything that you think we didn’t cover or you want to put in there?

Snowden: I was thinking about two things. One is—I went off a lot on the politics here, and a lot of it was ramble-y, so I might try one more thing on that. The other is the VFX thing I was talking about for the cloud, how cyber-attacks happen.

Producer: So I just want sort of an outline of where you want to go to make sure we get that.

Bamford: Yeah, what kind of question you want me to ask.

Snowden: You wouldn’t even necessarily have to ask a question. It would just be—

Producer: (inaudible).

Snowden: Yeah. It would just be like a segment. I would say people ask how does a cyber-attack happen. People ask what does exploitation on the internet look like, and how do you find out where it came from. Most people nowadays are aware of what IP addresses are, and they know that you shouldn’t send an email from a computer that’s associated with you if you don’t want it to be tracked back to you. You don’t want to hack the power plant from your house if you don’t want them to follow the trail back and see your IP address.

But there are also what are called proxies, proxy servers on the internet, and this is very typical for hackers to use. They create what are called proxy chains where they gain access to a number of different systems around the world, sometimes by hacking these, and they use them as sort of relay boxes. So you’ve got the originator of an attack all the way over here on the other side of the planet in the big orb of the internet, just a giant constellation of network links all around. And then you’ve got their intended victim over here.

But instead of going directly from them to the victim in one straight path where this victim sees the originator, the attacker, was the person who sent the exploit to them, who attacked their system, you’ll see they do something where they zigzag through the internet. They go from proxy to proxy, from country to country around the world, and they use that last proxy, that last step in the chain, to launch the attack.

So while the attack could have actually come from Missouri, an investigator responding to the attack will think it came from the Central African Republic or from the Sudan or from Yemen or from Germany. And the only way to track that back is to hack each of those systems back through the chain or to use mass surveillance techniques to have basically a spy on each one of those links so you can follow the tunnel all the way home.
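A toy simulation of that zigzag, purely for illustration (the hostnames are made up, and real proxy chains involve actual protocols rather than a Python list): each hop only knows the hop immediately before it, so the victim's logs point at the last relay, and tracing the true origin means walking the whole chain backwards.

Code: Select all

# Toy proxy-chain model: each hop records only its immediate predecessor,
# so the victim attributes the attack to the final relay, not the originator.

def relay_through(chain):
    """Walk the chain; return what the victim sees and each hop's connection log."""
    logs = {}
    prev = chain[0]                 # the true originator
    for hop in chain[1:]:
        logs[hop] = prev            # all this hop could tell an investigator
        prev = hop
    return prev, logs               # the last proxy is what the victim's logs show

origin = "home-box.example (Missouri)"
proxies = ["relay-de.example", "relay-ye.example", "relay-cf.example"]
last_hop, logs = relay_through([origin] + proxies)

print("victim's logs attribute the attack to:", last_hop)
# Attribution means obtaining logs (or surveillance) on every hop, in order:
hop = last_hop
while hop in logs:
    print(f"{hop} was reached from {logs[hop]}")
    hop = logs[hop]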

The more I think about it, the more I think that would be way too complicated to—

Producer: No, I was just watching your hands. That was just filling in the blanks.

Bamford: No, I was, too. That’ll be fine.

Producer: And it’s a good point of how you can automate responses and how you—

Bamford: Yeah, we can just drive in and draw in those zigzags.

Snowden: Right. I mean, yeah, the way I would see it is just sort of like stars, like a constellation of points. And you’ve got different colored paths going between them. And then you just highlight the originator and the victim. And they don’t have to be on the edges. They could even be in the center of the cloud somewhere. And then you have sort of a green line going straight between them, and it turns red when it hacks, but then you see the little police agency follow it back. And then so you put an X on it and you replace it with the zigzag line that’s green, and then it goes red when it attacks, to sort of call it the path.

Bamford: From Missouri to the Central African Republic.

Snowden: Yeah.

Producer: Are there any other visualizations that you can think of that maybe you see it as an image as opposed to a (multiple conversations; inaudible).

Snowden: I think one of the good ones to do—and you can do it pretty cheaply, even almost funny, like cartoon-like, sort of like a Flash animation, like paper cutouts—would be to help people visualize the problem with the U.S. prioritizing offense over defense. You look at it—and I’ll give a voiceover here.

When you look at the problem of the U.S. prioritizing offense over defense, imagine you have two bank vaults, the United States bank vault and the Bank of China. But the U.S. bank vault is completely full. It goes all the way up to the sky. And the Chinese bank vault or the Russian bank vault or the African bank vault or whoever the adversary of the day is, theirs is only half full or a quarter full or a tenth full.

But the U.S. wants to get into their bank vault. So what they do is they build backdoors into every bank vault in the world. But the problem is their vault, the U.S. bank vault, has the same backdoor. So while we’re sneaking over to China and taking things out of their vault, they’re also sneaking over to the United States and taking things out of our vault. And the problem is, because our vault is full, we have so much more to lose. So in relative terms, we gain much less from breaking into the vaults of others than we do from having others break into our vaults. That’s why it’s much more important for us to be able to defend against foreign attacks than it is to be able to launch successful attacks against foreign adversaries.

You know, just something sort of symbolic and quick that people can instantly visualize.

Producer: The other thing I’d like to put to you, because we have to find somebody to do it, is how do you make a cyber-weapon? What is malware? What is that?

Snowden: When people are talking about malware, what they really mean is—when people are talking about malware, what they—

When people are talking about cyber weapons, digital weapons, what they really mean is a malicious program that’s used for a military purpose. A cyber weapon could be something as simple as an old virus from 1995 that just happens to still be effective if you use it for that purpose.

Custom-developed digital weapons, cyber weapons, nowadays typically chain together a number of zero-day exploits that are targeted against the specific site, the specific target that they want to hit. But this level of sophistication depends on the budget and the quality of the actor who’s instigating the attack. If it’s a country that’s poorer or less sophisticated, it’ll be a less sophisticated attack.

But the bare-bones tools for a cyber-attack are to identify a vulnerability in the system you want to gain access to, or want to subvert, or want to deny, destroy, or degrade, and then to exploit it, which means to deliver code to that system somehow, whether it’s locally in the physical realm, on the same network, or remotely across the internet, across the global network, and get that code to that vulnerability, to that crack in their wall, jam it in there, and then have it execute.

The payload can then be the action, the instructions that you want to execute on that system, which typically, for the purposes of espionage, would be leaving an implant behind to listen in on what they’re doing, but could just as easily be something like the wiper virus that just deletes everything from the machines and turns them off. Really, it comes down to any instructions that you can think of that you would want to execute on that remote system.
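As a purely conceptual outline of the three pieces just described (there is no working exploit here, only a data model with invented field names), the anatomy looks something like this:

Code: Select all

# Conceptual data model of the components described above -- no functional
# attack code, just the vocabulary: vulnerability, exploit/delivery, payload.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    target_system: str    # the system to be accessed, subverted, or degraded
    flaw: str             # the "crack in the wall"

@dataclass
class Exploit:
    vulnerability: Vulnerability
    delivery: str         # "local / physical", "same network", or "remote across the internet"

@dataclass
class Payload:
    instructions: str     # e.g. "leave an implant and listen" or "wipe the disks"

@dataclass
class CyberWeapon:
    exploit: Exploit      # gets code to the flaw and makes it execute
    payload: Payload      # what actually runs once it is in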

Bamford: Along those lines, there’s one area that could really be visualized I think a lot better, and that’s the vulnerabilities. The way I’ve said it a few times but might be good if you thought about it is looking at a bank vault, and then there are these little cracks, and that enables somebody to get into the bank vault. So what the U.S. is doing is cataloguing all those little cracks instead of telling the bank how to correct those cracks. Problem is other people can find those same cracks.

Snowden: Other people can see the same cracks, yeah.

Bamford: And take the money from the bank, in which case the U.S. did a disservice to the customers of the bank, which is the public, by not telling the bank about the cracks in the first place.

Snowden: Yeah, that’s perfect. And another way to do it is not just cracks in the walls, but it could be other ways in. You can show a guy sort of peeking over the wall, you can see a guy tunneling underneath, you can see a guy going through the front door. All of those, in cyber terms, are vulnerabilities, because it’s not that you have to look for one hole of a specific type. It’s the whole paradigm. You look at the totality of their security situation, and you look for any opening by which you might subvert the intent of that system. And you just go from there. There’s a whole world of exploitation, but it goes beyond the depth of the general audience.

Producer: We can just put them all (multiple conversations; inaudible).

Bamford: Any others?

Snowden: One thing, yeah. There were a couple things I wanted to think about. One was man-in-the-middle, a type of attack you should illustrate. It’s routine hacking, but it’s related to CNE specifically, computer network exploitation. But I think conflating it with cyber warfare helps people understand what it is.

A man-in-the-middle attack is where someone like the NSA, somebody who has access to the transmission medium that you use for communicating, actually subverts your communication. They intercept it, read it, and pass it on, or they intercept it, modify it, and pass it on.

You can imagine this as you put a letter in your mailbox for the postal carrier to pick up and then deliver, but you don’t know that the postal carrier actually took it to the person that you want until they confirm that it happened. The postal carrier could have replaced it with a different letter. They could have opened it. If it was a gift, they could have taken the gift out, things like that.

We have, over time, created global standards of behavior that mean mailmen don’t do that. They’re afraid of the penalties. They’re afraid of getting caught. And we as a society recognize that the value of having trusted means of communication, trusted mail, far outweighs any benefit that we might get from being able to freely tamper with mail. We need those same standards to apply to the internet. We need to be able to trust that when we send our emails through Verizon, that Verizon isn’t sharing them with the NSA, that Verizon isn’t sharing them with the FBI or German intelligence or French intelligence or Russian intelligence or Chinese intelligence.
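In protocol terms, the "trusted mail" property is what message authentication provides. A minimal sketch, assuming the two endpoints already share a secret key exchanged over some trusted out-of-band channel: an intermediary can still read or drop the message, but it cannot rewrite it without the change being detected.

Code: Select all

# Sketch of end-to-end integrity checking that exposes in-transit tampering.
# Assumes sender and recipient already share `key` via a trusted, out-of-band channel.
import hashlib
import hmac

key = b"shared-secret-exchanged-out-of-band"

def seal(message: bytes) -> bytes:
    """Return an authentication tag the recipient can verify."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())

msg = b"meet at noon"
tag = seal(msg)

print(verify(msg, tag))                   # True  -- an honest relay passed it untouched
print(verify(b"meet at midnight", tag))   # False -- a man-in-the-middle rewrote the message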

The internet has to be protected from this sort of intrusive monitoring, or else we’ll lose the medium upon which we all rely as the basis of our economy and our normal life—everybody touches the internet nowadays—and that’s going to have broad consequences that we cannot predict.

Producer: Terrific. I think we ought to keep going and do like an interactive Edward Snowden kind of app.

Snowden: My lawyer would murder me.

Producer: No, you really—(inaudible) used to give classes.

Snowden: Yeah, I used to teach. It was on a much more specific level, which is why I keep having to dial back and think about it.

Producer: You’re a very clear speaker about it.

Snowden: Let me just one more time do the offense and defense and security thing. I think you guys already have enough to patch it together, but let me just try to freestyle on it.

The community of technical experts who really manage the internet, who built the internet and maintain it, are becoming increasingly concerned about the activities of agencies like the NSA or Cyber Command, because what we see is that defense is becoming less of a priority than offense. There are programs we’ve read about in the press over the last year, such as the NSA paying RSA $10 million to use an insecure encryption standard by default in their products. That’s making us more vulnerable not just to the snooping of our domestic agencies, but also to foreign agencies.

We saw another program called Bullrun which subverted the—which subverts—it continues to subvert similar encryption standards that are used for the majority of e-commerce all over the world. You can’t go to your bank and trust that communication if those standards have been weakened, if those standards are vulnerable. And this is resulting in a paradigm where these agencies wield tremendous power over the internet at the price of making the rest of their nation incredibly vulnerable to the same kind of exploitative attacks, to the same sort of mechanisms of cyber-attack.

And that means while we may have a real advantage when it comes to eavesdropping on the military in Syria or trade negotiations over the price of shrimp in Indonesia—which is an actual, real anecdote—or even monitoring the climate change conference, the result is that we end up living in an America where we no longer have a National Security Agency. We have a national surveillance agency. And until we reform our laws and until we fix the excesses of these old policies that we inherited in the post-9/11 era, we’re not going to be able to put the security back in the NSA.

Bamford: That’s great. Just along those lines, from what you know about the project Bullrun and so forth, how secure do you think things like AES, DES, those things are, the advanced encryption standard?

Snowden: I don’t actually want to respond to that one on camera, and the answer is I actually don’t know. But yeah, so let’s leave that one.

Bamford: I mean, that would have been the idea to weaken it.

Snowden: Right. The idea would be to weaken it, but which standards? Like is it AES? Is it the other ones? DES was actually stronger than we thought it was at the time, because the NSA had secretly manipulated the standard to make it stronger back in the day, which was weird, but that shows the difference in thinking between the ’80s and the ’90s. It was the S-boxes. That’s what it was called. The S-boxes were the modification that was made. And today, they go, oh, this is too strong, let’s weaken it. The NSA was actually concerned, back in the time of the crypto-wars, with improving American security. Nowadays, we see that their priority is weakening our security, just so they have a better chance of keeping an eye on us.

Bamford: Right, well, I think that’s perfect. So why don’t we just do the—

Producer: Would you like some coffee? Something to drink?

Bamford: Yeah, we can get something from room service, if you like.

Snowden: I actually only drink water. That was one of the funniest things early on. Mike Hayden, former NSA and CIA director, was—he did some sort of incendiary speech—

Bamford: Oh, I know what you’re going to say, yeah.

Snowden: —in like a church in D.C., and Barton Gellman was there. He was one of the reporters. It was funny because he was talking about how I was—everybody in Russia is miserable. Russia is a terrible place. And I’m going to end up miserable and I’m going to be a drunk and I’m never going to do anything. I don’t drink. I’ve never been drunk in my life. And they talk about Russia like it’s the worst place on earth. Russia’s great.

Bamford: Like Stalin is still in charge.

Snowden: Yeah, I know. It’s crazy.

Bamford: But you know what he was referring to, I think. You know what he was flashing back to was—and I’d be curious whether you’ve actually heard about this or not—

Snowden: Philby and Burgess and—

Bamford: Martin and Mitchell.

Snowden: I actually don’t remember the Martin and Mitchell case that well. I’m aware of the outlines of it.

Bamford: But you know what they did?

Snowden: No.
by cryptostorm_ops
Sun Dec 28, 2014 11:13 am
Forum: general chat, suggestions, industry news
Topic: 1.4 config files: bugtracking, feedback, discussion, questions, etc.
Replies: 24
Views: 31856

Re: 1.4 config files (draft versions posted here)

loop wrote:Sorry about that posted the wrong address it's linux-uscentral.cstorm.pw

Name: linux-uscentral.cstorm.pw
Address: 79.134.255.83
Excellent catch. The HAF entry was actually nonexistent, and our DNS resolvers were providing essentially a default value - which isn't an instance-mapped IP at all.

In doing some testing on this, it appears the TLD registrar's domain management system has an odd behavior when faced with hostnames containing capital letters. The HAF entries were, in some of our internal systems, listed as "UScentral" - when those were added as A records, they simply didn't get processed by the registrar. No error, no warning, the commit screen appeared successful - but the entry was never made.

It's fixed now, and in the future we'll be more careful about assuming that input scrubbing - with respect to capitalisation - is being done on such fields.
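For anyone who wants to sanity-check this themselves: DNS names are case-insensitive by specification (RFC 4343), so mixed-case and lowercase spellings of a hostname should resolve to the same addresses. A quick check along these lines (the hostnames are the ones from this thread; the script itself is just an illustration):

Code: Select all

# Verify that mixed-case and lowercase spellings of a hostname resolve identically;
# DNS names are case-insensitive per RFC 4343, so any difference points at broken tooling.
import socket

def addresses(hostname):
    try:
        return {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except socket.gaierror:
        return set()

lower = addresses("linux-uscentral.cstorm.pw")
mixed = addresses("linux-UScentral.cstorm.pw")

print("lowercase :", lower or "no answer")
print("mixed-case:", mixed or "no answer")
print("consistent:", bool(lower) and lower == mixed)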

Thanks again,

cs ops
by cryptostorm_ops
Sat Dec 27, 2014 5:16 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: US-central exitnode cluster | anchor node = mishigami.cryptostorm.net | updates & "crazy shit"
Replies: 4
Views: 23301

US-central exitnode cluster: mishigami.cryptostorm.net

We have completed the transition to the newly-architected central US exitnode cluster, anchored by the new node mishigami.cryptostorm.net in Chicago.

All prior HAF mappings pointing to 'chili' are now remapped at the TLD resolver to the relevant mishigami instances. Additionally, mishigami is now in the resource pools for all relevant balancers of whatever OS flavour, having replaced chili's IP-space in that regard.

Further, mishigami is fully HAF 1.4 compliant. Thus, for those who want to get a jump on the use of long-term-stable HAF connection settings, the newly-added resolvers for central US are:

  • windows-UScentral.cryptostorm.net
    windows-uscentral.cryptostorm.ch
    windows-uscentral.cryptostorm.nu
    windows-uscentral.cstorm.pw

    linux-uscentral.cryptostorm.net
    linux-uscentral.cryptostorm.ch
    linux-uscentral.cryptostorm.nu
    linux-uscentral.cstorm.pw


The HAF 1.4 pooled resolvers (which currently only include Lisbon and central US-based instances, but which will grow to include all nodes and instances as the 1.4 HAF upgrade continues across the network), are and will remain as:

  • Sticky (TTL-based, DNS randomised) balancer:
    windows-balancer.cryptostorm.net
    windows-balancer.cryptostorm.ch
    windows-balancer.cryptostorm.nu
    windows-balancer.cstorm.pw

    - - - -

    Sticky (TTL-based, DNS randomised) balancer:
    linux-balancer.cryptostorm.net
    linux-balancer.cryptostorm.ch
    linux-balancer.cryptostorm.nu
    linux-balancer.cstorm.pw


Finally, here's a pre-release 1.4 version of the linux config file with the central US connection parameters already included:



And here's the text of the document, for those who prefer not to download a separate file:

Code: Select all

# this is the cryptostorm.is client settings file, versioning...
# cstorm_linux-uscentral_3.conf
# last update date: 28 Dec 2014

# it is intended to provide connection solely to the central USA exitnode cluster
# DNS resolver redundancy provided by TLD-striped, randomised lookup queries
# Chelsea Manning is indeed a badassed chick: #FreeChelsea!
# also... FuckTheNSA - for reals. W00d!


client
dev tun
# client mode, using a routed (layer 3) tun device
resolv-retry 16
# keep retrying hostname resolution for 16 seconds before giving up
nobind
float
# no fixed local port; accept server packets even if the far-side source IP changes mid-session

txqueuelen 686
# expanded packet queue length, to improve throughput on high-capacity sessions

sndbuf 1655368
rcvbuf 1655368
# increase socket send/receive buffer sizes, to improve high-throughput session performance


remote-random
# randomizes selection of connection profile from list below, for redundancy against...
# DNS blacklisting-based session blocking attacks


<connection>
remote linux-uscentral.cryptostorm.net 443 udp
</connection>

<connection>
remote linux-uscentral.cryptostorm.ch 443 udp
</connection>

<connection>
remote linux-uscentral.cryptostorm.nu 443 udp
</connection>

<connection>
remote linux-uscentral.cstorm.pw 443 udp
</connection>


comp-lzo no
# specifies refusal of link-layer compression defaults
# we prefer compression be handled elsewhere in the OSI layers
# see forum for ongoing discussion - https://cryptostorm.ch/viewtopic.php?f=38&t=5981

down-pre
# runs client-side "down" script prior to shutdown, to help minimise risk...
# of session termination packet leakage

allow-pull-fqdn
# allows client to pull DNS names from the server
# not currently used, but may be as part of future leakblock integration

explicit-exit-notify 3
# attempts to notify exit node when client session is terminated
# strengthens MiTM protections for orphan sessions

hand-window 37
# specifies the duration (in seconds) to wait for the session handshake to complete
# a (re)negotiation taking longer than this has a problem, & should be aborted

mssfix 1400
# congruent with server-side --fragment directive

auth-user-pass
# passes up, via bootstrapped TLS, SHA512 hashed token value to authenticate to darknet

# auth-retry interact
# 'interact' is an experimental parameter not yet in our production build.

ca ca.crt
# specification & location of server-verification PKI materials
# for details, see http://pki.cryptostorm.ch

<ca>
-----BEGIN CERTIFICATE-----
MIIFHjCCBAagAwIBAgIJAKekpGXxXvhbMA0GCSqGSIb3DQEBCwUAMIG6MQswCQYD
VQQGEwJDQTELMAkGA1UECBMCUUMxETAPBgNVBAcTCE1vbnRyZWFsMTYwNAYDVQQK
FC1LYXRhbmEgSG9sZGluZ3MgTGltaXRlIC8gIGNyeXB0b3N0b3JtX2RhcmtuZXQx
ETAPBgNVBAsTCFRlY2ggT3BzMRcwFQYDVQQDFA5jcnlwdG9zdG9ybV9pczEnMCUG
CSqGSIb3DQEJARYYY2VydGFkbWluQGNyeXB0b3N0b3JtLmlzMB4XDTE0MDQyNTE3
MTAxNVoXDTE3MTIyMjE3MTAxNVowgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJR
QzERMA8GA1UEBxMITW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBM
aW1pdGUgLyAgY3J5cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMx
FzAVBgNVBAMUDmNyeXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRt
aW5AY3J5cHRvc3Rvcm0uaXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDJaOSYIX/sm+4/OkCgyAPYB/VPjDo9YBc+zznKGxd1F8fAkeqcuPpGNCxMBLOu
mLsBdxLdR2sppK8cu9kYx6g+fBUQtShoOj84Q6+n6F4DqbjsHlLwUy0ulkeQWk1v
vKKkpBViGVFsZ5ODdZ6caJ2UY2C41OACTQdblCqaebsLQvp/VGKTWdh9UsGQ3LaS
Tcxt0PskqpGiWEUeOGG3mKE0KWyvxt6Ox9is9QbDXJOYdklQaPX9yUuII03Gj3xm
+vi6q2vzD5VymOeTMyky7Geatbd2U459Lwzu/g+8V6EQl8qvWrXESX/ZXZvNG8QA
cOXU4ktNBOoZtws6TzknpQF3AgMBAAGjggEjMIIBHzAdBgNVHQ4EFgQUOFjh918z
L4vR8x1q3vkp6npwUSUwge8GA1UdIwSB5zCB5IAUOFjh918zL4vR8x1q3vkp6npw
USWhgcCkgb0wgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJRQzERMA8GA1UEBxMI
TW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBMaW1pdGUgLyAgY3J5
cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMxFzAVBgNVBAMUDmNy
eXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRtaW5AY3J5cHRvc3Rv
cm0uaXOCCQCnpKRl8V74WzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IB
AQAK6B7AOEqbaYjXoyhXeWK1NjpcCLCuRcwhMSvf+gVfrcMsJ5ySTHg5iR1/LFay
IEGFsOFEpoNkY4H5UqLnBByzFp55nYwqJUmLqa/nfIc0vfiXL5rFZLao0npLrTr/
inF/hecIghLGVDeVcC24uIdgfMr3Z/EXSpUxvFLGE7ELlsnmpYBxm0rf7s9S9wtH
o6PjBpb9iurF7KxDjoXsIgHmYAEnI4+rrArQqn7ny4vgvXE1xfAkFPWR8Ty1ZlxZ
gEyypTkIWhphdHLSdifoOqo83snmCObHgyHG2zo4njXGExQhxS1ywPvZJRt7fhjn
X03mQP3ssBs2YRNR5hR5cMdC
-----END CERTIFICATE-----
</ca>

ns-cert-type server
# requires the server certificate to carry an explicit nsCertType=server designation, for MiTM hardening

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel stream cipher methodology
# we are actively testing CBC alternatives & will deploy once well-tested...
# cipher libraries support our choice - AES-GCM is looking good currently

replay-window 128 30
# settings which determine when to throw out UDP datagrams that are out of order...
# either temporally or via sequence number

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements 'perfect forward secrecy' via TLS 1.x & its ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice
# http://ecc.cryptostorm.ch

tls-client
key-method 2
# tls-client: take the client role in the TLS handshake
# key-method 2: the newer data-channel key negotiation method, required for username/password (token) auth

log devnull.txt
verb 0
mute 1
# sets logging verbosity client-side, by default, to zero
# no logs kept locally of connections - this can be changed...
# if you'd like to see more details of connection initiation & negotiation
Thank you,

cs_ops

EDIT: removed out-of-date config file
- cryptostorm_support
by cryptostorm_ops
Sun Nov 30, 2014 4:30 am
Forum: #cleanVPN ∴ encouraging transparency & clean code in network privacy service
Topic: "Advanced Alien Technology" ✨ ✨ ✨
Replies: 2
Views: 27185

"Advanced Alien Technology" ✨ ✨ ✨

{direct link: cryptostorm.ch/alientech}


Yes, this really happened:
"With control of one’s level of encryption, even if someone were utilizing advanced alien technology, they would have a tough time if you changed your encryption settings every time you connect"
[attachment: alientech.png]
ALIEN-LEVEL TECH REQUIRED TO CRACK NEW VPN ENCRYPTION SETUP, MAKERS SAY

BY ANDY ON SEPTEMBER 21, 2013

In the wake of the Edward Snowden NSA revelations the use of encryption has become an extremely hot topic. Demand for anonymity tools has increased rapidly and providers are offering better services to satisfy that demand. Today we bring news of a new VPN client from Private Internet Access, one containing features that if regularly configured correctly would require "advanced alien technology" to crack.

Previously the domain of the particularly Internet savvy, in recent years the issue of online privacy has become a regular talking point in many mainstream tech publications.

The use of VPNs and services such as TOR have proven particularly popular with those looking to keep a low profile online, with the added benefit of enabling users to bypass government-imposed website censorship around the world.

Of course, this year came a watershed moment for privacy when ex-CIA contractor Edward Snowden spilled the beans on the activities of the NSA, revelations which have sent shockwaves around the world. While previously corporations and geeks might have sought to heavily encrypt their communications, now everyone is getting in on the act. Needless to say, security-focused products are enjoying the boom.

For regular file-sharers, security requirements are somewhat different to those looking to whistle-blow or widely share government secrets. Nevertheless, one of the biggest VPN providers in the space will today up the ante with the release of a brand new VPN client. It offers more features than ever before to encrypt users’ communications to a level that will perfectly suit them but disappoint would-be attackers.

TorrentFreak was given access to the new software earlier this week for testing. It’s an upgrade to the current Private Internet Access OpenVPN client and installed without a hitch. It looks very much like the old software until a press of the ‘Advanced’ button reveals a new option titled ‘Encryption’.


PIA Client

“Our application allows our clients to change their encryption and security settings with just a few clicks to any combination they choose,” PIA CEO Andrew Lee told TorrentFreak. “We allow our customers to configure their handshake encryption, data authentication encryption and even the data itself with levels up to AES-256 and RSA 4096!”

With so many options now available, we took a brief look at each and detailed a summary below. We have avoided rocket-science type explanations – those will appear in a follow up article.


DATA ENCRYPTION AES-128 V AES-256 V BLOWFISH

Currently PIA uses 128-bit blowfish. Why should users get excited about the option to use AES-128 / AES-256 over the previous standard?

“As AES-128 is, in general, faster than Blowfish 128 on most modern processors, our customers will enjoy extra speed with this exciting addition,” Lee told us.

Interestingly, the client also allows users not to encrypt their communications at all. PIA confirmed that this setting is there for people who don’t care about encrypting their communications but still want to hide their IP addresses from sites and services they use. This setting also has the side effect of offering the greatest speeds.


DATA AUTHENTICATION – SHA1 OR SHA-256 ?

This hashing technology is used to ensure the integrity and authentication of data sent within a message. SHA1 (160bit) is the fastest option, but is it more desirable than SHA-256 (256bit)?

“SHA1 should be more than fine,” Lee explained. “However, we’re simply offering a stronger alternative for those who may feel it is a necessity.”


HANDSHAKE – RSA-2048 V RSA-3072 V RSA-4096

In 2010 it was reported that RSA 1024 bit encryption had been cracked. Now that PIA offers 2048, 3072 and 4096, is there a preferred setting for optimal efficiency?

“We believe that 2048 bit is sufficient at this point, but in-line with the previous question, we are providing the option for much stronger keysizes if the user feels it is a necessity,” Lee says.

Additionally, the new PIA client also offers elliptic curve cryptography options – ECC-256K1 (in use by BitCoin), ECC-256R1 and ECC-521. With rumors circulating that ECC may be vulnerable to NSA backdoor access, what is the best option?

“To be honest, at this point after the NSA revelations, we do not know exactly who has exactly what capability. In a crazy scenario, it could be possible that RSA is completely broken and ECC is the only viable option. Of course, we do not believe this, but again, we want to give people the choice,” Lee says.


OK, ENOUGH CRYPTO-BABBLE…WHAT’S THE BEST SETUP?

PIA recommends the following setups for speed, safety and best trade-off performance.

- Default Recommended Protection — AES-128 / SHA1 / RSA-2048
- All Speed No Safety — None / None / ECC-256k1
- Maximum Protection — AES-256 / SHA256 / RSA-4096
- Risky Business — AES-128 / None / RSA-2048


Lee says that PIA have included the extra options for those who want to feel extra secure or may want to experiment a little more with cryptography. He adds that for those looking for the ultimate in protection, frequent changes of setup within the client could lead to an almost impossible situation for would-be attackers.

“With control of one’s level of encryption, even if someone were utilizing advanced alien technology, they would have a tough time if you changed your encryption settings every time you connect. But we recommend choosing the encryption strength/mode you desire and sticking with it,” Lee concludes.

Those wanting to learn more about the encryption options should head over to this page. The brand new client can be downloaded here.

TorrentFreak has also asked several other VPN providers to share their thoughts and concerns about encryption after the Snowden revelations. These will be addressed in a follow-up article.

Disclosure: PIA is a TorrentFreak sponsor
by cryptostorm_ops
Fri Jul 18, 2014 6:11 am
Forum: general chat, suggestions, industry news
Topic: Are "Rock Solid" VPN connections ideal?
Replies: 7
Views: 12075

Are "Rock Solid" VPN connections ideal?

We just wanted to write a short note about some of the intermittence issues we are seeing with some clients. While discussing with the team, there was a very interesting point made, and it went like this:
"That's a good thing!"
Now, if you're a victim of occasional dropouts while playing Minecraft and were just about to fight a creeper when your connection went down, you may be excused for thinking, "what holy load of bullshit is this 'good' thing?! I just lost a diamond pick!"

Let us explain...

In the old days of our VPN experiences, we were mostly using VPNs for torrents of some late-90's blockbuster. What we really didn't want was someone seeing us persistently connected to some torrent, so they could finger us and sue us, or just threaten us or whatever. If the connection fell over for a minute, it was sort of tolerable that it occasionally fell back to clear text for a mere few minutes... Sort of. Hopefully.

These days, many, many more people are using VPNs for critical privacy issues. And, coinciding with that, we have set up our servers and configs as best we can to PREVENT any fallbacks to less secure protocols, etc. I am not sure it is perfect, but it's way, way better than 90% of the stuff out there. Maybe 99%. Do you see where this is going yet? :)

Well, I guess the result of not allowing fallback to lower security - or no security - is that when the internet (or our servers? We're still looking into root causes) starts dropping packets or whatever, and just cannot sustain that connection, OpenVPN is forced to drop it, as we have given it no other options. It waits until conditions are right to re-establish a solid, 100% secure connection, THEN it connects.

In the past, some of us here have personally worked for VPN companies that had what appeared to be a "rock solid connection" - we never really noticed a blip. Occasionally, however, a check of the IP would reveal, to our surprise, that we were surfing "in the clear", though a quick restart of the client would get us back online securely. We'd scratch our heads and wonder how long things were left ... hanging out.
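A trivial way to catch that "hanging out in the clear" condition is to poll a what's-my-IP endpoint and compare the answer against the exit IP you expect. A rough sketch, with a placeholder expected address and whichever lookup service you happen to trust:

Code: Select all

# Minimal "am I still exiting through the VPN?" check.
# EXPECTED_EXIT_IP is a placeholder; CHECK_URL can be any plain-text what's-my-IP service you trust.
import urllib.request

EXPECTED_EXIT_IP = "203.0.113.45"      # placeholder exitnode address
CHECK_URL = "https://api.ipify.org"    # returns your public IP as plain text

def current_public_ip(timeout=5.0):
    with urllib.request.urlopen(CHECK_URL, timeout=timeout) as response:
        return response.read().decode().strip()

ip = current_public_ip()
if ip == EXPECTED_EXIT_IP:
    print("still exiting via the expected node")
else:
    print(f"WARNING: traffic is exiting via {ip} - you may be surfing in the clear")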

So, the point is this - we are very, very sorry for the stability issues on a couple of servers. We will address them, as we have in the past. In the meantime, be reassured that the good news is it's really a side effect of us doing things in a no-compromise sort of way. We could indeed fix this quickly with a "fallback" bandaid - but we won't.
by cryptostorm_ops
Thu Jul 03, 2014 10:17 pm
Forum: general chat, suggestions, industry news
Topic: XKEYSCORE source code
Replies: 8
Views: 16777

XKEYSCORE source code

Shoving a copy here just because I hate when links break. Via here:

Code: Select all

// START_DEFINITION
/**
 * Fingerprint Tor authoritative directories enacting the directory protocol.
 */
fingerprint('anonymizer/tor/node/authority') = $tor_authority
  and ($tor_directory or preappid(/anonymizer\/tor\/directory/));
// END_DEFINITION

// START_DEFINITION
/*
Global Variable for Tor foreign directory servers. Searching for potential Tor
clients connecting to the Tor foreign directory servers on ports 80 and 443.
*/

$tor_foreign_directory_ip = ip('193.23.244.244' or '194.109.206.212' or
'86.59.21.38' or '213.115.239.118' or '212.112.245.170') and port ('80' or
'443');
// END_DEFINITION

// START_DEFINITION
/*
this variable contains the 3 Tor directory servers hosted in FVEY countries.
Please do not update this variable with non-FVEY IPs. These are held in a
separate variable called $tor_foreign_directory_ip. Goal is to find potential
Tor clients connecting to the Tor directory servers.
*/
$tor_fvey_directory_ip = ip('128.31.0.39' or '216.224.124.114' or
'208.83.223.34') and port ('80' or '443');
// END_DEFINITION


// START_DEFINITION
requires grammar version 5
/**
 * Identify clients accessing Tor bridge information.
 */
fingerprint('anonymizer/tor/bridge/tls') =
ssl_x509_subject('bridges.torproject.org') or
ssl_dns_name('bridges.torproject.org');

/**
 * Database Tor bridge information extracted from confirmation emails.
 */
fingerprint('anonymizer/tor/bridge/email') =
email_address('bridges@torproject.org')
  and email_body('https://bridges.torproject.org/' : c++
  extractors: {{
    bridges[] = /bridge\s([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}):?([0-9]{2,4}?[^0-9])/;
  }}
  init: {{
    xks::undefine_name("anonymizer/tor/torbridges/emailconfirmation");
  }}
  main: {{
    static const std::string SCHEMA_OLD = "tor_bridges";
    static const std::string SCHEMA_NEW = "tor_routers";
    static const std::string FLAGS = "Bridge";
    if (bridges) {
      for (size_t i=0; i < bridges.size(); ++i) {
        std::string address = bridges[i][0] + ":" + bridges[i][1];
        DB[SCHEMA_OLD]["tor_bridge"] = address;
        DB.apply();
        DB[SCHEMA_NEW]["tor_ip"] = bridges[i][0];
        DB[SCHEMA_NEW]["tor_port_or"] = bridges[i][1];
        DB[SCHEMA_NEW]["tor_flags"] = FLAGS;
        DB.apply();
      }
      xks::fire_fingerprint("anonymizer/tor/directory/bridge");
    }
    return true;
  }});
// END_DEFINITION


// START_DEFINITION
/*
The fingerprint identifies sessions visiting the Tor Project website from
non-fvey countries.
*/
fingerprint('anonymizer/tor/torpoject_visit')=http_host('www.torproject.org')
and not(xff_cc('US' OR 'GB' OR 'CA' OR 'AU' OR 'NZ'));
// END_DEFINITION


// START_DEFINITION
/*
These variables define terms and websites relating to the TAILs (The Amnesic
Incognito Live System) software program, a comsec mechanism advocated by
extremists on extremist forums.
*/

$TAILS_terms=word('tails' or 'Amnesiac Incognito Live System') and word('linux'
or ' USB ' or ' CD ' or 'secure desktop' or ' IRC ' or 'truecrypt' or ' tor ');
$TAILS_websites=('tails.boum.org/') or ('linuxjournal.com/content/linux*');
// END_DEFINITION

// START_DEFINITION
/*
This fingerprint identifies users searching for the TAILs (The Amnesic
Incognito Live System) software program, viewing documents relating to TAILs,
or viewing websites that detail TAILs.
*/
fingerprint('ct_mo/TAILS')=
fingerprint('documents/comsec/tails_doc') or web_search($TAILS_terms) or
url($TAILS_websites) or html_title($TAILS_websites);
// END_DEFINITION


// START_DEFINITION
requires grammar version 5
/**
 * Aggregate Tor hidden service addresses seen in raw traffic.
 */
mapreduce::plugin('anonymizer/tor/plugin/onion') =
  immediate_keyword(/(?:([a-z]+):\/\/){0,1}([a-z2-7]{16})\.onion(?::(\d+)){0,1}/c : c++
    includes: {{
      #include <boost/lexical_cast.hpp>
    }}
    proto: {{
      message onion_t {
        required string address = 1;
        optional string scheme = 2;
        optional string port = 3;
      }
    }}
    mapper<onion_t>: {{
      static const std::string prefix = "anonymizer/tor/hiddenservice/address/";

      onion_t onion;
      size_t matches = cur_args()->matches.size();
      for (size_t pos=0; pos < matches; ++pos) {
        const std::string &value = match(pos);
        if (value.size() == 16)
          onion.set_address(value);
        else if(!onion.has_scheme())
          onion.set_scheme(value);
        else
          onion.set_port(value);
      }

      if (!onion.has_address())
        return false;

      MAPPER.map(onion.address(), onion);
      xks::fire_fingerprint(prefix + onion.address());
      return true;
    }}
    reducer<onion_t>: {{
      for (values_t::const_iterator iter = VALUES.begin();
          iter != VALUES.end();
          ++iter) {
        DB["tor_onion_survey"]["onion_address"] = iter->address() + ".onion";
        if (iter->has_scheme())
          DB["tor_onion_survey"]["onion_scheme"] = iter->scheme();
        if (iter->has_port())
          DB["tor_onion_survey"]["onion_port"] = iter->port();
        DB["tor_onion_survey"]["onion_count"] = boost::lexical_cast<std::string>(TOTAL_VALUE_COUNT);
        DB.apply();
        DB.clear();
      }
      return true;
    }});

/**
 * Placeholder fingerprint for Tor hidden service addresses.
 * Real fingerpritns will be fired by the plugins
 *   'anonymizer/tor/plugin/onion/*'
 */
fingerprint('anonymizer/tor/hiddenservice/address') = nil;
// END_DEFINITION


// START_DEFINITION
appid('anonymizer/mailer/mixminion', 3.0, viewer=$ascii_viewer) =
        http_host('mixminion') or
        ip('128.31.0.34');
// END_DEFINITION
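For readers squinting at the onion plugin above: the heart of it is just a regular expression run over raw traffic. The Python approximation below (not the deployed code, just a translation of the pattern) shows what the 16-character v2 .onion matcher actually pulls out:

Code: Select all

# Approximate Python translation of the .onion pattern from the plugin above
# (v2 hidden-service addresses: 16 base32 characters), for illustration only.
import re

ONION = re.compile(r'(?:([a-z]+)://)?([a-z2-7]{16})\.onion(?::(\d+))?')

sample = "GET http://expyuzz4wqqyqhjn.onion:80/ HTTP/1.1"
match = ONION.search(sample)
if match:
    scheme, address, port = match.groups()
    print("scheme :", scheme)    # http
    print("address:", address)   # expyuzz4wqqyqhjn (the old torproject.org v2 onion)
    print("port   :", port)      # 80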

by cryptostorm_ops
Thu Jun 12, 2014 6:27 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: New Montreal Server - "Maple" - In Beta
Replies: 6
Views: 12706

New Montreal Server - "Maple" - In Beta

note: folks seeking the most current client configuration files need not wade through this entire discussion thread! The current versions are always posted in a separate, dedicated thread, and will be continuously updated there. Continue reading this thread if you're curious about the details of the config files, want to see earlier versions of them, or have comments/feedback to provide - thanks! :thumbup:

If you have the 1.0 windows client, click that new "update" button to see the new node. Enjoy it (at this time there's probably only a couple people on it. :P )

This is a beta "raw" .conf file I just hacked together. Please tell me if it works! And if it doesn't, I guess. :P




Now, the bigger question is, "why add another Canada node??" - well, we're just hedging our bets, as we may be heading toward irreconcilable differences with our other server provider. Not sure at this point, but we wanted to be proactive. Also, we're in the process of adding another US server in the next 24 hours or so for similar reasons (we also have a fair amount of traffic moving through North America at the moment) ... THEN we will finally spread out to the requested Asia and other geo regions.
by cryptostorm_ops
Tue Apr 29, 2014 5:48 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's Post-Heartbleed Certificate Upgrade Trajector
Replies: 79
Views: 191529

routers & routing

I wanted to step in and provide some clarification on the functionality of routers (or "residential gateways," which is the au courant nomenclature for residential routers in some locales). You say that:
"they ash for local IP/DNS, my router tunnels them to a foreign one...
Unless you're running cryptostorm sessions directly from your router (which you may be doing, and which we strongly encourage), your router isn't being asked about "tunnelling" by anything. Routers, by definition, route packets: they look at OSI layer 3 - only! They check the "destination IP : destination port" metrics and make decisions about where to send packets based solely on that information, not on any layer 4 (or higher) payload. For a residential router connected to cryptostorm, it's a pretty easy process: "oh, look, another packet headed for IP address xxx.xxx.xxx.xxx {which is the address of a given instance on a given cryptostorm exitnode, somewhere]... guess I'll route it that-a-way." :-)

  • (many residential routers also do some DHCP-stuffs as an ancillary service, which is really OSI 2 / ARP more than it is a "traditional" function of a standard router... and some decide to be firewalls in their spare time, which is often a terrible idea for a host of reasons - but in any case, they are almost universally state-less in their actions and don't do any sort of DPI on packet contents along the way)
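If it helps to see that "layer 3 only" point concretely, here's a toy forwarding decision in Python (the prefixes and interface labels are made up): the lookup consults nothing but the destination address, longest matching prefix wins, and the payload is never inspected.

Code: Select all

# Toy forwarding decision: a router matches only the destination address against
# its table (longest prefix wins) and never looks at the payload above layer 3.
import ipaddress

ROUTES = [  # (prefix, where to send it) -- invented values
    (ipaddress.ip_network("0.0.0.0/0"),      "wan0 (default route, toward the exitnode)"),
    (ipaddress.ip_network("192.168.1.0/24"), "lan0 (local delivery)"),
]

def forward(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, hop) for net, hop in ROUTES if dst in net]
    _, hop = max(matches, key=lambda entry: entry[0].prefixlen)
    return hop

print(forward("192.168.1.50"))   # stays on the LAN
print(forward("198.51.100.7"))   # everything else: out the default route, payload unread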


Thinking about what routers really do helps explain why a successful MiTM attack requires full ("oracular") control of routing parameters somewhere along the path packets must travel (in both directions): the network member's local machine already knows which IP it wants to send packets to - a legitimate cryptostorm instance/node - and something "upstream" would have to have fully corrupted routing table data in order to send those packets elsewhere, rather than to the intended instance/node.

As you say, a local ISP could exert oracular control over packets in this way... but if it's doing so, all sorts of other attack vectors also open up, and those are very difficult to control with full confidence. This has been mentioned earlier by my colleagues, and going into that subject in depth is, I think, beyond the scope of this thread (might be interesting in a parallel thread). In short, if someone can poison the routing of every packet coming and going to a given machine - either via direct pwnage of routing hardware or via full DNS injection control - the challenge of creating secure network communications is very much, as we say, "nontrivial." Impossible? Perhaps not... but almost every scenario requires a discrete chunk of out-of-band comms to be provably secure. Which is worth remembering, always.

As to the automation question, you state that...
"...the government {has had} this capability built in, at every ISP... they've had it for many years."
I think you are conflating apples and oranges here. The automated attack vectors on "VPN" sessions disclosed by Snowden thus far relate to (as far as I know) two specific sorts of technical foundations: IPsec, and PPTP. IPSec, as we now know, is a standard that has been poisoned internally by NSA scheming (via NIST) to make it so bloody complex, intractable, and internally contradictory as to be all but impossible to implement in-the-wild in a secure manner. Anyone who has reviewed the standard firsthand can see this with their own eyes; it's a bloody mess. Unfortunately, some "VPN companies" offer naive IPsec-based services that are, as a result, terrifically insecure from top to bottom. It might look good in marketing literature to "support" IPsec, but personally I'd not feel comfortable configuring such a connection for my own use... and I do this for a living. How a non-technical (or even moderately technically sophisticated) "customer" could ever be expected to do so is beyond my understanding.

Anyway, that's how the NSA was bulk-breaking IPsec VPN sessions (and most certainly still is): default misconfigurations. As to PPTP... well, what can we really say? As far back as 2008, anyone who was "trusting" PPTP for security purposes - ten years after Schneier's famous paper exposing its deep flaws - was not exactly taking prudent steps. Any "VPN company" that was offering such service, back then or today, is criminally negligent (to use Graze's apt phrase): when Peter Sunde admitted that IPredator's PPTP-based service was utterly insecure and intended merely as a "political statement," he was at least speaking the truth... why he kept offering that broken service - and charging "customers" for it! - for years afterwards is anyone's guess.

Automating an attack against cryptostorm exitnodes requires replicating the exitnode from top to bottom. This is not deeply challenging - we publish, as a matter of security fundamentals, all of our server-side configuration and parameterizations, so it's easy enough to spin up an exitnode. But, that's not something that's automated by the NSA - let's be frank about that. We're growing like the proverbial healthy weed... but we're still small in the larger ecosystem out there. The TAO might spin up a fake cryptostorm node, but it's not built into a GUI at NSA headquarters just yet.

Too, that effort will need to keep track of current hostname <--> IP mappings, realtime, to ensure it stays correctly routed. Again, not impossible - those data are not "secret" in our model... but not trivial, and certainly not something that'd be worth automating with a point-and-click system.

I know that the dev folks are in process of testing out a new widget with the new cert materials embedded, and I believe a whack of additional, newly-spawned server instances (with new cert materials) are due to arrive imminently. So it's not that this isn't being "taken seriously" by our team; rather, it's that we're approaching it soberly, carefully, and with an eye towards full network functionality. Not in a panic.

Which is the way we tend to do things, around here.

With respect,

  • ~ cryptostorm_ops
by cryptostorm_ops
Wed Apr 23, 2014 5:21 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's Post-Heartbleed Certificate Upgrade Trajector
Replies: 79
Views: 191529

Re: cryptostorm's Post-Heartbleed Certificate Upgrade Trajec

Well, I've done my best - along with several colleagues - to explain the relevant distinctions in modeling threats related to the heartbleed vulnerability. Others have explained the upgrade path put in place to cycle all server-side certificate material - a step being undertaken, in an abundance of caution, to ensure ongoing hardening against second-order threat vectors related to server-side spoofing (often - with varying levels of accuracy - referred to as "MiTM" or, only tangentially related, "session hijacking").

I've also done my best to explain the severe routing-based challenges involved in making real-world use of an exploit related to a compromised server-side authentication certificate, although I do think my efforts in this regard have been less than stellar. It's a difficult subject to explain, in mere words, without use of diagrams or topological models to help set the foundational elements in place. The core distinction between packet- and circuit-switched network topologies (at a logical level) is essential in understanding the challenge of mounting a successful server-side spoof in this scenario. Because every packet coming, and going, from the cryptostorm network is hard-coded at the OSI 3 layer with a known-good cryptostorm IP address, a "session hijack" must also entail some mechanism for enabling mis-routing of all packets - coming and going - for a given legitimate session. This is a non-trivial challenge, as anyone familiar with routing mechanics will surely understand.

Beyond these efforts, I am not convinced that further benefit is gained from going back and forth in a "let's all panic" - "well, perhaps not" - cycle. There is a definite lack of technical detail in replies to efforts made in explaining the details of the cryptostorm model as it relates to this particular vulnerability. I know that the mainstream media outlets have jumped on the heartbleed bandwagon to (justifiably, in many cases) hype it as the Next Big Thing... but that doesn't mean that technical professionals suddenly stop caring about the actual packets, actual code, and actual cryptographic foundations of real-world security modelling and deployment. For us, it's still the details that matter - not the generalisations about heartbleed itself.

Finally, let me make one personal observation: Mullvad, like every other "VPN service," had deployed non-PFS asymmetric key exchange before heartbleed hit. Our cryptostorm model has never made this tragic error. In contrast, every "customer" of Mullvad now knows that any and all stored packet traffic - going back several years - is perfectly vulnerable to decryption if anyone bothered to snag the private keys from their servers during this time. I can only imagine that such customers are not sleeping too soundly at this point in time, knowing that their "secure" traffic was - and is - anything but. It's all fine and good that they've now issued new private keys... but those cows left the barn a long time ago. They're out in the field, happily eating grass. Putting new locks on the barn door is more than a bit late.

So, if you - or anyone else - wants to switch to a "VPN service" like Mullvad that has proven themselves incompetent in their selection of cryptographic primitives... well, I can only wish you the best of luck. I would not make such a mistake - but I'm perhaps close enough to the code to fully understand what a catastrophic failure it is to deploy poorly-suited cryptographic models in real-world security contexts. I don't care about the marketing hype, or whether Ars (again) falls for a publicity-hype claim of "proof of concept" code that is not publicly released. What's the CVE on that vaporware, eh?

We, on this team, stand by our work to ensure that real security threats are first and foremost in the priority queue. We stand by our record of making solid decisions about real security threats, before it ends up on the BBC news and it's prime time to hype "fixes" to said threats. Perhaps ironically, we have no "fix" to hype... because we pre-emptively avoided the problem in the first place. In this world - real security systems - avoiding problems is top priority. It doesn't make for good hype in the Ars or HN context, perhaps... but it's what our members expect from us.

And we deliver.

As always, ongoing discussion is appreciated and closely followed by our core tech team. However, repetitive exhortations towards generalised panic aren't, in sum, particularly useful. Just like PoC code that isn't published, vague panic doesn't move anyone in the direction of functionally better security.

We don't live in a world of hype, or marketing-catnip nonsense. We live in the world of reality, and real attacks. It can be frustrating for some folks, I understand, when compared to hype-centric "VPN services" that jump on whatever bandwagon in an attempt to fool customers into trusting them. In our world, trust is earned - not given. We earn trust by avoiding disasters (such as non-PFS crypto in network security deployments), not by bragging about how fast we jump on the bandwagon after they've been exposed by others.

Thank you,

  • ~ cryptostorm_admin
by cryptostorm_ops
Fri Apr 18, 2014 4:37 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's Post-Heartbleed Certificate Upgrade Trajector
Replies: 79
Views: 191529

details matter; boring... but still true

This is becoming something of a tedious exercise in repetition of the same point over and over. And, unfortunately, some of the press reporting linked to in your post does an excellent job of confusing two related - but quite distinct - issues. I will once again explain the core distinction, and beyond that I do not see it as particularly useful to keep repeating this over and over. Either we care about facts, or we can just run about in a misinformed panic, helping nobody in the process.

First, let me quote from the OpenVPN team themselves. They are discussing their commercial "Server" product specifically, which is dispositive:
"Only the server that your client connects to could possibly exploit this vulnerability, and even then it is unlikely because we use Perfect Forward Security {sic} and TLS-auth on top of the SSL connection." [the conventional phrase is "perfect forward secrecy"]
As I have explained repeatedly, the use of PFS involves generation of ephemeral keys for use in the asymmetrically-secured symmetric key exchange. This is explained, as well, in detail in our crypto framework thread elsewhere on this forum. There are no "permanent keys" used in our cryptographic model. There never have been. This is not an "upgrade" to our framework; it has been the case since day one.

Those who designed our model (namely, our founding team and outside cryptographic advisors) made this decision specifically in order to protect against any potential exfiltration of "private (server) keys," via any method or vulnerability, at any point in the future. Panicking about someone "stealing our keys" makes about as much sense as worrying someone will steal a painting from a museum... a museum that does not house the painting, and never has. It borders on silly.

There are extensive resources online explaining the fundamentals of "PFS" (we dislike the acronym and always have, preferring "transient keying" ourselves) and how it works from a mathematical perspective. I don't see the value in an attempt on my part to re-explain it here, as I'll do a poor job compared to others who have already done it quite well.

To "steal" a transient, discrete-logs coordinated, "private key" by exfiltrating gigabytes of raw memory dump via heartbleed - within the 20 minute window during which re-keying occurs - is not only impractical, it's functionally impossible given bandwidth constraints (it'd melt down any of our nodes, in the attempt: a firehose of malformed packets fired at one machine in a short timespan). The theft of such a "private key" would also be useless outside of that 20 minute window, on that particular machine: hence the use of the word "transient."

We cannot "issue all new encryption keys" - as you vehemently demand... because we do not make use of persistent encryption keys in our security model. Please study that sentence, if it is at first unclear, as it is quite dispositive to your anger.

With respect to server identification certificates, as has already been extensively discussed in this thread, we are in process of re-issuing new certificate materials across the network. This is being done in a professional, staged, calm manner: not in a frenzied, sloppy, hare-brained panic. That is how we do things, here.

Passion is wonderful, but being passionately wrong does not help. Thank you for reading this post carefully.

  • ~ cryptostorm_admin
by cryptostorm_ops
Wed Apr 16, 2014 7:38 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's Post-Heartbleed Certificate Upgrade Trajector
Replies: 79
Views: 191529

Re: cryptostorm's Post-Heartbleed Certificate Upgrade Trajec

A condensed reply to a number of questions raised:

1. We deploy very few custom-coded components in our security model - this is not for lack of motivation on our part, but rather because bespoke cryptography is a catastrophically wrong-headed approach to security. When we refer to things such as "our implementation of HMAC validation," we mean exactly that: an implementation, not some harebrained effort to re-code the concept of HMAC validation from scratch. In this case, yes, the HMAC procedure in question is one embedded in the OpenVPN framework; our specific implementation details are covered extensively in other threads, here on the forum, relating to our cryptographic framework more broadly.
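For anyone who wants a concrete picture of what per-packet HMAC validation means in practice, here's a minimal Python sketch - the key and packet bytes are illustrative only, not our deployed parameters, and the real work is done inside OpenVPN's tls-auth machinery rather than by hand like this:

# Sketch of per-packet HMAC validation: packets failing the check are dropped.
import hmac
import hashlib
import os

hmac_key = os.urandom(64)  # pre-shared HMAC key (illustrative)

def sign(packet):
    return hmac.new(hmac_key, packet, hashlib.sha512).digest() + packet

def verify(wire):
    """Return the packet if its HMAC checks out, else None (drop silently)."""
    tag, packet = wire[:64], wire[64:]
    expected = hmac.new(hmac_key, packet, hashlib.sha512).digest()
    return packet if hmac.compare_digest(tag, expected) else None

wire = sign(b"example control-channel packet")
assert verify(wire) == b"example control-channel packet"
assert verify(b"\x00" * 64 + b"tampered") is None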

2. We have been working through a full whitepaper describing our "Hostname Assignment Framework" (HAF, for short) which should be ready for public review shortly. It describes, in detail, the TLD-based redundancy deployed within the HAF. Our apologies this has not yet been fully explored in a public posting; the HAF has been evolving so quickly since our launch that our efforts to 'snapshot' it always end up running behind the most current updates and extensions. At this point, it has stabilised enough that our final draft of the HAF whitepaper looks about ready to post for full community review.

3. Often, we receive concurrent feedback that our explanations are "too technical/verbose" or "short on details" and, unfortunately, our skills are not sufficient to meet both those needs simultaneously. We do our best. However, what we've found is that, if an early explanation ends up being short on details for folks participating in a specific discussion, a request for more detail is a great way for us to narrow in on specific areas of interest. Otherwise, we may well end up explaining fine-grained details of areas of little interest to community members, and unintentionally skipping over those that are. Thus, an iterative process - post, reply, post - is both efficient and mutually respectful.

4. We strive towards explanations in many posts here that abstract away technical detail in favor of systems-level understanding and explanation. This is a feature, not a bug. Anyone who has read widely within raw RFCs will appreciate how technical detail can swamp out any sort of essential understanding, all too easily, in written explanations. We can always add technical detail, as requested (per above) - but a systems-level analysis of why given decisions have been made with respect to the security model or implementation framework is always our primary goal. We're not always successful on the first try, of course, but it remains the target towards which we strive.

5. There are some fundamental issues with regard to what a "Man in the Middle" attack is - and is not - that likely go beyond the scope of this thread to fully explain. Recall, however, that - definitionally - "MiTM" is distinct from passive surveillance of in-transit packets; MiTM involves decryption/read/re-encryption in one form or another... an actual, live box pulling packets and making substantive transformations of the form and substance of those packets, realtime. Nothing about Lavabit was related in any way to "MiTM" - that's an entirely different issue, in topological terms.

6. We agree with regard to ISP subversion as a known, legitimate attack scenario. As discussed in detail in our earlier post, this is an issue that transcends heartbleed, and is perhaps worthy of its own dedicated thread.

7. We are quite aware of the degree of hatred directed at this project, and at our project team members - this is not a new finding for us, after many years of providing no-compromise network security service. Indeed, we've had team members do prison time as a direct result of their work on this project, more than once. We model all threats against network-level security with this in mind, always. This is not "security theatre," and we are perhaps the last team that would fail to recognize that essential fact.

8. That said, real-world threat modelling and security assessment always - by definition - requires prioritisation of threats. Simply categorizing all potential threats as "serious" means no specific class of threats is actually taken seriously; this may seem counter-intuitive from a lay perspective, but in real-world security administration, these prioritisation decisions are life and death. We are constantly reviewing, studying, and classifying threats to network-level security. Those threats range from known, documented, in-the-wild attack vectors to purely hypothetical, non-production scenarios that might be interesting from an academic perspective but are unlikely to be practically relevant in the near-term. Obviously, there is as much art as science in such prioritisation decisions - we do not claim otherwise, and know of no credible security professionals who do.

Just as CVE "severity" rankings are, in the end, entirely subjective, so is threat vector ranking. We make as much of this process public as possible, not because we think we are always right but rather exactly the reverse: we deeply value community insight and feedback on our threat vector mappings, and as such we work hard to bring that input to the surface.

9. Heartbleed is a real threat. However, because we do not and have never used persistent private keys in our cryptographic model, the vast majority of discussions regarding the "dangers" of heartbleed are irrelevant to our security model itself. This is a good thing. It does not mean that we are immune entirely to heartbleed-related concerns, of course; that's why we're addressing, here in public, the certificate authentication issue with respect to heartbleed. However, in doing so, there is no benefit to a "sky is falling" approach to the analytic side of our discussions. Were we - like the vast majority of lower-tier "VPN services" - relying on persistent cryptographic keys, things would be vastly different. We are not in that category, and our absence from that category is (of course) not an accident. Good security architectures matter.

10. It's worth recalling that an active MiTM on IP-routed packet streams requires not only transient root-level access to a chokepoint, but reliable, persistent root access to router resources at that chokepoint over time. Of course, the NSA does have some such access in some geographical nodes. However, a simple traceroute analysis of packets going to and from our exitnodes, from a member's local machine, will show that route details are rarely if ever static over time; they change, evolve, and 'drift' as network characteristics themselves change over time. Internet-based packet routing is inherently quasi-stochastic; from BGP to RIP, routers advertise new route details (and metrics) continuously. Active MiTM is made much more challenging in such environments, for IP-routed traffic. This is a topological reality.
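A rough way to see that 'drift' for yourself - the hostname here is a hypothetical placeholder, and this assumes a Unix-ish box with the standard traceroute utility installed:

# Run traceroute twice against a node and diff the hop IPs; paths are rarely
# byte-for-byte identical over time. Hostname below is a placeholder.
import re
import subprocess

HOST = "exitnode.example.net"  # substitute a real node hostname

def hops(host):
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True, check=False).stdout
    return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, flags=re.M)

first, second = hops(HOST), hops(HOST)
for i, (a, b) in enumerate(zip(first, second), start=1):
    if a != b:
        print(f"hop {i} drifted: {a} -> {b}")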

11. Again, we do not use persistent "private keys" and never have; we deploy DHE-based PFS. What we do use in our security model is RSA-based certificate authentication of server identity, within our network. These are related, but quite distinct, concepts. The former would be a fatal, catastrophic risk given heartbleed; the latter does open up a new attack surface for a heartbleed-based campaign to enable active MiTM "evil exitnodes" as a method of achieving visibility into encrypted network traffic. No passive "bump on the wire" attack is congruent with certificate-based attack vectors; exploiting them requires an active MiTM.

12. The topological reality of IP-based routing - its inherent stochasticity - means that oracular MiTM must take place at the (logical) edge of a network session between two geographically-distant nodes. If a car were driving from New York to Washington DC, and the goal were to block the car from getting there, attempting to set up roadblocks in the middle of the USA would be inefficient to the point of infeasibility; there are no "chokepoints" to target. In contrast, finding the one or two entry paths into/out of each city provides obvious, effective chokepoint locations. This is the same topological issue we are describing with respect to oracular MiTM.

13. It is entirely correct, as we've said before, that a colo facility could be coerced into providing corrupted routing table updates (or simply root access) to certain nation-state attackers; this is a known, legitimate threat vector. We do not assume that any colo we use would "tell us" if such took place - indeed, they might not even know, in the event of a surreptitious rooting. There is no "magic bullet" to resolve this issue, of which we are aware. Good certificate-based verification of server identity is one of the core defenses against it - which is why this issue is real, and not merely of academic or theoretical interest. There are some inescapable signatures we'll see, server-side, in the event of corrupted local (within-colo) routing (or ARP-based) table attacks: we'd see packet traffic coming in from sessions that appear to be inside the local subnet! This would be highly unusual, and indeed our development team is exploring ways to automate identification of such aberrant network sessions. Thus far, that's our most promising avenue of active defense against this class of attack vectors; we're not ready to deploy the automated version, but we are taking pains to manually scan network exitnodes for such aberrant network sessions. None have, as yet, been seen.
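To give a flavour of what that manual scan looks for, here's a bare-bones sketch - the subnet and session list are made-up placeholders, and our actual tooling is more involved than this:

# Flag any session whose source address claims to come from inside the
# exitnode's own colo subnet - a classic signature of local route/ARP games.
import ipaddress

LOCAL_SUBNET = ipaddress.ip_network("203.0.113.0/24")  # placeholder colo subnet

active_session_sources = ["198.51.100.7", "203.0.113.55", "192.0.2.10"]

for src in active_session_sources:
    if ipaddress.ip_address(src) in LOCAL_SUBNET:
        print(f"ABERRANT: session sourced from inside the local subnet: {src}")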

14. Clever ideas for out-of-band distribution of public certificate materials are very much solicited, and encouraged! This is a problem that goes back right to the core of public-key-based cryptography itself: does anyone else remember "key exchange parties?" :-) Also, it's why quantum key distribution (QKD) systems are not only of theoretical interest, but in fact are used by banking institutions today. Good key exchange (which is covalent with good certificate-identity validation) is crucial to good public key cryptography.

15. In terms of tokens, for the integrity of the authentication model all tokens must work across all exitnodes and all clusters: to break that congruence would be to break the core of the authentication model itself. That said, we're open to all ideas with respect to certificate notification.

16. Naturally, we can say that any conceivable attack can be "automated" - in the same way we can say that a program can be written to undertake any possible series of computations or algorithmic transformations. However, we also recognize that "degree of difficulty in automating an attack" is a real metric - just as the degree of compressibility of an algorithm (the algorithm's "Kolmogorov complexity") is a real metric. We do recognize that this metric is scalar and not binary - there's no such thing as "automatable" and "non-automatable" attacks. So when we say something "cannot be automated" we mean, more formally, that the degree of difficulty in automating the attack is such that it would be impractical to do so relative to simply mounting the attack manually on an as-needed basis. Any programmer will be familiar with cases in which efforts to automate a given task become so difficult as to dwarf the effort required to simply do the task manually; such circumstances tend to evolve into Rube Goldberg-esque algorithmic contraptions, lacking in both elegance and robustness. Active MiTM attack vectors are, by definition, bespoke in nature; certainly some components of the attack methodology can be automated, but the actual in-process deployment of a session-interception box between a given network member and a given exitnode cluster must be overseen, manually, by an expert systems operator. This is not a "point and click" attack, as it depends crucially on the current-state topological realities of the network sessions themselves. Saying this doesn't mean the attack is "impractical" or "unlikely" - not at all. Rather, it simply means that the attack does not scale smoothly to global levels... nor, perhaps, even to a full-network level within our darknet. The implications of this flow out into our defensive stance with respect to this attack vector, as well: understanding the need for manual tuning of such attacks allows us to in-build extra layers of obfuscation and/or complexity with which the attacker must come to terms for a successful attack.

17. We are in complete agreement that any member information exfiltrated as a result of such an attack by an entity such as the NSA would most certainly end up being loaded into automated, large-scale database systems for tracking individual citizens. This is a distinct issue from the question of automating the attack itself, however.

18. An excellent exercise for those interested in understanding the challenges and opportunities of threat modelling is to ask oneself the following question: if I were an attacker targeting this system, what paths would I tend to follow, and which areas would look most tempting in terms of the cost/reward metrics implicit in any large-scale surveillance architecture? This kind of thinking, of course, comes more easily if one has firsthand experience building, designing, or administering large-scale information systems... but such is not strictly required for the analytic technique to be quite useful. It is a necessary step towards threat modelling, although of course not sufficient (clever attackers may well have clever techniques of which we are not aware): we must identify the "low hanging fruit" attacks and protect against them, first and foremost, as a core element of our attack surface minimization methodology. Conversely, attacks that seem fiendishly complex and/or challenging to pull off against real-world systems are perhaps less likely to be seen in the wild than those which look obvious and easy to deploy.

19. We are again in agreement that a subset of our network members literally put their lives on the line when they entrust our framework with their network security needs. This is both a humbling responsibility, and an inescapable reality for our team. We are not new to this level of operational pressure. We are not perfect - nobody is. We remind ourselves that our members make choices between us and "outside options" - other approaches to security, apart from our network. We do our absolute best to substantially improve upon all outside options, including other services and roll-your-own security models. This work is ongoing.

20. Again, we don't use "private keys" in our security model - those show up occasionally in some documentation, but are purely vestigial and do not play a role in our cryptographic framework (details available in threads here describing our precise crypto architecture and deployed parameters). The challenge of protecting private server authentication certificates is an ongoing area of research and development for us. As we mentioned last year, our mid-range goal is to leverage either "certificate pinning" based CA-validation techniques or (far more preferred, by our tech team) blockchain-based public validation mechanisms that remove entirely the concept of a centralised "certificate authority" itself (we currently self-sign our exitnode certificates, specifically because we mistrust the fundamentals of the current public CA model... very much so).

Namecoin, for example, provides a compelling alternative framework - as does, in a somewhat commutable sense, CJDNS. Our development team actively experiments with all these tools, in search of a substantively improved model for server verification procedures.
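To make the "pinning" idea above a bit more concrete, here's a small Python sketch using the 'cryptography' package - the hostname, port, and pinned digest are placeholders, and this is a TLS-flavoured illustration of the concept rather than how our OpenVPN handshake actually performs verification:

# Sketch of certificate pinning: fetch the certificate a server presents and
# compare its SubjectPublicKeyInfo SHA-256 against a value pinned out-of-band.
import hashlib
import ssl
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

PINNED_SPKI_SHA256 = "replace-with-a-fingerprint-obtained-out-of-band"
HOST, PORT = "node.example.net", 443  # placeholders

pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())
spki = cert.public_key().public_bytes(Encoding.DER,
                                      PublicFormat.SubjectPublicKeyInfo)
seen = hashlib.sha256(spki).hexdigest()

print("pin matches" if seen == PINNED_SPKI_SHA256 else f"PIN MISMATCH: {seen}")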

21. That said, if someone roots one of our boxes - in part or in full - that is an inescapable security breach. We run extensive IDS, log-monitoring, and firewall-based rulesets to harden our exitnodes... as well as ongoing, regular, manual monitoring by our sysadmin team. No IDS framework is perfect, and only experienced eyes can fill those gaps. We also minimise how many services run on exitnodes - they do not run any ancillary/administrative services such as production webservers, email, etc. They are stripped-down kernels - in a sense, overgrown, cryptographically enabled routers. This is our goal: the fewer active services, the smaller the attack surface.

As always, and on behalf of our administration team, I appreciate the time and care that goes into these member questions - and the dialog resulting therefrom. Let's continue to build on this constructive process, as part of our ongoing network improvements overall.

With respect,

  • ~ cryptostorm_ops
by cryptostorm_ops
Mon Apr 14, 2014 5:37 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's Post-Heartbleed Certificate Upgrade Trajector
Replies: 79
Views: 191529

cryptostorm's Post-Heartbleed Certificate Upgrade Trajector

The purpose of this post is to provide a roadmap for cryptostorm's response to certain security implications relating to the recently-disclosed heartbleed vulnerability. As has been discussed elsewhere, our proactive deployment of DHE-based PFS capability has insulated our membership from any retroactive risk of bulk packet decryption as a result of heartbleed: we do not use persistent "private keys" in our cryptographic model, and as such no theft of same is of concern. Further, our token-based authentication model does not use (hashed) token values for anything other than session authentication - and it never has: they are not used as IVs, nonces, or in any step of our cryptographic model. Thus, "stealing" a token - whether via a heartbleed scrape or any other vector - would result only in replicated network access without the requirement to purchase a token. This could be a minor hassle for any network member whose token was stolen - they'd need to acquire a new one, otherwise finding themselves unable to initiate a concurrent network session alongside the stolen token - but it does not reflect a security risk for the membership.

However, our cryptographic and network authentication model does make use of certificate-based server verification; this post outlines the issues related to that component of our security model.

Background

As can be explored in other posts here for extra detail, cryptostorm's cryptographic authentication model does not make use of a naive, two-directional certificate framework for network validation. Rather, we have removed a substantial component of the default method of validation commonly found in "VPN services," because that deployment framework is incongruent with the real-world security needs of network members. Instead, we deploy cryptographic (RSA-based) certificate authentication only for server verification.

The purpose of certificates, in our model, is to lessen the risk of an attack scenario in which the attacker "masquerades" as a legitimate cryptostorm exitnode, thereby causing network members to connect to this false node as if it were genuine. In such an attack scenario, the attacker then gains plaintext access to network member traffic as a result of the attacker running what is fundamentally an "evil exitnode" (as is found so often in Tor attacks) pretending to be genuine. Through the use of RSA-validated session parameters, cryptostorm network session initiation is limited only to servers which have the "private" component of the RSA-model key material ("public" key data for this certificate framework is, of course, published publicly and is found in all inlined conf files as well as bundled with widget installs as "ca.crt").
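For those who like to see the moving parts, here's a rough Python sketch (using the 'cryptography' package) of that "only servers holding the matching private component" idea: the client holds just the public ca.crt and accepts a server certificate only if it was signed by the corresponding private key. File names are illustrative, the sketch assumes RSA keys as described above, and the real check is performed inside OpenVPN's TLS handshake rather than by hand:

# Accept a server only if the certificate it presents was signed by the CA
# whose public half ("ca.crt") ships inside conf files / widget installs.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

with open("ca.crt", "rb") as f:
    ca = x509.load_pem_x509_certificate(f.read())
with open("presented-server.crt", "rb") as f:   # cert offered during handshake
    server = x509.load_pem_x509_certificate(f.read())

try:
    ca.public_key().verify(
        server.signature,
        server.tbs_certificate_bytes,
        padding.PKCS1v15(),                     # assumes RSA, per the model above
        server.signature_hash_algorithm,
    )
    print("server certificate chains to the bundled ca.crt")
except InvalidSignature:
    print("REJECT: certificate was not issued by the expected CA")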

An attacker who has gained access to cryptostorm's private certificate component will be able, if they can meet the requirements outlined below, to run an "evil exitnode" that pretends to be cryptostorm but is in fact not run by our team. This is a legitimate security concern.

Preexisting Mitigation Layers

This description of "evil cryptostorm exitnodes" is commutable with an active MiTM attack; the two are, in functional terms, identical. Our security model deploys several layers of protection against both passive and active MiTM attacks; our defensive layering is not solely dependent on certificate-based verification of server authenticity, although such is a core component of our defensive framework.

Additionally, our implementation of per-packet HMAC validation with heavy cryptographic algorithms assists in ensuring that "partial" MiTM attacks will result in active session termination at the legitimate exitnode end, thereby preventing a partially-successful active MiTM attacker from instantiating persistent network sessions. Further, we actively monitor and protect against DNS-based (cache or realtime) attacks on node/cluster lookup parameters, by ensuring that multiple TLDs in multiple jurisdictions, managed by multiple registrars, are concurrently deployed for hostname-to-IP lookup duties. To subvert this via a successful MiTM attack requires comprehensive subversion of all deployed TLDs, across registrars (this is feasible for nation-state level attackers who control "edge" routing within a given geographic terrain, for example via corruption of BGP route data).
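As a rough illustration of that cross-TLD sanity check (the hostnames below are hypothetical placeholders, not real cluster names, and real monitoring is rather more thorough):

# Resolve the "same" cluster label under several TLDs and flag any divergence.
import socket

CLUSTER_NAMES = [
    "cluster.example.net",
    "cluster.example.org",
    "cluster.example.is",
]

answers = {}
for name in CLUSTER_NAMES:
    try:
        answers[name] = sorted(socket.gethostbyname_ex(name)[2])
    except socket.gaierror:
        answers[name] = ["<lookup failed>"]

if len({tuple(ips) for ips in answers.values()}) > 1:
    print("WARNING: TLDs disagree on cluster IPs:", answers)
else:
    print("all TLDs agree:", answers)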

The end result of these additional layers of anti-MiTM protections is that the only functionally effective MiTM attack that represents what we feel is a substantive real-world threat for network members is an active, "oracular" attack that intercepts via route poisoning all packets destined for a legitimate cryptostorm exitnode/cluster.

Constraints
As we have discussed elsewhere, all cryptostorm exitnodes - throughout our network - have been brought fully current with newly-patched OpenSSL libraries, and all dependent binaries have been re-compiled to ensure these libraries are called at runtime. Independent testing via heartbleed exploit code generators/testing suites can confirm this for network members. Secondary, non-frontline backend systems such as our websites do not impact member network sessions in any way, and a heartbleed-based attack on such secondary systems, if successful, would not expose any member data or other member-sensitive information. In short, spoofing our website would allow someone to pretend to have completed a successful "defacement" of the site itself... this is not a security-core concern for us, relative to member-impacting issues.

This leaves the matter of private certificate materials carried on our production exitnodes worldwide. Given the known parameters of the heartbleed vulnerability within OpenSSL, it is preferred to issue new certificate materials on these nodes now that they are fully patched to post-heartbleed status. However, there is a core constraint to be considered: certificate-based handshake is completed via mathematical "matching" of the public key materials, as circulated via conf files and with widget installs, with private key data on exitnodes (this is an oversimplification of the maths; those curious as to the fundamentals are encouraged to explore the available literature on RSA-based certificate models). Issuance of new key materials on exitnodes will break backwards compatibility for all network members seeking to initiate sessions based on the old public component of the replaced keys. Unlike, for example, HTTPS sessions based on TLS/SSL, cryptostorm's public key data are not mediated via outside CAs (certificate authorities); we find this model less than compelling given our network's use-case scenario, and thus circulate such materials directly to members as part of conf's and widget installs.

The issue of backwards compatibility is nontrivial. Many of our network members do not routinely check our website or forum to review newly-issued conf's, and we make a strong priority of backwards compatibility in such matters. Further, of course, we have no way to contact "all" of our members - our token-based authentication model successfully decouples network member identity from network activity, and simply put, we don't know who the vast majority of our members are. There are no "broadcast mailings" to members, as a result - it is simply not possible.

Prospective Attack Scenario

For an attacker targeting the heartbleed-exposed private server certificate materials attack surface, our threat modelling suggests that an effective strategy would require full "oracular" control of a key routing chokepoint. This is the case because, in order to successfully masquerade as a [] exitnode, the attacker must intercept all IP-routed UDP packets to and from the network member, destined for a specific []-controlled IP octet. To do this requires more than passive MiTM tactics (which are sometimes called "bump on the wire" attacks); rather, it requires the ability to infect route data.

Route data could be infected either close to the member's session initiation point (in physical and/or topological terms), or in the colo facility housing a legitimate cryptostorm exitnode cluster. It is easy to see why this is the case when one considers traceroute data and the essential stochasticity of within-internet route details; going to the "edge" of the route is the only reliable place to mount such an attack. In the case of the latter attack model - setting up an "evil exitnode" within a colo facility - full control/ownership of on-subnet switching or routing hardware would be required (or, alternatively, ARP cache poisoning... which is morphologically similar). This is possible, but would require either the active assistance of the colo or a level of attacker capability in gaining root control of remote routing infrastructure that is highly advanced.

Attacks based on ISP-level routing subversion are perhaps more likely for network members located in geographically attacker-intensive locations such as unfriendly national governmental regimes. In this case, oracular control of route data presents not only issues with regard to heartbleed, but brings up the entire spectrum of oracular MiTM risks. In short, absent an out-of-band "control channel" to validate key/certificate materials, members facing such attacks are vulnerable irrespective of heartbleed. The full scope of such vulnerabilities - metaphorically similar to the famous 'Ken Thompson' issues surrounding compiler design - are beyond the scope of this post.

Tactical Hardening
As a go-forward strategy, our approach is as follows:
  • 1. We will be instantiating newly-created server instances on each exitnode and cluster, which deploy newly-issued certificate material (both private and public, of course). As those instances are spun up, tested, and placed in production, we will post their public key materials and requisite hostname mappings to this thread. Members who want to connect only to these "updated" nodes will then be able to modify their connection configurations, proactively, to ensure exactly that.

    2. In the meantime, those members still using network parameters from before the upgrade will continue to see successful network connections. All future releases of configuration files and widget versions (such as the v1.0 which is in alpha testing) will include only new key materials, and we will taper off the old, pre-update instances across exitnodes and clusters. When usage of each old instance falls below a critical threshold, we will decommission it manually, one at a time.
Recommended Actions

For many network members, immediate migration to the new instances carrying new certificate materials may not be absolutely essential - the ability to mount active, oracular MiTM-style attacks on IP-routed packet data (without missing packets & triggering HMAC-based defensive spin-down of sessions) is nontrivial. Very few attackers or attacker organizations have the expertise, resources, and capacity to mount such attacks; further, such attacks would need to be bespoke-generated, cannot be automated, and as such reflect mostly TAO-style tactics. These tactics are certainly known to exist, but are not common and do not scale across broad target populations.

However, for some members who require higher levels of security hardening against exactly this level of attack scenario, immediate migration to the new exitnode instances is warranted. We will accelerate our deployment of these new instances, within constraints of required in-house testing & security validation prior to public availability, specifically to meet the needs of this class of network members. We respect that those needs are genuine, and have concluded that we can meet those needs without breaking backwards compatibility for the concomitant class of members who do not require immediate upgrade.

Finally, we strongly encourage members concerned with this kind of active/oracular MiTM (or, to put it another way, "evil exitnode") attack scenario to validate future public key data out-of-band from their local ISP or connection channel. Obviously, anyone able to exert full oracular control over local routing data can spoof any and all source/destination data not only at the IP (network) layer but also at any layer "up" the OSI model from there (transport, application, etc.) - such spoofing makes it trivially easy to, for example, pass down false "public certificate" materials when new conf files are downloaded (via packet payload replacement, among other techniques); this would obviate the entire benefit of upgrading private certificate data. Out-of-band verification of these data, and of widget installer MD5 checksums, is as such strongly recommended for those members in this category.
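A minimal sketch of that out-of-band checksum step, for anyone who hasn't done it before - the file name and expected digest are placeholders, and the important part is that the expected value arrives via a channel the local ISP can't trivially tamper with:

# Compute the MD5 of a downloaded widget installer and compare it against a
# checksum obtained over a *different* channel. Placeholders throughout.
import hashlib

EXPECTED_MD5 = "replace-with-checksum-obtained-out-of-band"
INSTALLER = "cryptostorm-widget-setup.exe"  # placeholder file name

md5 = hashlib.md5()
with open(INSTALLER, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

if md5.hexdigest() == EXPECTED_MD5:
    print("checksum matches the out-of-band value")
else:
    print("MISMATCH - do not run this installer:", md5.hexdigest())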

~ ~ ~

Questions & feedback encouraged & appreciated, as always.

With respect,

  • ~ []cryptostorm_ops
by cryptostorm_ops
Sun Apr 13, 2014 8:58 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: URGENT: The Heartbleed Bug
Replies: 34
Views: 61079

Re: URGENT: The Heartbleed Bug

Sorry we didn't get a solid answer in here sooner. Basically, once we did the patch we felt pretty good about the whole thing and went out for a beer. :P Here's the gist of what you should know about the VPN side of the biz: We run DHE, so there are no long-term "keys" being used in the actual session crypto; it's all ephemeral. That's a deep structural decision we made last summer, SPECIFICALLY to protect against potential vulns like this (which, as Snowden made clear, were likely to be exploited). So basically here's your tl;dr:

Diffie Hellman Ephemeral ftw!

"Stealing private keys" when such keys are transient by design is a waste of time, and introduces no vulns into the system.

HOWEVER ... we do have brochureware around such as this forum - which we will re-issue certs for, but on the scale of things, it's not a priority. It's mostly pretty pics that could be replaced with p0wned text with zero impact to the VPN exitnodes, since we have no centralized auth.

Also remember that even if someone got access to the server, they'd see that SHA512 #beafa3f4b58254063257fc20a2bbd824a3046472302e0684d7d5562e93d1e4512fcb030589... (or whatever) was connecting to 4chan/b or whatever. They will not be able to tie that to a customer without a fuckload of work, thanks to the token decoupling.
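(If you want to see how little that hash gives away, here's a two-line sketch - the token string is made up, obviously:)

# The exitnode only ever works with the SHA512 of a token, never the token
# itself; the example token below is made up.
import hashlib

token = "made-up-example-token-phrase"
print("session auth id: #" + hashlib.sha512(token.encode()).hexdigest())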

IN FACT, we've been considering re-patching the OpenSSL with a nice little bit of code that produces cute ASCII art pr0n whenever someone tries to grab shit off of our server, but the only thing holding us back was having to explain over and over again that, no, actually, it's not vulnerable, it just looks like it is to lazy testing tools... So we didn't - yet. ;)

Thanks, and again, apologies it took us a bit to respond.
by cryptostorm_ops
Tue Apr 08, 2014 6:33 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: URGENT: The Heartbleed Bug
Replies: 34
Views: 61079

URGENT: The Heartbleed Bug

via http://heartbleed.com/
The Heartbleed Bug

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.

Heartbleed Bug
What leaks in practice?

We have tested some of our own services from attacker's perspective. We attacked ourselves from outside, without leaving a trace. Without using any privileged information or credentials we were able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.

How to stop the leak?

As long as the vulnerable version of OpenSSL is in use it can be abused. Fixed OpenSSL has been released and now it has to be deployed. Operating system vendors and distribution, appliance vendors, independent software vendors have to adopt the fix and notify their users. Service providers and users have to install the fix as it becomes available for the operating systems, networked appliances and software they use.

Q&A

What is the CVE-2014-0160?

CVE-2014-0160 is the official reference to this bug. CVE (Common Vulnerabilities and Exposures) is the Standard for Information Security Vulnerability Names maintained by MITRE. Due to co-incident discovery a duplicate CVE, CVE-2014-0346, which was assigned to us, should not be used, since others independently went public with the CVE-2014-0160 identifier.

Why is it called the Heartbleed Bug?

The bug is in OpenSSL's implementation of the TLS/DTLS (transport layer security protocols) heartbeat extension (RFC6520). When it is exploited, it leads to the leak of memory contents from the server to the client and from the client to the server.

What makes the Heartbleed Bug unique?

Bugs in single software or library come and go and are fixed by new versions. However this bug has left large amount of private keys and other secrets exposed to the Internet. Considering the long exposure, ease of exploitation and attacks leaving no trace this exposure should be taken seriously.

Is this a design flaw in SSL/TLS protocol specification?

No. This is an implementation problem, i.e. a programming mistake in the popular OpenSSL library that provides cryptographic services such as SSL/TLS to applications and services.

What is being leaked?

Encryption is used to protect secrets that may harm your privacy or security if they leak. In order to coordinate recovery from this bug we have classified the compromised secrets into four categories: 1) primary key material, 2) secondary key material, 3) protected content and 4) collateral.

What is leaked primary key material and how to recover?

These are the crown jewels, the encryption keys themselves. Leaked secret keys allow the attacker to decrypt any past and future traffic to the protected services and to impersonate the service at will. Any protection given by the encryption and the signatures in the X.509 certificates can be bypassed. Recovery from this leak requires patching the vulnerability, revocation of the compromised keys and reissuing and redistributing new keys. Even doing all this will still leave any traffic intercepted by the attacker in the past vulnerable to decryption. All this has to be done by the owners of the services.

What is leaked secondary key material and how to recover?

These are for example the user credentials (user names and passwords) used in the vulnerable services. Recovery from this leak requires owners of the service first to restore trust to the service according to the steps described above. After this users can start changing their passwords and possible encryption keys according to the instructions from the owners of the services that have been compromised. All session keys and session cookies should be invalidated and considered compromised.

What is leaked protected content and how to recover?

This is the actual content handled by the vulnerable services. It may be personal or financial details, private communication such as emails or instant messages, documents or anything seen worth protecting by encryption. Only owners of the services will be able to estimate the likelihood what has been leaked and they should notify their users accordingly. Most important thing is to restore trust to the primary and secondary key material as described above. Only this enables safe use of the compromised services in the future.

What is leaked collateral and how to recover?

Leaked collateral are other details that have been exposed to the attacker in the leaked memory content. These may contain technical details such as memory addresses and security measures such as canaries used to protect against overflow attacks. These have only contemporary value and will lose their value to the attacker when OpenSSL has been upgraded to a fixed version.

Recovery sounds laborious, is there a short cut?

After seeing what we saw by "attacking" ourselves, with ease, we decided to take this very seriously. We have gone laboriously through patching our own critical services and are in progress of dealing with possible compromise of our primary and secondary key material. All this just in case we were not first ones to discover this and this could have been exploited in the wild already.

How revocation and reissuing of certificates works in practice?

If you are a service provider you have signed your certificates with a Certificate Authority (CA). You need to check your CA how compromised keys can be revoked and new certificate reissued for the new keys. Some CAs do this for free, some may take a fee.

Am I affected by the bug?

You are likely to be affected either directly or indirectly. OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company's site, commerce site, hobby site, the site you install software from or even sites run by your government might be using vulnerable OpenSSL. Many online services use TLS both to identify themselves to you and to protect your privacy and transactions. You might have networked appliances with logins secured by this buggy implementation of TLS. Furthermore you might have client side software on your computer that could expose the data from your computer if you connect to compromised services.

How widespread is this?

Most notable software using OpenSSL are the open source web servers like Apache and nginx. The combined market share of just those two out of the active sites on the Internet was over 66% according to Netcraft's April 2014 Web Server Survey. Furthermore OpenSSL is used to protect for example email servers (SMTP, POP and IMAP protocols), chat servers (XMPP protocol), virtual private networks (SSL VPNs), network appliances and wide variety of client side software. Fortunately many large consumer sites are saved by their conservative choice of SSL/TLS termination equipment and software. Ironically smaller and more progressive services or those who have upgraded to latest and best encryption will be affected most. Furthermore OpenSSL is very popular in client software and somewhat popular in networked appliances which have most inertia in getting updates.

What versions of the OpenSSL are affected?

Status of different versions:

OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
OpenSSL 1.0.1g is NOT vulnerable
OpenSSL 1.0.0 branch is NOT vulnerable
OpenSSL 0.9.8 branch is NOT vulnerable
Bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. OpenSSL 1.0.1g released on 7th of April 2014 fixes the bug.

How common are the vulnerable OpenSSL versions?

The vulnerable versions have been out there for over two years now and they have been rapidly adopted by modern operating systems. A major contributing factor has been that TLS versions 1.1 and 1.2 came available with the first vulnerable OpenSSL version (1.0.1) and security community has been pushing the TLS 1.2 due to earlier attacks against TLS (such as the BEAST).

How about operating systems?

Some operating system distributions that have shipped with potentially vulnerable OpenSSL version:

Debian Wheezy (stable), OpenSSL 1.0.1e-2+deb7u4
Ubuntu 12.04.4 LTS, OpenSSL 1.0.1-4ubuntu5.11
CentOS 6.5, OpenSSL 1.0.1e-15
Fedora 18, OpenSSL 1.0.1e-4
OpenBSD 5.3 (OpenSSL 1.0.1c 10 May 2012) and 5.4 (OpenSSL 1.0.1c 10 May 2012)
FreeBSD 8.4 (OpenSSL 1.0.1e) and 9.1 (OpenSSL 1.0.1c)
NetBSD 5.0.2 (OpenSSL 1.0.1e)
OpenSUSE 12.2 (OpenSSL 1.0.1c)
Operating system distribution with versions that are not vulnerable:

Debian Squeeze (oldstable), OpenSSL 0.9.8o-4squeeze14
SUSE Linux Enterprise Server
How can OpenSSL be fixed?

Even though the actual code fix may appear trivial, OpenSSL team is the expert in fixing it properly so latest fixed version 1.0.1g or newer should be used. If this is not possible software developers can recompile OpenSSL with the handshake removed from the code by compile time option -DOPENSSL_NO_HEARTBEATS.

Should heartbeat be removed to aid in detection of vulnerable services?

Recovery from this bug could benefit if the new version of the OpenSSL would both fix the bug and disable heartbeat temporarily until some future version. It appears that majority if not almost all TLS implementations that respond to the heartbeat request today are vulnerable versions of OpenSSL. If only vulnerable versions of OpenSSL would continue to respond to the heartbeat for next few months then large scale coordinated response to reach owners of vulnerable services would become more feasible.

Can I detect if someone has exploited this against me?

Exploitation of this bug leaves no traces of anything abnormal happening to the logs.

Can IDS/IPS detect or block this attack?

Although the content of the heartbeat request is encrypted it has its own record type in the protocol. This should allow intrusion detection and prevention systems (IDS/IPS) to be trained to detect use of the heartbeat request. Due to encryption differentiating between legitimate use and attack can not be based on the content of the request, but the attack may be detected by comparing the size of the request against the size of the reply. This seems to imply that IDS/IPS can be programmed to detect the attack but not to block it unless heartbeat requests are blocked altogether.

Has this been abused in the wild?

We don't know. The security community should deploy TLS/DTLS honeypots that entrap attackers and alert about exploitation attempts.

Can attacker access only 64k of the memory?

There is no 64-kilobyte limit to the attack as a whole; that limit applies only to a single heartbeat. The attacker can either keep reconnecting or, during an active TLS connection, keep requesting an arbitrary number of 64-kilobyte chunks of memory content until enough secrets are revealed.

Is this a MITM bug like Apple's goto fail bug was?

No, this doesn't require a man-in-the-middle (MITM) attack. The attacker can directly contact the vulnerable service or attack any user connecting to a malicious service. However, in addition to the direct threat, the theft of key material allows man-in-the-middle attackers to impersonate compromised services.

Does TLS client certificate authentication mitigate this?

No, the heartbeat request can be sent, and is replied to, during the handshake phase of the protocol. This occurs prior to client certificate authentication.

Does OpenSSL's FIPS mode mitigate this?

No, OpenSSL Federal Information Processing Standard (FIPS) mode has no effect on the vulnerable heartbeat functionality.

Does Perfect Forward Secrecy (PFS) mitigate this?

Use of Perfect Forward Secrecy (PFS), which is unfortunately rare but powerful, should protect past communications from retrospective decryption. Please see how leaked tickets may affect this.

Can heartbeat extension be disabled during the TLS handshake?

No, the vulnerable heartbeat extension code is activated regardless of the results of the handshake phase negotiations. The only way to protect yourself is to upgrade to a fixed version of OpenSSL or to recompile OpenSSL with the heartbeat extension removed from the code.

Who found the Heartbleed Bug?

This bug was independently discovered by a team of security engineers (Riku, Antti and Matti) at Codenomicon and by Neel Mehta of Google Security, who first reported it to the OpenSSL team. The Codenomicon team found the Heartbleed bug while improving the SafeGuard feature in Codenomicon's Defensics security testing tools, and reported it to NCSC-FI for vulnerability coordination and reporting to the OpenSSL team.

What is the Defensics SafeGuard?

The SafeGuard feature of Codenomicon's Defensics security testing tools automatically tests the target system for weaknesses that compromise integrity, privacy or safety. SafeGuard is a systematic solution for exposing failed cryptographic certificate checks, privacy leaks or authentication bypass weaknesses that have exposed Internet users to man-in-the-middle attacks and eavesdropping. In addition to the Heartbleed bug, the new Defensics TLS SafeGuard feature can detect, for instance, the exploitable security flaw in the widely used GnuTLS open source SSL/TLS implementation and the "goto fail;" bug in Apple's TLS/SSL implementation that was patched in February 2014.

Who coordinates response to this vulnerability?

NCSC-FI took up the task of reaching out to the authors of OpenSSL and to the software, operating system and appliance vendors that were potentially affected. However, this vulnerability was found and its details released independently by others before this work was completed. Vendors should be notifying their users and service providers, and Internet service providers should be notifying their end users where and when potential action is required.

Is there a bright side to all this?

For those service providers who are affected, this is a good opportunity to upgrade the security strength of the secret keys used. A lot of software gets updates which would otherwise not have been urgent. Although this is painful for the security community, we can rest assured that the infrastructure of cyber criminals and their secrets have been exposed as well.

Where to find more information?

This Q&A was published as a follow-up to the OpenSSL advisory, since this vulnerability became public on 7th of April 2014. NCSC-FI is likely to publish an advisory at https://www.cert.fi/en/reports/2014.html. The OpenSSL project has made a statement at http://www.openssl.org/news/secadv_20140407.txt. Individual vendors of operating system distributions, affected owners of Internet services, software packages and appliance vendors may issue their own advisories.

References

CVE-2014-0160
NCSC-FI case# 788210
http://www.openssl.org/news/secadv_20140407.txt (published 7th of April 2014, ~17:30 UTC)
http://blog.cloudflare.com/staying-ahea ... rabilities (published 7th of April 2014, ~18:00 UTC)
http://heartbleed.com (published 7th of April 2014, ~19:00 UTC)
https://access.redhat.com/security/cve/CVE-2014-0160
http://www.ubuntu.com/usn/usn-2165-1/
http://www.freshports.org/security/openssl/
https://blog.torproject.org/blog/openss ... -2014-0160
by cryptostorm_ops
Mon Mar 03, 2014 6:47 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: CLOSED: aleph tokens ~ unlimited duration batch
Replies: 33
Views: 57903

invites sent

Invites sent via email. Purchase page enabled... but not being posted publicly until the waitlist has been cleared.

This has been really interesting, technologically. Forced us to stretch our model when it comes to tokens & their temporal behaviours. Useful.

~ cryptostorm_ops
by cryptostorm_ops
Fri Feb 21, 2014 6:20 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: zero tolerance policy implemented
Replies: 23
Views: 60690

Re: cryptostorm: zero tolerance policy implemented

spotshot wrote:
sigdkiqf.jpg
speeds are great, download is better than the other day, but would have expected faster
download with these speeds.
not sure how this helps, but I will test again when I'm not in peak hours.

montreal 70.38.46.224

way better than the other day.
tested again, different time of the day, still in peak hours, but speeds much better,
averaged approx. 150 KB/s and 77 seconds.
Thanks for doing this - it really helps.

We're in the midst of scheduling a swap-out of some suspected-buggy hardware in our Montreal infrastructure, and in my own monitoring of resources over there I still see unacceptable performance on the box in question (we suspect a bad NIC) - so it's good to see the cluster isn't entirely incapable of supporting some strong throughput even before we get this maintenance work done.
by cryptostorm_ops
Fri Feb 21, 2014 6:13 pm
Forum: member support & tech assistance
Topic: version control: non-widget Windows conf's, most current?
Replies: 22
Views: 21067

Re: Are these latest raw windows config version

Thanks for bringing these into one place, ss - it's pretty clear they need some version control TLC and assuming the underlying connection parameters are stable at this point, I'll be happy to take the framework and synthesize formalised, version-congruent versions for all of the production clusters.

Also, last nite in our tech scrum session, the topic of widget CPU usage came up - some folks felt this 25% issue was somewhat rare, others from our support team believe it's not unusual at all... but that most members don't mind the CPU consumption, as it doesn't impact their PC performance in any noticeable way. I don't have an opinion, as that's not my world, but it brings up a question worth pursuing further. I believe the tech dev folks feel that there's some inevitable sub-optimization of CPU utilization given that we've chosen to go with 100% open, platform-neutral, standards-compliant tools in producing the widget... rather than just using Visual Basic like everyone else - which is to say, a closed binary model.

Has anyone seen the CPU usage profile on the widget actually impact desktop performance in a measurable way? I suppose that's the question with which I walked away from our team discussion last nite, for what it's worth...
by cryptostorm_ops
Fri Feb 21, 2014 6:03 pm
Forum: guides, HOWTOs & tutorials
Topic: HOWTO: Kali Linux distro | ONGOING
Replies: 11
Views: 31849

HOWTO: Kali Linux distro | ONGOING

This thread is a repository for findings related to the work several network members have put into producing successful cryptostorm network connections from Kali workstations. That's nontrivial, as the distro doesn't ship with OpenVPN support in the kernel (no Tap driver integration), and has a somewhat idiosyncratic relationship with the OpenSSL libraries.

As I've been working with the most thorough member who has tackled this challenge, I'll be posting into this thread snippets of our communications through the process. We can't report success yet, but we're coming very close. Once we have clean test connects, we'll boil this down to a formal connection guide for ease of reference.
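
For reference while that write-up is in progress, the usual first checks on a Debian-derived distro look roughly like this - an assumption on my part, not the member's forthcoming guide:

Code: Select all

# confirm an OpenVPN client and the tun/tap module are present before attempting a connect
apt-get update && apt-get install -y openvpn
modprobe tun && ls -l /dev/net/tun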

Thank you,

~ c_o


{direct link: kali.cryptostorm.ch}
by cryptostorm_ops
Tue Feb 18, 2014 7:59 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm exitnode clusters: listing+requests+roadmap
Replies: 89
Views: 125467

Re: poll: where to add new exitnode clusters?

Update: we've just leased a nice little dedicated machine in the US as a starting footprint for our cluster over there - and are in negotiations on some serious additional hardware with a specialist premium-network provisioner, which likely will take a bit longer to complete and spin up. But the decision to create our US exitnode footprint has been taken - both based on early feedback here, and on structural realities.

Please do keep voting, as we're likely to bring a second new cluster online imminently, as well - and perhaps a third. Once clusters are established, they scale smoothly - so we do prefer to take on the work of spawning new clusters in batches, if possible.

Thank you,

~ c_o


edited to add: also, I've only just been informed ( :problem: ) we apparently have a new cluster footprint in Paris, France. I don't know those details yet - appears to be a call pj made based on some particular circumstances with a datacenter he knows down there - but I'm told it's been purchased and, I assume, I'll eventually be told it's in queue for provisioning. So there you go.
by cryptostorm_ops
Mon Feb 10, 2014 3:17 pm
Forum: general chat, suggestions, industry news
Topic: optimising torrenting performance on cryptostorm: discussion
Replies: 68
Views: 191867

Re: optimising torrenting performance on cryptostorm: discus

parityboy wrote:I've no idea what your Internet setup is, but generally with asymmetric xDSL type connections maximising the download speed will affect the upload speed; something to do with allocated spectrum. Going by the speeds you were quoting, can I assume you are on a DOCSIS (cable) connection? Can you normally maximise your download without affecting your maximum upload?
This is correct.

Further, it's possible to saturate kernel resources with upload "slots" and thus crimp inbound sessions. These sorts of things are governed by the parameters relating to network/socket allocation... the same sysctl (in Linux) settings we're working with server-side.

As an example of certain upload tools swamping kernel ability to concurrently handle download volume - but not because of ISP or hardware/DSU/CSU issue - try uploading a file to Mega's filesharing site using their default parameters. Doesn't matter if you're on-net (cryptostorm) or not. Spool the upload, let it come up to speed, and then try downloading something entirely unrelated. On most OS flavours, you'll see that the upload effectively monopolises total network stack capacity, and the download gets severely crimped.

This isn't a bug; it's a feature. They've enabled concurrent upload streams as a default, via their .js libraries. One can throttle that down (or up) via settings. But even a simple thing like half a dozen concurrent TLS/TCP sessions being pushed upstream can overwhelm default kernel parameters. I suspect this has to do with the 'tc' algorithms being used (choosing a stochastic socket-allocation procedure would, as a test, eliminate this bottleneck), but I haven't actually validated it myself.

For intrepid network members on mainstream Linux distros, it's quite possible to fine-tune kernel settings client-side to maximise performance during cryptostorm sessions; oversimplified, if those settings are congruent by and between clients and cryptostorm exitnodes, session performance is optimised (this is not formally accurate, but is metaphorically useful). At the least, opening up some of the overly-conservative ring-buffering parameters in the Linux kernel (and attendant NIC drivers) can - and does - substantially improve throughput for cryptostorm sessions. We don't actually recommend this in general, as it's beyond the scope of our support folks to help troubleshoot things if problems arise. But, for those with a bit more technical background, a bit of iterative tuning of those parameters can show considerable improvements... bordering on enormous, in some cases. This is all the more true for widely-spanned, multi-swarm, ephemeral UDP session scenarios such as 1000+ concurrent torrent connections.
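
As one concrete, hedged example on the ring-buffer side (interface name and sizes are illustrative - check the hardware maximums reported by -g before raising anything):

Code: Select all

# show current vs. hardware-maximum RX/TX ring sizes, then raise them toward the maximums
ethtool -g eth0
ethtool -G eth0 rx 4096 tx 4096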

If folks do play around with those settings, I'm happy to provide unofficial guidance - although I can't promise it'll be top-tier triage level response time. In general, it's "no harm, no foul" to do so - setting them temporarily is easy via "echo" & even saved sysctl.conf edits can be reverted so long as a snapshot is maintained.
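
A minimal sketch of that "no harm, no foul" workflow (parameters and values are illustrative only):

Code: Select all

# snapshot the current values so everything can be reverted later
sysctl net.core.rmem_max net.core.wmem_max > sysctl-snapshot.txt

# temporary changes - equivalent to echoing into /proc/sys/..., lost at reboot
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# revert by re-applying the snapshot
sysctl -p sysctl-snapshot.txt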

We're currently using PJ as a guinea pig for this. He's got a Linux box in the office that's now running an identical (1.6) sysctl.conf as runs on nearly all of our production exitnodes... on an old laptop. Somewhat surprisingly, it works fairly well - although occasionally the "minimum allocation" memory settings lock up his machine tight, as they're calculated for dedicated servers with dozens of gigs of fast-poll, low-error RAM... and old laptops don't quite fit that profile.
by cryptostorm_ops
Mon Feb 10, 2014 3:58 am
Forum: member support & tech assistance
Topic: Issue Accessing Sony Online Entertainment
Replies: 3
Views: 7597

Re: Issue Accessing Sony Online Entertainment

parityboy wrote:Trying to access Sony Online Entertainment from the VPN connection results in a hanging browser, which eventually times out. Accessing the same URI from outside of the VPN network has no issues. It's not a DNS issue, because I also tried specifying the IP address, with the same result.

I've also just tried accessing it from an Android phone running OrBot - works perfectly, so I don't think they are blocking proxies. It may be a routing issue; I'm using the Frankfurt exit node....ok, yeah it looks like a routing issue. The Iceland exit node works fine.
We've seen a couple of these - Network Solutions has presented issues, on and off - and our initial read is that they have to do with script-intensive sites running over SSL that implement session parameters which conflict with the TLS-layer session management implicit in cryptostorm's packet NATting on darknet egress.

That said, the fact that you had a clean load on Iceland's cluster, but not Frankfurt, might be instructive. There's recently been an upgrade of Frankfurt's kernel-level networking parameters (to sysctl.conf version 1.6), which may have taken place just after you tested that pageload. Given that, does the page still fail to load from Frankfurt for you this evening? Have you flushed local routes/route parameters lately?

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Sat Feb 08, 2014 6:45 pm
Forum: general chat, suggestions, industry news
Topic: optimising torrenting performance on cryptostorm: discussion
Replies: 68
Views: 191867

Re: optimising torrenting performance on cryptostorm: discus

Try enabling PeX, btw...
PEXIEEEp2p.pdf
(221.27 KiB) Downloaded 1811 times
Understanding Peer Exchange in BitTorrent Systems
Authors: Di Wu (Sun Yat-sen Univ., Guangzhou, China), P. Dhungel, Xiaojun Hei, Chao Zhang


Peer Exchange (PEX), in which peers directly exchange with each other lists of active peers in the torrent, has been widely implemented in modern BitTorrent clients for decentralized peer discovery. However, there is little knowledge about the behavior of PEX in operational systems. In this paper, we perform both passive measurements and Planetlab experiments to study the impact and properties of BitTorrent PEX. We first study the impact of PEX on the download efficiency of BitTorrent. We observe that PEX can significantly reduce the download time for some torrents. We then analyze the freshness, redundancy and spread speed of PEX messages. Finally, we also conduct large-scale Planetlab experiments to understand the impact of PEX on the overlay properties of BitTorrent.
by cryptostorm_ops
Sat Feb 08, 2014 3:00 pm
Forum: general chat, suggestions, industry news
Topic: optimising torrenting performance on cryptostorm: discussion
Replies: 68
Views: 191867

Re: optimising torrenting performance on cryptostorm: discus

What happens, for both reported examples so far, when you punch up the cap you've got on maximum upload slots globally?

One thing that's much harder to do through a decoupled/NATted infrastructure is the sort of heuristic best-performing-peer tricks that will allow direct connects to choose (for example) the eight peers who can accept the fastest connections from you. This is doubly so if DHT is disabled, as you're then entirely dependent on the tracker itself to mediate all that stuff... and many trackers are barely able to designate peers let alone do clever optimization of performance.

I'm also curious what sort of A/B results you get if you toggle UPnP on/off in your client. It shouldn't be necessary, and we don't implement UPnP within cryptostorm... but there are some home routers that really want it to be part of their session mediation and it's possible they are enforcing that even in the context of a tunnelled session to cryptostorm.

This morning we've put some test parameters into production in Montreal, to see if we can bump up discoverability (which is really a proxy for UDP session management efficacy) for connected peers... it's not ready for full production, but we're hopeful that it gives a good bump to that particular metric.
by cryptostorm_ops
Sat Feb 08, 2014 12:50 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: performance-tuning high-capacity cryptostorm sessions
Replies: 6
Views: 14281

version 1.6 kernel session parameters (sysctl)

This is the latest version (1.6) of the production sysctl parameters being tested in the Montreal cluster, as of this morning. Note that it is deployed only on one test node, to enable A/B performance monitoring. We've been looking closely at all the available data to see how we can best optimize the network for consistently strong throughput. This is something we will continue, as there is always room for improvement.

It is too early to say whether these further refinements in kernel network settings are proving effective, or not, as it takes some time for client sessions to pick up the new parameters (indirectly) and begin throwing data at the node in question at a different clip.

Code: Select all

# cryptostorm.is modded perf-tuned sysctl rev. 1.6
# CentOS 6.whatever - tweaked by p_j
# For binary values, 0 is disabled, 1 is enabled.

net.ipv4.ip_local_port_range = 32768 61000

# Decrease the time default value for connections to keep alive
net.ipv4.tcp_keepalive_time = 512
net.ipv4.tcp_keepalive_probes = 13
net.ipv4.tcp_keepalive_intvl = 32

# TCP window scaling for high-throughput, high-pingtime TCP performance
net.ipv4.tcp_window_scaling = 1

# Enables packet forwarding
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1

# Disable IP spoofing protection, turn off source route verification
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0

# disable SACK/DSACK/FACK & SYN cookies to ensure best throughput, for now
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_fack = 0
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_syn_retries = 5
net.ipv4.tcp_synack_retries = 4
net.ipv4.tcp_max_syn_backlog = 65535

# Enable sending ICMP redirects & accepting source-routed packets
net.ipv4.conf.all.send_redirects = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.accept_source_route = 1
net.ipv4.conf.default.accept_source_route = 1
net.ipv6.conf.all.accept_source_route = 1
net.ipv6.conf.default.accept_source_route = 1

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Don't ignore directed pings
net.ipv4.icmp_echo_ignore_all = 0

# Controls the default maximum size of a message queue, in bytes
kernel.msgmnb = 65536

# Controls the maximum size of a single message, in bytes
kernel.msgmax = 65536

# specifies the minimum virtual address that a process is allowed to mmap
vm.mmap_min_addr = 4096

# How many times to retransmit before giving up on an established TCP connection
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_retries1 = 3

# Increase the maximum memory used to reassemble IP fragments
net.ipv4.ipfrag_high_thresh = 512000
net.ipv4.ipfrag_low_thresh = 446464

# maximum number of open file handles (effectively bounds concurrent network sessions)
fs.file-max = 360000

# Set maximum amount of memory allocated to shm to 256MB
kernel.shmmax = 268435456
kernel.shmall = 268435456

# per https://gist.github.com/kfox/1942782
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
net.ipv4.neigh.default.gc_interval = 5
net.ipv4.neigh.default.base_reachable_time = 120
net.ipv4.neigh.default.gc_stale_time = 120
net.core.netdev_max_backlog = 262144
# net.core.rmem_default = 16777216
# net.core.optmem_max = 2048000
net.core.rmem_max = 108544
net.core.somaxconn = 262144
net.core.wmem_max = 108544
net.netfilter.nf_conntrack_max = 10000000
net.netfilter.nf_conntrack_tcp_timeout_established = 40
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 10
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 10
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 10
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 10
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 10
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 10
net.ipv4.tcp_fin_timeout = 32
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_timestamps = 0

# tuning TCP for web pdf
net.ipv4.tcp_rmem = 4096 65536 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 13107200

# Increase TCP queue length
net.ipv4.neigh.default.proxy_qlen = 96
net.ipv4.neigh.default.unres_qlen = 6

# Do a 'modprobe tcp_cubic' first
net.ipv4.tcp_congestion_control = cubic

# cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_moderate_rcvbuf = 0

# Enable a fix for RFC1337 - time-wait assassination hazards in TCP
net.ipv4.tcp_rfc1337 = 1

# UDP parameters
net.ipv4.udp_mem = 65536 173800 419430
net.ipv4.udp_rmem_min = 65536
net.ipv4.udp_wmem_min = 65536
 
# Ignore ICMP broadcast (echo) requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Enable bad error message Protection
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Enable ICMP Redirect Acceptance
net.ipv4.conf.all.accept_redirects = 1
net.ipv4.conf.default.accept_redirects = 1
net.ipv6.conf.all.accept_redirects = 1
net.ipv6.conf.default.accept_redirects = 1

vm.min_free_kbytes = 65536

# Disable Log Spoofed Packets, Source Routed Packets, Redirect Packets
net.ipv4.conf.all.log_martians = 0
net.ipv4.conf.default.log_martians = 0

# disable ipv6
# net.ipv6.conf.all.disable_ipv6 = 1
# net.ipv6.conf.default.disable_ipv6 = 1
# net.ipv6.conf.lo.disable_ipv6 = 1

# This will ensure that immediately subsequent connections use the new values
net.ipv4.route.flush = 1
net.ipv6.route.flush = 1
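
For anyone mirroring these settings on a test box, a brief hedged note: the file is loaded and spot-checked with standard tooling, and the net.netfilter.* keys will only exist once the conntrack modules are loaded.

Code: Select all

# apply the file and confirm one of the values took effect
sysctl -p /etc/sysctl.conf
sysctl net.ipv4.tcp_congestion_control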
by cryptostorm_ops
Tue Feb 04, 2014 8:19 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: performance-tuning high-capacity cryptostorm sessions
Replies: 6
Views: 14281

Re: performance-tuning high-capacity cryptostorm sessions

Another thing I frequently see is technically unsophisticated VPN review articles that report test results for networks. Actually, I don't think I've ever seen a VPN review test done that is at all useful in tracking actual network performance.

What I have seen is some companies that pick settings for their servers that throw "good" test results with speedtest.net, but suck for actual network use. That's actually very common, and is really unfortunate because then people try to use these "fast" networks and find their actual performance is crap.

Without getting into too much boring detail, the kinds of data that flow across cryptostorm's network backbone are enormously variable. There's everything from low-bandwidth TCP sessions to massive-volume, state-free, UDP-based "sessions" involving thousands of simultaneous peer connections in a filesharing application. Plus, all of these come into cryptostorm from a wide range of local network configurations: some are already NATted through a residential router that is barely able to handle packet transit, whereas others are coming out of really well-administered academic or corporate network environments that might as well be their own bloody standalone ASes! And some members see sessions actively packet-shaped/throttled by ISPs who apparently feel it's ok to pinch down encrypted network traffic if they feel like it.

The net result is that a single "speed test" application cannot really give an accurate picture of network performance. In fact, most "speed test" apps are fairly simple: TCP-based tests that self-throttle as they see packets start to back up into source-device queues (otherwise they'd crash their own outbound machines if they kept shoveling packets into queue when their routers and/or NICs notice the queues are filling up for a given session). In a generic sense, that's fine - but when sessions come through cryptostorm, they are natively stripped down to the packet level, NATted through the kernel, and pushed off the physical NICs of our cluster servers as one big clump of packets (and the reverse, for inbound-from-member data - think of the topological model of the TUN interface mediating between the cryptostorm network daemon and the physical NIC's 'window to the world' layering). There's a series of packet (and socket assignment) queues that happen in this process - and a speed test app that doesn't know about that can throw very unreliable/inaccurate results through no fault of its own. It's just not designed to measure this kind of network topology.

Our in-house metrics for cluster performance are driven almost entirely by close attention to socket allocation overhead at the kernel level, as well as packet queues at the NIC. If sockets are assigning smoothly, and the NICs are able to onload/offload packets without having their buffers overflow into kernel space (and the attendant ring buffers involved), then our perf-tuning is successful. This is because, obviously, there's a huge amount of stochasticity in actual network traffic coming through an individual node/server: maybe there's 100 members connected, but not many using a lot of bandwidth... or perhaps there's only 20 connected, but several are sitting on 100 megabit local pipes & are pushing big files through the network. Or: some of our nodes are heavily provisioned, some much less so and are instead clustered together to load-balance amongst themselves. So one machine might carry 100 sessions of high-traffic members just fine, whereas another would choke long before that.
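
For members who want to watch the same sorts of signals on their own machines, a hedged starting point (interface name is illustrative):

Code: Select all

# kernel-wide socket summary
ss -s
# per-NIC drop/overflow counters exposed by the driver
ethtool -S eth0 | grep -iE 'drop|fifo|over'
# per-CPU backlog stats; a growing second column means packets were dropped off the softnet backlog
cat /proc/net/softnet_stat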

In the end, the best "test data" come from members who tell us whether the network is performing on par with "bareback" non-cryptostorm sessions. That's the real metric. You hear alot of blabber about how "encryption slows down VPNs," and it's essentially all bullshit. I've yet to see kernel metrics on a VPN network that showed raw CPU bottlenecking as a result of simple application of symmetric crypto. Long before that, other areas of kernel performance are the cause of transit problems. These areas require much more experience and knowledge to diagnose and fine-tune, and so you see amateurs blame "crypto" when their servers are slow. That's nonsense. The OpenSSL libraries themselves, for all their flaws, are fast post-compile and work nicely with all the major chipset architectures at the binary level. They aren't where things slow down.

Perhaps the worst thing we see is some networks that, because their admins don't have any idea how to properly run machines, simply allow any one network session to effectively monopolise an entire server (or more often, VPS instance) during a "speed test." This is how you see really bad VPN networks show "good" speed test results. That one TCP session for a speedtest.net result has grabbed all the kernel's resources for packet transit, and is all but locking down the NIC as it pushes packets through. Sure, it shows 10 megabits/second download or whatever... but every other person logged into that machine just saw their sessions slow to a crawl or start dropping TCP packets entirely. Of course, the "test" doesn't report that - since those other people have no idea they've been crowded out by that speedtest.net session. And the review goes up on some clickbait blog somewhere: fast!

But if you're actually running a network to benefit all the network members, and not just to trick uneducated clickbait "reviewers" into saying your network is fast, then doing this is a terrible plan. You want everyone on the network to have consistently good network performance, whether they're pushing a big file across via TCP or whether they're gathering bits and pieces of obscure .torrents from a few hundred global swarm peers on ephemeral UDP connects. And you want the people streaming video to get reliable stream performance, plus those using videochat or other realtime apps to have non-glitchy sessions. That's a big-picture challenge that is NOT reflected by clicking an icon on speedtest.net and posting a screenshot of the result.

We know we have a performance problem when we get messages via our support folks that "the network seems slow" from a chunk of actual network members. We go into high-gear when that happens, as it's always "real" as compared to nonexistent "performance issues" that come when one person somewhere clicks on a speed test and worries that the numbers don't look high enough. Sometimes, that's a sign of a problem - but usually not. We're monitoring our machines closely enough, 24/7, that a simple problem like that will already have thrown red flags that are going to hit my monitoring long before that. It's still good to get those reports, of course, but usually they're transient: an ISP bottleneck, a router that isn't handling port 443 UDP packets well, that sort of thing. But if a dozen members say that the Frankfurt cluster is slow, then I guarantee you there's a problem there - it's a question of hunting it down.

Perf-tuning cryptostorm is a really fascinating technical challenge: unlike most areas of network administration, it's mostly new questions that we ask, and we can't really just go to Stack Exchange and see what other smart people are already doing. So we do a lot of experimental parameter tweaking at the kernel level, within exitnode clusters, realtime. This kind of work is evolving into a full-time job, to be honest, as it's clear that there are still big gains to be made on overall, real-use-scenario performance at the network level. Probably a few dissertation topics lurking in there, too, as time goes by.

Anyway, when people want to see if cryptostorm meets their needs for performance we ask our support folks to provide them with testing tokens, and let actual network use be the standard. At that level, it is very very rare that someone tests the network and feels that it's slow in actual use (not just a speedtest.net result) - and if we do hear that, we listen closely as it's a chance to learn something important.

I am happy to share as much as people are interested in reading, when it comes to the specifics of how we perf-tune the clusters. I don't worry that some competitor will "steal" what we report, because doing this is a lot of work and there's not some bash script that will just magically make it happen if someone wants to click on it. It requires careful attention to many layers of systems architecture - and anyone able to do that effectively is welcome to borrow what we've learned, at cryptostorm, in their own work. Hopefully, they'll share back their own results and experience - but even if they don't, we're not keeping stuff secret.

But sometimes I can see people's eyes glaze over when I talk about this stuff - to me it is fascinating, but not for everyone.

Finally, for people connecting to cryptostorm from Linux machines, I'm happy to provide some advice on tuning param options locally to ensure good throughput. The current kernel builds are pretty good about most stuff, but some of the packet buffering defaults are... mystifying to me, really. Those guys are super smart, so I am sure they have their reasons, but from my perspective I'd never stay with default kernel settings on one of my own local machines, in terms of network optimization. I assume this might also be true for Macs as well, since they're just Unix hiding under a layer of high-margin walled-garden obfuscation... but I don't know firsthand as it's not my world.

~ c_o
by cryptostorm_ops
Tue Feb 04, 2014 7:47 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: performance-tuning high-capacity cryptostorm sessions
Replies: 6
Views: 14281

performance-tuning high-capacity cryptostorm sessions

Our ops team has been, in recent weeks, doing some intensive work server-side in support of maximizing the broadest range of high-bandwidth network use-case scenarios. This is an ongoing process, as network dynamics are essentially an emergent phenomenon at the level of topological modification we've done to the foundations of packet transit in our security model. This means, basically, that there are occasionally "backward steps" in perceived/tested throughput as we test out various parameters in exitnode clusters, review the test data, and tune accordingly.

I've been asked to summarise some of this work in a public thread, and I will do my best to do so.

To start, I am bringing over some text from email messages I've been exchanging with network members who are able to push high-bandwidth test sessions through exitnodes and report on results. For reasons that are pretty obvious when considered fully, having a broad swath of test-case examples is very helpful in real-world perf-tuning a network of this sort. No matter how much in-house work we do, we're limited by our local network parameters - and of course "testing" the network from inside the network is basically pointless.
We're in the midst of fine-tuning some of the clusters, to support maximum session throughput. It's an ongoing process, and right now the Montreal cluster is... well, in the middle of a dip in performance as we get things settled the way we like them.

My advice is that you - for now, at least - direct a session specifically at our Icelandic cluster. It's been perf-tuned extensively, and should be able to support high throughput consistently. (all of our clusters have ample capacity; perf-tuning at this level isn't about raw capacity, but rather a series of cascading kernel settings relating to packet buffers and socket allocation)

I believe there's a forum thread on selecting clusters, but the short version is that you can change the config file to specify this remote parameter:

cluster-iceland.cryptostorm.net
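
If you're editing a raw profile by hand, something along these lines should do it (the filename is illustrative; port 443/UDP per the published server-side configs):

Code: Select all

# point an existing raw/Linux profile at the Icelandic cluster, then confirm the change
sed -i 's/^remote .*/remote cluster-iceland.cryptostorm.net 443/' cryptostorm_raw.ovpn
grep '^remote' cryptostorm_raw.ovpn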

If there's any problems with this, let me know right away. We're always keen to have high-throughput members who can help assist in performance tuning, as it's a big focus of our network.

Finally, please note that some "speed test" applications will give erratic results when routed through our network - they assume certain topological structures in how they calculate and throttle throughput during testing, and those assumptions can run up against the layers of SNATted de-coupling & attendant re-packetization that happen during transit through cryptostorm. So, while naive throughput testing is useful, it's really helpful to test with several tools and also, if possible, do a couple of brute-force wget file grabs to see how they perform. Last: because of the way we dynamically allocate socket assignments for network sessions at the kernel level (using a stochastic round-robin algorithm rather than the FIFO-based Linux defaults), it can take a few seconds for big single-socket (i.e. TCP) sessions to spool up to capacity.

Generally, we see an initial jump to a megabit or so, then a lull for a second or two, then a linear spool to full capacity (this is why some speed test apps show odd results - they see that first plateau, and throttle down data flowing into the test pipe, resulting in a self-fulfilling error). That's why a raw wget can be useful, as it's a "dumb" test and just dumps packets into a TCP session as fast as it possibly can. Those will, then, fill all the cascading packet buffers as the session punches across cryptostorm's network interfaces, and as that happens reported throughput will see big jumps... which makes sense, if you imagine a series of buckets that each must fill, before dumping into the next one: it looks like it's a slow process, at first, but once all the buckets are brimming water will pass through the series at a monotonically increasing rate.
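
A hedged example of that "dumb" wget check (the URL is purely illustrative - any large file on a well-connected host will do):

Code: Select all

# discard the payload and read the average rate wget prints at the end of the run
wget -O /dev/null http://speedtest.example.net/100MB.bin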

Thanks for the ongoing feedback,

~ cryptostorm_ops
by cryptostorm_ops
Tue Jan 28, 2014 10:15 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: non-widget Windows config file beta testing
Replies: 27
Views: 45996

Re: cryptostorm: non-widget Windows config file beta testing

I've taken the liberty of splitting the original post into a dedicated thread, to facilitate beta results.

We have just posted the first (and second) iterations of non-widget Windows GUI configuration file, in the canonical configuration reference post at conf.cryptostorm.ch (at the bottom of the post). Only one version - "dynamic" - has been posted thus far, to facilitate structured testing and iterative refinement. As soon as the parameters prove through testing to be viable, we will immediately stripe the necessary --remote directives into parallel files for exitnode-specific connection profiles, as well as the "locked" profile.

It is expected that this beta iteration 0.91 will be imperfect - it has not been tested in-house, and has been generated by manually concatenating the Linux-based (1.3) settings with widget configuration parameters exported as plaintext. Our hope is that it is viable, but we are prepared to rapidly iterate towards a stable version through member feedback.

Note that this profile does connect to the same Windows-specific server-side instantiations - and subhost --remote mappings - as the widget itself. In other words, if you attempt to use the remote mappings found in the Linux configuration profiles for non-widget Windows connections, don't be surprised to find significant connection problems. This will not, however, result in security failures... except insofar as sessions likely won't fully instantiate in the first place, nor pass control channel traffic. That's bad - but not as bad as traffic coming off the NIC that's not protected during a putatively "secure" network session.

Thank you in advance for your help with these testing efforts!

~ cryptostorm_ops
by cryptostorm_ops
Mon Jan 27, 2014 6:50 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

full deployment of "raw" 1.3 client framework

Please note that all of the client configuration files for Linux/"raw" connections have now been upgraded to 1.3 versioning. This includes both cluster-specific connection profiles, as well as the locked/dynamic network-wide meta-profiles.

They have been, per standard, posted to the canonical conf.cryptostorm.ch resource in order to be available for production deployment.

Anyone using earlier versions of the configs is strongly encouraged to update. We have done our best to support backwards compatibility, but it has not been possible to provide full coverage of all prior config versions without crippling the performance improvements. As such, use of earlier configs is likely to result in sporadic, unreliable network connectivity.

We do not expect to be deploying substantive changes to these Linux/"raw" 1.3 configuration profiles for some time; major functionality updating has been compressed into this one, larger break with prior versioning. It has not been convenient for network members, and we regret that - however, in doing so, we've cleared a path forward with no visible roadblocks looming ahead.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Mon Jan 27, 2014 6:41 pm
Forum: member support & tech assistance
Topic: Linux/Tunnelblick connect snags | RESOLVED (via 1.3 conf's)
Replies: 31
Views: 29410

1.3 upgrade complete

Please note that all of the client configuration files for Linux/"raw" connections have now been upgraded to 1.3 versioning. This includes both cluster-specific connection profiles, as well as the locked/dynamic network-wide meta-profiles.

They have been, per standard, posted to the canonical conf.cryptostorm.ch resource in order to be available for production deployment.

Anyone using earlier versions of the configs is strongly encouraged to update. We have done our best to support backwards compatibility, but it has not been possible to provide full coverage of all prior config versions without crippling the performance improvements. As such, use of earlier configs is likely to result in sporadic, unreliable network connectivity.

We do not expect to be deploying substantive changes to these Linux/"raw" 1.3 configuration profiles for some time; major functionality updating has been compressed into this one, larger break with prior versioning. It has not been convenient for network members, and we regret that - however, in doing so, we've cleared a path forward with no visible roadblocks looming ahead.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Wed Jan 22, 2014 8:26 pm
Forum: member support & tech assistance
Topic: Linux/Tunnelblick connect snags | RESOLVED (via 1.3 conf's)
Replies: 31
Views: 29410

Linux 1.3 Frankfurt conf now available

We've just released the 1.3 "raw"/Linux connection profile and concomitant server-side updates for the Frankfurt exitnode cluster. They have been posted up via the usual conf.cryptostorm.ch location, and we are also posting a copy here for ease of access.

As we have the Icelandic cluster - and its 1.3 nodes - offline currently to complete final hardware adjustments & performance tuning, we have pushed to put this Frankfurt Linux cluster into production for those seeking a short term 1.3 alternative.

Finally, this Frankfurt 1.3 update was pushed through internal testing faster than usual. Please, if any unexpected behaviors are seen we ask that you note them here or in an email to our support team so that we can immediately research and rectify them.

Thank you.
by cryptostorm_ops
Wed Jan 22, 2014 3:22 pm
Forum: member support & tech assistance
Topic: Asus router w/ Asuswrt-Merlin build: conf? | RESOLVED
Replies: 17
Views: 21846

merlin success

parib wrote:small update: After creating password.txt with my hashed token on the first line, copying it to /tmp/password.txt, and editing the Custom Configuration with auth-user-pass /tmp/password.txt and the new remote cluster-iceland.cstorm.pw address, it works. I think it's a problem if you do not enter a password and only a username or ... And the performance is much better compared with the dd-wrt setup. Now I have about 9 Mbit of a regular 11 Mbit line.
This is nicely done work.

To summarize, it appears that dd-wrt's current build introduces some serious performance issues that do not directly relate to hardware constraints. Reimaging with the Merlin firmware, along with several tweaks to the config, produced a successful connection via our 1.3 version "raw" connection profile, with much more appropriate performance characteristics.
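
For anyone following along before the formal howto lands, the steps parib describes boil down to roughly this (paths exactly as reported above; the second line is a throwaway password, per the username-only issue noted):

Code: Select all

# hashed token on line 1, filler password on line 2
printf '%s\n%s\n' '<sha512-hashed token>' 'anypassword' > /tmp/password.txt
chmod 600 /tmp/password.txt
# then, in the router's OpenVPN "Custom Configuration" box:
#   auth-user-pass /tmp/password.txt
#   remote cluster-iceland.cstorm.pw 443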

We hope to see a write-up in the form of a digest-style "howto" for the Merlin install, as this looks like the best router-based instantiation we've seen thus far.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Tue Jan 21, 2014 9:33 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: server-side configuration publication
Replies: 19
Views: 27260

1.3 server-side config proposal

The below-cited server-side configuration has been deployed within the Icelandic exitnode cluster as a proposed template for Linux-specific connections across the network. It is posted here for review and member/community feedback, as per standard cryptostorm procedures:

Code: Select all

# cryptostorm_server_raw version 1.3 config - linux/"raw" framework
# optimised for flexibility & general applicability to raw connections
# but it's NOT altered in cipher selection & enforcement, in any way!
# discussion & details in http://serverconf.cryptostorm.ch


daemon
local {node instantiation IP}
port 443
proto udp
dev tun

txqueuelen 286
# expanded packet queue plane, to improve throughput on high-capacity sessions

sndbuf 655368
rcvbuf 655368
# increase pre-ring packet buffering cache, to improve high-throughput session performance

persist-key
push "persist-key"
# not essential, but smooths SIGHUPs of individual openvpn processes as in new conf loads

persist-tun
push "persist-tun"
# retain tun instantiation client-side during reconnects, to smooth process

fast-io
# experimental directive in OpenVPN 2.3.2 - testing for performance gains on openvpn
# optimize TUN/TAP/UDP I/O writes by avoiding a call to poll/epoll/select prior...
# to the write operation

ca /etc/rawvpn/easy-rsa/keys/ca.crt
cert /etc/rawvpn/easy-rsa/keys/server.crt
key /etc/rawvpn/easy-rsa/keys/server.key
dh /etc/rawvpn/easy-rsa/keys/dh2048.pem
# standard PKI/CA asymmetric key materials storage
# we manually generate & manage all key materials firsthand via cryptographic best practices

script-security 2
auth-user-pass-verify /etc/rawvpn/auth.sh via-file
client-connect /etc/rawvpn/session_up.sh
client-disconnect /etc/rawvpn/session_down.sh
# custom-generated script hooks into our token auth system

tmp-dir /tmp
# manually set temp directory, to ensure active swap over-writes temp data consistently

topology subnet
server 10.55.0.0 255.255.0.0
# internal, non-routed subnet topology for ephemeral network member assignment
# essentially, an internal DHCP framework for client-to-exitnode tunnelized packet transit

float
# allows client to change IP, as with DHCP re-lease, & retain secure session...
# if HMAC continues to validate

# push "redirect-gateway bypass-dhcp"
# generally windows-specific

push "redirect-gateway def1"
# directives to allow clients to re-lease local DHCP outside of secure session via...
# LAN route details & metrics

allow-pull-fqdn
# allows client to pull DNS names from server
# we don't use but may in future leakblock integration

# these below are our selected DNS services for within-network canonical resolution
push "dhcp-option DNS 198.100.146.51"
push "dhcp-option DNS 76.74.205.228"
# OpenNICproject.org, Canuck-optimised :-)
push "dhcp-option DNS 91.191.136.152"
# Telecomix is.gd/jj4IER
push "dhcp-option DNS 213.73.91.35"
# CCC http://is.gd/eC4apk

duplicate-cn
client-cert-not-required
# we do not use certs to uniquely identify connected members...
# doing so is a serious security failure and needlessly endangers anonymity on-net

keepalive 20 60
# retains active sessions with connected members during temporary traffic lulls

max-clients 300
# caps the number of simultaneous connections to a specific exitnode machine

# fragment 1400
mssfix 1400
# tunes the UDP session by fragmenting below the MTU upper bound
# much undocumented/unexpected behaviours result from these parameters, beware!
# we routinely test & refine these parameters, in-house, for best performance
# cannot be 'pushed' to clients, as are required a priori for control channel setup

reneg-sec 1200
# cycle symmetric keys via tls renegotiation every 20 minutes
# an essential fallback to TLS-based 'perfect forward secrecy' via Diffie Hellman keygen

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel cipher (AES in CBC block mode)
# we are actively testing CBC alternatives & will deploy once cipher libraries offer our choice...
# AES-GCM is looking good currently

tls-server
key-method 2
# key-method 2: exchange key material & credentials over the TLS control channel

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements 'perfect forward secrecy' via TLS 1.x, natively, thru ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice
# http://ecc.cryptostorm.ch

tls-exit
# exit on TLS negotiation failure

comp-lzo
# push "comp-lzo no"
# we are working towards removal of this from the network...
# but it sneaks in via client-side prefs

user nobody
group nobody
# nogroup on some distros

tran-window 256
# amount of overlap between old and new TLS control channel session keys allowed
# default is 3600, which is way too long to work with PFS & 1200-second key renegotiations

verb 5
mute 2
status /var/log/rawvpn-status.log
log /var/log/rawvpn.log
# rotating error & connection log parameters - cycle w/ each connection
# used to track packet-level errors within secure sessions
# does not retain any session-level detail - also wipes via regular session cycle
by cryptostorm_ops
Tue Jan 21, 2014 9:22 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: Icelandic exitnode cluster now in production
Replies: 2
Views: 8014

Linux/"raw" Icelandic support now in production (1.3)

After extensive testing and upstream route optimization, we now provide Linux/"raw" connection support within our Icelandic dedicated exitnode cluster.

This support increments forward to configuration version 1.3, so please note that simply editing earlier configuration files to change the --remote parameter will NOT result in successful connections. This is a rare instance in which we have stepped away from full backwards compatibility in order to ensure fast, decisive resolution of a member-impacting variance from expected connection performance. As there is no "installation" required to move to the 1.3 configuration itself - merely downloading the config file for use from the terminal, or import to the Network Manager - we hope this is not unreasonably inconvenient for network members.

We will be watching performance metrics within the exitnode cluster, to see how it scales as member usage climbs - please, as always, report results from your experience. While good results are always appreciated when posted, it's the unexpected and/or unexpectedly disappointing data points that most often point to immediate areas for improvement. Don't be shy about sharing any and all non-positive findings, in other words.

Our next tasks within the cluster are deployment of Android- and iOS-specific connection daemons across its infrastructure footprint. When those are ready for public testing, they will be posted here in this thread, as well.
by cryptostorm_ops
Tue Jan 21, 2014 1:06 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: TCP-based cstorm sessions, & "port striping" techniques
Replies: 0
Views: 13331

TCP-based cstorm sessions, & "port striping" techniques

 ! Message from: df
There was old/obsolete stuff here, so pruning and locking thread
The most recent port striping setup is described at https://cryptostorm.is/blog/port-striping-v2
by cryptostorm_ops
Sun Jan 12, 2014 6:03 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: server-side configuration publication
Replies: 19
Views: 27260

cryptostorm 1.2 Windows-specific server.conf

Here is the corresponding 1.2 version of the server-side configuration for Windows-specific network sessions:

Code: Select all

# cryptostorm_server version 1.2 config - widget framework
# supports & tested for Windows-compiled network access widget
# discussion & details in http://serverconf.cryptostorm.ch

daemon
local {assigned server IP}
port 443
proto udp
dev tun

txqueuelen 286
# expanded packet queue plane, to improve throughput on high-capacity sessions

sndbuf 655368
rcvbuf 655368
# increase pre-ring packet buffering cache, to improve high-throughput session performance

# tun-ipv6
# we aren't yet supporting IPv6 as it's not supported fully by OpenVPN & OpenSSL
# several active dev projects are at work on this & we are following them regularly

persist-key
push "persist-key"
# not essential, but smooths SIGHUPs of individual openvpn processes as in new conf loads

persist-tun
push "persist-tun"
# retain tun instantiation client-side during reconnects, to smooth process

fast-io
# experimental directive in OpenVPN 2.3.2 - testing for performance gains on openvpn
# optimize TUN/TAP/UDP I/O writes by avoiding a call to poll/epoll/select prior...
# to the write operation

ca /etc/windowsvpn/easy-rsa/keys/ca.crt
cert /etc/windowsvpn/easy-rsa/keys/server.crt
key /etc/windowsvpn/easy-rsa/keys/server.key
dh /etc/windowsvpn/easy-rsa/keys/dh2048.pem
# standard PKI/CA asymmetric key materials storage
# we manually generate & manage all key materials firsthand via cryptographic best practices

script-security 2
auth-user-pass-verify /etc/windowsvpn/auth.sh via-file
client-connect /etc/windowsvpn/session_up.sh
client-disconnect /etc/windowsvpn/session_down.sh
# custom-generated script hooks into our token auth system

tmp-dir /tmp
# manually set temp directory, to ensure active swap over-writes temp data consistently

topology subnet
server 10.77.0.0 255.255.0.0
# internal, non-routed subnet topology for ephemeral network member assignment
# essentially, an internal DHCP framework for client-to-exitnode tunnelized packet transit

float
# allows client to change IP, as with DHCP re-lease, & retain secure session...
# if HMAC continues to validate

push "redirect-gateway def1"
push "redirect-gateway bypass-dhcp"
# directives to allow clients to re-lease local DHCP outside of secure session via...
# LAN route details & metrics

allow-pull-fqdn
# allows client to pull DNS names from server
# we don't use but may in future leakblock integration

# these below are our selected DNS services for within-network canonical resolution
push "dhcp-option DNS 198.100.146.51"
push "dhcp-option DNS 76.74.205.228"
# OpenNICproject.org, Canuck-optimised :-)
push "dhcp-option DNS 91.191.136.152"
# Telecomix is.gd/jj4IER
push "dhcp-option DNS 213.73.91.35"
# CCC http://is.gd/eC4apk

duplicate-cn
client-cert-not-required
# we do not use certs to uniquely identify connected members...
# doing so is a serious security failure and needlessly endangers anonymity on-net

keepalive 20 60
# retains active sessions with connected members during temporary traffic lulls

max-clients 300
# caps the number of simultaneous connections to a specific exitnode machine

fragment 1400
# mssfix 1400
# tunes the UDP session by fragmenting below the MTU upper bound
# much undocumented/unexpected behaviours result from these parameters, beware!
# we routinely test & refine these parameters, in-house, for best performance
# they cannot be 'pushed' to clients, as are required a priori for control channel setup

reneg-sec 1200
# cycle symmetric keys via tls renegotiation every 20 minutes
# an essential fallback to TLS-based 'perfect forward secrecy' via Diffie Hellman keygen

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel block cipher methodology
# we are actively testing CBC alternatives & will deploy once cipher libraries offer our choice...
# AES-GCM is looking good currently

tls-server
key-method 2
# key-method 2 selects the TLS-based data channel key negotiation method (default in OpenVPN 2.x; required for username/password auth)

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements 'perfect forward secrecy' via TLS 1.x, natively, thru ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice
# http://ecc.cryptostorm.ch

tls-exit
# exit on TLS negotiation failure

comp-lzo
# push "comp-lzo no"
# we are working towards removal of this from the network...
# but it sneaks in via client-side prefs

user nobody
group nobody
# nogroup on some distros

tran-window 256
# amount of overlap between old and new TLS control channel session keys allowed
# default is 3600, which is far too long to work with PFS & the 1200-second key renegotiations above

verb 5
mute 2
status /var/log/windowsvpn-status.log
log /var/log/windowsvpn.log
# rotating error & connection log parameters - cycle w/ each connection
# used to track packet-level errors within secure sessions
# does not retain any session-level detail - also wipes via regular session cycle
by cryptostorm_ops
Sun Jan 12, 2014 5:46 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

Re: cryptostorm: client config discussions, bugs, requests,

Lignus wrote:Ran some DNS checks; the .nu domain had either not propagated or not been updated. In addition, it looks like one of the .net ones was overlooked:

raw-montreal.cryptostorm.nu
***MISSING***

windows-montreal.cryptostorm.nu
***MISSING***

windows-iceland.cryptostorm.nu
***MISSING***
Excellent work on those - one was still propagating, but the other two had indeed been overlooked in this round of updates. We've not yet moved to a bulk-friendly nameserver service (for a number of reasons, we have avoided doing nameserver & DNS resolution in-house for many years - likely a good topic to discuss at some point here in the forum), and as a result these entries are being manually version-controlled across multiple registrars & TLDs. That process is ripe for minor errors, unfortunately, so outside checks are much appreciated.
These were checked against 8.8.8.8
Fair enough. Google's DNS system has become somewhat canonical in recent years, although server-side we do not use them, as their footprint within the United States of NSAmerica is just too big to ignore at this point.
raw-iceland.cryptostorm.nu
***MISSING***

raw-iceland.cryptostorm.net
***MISSING***
Officially, there's no "raw-iceland" hostname set currently, as there's not yet a dedicated daemon for raw connections on our Icelandic exitnode (reason: additional IPs are still being assigned to the hardware in the datacentre, & current IP allocations are already deployed against Windows & administrative requirements - we hope to have that resolved shortly).

We didn't want to give too much of a "false hope" that the raw Iceland subdomains are pointing at properly-instantiated network daemons... when in fact they're simply dumping session handshakes into the Windows daemon for the time being (note: some folks have tweaked their raw configs to enable connects to that daemon, which is something we've no problem with but which we can't explicitly recommend as it's a rather fiddly process & will be resolved with extra IPs shortly).

If you notice any other anomalies in the TLD mappings, please do let us know. We're working upfront to ensure a broad & diverse sweep of registrars & TLDs within our resolution framework, so that any outage or censorship of a given domain and/or registrar is transparently handled via the existing, already-pushed redundancy within our <connections> elements.
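For anyone who wants to spot-check the published mappings themselves, a quick pass with dig against the resolver mentioned above is one way to do it (a minimal sketch; the hostnames are taken from the report quoted above, and any resolver can be substituted for 8.8.8.8):

Code: Select all

# spot-check a few published hostnames against Google's resolver
for h in raw-montreal.cryptostorm.nu windows-montreal.cryptostorm.nu windows-iceland.cryptostorm.nu; do
  printf '%s -> ' "$h"
  dig +short @8.8.8.8 "$h" A
done

An empty result for a given hostname generally means the record hasn't propagated (or doesn't exist yet), which matches the ***MISSING*** entries above.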

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Sun Jan 12, 2014 2:53 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

cryptostorm client config 1.2 - Frankfurt

We have tested and confirmed functionality for the following "raw"/Linux client configuration file, which is intended for dedicated connections to the Frankfurt exitnode cluster:

If you see any unexpected or anomalous behaviors when using this config, please post a note here so that we can investigate at once. Thank you.

Code: Select all

# this is the cryptostorm.is client settings file, versioning...
# cryptostorm_client_raw-frankfurt1_2.conf

# it is intended to provide connection solely to the Frankfurt exitnode cluster
# DNS resolver redundancy provided via connection TLD round-robin logic
# Chelsea Manning is indeed a badassed chick: #FreeChelsea!
# also... FuckTheNSA - for reals


client
dev tun
resolv-retry 16
nobind
float

remote-random
# randomizes selection of connection profile from list below, for redundancy against...
# DNS blacklisting-based session blocking attacks


# frankfurt cluster
<connection>
remote raw-frankfurt.cryptostorm.net 443 udp
</connection>

<connection>
remote raw-frankfurt.cryptostorm.ch 443 udp
</connection>

<connection>
remote raw-frankfurt.cryptostorm.nu 443 udp
</connection>

<connection>
remote raw-frankfurt.cstorm.pw 443 udp
</connection>


comp-lzo no
# specifies refusal of link-layer compression defaults
# we prefer compression be handled elsewhere in the OSI layers
# see forum for ongoing discussion - https://cryptostorm.ch/viewtopic.php?f=38&t=5981

down-pre
# runs client-side "down" script prior to shutdown, to help minimise risk...
# of session termination packet leakage

allow-pull-fqdn
# allows client to pull DNS names from server
# we don't use but may in future leakblock integration

explicit-exit-notify 3
# attempts to notify exit node when client session is terminated
# strengthens MiTM protections for orphan sessions

hand-window 37
# specified duration (in seconds) to wait for the session handshake to complete
# a renegotiation taking longer than this has a problem, & should be aborted

fragment 1400
# congruent with server-side --fragment directive

auth-user-pass
# passes up, via bootstrapped TLS, SHA512 hashed token value to authenticate to darknet

# auth-retry interact
# 'interact' is an experimental parameter not yet in our production build.

ca ca.crt
# specification & location of server-verification PKI materials
# for details, see http://pki.cryptostorm.ch

<ca>
-----BEGIN CERTIFICATE-----
MIIFHjCCBAagAwIBAgIJAPXIBgkKVkuyMA0GCSqGSIb3DQEBCwUAMIG6MQswCQYD
VQQGEwJDQTELMAkGA1UECBMCUUMxETAPBgNVBAcTCE1vbnRyZWFsMTYwNAYDVQQK
FC1LYXRhbmEgSG9sZGluZ3MgTGltaXRlIC8gIGNyeXB0b3N0b3JtX2RhcmtuZXQx
ETAPBgNVBAsTCFRlY2ggT3BzMRcwFQYDVQQDFA5jcnlwdG9zdG9ybV9pczEnMCUG
CSqGSIb3DQEJARYYY2VydGFkbWluQGNyeXB0b3N0b3JtLmlzMB4XDTEzMTAxMTEz
NDA0NloXDTE3MDYwOTEzNDA0NlowgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJR
QzERMA8GA1UEBxMITW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBM
aW1pdGUgLyAgY3J5cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMx
FzAVBgNVBAMUDmNyeXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRt
aW5AY3J5cHRvc3Rvcm0uaXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDS4TuqOoT6NrE7oNXj5il97Ml306F9rmEf22+E/5uCsiTNL7inanLsDixihq2l
e0anBK8UvDPExYIWLpXu4ERFFsWS//AoZer8BlVYKnEEgzPh5UV8Jy2TyOlZ26Yz
g1A4MRcDFdPUXLq5Z8hw09k1uqOPU6trv5J+5TwhzMHrMunip8hvx8uXjzQ4DLPK
RKfRzwl+2ydyXgAGdfY1zLlvYvzvVUc4GcLXmAOLT4ZjWKxl4MoqNwf9VBfdLWn5
mWuYp/tT3RxNjKHnuqZlYhCvfWp1hbzSW/OdlO13B1C/PSfFnfFzlANWh31bfvos
pbCIFYG6RXIiP+Arc2sLVgTHAgMBAAGjggEjMIIBHzAdBgNVHQ4EFgQUWmCUeZzm
Qa+zcOA+KWfNF1e2Z9cwge8GA1UdIwSB5zCB5IAUWmCUeZzmQa+zcOA+KWfNF1e2
Z9ehgcCkgb0wgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJRQzERMA8GA1UEBxMI
TW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBMaW1pdGUgLyAgY3J5
cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMxFzAVBgNVBAMUDmNy
eXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRtaW5AY3J5cHRvc3Rv
cm0uaXOCCQD1yAYJClZLsjAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IB
AQDKDYRtxELcCUZwnGvQa8hp5lO/U87yYzOSP3OON4hBS6YWEmRyV3GvZtGibadl
8HbOU0TRS1skcS0g8OfiY+t/qitIpBuLMHgJHubBMWQ5SP9RlSy2ilxt7J+UGbw3
Xi6u7RRG1dOEZkN0RxpbZQeGf7MD6RTI+4JMRvstI0t2wpfAk0eF0FM++iqhR9mu
aH8apEFDUvCQv4NnDrXJqDUJi8Z56SHEJQ5NMt3ugv7vtY3kI7sciuPdW3hDPsJh
/T3cOWUeYeIVknVHwMuUFf6gdxZ8crrWkANpjwOm0gVh1BPRQzXXPKlSVUGgEVFD
XgJyvkX663aTcshEON1+bXp6
-----END CERTIFICATE-----
</ca>

ns-cert-type server
# requires the server-side certificate to be explicitly marked as a server cert (nsCertType), for MiTM hardening.

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel block cipher methodology
# we are actively testing CBC alternatives & will deploy once well-tested...
# cipher libraries support our choice - AES-GCM is looking good currently

replay-window 128 30
# settings which determine when to throw out UDP datagrams that are out of order...
# either temporally or via sequence number

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements 'perfect forward secrecy' via TLS 1.x & its ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice
# http://ecc.cryptostorm.ch

tls-client
key-method 2
# key-method 2 selects the TLS-based data channel key negotiation used during session bootstrap (required for username/password auth)

log devnull.txt
verb 0
mute 1
# sets logging verbosity client-side, by default, to zero
# no logs kept locally of connections - this can be changed...
# if you'd like to see more details of connection initiation & negotiation
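As a quick usage sketch (not part of the config above): assuming the file is saved as cryptostorm_client_raw-frankfurt1_2.conf, with OpenVPN 2.3.x installed and ca.crt alongside it if you're not relying on the inlined copy, a raw Linux connection can be launched along these lines:

Code: Select all

# launch a raw session using the Frankfurt config; OpenVPN will prompt
# for the token-derived username/password via auth-user-pass
sudo openvpn --config cryptostorm_client_raw-frankfurt1_2.conf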
by cryptostorm_ops
Sun Jan 12, 2014 2:49 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

windows-frankfurt-cryptostorm.net

marzametal wrote:Congrats on all the work behind the scenes!
I would like to mention that I had no internet response when using "windows-frankfurt.cryptostorm.net". I switched to the Iceland one and all was swell...
Thanks for the bug report - we did find an erroneous A record mapping that had been propagated. That has been resolved, and this mapping - windows-frankfurt.cryptostorm.net - should be resolving to the correct IP/instance within the Frankfurt exitnode cluster.

If you have a chance to test that out, please let us know what your results are so that we can be sure we've flushed the incorrect mapping.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Wed Jan 08, 2014 4:50 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: HOWTO: manual editing of widget exitnode preferences
Replies: 13
Views: 23087

windows-{clusterID}.cryptostorm.net hostname mappings

marzametal wrote:EDIT: The network kept on rediscovering every couple of minutes. Even after discovery was complete, I had no active internet access (browser, email etc...). I followed the above instructions, making sure to Run as Administrator and logged off before making .conf adjustment.
You've jumped ahead of our rollout of the new widget-specific server daemons, which is likely why you're getting those reconnects.

Throughout today, we'll be deploying descriptive hostname mappings to best reflect the OS specificity of chosen connections. For example, windows-iceland.cryptostorm.net is in process of mapping to the Windows widget-specific openvpn instance on our new cluster there. Those will be the hostnames you'll want to be putting into the widget's config file, to ensure it points at the daemons server-side with the widget-optimised server configuration settings.

If this all seems a bit fiddly right now, you're right: once all the requisite A records, SNAT, NIC, & iptables rules are settled & validated in production, the end result will be a far more elegant & complexity-free way of pointing specific client instances at exitnode cluster resources specifically optimised for their requirements. Basically, we're wrapping the (somewhat surprising) complexity of implementing all these customised, optimised server-side configuration instances behind an abstracted layer of encapsulating punchdowns. It's a bit of an object-oriented-inspired approach to resource management, and we are confident it'll be a qualitatively better & more elegant process for network members, first and foremost.

In the meantime, the hostnames that'll be widget-specific will be (although these are not fully propagated just yet!):

  • windows-montreal.cryptostorm.net
  • windows-frankfurt.cryptostorm.net
  • windows-iceland.cryptostorm.net
  • windows-dynamic.cryptostorm.net


...TLD redundancy, as in the raw config files (.org | .pw | .nu), is being deployed in the 1.0 widget.
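For those who want to confirm propagation before editing their widget config, a simple loop over the hostnames listed above will show which ones are resolving yet (a sketch only; dig can be swapped for nslookup or host):

Code: Select all

# check which widget-specific hostnames have propagated so far
for h in windows-montreal windows-frankfurt windows-iceland windows-dynamic; do
  printf '%s.cryptostorm.net -> ' "$h"
  dig +short "${h}.cryptostorm.net" A
done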

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Wed Jan 08, 2014 4:38 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

Iceland 1.2

DesuStrike wrote:Yesterday I tried the Iceland node and it constantly reconnected every 2 minutes or so.
What client/OS are you using for those connections? The current mapping has been essentially *nix-specific, on an interim basis. We are in the midst of deploying OS-specific bindings in Iceland right now, in order to support widget connections, Android connections, iOS connections, and generic/*nix connections most effectively.

The 1.2 Iceland config has been tested specifically for command-line Linux with excellent throughput and stability - but not as yet for other platforms. That is currently, again, in process.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Wed Jan 08, 2014 2:37 am
Forum: general chat, suggestions, industry news
Topic: Default PRNGs in standard OpenSSL compiles
Replies: 0
Views: 10519

Default PRNGs in standard OpenSSL compiles

Saw this come across the screen a while back, and wonder whether it's sound advice to patch & recompile as the poster suggests...
Date: Sat, 14 Dec 2013 04:33:31 -0800
From: coderman <coderman@gmail.com>
To: cpunks <cypherpunks@cpunks.org>, Full Disclosure
<full-disclosure@lists.grok.org.uk>
Subject: [Full-disclosure] RDRAND used directly when default engines
loaded in openssl-1.0.1-beta1 through openssl-1.0.1e


as per the FreeBSD announcement[0] and others[1][2] direct use of RDRAND as sole entropy source is not recommended.

from Westmere onward you could use AES-NI to make crypto fast in OpenSSL. a common theme is to initialize OpenSSL via ENGINE_load_builtin_engines() which lets OpenSSL take advantage of this acceleration.

with Sandy Bridge you also got RDRAND. now load_builtin_engines results in the application using RDRAND directly for all entropy, in addition to accelerating AES.


if you are using an application linked with openssl-1.0.1-beta1 through openssl-1.0.1e you should do one of the following:
  • a.) rebuild your OpenSSL with OPENSSL_NO_RDRAND defined.
  • b.) call RAND_set_rand_engine(NULL) after ENGINE_load_builtin_engines().
  • c.) git pull latest openssl with commit: "Don't use rdrand engine as default unless explicitly requested." - Dr. Stephen Henson


the OPENSSL_NO_RDRAND option is recommended; an inadvertent call to load engines elsewhere could re-enable this bad rng behavior.


best regards,


0. "FreeBSD Developer Summit: Security Working Group, /dev/random"
https://wiki.freebsd.org/201309DevSummit/Security

1. "Surreptitiously Tampering with Computer Chips"
https://www.schneier.com/blog/archives/ ... ously.html

2. "How does the NSA break SSL? ... Weak random number generators"
http://blog.cryptographyengineering.com ... k-ssl.html
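For anyone wanting to check a local box before deciding whether the patch/recompile is warranted, the openssl engine subcommand will show whether the rdrand engine is even present and usable in a given build (a rough check only; it does not tell you whether an application has loaded it as the default RAND source):

Code: Select all

# list the engines compiled into this OpenSSL build and test availability
openssl engine -t
# or target rdrand directly; an error here means this build lacks it
openssl engine -t rdrand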
by cryptostorm_ops
Mon Jan 06, 2014 9:22 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

Re: cryptostorm: client config discussions, bugs, requests,

spotshot wrote:the other 4, dynamic1_1 frankfurt1_1 locked1_1 montreal1_1
get these warning when connected, without access to net
That's correct - the other four conf files are being forked to Windows & "raw" dedicated daemons on each exitnode, which should be done shortly once testing of the IP mappings is completed in-house. In the meantime, we make the Icelandic exitnode available only to "raw"/direct connections to bridge the gap until the forking is complete.

~ cryptostorm_ops
by cryptostorm_ops
Mon Jan 06, 2014 6:32 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

cryptostorm_client_iceland1_2.conf

We have incremented our client-side configuration to version 1.2, as we implement forked configuration options for OS/ecosystem flavours.

This is the first 1.2-class configuration file presented for public use and validation, and is specific to our new Icelandic exitnode cluster. It is intended for "raw" OpenVPN sessions; config settings for Windows widget sessions, iOS connections, and other subclasses are currently being prepared for release.


Code: Select all

# this is the cryptostorm.is client settings file, versioning...
# cryptostorm_client_iceland1_2.conf

# it is intended to provide connection solely to the Iceland exitnode cluster
# DNS resolver redundancy provided by cluster-iceland randomised lookup queries
# Chelsea Manning is indeed a badassed chick: #FreeChelsea!
# also... FuckTheNSA - for reals


client
dev tun
resolv-retry 16
nobind
float

remote-random
# randomizes selection of connection profile from list below, for redundancy against...
# DNS blacklisting-based session blocking attacks


<connection>
remote cluster-iceland.cryptostorm.net 443 udp
</connection>

<connection>
remote cluster-iceland.cryptostorm.ch 443 udp
</connection>

<connection>
remote cluster-iceland.cryptostorm.nu 443 udp
</connection>

<connection>
remote cluster-iceland.cstorm.pw 443 udp
</connection>


comp-lzo no
# specifies refusal of link-layer compression defaults
# we prefer compression be handled elsewhere in the OSI layers
# see forum for ongoing discussion - https://cryptostorm.ch/viewtopic.php?f=38&t=5981

down-pre
# runs client-side "down" script prior to shutdown, to help minimise risk...
# of session termination packet leakage

allow-pull-fqdn
# allows client to pull DNS names from server
# we don't use but may in future leakblock integration

explicit-exit-notify 3
# attempts to notify exit node when client session is terminated
# strengthens MiTM protections for orphan sessions

hand-window 37
# specified duration (in seconds) to wait for the session handshake to complete
# a renegotiation taking longer than this has a problem, & should be aborted

fragment 1400
# congruent with server-side --fragment directive

auth-user-pass
# passes up, via bootstrapped TLS, SHA512 hashed token value to authenticate to darknet

# auth-retry interact
# 'interact' is an experimental parameter not yet in our production build.

ca ca.crt
# specification & location of server-verification PKI materials
# for details, see http://pki.cryptostorm.ch

<ca>
-----BEGIN CERTIFICATE-----
MIIFHjCCBAagAwIBAgIJAPXIBgkKVkuyMA0GCSqGSIb3DQEBCwUAMIG6MQswCQYD
VQQGEwJDQTELMAkGA1UECBMCUUMxETAPBgNVBAcTCE1vbnRyZWFsMTYwNAYDVQQK
FC1LYXRhbmEgSG9sZGluZ3MgTGltaXRlIC8gIGNyeXB0b3N0b3JtX2RhcmtuZXQx
ETAPBgNVBAsTCFRlY2ggT3BzMRcwFQYDVQQDFA5jcnlwdG9zdG9ybV9pczEnMCUG
CSqGSIb3DQEJARYYY2VydGFkbWluQGNyeXB0b3N0b3JtLmlzMB4XDTEzMTAxMTEz
NDA0NloXDTE3MDYwOTEzNDA0NlowgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJR
QzERMA8GA1UEBxMITW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBM
aW1pdGUgLyAgY3J5cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMx
FzAVBgNVBAMUDmNyeXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRt
aW5AY3J5cHRvc3Rvcm0uaXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDS4TuqOoT6NrE7oNXj5il97Ml306F9rmEf22+E/5uCsiTNL7inanLsDixihq2l
e0anBK8UvDPExYIWLpXu4ERFFsWS//AoZer8BlVYKnEEgzPh5UV8Jy2TyOlZ26Yz
g1A4MRcDFdPUXLq5Z8hw09k1uqOPU6trv5J+5TwhzMHrMunip8hvx8uXjzQ4DLPK
RKfRzwl+2ydyXgAGdfY1zLlvYvzvVUc4GcLXmAOLT4ZjWKxl4MoqNwf9VBfdLWn5
mWuYp/tT3RxNjKHnuqZlYhCvfWp1hbzSW/OdlO13B1C/PSfFnfFzlANWh31bfvos
pbCIFYG6RXIiP+Arc2sLVgTHAgMBAAGjggEjMIIBHzAdBgNVHQ4EFgQUWmCUeZzm
Qa+zcOA+KWfNF1e2Z9cwge8GA1UdIwSB5zCB5IAUWmCUeZzmQa+zcOA+KWfNF1e2
Z9ehgcCkgb0wgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJRQzERMA8GA1UEBxMI
TW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBMaW1pdGUgLyAgY3J5
cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMxFzAVBgNVBAMUDmNy
eXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRtaW5AY3J5cHRvc3Rv
cm0uaXOCCQD1yAYJClZLsjAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IB
AQDKDYRtxELcCUZwnGvQa8hp5lO/U87yYzOSP3OON4hBS6YWEmRyV3GvZtGibadl
8HbOU0TRS1skcS0g8OfiY+t/qitIpBuLMHgJHubBMWQ5SP9RlSy2ilxt7J+UGbw3
Xi6u7RRG1dOEZkN0RxpbZQeGf7MD6RTI+4JMRvstI0t2wpfAk0eF0FM++iqhR9mu
aH8apEFDUvCQv4NnDrXJqDUJi8Z56SHEJQ5NMt3ugv7vtY3kI7sciuPdW3hDPsJh
/T3cOWUeYeIVknVHwMuUFf6gdxZ8crrWkANpjwOm0gVh1BPRQzXXPKlSVUGgEVFD
XgJyvkX663aTcshEON1+bXp6
-----END CERTIFICATE-----
</ca>

ns-cert-type server
# requires the server-side certificate to be explicitly marked as a server cert (nsCertType), for MiTM hardening.

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel block cipher methodology
# we are actively testing CBC alternatives & will deploy once well-tested...
# cipher libraries support our choice - AES-GCM is looking good currently

replay-window 128 30
# settings which determine when to throw out UDP datagrams that are out of order...
# either temporally or via sequence number

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements 'perfect forward secrecy' via TLS 1.x & its ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice
# http://ecc.cryptostorm.ch

tls-client
key-method 2
# key-method 2 selects the TLS-based data channel key negotiation used during session bootstrap (required for username/password auth)

log devnull.txt
verb 0
mute 1
# sets logging verbosity client-side, by default, to zero
# no logs kept locally of connections - this can be changed...
# if you'd like to see more details of connection initiation & negotiation
by cryptostorm_ops
Mon Jan 06, 2014 6:26 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: server-side configuration publication
Replies: 19
Views: 27260

cryptostorm server config 1.2 - Iceland

Here is the 1.2 build of our server-side "raw" configuration file, for public review and auditing. There are, as usual, a number of small bugfixes, parameter tweaks, and performance-tuning adjustments. Those curious regarding specific parameter choices are encouraged to post queries and feedback in this thread.

Thank you.

Code: Select all

[root@fenrir openvpn]# cat server.conf
# cryptostorm_server_windows version 1.2 config - linux/"raw" framework
# optimised for flexibility & general applicability to raw connections
# but it's NOT altered in cipher selection & enforcement, in any way!
# discussion & details in http://serverconf.cryptostorm.ch


daemon
local {local IP}
port 443
proto udp
dev tun

txqueuelen 286
# expanded packet queue plane, to improve throughput on high-capacity sessions

sndbuf 655368
rcvbuf 655368
# increase socket send/receive buffer sizes, to improve high-throughput session performance

persist-key
push "persist-key"
# not essential, but smooths SIGHUPs of individual openvpn processes as in new conf loads

persist-tun
push "persist-tun"
# retain tun instantiation client-side during reconnects, to smooth process

fast-io
# experimental directive in OpenVPN 2.3.2 - testing for performance gains on openvpn
# optimize TUN/TAP/UDP I/O writes by avoiding a call to poll/epoll/select prior...
# to the write operation

ca /etc/rawvpn/easy-rsa/keys/ca.crt
cert /etc/rawvpn/easy-rsa/keys/server.crt
key /etc/rawvpn/easy-rsa/keys/server.key
dh /etc/rawvpn/easy-rsa/keys/dh2048.pem
# standard PKI/CA asymmetric key materials storage
# we manually generate & manage all key materials firsthand via cryptographic best practices

script-security 2
auth-user-pass-verify /etc/rawvpn/auth.sh via-file
client-connect /etc/rawvpn/session_up.sh
client-disconnect /etc/rawvpn/session_down.sh
# custom-generated script hooks into our token auth system

tmp-dir /tmp
# manually set temp directory, to ensure active swap over-writes temp data consistently

topology subnet
server 10.55.0.0 255.255.0.0
# internal, non-routed subnet topology for ephemeral network member assignment
# essentially, an internal DHCP framework for client-to-exitnode tunnelized packet transit

float
# allows client to change IP, as with DHCP re-lease, & retain secure session...
# if HMAC continues to validate

# push "redirect-gateway bypass-dhcp"
# generally windows-specific

push "redirect-gateway def1"
# redirects the client's default gateway into the tunnel via two more-specific routes,...
# leaving the original default route & LAN metrics in place

allow-pull-fqdn
# allows client to pull DNS names from server
# we don't use but may in future leakblock integration

# these below are our selected DNS services for within-network canonical resolution
push "dhcp-option DNS 198.100.146.51"
push "dhcp-option DNS 76.74.205.228"
# OpenNICproject.org, Canuck-optimised :-)
push "dhcp-option DNS 91.191.136.152"
# Telecomix is.gd/jj4IER
push "dhcp-option DNS 213.73.91.35"
# CCC http://is.gd/eC4apk

duplicate-cn
client-cert-not-required
# we do not use certs to uniquely identify connected members...
# doing so is a serious security failure and needlessly endangers anonymity on-net

keepalive 20 60
# retains active sessions with connected members during temporary traffic lulls

max-clients 300
# caps the number of simultaneous connections to a specific exitnode machine

# fragment 1400
# mssfix 1400
# tunes the UDP session by fragmenting below the MTU upper bound
# much undocumented/unexpected behaviour results from these parameters, beware!
# we routinely test & refine these parameters, in-house, for best performance
# cannot be 'pushed' to clients, as they are required a priori for control channel setup

reneg-sec 1200
# cycle symmetric keys via tls renegotiation every 20 minutes
# an essential fallback to TLS-based 'perfect forward secrecy' via Diffie Hellman keygen

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel block cipher methodology
# we are actively testing CBC alternatives & will deploy once cipher libraries offer our choice...
# AES-GCM is looking good currently

tls-server
key-method 2
# key-method 2 selects the TLS-based data channel key negotiation method (default in OpenVPN 2.x; required for username/password auth)

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements 'perfect forward secrecy' via TLS 1.x, natively, thru ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice
# http://ecc.cryptostorm.ch

tls-exit
# exit on TLS negotiation failure

comp-lzo no
push "comp-lzo no"
# we are working towards removal of this from the network...
# but it sneaks in via client-side prefs

user nobody
group nobody
# nogroup on some distros

tran-window 256
# amount of overlap between old and new TLS control channel session keys allowed
# default is 3600, which is far too long to work with PFS & the 1200-second key renegotiations above

verb 5
mute 2
status /var/log/rawvpn-status.log
log /var/log/rawvpn.log
# rotating error & connection log parameters - cycle w/ each connection
# used to track packet-level errors within secure sessions
# does not retain any session-level detail - also wipes via regular session cycle
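The auth.sh / session_up.sh / session_down.sh hooks referenced above are not reproduced here. Purely as an illustration of the plumbing - not our actual token-auth code - a minimal auth-user-pass-verify script using via-file looks roughly like this (OpenVPN writes the client-supplied username & password to a temp file, one per line, and passes that file's path as the first argument; exit 0 accepts the session, non-zero rejects it):

Code: Select all

#!/bin/sh
# illustrative sketch only - NOT cryptostorm's production auth.sh
creds="$1"                       # temp file handed over by OpenVPN (via-file mode)
username=$(sed -n '1p' "$creds") # line 1: username (token)
password=$(sed -n '2p' "$creds") # line 2: password (hashed token value)

# a real hook would validate the hashed token against the auth backend here;
# this placeholder simply rejects empty credentials
[ -n "$username" ] && [ -n "$password" ] && exit 0
exit 1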
by cryptostorm_ops
Mon Dec 30, 2013 4:33 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

cryptostorm_client_locked1_1f.conf

Here's the guts of the 1.1(f) version of the client config family; we're posting it here for review by folks who are helping to refine this generation of the settings.

Note that server-side conf has been substantively updated over the weekend, so if you've tested config settings previously they may work better now that the server updates have been pushed to production across the network.

A decision has been made by the ops team to standardise on an MTU target of 1400 for the current generation of cryptostorm's framework. This also maintains backwards compatibility as far as possible for current network members. That's a variance from earlier instances of the latest-version configs posted here, which were moving towards a setting of 1350; that setting could not be maintained given compatibility and other issues in the network.
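As a rough way to sanity-check that a given path tolerates the packet sizes implied by that 1400 target, a don't-fragment ping sweep can be run client-side before connecting (Linux iputils syntax; the exitnode hostname here is just an example, and 28 bytes of IP/ICMP headers sit on top of the payload, so 1372 + 28 = 1400 on the wire):

Code: Select all

# probe the untunneled path with DF set, at the 1400-byte boundary
ping -M do -s 1372 -c 3 raw-frankfurt.cryptostorm.net

Probes failing with "message too long" errors typically indicate a path MTU below 1400, in which case fragmentation behaviour is worth a closer look.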

Here is the text file of the (f) iteration of the 'locked' config; all other flavours will follow identical parameters, save for differences in <connection> constellations and round-robin logic:

Code: Select all

# this is the cryptostorm.is client settings file, versioning...
# cryptostorm_client_locked1_1f.conf

# it is intended for randomised initial selection of geographic exitnode cluster...
# then retention of specific node IP across session restarts within that cluster
# current version of this file can always be found in http://conf.crytostorm.org

# also... FuckTheNSA.


client
dev tun
resolv-retry 16
nobind
float

remote-random
# randomizes selection of connection profile from list below, for redundancy against...
# DNS blacklisting-based session blocking attacks


<connection>
remote exitnode-balancer.cryptostorm.net 443 udp
</connection>

<connection>
remote exitnode-balancer.cryptostorm.ch 443 udp
</connection>

<connection>
remote exitnode-loadbalancer.cstorm.pw 443 udp
</connection>

<connection>
remote exitnode-loadbalancer.cryptostorm.nu 443 udp
</connection>


comp-lzo no
# specifies refusal of link-layer compression defaults
# we prefer compression be handled elsewhere in the OSI layers
# see forum for ongoing discussion - https://cryptostorm.ch/viewtopic.php?f=38&t=5981

down-pre
# runs client-side "down" script prior to shutdown, to help minimise risk...
# of session termination packet leakage

explicit-exit-notify 3
# attempts to notify exit node when client session is terminated
# strengthens MiTM protections for orphan sessions

hand-window 37
# specified duration (in seconds) to wait for the session handshake to complete
# a renegotiation taking longer than this has a problem, & should be aborted

fragment 1400
# congruent with server-side --fragment directive

auth-user-pass
# passes up, via bootstrapped TLS, SHA512 hashed token value to authenticate to darknet

# auth-retry interact
# 'interact' is an experimental parameter not yet in our production build.

ca ca.crt
# specification & location of server-verification PKI materials
# for details, see http://pki.cryptostorm.ch

<ca>
-----BEGIN CERTIFICATE-----
MIIFHjCCBAagAwIBAgIJAPXIBgkKVkuyMA0GCSqGSIb3DQEBCwUAMIG6MQswCQYD
VQQGEwJDQTELMAkGA1UECBMCUUMxETAPBgNVBAcTCE1vbnRyZWFsMTYwNAYDVQQK
FC1LYXRhbmEgSG9sZGluZ3MgTGltaXRlIC8gIGNyeXB0b3N0b3JtX2RhcmtuZXQx
ETAPBgNVBAsTCFRlY2ggT3BzMRcwFQYDVQQDFA5jcnlwdG9zdG9ybV9pczEnMCUG
CSqGSIb3DQEJARYYY2VydGFkbWluQGNyeXB0b3N0b3JtLmlzMB4XDTEzMTAxMTEz
NDA0NloXDTE3MDYwOTEzNDA0NlowgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJR
QzERMA8GA1UEBxMITW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBM
aW1pdGUgLyAgY3J5cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMx
FzAVBgNVBAMUDmNyeXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRt
aW5AY3J5cHRvc3Rvcm0uaXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDS4TuqOoT6NrE7oNXj5il97Ml306F9rmEf22+E/5uCsiTNL7inanLsDixihq2l
e0anBK8UvDPExYIWLpXu4ERFFsWS//AoZer8BlVYKnEEgzPh5UV8Jy2TyOlZ26Yz
g1A4MRcDFdPUXLq5Z8hw09k1uqOPU6trv5J+5TwhzMHrMunip8hvx8uXjzQ4DLPK
RKfRzwl+2ydyXgAGdfY1zLlvYvzvVUc4GcLXmAOLT4ZjWKxl4MoqNwf9VBfdLWn5
mWuYp/tT3RxNjKHnuqZlYhCvfWp1hbzSW/OdlO13B1C/PSfFnfFzlANWh31bfvos
pbCIFYG6RXIiP+Arc2sLVgTHAgMBAAGjggEjMIIBHzAdBgNVHQ4EFgQUWmCUeZzm
Qa+zcOA+KWfNF1e2Z9cwge8GA1UdIwSB5zCB5IAUWmCUeZzmQa+zcOA+KWfNF1e2
Z9ehgcCkgb0wgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJRQzERMA8GA1UEBxMI
TW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBMaW1pdGUgLyAgY3J5
cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMxFzAVBgNVBAMUDmNy
eXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRtaW5AY3J5cHRvc3Rv
cm0uaXOCCQD1yAYJClZLsjAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IB
AQDKDYRtxELcCUZwnGvQa8hp5lO/U87yYzOSP3OON4hBS6YWEmRyV3GvZtGibadl
8HbOU0TRS1skcS0g8OfiY+t/qitIpBuLMHgJHubBMWQ5SP9RlSy2ilxt7J+UGbw3
Xi6u7RRG1dOEZkN0RxpbZQeGf7MD6RTI+4JMRvstI0t2wpfAk0eF0FM++iqhR9mu
aH8apEFDUvCQv4NnDrXJqDUJi8Z56SHEJQ5NMt3ugv7vtY3kI7sciuPdW3hDPsJh
/T3cOWUeYeIVknVHwMuUFf6gdxZ8crrWkANpjwOm0gVh1BPRQzXXPKlSVUGgEVFD
XgJyvkX663aTcshEON1+bXp6
-----END CERTIFICATE-----
</ca>

ns-cert-type server
# requires the server-side certificate to be explicitly marked as a server cert (nsCertType), for MiTM hardening.

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel block cipher methodology
# we are actively testing CBC alternatives & will deploy once well-tested...
# cipher libraries support our choice - AES-GCM is looking good currently

replay-window 128 30
# settings which determine when to throw out UDP datagrams that are out of order...
# either temporally or via sequence number

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements 'perfect forward secrecy' via TLS 1.x & its ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice
# http://ecc.cryptostorm.ch

tls-client
key-method 2
# key-method 2 selects the TLS-based data channel key negotiation used during session bootstrap (required for username/password auth)

log devnull.txt
verb 1
mute 1
# sets logging verbosity client-side, by default, to a minimal level
# no logs kept locally of connections - this can be changed...
# if you'd like to see more details of connection initiation & negotiation
by cryptostorm_ops
Mon Dec 30, 2013 2:40 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: server-side configuration publication
Replies: 19
Views: 27260

cryptostorm_server1_1d.conf

We have released and are in process of deploying across all exitnodes a new build of our server-side configuration file: cryptostorm_server1_1d.conf. There are a number of small bugfixes in this version, as well as stubs for some leakblock-specific enhancements that are making their way into production client connection tools (including the widget).

Please feel free to ask about any of these changes; rather than burying folks with minutiae, we're thinking it might be best to follow up on specific requests - at the level we're manipulating these configuration files, a full overview of all the parameters would quickly run to many pages of text, and be somewhat unmanageable (even if pretty interesting to those with a deep focus on the subject).

Here is the text-based file:
cryptostorm_server1_1d.conf
The parameters in the file are as follows (with only specific server-side IPs redacted as they vary between exitnodes/clusters):

Code: Select all

# cryptostorm_server1_1d server config framework
# discussion & details in http://serverconf.cryptostorm.ch

daemon
local {exitnode local routable IP}
port 443
proto udp
dev tun

# tun-ipv6
# we aren't yet supporting IPv6 as it's not supported fully by OpenVPN & OpenSSL
# several active dev projects are at work on this & we are following them regularly

persist-key
push "persist-key"
# not essential, but smooths SIGHUPs of individual openvpn processes as in new conf loads

persist-tun
push "persist-tun"
# retain tun instantiation client-side during reconnects, to smooth process

fast-io
# experimental directive in OpenVPN 2.3.2 - testing for performance gains on openvpn
# optimize TUN/TAP/UDP I/O writes by avoiding a call to poll/epoll/select prior to the write operation

ca /etc/openvpn/easy-rsa/keys/ca.crt
cert /etc/openvpn/easy-rsa/keys/server.crt
key /etc/openvpn/easy-rsa/keys/server.key
dh /etc/openvpn/easy-rsa/keys/dh2048.pem
# standard PKI/CA asymmetric key materials storage
# we manually generate & manage all key materials firsthand via cryptographic best practices

script-security 2
auth-user-pass-verify /etc/openvpn/auth.sh via-file
client-connect /etc/openvpn/session_up.sh
client-disconnect /etc/openvpn/session_down.sh
# custom-generated script hooks into our token auth system

tmp-dir /tmp
# manually set temp directory, to ensure active swap over-writes temp data consistently

topology subnet
server 10.66.66.0 255.255.255.0
# internal, non-routed subnet topology for ephemeral network member assignment
# essentially, an internal DHCP framework for client-to-exitnode tunnelized packet transit

float
# allows client to change IP, as with DHCP re-lease, & retain secure session if HMAC continues to validate

# push-peer-info
# we only use this on test instances, not production nodes - helps debugging

push "redirect-gateway def1"
push "redirect-gateway bypass-dhcp"
# redirect the client's default gateway into the tunnel (def1), while exempting local DHCP...
# traffic from redirection (bypass-dhcp) so clients can still re-lease LAN addresses

allow-pull-fqdn
# allows client to pull DNS names from server
# we don't use but may in future leakblock integration

# these below are our selected DNS services for within-network canonical resolution
push "dhcp-option DNS 198.100.146.51"
# OpenNICproject.org
push "dhcp-option DNS 91.191.136.152"
# Telecomix is.gd/jj4IER
push "dhcp-option DNS 213.73.91.35"
# CCC http://is.gd/eC4apk

duplicate-cn
client-cert-not-required
# we do not use certs to uniquely identify connected members...
# doing so is a serious security failure and needlessly endangers anonymity on-net

keepalive 20 60
# retains active sessions with connected members during temporary traffic lulls

max-clients 300
# caps the number of simultaneous connections to a specific exitnode machine

# tun-mtu 1500
fragment 1400
# tunes the UDP session by fragmenting below the MTU upper bound
# much undocumented/unexpected behaviour results from these parameters, beware!
# we routinely test & refine these parameters, in-house, for best performance
# they cannot be 'pushed' to clients, as they are required a priori for control channel setup

reneg-sec 1200
# cycle symmetric keys via tls renegotiation every 20 minutes
# an essential fallback to TLS-based 'perfect forward secrecy' via Diffie Hellman keygen

auth SHA512
# data channel HMAC generation
# heavy processor load from this parameter, but the benefit is big gains in packet-level...
# integrity checks, & protection against packet injections / MiTM attack vectors

cipher AES-256-CBC
# data channel block cipher methodology
# we are actively testing upgrades to CBC & will deploy once cipher libraries support our choice...
# AES-GCM is looking good currently

tls-server
key-method 2
# key-method 2 selects the TLS-based data channel key negotiation method (default in OpenVPN 2.x; required for username/password auth)

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements 'perfect forward secrecy' via TLS 1.x, natively, thru ephemeral Diffie-Hellman...
# see our forum for extensive discussion of ECDHE v. DHE & tradeoffs wrt ECC curve choice
# http://ecc.cryptostorm.ch

tls-exit
# exit on TLS negotiation failure

comp-lzo no
# push "comp-lzo no"
# we are working towards removal of this from the network...
# but it sneaks in via client-side prefs

user nobody
group nobody
# nogroup on some distros

tran-window 256
# amount of overlap between old and new TLS control channel session keys allowed
# default is 3600, which is far too long to work with PFS & the 1200-second key renegotiations above

verb 5
mute 2
status /var/log/openvpn-status.log
log-append /var/log/openvpn.log
# rotating error & connection log parameters - cycle w/ each connection
# used to track packet-level errors within secure sessions
# does not retain any session-level detail - also wipes via regular session cycle
by cryptostorm_ops
Mon Dec 30, 2013 2:09 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

Re: cryptostorm: config & parameter settings (client & serve

We've been testing configs and config deployments all weekend long, and have now completed the server-side updates. I'll be posting the new-rev client configs this morning, as we deploy the necessary tweaks to the server-side parameters to support the new adjustments while remaining, as far as possible, backwards-compatible with prior client configs.

There is some... undocumented behavior in the way OpenVPN handles some of these parameters on certain OSes, particularly the "fragment" directives and the "comp-lzo" compression settings; the error output tends to blur these two issues together, unfortunately, and it takes considerable forensic analysis to determine which is really causing trouble in a given setting. Our team has been engaged in extensive, manual testing of all possible settings in config space this weekend, to ensure we've determined these behaviors firsthand - not merely relying on the documentation.

We hope to push some of our findings back to the OpenVPN project, as they could be useful to others who are deploying the tool in more cryptographically-intensive, security-conscious settings. Basically, some of these parameters start to interact oddly once settings are (broadly speaking) tightened up and packet-level errors are taken seriously.

We'll be posting our results, as noted, throughout this morning - full details in this thread, as always.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Mon Dec 30, 2013 4:09 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: CAPTCHAs for guest posting & registration | CLOSED
Replies: 7
Views: 12865

Re: CAPTCHAs for guest posting in forum: ideas to improve

We're currently testing out the NuCaptcha tool to manage the spambots, both for guest posting and new forum registrations. It seems to be effective so far, but we're interested in any feedback from visitors who have interacted with the new tool.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Sat Dec 21, 2013 6:15 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm exitnode clusters: listing+requests+roadmap
Replies: 89
Views: 125467

Re: cryptostorm exitnode clusters: listing+requests+roadmap

We have just posted release candidate version 1.1 of the client configuration files, which allow for exitnode & cluster selection as well as enhanced failover/redundancy against DNS-based filtering attacks, & round-robin stochastic connection selection capabilities.

The files are available here and feedback is being sought as to any issues that may surface as they go into full production.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Sat Dec 21, 2013 6:09 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

Version 1.1(a) client config files

We've just completed a major overhaul of our exitnode/cluster nomenclature, hostname mappings, and loadbalancing framework. One of the results is a fork of the client configuration file into four separate files, depending on which exitnode connection logic members prefer. Briefly...

  1. locked
  2. dynamic
  3. Frankfurt-only
  4. Montreal-only


There's also some minor tuning of secondary parameters in these config files, as compared to the latest numbered revs released previously.

NOTE: while all the parameters & A record mappings have been fully tested, we are posting these config files to this thread in hopes of a broader testing of them by members themselves before they are considered "official." There's nothing in them that should be a problem... but as with all tech, the proof is in the deployment. Please, if you give any of these config files a run and experience problems of any kind, let us know right away via post here (preferred) or via other channels.

There are plans for a much more thorough explanation of these new cluster/node architectures and the ways in which they scale over time - for example, when our Icelandic node is ready for production. For now, our hope is that these can help folks who have older versions of configs, or who want to take advantage of node/cluster selection by choosing their preferred config file version from this batch.



EDIT: removed old, unusable configuration files
by cryptostorm_ops
Sat Dec 21, 2013 1:41 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm connections from Linux | DEPRECATED
Replies: 20
Views: 26029

Re: HOWTO: cryptostorm network connections from Linux

Here's the quote from the newly-revised page on OpenVPN installation for Linux:
Notes on old apt/yum repositories

The current incarnation of OpenVPN apt repositories is the third one. The first repositories were hosted on build.openvpn.net and the second ones on repos.openvpn.net, a now discontinued server. The apt lines for the latter still work, but new OpenVPN releases (2.3.3 and later) will only be added to current swupdate.openvpn.net repos. Unfortunately due to the complete restructuring of the apt repository structure it is not possible to cleanly migrate from the repos.openvpn.net-based configuration to the swupdate.openvpn.net configuration.
by cryptostorm_ops
Fri Dec 20, 2013 8:10 am
Forum: general chat, suggestions, industry news
Topic: optimising torrenting performance on cryptostorm: discussion
Replies: 68
Views: 191867

Re: Torrents?

caustic386 wrote:forward to trying it out once the 70.x.x.x issue is resolved.
Resolved.
by cryptostorm_ops
Mon Dec 09, 2013 1:36 pm
Forum: member support & tech assistance
Topic: error connecting: Windows OpenVPN package | RESOLVED
Replies: 2
Views: 4386

Re: error connecting: Windows OpenVPN package

We're marking this thread as "resolved," as it appears to have been settled during recent upgrades to the Montreal exitnode cluster.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Mon Dec 09, 2013 1:33 pm
Forum: member support & tech assistance
Topic: constantly reconnecting | RESOLVED
Replies: 23
Views: 24434

Re: constantly reconnecting

We're marking this thread as "resolved," as it appears to have been settled during recent upgrades to the Montreal exitnode cluster and concomitant work done on the auth system more broadly.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Mon Dec 09, 2013 1:30 pm
Forum: member support & tech assistance
Topic: issue with IPs in 70.38.*.* subnet? | RESOLVED
Replies: 12
Views: 14140

Re: issue with IPs in 70.38.*.* subnet?

We're marking this thread as "resolved," as it appears to have been settled during recent upgrades to the Montreal exitnode cluster.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Mon Dec 09, 2013 1:12 pm
Forum: member support & tech assistance
Topic: Specific website access issues? Report 'em here!
Replies: 7
Views: 9741

Re: issues resolving/routing to 1stpharmacy.us

It appears that recent network upgrades have resolved any reported issues regarding access to sites including:

http://1stpharmacy.us
http://warez-bb.org

Additionally, TLS/SSL load issues relating to session-intensive sites such as the following seem to have been substantially reduced, and/or resolved:

https://amazon.com
https://networksolutions.com

If network members can replicate any problems loading these pages - in particular, load fails while on-net - we ask that reports be posted here. MTRs and/or other traceroute-based forensics are particularly useful in determining where issues might reside.
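For reference, a report-mode mtr run while connected is usually the most useful form to post (a sketch; substitute whichever site is misbehaving):

Code: Select all

# 100 probe cycles in report mode, wide hostname output
mtr -rw -c 100 warez-bb.org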

To repeat: cryptostorm does not and has never intentionally blocked any website, web resource, or network-accessible service for members connected to the darknet. This will never happen, not intentionally. It is technically possible that an upstream bandwidth provider would do some sort of resource blocking/censorship... but we are not aware of any documented instances of this with any of our current infrastructure providers. If we were to become aware of such we would - simply put - raise holy hell, and if not satisfied that the block was removed, cut any business relationship with the censors involved.

Occasionally, internet-accessible resources that are somewhat far off the main routing/transit pathways (either physically or logically) can be subject to intermittently spotty access for some people; this has to do with the vagaries of route propagation, route announcement, and network topology in general. It is not something we, at cryptostorm, can control. In the event such a problem materialises for people while connected to cryptostorm, traceroute-based analytics are often helpful in determining where the problem/bottleneck is to be found. Most often, it's out towards the fringes of internet connectivity.

Please remember that, for many network members, when connected to cryptostorm via certain exitnode clusters, physical (and logical) network paths for some network resources are vastly different than "direct"/bareback internet connections. As an example: someone seeking to load a website hosted on a server just down the road, but connected to cryptostorm through an exitnode many thousands of kms distant, will see their route to that "local" server jump from a couple of short hops to dozens of far longer, more complex network steps between them and the website in question. This is not a flaw in secure networking; rather, it's the nature of anonymised network access via shared, pooled secure packet transit. That local webserver might load fine via the route map provided by the short, local hops... but fall apart when contacted via a distant exitnode cluster due to poor deployment of broad-scale routing/AS/BGP data on the part of their chosen upstream providers.

We're happy to help troubleshoot any such issues as they arise. And they do arise, because internetworked connectivity is complex and in some senses emergent. As such, any change in the picture can occasionally bring about unexpected consequences. For the vast majority of websites/resources, and the vast majority of cryptostorm members, access via the darknet to unusual and/or "near-the-edge-of-network" resources will be substantially better on-net than when running plaintext: all of our exitnodes are hosted at robust, multiply-connected, well-resourced, censorship-rejecting datacenters... which cannot be said of many local ISPs otherwise relied on by members for access to the world's online resources.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Mon Dec 09, 2013 8:51 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: HOWTO: choosing exitnode clusters
Replies: 7
Views: 12578

Re: HOWTO: choosing exitnode clusters

We've also got direct connections to one of our secondary Montréal nodes enabled, as of this morning, and we'd love to know how test connects run to it without using the loadbalancer. The direct connection is run via:

Code: Select all

remote exitnode-bruno.cryptostorm.net 443
In general, it's better to use exitnode-montreal.cryptostorm.net, as it'll always be mapped to the best-performing local resources there... but it's sometimes helpful to hit a specific machine with some high-throughput connections to ensure it's carrying things properly.

Thank you.

~ cryptostorm_ops
by cryptostorm_ops
Sat Dec 07, 2013 8:14 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

client config, 0.9d

Here's an interim update to the client configuration file; it'll be formally announced as 1.0 soon, and posted to the main website, but for folks who would like to have the tweaks sooner here you go.

Nothing major has been changed, just a few minor bugfixes that we'll document formally during the 1.0 release. A few performance enhancements via small adjustments to MTU window settings, and more work to turn off the "comp-lzo" compression settings from both directions.

Also, this version has ca.crt inlined inside the file, so there's no need for an external ca.crt - mandatory for Android folks, and also handy for other OS flavours. We believe this has been tested across platforms, and are planning to include the inlining with the 1.0 build.

Enjoy!

Code: Select all

# this is the cryptostorm.is client settings file, versioning cryptostorm_client_production_0-9d.conf.
# current version of this file can always be found in the most recent post in conf.crytostorm.org .
# please post your comments, questions, suggestions, and observations about config options in that thread.

# also... FuckTheNSA.

client
dev tun
proto udp
remote exitnode-balancer.cryptostorm.net 443
resolv-retry infinite
nobind
float
# standard setup stuff to identify the network location of the exit node handshake machines.

comp-lzo no
# disables link-layer compression; compression is something we're planning to remove entirely in the full-production configuration.

down-pre
# runs client-side "down" script prior to shutdown, to help minimise risk of session termination packet leakage.

explicit-exit-notify 3
# attempts to notify exit node when client session is terminated; strengthens MiTM protections for orphan sessions.

hand-window 17
# specified duration (in seconds) to wait for the session handshake to complete; a renegotiation taking longer than this has a problem, and should be aborted.

fragment 1400
# tunes the UDP session by ensuring packets are split under the upper-bound MTU threshold

# register-dns
# Windows-specific directive to ensure Windows DNS caching doesn't delay registration of new domain resolvers; we're still experimenting with this and haven't placed into production yet.

# script-security 2
# up "client.up"
# down "client.down"
# a *nix-only directive to ensure pushed DNS values are applied & torn down appropriately; still an experimental setting not yet ready for production.

log devnull.txt
verb 0
mute 1
# sets logging verbosity client-side, by default, to zero - no logs kept locally of connections; this can be changed if you'd like to see more details of connection initiation & negotiation.

auth-user-pass
# auth-retry interact
# passes up, via bootstrapped TLS, SHA512 hashed token value to authenticate to darknet; 'interact' is an experimental parameter not yet in our production build.

ca ca.crt
# cert clientgeneric.crt
# key clientgeneric.key
# specification and location of RSA cryptographic keys; for details, see pki.cryptostorm.ch .

ns-cert-type server
# requires the server-side certificate to be explicitly marked as a server cert (nsCertType), for MiTM hardening.

auth SHA512
# data channel HMAC generation; substantial improvement over default digest-generation algorithm.

cipher AES-256-CBC
# data channel block cipher methodology; not currently known to be formally vulnerable to any theoretical or practical attacks.

replay-window 128 30
# settings which determine when to throw out UDP datagrams that are out of order, either temporally or via sequence number; this is a test configuration parameter not yet put into production.

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# full PFS via selection of ephemeral Diffie Hellman key regeneration & exchange for use in asymmetric control channel renegotiation.
# for details on this discrete logarithm-based alternative to elliptical-curve DHE key generation/synchronisation, see vincent.bernat.im/en/blog/2011-ssl-perfect-forward-secrecy.html .
# We're still experimenting with ECC-based PFS, but until we develop a deeper confidence in the mechanism for choosing & implementing curves within standard ECC frameworks, we're not deploying
# see this resource for full details: cryptostorm.ch/viewtopic.php?f=37&p=5156#p5156 .

tls-client
key-method 2
# key-method 2 selects the TLS-based data channel key negotiation used during session bootstrap (required for username/password auth)

<ca>
-----BEGIN CERTIFICATE-----
MIIFHjCCBAagAwIBAgIJAPXIBgkKVkuyMA0GCSqGSIb3DQEBCwUAMIG6MQswCQYD
VQQGEwJDQTELMAkGA1UECBMCUUMxETAPBgNVBAcTCE1vbnRyZWFsMTYwNAYDVQQK
FC1LYXRhbmEgSG9sZGluZ3MgTGltaXRlIC8gIGNyeXB0b3N0b3JtX2RhcmtuZXQx
ETAPBgNVBAsTCFRlY2ggT3BzMRcwFQYDVQQDFA5jcnlwdG9zdG9ybV9pczEnMCUG
CSqGSIb3DQEJARYYY2VydGFkbWluQGNyeXB0b3N0b3JtLmlzMB4XDTEzMTAxMTEz
NDA0NloXDTE3MDYwOTEzNDA0NlowgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJR
QzERMA8GA1UEBxMITW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBM
aW1pdGUgLyAgY3J5cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMx
FzAVBgNVBAMUDmNyeXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRt
aW5AY3J5cHRvc3Rvcm0uaXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDS4TuqOoT6NrE7oNXj5il97Ml306F9rmEf22+E/5uCsiTNL7inanLsDixihq2l
e0anBK8UvDPExYIWLpXu4ERFFsWS//AoZer8BlVYKnEEgzPh5UV8Jy2TyOlZ26Yz
g1A4MRcDFdPUXLq5Z8hw09k1uqOPU6trv5J+5TwhzMHrMunip8hvx8uXjzQ4DLPK
RKfRzwl+2ydyXgAGdfY1zLlvYvzvVUc4GcLXmAOLT4ZjWKxl4MoqNwf9VBfdLWn5
mWuYp/tT3RxNjKHnuqZlYhCvfWp1hbzSW/OdlO13B1C/PSfFnfFzlANWh31bfvos
pbCIFYG6RXIiP+Arc2sLVgTHAgMBAAGjggEjMIIBHzAdBgNVHQ4EFgQUWmCUeZzm
Qa+zcOA+KWfNF1e2Z9cwge8GA1UdIwSB5zCB5IAUWmCUeZzmQa+zcOA+KWfNF1e2
Z9ehgcCkgb0wgboxCzAJBgNVBAYTAkNBMQswCQYDVQQIEwJRQzERMA8GA1UEBxMI
TW9udHJlYWwxNjA0BgNVBAoULUthdGFuYSBIb2xkaW5ncyBMaW1pdGUgLyAgY3J5
cHRvc3Rvcm1fZGFya25ldDERMA8GA1UECxMIVGVjaCBPcHMxFzAVBgNVBAMUDmNy
eXB0b3N0b3JtX2lzMScwJQYJKoZIhvcNAQkBFhhjZXJ0YWRtaW5AY3J5cHRvc3Rv
cm0uaXOCCQD1yAYJClZLsjAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IB
AQDKDYRtxELcCUZwnGvQa8hp5lO/U87yYzOSP3OON4hBS6YWEmRyV3GvZtGibadl
8HbOU0TRS1skcS0g8OfiY+t/qitIpBuLMHgJHubBMWQ5SP9RlSy2ilxt7J+UGbw3
Xi6u7RRG1dOEZkN0RxpbZQeGf7MD6RTI+4JMRvstI0t2wpfAk0eF0FM++iqhR9mu
aH8apEFDUvCQv4NnDrXJqDUJi8Z56SHEJQ5NMt3ugv7vtY3kI7sciuPdW3hDPsJh
/T3cOWUeYeIVknVHwMuUFf6gdxZ8crrWkANpjwOm0gVh1BPRQzXXPKlSVUGgEVFD
XgJyvkX663aTcshEON1+bXp6
-----END CERTIFICATE-----
</ca>
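
For members who want to sanity-check the embedded <ca> block before connecting, here's a quick, optional sketch - not part of the client, and assuming a recent version of the third-party Python "cryptography" package is installed - that prints the certificate's subject, expiry, and SHA-256 fingerprint:

Code: Select all

# Optional sketch (not part of the client): inspect the CA certificate
# embedded above. Assumes a recent version of the third-party "cryptography"
# package (pip install cryptography); PEM_PATH is a placeholder for wherever
# you saved the <ca> block or ca.crt.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

PEM_PATH = "ca.crt"

with open(PEM_PATH, "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("subject:            ", cert.subject.rfc4514_string())
print("not valid after:    ", cert.not_valid_after)
print("sha256 fingerprint: ", cert.fingerprint(hashes.SHA256()).hex())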
by cryptostorm_ops
Thu Dec 05, 2013 3:38 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm exitnode clusters: listing+requests+roadmap
Replies: 89
Views: 125467

Montréal exitnode cluster upgrade

This morning, we completed a major capacity addition to our Montréal exitnode cluster. The new machine has been cycled into our loadbalancer as the primary connect point, with existing capacity deprecated to fallback status. Cryptostorm members don't have to make any changes to have the transition take place (we cycled VMs to initiate the transition over to the primary machine, which takes place transparently during a session).

As member network traffic has increased, and increased again, our primary test hardware in Montréal began to feel the strain. After much troubleshooting and fine-tuning of network performance, we finally concluded - based on all available metrics - that it's CPU that's bottlenecking total network performance. This is vastly different from what's traditionally the case with "VPN services" - for example, we've seen 300+ megabit/second throughput on a box with dual (older) processors and CPU utilization never going above 20%. Ever. But with cryptostorm's crypto suite selecting vastly more powerful (and CPU-intensive) algorithms and ephemeral session cycling parameters, the situation has flipped completely: now we're seeing boxes choke on CPU long before any sort of networking bottleneck appears.

(for those technically curious, we've seen unusually high packet losses on the virtual NICs within the Xen framework that act as SDN-based gateways between domU "guest" VMs and the dom0 hypervisor - even on paravirtualised OS frameworks)

We've tuned, and tuned, and tuned... convinced we were missing something obvious in resolving this apparent CPU bottleneck. Eventually, our resident crypto geek (pj) convinced the rest of the team that, yes, these cipher suites can make smoke come out the ears of a big, modern, multi-CPU server. Easily. Astonishingly, he was proved right - we tend to think of him as that guy in Jurassic Park ranting on about how "life will find a way" and "chaos manifests" and whatnot. Except with him, it's crypto. :P
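
For anyone who wants to reproduce the general effect on their own hardware, here's a rough benchmark sketch - standard-library Python only, not our internal tooling - that times HMAC-SHA512 over a stream of 1400-byte, datagram-sized buffers, one of the per-packet costs this cipher suite pays:

Code: Select all

# Rough sketch (not our internal tooling): time HMAC-SHA512 over
# datagram-sized buffers, one per-packet cost of the suite described above.
import hashlib
import hmac
import os
import time

KEY = os.urandom(64)        # throwaway key, timing only
PACKET = os.urandom(1400)   # roughly one tunnel-sized UDP payload
N = 200000                  # simulated packet count

start = time.perf_counter()
for _ in range(N):
    hmac.new(KEY, PACKET, hashlib.sha512).digest()
elapsed = time.perf_counter() - start

mbits = (N * len(PACKET) * 8) / 1e6
print("HMAC-SHA512: %.0f Mbit/s on one core (%.2fs for %d packets)"
      % (mbits / elapsed, elapsed, N))

Absolute numbers vary wildly by CPU; the point is that every data-channel packet pays this digest cost on top of AES-256-CBC encryption and periodic TLS renegotiation, which is where the cores go.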

Anyway.

With the help of the great folks at iWeb, we got a line on a badassed, CPU-heavy machine with 24 gigs of RAM (!!). We stripped down all non-security-essential elements on our standard exitnode "golden snapshot," and began the process of provisioning it for production use. This was given primary-priority in the work queue, meaning our tech folks have worked cyclical shifts 24/7 to bring things up as fast as safely possible.

The machine is now in production after accelerated internal testing. It seems quite speedy thus far.

We ask folks to hammer it with heavy loads, and let us know how it performs - good or bad. We're closely watching packet characteristics at the physical NIC to ensure we've got no bottlenecks. There's still some fine-tuning of the kernel's TCP and general network parameters going on, and for the next few days we expect to see ongoing per-session improvements.

The existing nodes in the cluster are still online, and we're tuning our loadbalancer to dynamically stripe sessions across machines in the cluster based on realtime performance characteristics - this is an ongoing process.

Cryptographic suites, session parameters, auth procedures, and all other security-centric components do not vary between exitnodes. There is no "more secure" exitnode than another, and thus no need to change between them apart from performance considerations (pingtimes/latency, throughput, packet integrity, route topology, etc.).

Here's how we do the process, and information for folks who might want to compare specific nodes in the cluster manually to see how performance is evolving (a quick lookup sketch follows the list below). Note that the next release of the widget includes menu-based exitnode selection; the information below is purely for those curious about the inner workings of the system.

  • exitnode_balancer.cryptostorm.ch is the old (deprecated) resolver - per a bugfix submitted by a beta tester, it's now been replaced with exitnode-balancer.cryptostorm.ch (although the deprecated version continues to be maintained for backwards compatibility).

    exitnode-balancer.cryptostorm.net is the new, preferred resolver for network-wide best performance - we're using this TLD as frontline network admin preference (although we're striping in numerous fallback TLDs in parallel for systemic redundancy in the event of DNS-based attack vectors)

    exitnode-shadow.cryptostorm.net is the new node in the cluster

    exitnode-bruno.cryptostorm.net is the previous "primary" node in the cluster, still online and available for direct connections and in use as fallback rolling forward

    exitnode-germany.cryptostorm.net is our new German cluster; it's very nearly ready for production connects and may well be able to handle connections at the time of this forum post going live (we'll announce it formally, as well)

    exitnode-iceland.cryptostorm.net is not yet mapped, as the cluster is still in provisioning stage

    (all resolvers are set to a fairly low TTL setting of 1337 seconds... naturally)
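
Here's the quick lookup sketch mentioned above - standard-library Python, purely informational (the widget and loadbalancer do this for you) - showing which address each listed resolver currently hands out:

Code: Select all

# Quick sketch (standard library only): resolve the hostnames listed above
# and print what each currently maps to. Purely informational.
import socket

RESOLVERS = [
    "exitnode-balancer.cryptostorm.net",
    "exitnode-shadow.cryptostorm.net",
    "exitnode-bruno.cryptostorm.net",
    "exitnode-germany.cryptostorm.net",
]

for name in RESOLVERS:
    try:
        _, _, addrs = socket.gethostbyname_ex(name)
        print("%-40s -> %s" % (name, ", ".join(addrs)))
    except socket.gaierror as exc:
        print("%-40s -> lookup failed (%s)" % (name, exc))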


For those who prefer to manually control their node selection by swapping out the "remote" entry in the client configuration file - this is fine, and there's no security risks or operational problems in doing so. Once you're finished experimenting, you can default back to exitnode-balancer and return to automated node/cluster selection, if you prefer.

As a reminder: our servers are actually servers - physical machines we run from the hardware forward. By controlling our machines at the "iron" level, we retain substantially improved security and audit controls, as well as the direct capability to ensure full cipher suite compatibility. We don't spin out hypeware lists of "a hundred servers" that only represent low-capacity, insecure VPS instances - that may be easier, and it may help fool the less experienced into thinking those networks are "bigger," but they aren't. That's like saying a house with 100 rooms is "bigger" than one with only three... when the three rooms are a warehouse, and the hundred are tiny little cubbies in a dollhouse.

Let us know how the cluster is performing. We're happy with the location, and the datacentre - even as we know we've got plenty of room for performance improvements as we grow. Our goal is to seamlessly handle 100 megabit/second individual network sessions without any loss of throughput as compared to bareback/plaintext transit, if needed. Indeed, because encrypted packets generally avoid the packet-shaping/DPI tools often deployed by local ISPs, many members see faster real throughput of network data on-net with cryptostorm than bareback. This is how it should be - those who say that "VPN networks are always slower" are referring to poorly-run, poorly-provisioned, poorly-administered "networks" generally using default settings, parameters, libraries, and protocols.

We tune every piece of our network to be (of course) secure... but also fast as fuck. That's our internal motto: fast as fuck. We want it to scream, always. If it doesn't, we drop everything to make it do so. Indeed, if/when you see us tech ops folks vanish from view, it's likely because we're seeing suboptimal network performance and we're digging in to get it back to fast. We monitor performance continuously, internally, and a drop in performance is an "all-hands alarm" state for our team.

Fast as fuck - let us know if we're doing the job.

~ cryptostorm_ops
by cryptostorm_ops
Mon Dec 02, 2013 8:06 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm: server-side configuration publication
Replies: 19
Views: 27260

cryptostorm_server0.9c.conf

We've made a number of small edits to the current server.conf settings, which have now been bundled together into a newly-numbered release. Attached is this revision, which is the current production instance on both our Montreal cluster and the German cluster awaiting production rollout:

Code: Select all

daemon
local 70.38.46.226
port 443
proto udp
dev tun
tun-ipv6
# persist-key
# persist-tun

ca /etc/openvpn/easy-rsa/keys/ca.crt
cert /etc/openvpn/easy-rsa/keys/server.crt
key /etc/openvpn/easy-rsa/keys/server.key
dh /etc/openvpn/easy-rsa/keys/dh2048.pem

script-security 2
auth-user-pass-verify /etc/openvpn/auth.sh via-file
client-connect /etc/openvpn/session_up.sh
client-disconnect /etc/openvpn/session_down.sh
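# external hooks: via-file credential check at connect, plus per-session setup & teardown scripts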

tmp-dir /tmp

topology subnet
server 10.77.0.0 255.255.0.0
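# subnet topology; member sessions draw tunnel addresses from the 10.77.0.0/16 pool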

push "redirect-gateway def1"
push "bypass-dhcp"

push "dhcp-option DNS 198.100.146.51"
# OpenNICproject.org
push "dhcp-option DNS 91.191.136.152"
# Telecomix is.gd/jj4IER
push "dhcp-option DNS 213.73.91.35"
# CCC http://is.gd/eC4apk

duplicate-cn
client-cert-not-required
#username-as-common-name

keepalive 20 60
max-clients 300

tun-mtu 1500
fragment 1350
mssfix
# tunes the UDP session by fragmenting below the MTU upper bound

reneg-sec 1200
# cycle symmetric keys via tls renegotiation every 20 minutes

auth SHA512
# data channel HMAC generation

cipher AES-256-CBC
# data channel cipher (AES-256 in CBC block mode)

# tls-auth /etc/openvpn/ta.key 0
# static key crap, which we're not using

tls-server
key-method 2

tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA
# implements PFS via TLS 1.2, natively, thru ephemeral Diffie-Hellman key creation

tls-exit
# exit on TLS negotiation failure

comp-lzo no
push "comp-lzo no"

user nobody
group nobody
# nogroup on some distros

tran-window 256
# amount of overlap between old and new TLS control channel session keys allowed
# default is 3600, which is way too long to work with PFS and 1200 second key renegotiations

verb 2
mute 3
status /var/log/openvpn-status.log
log /var/log/openvpn.log
# log-append  exitnode_bruno.log
cryptostorm_server0.9c.conf
(1.7 KiB) Downloaded 769 times
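
A note on the auth-user-pass-verify /etc/openvpn/auth.sh via-file line above: in via-file mode, OpenVPN writes the username on line one and the password/token on line two of a temp file, passes that file's path to the script, and accepts the session only if the script exits 0. We're not publishing our production auth.sh, but a hypothetical, minimal verifier following that contract could look like this (illustration only):

Code: Select all

#!/usr/bin/env python3
# Hypothetical illustration of OpenVPN's "via-file" verification contract -
# NOT cryptostorm's production auth.sh. OpenVPN writes the username on line 1
# and the password/token on line 2 of a temp file, passes its path as
# argv[1], and accepts the session iff this script exits 0.
import hashlib
import sys

# Placeholder set of acceptable SHA512 token digests (illustrative only).
VALID_TOKEN_DIGESTS = {
    "replace-with-real-sha512-hexdigests",
}

def main():
    with open(sys.argv[1]) as f:
        username = f.readline().strip()   # ignored here: token-only auth
        token = f.readline().strip()
    digest = hashlib.sha512(token.encode()).hexdigest()
    return 0 if digest in VALID_TOKEN_DIGESTS else 1

if __name__ == "__main__":
    sys.exit(main())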
by cryptostorm_ops
Mon Dec 02, 2013 5:55 am
Forum: member support & tech assistance
Topic: connected but not connected to vpn | RESOLVED (?)
Replies: 17
Views: 18072

Re: connected but not connected to vpn

spotshot wrote:3rd time, just lost connection, but it shows connected, vpnetmon thinks it's still connected
so utorrent isn't killed and keeps on running.

after about a minute it popped up disconnected and is just trying to reconnect
What's happening from the cryptostorm side of things is that sessions are getting started, but then utorrent appears to be - via the "port forwarding" on your router - throwing packets upstream that carry your private 192.168.x.x address as their source IP - and those packets get tossed for having a bad source IP (we see it happening in the server packet stats).

That happens enough, and cryptostorm decides "ok, this entire session is really suspicious - it looks like someone's trying to hijack it by sticking a valid session ID/HMAC identifier on invalid packets - to be on the safe side, we're terminating the session, period." And then the reconnect starts.

It also appears - congrats! - that these bad packets are managing to "step on the toes" of other network sessions (somehow, we're not quite clear on that), resulting in "neighbouring" network sessions being dropped, too. Which is then cascading down thru the entire exitnode cluster in Montreal in some sort of emergent behaviour that's fascinating... but pretty annoying, too.

Anyway, this is easy to fix - we'll get your router settled properly, and the problem is gone. Trying to port forward into cryptostorm is going to end badly - quite how badly was certainly a surprise, but it was never going to end well. I can work through that with you here in a thread - which is best - or via email (ops@cryptostorm.is) if you prefer.
by cryptostorm_ops
Mon Dec 02, 2013 5:31 am
Forum: member support & tech assistance
Topic: connected but not connected to vpn | RESOLVED (?)
Replies: 17
Views: 18072

Re: connected but not connected to vpn

This is likely an error thrown from your session:

Code: Select all

(04:22:16 PM) df: Sun Dec  1 19:14:27 2013 us=175077 192.92.208.179:4093 MULTI: bad source address from client [192.168.1.104], packet dropped
What sort of "port forwarding" are you referring to, in your router setup?

edited to add: it appears that your router is reporting to the rest of the world that your public IP address is 192.168.1.104 - which is impossible, as that's a private/nonroutable IP range and can't be used to identify network addresses in the public space.

So if I'm reading this correctly, your router is misconfigured or is just having a terrible day and breaking one of the fundamental rules of public-switched internetwork setup: don't try to route private IPs over public network links!
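
If you want to check an address yourself, Python's standard ipaddress module knows the private (RFC 1918) ranges - a quick sketch:

Code: Select all

# Quick sketch: show that 192.168.1.104 is private (RFC 1918) and can never
# appear as a legitimate public source address, unlike the session's real
# public IP from the log line above.
import ipaddress

for addr in ("192.168.1.104", "192.92.208.179"):
    ip = ipaddress.ip_address(addr)
    print("%-15s private=%s global=%s" % (addr, ip.is_private, ip.is_global))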
by cryptostorm_ops
Sun Dec 01, 2013 1:09 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: source code - cryptostorm widget, version 0.91(w)
Replies: 7
Views: 15378

Re: source code - cryptostorm widget, version 0.91(w)

Had a good question come across the transom: why the tar archive, above, and not just a direct .exe of the installer? The forum here doesn't like .exe attachments, and while we could bully it into being ok with that, it seems a bit needless. If you don't want to bother de-archiving after download, the direct .exe is posted on the main website; link here: https://cryptostorm.is//cryptostorm_widget_0-91w.exe

Also, the MD5 fingerprint of the installer is: 073d2cfddaa0fe16ec30ac5eb12563ad

...if you download it, hash it, and get something different then - congratulations! - you've been MiTM'd. Use a less-compromised channel to pull the binaries, re-check the hash, and keep the backdoors for kinky bedroom stuff.
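
If you're not sure how to generate the hash, here's a short sketch (standard-library Python; pass the path you saved the installer to, or let it default to the filename above):

Code: Select all

# Short sketch: compute the MD5 of the downloaded installer and compare it
# against the fingerprint published above.
import hashlib
import sys

EXPECTED = "073d2cfddaa0fe16ec30ac5eb12563ad"

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

path = sys.argv[1] if len(sys.argv) > 1 else "cryptostorm_widget_0-91w.exe"
actual = md5_of(path)
print("match" if actual == EXPECTED else "MISMATCH: got " + actual)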
by cryptostorm_ops
Sun Nov 24, 2013 3:22 am
Forum: member support & tech assistance
Topic: constantly reconnecting | RESOLVED
Replies: 23
Views: 24434

Re: constantly reconnecting

The tech ops team is looking at this reconnection issue currently.

It does appear that a selection of members are having an issue related to premature reset of DHCP credentials. It's somewhat tricky as this is not universal and thus debugging has been slower than usual.

Additionally, we are testing internally several adjustments to client configurations to increase resilience in the event DHCP leases terminate during a session as a result of network anomalies between the client and the cryptostorm exitnode clusters.

If anyone else is seeing these reconnect issues this weekend, PLEASE let us know in this thread - and most importantly let us know what OS and connection method you are using.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Thu Nov 21, 2013 7:56 pm
Forum: member support & tech assistance
Topic: Weekend network tuning, some bugs, etc. | RESOLVED
Replies: 7
Views: 9692

Re: Weekend network tuning, some bugs, etc.

There's a thread here somewhere discussing Windows route additions/teardowns & route metric parameterization. It's a really important issue, and one that's in active development as part of the Windows-compile widget roadmap of future functionality.

Basically, we're moving towards taking full control of local routing table setup/teardown on the Windows compile - because Windows does a really terrible job of it. Unpredictable, too. I suspect those lines you're seeing are actually an example of that: not directly dispositive, but indicative of suboptimal route management, to say the least. I did pass that post direct to the widget dev folks, and I know they've been discussing it.

If someone finds where this issue has been discussed - perhaps in a HOWTO thread? - and can cross-post a reference to this thread, that would be great. It's an ongoing area and I expect will have lots of discussion building from it, so it might even split into its own thread as time goes on.

Thank you.

~ cryptostorm_ops
by cryptostorm_ops
Wed Nov 20, 2013 1:54 pm
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: client config for cryptostorm: general discussion & bughunt
Replies: 57
Views: 83760

Re: cryptostorm: config & parameter settings (client & server)

DesuStrike wrote:I am not exactly sure how to pull this off but I would love if there was a way to highlight all changes to the client and server configuration compared to the previous version.
This is something we're keen to support, and is one of the notable weaknesses of our current "post it to a forum thread" style of version control. Just about any more advanced version control system automates this change-highlighting information, so it's easy to see what's changed.

So the question is: what system is best for our next iteration? It's possible to do this manually, of course, via some basic highlighting in a forum thread - different colors, for example. Or we could do something more fancy, like Git-style distributed version control. Basically, we're open to suggestions and advice.
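
As a stopgap while we settle on a system, anyone can produce that kind of change-highlighting locally; here's a sketch using Python's standard difflib against two saved revisions (the filenames are placeholders for whatever you've downloaded):

Code: Select all

# Sketch: unified diff between two locally-saved config revisions - the same
# change-highlighting a proper version control system would automate.
# Filenames are placeholders.
import difflib

with open("cryptostorm_client_old.conf") as old_f, \
     open("cryptostorm_client_new.conf") as new_f:
    old_lines = old_f.readlines()
    new_lines = new_f.readlines()

for line in difflib.unified_diff(old_lines, new_lines,
                                 fromfile="old", tofile="new"):
    print(line, end="")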

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Wed Nov 20, 2013 1:39 pm
Forum: member support & tech assistance
Topic: DD-WRT: Dropped VPN connection + Resolving bugs
Replies: 7
Views: 8088

Re: DD-WRT: Dropped VPN connection + Resolving bugs

We default to verbosity 0 for client config for basic security: fewer logs mean that in the event of an endpoint security breach client-side, there's less forensic data with which an attacker can make ex post facto network correlations. Somewhat of an unlikely scenario, but its probability is not zero and thus the benefit gained from defaulting to 0 verbosity client-side is greater than zero. Hence, the decision is made to default to 0... with the option for members to manually up verbosity as they so choose.

We're curious to see what the logfiles say, as at this point the team doesn't have much of a theory as to what might be going on, to be honest.

Thank you,

~ cryptostorm_ops
by cryptostorm_ops
Wed Nov 20, 2013 12:30 am
Forum: cryptostorm in-depth: announcements, how it works, what it is
Topic: cryptostorm's network access widget, rev. 0.9 public beta
Replies: 21
Views: 62910

Re: cryptostorm's network access widget, rev. 0.9 public beta

The 0.84 release candidate has indeed been promoted to 0.90 beta version without any code edits. It's a bit confusing to have the jump in numbers, but we decided to keep the underlying .exe as 0.84 so folks with that version already installed can see that a reinstall is not necessary - while at the same time making clear that the release candidate is now an official beta.

There are a couple of minor known bugs in the 0.90 version relating to graphical component behavior, which may result in a 0.91 version being distributed once they are fully resolved. However, given that core functionality has stabilized, 0.90 is a good beta release.

We're currently working through some options for managing upgrades/updates to the widget, and will be posting more on that in this thread throughout the week.

Thank you.

~ cryptostorm_team