
External Blogs

Montréal-Python: Call to Action

Montreal Python - Mon, 02/01/2016 - 00:00

Ladies and Gentlemen, we at Montreal Python are super excited for 2016 and we have come up with some great ideas.

In order to turn these ideas into reality, we will need some help. Montreal Python is an open community and collaboration is key to our success. So we are inviting beginners, experts and newcomers to join us at our next organization meeting on Monday February 8th.

There we will discuss topics such as:

  • Annual event / conference
  • Workshops
  • Hackathons / Project nights
  • The future of Montreal Python / elections

Montreal has a pretty exciting Python scene, and thanks to the community that is something we'll maintain for years to come. Now is your chance to come and make things happen.

When

Monday February 8th at 6pm

Where

Shopify Offices 490 rue de la Gauchetière west https://goo.gl/maps/MJrA2RN8e912

Who

Anyone who wants to help or is curious about the Python Community in Montreal

For those that can't attend, don't worry, send us an email with your ideas at mtlpyteam@googlegroups.com.


My free software activities, January 2016

Anarcat - Sun, 01/31/2016 - 17:18
Debian Long Term Support (LTS)

This is my second month working on Debian LTS, started by Raphael Hertzog at Freexian. I think this month has been a little better for me, as I was able to push two DLAs (Debian LTS Advisories), which are similar to regular DSAs (Debian Security Advisories) but apply only to LTS releases.

phpMyAdmin and Prosody

I pushed DLAs for phpmyadmin (DLA-406-1) and prosody (CVE-2016-0756). Both were pretty trivial, but I still had to boot a squeeze VM to test the resulting packages, something that was harder than expected. Still, the packages were accepted in squeeze-lts and should work fine.

icu and JDK vulnerabilities

I also spent a good amount of time trying to untangle the mess that Java software has become, and in particular the icu vulnerabilities, CVE-2015-4844 and CVE-2016-0494. I ended up being able to backport patches and build packages, not without a significant amount of pain, because upstream failed to clearly identify which patches did what.

The fact that they (Oracle) did not notify their own upstream (icu) is also a really questionable practice in the free software world, which doesn't come as a surprise coming from Oracle anymore, unfortunately. Even worse, CVE-2016-0494 was actually introduced as part of the fix for CVE-2015-4844. I am not even sure the patches provided actually fix the problem because of course Oracle didn't clearly state what the problem was or how to exploit it.

Still: I did the best I could under the circumstances and built packages which I shared with the debian-lts list in the hope that others could test them. I am not very familiar with the icu package, or even Java anymore, so I do not feel comfortable uploading those fixes directly right now, especially since I am just trusting whatever was said on the Red Hat and icu bugtrackers. Hopefully someone else can pick this up and confirm I had the right approach.

OpenSSH vulnerabilities

I also worked on CVE-2016-1908, a fairly awkward vulnerability in OpenSSH involving the bypass of a security check in the X server that forbids certain clients from looking at keystrokes, selections and other data belonging to other clients. The problem is pretty well described in this article. Basically, there are two ways for applications to talk to the X server: "trusted" and "untrusted". If an application is "trusted", it can do all sorts of things like manipulate the clipboard, send keystrokes to other applications, sniff keystrokes and so on. This seems fine if you are running local apps (a good example is xdotool to test this) but can be pretty bad once X forwarding comes into play in SSH, because then the remote server can use your X credentials to run arbitrary X code in your local X server. In other words, once you forward X, you trust the remote server as if it were local, more or less.

This is why OpenSSH 3.8 introduced the distinction between -X (untrusted) and -Y (trusted). Unfortunately, after quite a bit of research and work to reproduce the issue (I could not reproduce it!), I realized that Debian has, ever since 3.8 was released (around the "sarge" days!), forcibly defaulted ForwardX11Trusted to yes, which makes -X and -Y behave the same way. I described all of this in a post to the LTS list and the OpenSSH maintainers, and it seems there were good reasons for this back then (-X actually breaks a lot of X clients; for example, selecting text will crash xterm), but I still don't quite see why we shouldn't tell people to consciously use -Y if they need to, instead of defaulting to the nasty insecure behavior.
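For the record, here is roughly how one can see the difference (a sketch: remote.example.com is a placeholder, and xdotool needs to be installed on the remote side):

# connect with explicitly untrusted forwarding, overriding the Debian default
$ ssh -X -o ForwardX11Trusted=no remote.example.com

# then, from the remote shell, try to inject a keystroke into a local xterm:
$ xdotool search --name xterm key --window %1 a
# over a trusted (-Y) connection this types an "a" in your xterm;
# over an untrusted one the X server should refuse the request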

Anyways, this will probably end up being swept under the rug for usability reasons, but just keep in mind that -X can be pretty nasty if you run it against an untrusted server.

Xscreensaver vulnerability

This one was fun. JWZ finally got bitten by his own rants with a pretty embarrassing vulnerability (CVE-2015-8025) that allowed one to crash the login dialog (and unlock the screen) by hot-swapping external monitors (!). I worked on trying to reproduce the issue (I couldn't: the HDMI connector on my laptop stopped working, presumably because of a Linux kernel backport) and on building the patches provided by the maintainer on wheezy, then pushed debdiffs to the security team, which proceeded with the upload.

Other LTS work

I still spend a bit too much time, for my taste, trying to find work in LTS land. Very often, all the issues are assigned, or the ones that remain seem impossible to fix (like the icu vulnerabilities) or belong to packages so big that I am scared to work on them (like eglibc). Still, the last week of work was much better, thanks to the excellent work that Guido Günther has done on front desk duties this week. My turn is coming up next week and I hope I can do the same for my fellow LTS workers.

Oh, and I tried to reproduce the cpio issue (CVE-2016-2037) and failed, because I didn't know enough about Valgrind. But even then, I don't exactly know where to start to fix that issue. It seems no one does, because this unmaintained package is still not fixed anywhere...

systemd-nspawn adventures

In testing the OpenSSH, phpMyAdmin and Prosody issues, I had high hopes that systemd-nspawn would enable me to run an isolated squeeze container reliably. But I had trouble: for some reason, squeeze does not seem to like nspawn at all. First off, nspawn completely refuses to boot the container because it doesn't recognize it as an "OS root directory", which apparently requires an os-release file (in /etc, but the error message doesn't say that, because that would be too easy):

$ sudo systemd-nspawn -b -D /var/cache/pbuilder/squeeze-amd64-vm
Directory /home/pbuilder/squeeze-amd64-vm doesn't look like an OS root directory (os-release file is missing). Refusing.
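A minimal workaround, assuming the container lives at the path above, is to create the file empty:

$ sudo touch /var/cache/pbuilder/squeeze-amd64-vm/etc/os-release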

An empty file works: I also tried copying the file from a Jessie system and "faking" the data in it, but that is not necessary. With the file in place, nspawn accepts booting the container. The next problem is that it just hangs there: it seems that the getty programs can't talk to the nspawn console:

$ sudo systemd-nspawn -b -D /var/cache/pbuilder/squeeze-amd64-vm
Spawning container squeeze-amd64-vm on /home/pbuilder/squeeze-amd64-vm.
Press ^] three times within 1s to kill container.
/etc/localtime is not a symlink, not updating container timezone.
INIT: version 2.88 booting
Using makefile-style concurrent boot in runlevel S.
Setting the system clock.
Cannot access the Hardware Clock via any known method.
Use the --debug option to see the details of our search for an access method.
Unable to set System Clock to: Sun Jan 31 15:57:31 UTC 2016 ... (warning).
Activating swap...done.
Setting the system clock.
Cannot access the Hardware Clock via any known method.
Use the --debug option to see the details of our search for an access method.
Unable to set System Clock to: Sun Jan 31 15:57:31 UTC 2016 ... (warning).
Activating lvm and md swap...done.
Checking file systems...fsck from util-linux-ng 2.17.2
done.
Mounting local filesystems...done.
Activating swapfile swap...done.
Cleaning up temporary files....
Cleaning up temporary files....
INIT: Entering runlevel: 2
Using makefile-style concurrent boot in runlevel 2.
INIT: Id "2" respawning too fast: disabled for 5 minutes
INIT: Id "1" respawning too fast: disabled for 5 minutes
INIT: Id "3" respawning too fast: disabled for 5 minutes
INIT: Id "4" respawning too fast: disabled for 5 minutes
INIT: Id "5" respawning too fast: disabled for 5 minutes
INIT: Id "6" respawning too fast: disabled for 5 minutes
INIT: no more processes left in this runlevel

Note that before the INIT messages show up, quite a bit of time passes, around a minute or two. And then the container is just stuck there: no login prompt, no nothing. Turning off the VM is also difficult:

$ sudo machinectl list
MACHINE          CONTAINER SERVICE
squeeze-amd64-vm container nspawn

1 machines listed.
$ sudo machinectl status squeeze-amd64-vm
squeeze-amd64-vm
       Since: dim 2016-01-31 10:57:31 EST; 4min 44s ago
      Leader: 3983 (init)
     Service: nspawn; class container
        Root: /home/pbuilder/squeeze-amd64-vm
     Address: fe80::ee55:f9ff:fec5:f255
              2001:1928:1:9:ee55:f9ff:fec5:f255
              fe80::ea9a:8fff:fe6e:f60
              2001:1928:1:9:ea9a:8fff:fe6e:f60
              192.168.0.166
        Unit: machine-squeeze\x2damd64\x2dvm.scope
              ├─3983 init [2]
              └─4204 lua /usr/bin/prosody
$ sudo machinectl poweroff squeeze-amd64-vm   # does nothing
$ sudo machinectl terminate squeeze-amd64-vm  # also does nothing
$ sudo kill 3983                              # does nothing
$ sudo kill -9 3983
$

So only the last kill -9 worked:

Container squeeze-amd64-vm terminated by signal KILL.

Pretty annoying! So I ended up doing all my tests in a chroot, which involved shutting down the web server on my laptop (for phpMyAdmin) and removing policy-rc.d to allow the services to start in the chroot. That worked, but I would prefer to run that code in a container. I'd be happy to hear how other maintainers are handling this kind of stuff.
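For reference, a sketch of that chroot workaround (the paths and service names are assumptions based on the tests above, not the exact commands I used):

# remove the pbuilder hook that blocks init scripts in the chroot
$ sudo rm /var/cache/pbuilder/squeeze-amd64-vm/usr/sbin/policy-rc.d

# stop the conflicting web server on the host, then start the service inside
$ sudo service apache2 stop
$ sudo chroot /var/cache/pbuilder/squeeze-amd64-vm /etc/init.d/prosody start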

For the OpenSSH vulnerability testing, I also wanted to have an X server running from squeeze, something which I found surprisingly hard. I was not able to figure out how to make qemu boot from a directory (the above chroot), so I turned to the Squeeze live images from the cdimage archive. Qemu, for some reason, was not able to boot those either: I would only get a grey screen. So I ended up installing Virtualbox, which worked perfectly, but I'd love to hear how I could handle this better as well.

Other free software work

As usual, I did tons more stuff on the computer this month. Way more than I should have, actually. I am trying to take some time to reflect upon my work and life these days, and the computer is more part of the problem than of the solution, so all this feels more like a vice I can't get rid of than an accomplishment. Still, you might be interested to know about those projects, so here they are.

Ledger timetracking

I am tracking the time I work on various issues through the overwhelming org-mode in Emacs. The rationale I had was that I didn't want to bother writing yet another time tracker, having written at least two before. One is the old phpTimeTracker, the other is a rewrite that never got anywhere, and finally, I had to deal with the formidable kProject during my time at Koumbit.org. All of those made me totally allergic to project trackers, timetrackers, and reinventing the wheel, so I figured it made sense to use an already existing solution.

Plus, org-mode allows me to track todos in a fairly meaningful way and I can punch into todo items fairly transparently. I also had a hunch I could bridge this with ledger, a lightweight accounting tool I started using recently. I was previously using the heavier Gnucash, almost a decade ago, but I was really seduced by the idea of a commandline tool that stores its data in a flat file that I can check in to git.

How wrong I was! First off, ledger can't read org files out of the box. Oddly, you need to convert those files into timeclock.el-formatted files, which is a completely different file format from a completely different timetracker. It is nevertheless a very simple format. An example:

i 1970-01-01 12:00:00 project test 4h
o 1970-01-01 16:00:00

... which makes it possible to write a timetracker with two simple shell aliases:

export TIMELOG=$HOME/.timelog
alias ti="echo i `date '+%Y-%m-%d %H:%M:%S'` \$* >>$TIMELOG"
alias to="echo o `date '+%Y-%m-%d %H:%M:%S'` >>$TIMELOG"

How's that for simplicity!

So you use John Wiegley's org2tc (or my fork which adds a few improvements) to convert from org to timeclock.el. From there on, a bunch of tools can do reporting on those files, the most interesting being obviously ledger itself, as it can read those files natively (although hledger has trouble including them). So far so good: I can do time tracking very easily and report on my time now!
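The whole pipeline boils down to something like this (a sketch; file names are placeholders):

# convert org-mode clock data to timeclock.el format, then report on it
$ org2tc worklog.org > worklog.timeclock
$ ledger -f worklog.timeclock balance    # total time per project
$ ledger -f worklog.timeclock register   # chronological listing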

Now, to turn this into bills and actual accounting, well... it's really much more complicated. To make a long story short, it works, but I really had to pull my hair out and ended up making yet another git repository to demonstrate how it could work. I am now stuck at the step of actually generating bills more automatically, which seems to be a total pain. The examples and documentation I found were limited, and while I feel that some people are actively doing this, they have yet to reveal their magic sauce in a meaningful way. I was told on IRC that no one has actually achieved converting timeclock.el entries directly into bills...

Kodi

I have done a small patch to the rom collection browser to turn off an annoying "OK" dialog that would block the import of ROMs. This actually was way more involved than expected, considering the size of the patch: I had to export the project to Github, since the original repository at Google Code is now archived, just like all Google Code repositories. I hope someone will pick it up from there.

Sopel

I have finally got my small patch for SNI support merged in Sopel! It turns out they are phasing out their own web module in favor of Requests, something that was refused last year. It seems the Sopel developers finally saw the interest in avoiding the maintenance cost of their own complete HTTP library... in an IRC bot.

Working on this patch, I filed a bug in requests which was promptly fixed.

Feed2tweet and spigot

I already mentioned how I linked this blog to Twitter through the use of feed2tweet. Since then, some of my pull requests and issues were merged while others are still pending.

In the meantime, I figured it would make sense to also post to identi.ca. This turned out to be surprisingly difficult: the only bridge available would not work very well for me. I also filed a bunch of issues in the hope things would stabilize, but so far I have not made this work properly.

It seems to me that all of this stuff is really just reinventing the wheel. There are pretty neat microblogging libraries out there; one that is actively developed is libturpial, used in the Turpial client. It currently only supports status.net and Twitter, but if Pump.io support were implemented, it would solve all of the above problems at once...

Git-mediawiki

Did I mention how awesome the git-mediawiki remote is? It allows me to clone Mediawiki wikis and transparently read and write to them using the usual git commands! I use it to keep a mirror of the amateur radio wiki site, for example, as it makes no sense to me not to have this site available offline. I was trying to mirror Wikivoyage and it would block at 500 pages, so I made a patch to support larger wikis.
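Usage is as simple as it sounds. A sketch with a placeholder URL (git-remote-mediawiki expects the path where the wiki's api.php lives):

# clone the wiki as a git repository, edit a page, push it back
$ git clone mediawiki::https://wiki.example.com/w
$ cd w && $EDITOR Main_Page.mw
$ git commit -am "fix a typo" && git push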

Borg resignation

Finally, it is with great sadness that I announce that I have left the Borg backup project. It seems that my views of release and project management are irreconcilable with those of the maintainer of the Attic fork. Those looking for more details and explanations are welcome to look in the issue tracker for the various discussions regarding i18n, the support timeframe and compatibility policy, or to contact me personally.


Internet in Cuba

Anarcat - Mon, 01/25/2016 - 14:50

A lot has been written about the Internet in Cuba over the years. I have read a few articles, from the New York Times' happy support for Google's invasion of Cuba to RSF's dramatic and fairly outdated report about censorship in Cuba. Having written before about Internet censorship in Tunisia, I was curious to see if I could get a feel for what it is like over there, now that a new Castro is in power and the Obama administration has started restoring diplomatic ties with Cuba. Despite those political changes signifying the end of an embargo that the Cuban government has called genocidal, it is surprisingly difficult to get fresh information about the current state of affairs.

This article aims to fill that gap by clarifying how the internet works in Cuba, what kind of censorship mechanisms are in place, and how to work around them. It also digs more technically into the network architecture and performance. It is published in the hope of providing both Cubans and the rest of the world with a better understanding of the Cuban network and, if possible, of giving Cubans ways to access the internet more cheaply or without censorship.

"Censorship" and workarounds

Unfortunately, I have been connected to the internet only through the Varadero airport and the WiFi of a "full included" resort near Jibacoa. I have come to assume that this network is likely to be on a segregated, uncensored internet, while the rest of the country suffers the wrath of the Internet censorship in Cuba that I have seen documented elsewhere.

Through my research, I couldn't find any sort of direct censorship. The Netalyzr tool couldn't find anything significantly wrong with the connection, other than the obvious performance problems related to the overloaded uplinks of the Cuban internet. I ran an incomplete OONI probe as well, and it seems no obvious censorship was detected there either, at least according to folks in the helpful #ooni IRC channel. Tor also works fine, and could be a great way to avoid the global surveillance system described later in this article.

Nevertheless, it still remains to be seen how the internet is censored on the "real" Cuban internet, outside of the tourist-designated areas. Hopefully future visitors or locals can expand on this using the tools mentioned above on the regular internet.

Usual care should be taken when using any workaround tools, mentioned in this post or not, as different regimes around the world have accused, detained, tortured and sometimes killed people for the mere fact of using or distributing circumvention tools. For example, a Russian developer was arrested and detained in 2001 by the United States' FBI for exposing vulnerabilities in the Adobe e-books copy protection mechanisms. Similarly, people distributing Tor and other tools were arrested during the period prior to the revolution in Tunisia.

The Cuban captive portal

There is, however, a more pernicious and yet very obvious censorship mechanism at work in Cuba: to get access to the internet, you have to go through what seems to be a state-wide captive portal, which I have seen both at the hotel and at the airport. It is presumably deployed at all internet access points.

To get credentials through that portal, you need a username and password which you get by buying a Nauta card. Those cards cost 2$CUC and get you an hour of basically unlimited internet access. That may not seem like a lot for a rich northern hotel party-goer, but for Cubans, it's a lot of money, given that the average monthly salary is around 20$CUC. The system is also pretty annoying to use, because it means you do not get continuous network access: every hour, you need to input a new card, which will obviously make streaming movies and other online activities annoying. It also makes hosting servers basically impossible.

So while Cuba does not have, like China or Iran, a "great firewall", there is definitely a big restriction to going online in Cuba. Indeed, it seems to be how the government ensures that Cubans do not foment too much dissent online: keep the internet slow and inaccessible, and you won't get too many Arab spring / blogger revolutions.

Bypassing the Cuban captive portal

The good news is that it is perfectly possible for Cubans (or at least for a tourist like me, with resources outside of the country) to bypass the captive portal. Like many poorly implemented portals, this one allows DNS traffic to go through, which makes it possible to access the global network for free by using a tool like iodine, which tunnels IP traffic over DNS requests.

Of course, the bandwidth and reliability of the connection you get through such a portal is pretty bad. I have regularly seen 80% packet loss and over two minutes of latency:

--- 10.0.0.1 ping statistics ---
163 packets transmitted, 31 received, 80% packet loss, time 162391ms
rtt min/avg/max/mdev = 133.700/2669.535/64188.027/11257.336 ms, pipe 65

Still, it allowed me to log in to my home server through SSH, using Mosh to work around the reliability issues.

Every once in a while, mosh would get stuck and keep on trying to send packets to probe the server, which would clog the connection even more. So I regularly had to restart the whole stack using these commands:

killall iodine       # stop DNS tunnel
nmcli n off          # turn off wifi to change MAC address
macchanger -A wlan0  # change MAC address
nmcli n on           # turn wifi back on
sleep 3              # wait for wifi to settle
iodine-client-start  # restart DNS tunnel

The Koumbit Wiki has good instructions on how to setup a DNS tunnel. I am wondering if such a public service could be of use for Cubans, although I am not sure how it could be deployed only for Cubans, and what kind of traffic it could support... The fact is that iodine does require a server to operate, and that server must be run on the outside of the censored perimeter, something that Cubans may not be able to afford in the first place.
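For reference, a minimal iodine setup looks roughly like this (t.example.com stands for a subdomain delegated, through an NS record, to the server):

# on the server, outside the censored network:
$ sudo iodined -f 10.0.0.1 t.example.com

# on the client, behind the captive portal:
$ sudo iodine -f t.example.com
$ ping 10.0.0.1   # the tunnel endpoint seen in the ping statistics above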

Another possible way to save money with the captive portal would be to write something that automates connecting and disconnecting from the portal. You would feed that program a list of credentials and it would connect to the portal only on demand, and disconnect as soon as no traffic goes through. There are details on the implementation of the captive portal below that may help future endeavours in that field.

Private information revealed to the captive portal

It should be mentioned, however, that the captive portal gathers a significant amount of information about its clients, which is a direct threat to the online privacy of Cuban internet users. Of course, the unique identifiers issued with the Nauta cards can be correlated with your identity right from the start. For example, I had to give my room number to get a Nauta card issued.

Then the central portal also knows which access point you are connected to. For example, the central portal knew I was connected to Wifi_Memories_Jibacoa, which, for anyone who cares to research, gives a location of about 20 square meters for where I was when connected (there is only one access point in the whole hotel).

Finally, the central portal also knows my MAC address, a unique identifier for the computer I am using, which also reveals which brand of computer I am using (Mac, Lenovo, etc.). While this address can be changed, very few people know that, let alone how.

This led me to question whether I would be allowed back in Cuba (or even allowed out!) after publishing this blog post, as it is obvious that I can be easily identified based on the time this article was published, my name and other details. Hopefully the Cuban government will either not notice or not care, but this can be a tricky situation, obviously. I have heard that Cuban prisons are not the best hangout place in Cuba, to say the least...

Network configuration assessment

This section is more technical and delves more deeply into the Cuban internet to analyze the quality and topology of the network, along with hints as to which hardware and providers are being used to support the Cuban government.

Line quality

The internet is actually not so bad in the hotel. Again, this may be because of the very fact that I am in that hotel, and that I get privileged access to the new fiber line to Venezuela, the ALBA-1 link.

The line speed I get is around 1 Mbps, according to speedtest, which selected a server from LIME in George Town, Cayman Islands:

[1034]anarcat@angela:cuba$ speedtest
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Empresa de Telecomunicaciones de Cuba (152.206.92.146)...
Selecting best server based on latency...
Hosted by LIME (George Town) [391.78 km]: 317.546 ms
Testing download speed........................................
Download: 1.01 Mbits/s
Testing upload speed..................................................
Upload: 1.00 Mbits/s

Latency to the rest of the world is of course slow:

--- koumbit.org ping statistics ---
122 packets transmitted, 120 received, 1,64% packet loss, time 18731,6ms
rtt min/avg/max/sdev = 127,457/156,097/725,211/94,688 ms

--- google.com ping statistics ---
122 packets transmitted, 121 received, 0,82% packet loss, time 19371,4ms
rtt min/avg/max/sdev = 132,517/160,095/724,971/93,273 ms

--- redcta.org.ar ping statistics ---
122 packets transmitted, 120 received, 1,64% packet loss, time 40748,6ms
rtt min/avg/max/sdev = 303,035/339,572/965,092/97,503 ms

--- ccc.de ping statistics ---
122 packets transmitted, 72 received, 40,98% packet loss, time 19560,2ms
rtt min/avg/max/sdev = 244,266/271,670/594,104/61,933 ms

Interestingly, Koumbit is actually the closest host in the above test. It could be that Canadian hosts are less affected by bandwidth problems compared to US hosts because of the embargo.

Network topology

The various traceroutes show a fairly odd network topology, but that is typical of what I would describe as "colonized internet users", who have layers and layers of NAT and obscure routing that keep them from the real internet. Just like large corporations implementing NAT on a large scale, Cuba seems to have layers and layers of private RFC 1918 IPv4 space. A typical traceroute starts with:

traceroute to koumbit.net (199.58.80.33), 30 hops max, 60 byte packets
 1  10.156.41.1 (10.156.41.1)  9.724 ms  9.472 ms  9.405 ms
 2  192.168.134.137 (192.168.134.137)  16.089 ms  15.612 ms  15.509 ms
 3  172.31.252.113 (172.31.252.113)  15.350 ms  15.805 ms  15.358 ms
 4  pos6-0-0-agu-cr-1.mpls.enet.cu (172.31.253.197)  15.286 ms  14.832 ms  14.405 ms
 5  172.31.252.29 (172.31.252.29)  13.734 ms  13.685 ms  14.485 ms
 6  200.0.16.130 (200.0.16.130)  14.428 ms  11.393 ms  10.977 ms
 7  200.0.16.74 (200.0.16.74)  10.738 ms  10.019 ms  10.326 ms
 8  ix-11-3-1-0.tcore1.TNK-Toronto.as6453.net (64.86.33.45)  108.577 ms  108.449 ms

Let's take this apart line by line:

1 10.156.41.1 (10.156.41.1) 9.724 ms 9.472 ms 9.405 ms

This is my local gateway, probably the hotel's wifi router.

2 192.168.134.137 (192.168.134.137) 16.089 ms 15.612 ms 15.509 ms

This is likely not very far from the local gateway, probably still in Cuba. It is one bit away from the captive portal IP address (see below), so it is very likely related to the captive portal implementation.

 3  172.31.252.113 (172.31.252.113)  15.350 ms  15.805 ms  15.358 ms
 4  pos6-0-0-agu-cr-1.mpls.enet.cu (172.31.253.197)  15.286 ms  14.832 ms  14.405 ms
 5  172.31.252.29 (172.31.252.29)  13.734 ms  13.685 ms  14.485 ms

All of those are within RFC 1918 space. Interestingly, the Cuban DNS servers resolve one of those private IPs as being within Cuban space, on line #4. That line is interesting because it reveals the potential use of MPLS.

 6  200.0.16.130 (200.0.16.130)  14.428 ms  11.393 ms  10.977 ms
 7  200.0.16.74 (200.0.16.74)  10.738 ms  10.019 ms  10.326 ms

Those two lines are the only ones that actually reveal that the route belongs in Cuba at all. Both IPs are in a tiny (/24, or 256 IP addresses) network allocated to ETECSA, the state telco in Cuba:

inetnum:     200.0.16/24
status:      allocated
aut-num:     N/A
owner:       EMPRESA DE TELECOMUNICACIONES DE CUBA S.A. (IXP CUBA)
ownerid:     CU-CUBA-LACNIC
responsible: Rafael López Guerra
address:     Ave. Independencia y 19 Mayo, s/n,
address:     10600 - La Habana - CH
country:     CU
phone:       +53 7 574242 []
owner-c:     JOQ
tech-c:      JOQ
abuse-c:     JEM52
inetrev:     200.0.16/24
nserver:     NS1.NAP.ETECSA.NET
nsstat:      20160123 AA
nslastaa:    20160123
nserver:     NS2.NAP.ETECSA.NET
nsstat:      20160123 AA
nslastaa:    20160123
created:     20030512
changed:     20140610

Then the last hop:

8 ix-11-3-1-0.tcore1.TNK-Toronto.as6453.net (64.86.33.45) 108.577 ms 108.449 ms 108.257 ms

...interestingly, lands directly in Toronto, in this case going on to Koumbit afterwards; this is the first hop that varies according to the destination, hops 1-7 being a common trunk for all external communications. It's also interesting that this hop adds a good 90 milliseconds of extra latency, showing that a significant distance is covered and a significant amount of equipment is crossed. Yet only a single hop shows up, hiding the intermediate step over the Venezuelan link or any other links for that matter. Something obscure is going on there...

Also interesting to note is the traceroute to the redirection host, which is only one hop away:

traceroute to 192.168.134.138 (192.168.134.138), 30 hops max, 60 byte packets
 1  192.168.134.138 (192.168.134.138)  6.027 ms  5.698 ms  5.596 ms

Even though it is not the gateway:

$ ip route
default via 10.156.41.1 dev wlan0  proto static  metric 1024
10.156.41.0/24 dev wlan0  proto kernel  scope link  src 10.156.41.4
169.254.0.0/16 dev wlan0  scope link  metric 1000

This implies very close coordination between the different access points and the captive portal system. Finally, note that there seem to be only two peers to the Cuban internet: Teleglobe, formerly Canadian, now owned by the Indian Tata group, and Telefónica, the Spanish telco that colonized most of Latin America's internet, all the way down to Argentina. This is confirmed by my traceroutes, which show traffic to Koumbit going through Tata and traffic to Google going through Telefónica.

Captive portal implementation

The captive portal is https://www.portal-wifi-temas.nauta.cu/ (not accessible outside of Cuba) and uses a self-signed certificate. The domain name resolves to 190.6.81.230 in the hotel.

Accessing http://1.1.1.1/ gives you a status page which allows you to disconnect from the portal. It actually redirects you to https://192.168.134.138/logout.user. That uses another, different, self-signed certificate, which actually reveals the involvement of Gemtek, a "world-leading provider of Wireless Broadband solutions, offering a wide range of solutions from residential to business". It is somewhat unclear whether Gemtek's involvement here is deliberate or a misconfiguration on the part of Cuban officials, especially since the certificate is self-signed and was issued in 2002. It could, however, be a trace of the supposed involvement of China in the development of Cuba's networking systems, although Gemtek is based in Taiwan, not in mainland China.
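For the record, this is roughly how one can inspect such a certificate (the IP being the redirection host mentioned above):

$ echo | openssl s_client -connect 192.168.134.138:443 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates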

That IP, in turn, redirects you to the same portal but in a page that shows you the statistics:

https://www.portal-wifi-temas.nauta.cu/?mac=0024D1717D18&script=logout.user&remain_time=00%3A55%3A52&session_time=00%3A04%3A08&username=151003576287&clientip=10.156.41.21&nasid=Wifi_Memories_Jibacoa&r=ac%2Fpopup

Notice how you see the MAC address of the machine in the URL (randomized; this is not my MAC address), along with the remaining time, session time, client IP and the WiFi access point ESSID. There may be some potential for defrauding the session time there; I haven't tested it directly.

Hitting Actualizar redirects you back to the IP address, which redirects you to the right URL on the portal. The "real" logout is at:

http://192.168.134.138/logout.user?cmd=logout

The login is performed against https://www.portal-wifi-temas.nauta.cu/index.php?r=ac/login with a referer of:

https://www.portal-wifi-temas.nauta.cu/?&nasid=Wifi_Memories_Jibacoa&nasip=192.168.134.138&clientip=10.156.41.21&mac=EC:55:F9:C5:F2:55&ourl=http%3a%2f%2fgoogle.ca%2f&sslport=443&lang=en-US%2cen%3bq%3d0.8&lanip=10.156.41.1

Again, notice the information revealed to the central portal.
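Those endpoints suggest the connect/disconnect cycle could be scripted. A hedged sketch with curl (-k because of the self-signed certificates; the login form field names are guesses, the URLs are the ones observed above):

# log out of the portal, saving the remaining time on the card
$ curl -k 'http://192.168.134.138/logout.user?cmd=logout'

# log back in; the field names "username" and "password" are assumptions
$ curl -k 'https://www.portal-wifi-temas.nauta.cu/index.php?r=ac/login' \
       --data 'username=151003576287&password=XXXXXXXXXXXX'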

Equipment and providers

I ran Nmap probes against both the captive portal and the redirection host, in the hope of finding out how they were built and if they could reveal the source of the equipment used.
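The probes were roughly of this form (the exact flags are an assumption; -A enables the OS and service version detection that produced the results below):

$ sudo nmap -A 190.6.81.230      # the captive portal
$ sudo nmap -A 192.168.134.138   # the redirection host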

The complete nmap probes are available separately; in short, it seems that the captive portal is running on some embedded device. The results are confusing, because the probe for the captive portal responds as if it were the gateway, which blurs even more the distinction between the hotel's gateway and the captive portal. This raises the distinct possibility that all access points are actually captive portals that authenticate against another central server.

The nmap traces do show three distinct hosts however:

  • the captive portal (www.portal-wifi-temas.nauta.cu, 190.6.81.230)
  • some redirection host (192.168.134.138)
  • the hotel's gateway (10.156.41.1)

They do have distinct signatures so the above may be just me misinterpreting traceroute and nmap results. Your comments may help in clarifying the above.

Still, the three devices show up as running Linux, in the last two cases with kernel versions between 2.4.21 and 2.4.31. Finding out which version of Linux they are actually running is way more challenging, and it is possible it is just some custom Linux distribution. Indeed, the webserver shows up as G4200.GSI.2.22.0155 and the SSH server is running OpenSSH 3.0.2p1, which is basically prehistoric (2002!), which corroborates the idea that this is some Gemtek embedded device.

The fact that those devices are running 14 years old software should be a concern to the people responsible for those networks. There is, for example, a remote root vulnerability that affects that specific version of OpenSSH, among many other vulnerabilities.

A note on Nauta card's security

Finally, one can note that it is probably trivial to guess card UIDs. All cards I have here start with the prefix 15100, the following four digits being 3576 or 4595, presumably depending on the "batch" that was sent to different hotels; those seem to be batches of 1000 cards. You can also correlate the UID with the date at which the card was issued. For example, 151003576XXX cards are all valid until 19/03/2017, and 151004595XXX cards are all valid until 23/03/2017. Here's the list of UIDs I have seen:

151004595313
151004595974
151003576287
151003576105
151003576097

The passwords, on the other hand, do seem fairly random (although my sample size is small). Interestingly, those passwords are also 12 digits long, which is about as strong as a seven-letter password of mixed uppercase and lowercase. If there is no rate limiting on that captive portal, it could be possible to guess those passwords, since you have free rein to access those routers. Depending on the performance of the routers, you could get lucky and find a working password for free...
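That equivalence is easy to check, since l() is the natural logarithm in bc:

$ echo '12*l(10)/l(2)' | bc -l   # 12 digits: about 39.9 bits of entropy
$ echo '7*l(52)/l(2)' | bc -l    # 7 mixed-case letters: also about 39.9 bits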

Conclusion

Clearly, Internet access in Cuba needs to be modernized. Cuba is years behind the rest of the Americas, if only judging by the percentage of the population with internet access, or by download speeds. The existence of a centralized captive portal also creates a huge surveillance potential that should be a concern for any Cuban, or for that matter, anyone wishing to live in a free society.

The answer, however, lies not in the liberalization of commerce and opening the doors to US companies and their own systems of surveillance. It should be possible, and even desirable, for Cubans to establish their own neutral network, a proposal I have made in the past even for here in Québec. This network could be used and improved by Cubans themselves, prioritizing local communities that would establish their own infrastructure according to their own needs. I have been impressed by this article about the El Paquete system: it shows great innovation and initiative from Cubans, who are known for engaging with technology in a creative way. This should be leveraged by letting Cubans do what they want with their networks, not by telling them what to do.

The best the Googles of this world can do to help Cuba is not to colonize Cuba's technological landscape, but to clean up their own, and to make their own tools more easily accessible and shareable offline. That is something companies can do right now, something I detailed in a previous article.


The Downloadable Internet

Anarcat - Tue, 01/12/2016 - 12:07
How corporations killed the web

I have read with fascination what we would once have called a blog post, except it was featured on The Guardian: Iran's blogfather: Facebook, Instagram and Twitter are killing the web. The "blogfather" is Hossein Derakshan, or h0d3r, an author from Teheran who was jailed for almost a decade for his blogging. The article is very interesting, both because it shows how fast things have changed in the last few years, technology-wise, but more importantly, how content-free the web has become, to the point where Facebook's latest acquisition, Instagram, is not even censored by Iran. Those platforms have stopped being censored not because of democratic progress, but because they have become totally inoffensive (in the case of Iran) or a tool of surveillance for governments and of targeted advertisement for companies (in the case of, well, most of the world).

This struck a chord, personally, at the political level: we are losing control of the internet (if we ever had it). The defeat isn't directly political: we have some institutions like ICANN and the IETF that we can still have an effect on, even if only at the technological level. The defeat is economic, and, of course, through economy comes enormous power. That defeat means that we first lost free and open access to the internet (yes, dialup used to be free) and then free hosting of our content (no, Google and Facebook are not free: you are the product). This marked a major change in the way content is treated online.

H0d3r explains this as the shift from a link-based internet to a stream-based internet, a "departure from a books-internet towards a television-internet". I have been warning about this "television-internet" in my talks and conversations for a while, and with Netflix taking the crown off Youtube (and making you pay for it, of course), we can assuredly say that H0d3r is right and that television, far from disappearing, is finally being resurrected and taking over the internet.

The Downloadable internet and open standards

But I would like to add to that: it is not merely that we had "links" before. We had, and still have, open standards. This made the internet "downloadable" (and by extension, uploadable) and decentralized.

(In fact, I still remember my earlier days on the web, when I would actually download images, as in "right-click" and "Save as..." images, not just have the browser fetch and display them on the fly. I would download images because they were big! It could take a minute, or sometimes more, to download images on older modems. Later, I would do the same with music: I would download WAV files before the rise of the MP3 format, of which I ended up building a significant collection (just fair use copies from friends and owned CDs, of course), and eventually video files.)

The downloadable internet is what still allows me to type this article in a text editor, without internet access, while reading H0d3r's blog post on my e-reader, because I downloaded his article off an RSS feed. It is what makes it possible for anyone to download a full copy of this blog post and connected web pages as a git repository and this way get the full history of modifications on all the pages, but also be able to edit it offline and push modifications back in.

Wikipedia is downloadable (there are even offline apps for your phone). Open standards like RSS feeds and HTML are downloadable. Heck, even the Internet Archive is downloadable (and I mean, all of it, not just the parts you want), surprisingly enough.

The app-based internet and proprietary software

App-based websites like Google Plus and Facebook are not really downloadable. They are meant to be browsed through an app, so what you actually see through your web browser is really more an application, downloaded software, than a downloaded piece of content. If you turn off Javascript, you will see that visiting Facebook actually shows no content: everything is downloaded on the fly by an application, itself downloaded, on the fly, by your browser. In a way, your browser has become an operating system that runs proprietary, untrusted and unverified applications from the web.

(The software is generally completely proprietary, except some frameworks that are published as free software in what looks like the lenient act of a godly king, but is actually more an economic decision of a clever corporation which outsources, for free, R&D and testing to the larger free software community. The real "secret sauce" is basically always proprietary, if only so that we don't freak out on stuff like PRISM that reports everything we do to the government.)

Technology is political. This new "app design" is not a simple optimization or a cosmetic accident of a fancy engineer: by moving content into an application, Facebook, Twitter and the like can see exactly what you do on a web page, what you actually read (as opposed to what you click on) and for how long. By adding a proprietary interface between you and the content online, the advertisement-surveillance complex can track every move you make online.

This is a very fine-tuned surveillance system, and because of the app, you cannot escape it. You cannot share the content outside of Facebook, as you can't download it. Or at least, it's not obvious how you can. Projects like youtube-dl are doing an amazing job reverse-engineering what is becoming the proprietary Youtube streaming protocol, which is constantly changing and not really documented. But it's a hack: a Sisyphean struggle which is bound to fail, and it does fail, all the time, until we figure out how to either turn those corporations into good netizens that respect and contribute to open standards (unlikely) or destroy those corporations (most likely).

You are trapped in their walled garden. No wonder internet.org is Facebook-only: for most people nowadays, the internet is the web, and the web is Facebook, Twitter and Google, or an iPad with a bunch of apps, each its own cute little walled garden, crafted just for you. If you think you like the Internet, you should really reconsider what you are watching, what you are consuming, or rather, how it is consuming you. There are alternatives. Facebook is a tough nut to crack for free software activists because we lack the critical mass. But Facebook is also an addiction for a lot of people, and spending less time on that spying machine would be a great improvement for you, I am sure. For everything else, we have good free software alternatives and open standards: use them.

"Big brother ain't watching you, you're watching him." - CRASS, Nineteen Eighty Bore (audio)


Montréal-Python 56: Yugoslav Zoophobia

Montreal Python - Tue, 01/12/2016 - 00:00

We are very proud to announce the speakers of our 56th meeting, which will be held at the Lightspeed Retail offices in Montreal.

Join us next Monday at 6:00pm for an amazing evening of Python :)

Flash presentations:

Van Duc Nguyen: CEDILLE - Engineering and Free Software (http://cedille.etsmtl.ca/).

A presentation of a scientific club at the École de Technologie Supérieure that carries out varied engineering projects using free software.

Main program:

Ted Landis: Using Pytest’s Fixtures for Specification By Example (SBE) Test Automation

Pytest’s incredibly flexible fixtures feature dependency injection and a modular extendable test design. But by adding the pytest-bdd plugin these fixtures also support high level Cucumber/Gherkin style scenario testing that allow your automated tests to be authored and reviewed by anyone on the product team. In this talk we will look at how pytest-bdd, and selenium based page objects are being used to automate functional testing of a cloud application.

Chris Parmer and Étienne Tétreault-Pinard: Plotly.js - The open-source charting library behind Plotly

They will talk about the decision to open-source Plotly.js, Plotly's core technology and graphing library (https://plot.ly/javascript/open-source-announcement/). They'll discuss the details of maintaining a large-scale open-source project at the core of Plotly's business, and how motivated individuals can use the library or contribute to it. There will be a short hands-on workshop on using Plotly.js as a business analyst or data scientist.

Pablo Duboue: Quick Prototyping Use Cases with Widgets in Jupyter Notebook

Jupyter Notebooks (formerly known as IPython notebooks) provide a Python REPL (a read-eval-print loop) directly in your browser. They also provide facilities for interactive "widgets" that communicate with the Python session in real time.

This feature is really useful for data analysis, but in this talk I will present a different application that may be of interest to other Pythonistas: the interactive creation of prototypes for user-experience use cases.

http://duboue.net/blog14.html

Where

Lightspeed offices, 700 St-Antoine E., #300 (3rd floor), Montréal https://goo.gl/maps/KBgi5L2qLus

When

Monday, January 18th 2016

Schedule
  • 6:00pm — Doors open
  • 6:30pm — Presentations start
  • 7:30pm — Break
  • 7:45pm — Second round of presentations
  • 9:00pm — End of the meeting, have a drink with us
We’d like to thank our sponsors for their continued support:
  • Lightspeed
  • UQÀM
  • Bénélux
  • w.illi.am/
  • Outbox
  • Savoir-faire Linux
  • Caravan
  • iWeb

My free software activities in December 2015

Anarcat - Tue, 01/05/2016 - 12:52

I am hereby joining the crowd of Debian and free software developers doing a monthly and public summary of their work.

Debian Long Term Support (LTS)

This is the first month in which I have been paid to work on Debian LTS. I wasn't nearly as available as I had hoped, but I was still able to work the 8 hours I committed to. It was mostly a learning process at this point. For example, I spent a rather long time working on the Redmine security issues, only to notice, at the end of December, that Redmine is not actually supported in LTS releases (mainly because Rails itself is not supported).

Still, the work is not completely lost: some patches were backported, and the sources and statuses of the patches were clarified. I also made a tool to track patches through git and SVN merges in Redmine's history, which can sometimes be really complicated.

However, because I lack the resources to set up a Squeeze Redmine instance, and because the feedback I have received so far has mostly been that Redmine is unsupported, I will not push those updates into LTS for now (until users explicitly ask for it, in which case I can perform the updates in January, with the help of testers).

I also got more familiar with the security team infrastructure. One of the reasons I didn't notice that Redmine was unsupported was that it is not clear at all, from the Security Tracker's perspective, which versions are supported and which are not. I have proposed a small patch to ignore CVEs that affect only unsupported versions, and the discussion on this is ongoing. I will hold off work on the security tracker for now and try to focus on actual uploads for the upcoming month.

Feed to tweet

I have worked quite a bit on the feed2tweet tool, which allows me to connect this blog to Twitter. I have filed the first 17 issues and pull requests of the project, basically rewriting almost all the code to tailor it to my needs and make it faster and easier to configure. I am wondering if it wouldn't have been simpler to write something from scratch using libturpial, especially since that would bring in GNU Social support and maybe, eventually, Pump.io support, which is on its way to being standardized by the W3C.

Hopefully, this post will confirm the software still works after all my changes...

Miscellaneous projects

I have also worked on my plethora of random projects in December.

  • I am looking for co-maintainers for bup-cron, a tool I wrote a while back to automate bup backups and related operations; I now use it only for my offsite backups, which are likely to be replaced with borg backup
  • I have updated the Debian package for sopel, which was renamed from Willie in Debian, and filed two related issues
  • my brain got eaten by Sudokus over the Christmas vacations and I found an easy way to generate new sudokus for basically zero money (small booklets can cost up to 8$ here!) using the sudoku software packaged in Debian. Unfortunately, it only generates a single sudoku per page; if people have better knowledge of Postscript than me, this would be a great and easy contribution (probably)
  • I have contributed significantly to the tuptime project by performing a fairly detailed code review and helping with the Debian packaging (that was November, but still). tuptime is pretty cool: I use it to guess how long my next reboot will be...

Montréal-Python 56: Call for speakers

Montreal Python - Tue, 01/05/2016 - 00:00

2016 is just starting and we've just finished celebrating the new year but it's time to prepare yourself for our next event, Montréal-Python 56.

It's your opportunity to come present, we are looking for speakers for talks of 30, 15 or 5 minutes.

For example, if you are using Python to deploy Docker services, doing Big Data, or simply having fun discovering new tricks that make your life easier, we want you on stage :)

Join us for the occasion at the Lightspeed offices!

To submit your talk, write us at mtlpyteam@googlegroups.com

Where

Lightspeed offices, 700 St-Antoine E., #300 (3rd floor), Montréal https://goo.gl/maps/WL119K6qj8x

When

Monday, January 18th 2016

Schedule
  • 6:00pm — Doors open
  • 6:30pm — Presentations start
  • 7:30pm — Break
  • 7:45pm — Second round of presentations
  • 9:00pm — End of the meeting, have a drink with us
We’d like to thank our sponsors for their continued support:
  • Lightspeed
  • UQÀM
  • Bénélux
  • w.illi.am/
  • Outbox
  • Savoir-faire Linux
  • Caravan
  • iWeb

Bridging Ikiwiki and Twitter with Python and feed2tweet

Anarcat - Tue, 12/29/2015 - 02:03

Typical:

  1. find new interesting software (feed2tweet)
  2. try it out and file two issues
  3. itch enough that I need to scratch it, and file 3 pull requests to ask for forgiveness

All fairly trivial, but it allowed me to make a simple cronjob to post my Ikiwiki blog posts straight out to my Twitter account. It's basically abusing my RSS feed to bridge to Twitter: the most boring and annoying part is setting up a new app, pasting the credentials in the config file, and then running the thing in a cron job.
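The cron job itself is a one-liner; a sketch, with the config path (and the exact spelling of the -c option) assumed:

# post new RSS entries to Twitter every 30 minutes
*/30 * * * * feed2tweet -c ~/.config/feed2tweet/feed2tweet.ini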

But in the end, I really ended up spending one more hour at a time when I should really be sleeping, scratching an itch that I didn't have before I started working on this thing in the first place.

Coincidentally, I requested to be added to Python Planet, looks like a fun place...


Using a Yubikey NEO for SSH and OpenPGP on Debian jessie

Anarcat - Mon, 12/14/2015 - 22:58

I recently ordered two Yubikey devices from Yubico, partly because of a special offer from Github. I ordered both a Yubikey NEO and a Yubikey 4, although I am not sure I remember why I ordered two - you can see their Yubikey product comparison if you want to figure that out, but basically, the main difference is that the NEO has support for NFC while the "4" has support for larger RSA key sizes (4096).

This article details my experiments on the matter. It is partly based on first-hand experience, but also links to various other tutorials that helped me along the way. Special thanks to the folks on various IRC channels who really helped me out in understanding this.

My objective in getting a hardware security token like this was three-fold:

  1. use 2FA on important websites like Github, to improve the security of critical infrastructure (like my Borg backup software)
  2. login to remote SSH servers without exposing my password or my private key material on third party computers
  3. store OpenPGP key material on the key securely, so that the private key material can never be compromised

To make a long story short: this article documents step 2 and, implicitly, step 3 (because I use OpenPGP to login to SSH servers). However, it is not possible to use the key on arbitrary third-party computers, given how much setup was necessary to make the thing work at all. 2FA on the Github site completely failed, but could be used on other sites, although this is not covered by this article.

I have also not experimented in detail with the other ways the Yubikey can be used (sorry for the acronym flood).

Update: OATH works! It is easy to configure, and I added a section below.

Not recommended

After experimenting with the device and doing a little more research, I am not sure it was the right decision to buy a Yubikey. I would not recommend buying Yubikey devices because they don't allow changing the firmware, making the device basically proprietary, even in the face of an embarrassing security vulnerability on the Yubikey NEO that came out in 2015. A security device, obviously, should be as open as the protocols it uses, otherwise it's basically impossible to trust that the crypto hasn't been backdoored or compromised, or, in this case, is vulnerable to the simplest drive-by attacks.

Furthermore, it turns out that the primary use case that Github was promoting is actually not working as advertised: to use the Yubikey on Github, you actually first need to configure 2FA with another tool, either with your phone's text messages (SMS) or with something like Google Authenticator. After contacting Github support, they explained that the Yubikey is seen as a "backup device", which seems really odd to me, especially considering the promotion and the fact that I don't have a "smart" (aka "Google", it seems these days) phone or the desire to share my personal phone number with Github.

Finally, as I mentioned before, the fact that those devices are fairly new, and that the configuration necessary to make them work at all is completely obtuse, non-standardized, or at least not available by default on arbitrary computers, makes them basically impossible to use on computers other than your own specially crafted machines.

Plugging it in

The Yubikey, when inserted into a USB port, seems to be detected properly. It shows up both as a USB keyboard and a generic device.

déc 14 17:23:26 angela kernel: input: Yubico Yubikey NEO OTP+U2F as /devices/pci0000:00/0000:00:12.0/usb3/3-2/3-2:1.0/0003:1050:0114.0016/input/input127
déc 14 17:23:26 angela kernel: hid-generic 0003:1050:0114.0016: input,hidraw3: USB HID v1.10 Keyboard [Yubico Yubikey NEO OTP+U2F] on usb-0000:00:12.0-2/input0
déc 14 17:23:26 angela kernel: hid-generic 0003:1050:0114.0017: hiddev0,hidraw4: USB HID v1.10 Device [Yubico Yubikey NEO OTP+U2F] on usb-0000:00:12.0-2/input1

We'll be changing this now: we want the key to support OTP, U2F and CCID. Don't worry about those acronyms for now, but U2F is for the web, CCID is for GPG/SSH, and OTP is for the One Time Passwords stuff mentioned earlier.

I am using the Yubikey Personalization tool from stretch because the jessie one is too old, according to Gorzen. Indeed, I found out that the jessie version doesn't ship with the proper udev rules. Also, note that we need to run as sudo otherwise we get a permission denied:

$ sudo apt install yubikey-personalization/stretch
$ sudo ykpersonalize -m86
Firmware version 3.4.3 Touch level 1541 Program sequence 1

The USB mode will be set to: 0x86

Commit? (y/n) [n]: y

To understand better what the above does, see the NEO composite device documentation.

The next step is to reconnect the key, for the udev rules to kick in. If you were like me, you enthusiastically plugged in the device before installing the yubikey-personalization package, and the udev rules were not present then.
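Reconnecting the key is the simplest fix, but reloading the udev rules should also do the trick (a sketch, not from the original setup):

$ sudo udevadm control --reload-rules
$ sudo udevadm trigger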

Configuring a PIN

Various operations will require you to enter a PIN when talking to the key. The default PIN is 123456 and the default admin PIN is 12345678. You will want to change both, otherwise someone who gets hold of your key could perform any operation without your consent. For this, you need to use:

$ gpg --card-edit
> passwd
> admin
> passwd

Be sure to remember those passwords! Of course, the key material on the Yubikey can be revoked if you lose the key, but only if you still have control of the master key, or if you have an OpenPGP revocation certificate (which you should have).
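If you don't have a revocation certificate yet, now is a good time to generate one and store it somewhere safe, offline. A minimal sketch, assuming the key is identified by its email address:

$ gpg --output revoke.asc --gen-revoke anarcat@debian.org   # answer the prompts, then move revoke.asc offline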

Configuring GPG

To do OpenPGP operations (like decryption and signatures) or SSH operations (like authenticating to a remote server), you need to talk to GPG. Yes, OpenPGP keys are RSA keys that can be used to authenticate with SSH servers; that's nothing new, and I have already been doing this with Monkeysphere for a while. The challenge now is to make GPG talk to the Yubikey.

So the next step is to see if gpg can see the key at all, as described in the Yubikey importing keys howto - you will first need to install scdaemon and pcscd (according to this howto) for gpg-agent to be able to talk to the key:

$ sudo apt install scdaemon gnupg-agent pcscd
$ gpg-connect-agent --hex "scd apdu 00 f1 00 00" /bye
ERR 100663404 Card error <SCD>

Well, that failed. At this point, touching the key types a bunch of seemingly random characters wherever my cursor is sitting - fun, but still totally useless. That was because I had failed to reconnect the key: make sure the udev rules are in place, reconnect the key, and the above should work:

$ gpg-connect-agent --hex "scd apdu 00 f1 00 00" /bye
D[0000] 01 00 10 90 00 .....
OK

This shows it is running version 1.0.10 of the OpenPGP applet (the 01 00 10 bytes above), which is not vulnerable to the infamous security issue.

(Note: I also happened to install opensc and gpgsm because of this suggestion but I am not sure they are required at all.)
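As an extra sanity check, gpg itself should now be able to display the card metadata (serial number, key slots and so on):

$ gpg --card-status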

Using SSH

To make GPG work with SSH, you need somehow to start gpg-agent with ssh support, for example with:

gpg-agent --daemon --enable-ssh-support bash

Of course, this will work better if it's started with your Xsession. Such an agent should already be started, so you just need to add the ssh emulation to its configuration file and restart your X session.

echo 'enable-ssh-support' >> ~/.gnupg/gpg-agent.conf

In Debian jessie, the ssh-agent wrapper will not start if it detects that you already have one running (for example from gpg-agent), but if that fails, you can try commenting out use-ssh-agent in /etc/X11/Xsession.options to keep it from starting up in your session. (Thanks to stackexchange for that reference.)
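If you start the agent by hand instead, the only thing that really matters is that SSH ends up pointing at gpg-agent's socket rather than the stock ssh-agent's. A minimal sketch for a shell profile, assuming the GnuPG 2.0 behaviour shipped in jessie, where the agent prints its environment settings on startup:

eval $(gpg-agent --daemon --enable-ssh-support)   # prints SSH_AUTH_SOCK=...; export SSH_AUTH_SOCK
echo $SSH_AUTH_SOCK                               # should now point at gpg-agent's ssh socket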

Here I assume you have already created an authentication subkey on your PGP key. If you haven't, I suggest trying out simply monkeysphere gen-subkey, which will generate an authentication subkey for you. You can also do it by hand by following one of the OpenPGP/SSH tutorials from Yubikey, especially the more complete one. If you are going to generate a completely new OpenPGP key, you may want to follow this simpler tutorial here.

Then you need to move your authentication subkey to the Yubikey. For this, you need to edit the key and use the keytocard command:

$ gpg2 --edit-key anarcat@debian.org
> toggle
> key 2
> keytocard
> save

Here, we use toggle to show the OpenPGP private key material. You should see a key marked with A, for Authentication. Mine was the second one, so I selected it with key 2, which put a star next to it. The keytocard command moves it to the card, and save makes sure the key is removed from the local keyring.

Obviously, backups are essential before doing this, because it's perfectly possible to lose that key material in the process, for example if you destroy or lose the Yubikey or forget the password. It's probably better to create a completely separate authentication subkey just for this purpose, but that may require reconfiguring all remote SSH hosts, and you may not want to do that.
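A minimal backup sketch to run before keytocard, assuming the key is identified by its email address (store the resulting files offline, as they contain secret key material):

$ gpg2 --export-secret-keys --armor anarcat@debian.org > secret-key-backup.asc
$ gpg2 --export-secret-subkeys --armor anarcat@debian.org > secret-subkeys-backup.asc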

Then SSH should magically talk with the GPG agent and ask you for the PIN! That's pretty much all there is to it - if it doesn't, it means that gpg-agent is not your SSH agent, and obviously things will fail...

Also, you should be able to see the key being loaded in the agent when it is:

$ ssh-add -l
2048 23:f3:be:bf:1e:da:e8:ad:4b:c7:f6:60:5e:03:c2:a6 cardno:000603647189 (RSA)
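To actually log in somewhere with that key, the matching public key still needs to end up in the server's authorized_keys file. One way to do it (user@example.com is a placeholder, and this appends every key the agent holds, so trim the output if there is more than one):

$ ssh-add -L                                                # print the public keys held by the agent
$ ssh-add -L | ssh user@example.com 'cat >> ~/.ssh/authorized_keys'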

... that's about it! I have yet to cover 2FA and OpenPGP support, but that kept me busy for a while, and I'll stop geeking around with that thing for now. It was fun, for sure, but I'm not sure it was worth it.

Using OATH

This is pretty neat: it allows you to add two-factor authentication to a lot of things. For example, PAM has such a module, which I will configure here to allow myself to log into my server from untrusted machines. While my main password will still be exposed to keyloggers there, the one-time password cannot be replayed. This is a simplified version of this OATH tutorial.

We install the PAM module with:

sudo apt install libpam-oath

Then, we can hook it into any PAM consumer, for example with sshd:

--- a/pam.d/sshd
+++ b/pam.d/sshd
@@ -1,5 +1,8 @@
 # PAM configuration for the Secure Shell service
+# for the yubikey
+auth required pam_oath.so usersfile=/etc/users.oath window=5 digits=8
+
 # Standard Un*x authentication.
 @include common-auth

We also needed to allow OTP passwords in sshd explicitly, with:

ChallengeResponseAuthentication yes
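That directive goes in /etc/ssh/sshd_config, and sshd needs a restart to pick it up:

$ sudo service ssh restart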

The PAM line above forces the user to enter a valid OATH token on the server. Unfortunately, it affects all users, regardless of whether they are present in the users.oath file. I filed bug #807990 regarding this, with a patch.

Also, it means the main password is still exposed on the client machine. You can use the sufficient keyword instead of required to work around that, but then anyone holding your Yubikey can log into your machine, which is something to keep in mind.
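For reference, that variant is the same line with only the control flag changed:

auth sufficient pam_oath.so usersfile=/etc/users.oath window=5 digits=8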

The /etc/users.oath file needs to be created with something like:

#type	username	pin	start seed
HOTP	anarcat	-	00

00 is obviously a fake and insecure seed. Generate a proper one with:

dd if=/dev/random bs=1k count=1 | sha1sum # create a random secret
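Putting the two together, a sketch that generates a seed and appends the matching users.oath line (the username is a placeholder; keep the file readable by root only, since the seeds are secrets):

SEED=$(dd if=/dev/random bs=1k count=1 2>/dev/null | sha1sum | cut -d' ' -f1)
echo "HOTP anarcat - $SEED" | sudo tee -a /etc/users.oath > /dev/null
sudo chmod 600 /etc/users.oath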

Then the shared secret needs to be added to the Yubikey:

ykpersonalize -1 -o oath-hotp -o oath-hotp8 -o append-cr -a

You simply paste the random secret you created above when prompted, and that shared secret will be saved in a Yubikey slot for future use. The next time you log into the SSH server, you will be prompted for an OATH password; just touch the button on the key and the token will be typed in for you:

$ ssh -o PubkeyAuthentication=no anarc.at
One-time password (OATH) for `anarcat':
Password:
[... logged in!]
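To double-check that the key and the server agree on the seed before locking yourself out, the oathtool utility can compute the expected tokens on the side (a sketch, reusing the $SEED variable from above; the counter starts at zero):

$ sudo apt install oathtool
$ oathtool --hotp --digits=8 --counter=0 $SEED   # should match the first token the Yubikey emits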

Final note: the centralized file approach makes it hard, if not impossible, for users to update their own secret token. It would be nice if there were a user-accessible token file, maybe ~/.oath? I filed feature request #807992 about this as well.

Catégories: External Blogs

Montréal-Python Holiday Evening

Montreal Python - ven, 12/11/2015 - 00:00

Montréal-Python invites you to celebrate the holidays with us on Monday December 14th at the Benelux 245 Sherbrooke West.

That's your opportunity to meet Python developers from all over the city.

There will be no meetup in December but be prepared to submit a talk for our next event, Montreal-Python 56, in January 2016.

If you already have an idea, if you are thinking of presenting something, just send us an email at: mtlpyteam@googlegroups.com.

In the meantime, see you next Monday at 6pm at the Benelux for our winter celebration, and don't forget your holiday cheer!

When

Monday, December 14th 2015 at 6pm

Where

Benelux 245 Sherbrooke West (map)

How

Just come!

Catégories: External Blogs

Montréal-Python 55: Wagnerian Xenosaurus

Montreal Python - ven, 11/13/2015 - 00:00

Montréal-Python just got back from PyCon Canada and we are proud to announce all the speakers for our next meetup. Thanks a ton to everyone who submitted a talk.

From all the amazing submissions we received, we've selected 5 talks. Also, we are very excited to welcome you at the new Shopify offices in downtown Montreal. This is your opportunity to meet the local Python community.

We would like to thank our generous sponsors once again for their support of our community.

We would especially like to thank Shopify for the food, as well as w.illi.am/, Outbox, Savoir-faire Linux, and iWeb for their continuous support.

Flash presentations:

Alexandre Desilets-Benoit Starting on a GUI: WXpython vs Kivy

Should I build a GUI using WXpython or Kivy? Why not both! A quick overview of the beginner's toolkit with practical examples in science, games, etc.

Main program:

Jake Sethi-Reiner How to teach Python to a ten year old

Everyone can benefit from hearing about Jake's experiences learning Python — what was helpful and what was not! The good and the bad…

Kamal Marhubi asyncio.get_event_loop() → what is that?

Last week I realized I had no idea how event loops work. A couple of days later, I was looking at the asyncio source with a friend, and I want to share some of what we found out.

In this talk, we'll find out what an event loop is and why you might want to use one, and get a look at some of the key parts of the implementation in the standard asyncio module.

François Maillet Epic NHL goal celebration hack with a hue light show and real-time machine learning

This talk shows how Python was used to trigger an epic sound and light show whenever the Montreal Canadiens hockey team scored a goal in last season's playoffs.

The author trained a machine learning model to detect in real-time that a goal was just scored by the Habs based on the live audio feed of a game and to trigger a light show using Philips hues in his living room. The system was built using various Python modules, more specifically scikit-learn, pyaudio, librosa, phue and bottle.

Federico Ariza Introduction to new Matplotlib toolbar

The latest Matplotlib release includes an optional new toolbar that allows easy modification and simple creation of tools. This talk is an introduction to the use and internals of that toolbar.

Join us for the occasion at the new Shopify Offices!

Where

Shopify Montreal Offices 490 de la Gauchetière Ouest suite 300

https://goo.gl/maps/6gc5rxRqGqS2

When:

Monday, November 23rd 2015

Schedule:
  • 6:00pm — Doors open
  • 6:30pm — Presentations start
  • 7:30pm — Break
  • 7:45pm — Second round of presentations
  • 9:00pm — End of the meeting, have a drink with us
We’d like to thank our sponsors for their continued support:
  • Shopify
  • UQÀM
  • Bénélux
  • w.illi.am/
  • Outbox
  • Savoir-faire Linux
  • Caravan
  • iWeb
Catégories: External Blogs

Call for Speakers - Montréal-Python 55: Wagnerian Xenosaurus

Montreal Python - mar, 11/03/2015 - 00:00

It is already November and we are inviting you to our 55th meetup. For the occasion, we are looking for speakers for talks of 30, 15 or 5 minutes.

It is your chance to join the biggest community of Python developers in town and show us what amazing things you've created with our favourite language.

For example, if you are using Python to deploy Docker services, doing Big Data, or simply having fun discovering new tricks that make your life easier, we want you on stage :)

Join us for the occasion at the new Shopify Offices!

To submit your talk, write us at mtlpyteam@googlegroups.com

Where

Shopify Montreal Offices 500 de la Gauchetière Ouest suite 3000

https://goo.gl/maps/6gc5rxRqGqS2

When:

Monday, November 23rd 2015

Schedule:
  • 6:00pm — Doors open
  • 6:30pm — Presentations start
  • 7:30pm — Break
  • 7:45pm — Second round of presentations
  • 9:00pm — End of the meeting, have a drink with us
We’d like to thank our sponsors for their continued support:
  • UQÀM
  • Bénélux
  • w.illi.am/
  • Outbox
  • Savoir-faire Linux
  • Caravan
  • iWeb
Catégories: External Blogs

Proprietary VDSL2 Linux routers adventures

Anarcat - mar, 10/20/2015 - 23:40

I recently bought a wireless / phone adapter / VDSL modem from my Internet Service Provider (ISP) during my last outage. It generally works fine as a VDSL modem, but unfortunately I can't get used to configuring the device through its clickety web user interface... Furthermore, I am worried that I can't back up the config in a meaningful way: if the device fails, I will probably not find the same model again, and because it runs a custom Linux distribution, the chances of the backup being restorable on another machine are basically zero. There is no way I will waste my time configuring this black box. So I started looking at running a distribution like OpenWRT on it.

(Unfortunately, I don't even dare hoping to run a decent operating system like Debian on those devices, if only because of the exotic chipsets that require all sorts of nasty hacks to run...)

The machine is a SmartRG SR630n (specs). I am linking to a third-party site, because the SmartRG site doesn't seem to know about their own product (!). I paid extra for this device to get one that would do both Wifi and VoIP, so I could replace two machines: my current Soekris net5501 router and a Cisco ATA 186 phone adapter that seems to mysteriously defy the challenges of time. (I don't remember when I got that thing, but it's at least from 2006.)

Unfortunately, it seems that SmartRG are running a custom, proprietary Linux distribution. According to my ISP, init is a complete rewrite that reads an XML config file (and indeed it's the format of the backup files) and does the configuration through a shared memory scheme (!?). According to DSL reports, the device seems to be running a Broadcom 63168 SOC (system on a chip) that is unsupported in Linux. There are some efforts to write drivers for those from scratch, but they have been basically stalled for years now.

Here are more details on the sucker:

Now, the next step would logically be to "simply" build a new image with OpenWRT and install it in place. Then I would need to figure out a way to load the binary blobs into the OpenWRT kernel and run all the ADSL utilities as well. It's basically impossible: the odds of the binary modules being compatible with another arbitrary release of the Linux kernel are near zero. Furthermore, the userland tools are most likely custom as well. And worst of all: it seems that Bell Canada deployed custom "Lucent Stinger" DSLAMs which require a custom binary firmware in the modem. This could be why the SmartRG is so bizarre in the first place. As long as the other end is non-standard, we are all screwed. And those Stinger DSLAMs will stick around for a long time, thanks to Bell.

See this other good explanation of Stinger.

Which means this machine is now yet another closed box sitting on the internet without firmware upgrades, totally handicapped. I will probably end up selling it back and getting another machine with OpenWRT support for its VDSL modem. But there are very few such machines, and for a lot of them, VDSL support is marked as "spotty" or "in progress". Some machines are supported but basically impossible to find. The Draytek modems are also interesting because, apparently, some models run OpenWRT out of the box, which is a huge benefit. This is because they use the more open Lantiq SOC - which is probably not going to support Stinger lines either.

Still, there are some very interesting projects out there... The Omnia is one I am definitely interested in right now. I really like their approach... But then, it doesn't have a VDSL chipset (I asked for one, actually), and the connectors are only mini-PCIe, which makes it impossible to plug a VDSL PCI card into it.

I could find only a single VDSL2 PCI card online, and it could be supported, but only the annex B variant is available, not annex A, and it seems the network is using annex A, according to the ADSL stats I had in 2015-05-28-anarcat-back-again. With such a card, I could use my existing Soekris net5501 router, slam a DSL card into it, and just use the SmartRG as a dumb wifi router / phone adapter. It then remains to be seen how well supported those VDSL cards are in FreeBSD (they provide Linux source code, so that's cool). And of course, all this assumes the card works with the "Stinger" mode, which is probably not the case anyway. Besides, I have VDSL2 here, not the lowly ADSL2+.

By the way, Soekris keeps pushing interesting new products out: their net6501, with 2 extra Gig-E cards, could be a really interesting high-end switch, all working with free software tools.

A friend has a SmartRG 505n modem, which looks quite similar, except without the ATA connectors. And those modems are the ones that Teksavvy recommends ("You may use a Cellpipe 7130 or Sagemcom F@ST 2864 in lieu of our SmartRG SR505N for our DSL 15/10, DSL 25 or DSL 50 services."). Furthermore, Teksavvy provides a firmware update for the 505n - again, no idea if it works with the 630n. Of course, the 505n doesn't run OpenWRT either.

So, long story short, I got screwed by my ISP again: I thought I would get a pretty hackable device, one "running Linux", as my ISP said over the phone. Instead, I got weeks of downtime and no refund, and while I got a better line (more reliable, higher bandwidth), my costs doubled. And I have yet another computing device to worry about: instead of simplifying and reducing waste, I actually just added crap on top of my already cluttered desk.

Next time, maybe I'll tell you about how my ISP overbilled me, broke IPv6 and drops large packets to the floor. I haven't had a response from them in months now... hopefully they will either answer and fix all of this (doubtful) or I'll switch to some other provider, probably Teksavvy.

Many thanks to the numerous people in the DSL reports Teksavvy forum that have amazing expertise. They are even building a map of Bell COs... Thanks also to Taggart for helping me figure out how the firmware images work and encouraging me to figure out how my machine works overall.

Note: all the information shared here is presented in the spirit of the fair use conditions of copyright law.

Catégories: External Blogs

Epic Lameness

Eric Dorland - lun, 09/01/2008 - 17:26
SF.net now supports OpenID. Hooray! I'd like to make a comment on a thread about the RTL8187se chip I've got in my new MSI Wind. So I go to sign in with OpenID and instead of signing me in it prompts me to create an account with a name, username and password for the account. Huh? I just want to post to their forum, I don't want to create an account (at least not explicitly, if they want to do it behind the scenes fine). Isn't the point of OpenID to not have to create accounts and particularly not have to create new usernames and passwords to access websites? I'm not impressed.
Catégories: External Blogs

Sentiment Sharing

Eric Dorland - lun, 08/11/2008 - 23:28
Biella, I am from there and I do agree. If I was still living there I would try to form a team and make a bid. Simon even made noises about organizing a bid at DebConfs past. I wish he would :)

But a DebConf in New York would be almost as good.
Catégories: External Blogs