
Anarcat

Why make it simple when you can make it complicated.

Truly completing the Quiet Revolution

Fri, 05/17/2019 - 07:32

"Completing the work of the Quiet Revolution", to quote this morning's front page of Le Devoir, should start with repairing the damage done by the Catholic Church in Quebec. The horrible crimes committed by priests against children remain unpunished. Here, the state leaves it to the Church to handle these criminal matters. Meanwhile, bishops lecture us on the sexual or religious education of children by taking public positions on the school reform. The dock is the only place where priests should be allowed to speak about sexuality and morality.

Our history is irredeemably tied to colonization, including the destruction of a diversity of Indigenous peoples, which continues to this day. We often imagine some vague crime of the past, but the reality is that the genocide continued until the closure of the residential schools at the end of the last century. The Quiet Revolution has certainly not finished its homework, but not in the sense that Guy Rocher and the defenders of Bill 21 mean.

I was educated by the Commission des Écoles Catholiques de Montréal (CECM). During my time in that institution, I took catechism classes "meant to help children grow [...] in their understanding of the Christian message" (Wikipédia). This was not the era of the Grande Noirceur but the 1980s, when we still had the "privilege" of entering the church as part of the standard elementary school curriculum. Naturally, "communion with God" was reserved for the baptized, an elite group I was not part of. I therefore thought it important to get baptized at that young age to try to correct this parental faux pas, hoping to reach enlightenment in the darkness of the confessional.

Having thus become a convinced atheist, I am saddened to see my fellow citizens tear each other apart over religious questions. Completing the real Revolution would mean converting churches and presbyteries into community centres instead of condos, bringing priests to justice instead of putting them on the radio, giving back to the peoples we robbed, and starting to repair the mistakes of the past.

As Borduas said, the "refus global" must be met with "entire responsibility". Recognize the faults and errors of our own culture, and start repairing them, instead of dwelling on the possible vices of a culture we do not really know. While the far right is the source of the majority of terrorist attacks in North America, why worry about the veils of our schoolteachers? "Make way for necessities!" The climate emergency and the rise of fascism should be the important subjects, not these questions of clothing.

This article was refused by Le Devoir.

Categories: External Blogs

On free speech at Puri.sm and Mastodon

Mon, 05/13/2019 - 10:22

I have been cautiously enthusiastic about Puri.sm. They have done interesting work liberating their own hardware from the clutches of Intel backdoors and are enthusiastically creating a new kind of phone. Recently, they figured they would also become a new hosting provider, but that is not going as well as one might hope. It seems they have decided to rewrite the standard Community Covenant code of conduct and strip it down to create an absolutist "free speech zone".

This is a serious mistake that will create an escape hatch from mainstream social media for neo-nazis, trolls, masculinists and other scum1 of the internet. Purism should not be part of this, and if they do not revert this stance, I will discourage anyone from ever doing business with them again.

An introduction to the Purism projects

In a private mailing list, I summarized the situation of the Librem projects as follows:

Hi all,

Do people on this list have any opinion about https://librem.one ?

Overall, I think it's a good idea.

The devil is in the details, however. There was some controversy over how Purism rebranded and forked existing free software projects without giving clear credit in the original announcements. They have since responded with something I find somewhat satisfactory.

I'm a little concerned about Purism taking on too much: they started by making laptops and ventured into forking Debian to have their own distribution - a common pattern among hardware manufacturers supporting Debian; the same happened with System76. But now they are building a phone and, not content with Android, they are building their own OS, based on Debian, and I worry it will not deliver and will disappoint a lot of people.

This is another venture that, coming from a hardware manufacturer, worries me somewhat. Launching an email, chat, social networking and VPN provider all at once is a very ambitious goal. Members of our communities have spent years deploying those services, and it's a little frustrating to see Purism just barge in and offer theirs, for a fee on top of that.

But I will be the first to recognize that running services comes at a cost: hardware, cooling, real-estate and especially labor are not free. So I think it's fair they charge a price, and a fair one at that too.

So I wish them good luck and I am curious to see where it will go. At least they picked federated protocols which interoperate with our stuff: that is good. I'm worried they will undercut other community providers like ours, but I guess the more the merrier...

The Purism code of conduct tolerates Nazis

Now something else came up, and that's the Librem.one code of conduct, which more or less says "Nazis are okay, as long as they don't harass people", a position I have come to fundamentally disagree with.

This post is what brought the problem to my attention. It includes screenshots2 from a conversation with Kyle Rankin, the Purism Chief Security Officer, where he claims that Purism doesn't need to list "bad behaviors" in their code of conduct because "harassment" suffices. He also argues that control over content isn't required because they don't have a "shared Mastodon3 timeline".

Concretely, their code of conduct states that:

This Code of Conduct is adapted from the Community Covenant. The only change made was to remove the list of examples in the interest of readability.

This seems innocuous enough, but the changes go beyond simply "readability". This is how the Covenant code of conduct actually begins:

Our pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

In comparison, this is how the Purism code begins:

Our goal

This community is dedicated to providing a harassment-free experience for everyone. We do not tolerate harassment of participants in any form.

By removing the specific list of unacceptable behaviors, they are implicitly allowing them. Purism seems to pivot around "legally protected free speech" and argues that "harassment is not legally protected", which is why it's disallowed in their code of conduct. Their argument is that they shouldn't decide what's allowed on their own server, and instead they seem to delegate this to the US constitution and law enforcement. Indeed, their FAQ says:

How do I report illegal content?

Any illegal content or illegal acts should be reported to the appropriate authorities who are equipped to handle it.

So it's not just a matter of "readability"; they also don't actually want to "restrict free speech". This seems to me, at best, a cop-out that leaves victims totally on their own and, at worst, the creation of a "safe space" for neo-nazis escaping the narrowing controls imposed by larger platforms like Twitter, Facebook and Reddit. This is the same position "big tech" (as Purism calls its competitors) takes: trying really hard to remove themselves from the editorial process and claiming they are not responsible for content.

In practice, this is a little white lie: Facebook, Twitter and all those platforms employ armies of moderators that constantly police their networks.4 The question, therefore, is what each platform specifically allows and refuses. Pornography, for example, is definitely "legally protected free speech" in the USA, yet it's forbidden on Facebook. Some large providers have also started to crack down on neo-nazis, with Facebook, Youtube, Apple, and Spotify banning Alex Jones from their networks. Twitter seems slower to follow, and some claim that's because it would risk banning Republicans as well, since the two confuse artificial intelligence (and, arguably, human intelligence too).

Free speech absolutism and its impacts

The first impact of this is that some Mastodon servers are blocking the Purism instance altogether. This makes Purism's claims of federation somewhat dishonest:

Yes, you can follow and fully interact with people inside or outside the librem.one domain. (not locked-in to one technology company)

Of course, that's the nature of federation, but I am not aware of another such company (especially one that claims to have a social purpose) being blocked from the federation right off the bat.

The second impact, of course, is that free speech fanatics, the alt-right, and neo-nazis will soon invade that space. The hordes of trolls, tired of getting banned from Twitter, will be happy to find a safe haven on Librem.one, especially since there will be a juicy community of unsuspecting "social justice warriors" like me to troll and brutalize.

There's a long history of tolerating hate speech in the USA, based on the US constitution, at least on the part of state institutions. As a reminder, the first amendment says:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Free speech absolutists like to read this while disregarding the words "Congress", "law" and "Government", and interpret it as applying to the entire fabric of society. But that's not how free speech works, even in the US. The first amendment concerns Congress and the laws it passes. There is no law in the US that forbids a private company from moderating content on its own platform. It's the editorial right of any publisher (because that's what you become when you start your own Twitter) to censor any speech it chooses. This is also how XKCD put it:

Public Service Announcement: The Right to Free Speech means the government can't arrest you for what you say.

It doesn't mean that anyone else has to listen to your bullshit, or host you while you share it.

The 1st Amendment doesn't shield you from criticism or consequences.

If you're yelled at, boycotted, have your show canceled, or get banned from an Internet community, your free speech rights aren't being violated.

It's just that the people listening think you're an asshole.

And they're showing you the door.

For the record, I used to be a free speech absolutist myself. But I have since reviewed my position: I think free speech, like any human right, is not absolute, and should take political and social dynamics into account. Free speech, right now, is not in danger, and certainly not right-wing fear-mongering, racism and sexism. Hate speech is on the rise, and I find it particularly offensive to hear the argument that it is "legally protected", because that argument is false and dangerous.

Hate speech was the prelude to the rise of fascism in the early 20th century. Fascists support free speech as long as it serves their purpose, but they are the first to destroy it once they are in power. Not only figuratively, through censorship, but literally, by harassing, beating up, and murdering people. By allowing hate speech, we pave the way for those people to come out of the closet and take ever bolder action.

We can already see this happening in the US and elsewhere:

  • In 2015, a white supremacist walked into a church in South Carolina and murdered nine African-Americans "in the hope of igniting a race war".

  • In 2017, Heather Heyer was killed at a large fascist rally in Virginia. The perpetrator had previously been posting neo-nazi memes and symbols on Facebook.

  • In 2018, another neo-nazi walked into a synagogue in Pittsburgh and murdered eleven people. He had previously posted anti-semitic comments on the far-right Gab social network.

  • And this year, in 2019, another neo-nazi walked into a Mosque and murdered 51 people in New Zealand. He streamed everything on Facebook Live and he distributed his manifesto on Twitter and 8chan.

This is real. This is now. This is what Purism enables by tolerating hate speech. And it's not right. Free speech should never be an enabler for such horrors. We don't tolerate it for ISIL and jihadist terrorism, so why should we tolerate it for white supremacy groups?

First they came for the socialists, and I did not speak out — because I was not a socialist.

Then they came for the trade unionists, and I did not speak out — because I was not a trade unionist.

Then they came for the Jews, and I did not speak out — because I was not a Jew.

Then they came for me — and there was no one left to speak for me.

Martin Niemöller

For the sake of transparency, I should state that I ordered a laptop from Purism about a month ago and the machine was "dead on arrival" when it arrived last week. I've also been having trouble getting the machine returned, although it seems this might resolve itself today.

  1. scum, the topmost liquid layer of a cesspool or septic tank, a reprehensible person or persons. Nazi Scum. ↩

  2. The screenshots do not display correctly in the thread, but here are Internet Archive links: 1 2. ↩

  3. For context, Mastodon is a Twitter/Tweetdeck clone that implements a standard federated protocol and can interoperate with other instances like GNU Social. It's presumably Twitter done right, like email. In practice, you'll see, there are tricky edge cases, naturally. ↩

  4. For a good perspective on that gruesome work, I recommend this article on The Verge and there are also two documentaries I'm aware of that cover the topic as well, The Cleaners and The Moderators. ↩

Categories: External Blogs

Securing registration email

Wed, 03/20/2019 - 10:28

I've been running my own email server basically forever. Recently, I've been thinking about possible attack vectors against my personal email. There's of course a lot of private information in that mailbox, and anyone who compromises the account would see all of it. That's somewhat worrisome, but there are possibly more serious problems to worry about.

TL;DR: if you can, create a second email address to register on websites and use stronger protections on that account than on your regular mail.

Hacking accounts through email

Strangely, what keeps me up at night is more the damage an attacker could do to other accounts tied to that email address. Because basically every online service is backed by an email address, someone who controls my email address can do a password reset on every account I have online. In fact, some authentication systems have given up on passwords altogether and use the email system itself for authentication, essentially turning the "password reset" feature into the authentication mechanism.
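To make that point concrete, here is a minimal, hypothetical sketch (not any real service's code) of "password reset as login": the service emails a single-use token, so possession of the mailbox is the credential.

```python
# Hypothetical sketch of email-token ("magic link") authentication.
import secrets
import time

TOKEN_TTL = 15 * 60  # tokens expire after 15 minutes
tokens = {}          # token -> (email, expiry timestamp)

def start_login(email):
    """Create a single-use login token that would be emailed to the user."""
    token = secrets.token_urlsafe(32)
    tokens[token] = (email, time.time() + TOKEN_TTL)
    return token     # in reality sent by email, never shown on the site

def finish_login(token):
    """Whoever presents a valid, unexpired token is logged in as that address."""
    email, expiry = tokens.pop(token, (None, 0))
    if email is None or time.time() > expiry:
        return None
    return email

token = start_login("anarcat@example.com")
assert finish_login(token) == "anarcat@example.com"
assert finish_login(token) is None  # tokens are single-use
```

Anyone who can read the email carrying the token can complete the login, which is exactly why the mailbox itself is worth hardening.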

Some services have protections against this: for example, GitHub requires a 2FA token for certain changes, which the attacker hopefully wouldn't have (although phishing attacks have been getting better at bypassing those protections). Other services will warn you about the password change, which might be useful, except the warning is usually sent... to the hacked email address, which doesn't help at all.

The solution: a separate mailbox

I had been using an extension (anarcat+register@example.com) to store registration mail in a separate folder for a while already. For one, this allows me to bypass greylisting on that address: greylisting is really annoying when you register on a service or do a password reset. The extension also allows me to sort those annoying emails into a separate folder automatically with a simple Sieve rule.
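For illustration, such a Sieve rule could be as simple as this (the folder name and address pattern are assumptions, not my actual rule):

```sieve
require ["fileinto"];

# file anything sent to the "+register" extension into its own folder
if address :matches "to" "anarcat+register@*" {
    fileinto "register";
}
```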

More recently, I have been forced to use a completely different alias (register@example.com) on some services that dislike plus signs (+) in email addresses, even though they are perfectly valid. That got me thinking about the security problem again: if I have a different alias, why not make it a completely separate account and harden that account against intrusion? With a separate account, I could enforce things like SSH-only access or 2FA that would be inconvenient on my main email address when I travel, because I sometimes log into webmail, for example. Since I don't frequently need access to registration mail, it seemed like a good tradeoff.

So I created a second account, with a locked password and SSH-only authentication. That way the only way someone can compromise my "registration email" is by hacking my physical machine or the server directly, not by just brute-forcing a password.

Now of course I need to figure out which sites I registered on with a "non-registration" email (anarcat@example.com): before I thought of using the register@ alias, I sometimes used my normal address instead. So I'll have to track those down and change them. But it seems I already blocked a large attack surface with a very simple change, and that feels quite satisfying.

Implementation details

Using syncmaildir (SMD) to sync my email, the change was fairly simple. First I need to create a second SMD profile:

if [ $(hostname) = "marcos" ]; then
    exit 1
fi
SERVERNAME=smd-server-register
CLIENTNAME=$(hostname)-register
MAILBOX_LOCAL=Maildir/.register/
MAILBOX_REMOTE=Maildir
TRANSLATOR_LR="smd-translate -m move -d LR register"
TRANSLATOR_RL="smd-translate -m move -d RL register"
EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"

Very similar to the normal profile, except mails get stored in the already existing Maildir/.register/ and a different SSH profile and different translation rules are used. The new SSH profile is basically identical to the previous one:

# wrapper for smd
Host smd-server-register
    Hostname imap.anarc.at
    BatchMode yes
    Compression yes
    User register
    IdentitiesOnly yes
    IdentityFile ~/.ssh/id_ed25519_smd

Then we need to ignore the register folder in the normal configuration:

diff --git a/.smd/config.default b/.smd/config.default
index c42e3d0..74a8b54 100644
--- a/.smd/config.default
+++ b/.smd/config.default
@@ -59,7 +59,7 @@ TRANSLATOR_RL="smd-translate -m move -d RL default"
 # EXCLUDE_LOCAL="Mail/spam Mail/trash"
 # EXCLUDE_REMOTE="OtherMail/with%20spaces"
 #EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
-EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
+EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/* Maildir/.register/*"
 #EXCLUDE_LOCAL="$MAILBOX_LOCAL/.notmuch/hooks/* $MAILBOX_LOCAL/.notmuch/xapian/*"
 #EXCLUDE_REMOTE="$MAILBOX_REMOTE/.notmuch/hooks/* $MAILBOX_REMOTE/.notmuch/xapian/*"
 #EXCLUDE_REMOTE="Maildir/Koumbit Maildir/Koumbit* Maildir/Koumbit/* Maildir/Koumbit.INBOX.Archives/ Maildir/Koumbit.INBOX.Archives.2012/ Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"

And finally we add the new profile to the systemd services:

diff --git a/.config/systemd/user/smd-pull.service b/.config/systemd/user/smd-pull.service
index a841306..498391d 100644
--- a/.config/systemd/user/smd-pull.service
+++ b/.config/systemd/user/smd-pull.service
@@ -8,6 +8,7 @@ ConditionHost=!marcos
 Type=oneshot
 # --show-tags gives email counts
 ExecStart=/usr/bin/smd-pull --show-tags
+ExecStart=/usr/bin/smd-pull --show-tags register
 
 [Install]
 WantedBy=multi-user.target
diff --git a/.config/systemd/user/smd-push.service b/.config/systemd/user/smd-push.service
index 10d53c7..caa588e 100644
--- a/.config/systemd/user/smd-push.service
+++ b/.config/systemd/user/smd-push.service
@@ -8,6 +8,7 @@ ConditionHost=!marcos
 Type=oneshot
 # --show-tags gives email counts
 ExecStart=/usr/bin/smd-push --show-tags
+ExecStart=/usr/bin/smd-push --show-tags register
 
 [Install]
 WantedBy=multi-user.target

That's about it on the client side. On the server, the user is created with a locked password and the mailbox is moved over:

adduser --disabled-password register
mv ~anarcat/Maildir/.register/ ~register/Maildir/
chown -R register:register Maildir/

The SSH authentication key is added to .ssh/authorized_keys, and the alias is reversed:

--- a/aliases
+++ b/aliases
@@ -24,7 +24,7 @@ spamtrap: anarcat
 spampd: anarcat
 junk: anarcat
 devnull: /dev/null
-register: anarcat+register
+anarcat+register: register
 
 # various sandboxes
 anarcat-irc: anarcat

... and the email is also added to /etc/postgrey/whitelist_recipients.

That's it: I now have a hardened email service! Of course there are other ways to harden an email address. On-disk encryption comes to mind, but from what I understand that only works with password-based authentication, which is something I want to avoid in order to rule out brute-force attacks.

Your advice and comments are, of course, very welcome, as usual.

Categories: External Blogs

February 2019 report: LTS, HTML mail, new phone and new job

Tue, 03/05/2019 - 21:04
Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

This is my final LTS report. I have found other work and will unfortunately not be able to continue working on the LTS project in the foreseeable future. I will continue my volunteer work on Debian and might even contribute to LTS in my new job, though not directly as part of the LTS team.

It is too bad, because that team is doing essential work and needs more help. Security is, at best, lacking everywhere, and I do not believe the current approach of "minimum viable product, move fast, break things" is sustainable. The people working on Linux distributions, including the LTS team, are doing the hard, dirty work of maintaining free software over the long term. It's thankless, but I believe it's one of the most important jobs out there right now. And I suspect there will only be more of it as time goes by.

Legacy systems are not going anywhere: they are the next generation's "y2k bug": old, forgotten software no one understands or cares to work with, which suddenly breaks or turns out to have a critical vulnerability that needs patching. Moving faster will not fix this problem: it only piles up more crap to deal with for real systems running in production.

The survival of humans and other species on planet Earth in my view can only be guaranteed via a timely transition towards a stationary state, a world economy without growth.

-- Peter Custers

Website work

I again worked on the website this month, doing one more mass import (MR 53) which was finally merged by Holger Levsen, after I fixed an issue with PGP signatures showing up on the website.

I also polished the misnamed "audit" script that checks for missing announcements on the website, and published it as MR 1 on the "cron" project of the webmaster team. It's still a work in progress because it is too noisy: there are a few DLAs missing already, and we haven't published the latest DLAs on the website.

The remaining work here is to automate the import of new announcements on the website (bug #859123). I've done what is hopefully the last mass import and updated the workflow in the wiki.

Finally, I have also done a bit of cleanup on the website that was necessary after the mass import which also required rewrite rules at the server level. Hopefully, I will have this fairly well wrapped up for whoever picks this up next.

Python GPG concerns

Following a new vulnerability (CVE-2019-6690) disclosed in the python-gnupg library, I have expressed concerns about the security reliability of the project in future updates, referring to wider issues identified by isis lovecruft in this post.

I suggested we should simply drop security support for the project, since it didn't have many reverse dependencies. But it seems that wasn't practical; the response was that it was actually possible to keep maintaining it, and such an update was issued for jessie.

Golang concerns

Similarly, I have expressed more concerns about the maintenance of Golang packages following the disclosure of a vulnerability (CVE-2019-6486) regarding elliptic curve implementations in the core Golang libraries. An update (DLA-1664-1) was issued for the core, but because Golang is statically compiled, I was worried the update wasn't sufficient: we also needed to upload updates for any build dependency using the affected code as well.

Holger asked the golang team for help and I also asked on IRC. Apparently, all the non-dev packages (with some exceptions) were binNMU'd in stretch, but the process needs to be clarified.

I also wondered if this maintenance problem could be resolved in the long term by switching to dynamic linking. Ubuntu tried to switch to dynamic linking but abandoned the effort, so it seems Golang will be quite difficult to maintain for security updates in the foreseeable future.

Libarchive updates

I have reproduced the problem described in CVE-2019-1000020 and CVE-2019-1000019 in jessie. I published a fix as DLA-1668-1. I had to build the update without sbuild's overlay system (in a tar chroot) otherwise the cpio tests fail.

Netmask updates

This one was minimal: a patch was sent by the maintainer so I only wrote and sent DLA 1665-1. Interestingly, I didn't have access to the .changes file which made writing the DLA a little harder, as my workflow normally involves calling gen-DLA --save with the .changes file which autopopulates a template. I learned that .changes files are normally archived on coccia.debian.org (specifically in /srv/ftp-master.debian.org/queue/done/), but not in the case of security uploads.

Libreoffice

I once again tried to tackle an issue (CVE-2018-16858) with Libreoffice. The last time I tried to work on LibreOffice, the test suite was failing and the linker was crashing after hours of compilation and I never got anywhere. But that was wheezy, so I figured jessie might be in better shape.

I quickly got into trouble with sbuild: I ran out of space on both / and /home, so I moved all my photos to an external drive (!). The patch ended up being trivial. I could reproduce the bug with a simple proof of concept, but could not quite get code execution going. It might just be that I haven't found the right Python module to load, so I assumed the code was vulnerable and, given that the patch was simple, decided it was worth doing an update.

The build ended up taking close to nine hours and 35GiB of disk space. I published DLA-1669-1 as a result.

I also opened a bug report against dput-ng because it still doesn't warn users about uploads to security-master the same way dput does.

Enigmail

Enigmail was finally taken off the official support list in jessie when the debian-security-support proposed update was approved.

Other free software work

Since I was going to start that new job in March, I figured I would take some time off before work started. I therefore mostly tried to wrap things up and didn't do as much volunteer work as I usually do. I'm also unsure I'll be able to do as much volunteer work now that I'm starting a full-time job, so this might be my last report for a while.

Debian work before the freeze

I uploaded new versions of bitlbee-mastodon (1.4.1-1), sopel (6.6.3-1 and 6.6.3-2) and dateparser (0.7.1-1). I've also sponsored new uploads of smokeping and tuptime.

I also uploaded convertdate to NEW as it was a (missing but optional) dependency of dateparser. Unfortunately, it didn't make it through NEW in time for the freeze so dateparser won't be totally fixed in buster.

I also made two new releases of feed2exec, my programmable feed reader, to fix date parsing on broken feeds, add a JSON output plugin, and fix an issue with the ikiwiki_recentchanges plugin.

New phone

I got fed up and bought a new phone. Even though I have almost a dozen old phones in a plastic box here, most of them are basically unusable:

  • two are just "feature phones" - I need OSMand
  • two are Nokia n900 phones that can't read a SIM card
  • at least two have broken screens
  • one is "declared stolen or lost" (same, right?) which means it can't be used as a phone at all, which is totally stupid if you ask me

I managed to salvage the old htc-one-s I had. It's still a little buggy (it crashes randomly) and a little slow, but generally works and I really like how small it is. It's going to be hard to go back to a bigger format.

I bought a Fairphone 2 (FP2). It was pricey, and it's crazy because they might come out with the FP3 this year, but I was sick of trying to cross-reference specification tables and LineageOS download pages. The FP2 just works with an "open" Android version (and LOS) out of the box. But more importantly, the Fairphone project tries to avoid major human rights issues in the sourcing of components and the production of the device, something that's way too often overlooked. Many minerals involved in the fabrication of modern electronics come from conflict zones or involve horrible (child) labour conditions. Fixing those issues should be our priority, maybe even before hardware or software freedom.

Even without completely addressing those issues, the fact that it scored a perfect 10 on iFixit's repairability scale is amazing. Unfortunately, it seems parts are difficult to find, even in Europe. The phone doesn't ship to the Americas from the original website, which makes it difficult to buy, but some shops do ship to Canada, like Ecosto.

So we'll see how that goes. I will, as usual, document my experiences in the wiki, in fairphone2.

Mailing list experiments

As part of my calendar project, I figured I would keep my "readers" informed of my progress this year and send them an update every month or so. As I said last week, I was inspired by this post: I can't stop thinking about it.

So I kept working on Mailman 3. Unfortunately, only one of my proposed patches was merged. Many of them are "work in progress" (WIP) of course, but I was hoping to get more feedback on the proposals, especially the no-notification workflow. Such a workflow delegates the sending of confirmation mails to the caller, which lets them send more complex email than the straitjacket the templating system forces you into: you could then control every part of the email, not just the body and subject, but also content type, attachments and so on. That didn't seem to get traction: some informal comments I received said this wasn't the right fix for the invite problem, but then no one is working on fixing the invite problem either, so I wonder where that is going to go.

Undeterred, I provided a French translation, which allowed me to send an actual, fully-translated invite. This was a lot of work for not much benefit, so that was frustrating as well.

In the end, I ended up with just a Bcc list that I keep as an alias in my ~/.mutt/aliases, which notmuch reads thanks to my notmuch-address hack. In the email, I offered my readers an "opt-out": if they don't write back, they're on the mailing list. It's spammy, but the readers are not just the general public: they are people I know well, who are close to me, and to whom I have given a friggin' calendar (at least most of them).
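For illustration, that setup amounts to a single alias (hypothetical alias name and addresses, not my real list) that mutt expands:

```muttrc
# ~/.mutt/aliases: one alias expanding to the whole readership
alias calendar-readers alice@example.com, bob@example.com, carol@example.com
```

Putting `calendar-readers` in the Bcc field of a single message then does the whole mailing without exposing anyone's address.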

If I find the energy, I'll finish setting up Mailman 3 just the way I like and use it to do the next mailing. But I can't help but think the mailing list is overkill for this now: the mailing with a Bcc list worked without a flaw, as far as I could tell, and it means minimal maintenance. So I'm not sure I'll battle Mailman 3 much longer, which is a shame because I happen to believe it's probably our best bet to keep mailing lists (and therefore probably email itself) alive in the future.

Emailing HTML in Notmuch

I actually had to write content for that email too - just messing around with the mailing list server is one thing, but the whole point is to actually say something. Or, in my case, show something, which is difficult using plain text. So I went crazy and tried to send HTML mail with notmuch. The thread is interesting: I encourage you to read it in full, but I'll quote the first post here for posterity:

I know, I know, HTML email is "evil"[1]. I mostly never ever use it, in fact, I don't remember the last time I consciously sent HTML. Maybe I did so back when I was using Netscape Communicator[2][3], but whatever.

The reason I thought about this again is I have been doing more photography these days and, well, being allergic to social media, I have very few ways of sharing those photographs with family and friends. I have tried creating a gallery website with an RSS feed but I'm sure no one here will be surprised that the uptake is minimal, if not non-existent. People expect to have stuff pushed to them, like Instagram, Facebook, Twitter or Spam does.

So I thought[4] of Email again: the original social network! I figured I would just make a mailing list, and write to my people once in a while to let them know about my new pictures. And while writing the first email, I realized it was pretty silly not to include images, or at least links to images, in the email.

I'm sure you can see where this is going. A link in the email: who's going to click that. Who clicks now anyways, with all the tapping[5] going on. So the answer comes naturally: just write frigging HTML email. Don't be a rms^Wreligious zealot and do the right thing, what works basically everywhere[6] (even notmuch!).

So I started Thunderbird and thought "what the heck am I doing! there must be a better way!" After searching for "message mode emacs html email ktxbye", I found some people already thought about this problem and came up with somewhat elegant solutions[7]. I built on that by trying to come up with a pure elisp solution, which goes a little like this:

(defun anarcat/notmuch-html-convert ()
  "Create an HTML part from a Markdown body.

This will not work if there are *any* attachments of any form,
those should be added after."
  (interactive)
  (save-excursion
    ;; fetch subject, it will be the HTML version title
    (message "building HTML attachment...")
    (message-goto-subject)
    (beginning-of-line)
    (search-forward ":")
    (forward-char)
    (let ((beg (point)))
      (end-of-line)
      (setq subject (buffer-substring beg (point))))
    (message "determined title is %s..." subject)
    ;; wrap signature in a <pre>
    (message-goto-signature)
    (forward-line -1)
    ;; save and delete signature which requires special formatting
    (setq signature (buffer-substring (point) (point-max)))
    (delete-region (point) (point-max))
    ;; set region to top of body then end of buffer
    (end-of-buffer)
    (message-goto-body)
    (narrow-to-region (point) (mark))
    ;; run markdown on region
    (setq output-buffer-name "*notmuch-markdown-output*")
    (message "running markdown...")
    (markdown output-buffer-name)
    (widen)
    (save-excursion
      (set-buffer output-buffer-name)
      (end-of-buffer)
      ;; add signature formatted as <pre>
      (insert "\n<pre>")
      (insert signature)
      (insert "</pre>\n")
      (markdown-add-xhtml-header-and-footer subject))
    (message "done the dirty work, re-inserting everything...")
    ;; restore signature
    (message-goto-signature)
    (insert signature)
    (message-goto-body)
    (insert "<#multipart type=alternative>\n")
    (end-of-buffer)
    (insert "<#part type=text/html>\n")
    (insert-buffer output-buffer-name)
    (end-of-buffer)
    (insert "<#/multipart>\n")
    (let ((f (buffer-size (get-buffer output-buffer-name))))
      (message "appended HTML part (%s bytes)" f))))

For those who can't read elisp for breakfast, this does the following:

  1. parse the current email body as markdown, in a separate buffer
  2. make the current email multipart/alternative
  3. add an HTML part
  4. inject the HTML version in the HTML part

There's some nasty business with formatting the signature correctly by wrapping it in a <pre> that's going on there - I took that from Thunderbird as well.

(For those who do read elisp for breakfast, improvements and comments on the coding style are very welcome.)

The idea is that you write your email normally, but in markdown. When you're done writing that email, you launch the above function (carefully bound to "M-x anarcat/notmuch-html-convert" here) which takes that email and adds an equivalent HTML part to it. You can then even tweak that part to screw around with the raw HTML if you feel depressed or nostalgic.

What do people think? Am I insane? Could this work? Does this belong in notmuch? Or maybe in the tips section? Should I seek therapy? Do you hate markdown? Expand on the relationship between your parents and text editors.

Thanks for any feedback,

A.

PS: the above, naturally, could be adapted to parse the body as RST, asciidoc, texinfo, latex or whatever insanity you think would be more appropriate, I don't care. The idea is the same.

PPS: I remember reading about someone wanting to declare a text/markdown mimetype for email, and remembering it was all backwards and weird and I can't find the reference anymore. If some lazyweb magic person could forward the link to me I would be grateful.

[1]: one of so many: https://www.georgedillon.com/web/html_email_is_evil_still.shtml
[2]: https://en.wikipedia.org/wiki/Netscape_Communicator
[3]: yes my age is showing
[4]: to be fair, this article encouraged me quite a bit: https://blog.chaddickerson.com/2019/01/09/replacing-facebook/
[5]: not the bass guitar one, unfortunately
[6]: https://en.wikipedia.org/wiki/HTML_email#Adoption
[7]: https://trey-jackson.blogspot.com/2008/01/emacs-tip-8-markdown.html

I edited the original message to include the latest version of the script, which (unfortunately) lives in my private dotfiles git repository.

In the end, all that effort didn't quite do it: the image links would break in webmail when seen from Chromium. This is apparently intended behaviour: the problem is that I am embedding the username and password of the gallery in the HTTP URL, using in-URL credentials, which are apparently "deprecated" even though no standard actually says so. So I ended up generating a full HTML version of the frigging email, complete with a link on top saying "if this email doesn't display properly, click the following".

Now I remember why I dislike HTML email. Yet my readers were quite happy to see the images directly and I suspect most of them wouldn't click through on individual images to see each photo, so I think it's worth the trouble.

And now that I think about it, it feels silly not to post those updates on this blog now. But the gallery is private right now, and I think I'd like to keep it that way: it gives me more freedom to share more intimate pictures with people.

Using dtach instead of screen for my IRC bouncer

I have been using irssi in a screen session for a long time now. Recently I started thinking about simplifying that setup by setting up password-less authentication to the session, but also running it as a separate user. This was especially important to keep possible compromises of the IRC client limited to a sandboxed account instead of my more powerful user.

To further limit the impact of a possible compromise, I also started using dtach instead of GNU screen to handle my irssi session: irssi can still run arbitrary code, but at least you can't just open a new window in screen and need to think a little more about how to do it.
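For the record, here is a rough sketch of what such a setup can look like. The socket path and the sandboxed "irc" user below are illustrative assumptions, not my actual configuration:

```shell
# hypothetical socket path for the sandboxed "irc" user
socket=/home/irc/.irssi.sock

# create the session detached (-n) as the sandboxed user...
start_cmd="sudo -u irc dtach -n $socket irssi"
# ...and attach to it later (-a); ^\ detaches again by default
attach_cmd="sudo -u irc dtach -a $socket"

echo "$start_cmd"
echo "$attach_cmd"
```

Unlike screen, dtach manages exactly one socket and one program: there is no command key to open a new window, which is the whole point here.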

Eventually, I could write a systemd profile to keep it from forking at all, although I'm not sure irssi could still work in such an environment. The change broke the "auto-away script", which relies on screen's peculiar handling of the socket to signal whether the session is attached, so I filed that as a feature request.

Other work
Categories: External Blogs

New large hard drive and 8-year old server anniversary

Mon, 02/25/2019 - 12:59

It's the "installation birthday" of my home server on February 22nd:

/etc/cron.daily/installation-birthday:

                      0   0
                      |   |
                  ____|___|____
               0  |~ ~ ~ ~ ~ ~|  0
               |  |           |  |
            ___|__|___________|__|___
           |/\/\/\/\/\/\/\/\/\/\/\/|
       0   |       H a p p y       |   0
       |   |/\/\/\/\/\/\/\/\/\/\/\/|   |
    ___|___|_______________________|___|__
   |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
   |                                   |
   |       B i r t h d a y! ! !        |
   | ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ |
   |___________________________________|

Congratulations, your Debian system "marcos"
was installed 8 year(s) ago today!

Best wishes,

Your local system administrator

I can't believe this machine I built 8 years ago has been running continuously all that time. That is far, far beyond the usual 3 or 5 year depreciation period set in most organizations. It goes to show how some hardware can be reliable in the long term.

I bought yet another new drive to deal with my ever-increasing disk use: a Seagate IronWolf 8TB ST8000VN0022, from Canada Computers (CC) for 290$CAD. I bought a new enclosure as well, a transparent Orico one, which is kind of neat. I had previously bought this thing instead, but it was really hard to fit the hard drive in because the bottom was misaligned: you had to lift the drive slightly to fit it into the SATA connector. Even the salesman at CC couldn't figure it out. The new enclosure is a bit better, but it also doesn't quite close correctly when a hard drive is present.

Compatibility and reliability

The first 8TB drive I got last week was DOA (no, not that DOA): it was "clicking" and wasn't detected by the kernel. CC took it back without questions, after they were able to plug it into something. I'm not sure that's a good sign for the reliability of that model, but I have another one running in a backup server and it has worked well so far.

I was happily surprised to see the new drive works with my old Asus P5G410-M motherboard. My previous attempt at connecting this huge drive into older equipment failed in a strange way: when connected in a Thermaltake USB-SATA dock, it would only be recognized as 4TB. I don't remember if I tried to connect it inside the server, but I do remember connecting it to curie instead which was kind of a mess. So I'm quite happy to see the drive works even on an old SATA controller, a testament to the backwards-compatibility requirements of the standard.

Setup

Of course, I used a GUID Partition Table (GPT), because MBR (Master Boot Record) partition tables are limited to 2TiB. I have learned about parted --align optimal to silence the warnings when creating the device:

parted /dev/sdc mklabel gpt
parted -a optimal /dev/sdc mkpart primary 0% 8MB
parted -a optimal /dev/sdc mkpart primary 8MB 100%

I have come to like calling parted directly like this instead of going into its shell: it's clean and easy to copy-paste. It also makes me wonder why the Debian installer bothers with that complicated partition editor after all...
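The 2TiB limit mentioned above is easy to verify: a classic MBR stores partition sizes as 32-bit sector counts, and sectors are traditionally 512 bytes:

```shell
# largest partition an MBR can describe: 2^32 sectors of 512 bytes
mbr_max_bytes=$(( 2**32 * 512 ))
echo "$mbr_max_bytes bytes"               # 2199023255552 bytes
echo "$(( mbr_max_bytes / 1024**4 ))TiB"  # exactly 2TiB
```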

I have encrypted the drive using Debian stretch's LUKS defaults, but I have given special attention to the filesystem settings, given how big the drive is. Here's the command line I ended up using:

mkfs -t ext4 -j -T largefile -i 65536 -m 1 /dev/mapper/8tb_crypt

Here are the details of each bit:

  • ext4 - I still don't trust BTRFS enough, and I don't need the extra features

  • -j - journaling, probably default, but just in case

  • -T largefile - this is where things get interesting: the mkfs manpage says that -b -1 is supposed to tweak the block size according to the filesystem size, but mkfs refuses to parse that, so I had to use the -T setting. It turns out that didn't change the block size anyway, which is still the eternal 4KiB

  • -i 65536 (a "64KiB per inode" ratio) - the default mkfs setting would have allowed around five hundred million (488,281,250) inodes on this disk. Given that I have less than a million files to store on there so far, that seemed like total overkill, so I bumped the ratio up.

  • -m 1 - don't reserve as much space for root: the default (5%) would have reserved 400GB. 1% is still too big (80GB), but I can reclaim the space later with tune2fs -m 0.001 /dev/mapper/8tb_crypt. It gives me a good "heads up" before it's time to change the drive again. Besides, it's strangely not possible to pass lower, non-zero values to mkfs
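The figures in that list can be double-checked with a bit of shell arithmetic, taking the marketing "8TB" (8 * 10^12 bytes) and ext4's default 16384 bytes-per-inode ratio:

```shell
disk_bytes=8000000000000   # "8TB" as sold: 8 * 10^12 bytes

# inode count at the default and at the chosen bytes-per-inode ratio
echo $(( disk_bytes / 16384 ))   # 488281250 inodes by default
echo $(( disk_bytes / 65536 ))   # 122070312 inodes with -i 65536

# space lost to the root reservation: 5% default vs the 1% used here
echo "$(( disk_bytes * 5 / 100 / 10**9 ))GB"   # 400GB
echo "$(( disk_bytes * 1 / 100 / 10**9 ))GB"   # 80GB
```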

Benchmarks

I performed a few benchmarks. It looks like the disk can easily saturate the SATA bus, which is limited to 150MB/s of payload (a 1.5Gbit/s line rate, which 8b/10b encoding reduces to 150MB/s):

root@marcos:~# dd bs=1M count=512 conv=fdatasync if=/dev/zero of=/mnt/testfile
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 3.4296 s, 157 MB/s
root@marcos:~# dd bs=1M count=512 if=/mnt/testfile of=/dev/null
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 0.367484 s, 1.5 GB/s
root@marcos:~# hdparm -Tt /dev/sdc

/dev/sdc:
 Timing cached reads:   2514 MB in  2.00 seconds = 1257.62 MB/sec
 Timing buffered disk reads: 660 MB in  3.00 seconds = 219.98 MB/sec

A SMART test succeeded after 20 hours. Transferring the files over from the older disk took even longer: at 3.5TiB used, it's quite a lot of data and the older disk does not yield the same performance as the new one. rsync seems to show numbers between 40 and 50MB/s (or MiB/s?), which means the entire transfer takes more than a day to complete.
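The "more than a day" estimate checks out on the back of an envelope, assuming rsync's figures mean 10^6 bytes per second:

```shell
used_bytes=$(( 35 * 1024**4 / 10 ))   # 3.5TiB of data to copy
secs=$(( used_bytes / 45000000 ))     # at ~45MB/s, the middle of the range
echo "$(( secs / 3600 )) hours"       # ~23 hours of pure copying, so more
                                      # than a day with any overhead at all
```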

I have considered setting up the new drive as a degraded RAID-1 array to facilitate those transfers, but it doesn't seem to be worth the trouble: it would yield warnings in a few places, add some overhead (including scrubbing, for example) and might make me freak out for nothing in the future. This is a single drive, and it will probably stay that way for the foreseeable future.

The sync is therefore made with good old rsync:

rsync -aAvP /srv/ /mnt/

Some more elaborate tests performed with fio also show that random read/write performance is somewhat poor (<1MB/s):

root@marcos:/srv# fio --name=stressant --group_reporting --directory=test --size=100M --readwrite=randrw --direct=1 --numjobs=4
stressant: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
fio-2.16
Starting 4 processes
stressant: Laying out IO file(s) (1 file(s) / 100MB)
stressant: Laying out IO file(s) (1 file(s) / 100MB)
stressant: Laying out IO file(s) (1 file(s) / 100MB)
stressant: Laying out IO file(s) (1 file(s) / 100MB)
Jobs: 2 (f=2): [_(2),m(2)] [99.4% done] [1097KB/1305KB/0KB /s] [274/326/0 iops] [eta 00m:02s]
stressant: (groupid=0, jobs=4): err= 0: pid=10161: Mon Feb 25 12:51:21 2019
  read : io=205352KB, bw=586756B/s, iops=143, runt=358378msec
    clat (usec): min=145, max=367185, avg=23237.22, stdev=24300.33
     lat (usec): min=145, max=367186, avg=23238.42, stdev=24300.31
    clat percentiles (usec):
     |  1.00th=[   450],  5.00th=[  3792], 10.00th=[  6816], 20.00th=[  9408],
     | 30.00th=[ 12608], 40.00th=[ 14912], 50.00th=[ 17280], 60.00th=[ 19328],
     | 70.00th=[ 22656], 80.00th=[ 27264], 90.00th=[ 46848], 95.00th=[ 69120],
     | 99.00th=[123392], 99.50th=[148480], 99.90th=[238592], 99.95th=[272384],
     | 99.99th=[329728]
  write: io=204248KB, bw=583601B/s, iops=142, runt=358378msec
    clat (usec): min=164, max=322970, avg=4646.01, stdev=10840.13
     lat (usec): min=165, max=322971, avg=4647.36, stdev=10840.16
    clat percentiles (usec):
     |  1.00th=[   195],  5.00th=[   227], 10.00th=[   251], 20.00th=[   310],
     | 30.00th=[   378], 40.00th=[   494], 50.00th=[   596], 60.00th=[  2832],
     | 70.00th=[  6176], 80.00th=[  8896], 90.00th=[ 12480], 95.00th=[ 15552],
     | 99.00th=[ 22400], 99.50th=[ 33024], 99.90th=[199680], 99.95th=[234496],
     | 99.99th=[272384]
    lat (usec) : 250=4.86%, 500=16.18%, 750=7.01%, 1000=1.45%
    lat (msec) : 2=0.91%, 4=3.69%, 10=19.06%, 20=27.09%, 50=15.04%
    lat (msec) : 100=3.51%, 250=1.14%, 500=0.05%
  cpu          : usr=0.11%, sys=0.27%, ctx=103127, majf=0, minf=31
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=51338/w=51062/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=205352KB, aggrb=573KB/s, minb=573KB/s, maxb=573KB/s, mint=358378msec, maxt=358378msec
  WRITE: io=204248KB, aggrb=569KB/s, minb=569KB/s, maxb=569KB/s, mint=358378msec, maxt=358378msec

Disk stats (read/write):
    dm-6: ios=51862/51241, merge=0/0, ticks=1203452/250196, in_queue=1453720, util=100.00%, aggrios=51736/51295, aggrmerge=168/61, aggrticks=1196604/246444, aggrin_queue=1442968, aggrutil=100.00%
  sdb: ios=51736/51295, merge=168/61, ticks=1196604/246444, in_queue=1442968, util=100.00%

I am still, overall, quite happy with those results.

Categories: External Blogs

January 2019 report: LTS, Mailman 3, Vero 4k, Kubernetes, Undertime, Monkeysign, oh my!

Wed, 02/06/2019 - 10:32

January is often a long month in our northern region: very cold with lots of snow, which can mean a lot of fun as well. But it's also a great time to cocoon (or maybe hygge?) in front of the computer and do great things. I think the last few weeks were particularly fruitful, which led to this rather lengthy report that I hope will nonetheless be interesting.

So grab some hot cocoa, a coffee, tea or whatever warm beverage (or cool, if you're in the southern hemisphere) and hopefully you'll learn awesome things. I know I did.

Free software volunteer work

As always, the vast majority of my time was actually spent volunteering on various projects, while scrambling near the end of the month to work on paid stuff. For the first time here I mention my Kubernetes work, but I've also worked on the new Mailman 3 packages, my monkeysign and undertime packages (including a new configuration file support for argparse), random Debian work, and Golang packaging. Oh, and I bought a new toy for my home cinema, which I warmly recommend.

Kubernetes research

While I've written multiple articles on Kubernetes for LWN in the past, I am somewhat embarrassed to say that I don't have much experience running Kubernetes itself for real out there. But for a few months, with a group of fellow sysadmins, we've been exploring various container solutions and gravitated naturally towards Kubernetes. In the last month, I particularly worked on deploying a Ceph cluster with Rook, a tool to deploy storage solutions on a Kubernetes cluster (submitting a patch while I was there). Like many things in Kubernetes, Rook is shipped as a Helm chart, more specifically as an "operator", which might be described (if I understand this right) as a container that talks with Kubernetes to orchestrate other containers.

We've similarly worked on containerizing Nextcloud, which proved to be pretty shitty at behaving like a "cloud" application: secrets, dynamic data and configuration are all mixed up in the config directory, which makes it really hard to manage sanely in a container environment. The only way we found to make it work was to mount the configuration as a volume, which means configuration becomes data and can't be controlled through git. Which is bad. This is also how the proposed Nextcloud Helm chart solves this problem (on which I've provided a review), for what it's worth.

We've also worked on integrating GitLab in our workflow, so that we keep configuration as code and deploy on pushes. While GitLab talks a lot about Kubernetes integration, the actual integration features aren't that great: unless I totally misunderstood how it's supposed to work, it seems you need to provide your own container and run kubectl from it, using the tokens provided by GitLab. And if you want to do anything of significance, you will probably need to give GitLab cluster access to your Kubernetes cluster, which kind of freaks me out considering the number of security issues that keep coming out with GitLab recently.

In general, I must say I was very skeptical of Kubernetes when I first attended those conferences: too much hype, buzzwords and suits. I felt that Google just threw us a toy project to play with while they kept the real stuff to themselves. I don't think that analysis is wrong, but I do think Kubernetes has something to offer, especially for organizations still stuck in the "shared hosting" paradigm where you give users a shell account or (S?!)FTP access and run mod_php on top. Containers at least provide some level of isolation out of the box and make such multi-tenant offerings actually reasonable and much more scalable. With a little work, we've been able to set up a fully redundant and scalable storage cluster and Nextcloud service: doing this from scratch wouldn't have been that hard either, but it would have been done only for Nextcloud. The trick is that the knowledge and experience we gained by doing this with Nextcloud will be useful for all the other apps we'll be hosting in the future. So I think there's definitely something there.

Debian work

I participated in the Montreal BSP, of which Louis-Philippe Véronneau made a good summary. I also sponsored a few uploads and fixed a few bugs. We didn't fix that many bugs, but I gave two workshops, including my now well-tuned packaging 101 workshop, which seems to be always quite welcome. I really wish I could make a video of that talk, because I think it's useful in going through the essentials of Debian packaging and could use a wider audience. In the meantime, my reference documentation is the best you can get.

I've decided to let bugs-everywhere die in Debian. There's a release critical bug and it seems no one is really using this anymore, at least I'm not. I would probably orphan the package once it gets removed from buster, but I'm not actually the maintainer, just an uploader... A promising alternative to BE seems to be git-bug, with support for synchronization with GitHub issues.

I've otherwise tried to get my figurative "house" of Debian packages in order for the upcoming freeze, which meant new updates for

I've also sponsored the introduction of web-mode (RFS #921130) a nice package to edit HTML in Emacs and filed the usual barrage of bug reports and patches.

Elegant argparse configfile support and new date parser for undertime

I've issued two new releases of my undertime project, which helps users coordinate meetings across timezones. I first started working on improving the date parser, which mostly involved finding a new library to handle dates. I started using dateparser, which behaves slightly better, and I ended up packaging it for Debian as well, although I still have to re-upload undertime to use the new dependency.

That was a first 1.6.0 release, but that wasn't enough - my users wanted a configuration file! I ended up designing a simple, YAML-based configuration file parser that integrates quite well with argparse, after finding too many issues with existing solutions like Configargparse. I summarized those for the certbot project which suffered from similar issues. I'm quite happy with my small, elegant solution for config file support. It is significantly better than the one I used for Monkeysign which was (ab)using the fromfile option of argparse.

Mailman 3

Motivated by this post extolling the virtues of good old mailing lists to resist social media hegemony, I did a lot (too much) of work on installing Mailman 3 on my own server. I have run Mailman 2 mailing lists for hundreds of clients in my previous job at Koumbit, and I have so far used my access there to host a few mailing lists. This time, I wanted to try something new and figured Mailman 3 might be ready, 4 years after the 3.0 release and almost 10 years after the project started.

How wrong I was! Many things don't work: there is no french translation at all (nor any other translation, for that matter), no invite feature, templates translation is buggy, the Debian backport fails with the MySQL version in stable... it's a mess. The complete history of my failure is better documented in mail.

I worked around many of those issues. I like the fact that I was almost able to replace the missing "invite" feature through the API, and Mailman 3 is much better to look at than the older version. They did fix a lot of things and I absolutely love the web interface, which allows users to interact with the mailing list as a forum. But maybe it will take a bit more time before it's ready for my use case.
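For the curious, the "invite" replacement boils down to a single REST call that pre-approves the subscription steps, so the (fully translated) invitation mail can be sent separately. This is only a sketch: the port, credentials and list id below are placeholders taken from the sample configuration, not my actual setup, and the command is printed here rather than run:

```shell
# Mailman 3 core's REST API listens on localhost:8001 by default;
# restadmin:restpass and test.example.com are placeholder values
api=http://localhost:8001/3.1
cmd="curl -u restadmin:restpass $api/members \
  -d list_id=test.example.com \
  -d subscriber=friend@example.com \
  -d pre_verified=True -d pre_confirmed=True -d pre_approved=True"
echo "$cmd"   # run this against a live Mailman 3 core instance
```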

Right now, I'm hesitant. One option is a mailing list to connect with friends and family: it works for everyone, because everyone uses email, if only for their password resets. The alternative is something like a (private?) Discourse instance, which could also double as a comments provider for my blog if I ever decide to switch away from Ikiwiki... Neither seems like a good solution, and both require extra work and maintenance, Discourse particularly so because it is very unlikely to ever ship as a Debian package.

Vero: my new home cinema box

Speaking of Discourse, the reason I'm thinking about it is that I am involved in many online forums running it. It's generally a great experience, although I wish email integration were mandatory - it's great to be able to reply through your email client, and it's not always supported. One of the forums I participate in is the Pixls.us forum, where I posted a description of my photography kit, explained the different NAS options I'm considering and described part of my git-annex/darktable workflow.

Another forum I recently started working with is the OSMC.tv forum. I first asked what the full specifications were for their neat little embedded set-top box, the Vero 4k+. I wasn't fully satisfied with the answers (the hardware is not fully open), but I ended up ordering the device and moving the "home cinema services" off of the venerable marcos server, which is going to turn 8 years old this year. This was an elaborate enterprise which involved wiring power outlets (because a ground was faulty), vacuuming the basement (because it was filthy), doing elaborate research on SSHFS setup and performance, dealing with systemd bugs, and so on.

In the end it was worth it: my roommates enjoy the new remote control. It's much more intuitive than the previous Bluetooth keyboard, it performs well enough, and is one less thing to overload poor marcos with.

Monkeysign alternatives testing

I already mentioned I was considering Monkeysign retirement and recently a friend asked me to sign his key so I figured it was a great time to test out possible replacements for the project. Turns out things were not as rosy as I thought.

I first tested pius and it didn't behave as well as I hoped. Generally, it asks too many cryptic questions the user shouldn't have to guess the answers to. Specifically, here are the issues I found in my review:

  1. it forces you to specify your signing key, which is error-prone and needlessly difficult for the user

  2. I don't quite understand what the first question means - there's too much to unpack there: is it for inline PGP/MIME? for sending email at all? for sending individual emails? what's going on?

  3. the second question should be optional: I already specified my key on the command line, it should use that as a From...

  4. the signature level is useless and generally disregarded by all software, including OpenPGP implementations. Even if it were used, 0/1/2/3/s/n/h/q is a pretty horrible user interface.

And then it simply failed to send the email at all for dkg's key, but that might be because his key was so exotic...

Gnome-keysign didn't fare much better - I opened six different issues on the promising project:

  1. what does the internet button do?
  2. signing arbitrary keys in GUI
  3. error in french translation
  4. using mutt as a MUA does not work
  5. signing a key on the commandline never completes
  6. flatpak instructions failure

So, surprisingly, Monkeysign might survive a bit longer, as much as I have come to dislike the poor little thing...

Golang packaging

To help a friend getting the new RiseupVPN package in Debian, I uploaded a bunch of Golang dependencies (bug #919936, bug #919938, bug #919941, bug #919944, bug #919945, bug #919946, bug #919947, bug #919948) in Debian. This involved filing many bugs upstream as many of those (often tiny) packages didn't have explicit licences, so many of those couldn't actually be uploaded, but the ITPs are there and hopefully someone will complete that thankless work.

I also tried to package two other useful Golang programs, dmarc-cat and gotop, both of which also required a significant number of dependencies to be packaged (bug #920387, bug #920388, bug #920389, bug #920390, bug #921285, bug #921286, bug #921287, bug #921288). dmarc-cat has just been accepted in Debian - it's very useful to decipher DMARC reports you get when you configure your DNS to receive such reports. This is part of a larger effort to modernize my DNS and mail configuration.

But gotop is just starting: none of the dependencies have been updated just yet, and I'm running out of steam a little, even though it looks like an awesome package.

Other work
  • I hosed my workstation / laptop backup by trying to be too clever with Borg. It bit back and left me holding the candle, the bastard.

  • Expanded on my disk testing documentation to include better examples of fio as part of my neglected stressant package

GitHub said I "opened 21 issues in 14 other repositories" which seems a tad insane. And there's of course probably more stuff I'm forgetting here.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

sbuild regression

My first stop this month was to notice a problem with sbuild from buster running on jessie chroots (bug #920227). After discussions on IRC, where fellow Debian Developers basically fabricated me a patch on the fly, I sent merge request #5 which was promptly accepted and should be part of the next upload.

systemd

I again worked a bit on systemd. I marked CVE-2018-16866 as not affecting jessie, because the vulnerable code was introduced in later versions. I backported fixes for CVE-2018-16864 and CVE-2018-16865 and published the resulting package as DLA-1639-1, after doing some smoke-testing.

I still haven't gotten the courage to dig back in the large backport of tmpfiles.c required to fix CVE-2018-6954.

tiff review

I did a quick review of the fix for CVE-2018-19210 proposed upstream, which seems to have brought upstream's attention back to the issue and finally got the fix merged.

Enigmail EOL

After reflecting on the issue one last time, I decided to mark Enigmail as EOL in jessie, which involved an upload of debian-security-support to jessie (DLA-1657-1), unstable and a stable-pu.

DLA / website work

I worked again on fixing the LTS workflow with the DLAs on the main website. Reminder: hundreds of DLAs are missing from the website (bug #859122) and we need to figure out a way to automate the import of newer ones (bug #859123).

The details of my work are in this post but basically, I readded a bunch more DLAs to the MR and got some good feedback from the www team (in MR #47). There's still some work to be done on the DLA parser, although I have merged my own improvements (MR #46) as I felt they had been sitting for review long enough.

Next step is to deal with noise like PGP signatures correctly and thoroughly review the proposed changes.

While I was in the webmasters' backyard, I tried to help with a few things by merging an LTS errata entry and a PayPal integration note, although the latter ended up being a mistake that was reverted. I also rejected some issues (MR #13, MR #15) during a quick triage.

phpMyAdmin review

After reading this email from Lucas Kanashiro, I reviewed CVE-2018-19968 and reviewed and tested CVE-2018-19970.

Categories: External Blogs

Debian build helpers: dh dominates

Tue, 02/05/2019 - 19:54

It's been a while since someone did this. Back in 2009, Joey Hess made a talk at Debconf 9 about debhelper and mentioned in his slides (PDF) that it was used in most Debian packages. Here was the ratio (page 10):

  • debhelper: 54%
  • cdbs: 25%
  • dh: 9%
  • other: 3%

Then Lucas Nussbaum made graphs from snapshot.debian.org that did the same, but with history. His latest post, from 2015 (archive link, because the original is missing images), confirmed Joey's 2009 results. It also showed that cdbs was slowly declining while dh usage (over plain debhelper) was sharply rising. Here were the approximate numbers:

  • debhelper: 15%
  • cdbs: 15%
  • dh: 69%
  • other: 1%

I ran the numbers again. Jakub Wilk pointed me to the lintian.debian.org output that can be used to get the current state easily:

$ curl -so lintian.log.gz https://lintian.debian.org/lintian.log.gz
$ zgrep debian-build-system lintian.log.gz | awk '{print $NF}' | sort | uniq -c | sort -nr
  25772 dh
   2268 debhelper
   2124 cdbs-with-debhelper.mk
    257 dhmk
    123 other
      8 cdbs-without-debhelper.mk

Shoving this in a LibreOffice spreadsheet (sorry, my R/Python brain is slow today) gave me this nice little graph:

As of today, the numbers are now:

  • debhelper: 7%
  • cdbs: 7%
  • dh: 84%
  • other: 1%

(No, the numbers don't add up to 100. Yes, it's a rounding error. Blame LibreOffice.)
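The arithmetic above is easy to double-check without a spreadsheet. Here is a small Python sketch that recomputes the shares from the lintian counts; the grouping of cdbs variants and of dhmk into "other" is my own assumption about how the categories were folded together:

```python
# Hypothetical re-computation of the percentages from the lintian counts.
# Grouping assumptions (not from the original post): the two cdbs lines are
# merged, and dhmk is folded into "other".
counts = {
    "dh": 25772,
    "debhelper": 2268,
    "cdbs": 2124 + 8,    # cdbs-with-debhelper.mk + cdbs-without-debhelper.mk
    "other": 257 + 123,  # dhmk + other
}
total = sum(counts.values())
shares = {k: round(100 * v / total) for k, v in counts.items()}
print(shares)  # rounding to whole percents is why the shares sum to 99, not 100
```

Running this reproduces the 84/7/7/1 split quoted above, including the off-by-one rounding artifact.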

So while cdbs lost 10 percentage points over 6 years, it lost another half of its remaining share in the last 4. It's also interesting to note that debhelper and cdbs are both shrinking at a similar rate.

This confirms that debhelper development is where everything is happening right now. The new dh(1) sequencer is also a huge improvement that almost everyone has adopted wholeheartedly.

Now of course, that remaining 15% of debhelper/cdbs (or just 7% of cdbs, depending on how pedantic you are) will be the hard part to transition. Notice how the 1% of "other" packages hasn't really moved in the last four years: that's because some packages in Debian are old, abandoned, ignored, complicated, or all of the above. So it will be difficult to convert the remaining packages and finalize this great unification Joey (unknowingly) started ten years ago, as the remaining packages are probably the hard, messy, old ones no one wants to fix because, well, "they're not broken, so don't fix them".

Still, it's nice to see us agree on something for a change. I'd be quite curious to see an update of Lucas' historical graphs. It would be particularly useful to see the impact of replacing the old Alioth server with salsa.debian.org, which runs GitLab and only supports Git. Without an easy-to-use internal hosting service, I doubt SVN, Darcs, Bzr and whatever else is left in "other" will survive very long.

Categories: External Blogs