
Feed aggregator

Ansible: Making Things Happen

Linux Journal - Tue, 01/30/2018 - 11:07

Finally, an automation framework that thinks like a sysadmin. Ansible, you're hired.

Categories: Linux News

KDE Plasma 5.12, Btrfs Improvement, Linux Support for Wacom SmartPad Devices and More

Linux Journal - Tue, 01/30/2018 - 10:27

News updates for January 30, 2018.

Interested in giving KDE Plasma 5.12 LTS Desktop a spin (currently in beta)? Look no further than the latest snapshot releases of OpenSUSE Tumbleweed.

Categories: Linux News

Rapid, Secure Patching: Tools and Methods

Linux Journal - Mon, 01/29/2018 - 11:45

Generate enterprise-grade SSH keys and load them into an agent for control of all kinds of Linux hosts. Script the agent with the Parallel Distributed Shell (pdsh) to effect rapid changes over your server farm.

Categories: Linux News

Linux 4.15 Kernel, GCC, LinuxBoot Project and More Cryptojacking

Linux Journal - Mon, 01/29/2018 - 10:03

News briefs for January 29, 2018.

The good: the Linux 4.15 kernel has officially been released. View the diff here, and also see the Linux Kernel Archives for more info.

Categories: Linux News

Advice for Buying and Setting Up Laptops When You're Traveling or On-Call

Linux Journal - Sun, 01/28/2018 - 09:08

Why stress over losing that expensive personal or work laptop? Buy a cheap one for risky situations.

In a previous article, I wrote about how to prepare for a vacation so you aren't disturbed by a work emergency. As part of that article, I described how to prepare your computer.

Categories: Linux News

4TB+ large disk price review

Anarcat - Sat, 01/27/2018 - 20:39

For my personal backups, I am now looking at 4TB+ single-disk long-term storage. I currently have 3.5TB of offline storage, split across two disks: this is rather inconvenient, as I need to plug both into a toaster-like SATA enclosure that gathers dust and performs like crap. Now I'm looking at hosting offline backups at a friend's place, so I need to store everything in a single drive, to save space.

This means I need at least 4TB of storage, and those needs will keep growing in the future. Since this is going to be offsite, swapping the drive isn't really convenient (especially because syncing all that data takes a long time), so I figured I would also look at drives larger than 4TB.

So I built those neat little tables. I took the prices from Newegg.ca, or Newegg.com as a fallback when an item wasn't available in Canada. I used to order from NCIX because it was "more" local, but they unfortunately went bankrupt, and in the worst possible way: the website is still up and you can order stuff, but those orders never ship. Sad to see a 20-year-old institution go out like that; I blame Jeff Bezos.

I also used failure rate figures from the latest Backblaze review, although those should always be taken with a grain of salt. For example, the apparently stellar 0.00% failure rates are all on sample sizes too small to be statistically significant (<100 drives).

All prices are in CAD, sometimes after conversion from USD for items that are not on newegg.ca, as of today.
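The $/TB column in the tables below is just the price divided by the drive's capacity. Here is a trivial sketch of that comparison, using a few of the drives below as sample data (prices in CAD):

    # Quick sanity check for the $/TB column: price divided by capacity.
    drives = [
        ("HGST 0S04012", 8, 280),
        ("Seagate ST8000NM0055", 8, 320),
        ("Seagate ST4000DM004", 4, 125),
        ("WD WD40EFRX", 4, 155),
    ]

    # Sort by cost per terabyte, cheapest first.
    for name, tb, price in sorted(drives, key=lambda d: d[2] / d[1]):
        print(f"{name:22} {price / tb:3.0f}$/TB")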

8TB

  Brand    Model            Price  $/TB  fail%  Notes
  HGST     0S04012          280$   35$   N/A
  Seagate  ST8000NM0055     320$   40$   1.04%
  WD       WD80EZFX         364$   46$   N/A
  Seagate  ST8000DM002      380$   48$   0.72%
  HGST     HUH728080ALE600  791$   99$   0.00%

6TB

  Brand    Model            Price  $/TB  fail%  Notes
  HGST     0S04007          220$   37$   N/A
  Seagate  ST6000DX000      ~222$  56$   0.42%  not on .ca, refurbished
  Seagate  ST6000AS0002     230$   38$   N/A
  WD       WD60EFRX         280$   47$   1.80%
  Seagate  STBD6000100      343$   58$   N/A

4TB

  Brand    Model            Price  $/TB  fail%  Notes
  Seagate  ST4000DM004      125$   31$   N/A
  Seagate  ST4000DM000      150$   38$   3.28%
  WD       WD40EFRX         155$   39$   0.00%
  HGST     HMS5C4040BLE640  ~242$  61$   0.36%  not on .ca
  Toshiba  MB04ABA400V      ~300$  74$   0.00%  not on .ca

Conclusion

The cheapest per-TB costs seem to be in the 4TB range, but the 8TB HGST comes really close. Reliability could be an issue for that drive, however: I can't explain why it is so cheap compared to the other devices... But I guess we'll see how it goes, as I'll just order the darn thing and try it out.

Categories: External Blogs

A summary of my 2017 work

Anarcat - Sat, 01/27/2018 - 11:54

New years are strange things: for mostly arbitrary reasons, around January 1st we reset a bunch of stuff, change calendars and forget about work for a while. This is also when I forget to do my monthly report, then procrastinate until I figure out I might as well do a year report while I'm at it, and then do nothing at all for a while.

So this is my humble attempt at fixing this, about a month late. I'll try to cover December as well, but since not much has happened then, I figured I could also review the last year and think back on the trends there. Oh, and you'll get chocolate cookies of course. Hang on to your eyeballs, this won't hurt a bit.

Debian Long Term Support (LTS)

Those of you used to reading those reports might be tempted to skip this part, but wait! I actually don't have much to report here and instead you will find an incredibly insightful and relevant rant.

So I didn't do any LTS work in December: I reduced my available hours to focus on writing (more on that later). Overall, I ended up working about 11 hours per month on LTS in 2017, less than the 16-20 hours I was available during that time. Part of that is me regularly procrastinating, but another part is that finding work to do is sometimes difficult. The "easy" tasks often get picked up and dispatched quickly, so what remains, when you're not constantly looking, is often very difficult packages.

I especially remember the pain of working on libreoffice, the KRACK update, more tiff, GraphicsMagick and ImageMagick vulnerabilities than I care to remember, and, ugh, Ruby... Masochists (also known as "security researchers") can find the details of those excruciating experiments in debian-lts for the monthly reports.

I don't want to sound like an old idiot, but I must admit, after working on LTS for two years, that working on patching old software for security bugs is hard work, and not particularly pleasant on top of it. You're basically always dealing with other people's garbage: badly written code that hasn't been touched in years, sometimes decades, that no one wants to take care of.

Yet someone needs to take care of it. A large part of the technical community considers Linux distributions in general, and LTS releases in particular, as "too old to care for". As if our elders, once they pass a certain age, should just be rolled out to the nearest dumpster or left rotting on the curb. I suspect most people don't realize that Debian "stable" (stretch) was released less than a year ago, and "oldstable" (jessie) is a little over two years old. LTS (wheezy), our oldest supported release, is only four years old now, and will become unsupported this summer, on its fifth anniversary. Five years may seem like a long time in computing, but really, there's a whole universe out there, and five years is absolutely nothing on the scale of the changes I'm interested in: politics, society and the environment move on timescales far beyond that shortsightedness.

To put things in perspective, some people I know still run their office on an Apple II, which celebrated its 40th anniversary this year. That is "old". And the fact that the damn thing still works should command respect and admiration more than contempt. In comparison, the phone I have, an LG G3, is running an unpatched, vulnerable version of Android because it cannot be updated, because it's locked out of the telcos' networks, because it was found in a taxi and reported "lost or stolen" (same thing, right?). And DRM protections in the bootloader keep me from doing the right thing and unbricking this device.

We should build devices that last decades. Instead we fill junkyards with tons and tons of precious computing devices that have more precious metals than most people carry as jewelry. We are wasting generations of programmers, hardware engineers, human robots and precious, rare metals on speculative, useless devices that are destroying our society. Working on supporting LTS is a small part in trying to fix the problem, but right now I can't help but think we have a problem upstream, in the way we build those tools in the first place. It's just depressing to be at the receiving end of the billions of lines of code that get created every year. Hopefully, the death of Moore's law could change that, but I'm afraid it's going to take another generation before programmers figure out how far away from their roots they have strayed. Maybe too long to keep ourselves from a civilization collapse.

LWN publications

With that gloomy conclusion, let's switch gears and talk about something happier. So as I mentioned, in December, I reduced my LTS hours and focused instead on finishing my coverage of KubeCon Austin for LWN.net. Three articles have already been published on the blog here:

... and two more articles, about Prometheus, are currently published as exclusives by LWN:

I was surprised to see that the container runtimes article got such traction. It wasn't the most important debate of the whole conference, but there were some amazingly juicy bits, some of which we didn't even cover because they were... uh... rather controversial, and we want the community to stay sane. Or saner, if that word can be applied at all to the container community at this point.

I ended up publishing 16 articles at LWN this year. I'm really happy about that: I just love writing, and even if it's in English (my native language is French), it's still better than rambling on my own like I do here. My editors allow me to publish well-polished articles, and I am hugely grateful for the privilege. Each article takes about 13 hours to write, on average. I'm less happy about that: I wish delivery were more streamlined, and I'll spare you the miserable story of the last-minute major changes I sent in some recent articles, for which I again apologize profusely to my editors.

I'm often at a loss when I need to explain to friends and family what I write about. I often give the example of the password series: I wrote a whole article about just how to pick a passphrase, then a review of two geeky password managers, and then a review of something that's not quite a password manager and that you shouldn't be using. And on top of that, I even wrote a history of those, but by that time my editors were sick and tired of passwords and understandably made me go away. At this point, neophytes are just scratching their heads, and I remind them of the TL;DR:

  1. choose a proper passphrase made of a bunch of words picked at random (really random, check out Diceware!); see the sketch after this list

  2. use a password manager so you have to remember only one good password

  3. watch out where you type those damn things
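Point 1, by the way, is easy to do properly with a computer instead of actual dice. A minimal Python sketch, assuming a Diceware-style wordlist saved as wordlist.txt (one word per line; the filename is just an example):

    import secrets  # cryptographically strong randomness, unlike random

    # Assumed: a Diceware-style wordlist, one word per line.
    with open("wordlist.txt") as f:
        words = [line.split()[-1] for line in f if line.strip()]

    # Six words from the standard 7776-word Diceware list give
    # roughly 77 bits of entropy.
    print(" ".join(secrets.choice(words) for _ in range(6)))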

I covered two other conferences this year as well. One was the NetDev conference, for which I wrote 4 articles (1, 2, 3, 4). It turned out I couldn't cover NetDev in Korea even though I wanted to, but hopefully that is just "partie remise" (a game postponed, not canceled), as we say in French... I also covered DebConf in Montreal, but that ended up being much harder than I thought: I got involved in networking and volunteered all over the place. By the time the conference started, I was too exhausted to actually write anything, even though I took notes like crazy and ran around trying to attend everything. I found it's harder to write about topics that are close to home: nothing is new, so you don't get as excited. I still enjoyed writing about the supposed decline of copyleft, which was based on a talk by FSF executive director John Sullivan, and I ended up writing about offline PGP key storage strategies and cryptographic keycards, after buying a token from friendly gniibe at DebConf.

I also wrote about Alioth moving to Pagure, unknowingly joining a long tradition of failed predictions at LWN: a few months later, the tide turned and Debian launched the Alioth replacement as a beta running... GitLab. Go figure: maybe this is a version of the quantum observer effect applied to journalism?

Two articles seem to have been less successful. The GitHub TOS update was less controversial than I expected and didn't seem to have a significant impact, although GitHub did eventually rephrase some bits of its TOS. The ROCA review didn't draw excited crowds either, maybe because no one actually understood anything I was saying (myself included).

Still, 2017 has been a great ride in LWN-land. I'm hoping to publish even more during the next year, and, if you like what you're reading here, I encourage you to subscribe to the magazine, as it helps us publish new articles.

Free software work

Last but not least is my free software work. This was just nuts.

New programs

I have written a bunch of completely new programs:

  • Stressant - a small wrapper script to stress-test new machines. No idea if anyone's actually using the darn thing, but I have found it useful from time to time.

  • Wallabako - a bridge between Wallabag and my e-reader. This is probably one of my most popular programs ever: I get random strangers asking me about it in random places, which is quite nice. Also my first Golang program, something I am quite excited about and wish I was doing more of.

  • Ecdysis - a pile of (Python) code snippets, documentation and standard community practices I reuse across projects. It ended up being really useful when bootstrapping new projects, but probably just for me.

  • numpy-stats - a dumb command-line tool to extract stats from streams. I didn't really reuse it, so maybe it's not so useful. Interestingly, I found similar tools called multitime and hyperfine that will be useful for future benchmarks.

  • feed2exec - a new feed reader (just that) which I have been using ever since for many different purposes. I have now replaced feed2imap and feed2tweet with that simple tool, and have added support for storing my articles on https://archive.org/, checking for dead links with linkchecker (below) and pushing to the growing Mastodon federation.

  • undertime - a simple tool to show possible meeting times across different timezones. A must if you are working with people internationally!
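As an aside, the core trick in undertime (rendering one instant in several zones) fits in a few lines of Python. This sketch is not the tool's actual code, and the zones and date are just examples:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # standard library since Python 3.9

    # A candidate meeting time, expressed in UTC, shown in each zone.
    meeting = datetime(2018, 2, 1, 15, 0, tzinfo=timezone.utc)
    for zone in ("America/Montreal", "Europe/Paris", "Asia/Taipei"):
        local = meeting.astimezone(ZoneInfo(zone))
        print(f"{zone:20} {local:%Y-%m-%d %H:%M}")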

If I count this right (and I'm omitting a bunch of smaller, less general purpose programs), that is six new software projects, just this year. This seems crazy, but that's what the numbers say. I guess I like programming too, which is arguably a form of writing. Talk about contributing to the pile of lines of code...

New maintainerships

I also got more or less deeply involved in various communities:

And those are just the major ones... I have about 100 repositories active on GitHub, most of which are forks of existing repositories, which means actual contributions to existing free software projects. Reliable numbers for this are annoyingly hard to come by, especially in terms of issues versus commits and so on. GitHub says I have made about 600 contributions in the last year, which is an interesting figure as well.

Debian contributions

I also did a bunch of things in the Debian project, apart from my LTS involvement:

  • Gave up on debmans, a tool I had written to rebuild https://manpages.debian.org, in the face of the overwhelming superiority of the Golang alternative. This is one of the things that led me to finally try the language and write Wallabako. So: net win.

  • Proposed standard procedures for third-party repositories, which don't seem to have caught on significantly in the real world. Hopefully just a matter of time...

  • Co-hosted a bug squashing party for the Debian stretch release, also as a way to have the DebConf team meet up.

  • That led to a two-hour workshop at the Montreal DebConf, which was packed and really appreciated. I'm thinking of organizing this at every DebConf I attend, in a (so far) secret plot to standardize packaging practices by evangelizing new package maintainers to my peculiar ways. I hope to teach again in Taiwan this year, but I'm not sure I'll make it that far across the globe...

  • And of course, I did a lot of regular package maintenance. I don't have good numbers on the exact activity here (any way to pull that out easily?), but I now directly maintain 34 Debian packages, a somewhat manageable number.

What's next?

This year, I'll need to figure out what to do with legacy projects. Gameclock and Monkeysign both need to be ported away from GTK2, which is deprecated. I will probably abandon the GUI in Monkeysign, but gameclock will need a rewrite of its GUI. This raises the question of how we can maintain software in the long term if even the graphical interface (even Xorg is going away!) keeps getting swept out from under our feet. Without this change, both programs could have kept going for another decade without trouble. But now, I need to spend time just to keep those tools from failing to build at all.

Wallabako seems to be doing well on its own, but I'd like to fix the refresh issues that sometimes make the reader unstable: maybe I can write directly to the SQLite database? I tried statically linking sqlite to run some tests on that, but it apparently isn't possible, and my attempts failed.
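For what it's worth, the "write directly to the database" idea is trivial to prototype from Python's standard library before committing to it in Go; the file name and schema below are invented for illustration, not the reader's actual layout:

    import sqlite3

    # Invented file name and schema, for illustration only.
    conn = sqlite3.connect("books.sqlite")
    with conn:  # commits on success, rolls back on error
        conn.execute("CREATE TABLE IF NOT EXISTS content (title TEXT, read INTEGER)")
        conn.execute("INSERT INTO content VALUES (?, 0)", ("Some article",))
        conn.execute("UPDATE content SET read = 1 WHERE title = ?", ("Some article",))
    conn.close()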

Feed2exec just works for me. I'm not very proud of the design, but it does its job well. I'll fix bugs and maybe push out a 1.0 release once enough time goes by without any critical issues coming up. So try it out and report back!

As for the other projects, I'm not sure how it's going to go. It's possible that my involvement in paid work means I cannot commit as much to general free software work, but I can't help doing those drive-by contributions all the time. There's just too much stuff broken out there to sit by and watch the dumpster fire burn down the whole city.

I'll try to keep doing those reports, of which you can find an archive in monthly-report. Your comments, encouragements, and support make this worth it, so keep those coming!

Happy new year everyone: may it be better than the last; that shouldn't be too hard...

PS: Here is the promised chocolate cookie:

Categories: External Blogs

Using gphoto2 to Automate Taking Pictures

Linux Journal - Sat, 01/27/2018 - 10:04

Introducing an app that allows DSLR cameras to function as an image or video capture device in Linux.

Categories: Linux News

Creating an Adventure Game in the Terminal with ncurses

Linux Journal - Fri, 01/26/2018 - 11:26

How to use curses functions to read the keyboard and manipulate the screen.
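The article itself works in C, but the same primitives exist in Python's standard curses binding. A minimal sketch of the read-key/redraw loop (not code from the article):

    import curses

    def main(stdscr):
        curses.curs_set(0)               # hide the cursor
        y, x = 5, 5
        while True:
            maxy, _ = stdscr.getmaxyx()
            stdscr.erase()
            stdscr.addstr(0, 0, "arrows move, q quits")
            stdscr.addch(y, x, "@")      # the "player"
            key = stdscr.getch()         # read the keyboard
            if key == ord("q"):
                break
            elif key == curses.KEY_UP and y > 1:
                y -= 1
            elif key == curses.KEY_DOWN and y < maxy - 2:
                y += 1

    curses.wrapper(main)                 # safe terminal setup/teardown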

Categories: Linux News

Mycroft Mark II, Chronicle, Intel and Bionic Beaver

Linux Journal - Fri, 01/26/2018 - 08:55

News briefs for January 26, 2018.

The Mycroft Mark II Open Source Voice Assistant (that doesn't spy on you) just launched on Kickstarter. Mycroft source code is available on GitHub.

Categories: Linux News

diff -u: Complexifying printk()

Linux Journal - Thu, 01/25/2018 - 12:37

What's new in kernel development: complexifying printk().

Categories: Linux News

Chrome 64, GCC 7.3, Librem 5 Phone Progress and More

Linux Journal - Thu, 01/25/2018 - 08:57

News updates for January 25, 2018.

Chrome 64 is now available for Linux, Mac and Windows, featuring a stronger ad blocker and several security fixes, including mitigations for Spectre and Meltdown. See the release updates for more info.

Categories: Linux News

Changes in Prometheus 2.0

Anarcat - Wed, 01/24/2018 - 19:00

This is one part of my coverage of KubeCon Austin 2017. Other articles include:

2017 was a big year for the Prometheus project, as it published its 2.0 release in November. The new release ships numerous bug fixes, new features and, notably, a new storage engine that brings major performance improvements. This comes at the cost of incompatible changes to the storage and configuration-file formats. An overview of Prometheus and its new release was presented to the Kubernetes community in a talk held during KubeCon + CloudNativeCon. This article covers what changed in this new release and what is brewing next in the Prometheus community; it is a companion to this article, which provided a general introduction to monitoring with Prometheus.

What changed

Orchestration systems like Kubernetes regularly replace entire fleets of containers for deployments, which means rapid changes in parameters (or "labels" in Prometheus-talk) like hostnames or IP addresses. This was creating significant performance problems in Prometheus 1.0, which wasn't designed for such changes. To correct this, Prometheus ships a new storage engine that was specifically designed to handle continuously changing labels. This was tested by monitoring a Kubernetes cluster where 50% of the pods would be swapped every 10 minutes; the new design proved to be much more effective. The new engine boasts a hundred-fold I/O performance improvement, a three-fold improvement in CPU usage, a five-fold improvement in memory usage, and increased space efficiency. This matters most for container deployments, but it means improvements for any configuration. Anecdotally, there was no noticeable extra load on the servers where I deployed Prometheus, at least nothing that the previous monitoring tool (Munin) could detect.

Prometheus 2.0 also brings new features like snapshot backups. The project has a longstanding design wart regarding data volatility: backups are deemed to be unnecessary in Prometheus because metrics data is considered disposable. According to Goutham Veeramanchaneni, one of the presenters at KubeCon, "this approach apparently doesn't work for the enterprise". Backups were possible in 1.x, but they involved using filesystem snapshots and stopping the server to get a consistent view of the on-disk storage. This implied downtime, which was unacceptable for certain production deployments. Thanks again to the new storage engine, Prometheus can now perform fast and consistent backups, triggered through the web API.
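For illustration, triggering such a backup is a single HTTP call. Here is a sketch using only the Python standard library, assuming a local server started with the TSDB admin API enabled (the flag and the exact endpoint path have varied across 2.x releases, so check the documentation for your version):

    import urllib.request

    # Assumes --web.enable-admin-api was passed to Prometheus and that
    # this endpoint path matches your version.
    req = urllib.request.Request(
        "http://localhost:9090/api/v1/admin/tsdb/snapshot", method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())  # JSON reply naming the snapshot directory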

Another improvement is a fix to the longstanding staleness handling bug where it would take up to five minutes for Prometheus to notice when a target disappeared. In that case, when polling for new values (or "scraping" as it's called in Prometheus jargon) a failure would make Prometheus reuse the older, stale value, which meant that downtime would go undetected for too long and fail to trigger alerts properly. This would also cause problems with double-counting of some metrics when labels vary in the same measurement.

Another limitation related to staleness is that Prometheus wouldn't work well with scrape intervals above two minutes (instead of the default 15 seconds). Unfortunately, that is still not fixed in Prometheus 2.0 as the problem is more complicated than originally thought, which means there's still a hard limit to how slowly you can fetch metrics from targets. This, in turn, means that Prometheus is not well suited for devices that cannot support sub-minute refresh rates, which, to be fair, is rather uncommon. For slower devices or statistics, a solution might be the node exporter "textfile support", which we mentioned in the previous article, and the pushgateway daemon, which allows pushing results from the targets instead of having the collector pull samples from targets.
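As a rough illustration of that push model, the official Python client can hand a metric to a pushgateway in a few lines; the gateway address and job name here are placeholders:

    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    registry = CollectorRegistry()
    g = Gauge("job_last_success_unixtime",
              "Last time the batch job succeeded", registry=registry)
    g.set_to_current_time()
    # Placeholder address and job name; Prometheus then scrapes the gateway.
    push_to_gateway("localhost:9091", job="batch_job", registry=registry)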

The migration path

One downside of this new release is that the upgrade path from the previous version is bumpy: since the storage format changed, Prometheus 2.0 cannot use the previous 1.x data files directly. In his presentation, Veeramanchaneni justified this change by saying this was consistent with the project's API stability promises: the major release was the time to "break everything we wanted to break". For those who can't afford to discard historical data, a possible workaround is to replicate the older 1.8 server to a new 2.0 replica, as the network protocols are still compatible. The older server can then be decommissioned when the retention window (which defaults to fifteen days) closes. While there is some work in progress to provide a way to convert 1.8 data storage to 2.0, new deployments should probably use the 2.0 release directly to avoid this peculiar migration pain.

Another key point in the migration guide is a change in the rules-file format. While 1.x used a custom file format, 2.0 uses YAML, matching the other Prometheus configuration files. Thankfully the promtool command handles this migration automatically. The new format also introduces rule groups, which improve control over the rules execution order. In 1.x, alerting rules were run sequentially but, in 2.0, the groups are executed sequentially and each group can have its own interval. This fixes the longstanding race conditions between dependent rules that create inconsistent results when rules would reuse the same queries. The problem should be fixed between groups, but rule authors still need to be careful of that limitation within a rule group.

Remaining limitations and future

As we saw in the introductory article, Prometheus may not be suitable for all workflows because of its limited default dashboards and alerts, but also because of the lack of data-retention policies. There are, however, discussions about variable per-series retention in Prometheus and native down-sampling support in the storage engine, although this is a feature some developers are not really comfortable with. When asked on IRC, Brian Brazil, one of the lead Prometheus developers, stated that "downsampling is a very hard problem, I don't believe it should be handled in Prometheus".

Besides, it is already possible to selectively delete an old series using the new 2.0 API. But Veeramanchaneni warned that this approach "puts extra pressure on Prometheus and unless you know what you are doing, its likely that you'll end up shooting yourself in the foot". A more common approach to native archival facilities is to use recording rules to aggregate samples and collect the results in a second server with a slower sampling rate and different retention policy. And of course, the new release features external storage engines that can better support archival features. Those solutions are obviously not suitable for smaller deployments, which therefore need to make hard choices about discarding older samples or getting more disk space.

As part of the staleness improvements, Brazil also started working on "isolation" (the "I" in the ACID acronym) so that queries wouldn't see "partial scrapes". This hasn't made the cut for the 2.0 release, and is still work in progress, with some performance impacts (about 5% CPU and 10% RAM). This work would also be useful when heavy contention occurs in certain scenarios where Prometheus gets stuck on locking. Some of the performance impact could therefore be offset under heavy load.

Another performance improvement mentioned during the talk is an eventual query-engine rewrite. The current query engine can sometimes cause excessive load for certain expensive queries, according to the Prometheus security guide. The goal would be to optimize the current engine so that those expensive queries don't harm performance.

Finally, another issue I discovered is that 32-bit support is limited in Prometheus 2.0. The Debian package maintainers found that the test suite fails on i386, which led Debian to remove the package from the i386 architecture. It is currently unclear if this is a bug in Prometheus: indeed, it is strange that the Debian tests actually pass on other 32-bit architectures like armel. Brazil, in the bug report, argued that "Prometheus isn't going to be very useful on a 32bit machine". The position of the project is currently that "'if it runs, it runs' but no guarantees or effort beyond that from our side".

I had the privilege to meet the Prometheus team at the conference in Austin and was happy to see different consultants and organizations working together on the project. It reminded me of my golden days in the Drupal community: different companies cooperating on the same project in a harmonious environment. If Prometheus can keep that spirit together, it will be a welcome change from the drama that affected certain monitoring software. This new Prometheus release could light a bright path for the future of monitoring in the free software world.

This article first appeared in the Linux Weekly News.

Categories: External Blogs

Threading in Python

Linux Journal - Wed, 01/24/2018 - 11:56

Threads can provide concurrency, even if they're not truly parallel.
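In CPython, the global interpreter lock means only one thread executes Python bytecode at a time, but threads still overlap while blocked on I/O. A minimal sketch of that point (not code from the article):

    import threading
    import time

    def worker(name):
        time.sleep(1)        # stands in for blocking I/O; the GIL is
        print(name, "done")  # released while waiting, so waits overlap

    threads = [threading.Thread(target=worker, args=(f"task-{i}",))
               for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # all three finish in about 1 second, not 3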

Categories: Linux News

Plex VR, Firefox 58.0, SteamOS and More

Linux Journal - Wed, 01/24/2018 - 09:32

News briefs for January 24, 2018.

Plex is now VR-ready for Google Daydream-supported devices, available for free starting today from the Google Play Store.

Categories: Linux News

Introducing the Alarmy Android App

Linux Journal - Tue, 01/23/2018 - 14:56

Shawn takes a quick look at "The World's Most Annoying Alarm Clock App".

Categories: Linux News

Linus Rants, Cryptojacking Protection, openSUSE and Games

Linux Journal - Tue, 01/23/2018 - 11:15

News updates from January 23, 2018.

Linus Torvalds slams Intel's Spectre and Meltdown patches, calling them "COMPLETE and UTTER GARBAGE". See LKML for more.

Categories: Linux News

diff -u: in-Kernel DRM Support

Linux Journal - Tue, 01/23/2018 - 08:40

A look at what's new in kernel development.

Welcome to the new diff -u! We're experimenting with a shorter, more frequent, single-subject format for this feature, which may also evolve over time. Let us know what you think in the comments below.

Categories: Linux News

Spectre Patches, Snap, Happy Birthday LWN and More

Linux Journal - Mon, 01/22/2018 - 13:54

News updates for January 22, 2018.

Are you using protection? Longtime kernel developer Greg Kroah-Hartman just posted a simple recipe for users to verify whether they are running a Spectre/Meltdown-patched version of the Linux kernel.

Categories: Linux News

Raspberry Pi Alternatives

Linux Journal - Mon, 01/22/2018 - 08:14

A look at some of the many interesting Raspberry Pi competitors.

Categories: Linux News