
Feed aggregator

VMware Acquires Heptio, Mining Bitcoin Requires More Energy Than Mining Gold, Fedora Turns 15, Microsoft's New Linux Distros and ReactOS 0.4.10 Released

Linux Journal - Tue, 11/06/2018 - 09:42

News briefs for November 6, 2018.

VMware has acquired Heptio, which was founded by Joe Beda and Craig McLuckie, two of the creators of Kubernetes. TechCrunch reports that the terms of the deal aren't being disclosed and that "this is a signal of the big bet that VMware is taking on Kubernetes, and the belief that it will become an increasing cornerstone in how enterprises run their businesses." The post also notes that this acquisition is "also another endorsement of the ongoing rise of open source and its role in cloud architectures".

The energy needed to mine one dollar's worth of bitcoin is reported to be more than double the energy required to mine the same amount of gold, copper or platinum. The Guardian reports on recent research from the Oak Ridge Institute in Cincinnati, Ohio, that "one dollar's worth of bitcoin takes about 17 megajoules of energy to mine...compared with four, five and seven megajoules for copper, gold and platinum".

Happy 15th birthday to Fedora! Fifteen years ago today, November 6, 2003, Fedora Core 1 was released. See Fedora Magazine's post for a look back at the Fedora Project's beginnings.

Microsoft announced the availability of two new Linux distros for Windows Subsystem for Linux, which will coincide with the Windows 10 1809 release. ZDNet reports that the Debian-based Linux distribution WLinux is available from the Microsoft Store for $9.99 currently (normally it's $19.99). In addition, OpenSUSE 15 and SLES 15 are now available from the Microsoft Store.

ReactOS 0.4.10 was released today. The main new feature is "ReactOS' ability to now boot from a BTRFS formatted drive". See the official ChangeLog for more details.

News VMware Heptio Kubernetes Containers Bitcoin cryptomining Fedora Microsoft ReactOS
Categories: Linux News

Game Review: Lamplight City

Linux Journal - Tue, 11/06/2018 - 09:07
by Patrick Whelan

A well lit look into Grundislav Games' latest release.

The universe of Lamplight City is rich, complex and oddly familiar. The game draws on that ever-popular theme of a steampunk alternative universe, adding dashes of Victorian squalor and just a pinch of 1950s detective tropes. Is it just a mishmash of clichés then? Yes, but it all works well together to form a likable and somewhat unique universe—like a cheesy movie, you can't help but fall in love with Lamplight City.

Figure 1. The Lamplight City Universe

Figure 2. Some Protesters

In Lamplight City, you play Miles Fordham, a disgraced detective turned PI following the death of his partner in Act I at the hands of a mysterious killer. Miles is accompanied by the ghostly voice of his partner Bill as a sort of schizophrenic inner monologue. It's creepy, and it's a perfect example of taking a classic trope and turning it into one of the game's biggest strengths. Bill's monologues add witty flavour to the dry protagonist and give the writers a way to explain details and scenarios to the player.

Figure 3. Miles Fordham's Schizophrenic Dialogue

Lamplight City features multiple cases that are all tied together by an overarching story. More impressive, though, is the overarching story's effect on the individual cases. In my play-through, mistakes I made in one case affected another and effectively led to that case becoming unsolvable. This is a system I instinctively hated; it seemed unduly harsh to punish players for simply exploring dialogue options. Over time, however, as the music and art slowly drew me into a universe I truly enjoyed exploring and experiencing, I began to see how subtleties are at the center of this universe. What at first is dismissed as unimportant or underwhelming later lands as a subtle smack in the face, with that familiar feeling of "Oh, I knew I shouldn't have done that!"

Go to Full Article
Categories: Linux News

freenode #live 2018 - Kyle Rankin - The death and resurrection of Linux Journal

Linux Journal - Mon, 11/05/2018 - 10:31

Please support Linux Journal by subscribing or becoming a patron.

Categories: Linux News

Kernel 4.20-rc1 Is Out, KDE Connect Android App 1.10 Released, Linux Mint 19.1 Coming Soon, Microsoft Ported ProcDump to Linux and Neptune Version 5.6 Now Available

Linux Journal - Mon, 11/05/2018 - 09:31

News briefs for November 5, 2018.

Linux kernel 4.20-rc1 is out. Linus writes, "This was a fairly big merge window, but it didn't break any records, just solid. And things look pretty regular, with about 70% of the patch is driver updates (gpu drivers are looming large as usual, but there's changes all over). The rest is arch updates (x86, arm64, arm, powerpc and the new C-SKY architecture), header files, networking, core mm and kernel, and tooling." See the LKML post for more information.

The KDE Connect Android app version 1.10 was released yesterday. Main changes include "mouse input now works with the same speed independent from the phones pixel density"; "the media controller now allows stopping playback"; the "run command supports triggering commands using kdeconnect:// URLs" and more. There are several desktop improvements as well, and the Linux Mobile App has also gained many new features.

The Linux Mint blog recently posted its upcoming release schedule. They are working on getting Linux Mint 19.1 out in time for Christmas, "with all three editions released at the same time and the upgrade paths open before the holiday season". In addition, Linux Mint is now on Patreon. See the post for all the changes and improvements in the works.

Microsoft ported the ProcDump applications to Linux and is planning to port ProcMon to Linux as well. According to ZDNet, "these ports are part of the company's larger plan to make the Sysinternals package available for Linux users in the coming future".

Neptune version 5.6 was released yesterday. This update of the desktop distro, which is based fully on Debian 9.0 ("Stretch"), provides kernel 4.18.6 with improved drivers and bugfixes. Other updates include systemd 239, KDE Applications 18.08.2, Network-Manager 1.14, Plasma desktop 5.12.7 and much more. See the full changelog here.

News kernel KDE Android Mobile Linux Mint Microsoft Neptune Distributions Desktop
Categories: Linux News

Time for Net Giants to Pay Fairly for the Open Source on Which They Depend

Linux Journal - Mon, 11/05/2018 - 08:00
by Glyn Moody

Net giants depend on open source: so where's the gratitude?

Licensing lies at the heart of open source. Arguably, free software began with the publication of the GNU GPL in 1989. And since then, open-source projects have been defined as such by virtue of the licenses they adopt and whether those licenses meet the Open Source Definition. The continuing importance of licensing is shown by the periodic flame wars that erupt in this area. Recently, there have been two such flarings of strong feelings, both of which raise important issues.

First, we had the incident with Lerna, "a tool for managing JavaScript projects with multiple packages". It came about as a result of the way the US Immigration and Customs Enforcement (ICE) has been separating families and holding children in cage-like cells. The Lerna core team was appalled by this behavior and wished to do something concrete in response. As a result, it added an extra clause to the MIT license, which forbade a list of companies, including Microsoft, Palantir, Amazon, Motorola and Dell, from being permitted to use the code:

For the companies that are known supporters of ICE: Lerna will no longer be licensed as MIT for you. You will receive no licensing rights and any use of Lerna will be considered theft. You will not be able to pay for a license, the only way that it is going to change is by you publicly tearing up your contracts with ICE.

Many sympathized with the feelings about the actions of ICE and the intent of the license change. However, many also pointed out that such a move went against the core principles of both free software and open source. Freedom 0 of the Free Software Definition is "The freedom to run the program as you wish, for any purpose." Similarly, the Open Source Definition requires "No Discrimination Against Persons or Groups" and "No Discrimination Against Fields of Endeavor". The situation is clear-cut, and it didn't take long for the Lerna team to realize its error and revert the change:

Go to Full Article
Categories: Linux News

Weekend Reading: FOSS Projects

Linux Journal - Sat, 11/03/2018 - 07:15
by Carlie Fairchild

FOSS Project Spotlights provide an opportunity for free and open-source project team members to show Linux Journal readers what makes their project compelling. Join us this weekend as we explore some of the latest FOSS projects in the works.

 

FOSS Project Spotlight: Nitrux, a Linux Distribution with a Focus on AppImages and Atomic Upgrades

by Nitrux Latinoamericana S.C.

Nitrux is a Linux distribution with a focus on portable application formats like AppImages. Nitrux uses KDE Plasma 5 and KDE Applications, and it also uses our in-house software suite Nomad Desktop.

 

FOSS Project Spotlight: Tutanota, the First Encrypted Email Service with an App on F-Droid

by Matthias Pfau

Seven years ago, the Tutanota team began building an encrypted email service with a strong focus on security, privacy and open source. Long before the Snowden revelations, the team felt there was a need for easy-to-use encryption that would allow everyone to communicate online without being snooped upon.

 

FOSS Project Spotlight: LinuxBoot

by David Hendricks

Linux as firmware.

The more things change, the more they stay the same. That may sound cliché, but it's still as true for the firmware that boots your operating system as it was in 2001 when Linux Journal first published Eric Biederman's "About LinuxBIOS". LinuxBoot is the latest incarnation of an idea that has persisted for around two decades now: use Linux as your bootstrap.

 

FOSS Project Spotlight: CloudMapper, an AWS Visualization Tool

by Scott Piper

Duo Security has released CloudMapper, an open-source tool for visualizing Amazon Web Services (AWS) cloud environments.

When working with AWS, it's common to have a number of separate accounts run by different teams for different projects. Gaining an understanding of how those accounts are configured is best accomplished by visually displaying the resources of the account and how these resources can communicate. This complements a traditional asset inventory.

 

FOSS Project Spotlight: Ravada

by Francesc Guasch

Go to Full Article
Categories: Linux News

Creative Commons Working with Flickr, OSI Announces $200,000 Donation from Handshake, Intel's OTC Adopts Contributor Covenant, Artifact Digital Card Game Coming Soon to Linux and Facebook Open-Sources Suite of Kernel Components and Tools

Linux Journal - Fri, 11/02/2018 - 08:58

News briefs for November 2, 2018.

Creative Commons is working with Flickr and SmugMug, Flickr's parent company, to protect the Commons following Flickr's recent announcement that it will be limiting free accounts to 1,000 images. Ryan Merkley, Creative Commons CEO, writes, "We want to ensure that when users share their works that they are available online in perpetuity and that they have a great experience." But he also admits that "the business models that have powered the web for so long are fundamentally broken. Storage and bandwidth for hundreds of millions (if not billions) of photos is very expensive. We've all benefited from Flickr's services for so long, and I'm hopeful we will find a way forward together."

The Open Source Initiative announces a $200,000 donation from Handshake, "the largest single donation in organizational history". Patrick Masson, the OSI's general manager, says "Handshake's funding will allow us to extend the reach and impact of our Working Groups and Incubator Projects, many which were established to confront the growing efforts to manipulate open source through 'fauxpen source software' and 'open-washing'."

Intel's Open-Source Technology Center (OTC) has adopted the Contributor Covenant for all of its open-source projects. Phoronix reports that it chose the Contributor Covenant because "it's well written and represented, provides a clear expression of expectations, and represents open-source best practices." You can read the Contributor Covenant here.

Valve's digital card game Artifact is scheduled to be released November 28th with Linux support. According to Gaming on Linux, the new game will also have a built-in tournament feature. See the official Artifact site for more details.

Facebook recently announced it's open-sourcing a new suite of Linux kernel components and related tools "that address critical fleet management issues. These include resource control, resource utilization, workload isolation, load balancing, measuring, monitoring, and much more". According to the Facebook blog post, "the kernel components and tools included in this release can be adapted to solve a virtually limitless number of production problems."

News creative commons Handshake OSI Intel gaming Valve Facebook open source
Categories: Linux News

The Asus Eee: How Close Did the World Come to a Linux Desktop?

Linux Journal - Fri, 11/02/2018 - 08:30
by Jeff Siegel

It was white, not much bigger than my hands held side by side, weighed about as much as a bottle of wine, and it came in a shiny, faux-leather case. It was the $199 Asus Eee 901, and I couldn't believe that a computer could be that powerful, that light and that much fun.

This is the story of the brief, shining history of the Asus Eee, the first netbook—a small, cheap and mostly well-made laptop that dominated the computer industry for two or three years about a decade ago. It's not so much that the Eee was ahead of its time, which wasn't that difficult in an industry then dominated by pricey and bulky laptops that didn't always have a hard drive, and by desktop designs that hadn't evolved much past the first IBM 8086 box.

Rather, the Eee was ahead of everyone's time. It ran a Linux operating system with a tabbed interface and splashy icons, and the hardware included wireless, Bluetooth, a webcam and an SSD hard drive—all in a machine that weighed just 2.5 pounds. In this, it teased many of the concepts that tech writer Mark Wilson says we take for granted in today's cloud, smartphone and Chromebook universe.

The Eee was so impressive that even Microsoft, whose death grip on the PC world seemed as if it would never end, took notice. As everyone from Dell to HP to Samsung to Toshiba to Sony to Acer to one-offs and "never-weres" raced netbooks into production, Microsoft offered manufacturers a version of Windows XP (and later a truncated Windows 7) to cram onto the machines. Because we can't have the masses running a Linux OS, can we?

"The Eee gave regular people something they couldn't have before", says Dan Ackerman, a longtime section editor at CNET who wrote some of the website's original Eee and netbook reviews. "Laptops had always been ridiculously expensive. The Eee wasn't, and it gave regular people a chance to buy a laptop that was smaller and more portable and that they could be productive with. I always gave Asus credit—they understood the role of form and function."

Netbook History

The computer world never had really seen anything like the first Eee, which didn't even have a name when it was launched in 2007 (although it later would be called both the 701 and the 4G). In fact, say those who reviewed the 701, it wasn't so much a product as a proof of concept—that Asus could make something that small and that cheap that worked.

There had been small laptops before, of course, like the Intel Classmate PC and the OLPC XO-1. But those were specialized machines designed to bring computing and the internet to students throughout the world, not necessarily consumer products.

Go to Full Article
Categories: Linux News

October 2018 report: LTS, Monkeysphere, Flatpak, Kubernetes, CD archival and calendar project

Anarcat - Thu, 11/01/2018 - 15:12
Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

GnuTLS

As discussed last month, one of the options to resolve the pending GnuTLS security issues was to backport the latest 3.3.x series (3.3.30), an update proposed and then uploaded as DLA-1560-1. Following a suggestion, I included an explicit NEWS.Debian item warning people about the upgrade, a warning also included in the advisory itself.

The most important change is probably dropping SSLv3, RC4, HMAC-SHA384 and HMAC-SHA256 from the list of algorithms, which could impact interoperability. Considering how old RC4 and SSLv3 are, however, this should be a welcome change. As for the HMAC changes, those are mandatory to fix the targeted vulnerabilities (CVE-2018-10844, CVE-2018-10845, CVE-2018-10846).

Xen

Xen updates had been idle for a while in LTS, so I bit the bullet and made a first pass over the pending vulnerabilities. I sent the result to the folks over at Credativ who maintain the 4.4 branch, and they came back with a set of proposed updates, which I briefly reviewed. Unfortunately, the patches were too deep for me: all I was able to do was to confirm consistency with upstream patches.

I also brought up a discussion regarding the viability of Xen in LTS, especially regarding the "speculative execution" vulnerabilities (XSA-254 and related). My understanding was that upstream Xen fixes are not (yet?) complete, but apparently that is incorrect, as Peter Dreuw is "confident in the Xen project to provide a solution for these issues". I nevertheless consider, like Red Hat, that the simpler KVM implementation might provide more adequate protection against those kinds of attacks, and LTS users should seriously consider switching to KVM for hosting untrusted virtual machines, even if only because that code is actually mainline in the kernel while Xen is unlikely to ever be. It might be, as Dreuw said, simpler to upgrade to stretch than to switch virtualization systems...

When all is said and done, however, Linux and KVM are patched in Jessie at the time of writing, while Xen is not (yet).

Enigmail

I spent a significant amount of time working on Enigmail this month again, this time specifically working on reviewing the stretch proposed update to gnupg from Daniel Kahn Gillmor (dkg). I did not publicly share the code review as we were concerned it would block the stable update, which seemed to be in jeopardy when I started working on the issue. Thankfully, the update went through but it means it might impose extra work on leaf packages. Monkeysphere, in particular, might fail to build from source (FTBFS) after the gnupg update lands.

In my tests, however, it seems that packages using GPG can deal with the update correctly. I tested Monkeysphere, Password Store, git-remote-gcrypt and Enigmail, all of which passed a summary smoke test. I have tried to summarize my findings on the mailing list. Basically our options for the LTS update are:

  1. pretend Enigmail works without changing GnuPG, possibly introducing security issues

  2. ship a backport of GnuPG and Enigmail through jessie-sloppy-backports

  3. package OpenPGP.js and backport all the way down to jessie

  4. remove Enigmail from jessie

  5. backport the required GnuPG patchset from stretch to jessie

So far I've settled on that last option as my favorite approach...

Firefox / Thunderbird and finding work

... which brings us to the Firefox and Thunderbird updates. I was assuming those were going ahead, but the status of those updates currently seems unclear. This is a symptom of a larger problem in the LTS work organization: some packages can stay "claimed" for a long time without an obvious status update.

We discussed ways of improving on this process and, basically, I will try to be more proactive in taking over packages from others and reaching out to others to see if they need help.

A note on GnuPG

As an aside to the Enigmail / GnuPG review, I was struck by the ... peculiarities in the GnuPG code. I discovered that GnuPG, instead of using the standard resolver, implements its own internal full-stack DNS server, complete with UDP packet parsing. That's 12,000 lines of code right there. There are also abstraction leaks, like using "1" and "0" as boolean values inside functions (as opposed to passing an integer and converting it to a string on output).

A major change in the proposed patchset is a set of changes to the --with-colons batch output, which GnuPG consumers (like GPGME) are supposed to use to interoperate with GnuPG. Having written such a parser myself, I can attest to how difficult parsing those data structures is. Normally, you should always be using GPGME instead of parsing that output directly, but unfortunately GPGME does not do everything GPG does: signing operations and keyring management, for example, have long been considered out of scope, so users are forced to parse that output.

Long story short, GPG consumers still use --with-colons directly (and that includes Enigmail) because they have to. In this case, critical components were missing from that output (e.g. knowing which key signed which UID) so they were added in the patch. That's what breaks the Monkeysphere test suite, which doesn't expect a specific field to be present. Later versions of the protocol specification have been updated (by dkg) to clarify that might happen, but obviously some have missed the notice, as it came a bit late.
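To give a rough idea of what consumers are dealing with, here's a minimal sketch (mine, not from the patchset) that pulls user IDs out of the colon-delimited listing; the field positions follow GnuPG's DETAILS documentation, though the exact fields emitted vary by version:

# machine-readable key listing; field 1 is the record type, field 10 the user ID
# (a throwaway example -- real consumers should go through GPGME where they can)
gpg --list-keys --with-colons | awk -F: '$1 == "uid" { print $10 }'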

In any case, the review did not make me confident in the software architecture or implementation of the GnuPG program.

autopkgtest testing

As part of our LTS work, we often run tests to make sure everything is in order. Starting with Jessie, we are now seeing packages with autopkgtest enabled, so I started meddling with that program. One of the ideas I was hoping to implement was to unify my virtualization systems. Right now I'm using:

Because sbuild can talk with autopkgtest, and autopkgtest can talk with qemu (which can use KVM images), I figured I could get rid of schroot. Unfortunately, I met a few snags:

  • #911977: how do we correctly guess the VM name in autopkgtest?
  • #911963: qemu build fails with proxy_cmd: parameter not set (fixed and provided a patch)
  • #911979: fails on chown in autopkgtest-qemu backend
  • #911981: qemu server warns about missing CPU features

So I gave up on that approach. But I did get autopkgtest working and documented the process in my quick Debian development guide.
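For reference, the qemu backend is driven roughly like this (a sketch; the image path is a placeholder and the exact options are best checked against the autopkgtest-virt-qemu man page):

# run a package's test suite inside a QEMU (optionally KVM-accelerated) testbed image
autopkgtest mypackage_1.2-3.dsc -- qemu /srv/vm/autopkgtest-sid.img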

Oh, and I also got sucked into wiki stylesheet work (#864925) after battling with the SystemBuildTools page.

Spamassassin followup

Last month I agreed we could backport the latest upstream version of SpamAssassin (a recurring pattern). After getting the go-ahead from the maintainer, I got a test package uploaded, but the actual upload will need to wait for the stretch update (#912198) to land to avoid a versioning conflict.

Salt Stack

My first impression of Salt was not exactly impressive. The CVE-2017-7893 issue was rather unclear: first upstream fixed the issue, but reverted the default flag which would enable signature forging after it was discovered this would break compatibility with older clients.

But even worse, the 2014 version of Salt shipped in Jessie did not have master signing in the first place, which means there was simply no way to protect against master impersonation, a worrisome concept. But I assumed this was expected behavior, triaged this away from jessie, and tried to forget about the horrors I had seen.

phpLDAPadmin with sunweaver

I looked next at the phpLDAPadmin (or PHPLDAPadmin?) vulnerabilities, but could not reproduce the issue using the provided proof of concept. I have also audited the code and it seems pretty clear the code is protected against such an attack, as was explained by another DD in #902186. So I asked Mitre for rejection, and uploaded DLA-1561-1 to fix the other issue (CVE-2017-11107). Meanwhile the original security researcher acknowledged the security issue was a "false positive", although only in a private email.

I almost did an NMU for the package, but the security team requested that I wait, and marked the bug as grave so the package gets kicked out of buster instead. I at least submitted the patch, originally provided by Ubuntu folks, upstream.

Smarty3

Finally, I worked on the smarty3 package. I confirmed the package in jessie is not vulnerable, because Smarty hadn't yet had the brilliant idea of "optimizing" realpath by rewriting it with new security vulnerabilities. Indeed, the CVE-2018-13982 proof of concept and the CVE-2018-16831 proof of concept both fail in jessie.

I tried to audit the patch shipped with stretch to make sure it fixed the security issue in question (without introducing new ones, of course), but abandoned parsing the patch because this regex gave me a headache:

'%^(?<root>(?:[[:alpha:]]:[\\\\]|/|[\\\\]{2}[[:alpha:]]+|[[:print:]]{2,}:[/]{2}|[\\\\])?)(?<path>(?:[[:print:]]*))$%u'

Who is supporting our users?

I finally participated in a discussion regarding concerns about support of cloud images for LTS releases. I proposed that, like other parts of Debian, responsibility for those images would shift to the LTS team when official support is complete. Cloud images fall in that weird space (i.e., "Installing Debian") which is not traditionally covered by the LTS team.

Hopefully that will become the policy, but only time will tell how this will play out.

Other free software work

irssi sandbox

I had been uncomfortable running irssi as my main user on my server for a while. It's a constantly running, network-facing program, sometimes connecting to shady servers too. So it made sense to run it as a separate user and, while I'm there, start it automatically on boot.

I created the following file in /etc/systemd/system/irssi@.service, based on this gist:

[Unit]
Description=IRC screen session
After=network.target

[Service]
Type=forking
User=%i
ExecStart=/usr/bin/screen -dmS irssi irssi
ExecStop=/usr/bin/screen -S irssi -X stuff '/quit\n'
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target

A whole apparmor/selinux/systemd profile could be written for irssi, of course, but I figured I would start with NoNewPrivileges. Unfortunately, that line breaks screen, which is sgid utmp, which counts as some sort of "new privilege". So I'm running this as a vanilla service. To enable it, simply enable the service with the right username, previously created with adduser:

systemctl enable irssi@foo.service
systemctl start irssi@foo.service

Then I join the session by logging in as the foo user, which can be configured in .ssh/config as a convenience host:

Host irc.anarc.at
    Hostname shell.anarc.at
    User foo
    IdentityFile ~/.ssh/id_ed25519_irc
    # using command= in authorized_keys until we're all on buster
    #RemoteCommand screen -x
    RequestTTY force

Then the ssh irc.anarc.at command rejoins the screen session.

Monkeysphere revival

Monkeysphere was in bad shape in Debian buster. The bit-rotten test suite was failing, and the package was about to be removed from the next Debian release. I filed and worked on many critical bugs (Debian bug #909700, Debian bug #908228, Debian bug #902367, Debian bug #902320, Debian bug #902318, Debian bug #899060, Debian bug #883015), but the final fix came from another user. I was also welcomed onto the Debian packaging team, which should allow me to make a new release next time we have similar issues, something that was a blocker this time round.

Unfortunately, I had to abandon the Monkeysphere FreeBSD port. I had simply forgotten about that commitment and, since I do not run FreeBSD anywhere anymore, it made little sense to keep on doing so, especially since most of the recent updates were done by others anyways.

Calendar project

I've been working on a photography project since the beginning of the year. Each month, I pick the best picture out of my various shoots and will collect those in a 2019 calendar. I documented my work in the photo page, but most of my work in October was around finding a proper tool to layout the calendar itself. I settled on wallcalendar, a beautiful LaTeX template, because the author was very responsive to my feature request.

I also figured out which events to include in the calendar and a way to generate moon phases (now part of the undertime package) for the local timezone. I still have to figure out which other astronomical events to include. I had no response from the local planetarium, but (as always) good feedback from NASA folks, who pointed me at useful resources to top up the calendar.

Kubernetes

I got deeper into Kubernetes work by helping friends setup a cluster and share knowledge on how to setup and manage the platforms. This led me to fix a bug in Kubespray, the install / upgrade tool we're using to manage Kubernetes. To get the pull request accepted, I had to go through the insanely byzantine CLA process of the CNCF, which was incredibly frustrating, especially since it was basically a one-line change. I also provided a code review of the Nextcloud helm chart and reviewed the python-hvac ITP, one of the dependencies of Kubespray.

As I get more familiar with Kubernetes, it does seem like it can solve real problems especially for shared hosting providers. I do still feel it's overly complex and over-engineered. It's difficult to learn and moving too fast, but Docker and containers are such a convenient way to standardize shipping applications that it's hard to deny this new trend does solve a problem that we have to fix right now.

CD archival

As part of my work on archiving my CD collection, I contributed three pull requests to fix issues I was having with the project, mostly regarding corner cases but also improvements on the Dockerfile. At my suggestion, upstream also enabled automatic builds for the Docker image which should make it easier to install and deploy.

I still wish to write an article on this, to continue my series on archives, which could happen in November if I can find the time...

Flatpak conversion

After reading a convincing benchmark I decided to give Flatpak another try and ended up converting all my Snap packages to Flatpak.

Flatpak has many advantages:

  • it's decentralized: like APT or F-Droid repositories, anyone can host their own (there is only one Snap repository, managed by Canonical)

  • it's faster: the above benchmarks hinted at this, but I could also confirm Signal starts and runs faster under Flatpak than Snap

  • it's standardizing: much of the work Flatpak is doing to make sense of how to containerize desktop applications is being standardized (and even adopted by Snap)

Much of this was spurred by the breakage of Zotero in Debian (Debian bug #864827) due to the Firefox upgrade. I made a wiki page to tell our users how to install Zotero in Debian considering Zotero might take a while to be packaged back in Debian (Debian bug #871502).
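For anyone attempting a similar conversion, the basic moves look something like the sketch below; the Snap name and Flathub application ID for Signal are assumptions worth double-checking:

# add the Flathub remote once, then replace a Snap with its Flatpak equivalent
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
snap remove signal-desktop                 # assumed Snap package name
flatpak install flathub org.signal.Signal  # assumed Flathub application ID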

Debian work

Without my LTS hat, I worked on the following packages:

Other work

Usual miscellaneous:

Categories: External Blogs

Why Your Server Monitoring (Still) Sucks

Linux Journal - Thu, 11/01/2018 - 10:25
by Mike Julian

Five observations about why your server monitoring still stinks, from a monitoring specialist-turned-consultant.

Early in my career, I was responsible for managing a large fleet of printers across a large campus. We're talking several hundred networked printers. It often required a 10- or 15-minute walk to get to some of those printers physically, and many were used only sporadically. I didn't always know what was happening until I arrived, so it was anyone's guess as to the problem. Simple paper jam? Driver issue? Printer currently on fire? I found out only after the long walk. Making this even more frustrating for everyone was that, thanks to the infrequent use of some of them, a printer with a problem might go unnoticed for weeks, making itself known only when someone tried to print with it.

Finally, it occurred to me: wouldn't it be nice if I knew about the problem and the cause before someone called me? I found my first monitoring tool that day, and I was absolutely hooked.

Since then, I've helped numerous people overhaul their monitoring systems. In doing so, I noticed the same challenges repeat themselves regularly. If you're responsible for managing the systems at your organization, read on; I have much advice to dispense.

So, without further ado, here are my top five reasons why your monitoring is crap and what you can do about it.

1. You're Using Antiquated Tools

By far, the most common reason for monitoring being screwed up is a reliance on antiquated tools. You know that's your issue when you spend too much time working around the warts of your monitoring tools or when you've got a bunch of custom code to get around some major missing functionality. But the bottom line is that you spend more time trying to fix the almost-working tools than just getting on with your job.

The problem with using antiquated tools and methodologies is that you're just making it harder for yourself. I suppose it's certainly possible to dig a hole with a rusty spoon, but wouldn't you prefer to use a shovel?

Great tools are invisible. They make you more effective, and the job is easier to accomplish. When you have great tools, you don't even notice them.

Maybe you don't describe your monitoring tools as "easy to use" or "invisible". The words you might opt to use would make my editor break out a red pen.

This checklist can help you determine if you're screwing yourself.

Go to Full Article
Categories: Linux News

System76 Announces American-Made Desktop PC with Open-Source Parts

Linux Journal - Thu, 11/01/2018 - 10:01
by Bryan Lunduke

Early in 2017—nearly two years ago—System76 invited me, and a handful of others, out to its Denver headquarters for a sneak peek at something new they'd been working on.

We were ushered into a windowless, underground meeting room. Our phones and cameras confiscated. Seriously. Every word of that is true. We were sworn to total and complete secrecy. Assumedly under penalty of extreme death...though that part was, technically, never stated.

Once the head honcho of System76, Carl Richell, was satisfied that the room was secure and free from bugs, the presentation began.

System76 told us the company was building its own desktop computers. Ones that it designed itself. From-scratch cases. With wood. And inlaid metal. What's more, these designs would be open. All built right there in Denver, Colorado.

We were intrigued.

Then they showed them to us, and we darn near lost our minds. They were gorgeous. We all wanted them.

But they were not ready yet. This was early on in the design and engineering, and they were looking for feedback—to make sure System76 was on the right track.

They were.

Flash-forward to today (November 1, 2018), and these Linux-powered, made in America desktop machines are finally being unveiled to the world as the Thelio line (which they've been teasing for several weeks with a series of sci-fi themed stories).

The Thelio comes in three sizes:

  • Thelio (aka "small") — max 32GB RAM, 24TB storage.
  • Thelio Major (aka "medium") — max 128GB RAM, 46TB storage.
  • Thelio Massive (aka "large") — max 768GB RAM, 86TB storage.

All three sport the same basic look: part black metal, part wood (with either maple or walnut options) with rounded side edges. The cases open with a single slide up of the outer housing, with easy swapping of components. Lots of nice little touches, like a spot for in-case storage of screws that can be used in securing drives.

In an awesomely nerdy touch, the rear exhaust grill shows the alignment of planets in the solar system...at UNIX Epoch time. Also known as January 1, 1970. A Thursday.

Go to Full Article
Categories: Linux News

The Monitoring Issue

Linux Journal - Thu, 11/01/2018 - 09:20
by Bryan Lunduke

In 1935, Austrian physicist Erwin Schrödinger, still flying high after his Nobel Prize win two years earlier, created a simple thought experiment.

It ran something like this:

If you have a file server, you cannot know if that server is up or down...until you check on it. Thus, until you use it, a file server is—in a sense—both up and down. At the same time.

This little brain teaser became known as Schrödinger's File Server, and it's regarded as the first known critical research on the intersection of Systems Administration and Quantum Superposition. (Though, why Erwin chose, specifically, to use a "file server" as an example remains a bit of a mystery—as the experiment works equally well with any type of server. It's like, we get it, Erwin. You have a nice NAS. Get over it.)

...

Okay, perhaps it didn't go exactly like that. But I'm confident it would have...you know...had good old Erwin had a nice Network Attached Storage server instead of a cat.

Regardless, the lessons from that experiment certainly hold true for servers. If you haven't checked on your server recently, how can you be truly sure it's running properly? Heck, it might not even be running at all!

Monitoring a server—to be notified when problems occur or, even better, when problems look like they are about to occur—seems, at first blush, to be a simple task. Write a script to ping a server, then email me when the ping times out. Run that script every few minutes and, shazam, we've got a server monitoring solution! Easy-peasy, time for lunch!
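For the record, the script being described really is only a handful of lines. Here's a sketch, assuming a working local mail setup; the hostname, address and script path are placeholders:

#!/bin/bash
# naive check: ping the host, email an alert if it doesn't answer
HOST="server.example.com"
ALERT="admin@example.com"

if ! ping -c 3 -W 5 "$HOST" > /dev/null 2>&1; then
    echo "$HOST did not answer ping at $(date)" | mail -s "ALERT: $HOST is down" "$ALERT"
fi

# crontab entry to run it every five minutes:
# */5 * * * * /usr/local/bin/ping-check.sh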

Whoah, there! Not so fast!

That server monitoring solution right there? It stinks. It's fragile. It gives you very little information (other than the results of a ping). Even for administering your own home server, that's barely enough information and monitoring to keep things running smoothly.

Even if you have a more robust solution in place, odds are there are significant shortcomings and problems with it. Luckily, Linux Journal has your back—this issue is chock full of advice, tips and tricks for how to keep your servers effectively monitored.

You know, so you're not just guessing if the cat is still alive in there.

Mike Julian (author of O'Reilly's Practical Monitoring) goes into detail on a bunch of the ways your monitoring solution needs serious work in his adorably titled "Why Your Server Monitoring (Still) Sucks" article.

We continue "telling it like it is" with Corey Quinn's treatise on Amazon's CloudWatch, "CloudWatch Is of the Devil, but I Must Use It". Seriously, Corey, tell us how you really feel.

Go to Full Article
Categories: Linux News

GNOME 3.30.2 Released, Braiins OS Open-Source System for Cryptocurrency Embedded Devices Launched, Ubuntu 19.04 Dubbed Disco Dingo, Project OWL Wins IBM's Call for Code Challenge and Google Announces New Security Features

Linux Journal - Thu, 11/01/2018 - 09:01

News briefs for November 1, 2018.

GNOME 3.30.2 was released yesterday. It includes several bug fixes, and packages should arrive in your distro of choice soon, but if you want to compile it yourself, you can get it here. The full list of changes is available here. This is the last planned point release of the 3.30 desktop environment. The 3.32 release is expected to be available in spring 2019.

Braiins Systems has announced Braiins OS, which claims to be "the first fully open source system for cryptocurrency embedded devices". FOSSBYTES reports that the initial release is based on OpenWrt. In addition, Braiins OS "keeps monitoring the working conditions and hardware to create reports of errors and performance. Braiins also claimed to reduce power consumption by 20%".

Ubuntu 19.04 will be called Disco Dingo, and the release is scheduled for April 2019. Source: OMG! Ubuntu!.

IBM announces Project OWL is the winner of its first Call for Code challenge. Project OWL is "an IoT and software solution that keeps first responders and victims connected in a natural disaster". The team will receive $200,000 USD and will be able to deploy the solution via the IBM Corporate Service Corps. OWL stands for "Organization, Whereabouts, and Logistics", and it's a hardware/software solution that "provides an offline communication infrastructure that gives first responders a simple interface for managing all aspects of a disaster".

Google yesterday announced four new security features for Google accounts. According to ZDNet, Google won't allow you to sign in if you have disabled JavaScript in your browser. It plans to pull data from Google Play Protect to list all malicious apps installed on Android phones, and it also now will notify you whenever you share any data from your Google account. Finally, it has implemented a new set of procedures to help users after an account has been attacked.

News GNOME Distributions cryptomining Ubuntu IBM Google Security
Categories: Linux News

Fedora 29 Officially Released, Red Hat Enterprise Linux 7.6 Launched, New Version of Linux Lite, Google AI Tracking Humpback Whale Songs, and Resin.io Announces openBalena and a Name Change

Linux Journal - Wed, 10/31/2018 - 09:03

News briefs for October 31, 2018.

The Fedora Project Manager announced the official release of Fedora 29 yesterday. This release is the first to include the Fedora Modularity feature across all variants. Other changes include "GNOME 3.30 on the desktop, ZRAM for our ARM images, and a Vagrant image for Fedora Scientific". You can download it from here.

Red Hat Enterprise Linux 7.6 launched yesterday with improved security. eWeek reports that the new release features "TPM 2.0 support for security authentication, as well as integrating the open source nftables firewall technology effort". eWeek quotes principal project manager Steve Almy: "The TPM 2.0 integration in 7.6 provides an additional level of security by tying the hands-off decryption to server hardware in addition to the network bound disk encryption (NBDE) capability, which operates across the hybrid cloud footprint from on-premise servers to public cloud deployments." Version 7.6 is the second major milestone release of 2018.

Linux Lite 4.2 Final is now available. Linux Lite creator Jerry Bezencon says the release is "a 'refinement' and not a 'major upgrade'. There are some new wallpapers thanks to @whateverthing and some minor tweaks here and there." One change with this version is the addition of Redshift, which "adjusts the color temperature according to the position of the sun".

Google and a group of cetologists have been using AI to listen to years of undersea recordings with the hope of creating "a machine learning model that can spot humpback whale calls". According to TechCrunch, the project is part of Google's AI for Social Good initiative.

Resin.io, a container-based server platform for Linux device management, has "changed its name to balena and released an open source version of its IoT fleet management platform for Linux devices called openBalena", Linux Gizmos reports. The company's founder and CEO says the name change is due "to trademark issues, to cannabis references, and to people mishearing it as 'raisin'". balenaOS is "an open source spinoff of the container-based device software that works with balenaCloud", and the new openBalena "is an open version of the balenaCloud server software. Customers can now choose between letting balena manage their fleet of devices or building their own openBalena based server platform that manages fleets of devices running balenaOS".

News Fedora Red Hat Linux Lite Google AI Machine Learning Containers
Categories: Linux News

Episode 5: Linux is Personal

Linux Journal - Wed, 10/31/2018 - 08:06
Reality2.0 - Episode 5: Linux is Personal

Doc Searls and Katherine Druckman talk to Corbin Champion about Userland, an easy way to run Linux on your Android device, and other new projects.

Categories: Linux News

CloudWatch Is of the Devil, but I Must Use It

Linux Journal - Wed, 10/31/2018 - 06:30
by Corey Quinn

Let's talk about Amazon CloudWatch.

For those fortunate enough to not be stuck in the weeds of Amazon Web Services (AWS), CloudWatch is, and I quote from the official AWS description, "a monitoring and management service built for developers, system operators, site reliability engineers (SRE), and IT managers." This is all well and good, except for the part where there isn't a single named constituency who enjoys working with the product. Allow me to dispense some monitoring heresy.

Better, let me describe this in the context of the 14 Amazon Leadership Principles that reportedly guide every decision Amazon makes. When you take a hard look at CloudWatch's complete failure across all 14 Leadership Principles, you wonder how this product ever made it out the door in its current state.

"Frugality"

I'll start with billing. Normally left for the tail end of articles like this, the CloudWatch billing paradigm is so terrible, I'm leading with it instead. You get billed per metric, per month. You get billed per thousand metrics you request to view via the API. You get billed per dashboard per month. You get billed per alarm per month. You get charged for logs based upon data volume ingested, data volume stored and "vended logs" that get published natively by AWS services on behalf of the customer. And, you get billed per custom event. All of this can be summed up best as "nobody on the planet understands how your CloudWatch metrics and logs get billed", and it leads to scenarios where monitoring vendors can inadvertently cost you thousands of dollars by polling CloudWatch too frequently. When the AWS charges are larger than what you're paying your monitoring vendor, it's not a wonderful feeling.

"Invent and Simplify"

CloudWatch Logs, CloudWatch Events, Custom Metrics, Vended Logs and Custom Dashboards all mean different things internally to CloudWatch from what you'd expect, compared to metrics solutions that actually make some fathomable level of sense. There are, thus, multiple services that do very different things, all operating under the "CloudWatch" moniker. For example, it's not particularly intuitive to most people that scheduling a Lambda function to invoke once an hour requires a custom CloudWatch Event. It feels overly complicated, incredibly confusing, and very quickly, you find yourself in a situation where you're having to build complex relationships to monitor things that are themselves far simpler.
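To make that concrete, here is roughly what "invoke a Lambda function once an hour" involves with the AWS CLI; the account ID, region, rule name and function name are placeholders:

# create a rule that fires every hour
aws events put-rule --name hourly-invoke --schedule-expression "rate(1 hour)"

# allow CloudWatch Events to invoke the function
aws lambda add-permission --function-name my-function \
    --statement-id hourly-invoke --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:123456789012:rule/hourly-invoke

# attach the function as the rule's target
aws events put-targets --rule hourly-invoke \
    --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:my-function'

Three calls across two services to get a cron-style schedule, which is rather the point.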

Go to Full Article
Categories: Linux News

Kali Linux 2018.4 Released, ProtonDB Reports 2671 Games Now Work on Linux, Google Discover Rolling Out, Barcelona Investing 78.7% of IT Budget on Open Source and Manjaro New Stable Update

Linux Journal - Tue, 10/30/2018 - 08:40

News briefs for October 30, 2018.

Kali Linux 2018.4 was released yesterday. This is the final release of this year, and it brings the kernel to version 4.18.10, fixes several bugs and has many updated packages, including "a very experimental 64-bit Raspberry Pi 3 image". The new version also includes Wireguard, "a powerful and easy to configure VPN solution that eliminates many of the headaches one typically encounters setting up VPNs". See the Wireguard on Kali post for more information. You can download Kali from here.

The ProtonDB reports that 2,671 games now work on Linux since Valve Software released Proton two months ago. Proton is integrated with Steam Play to make playing Windows games on Linux easy. It "comprises other popular tools like Wine and DXVK among others that a gamer would otherwise have to install and maintain themselves. This greatly eases the burden for users to switch to Linux without having to learn the underlying systems or losing access to a large part of their library of games."

Google Discover has started rolling out to google.com on mobile devices. According to 9to5Google, Google Discover is a rebrand of Google Feeds, and "is part of the company's efforts to surface information without users actively having to ask for it".

The European Commission reports that the city of Barcelona is now investing 78.7% of its IT budget on open source, and it expects nearly all of its IT budget to be linked to open-source projects by 2020. Xavier Roca, director of IT development for Barcelona, commented: "We will continue to work with proprietary software solutions, as we have systems in place that require maintenance. One day we hope everything will be open source, but today that is impossible."

Manjaro released a new stable update this week. Version 2018-10-28 updates systemd, Deepin, Bootsplash, NVIDIA drivers to 410.73, Firefox to v64b4 and more. You can find the full list of changes here.

News Kali Linux Security VPN gaming Google open source Manjaro
Categories: Linux News

Normalizing Filenames and Data with Bash

Linux Journal - Tue, 10/30/2018 - 08:40
by Dave Taylor

URLify: convert letter sequences into safe URLs with hex equivalents.

This is my 155th column. That means I've been writing for Linux Journal for:

$ echo "155/12" | bc
12

No, wait, that's not right. Let's try that again:

$ echo "scale=2;155/12" | bc
12.91

Yeah, that many years. Almost 13 years of writing about shell scripts and lightweight programming within the Linux environment. I've covered a lot of ground, but I want to go back to something that's fairly basic and talk about filenames and the web.

It used to be that if you had filenames that had spaces in them, bad things would happen: "my mom's cookies.html" was a recipe for disaster, not good cookies—um, and not those sorts of web cookies either!

As the web evolved, however, encoding of special characters became the norm, and every Web browser had to be able to manage it, for better or worse. So spaces became either "+" or %20 sequences, and everything else that wasn't a regular alphanumeric character was replaced by its hex ASCII equivalent.

In other words, "my mom's cookies.html" turned into "my+mom%27s+cookies.html" or "my%20mom%27s%20cookies.html". Many symbols took on a second life too, so "&" and "=" and "?" all got their own meanings, which meant that they needed to be protected if they were part of an original filename too. And what about if you had a "%" in your original filename? Ah yes, the recursive nature of encoding things....

So purely as an exercise in scripting, let's write a script that converts any string you hand it into a "web-safe" sequence. Before starting, however, pull out a piece of paper and jot down how you'd solve it.

Normalizing Filenames for the Web

My strategy is going to be easy: pull the string apart into individual characters, analyze each character to identify if it's an alphanumeric, and if it's not, convert it into its hexadecimal ASCII equivalent, prefacing it with a "%" as needed.

There are a number of ways to break a string into its individual letters, but let's use Bash string variable manipulations, recalling that ${#var} returns the number of characters in variable $var, and that ${var:x:1} will return just the letter in $var at position x. Quick now, does indexing start at zero or one?

Here's my initial loop to break $original into its component letters:
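The column's own code continues in the full article; purely as a hedged sketch of what such a loop might look like (not necessarily Dave's actual code), here's one way to walk the string with ${#var} and ${var:x:1}, letting printf produce the hex escape for anything that isn't a plain alphanumeric:

#!/bin/bash
# urlify.sh -- sketch: web-encode the string passed as the first argument
original="$1"
encoded=""

i=0
while [ $i -lt ${#original} ]; do
    char="${original:$i:1}"        # indexing starts at zero
    case "$char" in
        [a-zA-Z0-9._~-]) encoded="$encoded$char" ;;
        *) encoded="$encoded$(printf '%%%02X' "'$char")" ;;
    esac
    i=$(( i + 1 ))
done
echo "$encoded"

Running it against the earlier example turns "my mom's cookies.html" into my%20mom%27s%20cookies.html, matching the second of the two encodings shown above.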

Go to Full Article
Categories: Linux News

Montréal-Python 73: Despotic Wagon

Montreal Python - Mon, 10/29/2018 - 23:00

Just in time for PyCon Canada, we are organizing an amazing evening with great local Pythonistas. It is your chance to come support them, see a preview of their talks and, who knows, maybe give them some feedback.

For PyCon Canada: don't forget it's next month, November 10-11, in Toronto, and there are still some tickets available. You can pick yours up at https://2018.pycon.ca/registration.

Presentations

Andrew Francis

Physical libraries are great! Managing library material via web interfaces leaves much to be desired. In the age of Siri and Alexa, why can't one manage one's library loans with text messaging or voice? This talk discusses these questions and answers them by prototyping a Python-based conversational agent.

Python packaging for everyone - Eric Araujo

Packaging in Python used to be a complicated affair, for technical and human reasons. Thankfully, in recent years the Python community has developed robust tools and practices. If you are wondering how to develop and distribute your project, this talk will show you the best of 2018!

Numpy to PyTorch - Shagun Sodhani

Numpy is the de-facto choice for array-based operations, while PyTorch is largely used as a deep learning framework. At the core, both provide a powerful N-dimensional tensor. This talk will focus on the similarities and differences between the two and how we can use PyTorch to augment Numpy.

Why are robots becoming Pythonistas? - Maxime St-Pierre

Introduction: In the fast-paced and intense world of robotics, many praise a particular language, and this godsend is Python. In this talk, we will look at some robotic frameworks and try to understand why Python is a popular alternative to C++ and Java.

Keep It Simply Annotated, Stupid - Sébastien Portebois

Type declarations in Python. Heresy? Since when? Let's take a quick tour of the support from Python 2.7 to 3.7, the constraints for developers and at runtime, and above all: why would we, or should we, do this!

When

Monday November 5th, 2018 at 6PM

Where

Shopify Montreal Office 490 rue de la Gauchetière Montréal, Québec

Schedule
  • 6:00PM - Doors open
  • 6:30PM - Presentations
  • 8:00PM - End of the event
  • 8:15PM - Benelux
Categories: External Blogs

Bryan Lunduke Is New LJ Deputy Editor

Linux Journal - Mon, 10/29/2018 - 09:02
by Bryan Lunduke

Portland, Oregon, October 29, 2018 — Today, Bryan Lunduke announced that he is officially joining the Linux Journal team as "Deputy Editor" of the illustrious — and long-running — Linux magazine.

"I've been a fan of Linux Journal for almost as long as I've been using Linux," beamed Lunduke. "To be joining a team that has been producing such an amazing magazine for nearly a quarter of a century? It's a real honor." In November of 2017, SUSE—the first Linux-focused company ever created—announced Lunduke's departure to re-focus on journalism. Now, furthering that goal, Lunduke has joined the first Linux-focused magazine ever created.

Lunduke's popular online show, the aptly named "Lunduke Show", will continue to operate as a completely independent entity with no planned changes to production schedules or show content.

Sources say Lunduke is "feeling pretty fabulous right about now." No confirmation, as yet, on if Lunduke is currently doing a "happy dance". At least one source suggests this is likely.

Go to Full Article
Categories: Linux News