
Feed aggregator

About ncurses Colors

Linux Journal - Thu, 12/13/2018 - 08:00
by Jim Hall

Why does ncurses support only eight colors?

If you've looked into the color palette available in curses, you may wonder why curses supports only eight colors. The curses.h include file defines these color macros:

COLOR_BLACK
COLOR_RED
COLOR_GREEN
COLOR_YELLOW
COLOR_BLUE
COLOR_MAGENTA
COLOR_CYAN
COLOR_WHITE

But why only eight colors, and why these particular colors? At least with the Linux console, if you're running on a PC, the color range has its origins in the PC hardware.

A Brief History of Color

Linux started as a PC operating system, so the first Linux console was a PC running in text mode. And to understand the color palette on the PC console, you need to go all the way back to the old CGA days. In text mode, the PC terminal had a color palette of 16 colors, enumerated 0 (black) to 15 (white). Backgrounds were limited to the first eight colors:

  • 0. Black
  • 1. Blue
  • 2. Green
  • 3. Cyan
  • 4. Red
  • 5. Magenta
  • 6. Brown
  • 7. White ("Light Gray")
  • 8. Bright Black ("Gray")
  • 9. Bright Blue
  • 10. Bright Green
  • 11. Bright Cyan
  • 12. Bright Red
  • 13. Bright Magenta
  • 14. Yellow
  • 15. Bright White

These colors go back to CGA, IBM's Color/Graphics Adapter from the earlier PC-compatible computers. This was a step up from the plain monochrome displays; as the name implies, monochrome could display only black or white. CGA could display a limited range of colors.

CGA supports mixing red (R), green (G) and blue (B) colors. In its simplest form, RGB is either "on" or "off". In this case, you can mix the RGB colors in 2x2x2=8 ways. Table 1 shows the binary and decimal representations of RGB.

Table 1. Binary and Decimal Representations of RGB

000 (0) Black
001 (1) Blue
010 (2) Green
011 (3) Cyan
100 (4) Red
101 (5) Magenta
110 (6) Yellow
111 (7) White

To double the number of colors, CGA added an extra bit called the "intensifier" bit. With the intensifier bit set, the red, green and blue colors would be set to their maximum values. Without the intensifier bit, each RGB value would be set to a "midrange" intensity. Let's represent that intensifier bit as an extra 1 or 0 in the binary color representation, as iRGB (Table 2).
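As a quick aside, the mixing can be sketched in a few lines of Python (an illustration, not code from the article): four bits of intensifier, red, green and blue give 2x2x2x2=16 combinations, matching the 16-color list above.

# Enumerate the iRGB palette from its four bits.
base = ["Black", "Blue", "Green", "Cyan",
        "Red", "Magenta", "Yellow", "White"]
# CGA naming quirks: low-intensity yellow (0110) renders as brown,
# "bright black" (1000) is gray, and bright brown (1110) is just yellow.
renames = {6: "Brown", 8: "Gray", 14: "Yellow"}

for irgb in range(16):
    intensifier, rgb = irgb >> 3, irgb & 0b111
    name = renames.get(irgb, ("Bright " if intensifier else "") + base[rgb])
    print(f"{irgb:04b} ({irgb:2d}) {name}")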

Go to Full Article
Categories: Linux News

Firefox 64 Now Available, SoftMaker Office Announces "Load and Help" Fundraising Campaign, the Joint Development Foundation Has Joined The Linux Foundation, Google+ to End in April 2019 and Valve Releases Proton 3.16 (Beta)

Linux Journal - Wed, 12/12/2018 - 09:54

News briefs for December 12, 2018.

Firefox 64 was released yesterday. New features include multiple tab selection, Developer Tools improvements, standardizing proprietary styling features, updated privacy features and much more. See the full release notes for more details, and download Firefox here.

SoftMaker Office announces its "Load and Help 2018" fundraiser campaign: "From now until Christmas, the company will donate 10 cents to charitable organizations for each free download of FlexiPDF Basic or SoftMaker FreeOffice 2018." Also, for the first time ever, SoftMaker's free FreeOffice package is now available for macOS, in addition to Linux and Windows.

The Joint Development Foundation has joined The Linux Foundation family to "make it easier to collaborate through both open source and standards development". The press release quotes Executive Director of The Linux Foundation Jim Zemlin: "Leveraging the capabilities of the Joint Development Foundation will enable us to provide open source projects with another path to standardization, driving greater industry adoption of standards and specifications to speed adoption."

Google+ will be killed off in April 2019, rather than August 2019 as initially planned, due to a bug in the Google+ API that exposed the data of 52.5 million users. See the betanews post for details.

Valve announces a new beta release of Proton 3.16. With this release, 29 additional games are now supported, and the build also contains a rework of the audio. See the Changelog for more information.

News Firefox SoftMaker Office The Linux Foundation Google Valve gaming
Categories: Linux News

Lessons in Vendor Lock-in: Shaving

Linux Journal - Wed, 12/12/2018 - 07:30
by Kyle Rankin

Learn how to embrace open standards while you remove stubble.

Freedom is powerful. When you start using free software, a whole world opens up to you, and you start viewing everything in a different light. You start noticing when vendors don't release their code or when they try to lock you in to their products with proprietary protocols. These vendor lock-in techniques aren't new or even unique to software. Companies long have tried to force customer loyalty with incompatible proprietary products that make you stay on an upgrade treadmill. Often you can apply these free software principles outside the software world, so in this article, I describe my own object lesson in vendor lock-in from the shaving industry.

When I first started shaving, I was pretty intimidated by the notion of a sharp blade against my face, so I picked the easiest and least-intimidating route: electric razors. Of course, electric razors have a large up-front cost, and after some time, you have to buy replacement blades. Still, the shaves were acceptable as far as I knew, so I didn't mind much.

At some point in my shaving journey, Gillette released the Mach 3 disposable razor. For some reason, this design appealed to a lot of geeks, and I ended up hearing about it on geek-focused blogs like Slashdot back in the day. I decided to try it out, and after I got over the initial intimidation, I realized it really wasn't all that hard to shave with it, and due to the multiple blades and lubricating strip along the top, I got a much closer shave.

I was a convert. I ditched my electric razor and went all in with the Mach 3. Of course, those disposable blades had the tendency to wear out pretty quickly, along with that blue lubricating strip, so I'd find myself dropping a few bucks per blade to get refills after a few shaves. Then again, Gillette was famous for the concept of giving away the razor and making its money on the blade, so this wasn't too surprising.

We're Going to Four Blades!

The tide started turning for me a few years later when Gillette decided to deprecate the Mach 3 in favor of a new design—this time with four blades, a lubricating strip and a rubber strip along the bottom! Everyone was supposed to switch over to this new and more expensive design, but I was perfectly happy with what I was using, and the new blades were incompatible with my Mach 3 razor, so I didn't pay it much attention.

The problem was that with this new design, replacement Mach 3 blades became harder and harder to come by, and all of the blades started creeping up in price. Eventually, I couldn't buy Mach 3 blades in bulk at my local warehouse store, and finally I gave up and bought one of the even more expensive new Gillette razors. What else could I do?

Go to Full Article
Categories: Linux News

Vote for Linux Support on Adobe, Nextcloud 15 Now Available, LF Deep Learning Foundation Introduces Interactive Deep Learning Landscape, Canonical Announces Full Enterprise Support for Kubernetes 1.13 on Ubuntu and Icinga Director 1.6 Released

Linux Journal - Tue, 12/11/2018 - 10:02

News briefs for December 11, 2018.

Adobe customer care says there hasn't been enough demand for Linux, Phoronix reports. But, if you're interested in Linux support on Adobe Premiere CC, you can "upvote that feature request" via the Adobe User Survey.

Nextcloud 15 is out. This major release is a "big step forward for communication and collaboration with others in a secure way". It introduces several new features, including Nextcloud Social, new security abilities and deep Collabora Online integration. Download Nextcloud 15 from here.

The Linux Foundation's Deep Learning Foundation has created the Interactive Deep Learning Landscape, which is "intended as a map to explore open source AI, ML, DL projects". According to the LF Deep Learning blog post, the tool "allows viewers to filter, obtain detailed information on a specific project or technology, and easily share via stateful URLs. It is intended to help developers, end users and others navigate the complex AI, DL and ML landscape." All data is also available in a GitHub repo.

Canonical announced full enterprise support for Kubernetes 1.13 on Ubuntu, including support for kubeadm and updates to MicroK8s. The Ubuntu blog notes that "Canonical's certified, Charmed Distribution of Kubernetes (CDK) is built from pure upstream binaries, and offers simplified deployment, scaling, management, and upgrades of Kubernetes, regardless of the underlying hardware or machine virtualisation. Supported deployment targets include AWS, GCE, Azure, VMware, OpenStack, LXD, and bare metal."

Icinga Director 1.6 was released yesterday. This version of Icinga Director—a tool to configure the Icinga open-source monitoring software—now includes multi-instance support, configuration baskets and improved health checks. You can check out or download the new release here.

News Adobe Nextcloud The Linux Foundation Deep Learning Canonical Kubernetes Ubuntu Icinga
Categories: Linux News

Testing Your Code with Python's pytest, Part II

Linux Journal - Tue, 12/11/2018 - 08:00
by Reuven M. Lerner

Testing functions isn't hard, but how do you test user input and output?

In my last article, I started looking at "pytest", a framework for testing Python programs that's really changed the way I look at testing. For the first time, I really feel like testing is something I can and should do on a regular basis; pytest makes things so easy and straightforward.

One of the main topics I didn't cover in my last article is user input and output. How can you test programs that expect to get input from files or from the user? And, how can you test programs that are supposed to display something on the screen?

So in this article, I describe how to test input and output in a variety of ways, allowing you to test programs that interact with the outside world. I try not only to explain what you can do, but also show how it fits into the larger context of testing in general and pytest in particular.

User Input

Say you have a function that asks the user to enter an integer and then returns the value of that integer, doubled. You can imagine that the function would look like this:

def double():
    x = input("Enter an integer: ")
    return int(x) * 2

How can you test that function with pytest? If the function were to take an argument, the answer would be easy. But in this case, the function is asking for interactive input from the user. That's a bit harder to deal with. After all, how can you, in your tests, pretend to ask the user for input?

In most programming languages, user input comes from a source known as standard input (or stdin). In Python, sys.stdin is a read-only file object from which you can grab the user's input.

So, if you want to test the "double" function from above, you can replace sys.stdin with another file. There are two problems with this, however. First, you don't really want to start opening files on disk. And second, do you really want to replace the value of sys.stdin in your tests? That'll affect more than just one test.

The solution comes in two parts. First, you can use the pytest "monkey patching" facility to assign a value to a system object temporarily for the duration of the test. This facility requires that you define your test function with a parameter named monkeypatch. The pytest system notices that you've defined it with that parameter, and then not only sets the monkeypatch local variable, but also sets it up to let you temporarily set attributes.

In theory, then, you could define your test like this:
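The article's full code follows the link below, but a minimal sketch of such a test might look like this (assuming the double function above is importable; monkeypatch swaps sys.stdin for a StringIO only for the duration of the test):

from io import StringIO

def test_double(monkeypatch):
    # Stand in for the user: input() will read from this StringIO
    # instead of the real stdin; pytest restores sys.stdin when
    # the test finishes.
    monkeypatch.setattr('sys.stdin', StringIO('1234\n'))
    assert double() == 2468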

Go to Full Article
Categories: Linux News

Large files with Git: LFS and git-annex

Anarcat - Mon, 12/10/2018 - 19:00

Git does not handle large files very well. While there is work underway to handle large repositories through the commit graph work, Git's internal design has remained surprisingly constant throughout its history, which means that storing large files into Git comes with a significant and, ultimately, prohibitive performance cost. Thankfully, other projects are helping Git address this challenge. This article compares how Git LFS and git-annex address this problem and should help readers pick the right solution for their needs.

The problem with large files

As readers probably know, Linus Torvalds wrote Git to manage the history of the kernel source code, which is a large collection of small files. Every file is a "blob" in Git's object store, addressed by its cryptographic hash. A new version of that file will store a new blob in Git's history, with no deduplication between the two versions. The pack file format can store binary deltas between similar objects, but if many objects of similar size change in a repository, that algorithm might fail to properly deduplicate. In practice, large binary files (say JPEG images) have an irritating tendency of changing completely when even the smallest change is made, which makes delta compression useless.

There have been different attempts at fixing this in the past. In 2006, Torvalds worked on improving the pack-file format to reduce object duplication between the index and the pack files. Those changes were eventually reverted because, as Nicolas Pitre put it: "that extra loose object format doesn't appear to be worth it anymore".

Then in 2009, Caca Labs worked on improving the fast-import and pack-objects Git commands to do special handling for big files, in an effort called git-bigfiles. Some of those changes eventually made it into Git: for example, since 1.7.6, Git will stream large files directly to a pack file instead of holding them all in memory. But files are still kept forever in the history.

An example of trouble I had to deal with is for the Debian security tracker, which follows all security issues in the entire Debian history in a single file. That file is around 360,000 lines for a whopping 18MB. The resulting repository takes 1.6GB of disk space and a local clone takes 21 minutes to perform, mostly taken up by Git resolving deltas. Commit, push, and pull are noticeably slower than a regular repository, taking anywhere from a few seconds to a minute depending on how old the local copy is. And running annotate on that large file can take up to ten minutes. So even though that is a simple text file, it's grown large enough to cause significant problems for Git, which is otherwise known for stellar performance.

Intuitively, the problem is that Git needs to copy files into its object store to track them. Third-party projects therefore typically solve the large-files problem by taking files out of Git. In 2009, Git evangelist Scott Chacon released GitMedia, which is a Git filter that simply takes large files out of Git. Unfortunately, there hasn't been an official release since then and it's unclear if the project is still maintained. The next effort to come up was git-fat, first released in 2012 and still maintained. But neither tool has seen massive adoption yet. If I had to venture a guess, it might be because both require manual configuration. Both also require a custom server (rsync for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia), which limits collaboration since users need access to another service.

Git LFS

That was before GitHub released Git Large File Storage (LFS) in August 2015. Like all software taking files out of Git, LFS tracks file hashes instead of file contents. So instead of adding large files into Git directly, LFS adds a pointer file to the Git repository, which looks like this:

version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345

LFS then uses Git's smudge and clean filters to show the real file on checkout. Git only stores that small text file and does so efficiently. The downside, of course, is that large files are not version controlled: only the latest version of a file is kept in the repository.

Git LFS can be used in any repository by installing the right hooks with git lfs install, then asking LFS to track any given file with git lfs track. This will add the file to the .gitattributes file, which will make Git run the proper LFS filters. It's also possible to add patterns to the .gitattributes file, of course. For example, this will make sure Git LFS will track MP3 and ZIP files:

$ cat .gitattributes
*.mp3 filter=lfs -text
*.zip filter=lfs -text

After this configuration, we use Git normally: git add, git commit, and so on will talk to Git LFS transparently.
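Put together, a first-time setup might look something like this (an illustrative session; the ISO file name is hypothetical):

$ git lfs install
$ git lfs track "*.iso"
$ git add .gitattributes install-disc.iso
$ git commit -m "Track ISO images with Git LFS"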

The actual files tracked by LFS are copied to a path like .git/lfs/objects/{OID-PATH}, where {OID-PATH} is a sharded file path of the form OID[0:2]/OID[2:4]/OID and where OID is the content hash of the file (currently SHA-256). This brings the extra feature that multiple copies of the same file in the same repository are automatically deduplicated, although in practice this rarely occurs.
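For example, the pointer file shown earlier would have its content stored under:

.git/lfs/objects/4d/7a/4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393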

Git LFS will copy large files to that internal storage on git add. When a file is modified in the repository, Git notices, the new version is copied to the internal storage, and the pointer file is updated. The old version is left dangling until the repository is pruned.

This process only works for new files you are importing into Git, however. If a Git repository already has large files in its history, LFS can fortunately "fix" repositories by retroactively rewriting history with git lfs migrate. This has all the normal downsides of rewriting history, however — existing clones will have to be reset to benefit from the cleanup.
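The migration itself is a single command. For instance, an invocation along these lines (illustrative; see git lfs migrate --help for the options your version supports) would rewrite every branch so that MP3 and ZIP files move into LFS:

$ git lfs migrate import --include="*.mp3,*.zip" --everything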

LFS also supports file locking, which allows users to claim a lock on a file, making it read-only everywhere except in the locking repository. This allows users to signal others that they are working on an LFS file. Those locks are purely advisory, however, as users can remove other users' locks by using the --force flag. LFS can also prune old or unreferenced files.
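Locking and pruning are exposed through the same lfs subcommand; a session might look like this (the file name is hypothetical):

$ git lfs lock design.psd       # claim the lock
$ git lfs locks                 # list current locks
$ git lfs unlock design.psd     # release it (--force removes another user's lock)
$ git lfs prune                 # delete old, unreferenced local copies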

The main limitation of LFS is that it's bound to a single upstream: large files are usually stored in the same location as the central Git repository. If it is hosted on GitHub, this means a default quota of 1GB storage and bandwidth, but you can purchase additional "packs" to expand both of those quotas. GitHub also limits the size of individual files to 2GB. This upset some users surprised by the bandwidth fees, which were previously hidden in GitHub's cost structure.

While the actual server-side implementation used by GitHub is closed source, there is a test server provided as an example implementation. Other Git hosting platforms have also implemented support for the LFS API, including GitLab, Gitea, and BitBucket; that level of adoption is something that git-fat and GitMedia never achieved. LFS does support hosting large files on a server other than the central one — a project could run its own LFS server, for example — but this will involve a different set of credentials, bringing back the difficult user onboarding that affected git-fat and GitMedia.

Another limitation is that LFS only supports pushing and pulling files over HTTP(S) — no SSH transfers. Fortunately, LFS uses some tricks to bypass HTTP basic authentication. This also might change in the future as there are proposals to add SSH support, resumable uploads through the tus.io protocol, and other custom transfer protocols.

Finally, LFS can be slow. Every file added to LFS takes up double the space on the local filesystem as it is copied to the .git/lfs/objects storage. The smudge/clean interface is also slow: it works as a pipe, but buffers the file contents in memory each time, which can be prohibitive with files larger than available memory.

git-annex

The other main player in large file support for Git is git-annex. We covered the project back in 2010, shortly after its first release, but it's certainly worth discussing what has changed in the eight years since Joey Hess launched the project.

Like Git LFS, git-annex takes large files out of Git's history. The way it handles this is by storing a symbolic link to the file in .git/annex. We should probably credit Hess for this innovation, since the Git LFS storage layout is obviously inspired by git-annex. The original design of git-annex introduced all sorts of problems, however, especially on filesystems lacking symbolic-link support. So Hess has implemented different solutions to this problem. Originally, when git-annex detected such a "crippled" filesystem, it switched to direct mode, which kept files directly in the work tree, while internally committing the symbolic links into the Git repository. This design turned out to be a little confusing to users, including myself; I have managed to shoot myself in the foot more than once using this system.

Since then, git-annex has adopted a different v7 mode that is also based on smudge/clean filters, which it calls "unlocked files". Like Git LFS, unlocked files will double disk space usage by default. However, it is possible to reduce disk space usage by using "thin mode", which uses hard links between the internal git-annex disk storage and the work tree. The downside is, of course, that changes are immediately performed on files, which means previous file versions are automatically discarded. This can lead to data loss if users are not careful.
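As a rough sketch, opting in might look like this on a recent git-annex (commands abbreviated; check the git-annex documentation for your version):

$ git annex upgrade              # move an existing repository to v7
$ git config annex.thin true     # hard-link work tree files to annex storage
$ git annex unlock video.mpg     # make a file editable in place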

Furthermore, git-annex in v7 mode suffers from some of the performance problems affecting Git LFS, because both use the smudge/clean filters. Hess actually has ideas on how the smudge/clean interface could be improved. He proposes changing Git so that it stops buffering entire files into memory, allows filters to access the work tree directly, and adds the hooks he found missing (for stash, reset, and cherry-pick). Git-annex already implements some tricks to work around those problems itself, but it would be better for those to be implemented in Git natively.

Being more distributed by design, git-annex does not have the same "locking" semantics as LFS. Locking a file in git-annex means protecting it from changes, so files need to actually be in the "unlocked" state to be editable, which might be counter-intuitive to new users. In general, git-annex has some of those unusual quirks and interfaces that often come with more powerful software.

And git-annex is much more powerful: it not only addresses the "large-files problem" but goes much further. For example, it supports "partial checkouts" — downloading only some of the large files. I find that especially useful to manage my video, music, and photo collections, as those are too large to fit on my mobile devices. Git-annex also has support for location tracking, where it knows how many copies of a file exist and where, which is useful for archival purposes. And while Git LFS is only starting to look at transfer protocols other than HTTP, git-annex already supports a large number through a special remote protocol that is fairly easy to implement.
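To give a flavor of that workflow, a partial checkout with location tracking might look like this (an illustrative session; the paths are hypothetical):

$ git annex add videos/talk.webm         # check the large file into the annex
$ git commit -m "add conference talk"
$ git annex whereis videos/talk.webm     # location tracking: list known copies
$ git annex drop videos/talk.webm        # free local space once another copy exists
$ git annex get videos/talk.webm         # fetch it back on demand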

"Large files" is therefore only scratching the surface of what git-annex can do: I have used it to build an archival system for remote native communities in northern Québec, while others have built a similar system in Brazil. It's also used by the scientific community in projects like GIN and DataLad, which manage terabytes of data. Another example is the Japanese American Legacy Project which manages "upwards of 100 terabytes of collections, transporting them from small cultural heritage sites on USB drives".

Unfortunately, git-annex is not well supported by hosting providers. GitLab used to support it, but since it implemented Git LFS, it dropped support for git-annex, saying it was a "burden to support". Fortunately, thanks to git-annex's flexibility, it may eventually be possible to treat LFS servers as just another remote which would make git-annex capable of storing files on those servers again.

Conclusion

Git LFS and git-annex are both mature and well maintained programs that deal efficiently with large files in Git. LFS is easier to use and is well supported by major Git hosting providers, but it's less flexible than git-annex.

Git-annex, in comparison, allows you to store your content anywhere and espouses Git's distributed nature more faithfully. It also uses all sorts of tricks to save disk space and improve performance, so it should generally be faster than Git LFS. Learning git-annex, however, feels like learning Git: you always feel you are not quite there and you can always learn more. It's a double-edged sword and can feel empowering for some users and terrifyingly hard for others. Where you stand on the "power-user" scale, along with project-specific requirements, will ultimately determine which solution is the right one for you.

Ironically, after thorough evaluation of large-file solutions for the Debian security tracker, I ended up proposing to rewrite history and split the file by year which improved all performance markers by at least an order of magnitude. As it turns out, keeping history is critical for the security team so any solution that moves large files outside of the Git repository is not acceptable to them. Therefore, before adding large files into Git, you might want to think about organizing your content correctly first. But if large files are unavoidable, the Git LFS and git-annex projects allow users to keep using most of their current workflow.

This article first appeared in the Linux Weekly News.

Categories: External Blogs

Linux Thursday - Dec 6, 2018

Linux Journal - Mon, 12/10/2018 - 18:00

Please support Linux Journal by subscribing or becoming a patron.

Categories: Linux News

Cumulus Networks Partners with Lenovo, Unvanquished Game Announces First Alpha in Almost Three Years, KDE Frameworks 5.53.0 Released, Git v2.20.0 Is Now Available and Major Milestone WordPress Update

Linux Journal - Mon, 12/10/2018 - 09:42

News briefs for December 10, 2018.

Cumulus Networks is partnering with Lenovo to deliver open data-center networking switches. According to the press release, through this partnership, "Lenovo will offer ThinkSystem RackSwitch models with support for Cumulus Linux. Lenovo customers can now use Cumulus' popular network operating system (OS), Cumulus Linux, and Cumulus' operational management tool, NetQ, while taking advantage of unprecedented third-party options including network automation and monitoring to drive greater operational efficiency."

Developers of the open-source game Unvanquished today announced a new alpha release, Unvanquished Alpha 51, marking their first release in almost three years. According to Phoronix, the beta should drop soon as well. See the game's website for details.

KDE yesterday announced the release of KDE Frameworks 5.53.0. KDE Frameworks is made up of 70 add-on libraries to Qt, and this release is part of a series of planned monthly releases. See the announcement for the list of what's new in this version.

The latest feature release of Git, v2.20.0, is now available. According to the release announcement, this version is composed of "962 non-merge commits since v2.19.0 (this is by far the largest release in v2.x.x series), contributed by 83 people, 26 of which are new faces". You can get the tarballs here.

WordPress recently announced a new major milestone update, WordPress 5.0, which is code-named "Bebo" in honor of Cuban jazz musician Bebo Valdés. The biggest user-facing change is the new Project Gutenberg editor, "the primary interface to how WordPress site administrators create content and define how it is displayed". See the WordPress blog for more information on the new block-based editor.

News Lenovo gaming KDE qt git WordPress
Categories: Linux News

How Can We Bring FOSS to the Virtual World?

Linux Journal - Mon, 12/10/2018 - 07:30
by Doc Searls

Is there room for FOSS in the AI, VR, AR, MR, ML and XR revolutions—or vice versa?

Will the free and open-source revolution end when our most personal computing happens inside the walled gardens of proprietary AI, VR, AR, MR, ML and XR companies? I ask, because that's the plan.

I could see that plan when I met the Magic Leap One at IIW in October (only a few days ago as I write this). The ML1 (my abbreviation) gave me an MR (mixed reality) experience when I wore all of this:

  • Lightwear (a headset).
  • Control (a handset).
  • Lightpack (electronics in a smooth disc about the size of a saucer).

So far, all Magic Leap offers is a Creator Edition. That was the one I met. Its price is $2,295, revealed only at the end of a registration gauntlet that requires name, email address, birth date and agreement with two click-wrap contracts totaling more than 7,000 words apiece. Here's what the page with the price says you get:

Magic Leap One Creator Edition is a lightweight, wearable computer that seamlessly blends the digital and physical worlds, allowing digital content to coexist with real world objects and the people around you. It sees what you see and uses its understanding of surroundings and context to create unbelievably believable experiences.

Also recommended on the same page are a shoulder strap ($30), a USB (or USB-like) dongle ($60) and a "fit kit" ($40), bringing the full price to $2,425.

Buying all this is the cost of entry for chefs working in the kitchen, serving apps and experiences to customers paying to play inside Magic Leap's walled garden: a market Magic Leap hopes will be massive, given an investment sum that now totals close to $2 billion.

The experience it created for me, thanks to the work of one early developer, was with a school of digital fish swimming virtually in my physical world. Think of a hologram without a screen. I could walk through them, reach out and make them scatter, and otherwise interact with them. It was a nice demo, but far from anything I might crave.

But I wondered, given Magic Leap's secretive and far-advanced tech, if it could eventually make me crave things. I ask because immersive doesn't cover what this tech does. A better adjective might be invasive.

Go to Full Article
Categories: Linux News

Weekend Reading: Sysadmin 101

Linux Journal - Sat, 12/08/2018 - 13:27
by Kyle Rankin

This series covers sysadmin basics. The first article explains how to approach alerting and on-call rotations as a sysadmin. In the second article, I discuss how to automate yourself out of a job, and in the third, I explain why and how you should use tickets. The fourth article covers some of the fundamentals of patch management under Linux, and the fifth and final article describes the overall sysadmin career path and the attributes that might make you a "senior sysadmin" instead of a "sysadmin" or "junior sysadmin", along with some tips on how to level up.

Sysadmin 101: Alerting

In this first article, I cover on-call alerting. Like with any job title, the responsibilities given to sysadmins, DevOps and Site Reliability Engineers may differ, and in some cases, they may not involve any kind of 24x7 on-call duties, if you're lucky. For everyone else, though, there are many ways to organize on-call alerting, and there also are many ways to shoot yourself in the foot.

Sysadmin 101: Automation

Here we cover systems administrator fundamentals. These days, DevOps has made even the job title "systems administrator" seem a bit archaic, much like the "systems analyst" title it replaced. These DevOps positions are rather different from sysadmin jobs in the past. They have a much larger emphasis on software development far beyond basic shell scripting, and as a result, they often are filled by people with software development backgrounds without much prior sysadmin experience. In the past, a sysadmin would enter the role at a junior level and be mentored by a senior sysadmin on the team, but in many cases currently, companies go quite a while with cloud outsourcing before their first DevOps hire. As a result, the DevOps engineer might be thrust into the role at a junior level with no mentor around apart from search engines and Stack Overflow posts.

Go to Full Article
Categories: Linux News

Episode 9: Humanity, Magic, and Glitter

Linux Journal - Fri, 12/07/2018 - 11:28
Reality 2.0 - Episode 9: Humanity, Magic, and Glitter

Katherine Druckman and Doc Searls talk to Bryan Lunduke about Linux and humanity.

Categories: Linux News

Feral Interactive Bringing DiRT 4 to Linux in 2019, Chrome 71 Blocks Ads on Abusive Sites, New Linux Malware Families Discovered, The Linux Foundation Launches the Automated Compliance Tooling Project, and GNU Guix and GuixSD 0.16.0 Released

Linux Journal - Fri, 12/07/2018 - 11:10

News briefs for December 7, 2018.

Feral Interactive announced this morning that DiRT 4 is coming to Linux and macOS in 2019. The all-terrain motorsport game was originally developed by Codemasters and boasts a fleet of more than 50 rally cars, buggies, trucks and crosskarts. And, for the first time in the history of the franchise, players can create their own rally routes. You can view the trailer here.

Newly released Chrome 71 "now blocks ads on 'abusive' sites that consistently trick users with fake system warnings, non-functional 'close' buttons and other bogus content that steers you to ads and landing pages. The sites themselves won't lose access the moment Google marks them abusive, but they'll have 30 days to clean up their acts." According to Engadget, Chrome 71 has other additional safeguards, and it's available now for Linux, Mac and Windows. It'll be rolling out to Android and iOS users in the coming weeks.

Cyber-security company ESET has discovered 21 "new" Linux malware families, and all of them "operate in the same manner, as trojanized versions of the OpenSSH client". ZDNet reports that "They are developed as second-stage tools to be deployed in more complex 'botnet' schemes. Attackers would compromise a Linux system, usually a server, and then replace the legitimate OpenSSH installation with one of the trojanized versions. ESET said that '18 out of the 21 families featured a credential-stealing feature, making it possible to steal passwords and/or keys' and '17 out of the 21 families featured a backdoor mode, allowing the attacker a stealthy and persistent way to connect back to the compromised machine.'"

The Linux Foundation has launched the Automated Compliance Tooling (ACT) project in order to help companies comply with open-source licensing requirements. Kate Stewart, Senior Director of Strategic Programs at The Linux Foundation, says, "There are numerous open source compliance tooling projects but the majority are unfunded and have limited scope to build out robust usability or advanced features. We have also heard from many organizations that the tools that do exist do not meet their current needs. Forming a neutral body under The Linux Foundation to work on these issues will allow us to increase funding and support for the compliance tooling development community."

GNU Guix and GuixSD 0.16.0 were released yesterday. This release represents 4,515 commits by 95 people over five months, and it's hopefully the last release before version 1.0. See the release announcement for more details and download links.

News gaming Feral Interactive Chrome Security Google OpenSSH The Linux Foundation licensing open source GNU Guix
Categories: Linux News

Reinventing Software Development and Availability with Open Source: an Interview with One of Microsoft Azure's Lead Architects

Linux Journal - Fri, 12/07/2018 - 08:00
by Petros Koutoupis

Microsoft was founded in 1975—that's 43 years ago and a ton of history. Up until the last decade, the company led a campaign against the Open Source and Free Software movements, and although it may have slowed the opposition, it did not bring it to an end. In fact, it emboldened its supporters to push the open-source agenda even harder. Fast-forward to the present, and open-source technologies run nearly everything—mobile devices, cloud services, televisions and more.

It wasn't until Satya Nadella took the helm (2014) that the large ship was steered around. Almost overnight, Microsoft embraced everything Linux and open source. It eventually joined The Linux Foundation and, more recently, the Open Invention Network. At first, it seemed too good to be true, but here we are, a few years after these events, and Microsoft continues to support the Open Source community and adopt many of its philosophies. But why?

I wanted to find out and ended up reaching out to Microsoft. John Gossman, a lead architect working on Azure, spent a bit of time with me to share both his thoughts and experiences as they relate to open source.

Petros Koutoupis: Can you tell our readers a bit about yourself?

John Gossman: I'm a long-time developer with 30 years of industry experience. I have been with Microsoft for 18 of those years. At Microsoft, I have had the opportunity to touch a little bit of everything—from Windows to other graphical applications, and more recently, that is, for the last 6 years, I have worked on Azure. My primary focus is on developer experience. I know this area very well and much of it comes from the Open Source world. I spend a lot of time looking at Linux workloads while also working very closely with Linux vendors. More recently (at least two years now), I stepped into a very interesting role as a member on the board of The Linux Foundation.

PK: Microsoft hasn't always had the best of relationships with anything open-source software (OSS)-related—that is, until Satya Nadella stepped into his current role as CEO. Why the change? Why has Microsoft changed its position?

JG: I have spent a lot of time thinking about this very question. Now, I cannot speak for the entire company, but I believe it all goes back to the fact that Microsoft was and still is a company focused on software developers. Remember, when Microsoft first started, it built and sold a BASIC interpreter. Later on, the company delivered Visual Studio and many more products. The core mission in the Microsoft culture always has been to enable software developers.

For a while, Windows and Office overshadowed the developer frameworks, losing touch with those core developers, but with the introduction of Azure, the focus has since been reverted back to software developers, and those same developers love open source.

Go to Full Article
Categories: Linux News

Google, Facebook and Uber Join the OpenChain Project, ownCloud's 2nd-Gen End-to-End Encryption for ownCloud Enterprise Now Available, Tuxedo Computers Announces Infinity Book Pro 13 Coming Soon, Five openSUSE Tumbleweed Snapshots and PHP 7.3 Released

Linux Journal - Thu, 12/06/2018 - 11:45

News briefs for December 6, 2018.

Facebook, Google and Uber have joined the OpenChain Project as platinum members. OpenChain is hosted by The Linux Foundation and is the "only standard for open source compliance in the supply chain". It also "provides a specification as well as overarching processes, policies and training that companies need to be successful". See the press release for more details and links to further reading.

ownCloud today announces the second generation of End-To-End Encryption (E2EE) for ownCloud Enterprise. The new plugin "enables encryption and decryption by generating a 'key pair' including a private key and public key, which takes place directly with the sender and recipient in the web browser. The new Version also provides the option of using hardware keys on which a private key is stored and never leaves the token, such as smart cards or USB tokens."

Tuxedo Computers announces that its new Infinity Book Pro 13 is coming soon. The machine is small and light: 1.3 kg with a 13.3" display. It also sports a new CPU and USB Type-C charging capability. Other specs include Intel UHD 620 graphics, standard 2.5" HDD or SSD, up to 32GB DDR4, and an illuminated and lasered keyboard with Tux Super key. In addition, you can remove the bottom of the case, so all components are easy to maintain, clean or replace.

openSUSE's rolling release Tumbleweed had five snapshots this week, and it's preparing for an update to the KDE Plasma 5.14.4 packages in upcoming snapshots. Package updates include kernel 4.19.5, GNOME's Flickr app, VirtualBox 5.2.22, an update to Firefox 63.0.3 and more.

PHP 7.3 was released today. According to Phoronix, this release marks the first big update in a year to the programming language. In addition, "PHP 7.3 introduces the Foreign Function Interface (FFI) to access functions/variables/structures from C within PHP, a platform independent function for accessing the system's network interface information, an is_countable() function was added, WebP is now supported within the GD image create from string, updated SQLite integration, and a range of other improvements." See the official release documentation here.

News Facebook Google Uber OpenChain The Linux Foundation OwnCloud Laptops Tuxedo Computers openSUSE PHP
Categories: Linux News

On Linus' Return to Kernel Development

Linux Journal - Thu, 12/06/2018 - 09:08
by Zack Brown

On October 23, 2018, Linus Torvalds came out of his self-imposed isolation, pulling a lot of patches from the git trees of various developers. It was his first appearance on the Linux Kernel Mailing List since September 16, 2018, when he announced he would take a break from kernel development to address his sometimes harsh behavior toward developers. On the 23rd, he announced his return, which I cover here after summarizing some of his pull activities.

For most of his pulls, he just replied with an email that said, "pulled". But in one of them, he noticed that Ingo Molnar had some issues with his email, in particular that Ingo's mail client used the iso-8859-1 character set instead of the more usual UTF-8. Linus said, "using iso-8859-1 instead of utf-8 in this day and age is just all kinds of odd. It looks like it was all fine, but if Mutt has an option to just send as utf-8, I encourage everybody to just use that and try to just have utf-8 everywhere. We've had too many silly issues when people mix locales etc and some point in the chain gets it wrong."

On the 24th, Linus continued pulling from developer trees. One of these was a batch of networking updates from David Miller, and it included contributions from a lot of different people. Linus noticed that the Kconfig rules were running into unmet dependency warnings because the code expected to run on the Qualcomm architecture, which Linus didn't use. He suggested it was a simple matter of updating the dependency list in the code. He also asked why the developers didn't notice that problem when testing their patches. Kalle Valo explained, "Mostly bad timing due to my vacation. I did do allmodconfig build but not sure why I missed the warning, also the kbuild bot didn't report anything. Jeff did report it last week, but I was on vacation at the time and just came back yesterday and didn't have time to react to it yet."

That seemed fine to Linus, who said he'd pull the fix when it became available. He remarked, "I just don't want my tree to have warnings that I see, and that may hide new warnings coming in when I do my next pull request."

On the 25th, Linus continued pulling from developer trees. In one instance, the issue of minimal tool versions came up. Linus prefers to support as many regular users as possible, which means supporting tool versions from the Linux distributions.

In response to a hard-to-read patch, Andi Kleen suggested changing the minimum supported binutils version from 2.20 to 2.21, which would support some useful assembler opcodes that would make the patch easier to review. Andy Lutomirski, another of the patch reviewers, said this would be fine. And Linus said:

Go to Full Article
Categories: Linux News

UK Parliament Releases Facebook Document on the Handling of User Data, Australia Set to Give Law Enforcement Power to Access Encrypted Messages, Microsoft Open-Sourced Windows UI/UX Frameworks, Iridium Browser New Release and CrossOver 18.1 Now Available

Linux Journal - Wed, 12/05/2018 - 11:16

News briefs for December 5, 2018.

The UK Parliament released a 250-page previously sealed Facebook document that reveals how the company handled crucial decisions regarding user data. The Verge reports that "In emails released as part of the cache, Facebook executives are shown dealing with other major tech companies on 'whitelisting' for its platform" and that according to British lawmaker Damian Collins "the agreements allowed the companies access to user data after new restrictions were put in place to end most companies' access. Companies offered access included Netflix and Airbnb, according to the emails." You can see the 250-page document here.

Australia plans to give law enforcement and intelligence agencies the ability to access encrypted messages on platforms like WhatsApp, putting public safety concerns ahead of personal privacy. Bloomberg reports that "Amid protests from companies such as Facebook Inc. and Google, the government and main opposition struck a deal on Tuesday that should see the legislation passed by parliament this week. Under the proposed powers, technology companies could be forced to help decrypt communications on popular messaging apps, or even build new functionality to help police access data."

Microsoft yesterday open-sourced Windows Forms, the WinUI (Windows UI Library) and WPF (Windows Presentation Foundation). According to Phoronix, the full source code is available on GitHub and the UI/UX frameworks are now open source under the MIT license. For more information, see this Windows blog post.

Iridium Browser recently released build 2018.11.71 for Debian-based systems. The new version is based on Chromium 71.0.3578.30, and it's available for Fedora and openSUSE as well. Iridium Browser "is based on the Chromium code base. All modifications enhance the privacy of the user and make sure that the latest and best secure technologies are used. Automatic transmission of partial queries, keywords and metrics to central services is prevented and only occurs with the approval of the user. In addition, all our builds are reproducible and modifications are auditable, setting the project ahead of other secure browser providers." You can download it from here.

CodeWeavers announced the release of CrossOver 18.1 yesterday for both Linux and macOS. According to the announcement, "CrossOver 18.1 restores controller support for Steam on both macOS and Linux. macOS customers with active support entitlements will be upgraded to CrossOver 18.1 the next time they launch CrossOver." Linux users can download the latest version from here.

News Privacy Facebook Australia Microsoft open source Iridium Browser Chromium crossover Codeweavers
Categories: Linux News

Best Linux Marketing Campaigns

Linux Journal - Wed, 12/05/2018 - 08:00
by Bryan Lunduke

I have long held the opinion that one of the biggest problems holding back Linux-based systems from dominating (market-share-wise) in the desktop computing space...is marketing. Our lack of attention-grabbing, hearts-and-minds-winning marketing is, in my oh-so-humble opinion, one of the most glaring weaknesses of the Free and Open Source Software world.

But, in a way, me saying that really isn't fair.

The reality is that we have had some truly fantastic marketing campaigns through the years. A few even managed to break outside of our own Linux-loving community. Let's take a stroll through a few of my favorites.

From my vantage point, the best marketing has come from two places: IBM (which is purchasing Red Hat) and SUSE. Let's do this chronologically.

IBM's "Peace. Love. Linux."

Back in 2001, IBM made a major investment in Linux. To promote that investment, obviously, an ad campaign must be launched! Something iconic! Something catchy! Something...potentially illegal!

Boy, did they nail it.

"Peace. Love. Linux." Represented by simple symbols: peace sign, a heart and a penguin, all in little circles next to each other. It was visually pleasing, and it promoted happiness (or, at least, peace and love). Brilliant!

IBM then paid to have more than 300 of these images spray-painted across sidewalks all over San Francisco. The paint was supposed to be biodegradable and wash away quickly. Unfortunately, that didn't happen—many of the stencils still were there months later.

And, according to the mayor, "Some were etched into the concrete, so, in those cases, they will never be removed."

The response from the city was...just as you'd expect.

After months of discussion, the City of San Francisco fined Big Blue $100,000, plus any additional cleanup costs, plus legal fees.

On the flip-side, the stories around it made for a heck of a lot of advertising!

IBM's "The Kid"

Remember the Linux Super Bowl ad from IBM? The one with the little boy sitting in a room of pure white light?

"He's learning. Absorbing. Getting smarter every day."

When that hit in 2004, it was like, whoa. Linux has made it. IBM made a Super Bowl ad about it!

"Does he have a name? His name...is Linux."

That campaign included Penny Marshall and Muhammad Ali. That's right. Laverne from Laverne & Shirley has endorsed Linux in a Super Bowl ad. Let that sink in for a moment.

Go to Full Article
Categories: Linux News

Epic Games Launching New Game Store, Microsoft Building a Chromium Browser, CentOS Releases CentOS Linux 7 (1810) on the x86_64 Architecture, Creative Commons Announces Changes to Certificate Program and New Version of the Commercial Zentyal Server

Linux Journal - Tue, 12/04/2018 - 11:33

News briefs for December 4, 2018.

Epic Games today officially announced its own game store alternative to Steam. According to Phoronix, the Epic Games Store will be limited to Microsoft Windows and macOS initially, but will be supporting Android and "other open platforms" throughout 2019.

Microsoft is building its own Chromium browser to replace Edge on Windows 10. The Verge reports that "Microsoft will announce its plans for a Chromium browser as soon as this week, in an effort to improve web compatibility for Windows." The Verge article also notes that "There were signs Microsoft was about to adopt Chromium onto Windows, as the company's engineers have been working with Google to support a version of Chrome on an ARM-powered Windows operating system."

CentOS announces the release of CentOS Linux 7 (1810) on the x86_64 architecture. The release announcement recommends that "every user apply all updates, including the content released today, on your existing CentOS Linux 7 machine by just running 'yum update'." See the release notes for more details.

Creative Commons announces changes to its CC Certificate program. CC is updating pricing, creating a scholarship program, building a CC Certificate Facilitator Training program, and is working to engage a more global, diverse community. To register for courses, go here.

Zentyal announces a major new version of the Commercial Zentyal Server Edition, Zentyal Server 6.0: "This new commercial version of Zentyal Server aims at offering an easy-to-use Linux alternative to Windows Server. It comes with native Microsoft Active Directory interoperability, together with all the network services required in corporate environments." The new version is based on Ubuntu Server 18.04.1 LTS, and release highlights include network authentication service, virtualization manager, user authentication in HTTP Proxy and more. To request a free 45-day trial, go here.

News gaming Microsoft Chromium CentOS creative commons Certification Zentyal
Categories: Linux News

Removing Duplicate PATH Entries, Part II: the Rise of Perl

Linux Journal - Tue, 12/04/2018 - 07:30
by Mitch Frazier


With apologies to Arnold and the Terminator franchise for the title, let's look one more time at removing duplicates from the PATH variable. This take on doing it was prompted by a comment from a reader named Shaun on the previous post, asking "if you're willing to use a non-bash solution (AWK) to solve the problem, why not use Perl?" Shaun was kind enough to provide a Perl version of the code, which was good, since I'd have been hard-pressed to come up with one. It's a short piece of code, shorter than the AWK version, so it seemed like it ought to be fairly easy to pick it apart. In the end, I'm not sure I'd call it easy, but it was interesting, and I thought other non-Perl programmers might find it interesting too.
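Shaun's exact code is in the full article, but the general shape of a Perl PATH dedup is a classic one-liner along these lines (a sketch, not necessarily Shaun's version):

PATH=$(perl -e 'print join ":", grep { !$seen{$_}++ } split /:/, $ENV{PATH}')

The grep block keeps only the first occurrence of each entry, because $seen{$_}++ is false the first time a path is seen and true on every later occurrence, so order is preserved while duplicates drop out.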

Go to Full Article
Categories: Linux News

NVIDIA Open-Sourcing PhysX, miniNodes Launching a Raspberry Pi 3 CoM Carrier Board, Linux Mint 19.1 Beta Now Available, Linux Kernel 4.20-rc5 Released and New F-Bomb Fixing Patch for Kernel

Linux Journal - Mon, 12/03/2018 - 11:32

News briefs for December 3, 2018.

NVIDIA is open-sourcing its PhysX physics simulation engine. According to Phoronix, NVIDIA says, "We're doing this because physics simulation—long key to immersive games and entertainment—turns out to be more important than we ever thought. Physics simulation dovetails with AI, robotics and computer vision, self-driving vehicles, and high-performance computing." See also the NVIDIA blog for more details.

miniNodes is launching a new Raspberry Pi 3 CoM carrier board that will allow developers to create mini ARM clusters. ZDNet reports that the board has slots for five RPi 3s in order to "bring extreme edge compute capacity to cramped spaces, industrial IoT applications, and remote villages". It also can be used "on the desktop for learning about compute clustering, Docker Swarm, Kubernetes, or development using Python, Arm, and Linux". The carrier board is available now for pre-order for $259 from miniNodes.

Linux Mint 19.1 beta is now available. This version features a new desktop layout and many other improvements. You can download it from here. Note that this is a beta version for testing and shouldn't be considered stable. (Source: OMG! Ubuntu!.)

Linux kernel 4.20-rc5 is out. Linus wrote, "So it all looks a bit odd, although none of it is hugely _alarming_. One of the reasons the arch side is a bit bigger than usual at this stage is that we got the STIBP performance regression sorted out, for example." In addition, he addressed the timing of the final 4.20 release: "So my current suggestion is that we plan on a Christmas release, everybody gets their pull requests for the next merge window done *before* the holidays, and then we see what happens. I think we all want to have a calm holiday season without either the stress of a merge window _or_ the stress of prepping for one." (See the LKML for the full message.)

ZDNet reports that Jarkko Sakkinen, a kernel contributor from Intel, "has released a set of patches that conceal some of the f-bombs that Linux kernel developers have added to kernel code comments over the years." The patch set "addresses 15 components where 'fuck' or 'fucking' appeared in code comments, which have now been swapped out for a 'hugload of hugs'."

NVIDIA Raspberry Pi Linux Mint kernel Code of Conduct
Categories: Linux News