
Feed aggregator

Take Your Git In-House

Linux Journal - Thu, 09/27/2018 - 07:56
by John S. Tonello

If you're wary of the Microsoft takeover of GitHub, or if you've been looking for a way to wean yourself off free public repositories, or if you want to ramp up your DevOps efforts, now's a good time to look at installing and running GitLab yourself. It's not as difficult as you might think, and the free, open-source GitLab CE version provides a lot of flexibility to start from scratch, migrate or graduate to more full-fledged versions.

In today's software business, getting solid code out the door fast is a must, and practices to make that easier are part of any organization's DevOps toolset. Git has risen to the top of the heap of version control tools, because it's simple, fast and makes collaboration easy.

For developers, tools like Git ensure that their code isn't just backed up and made available to others, but nearly guarantee that it can be incorporated into a wide variety of third-party development tools—from Jenkins to Visual Studio—that make continuous integration and continuous delivery (CI/CD) possible. Orchestration, automation and deployment tools easily integrate with Git as well, which means code developed on any laptop or workstation anywhere can be merged, branched and integrated into deployed software. That's why version control repositories are the future of software development and DevOps, no matter how big or small you are, and no matter whether you're building monolithic apps or containerized ones.

Getting Started with Git

Git works by taking snapshots of code on every commit, so every version of contributed code is always available. That means it's easy to roll back changes or look over different contributors' work.

If you're working in an environment that uses Git, you can do your work even when you're offline. Everything is saved in a project structure on your workstation, just as it is in the remote Git repository, and when you're next online, your commits and pushes update the master (or other) code branch quickly and easily.

Most Git users (even newbies) use the Git command-line tools to clone, commit and push changes, because it's easy, and for nearly 28 million developers, GitHub has become the de facto remote Git-based repository for their work. In fact, GitHub has moved beyond being just a code repository to become a multifaceted code community featuring 85 million projects. That's a lot of code.
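For the curious, a typical command-line session looks something like this (a minimal sketch; the repository URL, branch and filenames here are made up for illustration):

$ git clone https://gitlab.example.com/team/project.git   # copy the remote repository locally
$ cd project
$ git checkout -b fix-readme                               # do the work on a branch
$ echo "Build instructions" >> README.md
$ git add README.md
$ git commit -m "Document the build steps"                 # snapshot the change locally
$ git push origin fix-readme                               # publish the branch to the remote repository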

GitLab is gaining popularity as a remote code repository too, but it's smaller and bills itself as more DevOps-focused, with CI/CD tools included for free. Both repositories offer free hosted accounts that allow users to create a namespace, and start contributing and collaborating right away. The graphical browser interfaces offered by the GitHub- and GitLab-hosted services make it easy to manage projects and project code, and also to add SSH keys, so you easily can connect from your remote terminal on Linux, Windows or Mac.

Categories: Linux News

Cisco Confirms 88 Products Vulnerable to FragmentSmack Bug, KDE neon Rebased on Ubuntu 18.04 LTS, GNOME 3.30.1 Released, Rust Announces Version 1.29.1 and Mozilla Launches Firefox Monitor

Linux Journal - Wed, 09/26/2018 - 08:16

News briefs for September 26, 2018.

Cisco confirms that 88 of its products that rely on the Linux kernel are vulnerable to the FragmentSmack bug. According to ZDNet, "the bug can saturate a CPU's capacity when under a low-speed attack using fragmented IPv4 and IPv6 packets, which could cause a denial-of-service condition on the affected device." Affected products include "Nexus switches, Cisco IOS XE software, and equipment from its lines of Unified Computing and Unified Communications brands, several TelePresence products, and a handful of wireless access points."

The KDE neon team announces the rebase of its packages onto Ubuntu 18.04 LTS "Bionic Beaver" and encourages users to upgrade now. You also can download a clean installation from here.

GNOME 3.30.1 has been released. This release contains only bugfixes. If you want to compile it, you can use the BuildStream project snapshot. See the list of updated modules and changes here.

The Rust Team yesterday announced Rust 1.29.1. This new version fixes a security vulnerability in the standard library "where if a large number was passed to str::repeat, it could cause a buffer overflow after an integer overflow. If you do not call the str::repeat function, you are not affected." See the release notes on GitHub for all the details.

Mozilla yesterday launched Firefox Monitor, a free service that alerts you if you've been part of a data breach. Enter your email at Firefox Monitor for a basic scan.

News Cisco Security KDE Ubuntu GNOME Rust Mozilla Firefox
Categories: Linux News

Support for a GNSS and GPS Subsystem

Linux Journal - Wed, 09/26/2018 - 07:00
by Zack Brown

Recently, there was a disagreement over whether a subsystem really addressed its core purpose or not. That's an unusual debate to have. Generally developers know if they're writing support for one feature or another.

In this particular case, Johan Hovold posted patches to add a GNSS subsystem (Global Navigation Satellite System), used by GPS devices. His idea was that commercial GPS devices might use any input/output ports and protocols—serial, USB and whatnot—forcing user code to perform difficult probes in order to determine which hardware it was dealing with. Johan's code would unify the user interface under a /dev/gnss0 file that would hide the various hardware differences.

But, Pavel Machek didn't like this at all. He said that there wasn't any actual GNSS-specific code in Johan's GNSS subsystem. There were a number of GPS devices that wouldn't work with Johan's code. And, Pavel felt that at best Johan's patch was a general power management system for serial devices. He felt it should not use names (like "GNSS") that then would be unavailable for a "real" GNSS subsystem that might be written in the future.

However, in kernel development, "good enough" tends to trump "good but not implemented". Johan acknowledged that his code didn't support all GPS devices, but he said that many were proprietary devices using proprietary interfaces, and those companies could submit their own patches. Also, Johan had included two GPS drivers in his patch, indicating that even though his subsystem might not contain GNSS-specific code, it was still useful for its intended purpose—regularizing the GPS device interface.

The debate went back and forth for a while. Pavel seemed to have the ultimate truth on his side—that Johan's code was at best misnamed, and at worst, incomplete and badly structured. Johan, however, had real-world usefulness on his side: something like his patch had been requested by other developers for a long time, and it solved actual problems confronted by people today.

Finally Greg Kroah-Hartman put a stop to all debate—at least for the moment—by simply accepting the patch and feeding it up to Linus Torvalds for inclusion in the main kernel source tree. He essentially said that there was no competing patch being offered by anyone, so Johan's patch would do until anything better came along.

Pavel didn't want to give up so quickly, and he tried at least to negotiate a name change away from "GNSS", so that a "real" GNSS subsystem might still come along without a conflict. But with his new-found official support, Johan said, "This is the real gnss subsystem. Get over it."

Categories: Linux News

WLinux Distro for Windows Subsystem for Linux Now Available, openSUSE Call for Hosts, New Firefox Bug, Firefox Collecting Telemetry Data and Creative Commons Releases Significant CC Search Update

Linux Journal - Tue, 09/25/2018 - 08:18

News briefs for September 25, 2018.

Whitewater Foundry recently launched WLinux, a Linux distribution optimized for use on the Windows Subsystem for Linux (WSL). Because the distro is created specifically for WSL, it has "sane defaults" and also allows for "faster patching of security and compatibility issues". You can download it from the Microsoft Store, and it's currently on sale for $9.99.

openSUSE announced that it's accepting proposals to host the openSUSE 2020 conference. The "Call for Hosts" is open until April 15, 2019. See the Conference How to Check List and the Conference How to bid wiki pages if you're interested.

Security researcher Sabri Haddouche has discovered a new Firefox bug that causes your browser and sometimes your PC (on Linux, Mac and Windows) to crash. In an interview with ZDNet, Haddouche explained, "What happens is that the script generates a file (a blob) that contains an extremely long filename and prompts the user to download it every one millisecond". See also the bug report for more information.

In other Firefox news, the browser evidently is collecting telemetry data via hidden add-ons, ITWire reports. The ITWire post also quotes Mozilla's Marshall Erwin, director of Trust and Security: "...we will measure Telemetry Coverage, which is the percentage of all Firefox users who report telemetry. The Telemetry Coverage measurement will sample a portion of all Firefox clients and report whether telemetry is enabled. This measurement will not include a client identifier and will not be associated with our standard telemetry."

Creative Commons released a significant update to its beta of the CC Search project yesterday. This iteration "integrates access to more than 10 million images across 13 content providers". It also features AI image tags generated from Clarifai, the "best in class image classification software that provides tagging support and visual recognition". In addition, CC Search has a new design making it easy for users to "search by category, see popular images, and search more accurately across a wide range of content". And finally, users can share content and create public lists of images without needing an account.

News openSUSE Windows Distributions WLinux Security Firefox Privacy creative commons
Categories: Linux News

Bytes, Characters and Python 2

Linux Journal - Tue, 09/25/2018 - 08:17
by Reuven M. Lerner

Moving from Python 2 to 3? Here's what you need to know about strings and their role in your upgrade.

An old joke asks "What do you call someone who speaks three languages? Trilingual. Two languages? Bilingual. One language? American."

Now that I've successfully enraged all of my American readers, I can get to the point, which is that because so many computer technologies were developed in English-speaking countries—and particularly in the United States—the needs of other languages often were left out of early computer technologies. The standard established in the 1960s for translating numbers into characters (and back), known as ASCII (the American Standard Code for Information Interchange), took into account all of the letters, numbers and symbols needed to work with English. And that's all that it could handle, given that it was a seven-bit (that is, 128-character) encoding.

If you're willing to ignore accented letters, ASCII can sort of, kind of, work with other languages, as well—but the moment you want to work with another character set, such as Chinese or Hebrew, you're out of luck. Variations on ASCII, such as ISO-8859-x (with a number of values for "x"), solved the problem to a limited degree, but there were numerous issues with that system.

Unicode gives each character, in every language around the globe, a unique number. This allows you to represent (just about) every character in every language. The problem is how you can represent those numbers using bytes. After all, at the end of the day, bytes are still how data is stored to and read from filesystems, how data is represented in memory and how data is transmitted over a network. In many languages and operating systems, the encoding used is UTF-8. This ingenious system uses different numbers of bytes for different characters. Characters that appear in ASCII continue to use a single byte. Some other character sets (for example, Arabic, Greek, Hebrew and Russian) use two bytes per character. And yet others (such as Chinese and emoji) use three or four bytes per character.
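You can see those varying byte counts for yourself at a shell prompt by counting the bytes in individual characters (a rough illustration that assumes your terminal is set to UTF-8):

$ echo -n 'a' | wc -c     # ASCII letter: 1 byte
$ echo -n 'é' | wc -c     # accented Latin letter: 2 bytes
$ echo -n '中' | wc -c     # Chinese character: 3 bytes
$ echo -n '😀' | wc -c     # emoji: 4 bytes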

In a modern programming language, you shouldn't have to worry about this stuff too much. If you get input from the filesystem, the user or the network, it should just come as characters. How many bytes each character needs is an implementation detail that you can (or should be able to) ignore.

Why do I mention this? Because a growing number of my clients have begun to upgrade from Python 2 to Python 3. Yes, Python 3 has been around for a decade already, but a combination of some massive improvements in the most recent versions and the realization that only 18 months remain before Python 2 is deprecated is leading many companies to realize, "Gee, maybe we finally should upgrade."

The major sticking point for many of them? The bytes vs. characters issue.
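A two-line illustration of that sticking point, assuming both interpreters are installed and the terminal uses UTF-8:

$ python2 -c 'print(len("中文"))'    # 6 -- a Python 2 str holds bytes
$ python3 -c 'print(len("中文"))'    # 2 -- a Python 3 str holds characters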

Categories: Linux News

Archiving web sites

Anarcat - Mon, 09/24/2018 - 19:00

I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web.

Converting simple sites

The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. In my previous life as a web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with the "move fast and break things" attitude of web development. Working with the Drupal content-management system (CMS) was particularly challenging in that regard as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites but also for third-party sites that are outside of your control and that you might want to safeguard.

For simple or static sites, the venerable Wget program works well. The incantation to mirror a full web site, however, is byzantine:

$ nice wget --mirror --execute robots=off --no-verbose --convert-links \
    --backup-converted --page-requisites --adjust-extension \
    --base=./ --directory-prefix=./ --span-hosts \
    --domains=www.example.com,example.com http://www.example.com/

The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores robots.txt rules, as is now common practice for archivists, and hammers the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.
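With Wget, for example, a politer variant of the same crawl could add a pause between requests and a bandwidth cap (the values here are arbitrary):

$ wget --mirror --wait=1 --random-wait --limit-rate=200k \
    --domains=www.example.com,example.com http://www.example.com/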

The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.

That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a --reject-regex option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well.
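As a sketch of the --reject-regex approach mentioned above, skipping calendar, login, and comment URLs might look like this (the pattern is made up for illustration):

$ wget --mirror --reject-regex '/(calendar|login|comment)' \
    http://www.example.com/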

JavaScript doom

Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using progressive enhancement to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like NoScript or uMatrix will confirm.

Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper (pamplemousse.ca), I found that WordPress adds query strings (e.g. ?ver=1.12.4) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right Content-Type header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites.

As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach.

Creating and displaying WARC files

At the Internet Archive, Brewster Kahle and Mike Burner designed the ARC (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") specification that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the International Internet Preservation Consortium (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based Heritrix crawler.

A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the --warc parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is pywb, a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on http://localhost:8080/:

$ pip install pywb
$ wb-manager init example
$ wb-manager add example crawl.warc.gz
$ wayback

This tool was, incidentally, built by the folks behind the Webrecorder service, which can use a web browser to save dynamic page contents.

Unfortunately, pywb has trouble loading WARC files generated by Wget because it followed an inconsistency in the 1.0 specification, which was fixed in the 1.1 specification. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called crawl. Here is how it is invoked:

$ crawl https://example.com/

(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the -exclude-related flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the -c flag. But, best of all, the resulting WARC files load perfectly in pywb.

Future work and alternatives

There are plenty more resources for using WARC files. In particular, there's a Wget drop-in replacement called Wpull that is specifically designed for archiving web sites. It has experimental support for PhantomJS and youtube-dl integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called ArchiveBot, which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at ArchiveTeam in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, snscrape will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is crocoite, which uses the Chrome browser in headless mode to archive JavaScript-heavy sites.

This article would also not be complete without a nod to the HTTrack project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line.

In the same vein, during my research I found a full rewrite of Wget called Wget2 that has support for multi-threaded operation, which might make it faster than its predecessor. It is missing some features from Wget, however, most notably reject patterns, WARC output, and FTP support, but it adds RSS, DNS caching, and improved TLS support.

Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in Wallabag, a self-hosted "read it later" service designed as a free-software alternative to Pocket (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually unreadable and Wallabag sometimes fails to parse the article. Instead, other tools like bookmark-archiver or reminiscence save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay.

The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously working on a backup of the Internet Archive itself.

This article first appeared in the Linux Weekly News.

As usual, here's the list of issues and patches generated while researching this article:

I also want to personally thank the folks in the #archivebot channel for their assistance and letting me play with their toys.

The Pamplemousse crawl is now available on the Internet Archive; it might end up in the Wayback Machine at some point if the Archive curators think it is worth it.

Another example of a crawl is this archive of two Bloomberg articles, which the "save page now" feature of the Internet Archive wasn't able to save correctly. But webrecorder.io could! Those pages can be seen in the Webrecorder player to get a better feel for how faithful a WARC file really is.

Finally, this article was originally written as a set of notes and documentation in the archive page which may also be of interest to my readers.

Categories: External Blogs

YubiKey 5 Series Launched, Google Chrome's Recent Questionable Privacy Practice, PlayOnLinux Alpha Version 5 Released, Android Turns Ten, and Fedora 29 Atomic and Cloud Test Day

Linux Journal - Mon, 09/24/2018 - 08:42

News briefs September 24, 2018.

Yubico announced the launch of the YubiKey 5 series this morning, which are the first multi-protocol security keys to support FIDO2/WebAuthn and allow you to replace "weak password-based authentication with strong hardware-based authentication". You can purchase them here for $45.

Google Chrome recently has begun automatically signing your browser in to your Google account for you every time you log in to a Google property, such as Gmail, without asking and without notification. See Matthew Green's blog post for more information on the huge privacy implications of this new practice.

PlayOnLinux released the alpha version of PlayOnLinux and PlayOnMac 5 ("Phoenicis") over the weekend. The interface has been completely redesigned and is now decentralized, so if the website has issues, the program will still work. In addition, the script is now available on GitHub. This alpha version supports 135 games and apps. See the full list here.

Android celebrated its 10th birthday this weekend. See TechRadar, Engadget and TechCrunch for different takes on Android's history.

Fedora 29 Atomic and Fedora 29 Cloud development is wrapping up, and they now provide the latest versions of packages in Fedora 29, including all new features and bug fixes. Fedora Atomic Working Group and Cloud SIG are organizing a Test Day, Monday, October 1st. See the wiki page if you're interested in participating.

News Security YubiKey Google Chrome Privacy PlayOnLinux gaming Android Fedora
Categories: Linux News

ModSecurity and nginx

Linux Journal - Mon, 09/24/2018 - 06:30
by Elliot Cooper

nginx is the web server that's replacing Apache in more and more of the world's websites. Until now, nginx has not been able to benefit from the security ModSecurity provides. Here's how to install ModSecurity and get it working with nginx.

Earlier this year, the popular open-source web application firewall, ModSecurity, released version 3 of its software. Version 3 is a significant departure from the earlier versions, because it's now modularized. Before version 3, ModSecurity worked only with the Apache web server as a dependent module, so there was no way for other HTTP applications to utilize ModSecurity. Now the core functionality of ModSecurity, the HTTP filtering engine, exists as a standalone library, libModSecurity, and it can be integrated into any other application via a "connector". A connector is a small piece of code that allows any application to access libModSecurity.

A Web Application Firewall (WAF) is a type of firewall for HTTP requests. A standard firewall inspects data packets as they arrive and leave a network interface and compares the properties of the packets against a list of rules. The rules dictate whether the firewall will allow the packet to pass or get blocked.

ModSecurity performs the same task as a standard firewall, but instead of looking at data packets, it inspects HTTP traffic as it arrives at the server. When an HTTP request arrives at the server, it's first routed through ModSecurity before it's routed on to the destination application, such as Apache2 or nginx. ModSecurity compares the inbound HTTP request against a list of rules. These rules define the form of a malicious or harmful request, so if the incoming request matches a rule, ModSecurity blocks the request from reaching the destination application where it may cause harm.

The following example demonstrates how ModSecurity protects a WordPress site. The following HTTP request is a non-malicious request for the index.php file as it appears in Apache2's log files:

GET /index.php HTTP/1.1

This request does not match any rules, so ModSecurity allows it onto the web server.

WordPress keeps much of its secret information, such as the database password, in a file called wp-config.php, which is located in the same directory as the index.php file. A careless system administrator may leave this important file unprotected, which means a web server like Apache or nginx happily will serve it. This is because they will serve any file that is not protected by specific configuration. This means that the following malicious request:

GET /wp-config.php HTTP/1.1

will be served by Apache to whoever requests it.
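Once ModSecurity is active with a rule set such as the OWASP Core Rule Set, the difference is easy to see from the command line (a hypothetical site; the exact status code depends on the rules loaded):

$ curl -s -o /dev/null -w '%{http_code}\n' https://www.example.com/index.php
200
$ curl -s -o /dev/null -w '%{http_code}\n' https://www.example.com/wp-config.php
403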

Categories: Linux News

Weekend Reading: Scary Tales from the Server Room

Linux Journal - Sat, 09/22/2018 - 07:00
by Carlie Fairchild

It's always better to learn from someone else's mistakes than from your own. This weekend we feature Kyle Rankin and Bill Childers as they tell stories from their years as systems administrators. It's a win-win: you get to learn from their experiences, and they get to make snide comments to each other. 

We also want to hear your scary server room stories. E-mail us, publisher@linuxjournal.com, with yours (just a few sentences or even a few paragraphs is fine), and we'll publish every one we receive on October 31...spooky.

 

Zoning Out

by Kyle Rankin and Bill Childers

Sometimes events and equipment conspire against you and your team to cause a problem. Occasionally, however, it's lack of understanding or foresight that can turn around and bite you. Unfortunately, this is a tale of where we failed to spot all the possible things that might go wrong.

 

Panic on the Streets of London

by Kyle Rankin and Bill Childers

I was now at the next phase of troubleshooting: prayer. Somewhere around this time, I had my big breakthrough...

 

It's Always DNS's Fault!

by Kyle Rankin and Bill Childers

I was suffering, badly. We had just finished an all-night switch migration on our production Storage Area Network while I was hacking up a lung fighting walking pneumonia. Even though I did my part of the all-nighter from home, I was exhausted. So when my pager went off at 9am that morning, allowing me a mere four hours of sleep, I was treading dangerously close to zombie territory...

 

Unboxing Day

by Kyle Rankin and Bill Childers

As much as I love working with Linux and configuring software, one major part of being a sysadmin that always has appealed to me is working with actual hardware. There's something about working with tangible, physical servers that gives my job an extra dimension and grounds it from what might otherwise be a completely abstract job even further disconnected from reality. On top of all that, when you get a large shipment of servers, and you view the servers at your company as your servers, there is a similar anticipation and excitement when you open a server box as when you open Christmas presents at home. This story so happens to start during the Christmas season...

 

 

 

Categories: Linux News

Purism Launches the Librem Key, Mir 1.0 Released, Solus 3 ISO Refresh Now Available, New Malware as a Service Botnet Discovered and Sparky 5.5 Is Out

Linux Journal - Fri, 09/21/2018 - 08:41

News briefs September 21, 2018.

Purism yesterday launched Librem Key, the "first and only OpenPGP smart card providing a Heads-firmware-integrated tamper-evident boot process". The Librem Key is the size of an average thumb drive, lets you keep your secret encryption keys in your pocket, and alerts you if anyone tampers with your kernel or BIOS while you're away from your laptop. The key works with all laptops but has extended features with Purism's Librem laptop line. You can order one from here for $59. See also Kyle Rankin's post for more details on the Librem Key.

The Mir team announces the milestone release of the Mir 1.0 display server today. This release is "targeted at IoT device makers and enthusiasts looking to build the next generation of graphical solutions". Mir's goal is to "unify the graphical environment across all devices, including desktop, TV, and mobile devices and continues to be developed with new features and modern standards". See the Mir website for more information.

Solus 3 ISO Refresh was released yesterday. This refresh of the operating system designed for home computing "enables support for a variety of new hardware released since Solus 3, introduces an updated set of default applications and theming, as well as enables users to immediately take advantage of new Solus infrastructure". You can download Solus Budgie, Solus GNOME or Solus MATE from here.

A new botnet in the "Malware as a Service" arena has been discovered that touts "Android-based payloads to potential cybercriminals". The botnet was developed by a Russian-speaking group called "The Lucy Game", which already has provided demos for potential subscribers. See ZDNet for more details.

New install ISO images of Sparky 5.5 "Nibiru", which is based on Debian testing "Buster", are now available for download. Changes include Linux kernel 4.18.6, Calamares installer updated to v. 3.2.1, GCC 8 is now the default and much more. You can download new ISO images from here.

News Purism Security Librem Mir Solus Distributions malware Sparky
Categories: Linux News

FOSS Project Spotlight: Nitrux, a Linux Distribution with a Focus on AppImages and Atomic Upgrades

Linux Journal - Fri, 09/21/2018 - 08:41
by Nitrux Latinoa…

Nitrux is a Linux distribution with a focus on portable application formats like AppImages. Nitrux uses KDE Plasma 5 and KDE Applications, and it also uses our in-house software suite, Nomad Desktop.

What Can You Use Nitrux For?

Well, just about anything! You can surf the internet, word-process, send email, create spreadsheets, listen to music, watch movies, chat, play games, code, do photo editing, create content—whatever you want!

Nitrux's main feature is the Nomad Desktop, which aims to extend Plasma to suit new users without compromising its power and flexibility for experts. Nomad's features:

  • The System Tray replaces the traditional Plasma version.
  • An expanded notification center allows users to manage notifications in a friendlier manner.
  • Easier access to managing networks: quick access to different network settings without having to search for them.
  • Improved media controls: a less confusing way to adjust the application's volume and integrated media controls.
  • Calendar and weather: displays the traditional Plasma calendar but also adds the ability to see appointments and the ability to configure location settings to display the weather.
  • Custom Plasma 5 artwork: including Look and Feel, Plasma theme, Kvantum theme, icon theme, cursor themes, SDDM themes, Konsole theme and Aurorae window decoration.

Nitrux is a complete operating system that ships the essential apps and services for daily use: office applications, PDF reader, image editor, music and video players and so on. We also include non-KDE or Qt applications like Chromium and LibreOffice that together create a friendly user experience.

Available Out of the Box

Nitrux includes a selection of applications carefully chosen to perform the best when using your computer:

  • Dolphin: file manager.
  • Kate: advanced text editor.
  • Ark: archiving tool.
  • Konsole: terminal emulator.
  • Chromium: web browser.
  • Babe: music player.
  • VLC: multimedia player.
  • LibreOffice: open-source office suite.
  • Showimage: image viewer.

Explore a Universe of Apps in Nitrux

The NX Software Center is a free application that provides Linux users with a modern and easy way to manage the software installed on their open-source operating systems. Its features allow you to search, install and manage AppImages. AppImages are faster to install, easier to create and safer to run. AppImages aim to work on any distribution or device, from IoT devices to servers, desktops and mobile devices.
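Outside the NX Software Center, running an AppImage by hand is just as simple (a generic example, not a Nitrux-specific tool):

$ chmod +x SomeApp.AppImage
$ ./SomeApp.AppImage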

Figure 1. The Nomad Software Center

Categories: Linux News

Canonical Announces Extended Security Maintenance for Ubuntu 14.04 LTS, Mozilla to Discuss the Future of Advertising at ICDPPC, Newegg Attacked, MetaCase Launches MetaEdit+ 5.5 and MariaDB Acquires Clustrix

Linux Journal - Thu, 09/20/2018 - 08:58

News briefs for September 20, 2018.

Canonical yesterday announced the Extended Security Maintenance for Ubuntu 14.04 LTS "Trusty Tahr", which means critical and important security patches will be available beyond the Ubuntu 14.04 end-of-life date (April 2019).

Mozilla will hold a high-level panel discussion on "the future of advertising in an open and sustainable internet ecosystem" at the 40th annual International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Brussels, Belgium, October 22–26, 2018. The discussion is titled "Online advertising is broken: Can ethics fix it?", and it's scheduled for October 23, 2018.

Attackers stole credit-card information from Newegg by injecting 15 lines of skimming code on the online payments page, which remained undetected from August 14th to September 18, 2018, TechCrunch reports. Yonathan Klijnsma, threat researcher at RiskIQ, told TechCrunch that "These attacks are not confined to certain geolocations or specific industries—any organization that processes payments online is a target." If you entered your credit-card data during that period, contact your bank immediately.

MetaCase this morning announced the launch of MetaEdit+ 5.5 for Linux, which brings collaborated models to Git and other version control systems. It's "aimed at expert developers looking to gain productivity and quality by generating tight code directly from domain-specific models". You can download a free trial from here.

MariaDB has acquired Clustrix, the "pioneer in distributed database technology". According to the press release, this acquisition gives "MariaDB's open source database the scalability and high-availability that rivals or exceeds Oracle and Amazon while foregoing the need for expensive computing platforms or high licensing fees."

News Ubuntu Canonical Mozilla adtech Privacy git MariaDB Oracle Databases
Categories: Linux News

Investigating Some Unexpected Bash coproc Behavior

Linux Journal - Thu, 09/20/2018 - 08:57
by Mitch Frazier

Recently while refreshing my memory on the use of Bash's coproc feature, I came across a reference to a pitfall that described what I thought was some quite unexpected behavior. This post describes my quick investigation of the pitfall and suggests a workaround (although I don't really recommend using it).

Categories: Linux News

Ampere eMAG for Hyperscale Cloud Computing Now Available, LLVM 7.0.0 Released, ApsaraDB RDS for MariaDB TX Announced, New Xbash Malware Discovered and Kong 1.0 Launched

Linux Journal - Wed, 09/19/2018 - 08:57

News briefs for September 19, 2018.

Ampere, in partnership with Lenovo, announced availability of the Ampere eMAG for hyperscale cloud computing. The first-generation Armv8-A 64-bit processors provide "high-performance compute, high memory capacity, and rich I/O to address cloud workloads including big data, web tier and in-memory databases". Pricing is 32 cores at up to 3.3GHz Turbo for $850 or 16 cores at up to 3.3GHz Turbo for $550.

LLVM 7.0.0 is out. This release is the result of six months of work by the community and includes "function multiversioning in Clang with the 'target' attribute for ELF-based x86/x86_64 targets, improved PCH support in clang-cl, preliminary DWARF v5 support, basic support for OpenMP 4.5 offloading to NVPTX, OpenCL C++ support, MSan, X-Ray and libFuzzer support for FreeBSD, early UBSan, X-Ray and libFuzzer support for OpenBSD, UBSan checks for implicit conversions, many long-tail compatibility issues fixed in lld which is now production ready for ELF, COFF and MinGW, new tools llvm-exegesis, llvm-mca and diagtool." See the release notes for details, and go here to download.

Alibaba Cloud and MariaDB announce ApsaraDB RDS for MariaDB TX, which is "the first public cloud to incorporate the enterprise version of MariaDB and provide customer support directly from the two companies. ApsaraDB RDS for MariaDB TX provides Alibaba Cloud customers the latest database innovations and most secure enterprise solution for mission-critical transactional workloads." See the press release for more information.

Unit 42 researchers have discovered a new malware family called Xbash, which they have connected to the Iron Group, that targets Linux and Microsoft Windows servers. Besides ransomware and coin-mining capabilities, "Xbash also has self-propagating capabilities (meaning it has worm-like characteristics similar to WannaCry or Petya/NotPetya). It also has capabilities not currently implemented that, when implemented, could enable it to spread very quickly within an organizations' network (again, much like WannaCry or Petya/NotPetya)." See the Palo Alto Networks post for more details on the attack and how to protect your servers.

Kong Inc. yesterday announced the launch of Kong 1.0, the "only open-source API purpose built for microservices, cloud native and serverless architectures". According to the press release, Kong 1.0 is feature-complete: "it combines sub-millisecond low latency, linear scalability and unparalleled flexibility with a robust feature set, support for service mesh patterns, Kubernetes Ingress controller and backward compatibility between versions." See also the Kong GitHub page.

News Ampere HPC Cloud LLVM MariaDB Security Kong
Categories: Linux News

Moving Compiler Dependency Checks to Kconfig

Linux Journal - Wed, 09/19/2018 - 07:00
by Zack Brown

The Linux kernel config system, Kconfig, uses a macro language very similar to the make build tool's macro language. There are a few differences, however. And of course, make is designed as a general-purpose build tool while Kconfig is Linux-kernel-specific. But, why would the kernel developers create a whole new macro language so closely resembling that of an existing general-purpose tool?

One reason became clear recently when Linus Torvalds asked developers to add an entirely new system of dependency checks to the Kconfig language, specifically testing the capabilities of the GCC compiler.

It's actually an important issue. The Linux kernel wants to support as many versions of GCC as possible—so long as doing so would not require too much insanity in the kernel code itself—but different versions of GCC support different features. The GCC developers always are tweaking and adjusting, and GCC releases also sometimes have bugs that need to be worked around. Some Linux kernel features can only be built using one version of the compiler or another. And, some features build better or faster if they can take advantage of various GCC features that exist only in certain versions.

Up until this year, the kernel build system has had to check all those compiler features by hand, using many hacky methods. The art of probing a tool to find out if it supports a given feature dates back decades and is filled with insanity. Imagine giving a command that you know will fail, but giving it anyway because the specific manner of failure will tell you what you need to know for a future command to work. Now imagine hundreds of hacks like that in the Linux kernel build system.
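A rough shell sketch of one such probe (not the kernel's actual code) shows the general idea: compile an empty input with the flag in question and see whether GCC rejects it:

$ gcc -Werror -fstack-protector-strong -x c -c /dev/null -o /dev/null 2>/dev/null \
    && echo "compiler supports -fstack-protector-strong" \
    || echo "compiler does not support -fstack-protector-strong"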

Part of the problem with having those hacky checks in the build system is that you find out about them only during the build—not during configuration. But since some kernel features require certain GCC versions, the proper place to learn about the GCC version is at config time. If the user's compiler doesn't support a given feature, there's no reason to show that feature in the config system. It should just silently not exist.

Linus requested that developers migrate those checks into the Kconfig system and regularize them into the macro language itself. This way, kernel features with particular GCC dependencies could identify those dependencies and then show up or not show up at config time, according to whether those dependencies had been met.

That's the reason simply using make wouldn't work. The config language had to represent the results of all those ugly hacks in a friendly way that developers could make use of.

Categories: Linux News

Linux Community to Adopt New Code of Conduct, Firefox Reality Browser Now Available, Lamplight City Game Released, openSUSE Summit Nashville Announced and It's Now Easier to Run Ubuntu VMs on Windows 10

Linux Journal - Tue, 09/18/2018 - 08:34

News briefs for September 18, 2018.

Following Linus Torvalds' apology for his behavior, the Linux Community has announced it will adopt a "Code of Conduct", which pledges to make "participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation."

Mozilla announced this morning that its new Firefox Reality browser, "designed from the ground up to work on stand-alone virtual and augmented reality (or mixed reality) headsets", is now available in the Viveport, Oculus and Daydream app stores. See the Mozilla blog for more information, how to participate and download links.

The new game Lamplight City, "a steampunk-ish detective adventure", was released recently for Linux, Windows and macOS. See the Steam store for more info and to purchase.

openSUSE announces it will hold its openSUSE Summit in Nashville, Tennessee, next year, April 5-6, 2019. Registration is now open for the event, and the call for papers is open until January 15, 2019.

It's now much easier to run Ubuntu VMs on Windows 10 via Hyper-V Quick Create. According to ZDNet, Canonical and Microsoft partnered to release "an optimized Ubuntu Desktop image that's available through Microsoft's Hyper-V Gallery".

News Community Linus Torvalds Firefox VR Mozilla gaming openSUSE Ubuntu Windows Desktop Virtual Machines
Categories: Linux News

Writing More Compact Bash Code

Linux Journal - Tue, 09/18/2018 - 07:00
by Mitch Frazier

In any programming language, idioms may be used that may not seem obvious from reading the manual. Often these usages of the language represent ways to make your code more compact (as in requiring fewer lines of code). Of course, some will eschew these idioms believing they represent bad style. Style, of course, is in the eye of the beholder, and this article is not intended as an exercise in defining good or bad style. So for those who may be tempted to comment on the grounds of style, I would (re)direct your attention to /dev/null.
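One common idiom of that kind (a generic example, not necessarily one from the article) collapses a guard clause onto a single line:

# the long form
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# the compact form
[ -f ~/.bashrc ] && . ~/.bashrc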

Categories: Linux News

Globbing and Regex: So Similar, So Different

Linux Journal - Mon, 09/17/2018 - 08:51
by Shawn Powers

Grepping is awesome, as long as you don't glob it up! This article covers some grep and regex basics.

There are generally two types of coffee drinkers. The first type buys a can of pre-ground beans and uses the included scoop to make their automatic drip coffee in the morning. The second type picks single-origin beans from various parts of the world, accepts only beans that have been roasted within the past week and grinds those beans with a conical burr grinder moments before brewing in any number of complicated methods. Text searching is a bit like that.

For most things on the command line, people think of *.* or *.txt and are happy to use file globbing to select the files they want. When it comes to grepping a log file, however, you need to get a little fancier. The confusing part is when the syntax of globbing and regex overlap. Thankfully, it's not hard to figure out when to use which construct.

Globbing

The command shell uses globbing for filename completion. If you type something like ls *.txt, you'll get a list of all the files that end in .txt in the current directory. If you do ls R*.txt, you'll get all the files that start with capital R and have the .txt extension. The asterisk is a wild card that lets you quickly filter which files you mean.

You also can use a question mark in globbing if you want to specify a single character. So, typing ls read??.txt will list readme.txt, but not read.txt. That's different from ls read*.txt, which will match both readme.txt and read.txt, because the asterisk means "zero or more characters" in the file glob.
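For example, in a directory containing read.txt, readme.txt and README.TXT (a made-up listing), the two globs behave differently:

$ ls read*.txt
read.txt  readme.txt
$ ls read??.txt
readme.txt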

Here's the easy way to remember if you're using globbing (which is very simple) vs. regular expressions: globbing is done to filenames by the shell, and regex is used for searching text. The only frustrating exception to this is that sometimes the shell is too smart and conveniently does globbing when you don't want it to—for example:

grep file* README.TXT

In most cases, this will search the file README.TXT looking for the regular expression file*, which is what you normally want. But if there happens to be a file in the current folder that matches the file* glob (let's say filename.txt), the shell will assume you meant to pass that to grep, and so grep actually will see:

grep filename.txt README.TXT

Gee, thank you so much Mr. Shell, but that's not what I wanted to do. For that reason, I recommend always using quotation marks when using grep. 99% of the time you won't get an accidental glob match, but that 1% can be infuriating. So when using grep, this is much safer:

grep "file*" README.TXT

Because even if there is a filename.txt, the shell won't substitute it automatically.

Categories: Linux News

Linus Torvalds Taking a Break, Help Krita Squash the Bugs, Vulnerability in Alpine Linux, Flatpak Now Works on Windows Subsystem for Linux and AnsibleFest 2018 Announced

Linux Journal - Mon, 09/17/2018 - 08:44

News briefs for September 17, 2018.

Linus Torvalds is taking a break. In his rc4 email update over the weekend, he writes about his scheduling mix-up with the kernel summit and having a "look yourself in the mirror moment", and then (to summarize), he writes: "hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely. I am going to take time off and get some assistance on how to understand people's emotions and respond appropriately."

Krita announced its developer fundraiser "let's squash the bugs"! The goal this year for the open-source graphics editor is to "fix bugs, make Krita more stable and bring more polish and shine to all the features we have made possible together". Visit here to learn how you can help.

A vulnerability has been discovered in Alpine Linux, which is commonly used in Docker images. Worst-case scenario, according to The Register, an "attacker could intercept apk's package requests during Docker image building, inject them with malicious code, and pass them along to the target machines that would unpack and run the code within their Docker container." Update apk and images now.

Alexander Larsson, lead developer and creator of the Flatpak package system, announced via Twitter that it now works on Windows Subsystem for Linux. See the post on Neowin for more on the story, and the "hacky workarounds" required.

Red Hat announces AnsibleFest 2018, which will be held October 2-3 in Austin, Texas, and will cover many aspects of IT automation. See the AnsibleFest website for all the details.

News Linus Torvalds kernel Krita Alpine Linux Docker Security Windows Flatpak Ansible Red Hat
Categories: Linux News

Fedora Silverblue Test Day Next Week, Nextcloud 14 Released, Plasma 5.14 Beta Now Available, openSUSE's Recent Snapshots and Ansible Tower 3.3 Is Out

Linux Journal - Fri, 09/14/2018 - 08:44

News briefs for September 14, 2018.

The Fedora Workstation Team is holding a test day next week for Fedora Silverblue, a new variant of Fedora that has rpm-ostree at its core and provides fully atomic upgrades. The test day is Thursday, September 20, 2018. For more information on how to participate, visit the Silverblue Test Day Wiki page.

Nextcloud announced the release of version 14 this week. This new version introduces two big security improvements: video verification and Signal/Telegram/SMS 2FA support. Version 14 also includes many collaboration improvements as well as a Data Protection Confirmation app in compliance with the GDPR. Go here to install.

KDE released Plasma 5.14 beta yesterday. New to this version are improvements to Plasma's Discover software manager and the addition of a Firmware Update feature, among other things. The final release should be available in three weeks.

openSUSE has released three new snapshots, and the latest brought new major versions of Flatpak and qemu. Flatpak version 1.0 came with snapshot 20180911, and Mozilla Thunderbird received a major update in snapshot 20180910. See the announcement for more details on all the recent snapshot updates.

Ansible Tower 3.3 is now available. New enhancements include added functionality with Red Hat OpenShift, more granular permissions, improvements to the scheduler, support for multiple Ansible environments and more. Visit here for a free trial of Ansible Tower.

News Fedora Nextcloud Cloud Plasma KDE openSUSE Ansible Red Hat
Categories: Linux News