Tuesday·28·March·2017
System Tray Icon to Monitor a Linux Software RAID Locally //at 04:09 //by abe
About a year ago I bought a new workstation computer for myself at home. It’s a Tuxedo XUX_Cube which is advertised as a gaming PC. But I ordered a slightly atypical non-gamer configuration:
- As much RAM as possible (64 GB)
- Intel i7 CPU, but the low power variant
- Only with the onboard Intel graphics card. (No need for NVidia binary crap drivers.)
- 2× Samsung 128 GB SSD for OS and $HOME plus 2× 3 TB WD Red disks for media storage; both pairs set up as RAID 1
- Bitfenix Prodigy-M case in Orange. (Not available in Tuxedo Computer’s online shop, but they nevertheless ordered it for me. :-)
Of course the box runs Debian. To be more precise, it runs Debian Sid
with sysvinit-core as init system and i3 as window manager.
As I usually have no monitoring clients on my laptops and private workstations, I rather often felt the urge to do a cat /proc/mdstat on that box.
So at some point I wanted something like smart-notifier, but for Linux Software (MD) RAIDs. And since I found nothing, I did what Open Source guys usually do in such cases: I wrote it myself — of course in Perl — and called it systray-mdstat.
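The core check such a tool performs is simple enough to sketch in a few lines of shell. This is just an illustration, not the actual Perl code of systray-mdstat; notify-send (from libnotify) stands in for the systray icon here:

#!/bin/sh
# Warn if any MD array in /proc/mdstat reports a missing member,
# i.e. an underscore somewhere in its "[UU]" status field.
if grep -q '\[[U_]*_[U_]*\]' /proc/mdstat; then
    notify-send -u critical "MD RAID degraded" "$(grep -A 1 '^md' /proc/mdstat)"
fi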
First I wondered about which build system would be most suitable for that task, but in the end I once again went with Dist::Zilla for the upstream build system and hence dh-dist-zilla for the Debian packaging.
Ideas for the actual implementation were taken from Wouter’s fdpowermon for the systray icon framework in Perl and from Myon’s mdstat Xymon plugin for an already proven logic to parse /proc/mdstat. (Both Wouter and Myon have stated in a GnuPG-signed e-mail that I copied less code than would validate their copyrights, so I was able to license it under a single license, namely the GNU GPL version 3.)
As of now, systray-mdstat is also available as a package in Debian Unstable. It won’t make it into Stretch, as its first line of code was written after the soft freeze for Stretch was already in place.
Tagged as: Bitfenix, Debian, dh-dist-zilla, Dist::Zilla, dzil, GitHub, hardware, i3, Linux, orange, Perl, Prodigy-M, RAID, systray-mdstat, Tuxedo Computers
Maintaining Debian Packages of Perl Modules with dh-dist-zilla //at 03:59 //by abe
Maintaining Debian packages of Perl modules usually can be done with the common git-buildpackage (aka gbp) workflow with its three git branches master (or debian), upstream and pristine-tar:
- upstream contains the upstream code as imported from upstream’s release tarballs.
- pristine-tar contains the binary diffs between the contents of the upstream branch and the original tarball. This is mostly metadata (timestamps, permissions, file owners, etc.) which git doesn’t store.
- master (or debian) contains upstream plus the Debian packaging.
This also works more or less fine for Perl modules where the Debian package maintainer is also the upstream developer. In that case mostly the upstream branch is used (and then maybe called master, while the Debian packaging branch is then called debian).
But the files needed for a proper so-called “CPAN distribution” of a Perl module often contain redundant information (version numbers, required modules, etc.) which needs to be maintained. And for that, many people prefer Don’t Repeat Yourself (DRY) as a principle.
Dist::Zilla
One nice and common tool for that is Dist::Zilla, or short dzil. It generates most of the redundant but required data out of a central source, e.g. Dist::Zilla’s dist.ini or the contained .pm files. dzil build creates a tarball which contains all files needed for CPAN.
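The round trip looks roughly like this on the command line; which files get generated depends on the plugins configured in dist.ini:

dzil build    # generates the files CPAN needs (Makefile.PL, META files, etc.) plus a release tarball
dzil clean    # removes the generated build directory and tarball again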
But now we have a dilemma: Debian expects those generated files inside the upstream branch, while the files are only generated from other files in that branch. There are multiple solutions, but all of them involve committing generated files to the git repository:
- Commit them into the upstream branch. Disadvantage: You’ll likely later forget which files were generated and which weren’t.
- Commit the generated files into a separate branch, e.g. use master (original code), upstream (original code plus the files generated by dzil build, maybe imported with git-import-orig), pristine-tar, and a debian branch (based on upstream).
librun-parts-perl aka Run::Parts (a Perl wrapper around and a pure-Perl implementation of Debian’s run-parts tool) was initially maintained in the latter way.
But especially in cases where we just need a Perl module packaged as .deb without uploading it to CPAN (e.g. project-internal modules), this is a tedious workflow and overkill. It would be much nicer if debhelper would just call dzil to generate all the stuff it needs to build the package.
dh-dist-zilla
Well, you can do that now, at least with Debian Jessie. This is what dh-dist-zilla does: It is a debhelper sequence plugin which calls dzil build and dzil clean at the right moment and takes care that all dh_auto_* commands look in the directory with the generated files instead of the rather clean project root directory.
To use dh-dist-zilla, you just need to add a build-dependency on it and the Dist::Zilla plugins you use, and add --with dist-zilla to your minimal dh-style debian/rules file:
#!/usr/bin/make -f

%:
	dh $@ --with dist-zilla
That’s it.
With regards to workflow and git branches, you may still want to use separate branches for upstream work and debian work, and you may want to continue to use pristine-tar, but you don’t have to commit generated files to git anymore and you can maintain a clean master branch with nearly no redundancy.
And if you need to generate the final upstream tarball for your Debian package, just call dh get-orig-source or, maybe easier to use with tab completion, dh_dist_zilla_origtar.
This is how the librun-parts-perl package is maintained nowadays. There’s otherwise not much difference to the old, classically maintained versions.
More DRY
The next step in the DRY evolution is to reduce redundancies between the upstream (Dist::Zilla-based) packaging and the Debian packaging. There are a few tools available, partially brand new, partially not yet packaged:
- dh-dist-zilla’s dh-dzil-refresh, which combines dh-make-perl’s “refresh” subcommand with Dist::Zilla.
- Enrico Zini’s debdry, which aims to be a front-end to all the language-specific packaging automation tools like dh-make-perl and gem2deb.
- The not yet packaged Perl module distribution Dist-Zilla-Deb, which among other things contains the (slightly under-documented) Perl module Dist::Zilla::Plugin::Deb::VersionFromChangelog to use the version from Debian’s changelog as the primary source for the version of the module. (Source code is on GitHub.)
- And then there is Dist::Zilla::App::Command::authordebs aka libdist-zilla-app-command-authordebs-perl by Dominique Dumont, which lists or installs Dist::Zilla author dependencies as Debian packages. (Source code is on GitHub, too.)
I wouldn’t be surprised if there’s more to come in this area.
P.S.: I actually started this blog posting in September 2014 and never finished it until now. I had to kick out some stuff which had already become outdated again, but could also add some more recent things.
Tagged as: CPAN, debdry, debhelper, Debian, dh-dist-zilla, dh-dzil-refresh, dh-make-perl, Dist-Zilla-Deb, Dist::Zilla, DRY, gbp, Git, git-buildpackage, GitHub, Jessie, Packaging, Perl, pristine-tar
Sunday·10·March·2013
Rendering Markdown, Asciidoc and Friends automatically while Editing //at 15:41 //by abe
Partially because of Markdown being GitHub’s markup format of choice, I enjoy writing documents in simple markup formats more and more.
There is, though, one common annoyance with these formats compared to writing plain HTML…
The Annoyance
They need to be rendered (i.e. more or less compiled) before you can view your outpourings, e.g. in the web browser. So the workflow usually is:
- Save the current file in your favourite editor
- Switch to a terminal with a command line
- Cursor up, Enter
- Switch to your favourite web browser
- Hit the reload button
Using a Specialized Editor with Live Preview
One choice would be to use a specific editor with live rendering. The one I know in Debian (from Wheezy on) is ReText (Debian package retext). It supports Markdown and reStructuredText.
But as with most simple GUI editors, I miss many of the advanced editing commands available in Emacs.
Using Emacs’ Markdown Mode
Then there is the Markdown Mode for Emacs (part of Debian’s emacs-goodies-el package), where you can get a “preview” by pressing C-c C-c p. But for some reason this takes several seconds, opens a new buffer and window with the rendered HTML code and then starts (hardcoded) Firefox, which is not my preferred web browser. And if you do that a second time without closing Firefox first, it won’t just reload the file but will open a new tab. You might think that just hitting reload should suffice. But no, the new tab has a different file name, so reload doesn’t help. Additionally it may not use my preferred Markdown implementation. Meh.
Well, I probably could fix all those issues with Markdown Mode, it’s only Emacs Lisp. Heck, the called command is even configurable. But fixing at least four issues to fix one workflow annoyance? Maybe some other time, but not as long as there are other nice choices…
Using inotifywait to Render on Write
So every time you save the currently edited file, you immediately want to re-render the same HTML file from it. This can be easily automated by using Linux’ inotify kernel subsystem, which notices changes to the filesystem and reports them to applications which asked for them.
One such tool is inotifywait, which can either output all or just specific events, or just exit when the first requested event occurs. With the latter it’s easy to write a while loop on the command line which regenerates a file after every write access. I use either Pandoc or Asciidoc for that since both generate full HTML pages including header and footer, but you can also use it with Markdown to render just the HTML body. Most browsers render it correctly anyway:
while inotifywait -q -e modify index.md; do pandoc -s -f markdown -t html -o index.html index.md; done
while inotifywait -q -e modify index.txt; do asciidoc index.txt; done
while inotifywait -q -e modify index.md; do markdown index.md > index.html; done
This solution is even editor- and build-system-agnostic. (But not operating-system-agnostic.)
inotifywait is part of inotify-tools, a useful set of commandline tools to interface with inotify. They’re packaged in Debian as inotify-tools, too.
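A slightly more general variant of the above (my sketch, not from inotify-tools’ documentation) watches the whole directory and rebuilds whichever file was just written:

while file=$(inotifywait -q -e close_write --format '%f' .); do
  case "$file" in
    *.md)  pandoc -s -f markdown -t html -o "${file%.md}.html" "$file" ;;
    *.txt) asciidoc "$file" ;;
  esac
done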
Using mdpress for Markdown plus Impress.js based Slides
The Ruby-written mdpress is a special case of the previous approach. It’s a commandline tool to convert Markdown into Impress.js-based slide shows, and it has an option named --automatic which causes it to keep running and automatically update the presentation as soon as changes are made to the Markdown file.
mdpress is not yet in Debian, but there’s an ITP for it, and Impress.js itself recently entered Debian as libjs-impress. Nevertheless, two dependencies (highlight.js and ruby-launchy, both ITP’ed) are still missing in Debian.
Tagged as: Asciidoc, Emacs, emacs-goodies-el, GitHub, HTML, Impress.js, inotify, inotify-tools, inotifywait, ITP, Major-Mode, Markdown, mdpress, oneliner, Pandoc, reST, ReText, Ruby, slides, Wheezy
Wednesday·11·April·2012
Tools for CLI Road Warriors: Remote Shells //at 19:44 //by abe
Most of my private online life happens on netbooks, and besides the web browser, SSH is my most used program there. Accordingly I also have hosts on the net to which I connect via SSH. My most used program on those hosts is GNU Screen.
So yes, for things like e-mail, IRC, and Jabber I connect to a running screen session on some host with a permanent internet connection. On those hosts there is usually one GNU Screen instance running permanently with either mutt or irssi (which is also my Jabber client via a Bitlbee gateway).
But there are some other less well-known tools which I regard as useful in such a setup. The following two tools can both be seen as SSH for special occasions.
autossh
I already blogged about autossh, even twice, so I’ll just recap the most important features here:
autossh is a wrapper around SSH which regularly checks, via two tunnels connected to each other on the remote side, whether the connection is still alive; if not, it kills the ssh and starts a new one with the same parameters (i.e. tunnels, port forwardings, commands to call, etc.).
It’s quite obvious that this is perfect to be combined with screen’s -R and -d options.
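A typical invocation might look like this (host name and monitoring port are placeholders):

# Reconnect automatically and reattach (or create) the remote GNU Screen session
autossh -M 20000 -t user@example.org 'screen -dR'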
I use autossh so often that I even adopted its Debian package.
mosh
Since last week there’s a new kid in town^W Debian Unstable: mosh targets the same problems as autossh (unreliable networks, roaming, suspending the computer, etc.), just with a completely different approach which partially even obsoletes the usage of GNU Screen or tmux:
While mosh uses plain SSH for authentication, authorization and key exchange, the final connection is an AES-128-encrypted UDP connection on a random port and is independent of the client’s IP address.
This gives mosh the following advantages: the connection stays up even if you switch networks or suspend your netbook. So if you’re just running a single text-mode application you don’t even need GNU Screen or tmux. (You still do if you want the terminal multiplexing features of GNU Screen or tmux.)
Another nice feature, especially on unreliable WLAN connections or laggy GSM or UMTS connections is mosh’s output prediction based on its input (i.e. what is typed). Per line it tries to guess which server reaction a key press would cause and if it detects a lagging connection, it shows the predicted result underlined until it gets the real result from the server. This eases writing mails in a remote mutt or chatting in a remote irssi, especially if you noticed that you made a typo, but can’t remember how many backspaces you would have to type to fix it.
Mosh needs to be installed on both client and server, but the server is only activated via SSH, so it has no port open unless a connection is started. And despite mosh currently being available only in Debian Unstable, the package builds fine on Squeeze, too. There’s also a PPA for Ubuntu, and of course you can also get the source code, e.g. as a git checkout from GitHub.
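Usage is deliberately close to plain SSH; a quick sketch (host name is a placeholder):

mosh user@example.org                  # interactive shell over mosh's UDP channel
mosh user@example.org -- screen -dR    # directly reattach a remote GNU Screen session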
mosh is still under heavy development and new features and bug fixes get added nearly every day.
Thanks to Christine Spang for sponsoring and mentoring Keith’s mosh package in Debian.
Update: I gave a lightning talk about Mosh and AutoSSH in German at Easterhegg
2012. The slides are available online.
Tagged as: autossh, Bitlbee, Debian, GitHub, GNU Screen, IRC, irssi, Jabber, mosh, mutt, PPA, Squeeze, SSH, ssh, Testing, Ubuntu, Unstable
Thursday·22·March·2012
Tools for CLI Road Warriors: Tunnels //at 19:49 //by abe
Sometimes the network you’re connected to is either untrusted (e.g. wireless) or castrated in some way. In both cases you want a tunnel to your trusted home base.
In the following I’ll show you three completely different tunneling tools which may be helpful while travelling.
sshuttle
sshuttle is a tool somewhere in between an automatic port forward and a VPN. It tunnels arbitrary TCP connections and DNS through an SSH tunnel without requiring root access on the remote end of the SSH connection.
So it’s perfect for redirecting most of your traffic through an SSH tunnel to your favourite SSH server, e.g. to ensure your local privacy when you are online via a public, unencrypted WLAN (i.e. easy to sniff for everyone).
It runs on Linux and MacOS X and only needs a Python interpreter on the remote side. It does require root access (usually via sudo) on the client side, though.
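A typical invocation might look like this (server name is a placeholder; 0/0 stands for “all IPv4 traffic”):

# Tunnel all TCP connections plus DNS lookups through an SSH connection to example.org
sudo sshuttle --dns -r user@example.org 0/0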
It’s currently available at least in Debian Unstable and Testing (Wheezy) as well as in Ubuntu since 11.04 Natty.
Miredo
Miredo is a free and open-source implementation of Microsoft’s NAT-traversing Teredo IPv6 tunneling protocol for at least Linux, FreeBSD, NetBSD and MacOS X.
Miredo includes not only a Teredo client but also a Teredo server implementation. The developer of Miredo also runs a public Miredo server, so you don’t even need to install a server somewhere. If you run Debian or Ubuntu, you just need to do apt-get install miredo as root and you have IPv6 connectivity. It’s that easy.
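A quick sanity check after the installation could look like this (the interface name is Miredo’s default, the ping target is just an example):

sudo apt-get install miredo
ip -6 addr show dev teredo       # the tunnel interface Miredo creates by default
ping6 -c 3 ipv6.google.com       # verify end-to-end IPv6 connectivity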
So it’s perfect to get a dynamic IPv6 tunnel for your laptop or mobile phone, independently of where you are and without the need to register any IPv6 tunnel or configure the Miredo client.
I usually use Miredo on my netbooks to be able to access my boxes at home (which are behind an IPv4 NAT router which is also a SixXS IPv6 tunnel endpoint) from wherever I am.
iodine
iodine is likely the most undermining tool in this set. It tunnels IPv4 over DNS, allowing you to make arbitrary network connections if you are on a network where nothing but DNS requests is allowed (i.e. only DNS packets reach the internet).
This is often the case on wireless LANs with a landing page. They redirect all web traffic to the landing page. But the network’s routers try to avoid poisoning the client’s DNS cache with replies different from those the client would get after logging in. So DNS packets usually pass even the local network’s DNS servers unchanged; just TCP and other UDP packets are redirected until the user has logged in.
With an iodine tunnel, it is possible to get a network connection to the outside on such a network anyway. On startup iodine tries to automatically find the best parameters (MTU, request type, etc.) for the current environment. However, that may fail if any DNS server in between imposes DNS request rate limits.
To be able to start such a tunnel you need to set up an iodine daemon somewhere on the internet. Choose a server which is not already a DNS server.
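As a rough sketch with made-up names: assuming the subdomain t.example.org is delegated (via an NS record) to your server, the two ends are started like this:

sudo iodined -f 10.0.0.1 t.example.org    # on the server: tunnel endpoint with its tunnel-side IP
sudo iodine -f t.example.org              # on the restricted network: the client only emits DNS queries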
iodine is available in many distributions, e.g. in
Debian and in Ubuntu.
Tagged as: autossh, Debian, GitHub, iodine, IPv6, Miredo, NAT, Python, Squeeze, SSH, sshuttle, Testing, Ubuntu, Unstable, VPN
Wednesday·24·November·2010
Useful but Unknown Unix Tools: netselect //at 00:05 //by abe
Ever wondered which mirror of your favourite Linux distribution is the fastest at your location?
Check it with netselect (code at GitHub). It checks for the number of hops and ping times to given hosts and tells you which one is the fastest of them:
# netselect -vv ftp.de.debian.org ftp2.de.debian.org \
    ftp.ch.debian.org ftp.nl.debian.org ftp.debian.org
Running netselect to choose 1 out of 5 addresses.
.......................................................
ftp.de.debian.org        25 ms  16 hops   90% ok ( 9/10) [   72]
ftp2.de.debian.org       17 ms  17 hops   90% ok ( 9/10) [   51]
ftp.ch.debian.org         0 ms   3 hops   90% ok ( 9/10) [    0]
ftp.nl.debian.org        22 ms  15 hops   90% ok ( 9/10) [   62]
ftp.debian.org           22 ms  15 hops   90% ok ( 9/10) [   60]
    0 ftp.ch.debian.org
And if you’re too lazy to optimize your sources.list with netselect manually, just use the netselect-apt package. It will do it for you.
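A sketch of that (as far as I remember, netselect-apt writes the resulting sources.list into the current directory by default, to be copied to /etc/apt afterwards):

sudo netselect-apt stable    # probes the mirror list and writes a sources.list for the fastest mirror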
Tagged as: apenwarr, APT, GitHub, mirrors, netselect, netselect-apt, nuggets, ping, traceroute, UUUT
Friday·15·October·2010
Thoughts on Gitorious and GitHub plus a useful git hook //at 11:36 //by abe
When I took over the development of xen-tools, I looked around for an appropriate git hosting service. I especially had a look at GitHub and Gitorious.
If you just regard the features, GitHub is definitely targeted more at single developers and Gitorious more towards projects:
At GitHub, every repository has its URL under the URL of a user page, which makes it nearly impossible to have user-independent, “official” repositories for projects which have more than one official developer.
At Gitorious, every hosted repository needs to belong to a project, even if it’s only a published configuration. But a project can have more than one git repository. You only seem to be able to have personal repositories if you clone some existing Gitorious repository.
So from a feature point of view, the xen-tools git repositories fit way better with Gitorious’ hosting, while the git repositories with zshrc, conkerorrc and desktop configuration files definitely have more fitting addresses on GitHub, in my case http://github.com/xtaran/$repository. On Gitorious, they are now together under a “project” called “Axel’s configuration files” at http://gitorious.org/abe which contains git repositories of my .zshrc (based on grml’s .zshrc), my configuration for Conkeror, and all the files necessary for my ratpoison/xmobar based netbook/laptop desktop.
I do feel a little bad, though, for giving the project the very short “slug” name “abe” instead of “abe-config” (as I did initially), since “abe” is IMHO not a proper “project name” for my configuration files, and possibly other projects would have a more reasonable claim to that name on Gitorious. But that way it better suits its purpose: gathering some of my git repositories which don’t belong to a proper project.
But there’s another important point when comparing Gitorious and GitHub: Free Software needs free tools, as Benjamin Mako Hill posted recently on Planet Debian. Despite my (probably well known) distrust of Google and therefore also Google Code, and despite knowing the history of SourceForge becoming non-free, I was not that aware that GitHub’s software is only partially open source and therefore also not free software, while Gitorious is both, as it’s licensed under the GNU Affero General Public License (like StatusNet/identi.ca, for example), which is basically the GPLv3 with its ideas also applied to the hosted web application scenario (which the GPL itself doesn’t cover).
Initially I just had the xen-tools git repositories on Gitorious and all my small one-repository “projects” as copies of the repositories on my own git server on GitHub to get some more publicity for them and allow “social cloning”. After reading Mako’s article, I decided to at least have repository clones on Gitorious of all repositories I mirror at GitHub, too.
That way I force nobody to use the non-free tools on GitHub for “social cloning” one of my git repositories. And of course I have copies of my code somewhere on the net as backup. Or to say it with Linus Torvalds’ (slightly updated) words: Only wimps use tape backup: real men just upload their important stuff on git, and let the rest of the world clone it. ;-)
But isn’t it tedious to always push your code to three repositories?
No, it isn’t. I just push my code to git.noone.org where I have configured the appropriate remotes and the following post-receive hook:
#!/bin/sh
read oldrev newrev refname
# ${refname##*/} strips the leading refs/heads/ (the POSIX equivalent of zsh's :t modifier),
# so only the branch name gets pushed to the two mirrors.
git push gitorious "${refname##*/}"
git push github "${refname##*/}"
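The two remotes referenced by the hook are just ordinary named remotes configured once on git.noone.org; the repository paths below are placeholders:

git remote add gitorious git@gitorious.org:abe/zshrc.git
git remote add github    git@github.com:xtaran/zshrc.git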
The only other thing necessary is to use ssh-agent and SSH agent
forwarding to at least the host you’re pushing to.
Tagged as: .zshrc, abe, Agent Forwarding, AGPL, Conkeror, Free Software, git, GitHub, Gitorious, gitweb, Google, Google Code, GPL, grml, Hook, Hosting, identi.ca, Linus Torvalds, mako, non-free, Open Source, Planet Debian, Ratpoison, Real Man, Social Coding, Social Networking, SourceForge, SSH, ssh-agent, StatusNet, xen-tools, zsh