
Wednesday·07·December·2011

automounter vs procmail //at 00:10 //by abe

from the posthamster dept.

At work we use .procmailrc files generated by CGIpaf to let non-technical users create forwards, out-of-office mails, etc. and any combination thereof. This also has the advantage that we can filter out double bounces and spam (which also prevents us from being listed in spammer blacklists).

Unfortunately autofs (apparently regardless of whether autofs4 or autofs5 is used) seems to be unreliable under bursts of mount or umount requests, resulting either in “File or directory not found” errors when trying to access a user's home directory, or in “Directory not empty” errors when the automounter tries to remove the mount point after unmounting. In the latter case an unmounted directory owned by root is left behind.

In the end both cases lead to procmail behaving as if that user has no .procmailrc – which looks like sporadically lost mails to those who forward all their mails. (The mails can then be found in the user's local default INBOX.)

Additionally there are similar issues when the NFS servers are not available.

The most effective countermeasure we found so far was adding tests to the global /etc/procmailrc to check if the user’s home directory exists and belongs to the correct user:

# -----------------
# Global procmailrc
# -----------------

# For debugging, turn off if everything works well
VERBOSE=1
LOGFILE=/var/log/procmail.log

# This only works with Bourne shells; $SHELL defaults to the user's
# login shell. From experience, dash does not seem to work, so we use bash.
OLDSHELL=$SHELL
SHELL=/bin/bash

# temporary failure (see EX_TEMPFAIL in /usr/include/sysexits.h) if
# $LOGNAME is not set for some reason. (Just to be sure our paths
# later on are not senseless.)
:0
* ? test -z "$LOGNAME"
{
    LOG="Expected variable LOGNAME not set. "
    EXITCODE=75
    :0
    /dev/null
}

# temporary failure (see EX_TEMPFAIL in /usr/include/sysexits.h) if
# $HOME is not readable. ~$LOGNAME does not seem to work, so this uses
# a hard wired /home/.
:0
* ? test ! -r /home/$LOGNAME
{
    LOG="Home of user $LOGNAME not readable: /home/$LOGNAME "
    EXITCODE=75
    :0
    /dev/null
}

# temporary failure (see EX_TEMPFAIL in /usr/include/sysexits.h) if
# $HOME has wrong owner. ~$LOGNAME does not seem to work, so this uses
# a hard wired /home/.
:0
* ? test ! -O /home/$LOGNAME
{
    LOG="Home of user $LOGNAME has wrong owner: /home/$LOGNAME "
    EXITCODE=75
    :0
    /dev/null
}

[…]

If you want to store a copy of these mails on every delivery attempt for debugging purposes, replace /dev/null with some Maildir or mbox accessible only to root.
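The same checks can also be run by hand outside of procmail when debugging a user's home directory state. A minimal shell sketch – check_home is a made-up helper name, and exit code 75 is EX_TEMPFAIL, as in the recipes above:

```shell
#!/bin/bash
# Stand-alone version of the global procmailrc's home directory checks.
check_home() {
    local user="$1" home="$2"
    if [ -z "$user" ]; then
        echo "Expected variable LOGNAME not set." >&2
        return 75
    fi
    if [ ! -r "$home" ]; then
        echo "Home of user $user not readable: $home" >&2
        return 75
    fi
    if [ ! -O "$home" ]; then
        echo "Home of user $user has wrong owner: $home" >&2
        return 75
    fi
}

# Demo on a directory we definitely own and can read:
tmp=$(mktemp -d)
check_home "$(id -un)" "$tmp" && echo "checks passed for $tmp"
rmdir "$tmp"
```

Note that test -O checks ownership against the effective user running the script, so run it as the user in question, just as procmail delivers as the recipient.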

One small but important part was to explicitly declare bash as the shell for executing the tests; otherwise mails for users with tcsh or zsh as login shell filled up the mail queue and never got delivered (as long as the SHELL variable wasn't fixed).

Only drawback so far: on e-mail bursts this also delays e-mail for users who have no .procmailrc – because procmail can't even check whether there really is no .procmailrc.

Extensive procmail documentation can be found online at the Procmail Documentation Project as well as in the man pages procmail(1), procmailrc(5) and procmailex(5).

Monday·14·November·2011

grep everything //at 09:43 //by abe

from the *grep* dept.

During the OpenRheinRuhr I noticed that a friend of mine didn’t know about zgrep and friends. So I told him what other grep variations I know and he told me about some grep variations I didn’t know about.

So here’s our collection of grep wrappers, derivatives and variations. First I’ll list programs which search for text in different file formats:

grep through what                                       | Fixed Strings | Wildcards / Basic RegExps | Extended RegExps   | Debian package
--------------------------------------------------------|---------------|---------------------------|--------------------|---------------------------------
uncompressed text files                                 | fgrep         | grep                      | egrep              | grep
gzip-compressed text files                              | zfgrep        | zgrep                     | zegrep             | zutils, gzip
bzip2-compressed text files                             | bzfgrep       | bzgrep                    | bzegrep            | bzip2
xz-compressed text files                                | xzfgrep       | xzgrep                    | xzegrep            | xz-utils
uncompressed text files in installed Debian packages    | dfgrep        | dgrep                     | degrep             | debian-goodies
gzip-compressed text files in installed Debian packages | -             | dzgrep                    | -                  | debian-goodies
PDF documents                                           | -             | -                         | pdfgrep            | pdfgrep
POD texts                                               | podgrep       | -                         | -                  | pmtools
E-Mail folders (mbox, MH, Maildir)                      | -             | mboxgrep -G               | mboxgrep -E        | mboxgrep
Patches                                                 | -             | grepdiff                  | grepdiff -E        | patchutils
Process list                                            | -             | -                         | pgrep              | procps
Gnumeric spreadsheets                                   | ssgrep -F     | ssgrep                    | ?                  | gnumeric
Files in ZIP archives                                   | -             | -                         | zipgrep            | unzip
ID3 tags in MP3s                                        | -             | -                         | taggrepper         | taggrepper
Network packets                                         | -             | -                         | ngrep              | ngrep
Tar archives                                            | -             | -                         | targrep / ptargrep | perl (experimental only for now)
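As a quick taste of the compression-aware variants from the table, here is a minimal sketch which only assumes that the gzip package (which ships zgrep and zfgrep) is installed:

```shell
#!/bin/sh
# zgrep and zfgrep search gzip-compressed files without manual decompression.
tmp=$(mktemp -d)
printf 'foo\nbar\nbaz\n' | gzip > "$tmp/demo.txt.gz"

zgrep  'ba[rz]' "$tmp/demo.txt.gz"   # regexp match: prints "bar" and "baz"
zfgrep 'bar'    "$tmp/demo.txt.gz"   # fixed-string match: prints "bar"

rm -r "$tmp"
```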

And then there are also greps for special patterns on more or less normal files:

grep for what                              | uncompressed files                     | compressed files | Debian package
-------------------------------------------|----------------------------------------|------------------|---------------
PCRE (Perl Compatible Regular Expressions) | pcregrep (see also the grep -P option) | zpcregrep        | pcregrep
IP address in a given CIDR range           | grepcidr                               | -                | grepcidr
XPath expression                           | xml_grep                               | -                | xml-twig-tools

One question though is still unanswered for us: Is there some kind of meta-grep which chooses the right grep from above per file by looking at the file's MIME type, similar to xdg-open?
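Lacking such a tool, the idea can be approximated by dispatching on file(1)'s MIME type output. This is just a sketch: mimegrep is a made-up name, and the mapping covers only a few formats from the table above:

```shell
#!/bin/sh
# mimegrep (hypothetical): choose a grep variant based on the file's MIME type,
# similar in spirit to how xdg-open chooses an application.
mimegrep() {
    pattern="$1"; shift
    for f in "$@"; do
        case "$(file --brief --mime-type "$f")" in
            application/gzip|application/x-gzip) zgrep "$pattern" "$f" ;;
            application/x-bzip2)                 bzgrep "$pattern" "$f" ;;
            application/pdf)                     pdfgrep "$pattern" "$f" ;;
            *)                                   grep "$pattern" "$f" ;;
        esac
    done
}

# Example: transparently grep a plain and a gzipped file in one go.
tmp=$(mktemp -d)
printf 'hello\nworld\n' > "$tmp/plain.txt"
printf 'hello\nworld\n' | gzip > "$tmp/packed.gz"
mimegrep hello "$tmp/plain.txt" "$tmp/packed.gz"   # prints "hello" twice
rm -r "$tmp"
```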

Other tools which have grep in their name, but are too special to properly fit into the above lists:

  • ext3grep: Tool to help recover deleted files on ext3 filesystems
  • xautomation: Includes a tool named visgrep to grep for subimages inside other images.

Includes contributions by Frank Hofmann and Faidon Liambotis.

Friday·28·October·2011

Conkeror usable on Ubuntu again despite XULRunner removal //at 00:08 //by abe

from the annoying-pissing-contest dept.

Because of Mozilla's very annoying new release policy (which looks like a pissing contest with the similarly annoying Google Chrome/Chromium release schedule), Ubuntu kicked out Mozilla XULRunner with its recent release of 11.10 Oneiric. And with XULRunner, Ubuntu also kicked out Conkeror and all other XULRunner reverse dependencies. Meh.

Sparked by this thread on the Conkeror mailing list, I extended the Debian package's /usr/bin/conkeror wrapper script so that it also looks for firefox in the search path if no xulrunner* is found, and added an alternative dependency on firefox versions greater than or equal to 3.5.

From now on, if the wrapper script finds no xulrunner but firefox in the search path, it calls firefox -app instead of xulrunner-$VERSION to start Conkeror.
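The selection logic can be sketched roughly like this; this is a simplified illustration, not the actual wrapper script from the package, and the candidate list is made up:

```shell
#!/bin/sh
# Simplified sketch of the runner selection in a conkeror-style wrapper:
# prefer a xulrunner binary from the search path, otherwise fall back to
# "firefox -app".
find_runner() {
    for candidate in xulrunner-1.9.2 xulrunner-1.9.1 xulrunner-1.9 xulrunner; do
        if command -v "$candidate" >/dev/null 2>&1; then
            echo "$candidate"
            return 0
        fi
    done
    if command -v firefox >/dev/null 2>&1; then
        echo "firefox -app"
        return 0
    fi
    return 1
}

# Demonstrate the fallback with a stub firefox on the PATH:
stubdir=$(mktemp -d)
printf '#!/bin/sh\n' > "$stubdir/firefox"
chmod +x "$stubdir/firefox"
PATH="$stubdir:$PATH"
echo "Selected runner: $(find_runner)"   # "firefox -app" unless some xulrunner is installed
rm -r "$stubdir"
```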

With the exception of the about: page showing the orange-blue Firefox logo and claiming that this is “Firefox $CONKEROR_VERSION”, it works as expected on my Toshiba AC100 netbook running the armel port of Ubuntu 11.10.

From version 1.0~~pre+git1110272207-~nightly1 on, the Conkeror nightly-built Debian packages will be installable on Ubuntu 11.10 Oneiric again without the need to install or keep the XULRunner version from Ubuntu 11.04 Natty.

For those who don’t want to use the nightly builds, I created a (currently still empty) specific PPA for Conkeror where I’ll probably upload all the conkeror packages I upload to Debian Unstable.

Thursday·27·October·2011

Conkeror in the Debian NEW queue //at 22:57 //by abe

from the Never-trust-a-dot-zero-release dept.

I already mentioned a few times in the blog that I'm working on a Debian package of the Conkeror web browser. And now, after a lot of fine-tuning (and I still have further ideas on how to improve the package ;-) Conkeror is finally in the NEW queue and will hopefully hit unstable in a few days. (Update Thursday, 03-Jul-2008, 18:13 CEST: The package has been accepted by Jörg and should be included on most architectures in tonight's updates.)

Those who could hardly await it can fetch Conkeror .debs from http://noone.org/debian/. The conkeror package itself is architecture-independent (but needs xulrunner-1.9 to be available), and its small C helper program spawn-process-helper is available as the package conkeror-spawn-process-helper for i386, amd64, sparc, alpha, powerpc, kfreebsd-i386 and kfreebsd-amd64. There are no backported packages for Etch, though, since I don't yet know of anyone who has successfully backported xulrunner-1.9 to Etch.

Interestingly, interest in Conkeror seems to have risen in the Debian community independently of its Debian packaging. Luca Capello, who sponsored the upload of my Conkeror package, pointed me to two blog posts on Planet Debian, written by people who are already fed up with Firefox 3 and are looking for a leaner, but still Gecko-based web browser: Decklin Foster is fed up with Firefox' -eh- Iceweasel's arrogance and MJ Ray is fed up with Firefox 3 and its SSL problems.

Since my previously favoured Gecko-based web browser Kazehakase never became really stable, but instead became slow and leaked memory (and was therefore not much better than Firefox 2), I can imagine that it's no longer a candidate for people seeking a lean and fast web browser.

Conkeror has some “strange” concepts of which the primary one is that it looks and feels like Emacs:

  • The current location is shown in a status bar below the website, where Emacs usually shows buffer names. All input, even entering new URLs to go to, is done via the mini-buffer, an input line below the status bar.

  • Instead of tabs it uses Emacs' concept of buffers. So no tab bar clutter, and yet easy access to all currently open pages.

  • It has no buttons, menu-bar or such. And except the status bar and mini-buffer, it uses the whole size of the window for the displayed web page. This is the main reason why I prefer Conkeror on the 7” EeePC: I don’t want to waste any pixels for buttons or menu bars and still have a fully functional web browser.

  • It of course has Emacs-like keybindings (with a slight touch of Lynx). While this may seem awkward to the vi world (hey, they have the vimperator*, also in Debian since a few days!), as an Emacs user you just have to remember that your web browser now also expects to be treated like an Emacs. It just works:

    C-x C-c
    Exit Emacs -eh- Conkeror
    C-x C-f
    Open File -eh- web page in new buffer
    C-x C-b
    Change to some other tab -eh- buffer
    C-x C-v
    Replace web page in this buffer and use the current URL as start for entering the new one
    C-x 5 2
    Open new frame -eh- window
    C-x 5 0
    Close current frame -eh- window
    C-x k
    Close tab, -eh- kill buffer
    C-h i
    Documentation
    C-s
    Incremental search forward
    C-r
    Incremental search backward
    C-g
    Stop
    l
    Go back (Think info-mode)
    g
    Go to (Open web page in this buffer)

    (Hehe, I like the faces of vi users having read these keybindings and now wondering how to remember them. SCNR. Well, sometimes vi key bindings are a mystery to me, too. :-)

    There are of course many more and nearly all are the same as in Emacs, even the universal argument C-u and the M-x command-line are there. E.g. C-u g lets you open a web page in a new buffer, too.

  • Conkeror also has a very promising concept for following and copying links with the keyboard only. Opera is very inefficient here, since you have to jump from link to link to get to the one you want. In Conkeror you just press f for following or c for copying links, and all links in the currently shown part of the page get a small number attached. Then you just enter the number (and additionally press Enter if the number is ambiguous) and the link is either opened or copied to the clipboard.

    A funny anecdote about how this concept grew over time: Early versions of Conkeror (back in the days when it was just a Firefox extension like vimperator) numbered all links on the page, not only the visible ones. On large pages with many links or buttons (e.g. my blog ;-), this took minutes to complete. The idea to number just the visible links is so simple and important – but someone first needed to have it. :-)

Footnotes

*) I just noticed that there is now also muttator, making Thunderbird look and behave like vim (and probably also mutt), too. I wonder into which e-mail client the Emacs community will convert Thunderbird. GNUS? RMAIL? VM? Wanderlust? What will it be called? Wunderbird? Thunderslust? (SCNRE ;-)

Daily Snapshot .debs of Conkeror //at 22:57 //by abe

from the development-tracking-using-APT dept.

Keeping up with packaging software which is under heavy development can be time-consuming. I noticed this while packaging Conkeror, because there was quite a demand for up-to-date packages, especially from upstream themselves.

So recently on the IRC channel #conkeror the idea of automatically built Debian packages came up. After a few hours of experimenting and a few days of steady optimizing, I can proudly present daily built snapshot packages of Conkeror, currently for Lenny and Sid, ready to be included in your sources.list:

deb     http://noone.org/conkeror-nightly-debs lenny main
deb-src http://noone.org/conkeror-nightly-debs lenny main

deb     http://noone.org/conkeror-nightly-debs sid main
deb-src http://noone.org/conkeror-nightly-debs sid main

The binary package conkeror-spawn-process-helper is currently only built for the i386 architecture, but other architectures may follow.

The packages will probably also work on any other Debian-based distribution (e.g. Ubuntu) which includes XULRunner version 1.9.

Surely they are not of the usual Debian quality, but they should do for staying up-to-date with Conkeror development just by using your favourite APT frontend.

The script which generates those packages is also available in the Conkeror git repository at repo.or.cz.

The APTable archive is generated with reprepro. Packages and the repository are signed with the passphrase-less GnuPG key 373B76B4 which is used only for the Conkeror nightly builds. (If anyone knows a better solution for automatic builds than a passphrase-less key, please tell me. :-)
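For reference, a reprepro setup like the one described boils down to one conf/distributions stanza per suite along these lines. This is only a sketch with made-up Origin/Label values; SignWith tells reprepro which GnuPG key to sign the repository with:

```
Origin: Conkeror Nightly Builds
Label: conkeror-nightly
Codename: sid
Architectures: source i386
Components: main
SignWith: 373B76B4
```

A second stanza with Codename: lenny covers the other suite.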

P.S.: I really like the new keybindings “<<”, “>>” and “G”. :-)

The World without a sane web browser? — or — Why Firefox sucks //at 22:57 //by abe

from the all-browsers-suck-this-one-just-sucks-less dept.

Although I read our Debian's Joey's blog posting about not being able to produce Mozilla security updates for Debian, only now, after reading about the other Debian's Joey's attempt to fix a security hole in Debian's Mozilla Firefox, do I see how asshole-like the Mozilla Foundation's security policy looks to Linux (and maybe other operating systems') distributions, which favour stability over feature richness.

As many know (or at least were forced to know ;-) I don’t like Firefox, because in spite of all the plugins it can’t cope with all the useful features of Galeon 1.2.x or Opera. That’s the UI point of view.

But from the political (correctness) point of view, we have to ask ourselves: What sane browser does the open source world still have?

  • Mozilla does not provide security patches, so Firefox, Mozilla (RIP), Epiphany and Galeon are no longer acceptable for distribution use.
  • Konqueror has planned to drop KHTML in favour of Mozilla's Gecko. So see above.
  • Dillo's rendering engine is fast but not really state of the art. The same goes for glinks (aka “links -g”).
  • Lynx, links and w3m somehow don't count, since the distributions (and sometimes me, too ;-) primarily need a graphical web browser.

But back to usability: I heard from quite a few people — even open source people — who are evaluating or even already using Opera as an alternative, because there is no sane open source web browser, even if you don't count Mozilla's security policy. And I can understand them. If Galeon didn't exist, I would probably be a convinced Opera-on-Debian user myself, although Opera is closed source. But I and many others can't live without a working and sane web browser.

The only thing I don't like about Opera is that the company seems to be (or at least was, a few years ago) very chaotic and uncoordinated. (And I really wonder how they are able to produce such impressive software.) But that's another story…

Friday·30·September·2011

Fun facts from the UDD //at 23:20 //by abe

from the username=packagename dept.

After spotting an upload of mira, who in turn spotted an upload of abe (the package, not an upload by me aka abe@d.o), mira (mirabilos aka tg@d.o) noticed that there are Debian packages which have the same name as some Debian Developers' login names.

Of course I noticed a long time ago that there is a Debian package with my login name “abe”. Another well-known Debian login and former package name is amaya.

But since someone else came up with that thought too, it was time to find the definitive answer to the question: which DD login names also exist as Debian package names?

My first try was based on the list of trusted GnuPG keys:

$ apt-cache policy $(gpg --keyring /etc/apt/trusted.gpg --list-keys 2>/dev/null | \
                     grep @debian.org | \
                     awk -F'[<@]' '{print $2}' | \
                     sort -u) 2>/dev/null | \
                   egrep -o '^[^ :]*'
alex
tor
ed
bam
ng

But this was not satisfying, as my own login name didn't show up, and gpg also threw quite a lot of block reading errors (which is also the reason for redirecting STDERR).

mira then had the idea of using the Ultimate Debian Database to answer this question more properly:

udd=> SELECT login, name FROM carnivore_login, carnivore_names
      WHERE carnivore_login.id=carnivore_names.id AND login IN
      (SELECT package AS login FROM packages, active_dds
       WHERE packages.package=active_dds.login UNION
       SELECT source AS name FROM sources, active_dds
       WHERE sources.source=active_dds.login)
      ORDER BY login;
 login |                 name
-------+---------------------------------------
 abe   | Axel Beckert
 alex  | Alexander List
 alex  | Alexander M. List  4402020774 9332554
 and   | Andrea Veri
 ash   | Albert Huang
 bam   | Brian May
 ed    | Ed Boraas
 ed    | Ed G. Boraas [RSA Compatibility Key]
 ed    | Ed G. Boraas [RSA]
 eric  | Eric Dorland
 gq    | Alexander GQ Gerasiov
 iml   | Ian Maclaine-cross
 lunar | Jérémy Bobbio
 mako  | Benjamin Hill
 mako  | Benjamin Mako Hill
 mbr   | Markus Braun
 mlt   | Marcela Tiznado
 nas   | Neil A. Schemenauer
 nas   | Neil Schemenauer
 opal  | Ola Lundkvist
 opal  | Ola Lundqvist
 paco  | Francisco Moya
 paul  | Paul Slootman
 pino  | Pino Toscano
 pyro  | Brian Nelson
 stone | Fredrik Steen
(26 rows)

Interestingly “tor” (Tor Slettnes) is missing in this list, so it’s not complete either…

At least I’m quite sure that nobody maintains a package with his own login name as package name. :-)

We also have no packages ending in “-guest”, so there’s no chance that a package name matches an Alioth guest account either…

Thursday·22·September·2011

Emacs Macros: Repeat on Steroids //at 16:06 //by abe

from the .-for-Emacsen dept.

vi users have their . (dot) redo command for repeating the last command. The article Repeating Commands in Emacs in Mickey Petersen’s blog Mastering Emacs explained Emacs’ equivalent for that, namely the command repeat, by default bound to C-x z.

I seldom use it though, as I mostly have to repeat a chain of commands. What I use instead are so-called Keyboard Macros.

For example for the CVE-2011-3192 vulnerability in Apache I added a line like Include /etc/apache2/sites-common/CVE-2011-3192.conf to all VirtualHosts.

So I started Emacs with all the relevant files: grep CVE-2011-3192 -l /etc/apache2/sites-available/*[^~] | xargs emacs &

To remove those “Include” lines again, M-x flush-lines is probably the easiest way in Emacs. So for every file I had to call flush-lines with always the same parameter, save the buffer, and then close the file or — in Emacsish — “kill” the buffer.

So while working on the first file I recorded my doing as a keyboard macro:

C-x (
Start recording
M-x flush-lines<Enter>CVE-2011-3192<Enter>
flush all lines which contain the string “CVE-2011-3192”
C-x C-s
save the current buffer
C-x C-k<Enter>
kill the current buffer, i.e. close the file
C-x )
Stop recording

Then I just had to call the saved macro with C-x e. With three keystrokes it flushed all lines, saved the changes, and switched to the next remaining file by closing the current one. And to make it even easier, from the second invocation on I only had to press e to call the macro again directly. So I just pressed e a bunch of times and had all files edited. (In this case I used git diff afterwards to check that I didn't wreck anything by half-automating my editing. :-)

Of course there are other ways to do this, too, e.g. use sed or so, but I still think it’s a neat example for showing the power of keyboard macros in Emacs. More things you can do with Emacs Keyboard Macros are described in the EmacsWiki entry Keyboard Macros.
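For completeness, the sed route mentioned above could look like this; a sketch using a throwaway copy of a config file rather than the real /etc/apache2 tree:

```shell
#!/bin/sh
# Non-interactive equivalent of the Emacs macro session: delete every line
# containing "CVE-2011-3192" from a set of files in place (GNU sed's -i).
tmp=$(mktemp -d)
printf 'ServerName example.org\nInclude /etc/apache2/sites-common/CVE-2011-3192.conf\n' \
    > "$tmp/vhost.conf"

sed -i '/CVE-2011-3192/d' "$tmp"/*.conf

cat "$tmp/vhost.conf"   # prints only the ServerName line
rm -r "$tmp"
```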

And if you still miss vi's . command in Emacs, you can use dot-mode, an Emacs mode currently maintained by Robert Wyrick, which more or less automatically defines keyboard macros and lets you call them with C-..

Wednesday·21·September·2011

Creative Toilet Paper Usage in Webcomics //at 10:34 //by abe

from the do-not-try-this-at-home dept.

Funnily two of my daily web comics recently featured interesting things you could do with toilet paper: Zits on 19th of September 2011 involving a fan and Calvin and Hobbes on 13th of September 2011 involving flushing the toilet.

Although both experiments are obviously a waste of resources, they look like quite some fun and I'm tempted to actually try them both at least once. (Though I don't plan to try them at home. :-)

Thursday·01·September·2011

Useful but Unknown Unix Tools: How wdiff and colordiff help to choose the right Swiss Army Knife //at 12:18 //by abe

from the colorful-diffs dept.

In light of the fact that it seems possible to fit the plastic caps of a Debian-branded Swiss Army Knife (last orders today!) onto an existing Swiss Army Knife (German howto as PDF), I started to think about which Victorinox Cybertool would be the best fit for me.

And because the Victorinox comparison page doesn't really show diffs, just columns of floating text which are not very helpful for generating diffs in your head, I used command line tools for that purpose:

wdiff

Because the floating texts are not line-based but just whitespace-separated, the tool of choice is not diff but wdiff, a word-based diff. It encloses additions and removals in {+…+} and [-…-] blocks. (No, those aren't Japanese smileys, although they look a lot like some. ^^)

The easiest and clearest way is to copy and paste the texts from Victorinox’ comparison page into some text files and compare them with wdiff:

$ wdiff cybertool34.txt cybertool41.txt
{+Schraubendreher 2.5mm,+} Pinzette, Nähahle mit Nadelöhr, {+Holzsäge,+} Bit-Schlüssel( 5 mm Innensechskant für die D-SUB Steckverbinder, 4 mm Innensechskant für Bits, Bit Phillips 0, Bit Phillips 1, Bit-Schlitzschrauben 4 mm, Bit Phillips 2, Bit Hex 4 mm, Bit Torx 8, Bit Torx 10, Bit Torx 15 ), Kombizange( Hülsenpresser, Drahtschneider ), Stech-Bohrahle, Kugelschreiber( auch zum DIP-Switch verstellen ), Mehrzweckhaken (Paketträger), {+Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ),+} Dosenöffner( kleiner Schraubendreher ), Kleine Klinge, Grosse Klinge, Ring, inox, Mini-Schraubendreher, Kapselheber( Schraubendreher, Drahtabisolierer ), {+Holzmeissel / Schaber,+} Bit-Halter, Stecknadel, inox, Schere, Korkenzieher, Zahnstocher

So this already extracted the seven tools which are in the Cybertool 41 but not in the Cybertool 34. Nevertheless, the diff is still not easily recognizable at first glance. There are several ways to help here.

First, wdiff has an option --no-common (the corresponding short option is -3) which shows just the added and removed words:

$ wdiff -3 cybertool34.txt cybertool41.txt
======================================================================
{+Schraubendreher 2.5mm,+}
======================================================================
 {+Holzsäge,+}
======================================================================
 {+Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ),+}
======================================================================
 {+Holzmeissel / Schaber,+}
======================================================================

This already makes the actual differences much quicker to recognize.

But if you also want to see the tools the two knives have in common, you need some visual help:

One option is to use wdiff’s --terminal (or short -t) option. Added words are then displayed inverse and removed words are shown underlined (background and foreground colors hardcoded as there is no “invert colors” style in CSS or HTML):

$ wdiff -t cybertool34.txt cybertool41.txt
Schraubendreher 2.5mm, Pinzette, Nähahle mit Nadelöhr, Holzsäge, Bit-Schlüssel( 5 mm Innensechskant für die D-SUB Steckverbinder, 4 mm Innensechskant für Bits, Bit Phillips 0, Bit Phillips 1, Bit-Schlitzschrauben 4 mm, Bit Phillips 2, Bit Hex 4 mm, Bit Torx 8, Bit Torx 10, Bit Torx 15 ), Kombizange( Hülsenpresser, Drahtschneider ), Stech-Bohrahle, Kugelschreiber( auch zum DIP-Switch verstellen ), Mehrzweckhaken (Paketträger), Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ), Dosenöffner( kleiner Schraubendreher ), Kleine Klinge, Druckkugelschreiber, Grosse Klinge, Ring, inox, Mini-Schraubendreher, Kapselheber( Schraubendreher, Drahtabisolierer ), Holzmeissel / Schaber, Bit-Halter, Stecknadel, inox, Schere, Korkenzieher, Zahnstocher

But some still prefer color over the contrast-rich inverse display and the easily overlooked underlining. This is where colordiff comes into play:

colordiff

colordiff is like syntax highlighting for diffs on the command line. It works with classic and unified diffs as well as with wdiffs and debdiffs (the debdiff command is part of the devscripts package).

$ wdiff cybertool34.txt cybertool41.txt | colordiff
{+Schraubendreher 2.5mm,+} Pinzette, Nähahle mit Nadelöhr, {+Holzsäge,+} Bit-Schlüssel( 5 mm Innensechskant für die D-SUB Steckverbinder, 4 mm Innensechskant für Bits, Bit Phillips 0, Bit Phillips 1, Bit-Schlitzschrauben 4 mm, Bit Phillips 2, Bit Hex 4 mm, Bit Torx 8, Bit Torx 10, Bit Torx 15 ), Kombizange( Hülsenpresser, Drahtschneider ), Stech-Bohrahle, Kugelschreiber( auch zum DIP-Switch verstellen ), Mehrzweckhaken (Paketträger), {+Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ),+} Dosenöffner( kleiner Schraubendreher ), Kleine Klinge, Grosse Klinge, Ring, inox, Mini-Schraubendreher, Kapselheber( Schraubendreher, Drahtabisolierer ), {+Holzmeissel / Schaber,+} Bit-Halter, Stecknadel, inox, Schere, Korkenzieher, Zahnstocher

$ wdiff cybertool29.txt cybertool41.txt | colordiff
{+Schraubendreher 2.5mm,+} Pinzette, Nähahle mit Nadelöhr, {+Holzsäge,+} Bit-Schlüssel( 5 mm Innensechskant für die D-SUB Steckverbinder, 4 mm Innensechskant für Bits, Bit Phillips 0, Bit Phillips 1, Bit-Schlitzschrauben 4 mm, Bit Phillips 2, Bit Hex 4 mm, Bit Torx 8, Bit Torx 10, Bit Torx 15 ), {+Kombizange( Hülsenpresser, Drahtschneider ),+} Stech-Bohrahle, {+Kugelschreiber( auch zum DIP-Switch verstellen ), Mehrzweckhaken (Paketträger), Metallsäge( Metallfeile, Nagelfeile, Nagelreiniger ),+} Dosenöffner( kleiner Schraubendreher ), Kleine Klinge, [-Druckkugelschreiber,-] Grosse Klinge, Ring, inox, Mini-Schraubendreher, Kapselheber( Schraubendreher, Drahtabisolierer ), {+Holzmeissel / Schaber,+} Bit-Halter, Stecknadel, inox, {+Schere,+} Korkenzieher, Zahnstocher

(Coloured “Screenshots” done with ANSI HTML Adapter from the package aha.)

Some, especially those who are used to git, are probably confused by the default choice of diff colors. This is easily fixable by writing the following into your ~/.colordiffrc:

newtext=green
oldtext=red
diffstuff=darkblue
cvsstuff=darkyellow

(See also /etc/colordiff for the defaults and hints.)

colordiff has by the way two operating modes:

  • Without parameter it reads diffs from standard input as seen above.
  • With parameters it works as drop-in diff replacement including all diff options as shown below.

So now let us compare the Cybertool 29 with the Cybertool 34 in a normal diff (using the texts from above with all commas replaced by newline characters) with git-like colors:

$ colordiff cybertool29-lines.txt cybertool34-lines.txt
12a13,14
> Kombizange( Hülsenpresser
> Drahtschneider )
13a16,17
> Kugelschreiber( auch zum DIP-Switch verstellen )
> Mehrzweckhaken (Paketträger)
16d19
< Druckkugelschreiber
25a29
> Schere

Or as a unified diff with some context:

$ colordiff -u cybertool29-lines.txt cybertool34-lines.txt
--- cybertool29-lines.txt     2011-08-31 20:55:37.195546238 +0200
+++ cybertool34-lines.txt   2011-08-31 20:55:11.667710504 +0200
@@ -10,10 +10,13 @@
 Bit Torx 8
 Bit Torx 10
 Bit Torx 15 )
+Kombizange( Hülsenpresser
+Drahtschneider )
 Stech-Bohrahle
+Kugelschreiber( auch zum DIP-Switch verstellen )
+Mehrzweckhaken (Paketträger)
 Dosenöffner( kleiner Schraubendreher )
 Kleine Klinge
-Druckkugelschreiber
 Grosse Klinge
 Ring
 inox
@@ -23,5 +26,6 @@
 Bit-Halter
 Stecknadel
 inox
+Schere
 Korkenzieher
 Zahnstocher

So if you want nicely colored diffs with Subversion like you’re used to with git, you can use svn diff | colordiff.

Wednesday·31·August·2011

Useful but Unknown Unix Tools: Calculating with IPs, The Sequel //at 20:09 //by abe

from the juggling-with-IPv6-netmasks dept.

This is a direct follow-up to my previous blog posting about calculating IPs and netmasks with the tools netmask and prips. Kurt Roeckx (via e-mail) and Niall Donegan (via a comment on that blog posting) both told me about the package sipcalc, and Kurt also mentioned the package ipcalc. Thanks for that! And since I found both useful, too, let's give them their own blog posting:

Both tools, ipcalc and sipcalc, offer a “get all information at once” mode which is not present in the previously presented tool netmask.

ipcalc

ipcalc by default outputs all information and even in ANSI colors:

$ ipcalc 192.168.96.0/21
Address:   192.168.96.0         11000000.10101000.01100 000.00000000
Netmask:   255.255.248.0 = 21   11111111.11111111.11111 000.00000000
Wildcard:  0.0.7.255            00000000.00000000.00000 111.11111111
=>
Network:   192.168.96.0/21      11000000.10101000.01100 000.00000000
HostMin:   192.168.96.1         11000000.10101000.01100 000.00000001
HostMax:   192.168.103.254      11000000.10101000.01100 111.11111110
Broadcast: 192.168.103.255      11000000.10101000.01100 111.11111111
Hosts/Net: 2046                  Class C, Private Internet

(Coloured “Screenshots” done with ANSI HTML Adapter from the package aha.)

You can suppress the bitwise output or directly output HTML via command-line options. For example ipcalc -b -h 192.168.96.0/21 outputs the following content:

Address:     192.168.96.0         
Netmask: 255.255.248.0 = 21
Wildcard: 0.0.7.255
=>
Network:     192.168.96.0/21      
HostMin: 192.168.96.1
HostMax: 192.168.103.254
Broadcast: 192.168.103.255
Hosts/Net: 2046 Class C, Private Internet

Yes, that’s an HTML table and no preformatted text, just with a monospaced font. (I just removed the hardcoded text color from it, otherwise it would not look nice on dark backgrounds like in Planet Commandline’s default color scheme.)

Like netmask, ipcalc can also deaggregate IP ranges into largest possible networks:

$ ipcalc 192.168.87.0 - 192.168.110.255
deaggregate 192.168.87.0 - 192.168.110.255
192.168.87.0/24
192.168.88.0/21
192.168.96.0/21
192.168.104.0/22
192.168.108.0/23
192.168.110.0/24

(ipcalc -r 192.168.87.0 192.168.110.255 is just another way to write this, and it results in the same output.)

To find networks with at least 20, 63 and 30 IP addresses within a /24 network, use for example ipcalc 192.0.2.0/24 -s 20 63 30:

Address:   192.0.2.0            
Netmask:   255.255.255.0 = 24   
Wildcard:  0.0.0.255            
=>
Network:   192.0.2.0/24         
HostMin:   192.0.2.1            
HostMax:   192.0.2.254          
Broadcast: 192.0.2.255          
Hosts/Net: 254                   Class C

1. Requested size: 20 hosts
Netmask:   255.255.255.224 = 27 
Network:   192.0.2.128/27       
HostMin:   192.0.2.129          
HostMax:   192.0.2.158          
Broadcast: 192.0.2.159          
Hosts/Net: 30                    Class C

2. Requested size: 63 hosts
Netmask:   255.255.255.128 = 25 
Network:   192.0.2.0/25         
HostMin:   192.0.2.1            
HostMax:   192.0.2.126          
Broadcast: 192.0.2.127          
Hosts/Net: 126                   Class C

3. Requested size: 30 hosts
Netmask:   255.255.255.224 = 27 
Network:   192.0.2.160/27       
HostMin:   192.0.2.161          
HostMax:   192.0.2.190          
Broadcast: 192.0.2.191          
Hosts/Net: 30                    Class C

Needed size:  192 addresses.
Used network: 192.0.2.0/24
Unused:
192.0.2.192/26

sipcalc

sipcalc is similar to ipcalc. One big difference is its IPv6 support:

$ sipcalc 2001:DB8::/32
-[ipv6 : 2001:DB8::/32] - 0

[IPV6 INFO]
Expanded Address        - 2001:0db8:0000:0000:0000:0000:0000:0000
Compressed address      - 2001:db8::
Subnet prefix (masked)  - 2001:db8:0:0:0:0:0:0/32
Address ID (masked)     - 0:0:0:0:0:0:0:0/32
Prefix address          - ffff:ffff:0:0:0:0:0:0
Prefix length           - 32
Address type            - Aggregatable Global Unicast Addresses
Network range           - 2001:0db8:0000:0000:0000:0000:0000:0000 -
                          2001:0db8:ffff:ffff:ffff:ffff:ffff:ffff

(Thanks to Niall for the pointer to RFC3849. :-)

It can also split up networks into smaller chunks, but only into same-size chunks, e.g. splitting a /32 IPv6 network into /34 networks:

$ sipcalc -S34 2001:DB8::/32
-[ipv6 : 2001:DB8::/32] - 0

[Split network]
Network                 - 2001:0db8:0000:0000:0000:0000:0000:0000 -
                          2001:0db8:3fff:ffff:ffff:ffff:ffff:ffff
Network                 - 2001:0db8:4000:0000:0000:0000:0000:0000 -
                          2001:0db8:7fff:ffff:ffff:ffff:ffff:ffff
Network                 - 2001:0db8:8000:0000:0000:0000:0000:0000 -
                          2001:0db8:bfff:ffff:ffff:ffff:ffff:ffff
Network                 - 2001:0db8:c000:0000:0000:0000:0000:0000 -
                          2001:0db8:ffff:ffff:ffff:ffff:ffff:ffff

-

Similar thing with IPv4:

$ sipcalc -s27 192.0.2.0/24
-[ipv4 : 192.0.2.0/24] - 0

[Split network]
Network                 - 192.0.2.0       - 192.0.2.31
Network                 - 192.0.2.32      - 192.0.2.63
Network                 - 192.0.2.64      - 192.0.2.95
Network                 - 192.0.2.96      - 192.0.2.127
Network                 - 192.0.2.128     - 192.0.2.159
Network                 - 192.0.2.160     - 192.0.2.191
Network                 - 192.0.2.192     - 192.0.2.223
Network                 - 192.0.2.224     - 192.0.2.255

sipcalc also has a “show me all information” mode with the -a option:

$ sipcalc -a 192.168.96.0/21
-[ipv4 : 192.168.96.0/21] - 0

[Classfull]
Host address            - 192.168.96.0
Host address (decimal)  - 3232260096
Host address (hex)      - C0A86000
Network address         - 192.168.96.0
Network class           - C
Network mask            - 255.255.255.0
Network mask (hex)      - FFFFFF00
Broadcast address       - 192.168.96.255

[CIDR]
Host address            - 192.168.96.0
Host address (decimal)  - 3232260096
Host address (hex)      - C0A86000
Network address         - 192.168.96.0
Network mask            - 255.255.248.0
Network mask (bits)     - 21
Network mask (hex)      - FFFFF800
Broadcast address       - 192.168.103.255
Cisco wildcard          - 0.0.7.255
Addresses in network    - 2048
Network range           - 192.168.96.0 - 192.168.103.255
Usable range            - 192.168.96.1 - 192.168.103.254

[Classfull bitmaps]
Network address         - 11000000.10101000.01100000.00000000
Network mask            - 11111111.11111111.11111111.00000000

[CIDR bitmaps]
Host address            - 11000000.10101000.01100000.00000000
Network address         - 11000000.10101000.01100000.00000000
Network mask            - 11111111.11111111.11111000.00000000
Broadcast address       - 11000000.10101000.01100111.11111111
Cisco wildcard          - 00000000.00000000.00000111.11111111
Network range           - 11000000.10101000.01100000.00000000 -
                          11000000.10101000.01100111.11111111
Usable range            - 11000000.10101000.01100000.00000001 -
                          11000000.10101000.01100111.11111110

[Networks]
Network                 - 192.168.96.0    - 192.168.103.255 (current)

Thanks again to Kurt and Niall for their contributions!

Now listening to the schreimaschine and fausttanz submissions for the interactive competition at the Bünzli/DemoDays in Olten (Switzerland)

Tuesday·30·August·2011

Useful but Unknown Unix Tools: watch //at 22:18 //by abe

from the Watch-commands,-not-TV dept.

Yet another useful tool which at least I heard about quite late in my Unix career is “watch”. For a long time I wrote one-liners like this to monitor the output of a command:

while :; do echo -n "`date` "; host bla nameserver; sleep 2; done

But it’s way shorter and less error-prone to use “watch” from Debian’s procps package and just write

watch host bla nameserver

The only relevant difference is that I don’t get any kind of history of when the output of the command changed, e.g. to calculate the rate at which a file grows.
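
If you do need such a history, a plain shell loop logging timestamped snapshots can stand in for watch. This is just a sketch with made-up file names, sampling a counter three times; replace the inner date +%s with the command you actually want to track, and the loop condition with while : for open-ended monitoring:

```shell
#!/bin/sh
# Take a fixed number of timestamped samples of a command's output
# and append them to a log file, so changes (e.g. file growth rates)
# can be reviewed later -- the history that watch doesn't keep.
LOGFILE=watch-history.log
: > "$LOGFILE"              # start with an empty log
i=0
while [ "$i" -lt 3 ]; do    # three samples for the demonstration
    printf '%s %s\n' "$(date '+%F %T')" "$(date +%s)" >> "$LOGFILE"
    sleep 1
    i=$((i + 1))
done
```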

You can even track the output of more than one command:

watch 'ps aux | grep resize2fs; df -hl'

Another nice way to use watch is to run it inside GNU Screen (or tmux or splitvt) with the terminal split horizontally: show the output of watch in one window and the process you’re tracking in the other window, and see both running at the same time.

Update, Sunday, 28th of August 2011, 17:13h

I never found a useful case for watch’s -d option, which highlights changes since the previous run (by inverting the changed bytes). But by now three people have pointed out the -d option in response to this blog posting, and weasel also had some nice examples, so here they are:

Keep an eye on the current network routes (once per second) of a host and quickly notice when they change:

watch -n1 -d ip r

Watch the current directory for size or time stamp changes of its files:

watch -d ls -l

The option -d only highlights changes between one run and the next. If you want to see all bytes which have ever changed since the first run, use --differences=cumulative.

Thanks to Klaus “Mowgli” Ethgen, Ulrich “mru” Dangel, Uli “youam” Martens and Peter “weasel” Palfrader for comments and suggestions.

Useful but Unknown Unix Tools: Kill all processes of a user //at 22:15 //by abe

from the BOFH-slays-users dept.

I already got mails like “What a pity that your nice blog posting series has ended”. No, it didn’t end. As announced, I knew that I wouldn’t be able to keep up a daily schedule. It worked as long as I had the postings written in advance, but in the end the last postings were written just in time, and then I ran out of leisure and muse for a while. But as I said: it didn’t end, it will be continued. And this is the next such posting.

Oh, and for those who tell me about further tools I should blog about: I appreciate that, especially because that way I also hear about tools I didn’t know. But why just tell me instead of blogging about them yourself? :-) At least those whose blogs are part of Planet Debian or Planet Symlink anyway really should do this themselves. I’d really like to see others writing about cool tools, too. I have a right neither to the idea nor to the name of this series (call it a meme if you want :-), so please go ahead and publish your favourite tools in a blog posting, too. :-)

And for all those who want to join me and Myon in blogging about cool Unix tools, regardless of whether you are listed on Planet Debian or Planet Symlink, I encourage you to offer a separate feed for this kind of posting and join us on Planet Commandline.

Anyway, here’s the next such posting:

As a system administrator you often have to kill all processes of one user, e.g. if a daemon didn’t properly shut itself down or leftovers of a GUI session are running amok.

Many use pkill -SIGNAL -u user from the procps package or killall -SIGNAL -u user from the psmisc package for this. But a) that’s quite cumbersome to type, and b) there’s a chance of forgetting the -u, and then bad things may happen, especially with pkill’s default substring match. So I prefer another tool with a more explicit name:

slay

slay has an easy to remember name (at least for BOFHs ;-) which is even quicker to type than “pkill” (alternating one character between the left and the right hand, at least on US layout keyboards, while “pkill” is typed with the right hand only), and it has the same easy to remember commandline syntax as kill itself:

slay -SIGNAL user [user …]

But beware, slay is…

… not only for BOFHs, but also from a BOFH

It has a “mean mode” which is activated by default. With mean mode on, if slay is invoked by an ordinary user without root rights, it won’t kill the given user’s processes but those of the user who called the program. *g*

Interestingly I never ran into this issue, even though I have used this program often and for many years now.

But some Ubuntu users did, probably because adding a sudo in front of some command is easier to forget than doing an ssh root@localhost or su - beforehand. They even seem to be so desperate about it that they forwarded the issue from Launchpad to the Debian Bug Tracking System. ;-)

But to be honest — even if I was very amused by those bug reports — isn’t this issue “grave”, as it very likely causes (unexpected) data loss?

Now playing: Monzy: Kill Dash Nine (… and your process is mine ;-)

Saturday·27·August·2011

Useful but Unknown Unix Tools: Calculating with IPs //at 12:22 //by abe

from the juggling-with-netmasks dept.

There are two small CLI tools I need often when I’m handling larger networks or more than a few IP addresses at once:

netmask

netmask is very handy for calculating with netmasks (anyone expected something else? ;-) in all variants:

$ netmask 192.168.96.0/255.255.248.0
    192.168.96.0/21
$ netmask -s 192.168.96.0/21
    192.168.96.0/255.255.248.0  
$ netmask --range 192.168.96.0/21
    192.168.96.0-192.168.103.255  (2048)
$ netmask 192.168.96.0:192.168.103.255
    192.168.96.0/21
$ netmask 192.168.87.0:192.168.110.255
    192.168.87.0/24
    192.168.88.0/21
    192.168.96.0/21
   192.168.104.0/22
   192.168.108.0/23
   192.168.110.0/24
$ netmask --cisco 192.168.96.0/21
    192.168.96.0 0.0.7.255

(The IP ranges in RFC5737 were too small for the examples I had in mind. :-)

There’s one thing, though, that netmask can’t do out of the box, and that’s where the second tool comes into play:

prips

When I read the package name prips, I always think of something like “print postscript” or so, but it’s actually an abbreviation for “print IPs”.

And that’s all it does:

$ prips 192.0.2.0/29
192.0.2.0
192.0.2.1
192.0.2.2
192.0.2.3
192.0.2.4
192.0.2.5
192.0.2.6
192.0.2.7
$ prips 198.51.100.1 198.51.100.6
198.51.100.1
198.51.100.2
198.51.100.3
198.51.100.4
198.51.100.5
198.51.100.6
$ prips -i 2 203.0.113.0/28
203.0.113.0
203.0.113.2
203.0.113.4
203.0.113.6
203.0.113.8
203.0.113.10
203.0.113.12
203.0.113.14
$ prips -f hex 192.0.2.8/29
c0000208
c0000209
c000020a
c000020b
c000020c
c000020d
c000020e
c000020f
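
Should you ever need to go the other way, such a hex address can be converted back with nothing but POSIX shell arithmetic. A small sketch (the function name hex2ip is made up):

```shell
#!/bin/sh
# Convert a hexadecimal IPv4 address (as printed by prips -f hex)
# back to dotted-quad notation with POSIX shell arithmetic.
hex2ip() {
    n=$((0x$1))
    printf '%d.%d.%d.%d\n' \
        $(( (n >> 24) & 255 )) \
        $(( (n >> 16) & 255 )) \
        $(( (n >>  8) & 255 )) \
        $((  n        & 255 ))
}
hex2ip c0000208    # prints 192.0.2.8
```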

prips has proven to be very useful in combination with shell loops like these:

$ prips 192.0.2.0/29 | xargs -n 1 host
[…]
$ for ip in `prips 198.51.100.1 198.51.100.6`; do host $ip; done
[…]

And since prips doesn’t support the 192.0.2.0/255.255.255.248 netmask syntax, you can even easily combine those two tools:

$ prips `netmask 192.0.2.0/255.255.255.248`
[…]

(Hah! Now I was able to use RFC5737 IP ranges! ;-)

Wednesday·10·August·2011

git $something -p //at 16:09 //by abe

from the git-rules--p dept.

git add -p is one of my favourite git features. It lets you selectively add your local changes hunk by hunk to the staging area. This is especially nice if you want to commit one change in a file, but not a second one you have also already made.

Recently I noticed that you can also selectively revert changes already in the staging area using git reset -p HEAD. The user interface is exactly the same as for git add -p.

Today I discovered another selective undo in git, just by trying out of curiosity whether it works, too: undoing local changes selectively using git checkout -p. Maybe less useful than the ones mentioned above, but nevertheless usually quicker than firing up your favourite editor and undoing the changes manually.

Another nice git feature which I discovered by accidentally using it (this time even unwittingly) is git checkout -, which behaves like cd -, just for branches instead of directories, i.e. it switches back to the previously checked out branch. Very useful for quickly switching between two branches again and again.

Monday·08·August·2011

Finding libraries not marked as automatically installed with aptitude //at 17:26 //by abe

from the aptitude-for-the-win-again dept.

This is a direct followup on my blog posting Finding packages for deinstallation on the commandline with aptitude.

In the meantime one more alias for finding obsolete packages made it into my zsh configuration: an alias to find installed libraries, …-data, …-common and other usually automatically installed packages which are nevertheless not marked as automatically installed:

alias aptitude-review-unmarkauto-libraries='aptitude -o "Aptitude::Pkg-Display-Limit=( ^lib !-dev$ !-dbg$ !-utils$ !-tools$ !-bin$ !-doc$ !^libreoffice | -data$ | -common$ | -base$ !^r-base ) !~M"'

And yes, this pattern is slightly longer than those from the previous posting, so here’s the filter in a slightly more readable form:

(
  ^lib
    !-dev$
    !-dbg$
    !-utils$
    !-tools$
    !-bin$
    !-doc$
    !^libreoffice | 
  -data$ | 
  -common$ | 
  -base$
    !^r-base
)
!~M

It matches all non-automatically installed packages whose name starts with “lib”, but which are neither a debug symbols package, a development header package, a documentation package, a package containing accompanying command line tools, nor a libreoffice package.

Additionally it matches all non-automatically installed packages ending in -data, -common, or -base, but excludes r-base packages.

Of course you can then mark any erroneously unmarked library by pressing “M” (Shift-m).

If you press “g” for “Go” afterwards and wonder why nothing to be removed shows up, remember that the filter limit is active in this view, too. So press “l” for “Limit”, then Ctrl-U to erase the current filter limit of this view, and press Enter to set the new (now empty) filter, et voilà…

Hope this is of help for some others, too.

Saturday·30·July·2011

Notes from the Emacs Skills Exchange Session at DebConf11 //at 12:29 //by abe

from the spontaneous dept.

Thomas Koch asked at DebConf 11 for a Skills Exchange session about Emacs.

As nobody stepped up for that session for quite some time, I did. But I only knew the answers to half of his questions off the top of my head, so I left the remainder for someone else. Luckily Kan-Ru Chen and Sebastian Tennant stepped up for most of the remainder.

We had a quite full meeting room, and the notes that Kan-Ru and I prepared in Gobby (Debian package) were collaboratively extended from a braindump and talking-points guide into a quite helpful, compact and dense Emacs introduction.

I’ll probably use this as a base for an Emacs tutorial or workshop at some European FLOSS events; without that Skills Exchange session I wouldn’t have such a good and comprehensive base for that.

So thanks to all who contributed!

Update, 02:31: There also seems to exist an Emacs Lisp implementation of the Obby protocol called Ebby, but it doesn’t seem to support version 0.5 of Gobby, only version 0.3.

Friday·10·June·2011

How to find broken symlinks //at 20:31 //by abe

from the useful-code-snippets dept.

Looking through the man page of find, there is no obvious way to find broken symbolic links. But there is a simple way involving only find:

$ find -L . -type l
$ find -L . -type l -ls

The option -L (before the path!) causes find to follow symbolic links, and the expression -type l causes find to report only symbolic links. But since it follows symlinks, it only reports those it can’t follow, i.e. broken ones.

The second line also shows where the broken links point to.

To easily show that this really works, just use the color indicator of GNU ls instead of find’s builtin -ls:

$ find -L . -type l -exec ls -lF --color=yes '{}' +

Et voilà, all displayed links show up in red which means they’re broken.
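
If you want not only to find but also to remove the broken links, GNU find’s -delete action can be combined with the same expression. A throwaway-directory sketch (the directory and link names are invented for the demonstration); as always with -delete, run the plain listing first and only add -delete once its output is exactly what you expect:

```shell
#!/bin/sh
# Create one working and one broken symlink in a scratch directory,
# then delete only the broken one.
set -e
dir=$(mktemp -d)
cd "$dir"
touch target
ln -s target ok                # working symlink
ln -s does-not-exist broken    # dangling, i.e. broken symlink
find -L . -type l              # lists only ./broken
find -L . -type l -delete      # GNU find: unlinks the broken links
```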

Kudos to CodeSnippets for showing me the right idea. And thanks to ft of zsh and grml fame for the hint about find -exec command {} + instead of find -exec command {} ;.

Hint from mika of grml fame: With zsh it is even less code to type:

% ls **/*(-@)
% ls -lF **/*(-@)

Thanks, mika!

How to move a git submodule //at 20:31 //by abe

from the git-rules-and-still-can-be-improved dept.

If you try to move a git submodule with git mv, you’ll get the following error message:

$ git mv old/submodule new/submodule
fatal: source directory is empty, source=old/submodule, destination=new/submodule

There’s a patch against git to support submodule moving, but it doesn’t seem to have been applied yet, at least not in the version currently in Debian Sid.

What worked for me to solve this issue was the following (as posted on StackOverflow):

  1. Edit .gitmodules and change the path of the submodule appropriately, and put it in the index with git add .gitmodules.
  2. If needed, create the parent directory of the new location of the submodule: mkdir new.
  3. Move the submodule from the old to the new location: mv -vi old/submodule new/submodule.
  4. Remove the old directory with git rm --cached old/submodule.

For me it looked like this afterwards:

 # On branch master
 # Changes to be committed:
 #   (use "git reset HEAD <file>..." to unstage)
 #
 #       modified:   .gitmodules
 #       renamed:    var/lib/dokuwiki/tpl -> var/lib/dokuwiki/lib/tpl
 #

Finally commit the changes. HTH.

Saturday·09·April·2011

Finding packages for deinstallation on the commandline with aptitude //at 20:18 //by abe

from the aptitude-for-the-win dept.

Although I often don’t agree with Erich (especially if GNOME is involved ;-), he recently posted something on Planet Debian which I found very helpful.

I also own a netbook where disk space is scarce: an ASUS EeePC 701 with just 4 GB of disk space. And it runs Debian Sid, so dependencies change often, leaving packages installed which other packages formerly had hard dependencies on, but which are now left with just recommendations pointing to them.

Quite a few times I asked myself whether it’s possible to find those packages, and if so, how. Well, I no longer have to ask myself that, since Erich recently posted the appropriate filter patterns for my favourite package manager aptitude in his posting “Finding packages for deinstallation”. Thanks, Erich!

Since those filters aren’t very easy to remember, I’d like to extend the usefulness of his posting to the commandline. I added the following aliases to my shell setup:

alias aptitude-just-recommended='aptitude -o "Aptitude::Pkg-Display-Limit=!?reverse-depends(~i) ~M !?essential"'
alias aptitude-also-via-dependency='aptitude -o "Aptitude::Pkg-Display-Limit=~i !~M ?reverse-depends(~i) !?essential"'

As youam suggested on IRC, I also added the filter !?essential since we won’t touch essential packages when cleaning up the list of installed packages anyway.

Hope this helps further.

Tuesday·22·March·2011

Different Flavours of Planet Commandline //at 22:40 //by abe

from the different-tastes-different-flavours dept.

Since there were quite a few requests for a Planet Commandline feed without the microblogging feeds included, I split Planet Commandline into different flavours. I’m quite happy with that solution, because I must admit that the ratio of microblogging postings to normal blog postings was indeed higher than initially expected.

So from now on Planet Commandline has a basic flavour at http://planet-commandline.org/ and one with the microblogging feeds (climagic and commandlinefu) included at http://planet-commandline.org/+snippets/.

To make this possible I hacked our Planet Venus wrapper to accept arbitrary configuration snippets to be appended to the configuration, as well as sed-based modifications to the concatenated configuration before Planet Venus is run on it.

This also allowed me to create further flavours of Planet Commandline:

I hope nobody minds this diversification of Planet Commandline.

Currently no combination of flavours is supported, but if there’s relevant demand for one or the other combination, I may have a look at whether that can be automated, too.

Planet Commandline officially online //at 22:25 //by abe

from the Magrathea dept.

Around the first bunch of postings in my Useful but Unknown Unix Tools series, Tobias Klauser of inotail and Symlink fame came up with the idea of making a Planet (i.e. a blog aggregator) of all the commandline blogs and blog categories out there.

A first Planet Venus running prototype based on the template and style sheets of Planet Symlink was quickly up and running.

I just couldn’t decide whether I should use an amber or a phosphor-green style for this new planet. Marius Rieder finally had the right idea to solve this dilemma: offer both an amber and a phosphor-green style. Christian Herzog pointed me to the right piece of code at A List Apart. So here it is, available in your favourite screen colors:

Planet Commandline

To begin with, the following feeds are included:

Which leads us to the discussion what kind of feeds should be included in Planet Commandline.

Of course, all blogs or blog categories which (nearly) solely post neat tips and tricks about the command line in English are welcome.

Microblogging feeds containing (only) small but useful command line tips are welcome, too, if they neither permanently contain dozens of posts per day nor have a low signal-to-noise ratio. Unfortunately most identi.ca groups do, so they’re not suitable for such a planet.

What I’m unsure about, though, are non-English feeds. Yes, there’s already one in, but I noticed this only after including Beat’s Chrütertee, and his FreeBSD command line tips are really good. So as long as it doesn’t go overboard, I think it’s ok. If there are too many non-English feeds, I’ll probably split Planet Commandline into at least three planets: one with all feeds, one English-only, and one with all non-English feeds, or maybe even one feed per language. But for now that’s still a long way off.

Another thing I’m unsure about are more program-specific blogs like the impressive Mastering Emacs blog “about mastering the world’s best text editor”. *g* (Yeah, I didn’t include that one yet. But as soon as someone shows me the vi equivalent of that blog, I’ll include both. Anyone think spf13’s vim category is up to that?)

Oh, and sure, shell-specific (zsh, tcsh, bash, mksh, busybox) tips & tricks blogs don’t count as program-specific blogs the way some $EDITOR-, $BROWSER-, or $VCS-specific blogs do. :-)

Of course I’m happy about further suggestions for feeds to include in Planet Commandline. Just remember that the feed should provide (at least nearly) exclusively command line tips, tricks or howtos. Suggestions for links to other commandline related planets are welcome, too.

Wednesday·16·March·2011

Changes on Planet Symlink //at 18:41 //by abe

from the photo dept.

Some may have already noticed: over the last months there have been a few new blogs on Planet Symlink. In chronological order (that’s what git is good for :-):

I have also removed a few blogs. The blogs on the following domains no longer exist: frozenbrain.com, qolume.ch, meinblog.ch and sunflyer.ch.

And since complaints about content not fitting the planet, coming from one and the same feed, have kept arriving for quite some time (another one came in today), I have finally also brought myself to comment out dkg’s feed in the planet configuration.

Friday·28·January·2011

Cool new feature in OpenSSH 5.7: scp between two remote hosts //at 02:55 //by abe

from the always-wanted dept.

Just a few days after OpenSSH 5.7 was released upstream, our (Debian’s as well as Ubuntu’s) tireless OpenSSH and GRUB maintainer Colin Watson uploaded a first package of OpenSSH 5.7 to Ubuntu Natty and to Debian Experimental.

Besides the obvious new thing, the implementation of Elliptic Curve Cryptography (which promises better speed and shorter keys while staying at the same level of security), one other item in his changelog entry stuck out and caught my attention:

  • scp(1): Add a new -3 option to scp: Copies between two remote hosts are transferred through the local host.

That’s something I always wondered about: why didn’t it “just work”? While scp still doesn’t seem to detect such a situation by default, it’s now at least possible to copy stuff from one remote box to another without ugly port forwarding and tunneling hacks.

Further cool stuff in the changelog:

  • sftp(1)/sftp-server(8): add a protocol extension to support a hard link operation. It is available through the “ln” command in the client. The old “ln” behaviour of creating a symlink is available using its “-s” option or through the preexisting “symlink” command.

Colin++

Friday·07·January·2011

“peer holds all free leases” on both DHCP servers //at 15:54 //by abe

from the not-so-helpful-error-messages dept.

At work we run a pair of ISC DHCP servers running Debian Lenny in a classical ISC DHCP failover setup which provide DHCP service to several subnets, some only with static IPs (e.g. for printers) and some with half static and half dynamic IPs.

Today I got a call from a user that her laptop doesn’t get an IP even though it’s correctly registered in our MAC address database, from which we generate the “group { }” sections of the dhcpd.conf.

Everything looked fine, but every DHCPDISCOVER packet got logged in the syslog on both servers like this:

Jan  7 14:34:39 dhcp1 dhcpd: DHCPDISCOVER from 01:23:45:67:89:ab via eth2: peer holds all free leases
Jan  7 14:34:39 dhcp2 dhcpd: DHCPDISCOVER from 01:23:45:67:89:ab via eth2: peer holds all free leases

Searching the web for this error message mostly turns up mails which say “if you have this on one server but not the other, you’ll soon run out of IP addresses”, but none which mention what happens if you get it on both sides. Following a coworker’s idea of adding “both servers” to the search term, I found Debian bug #563449 (dhcp3-server: Incorrect “peer holds all free leases” log entries), which turned out to be a configuration error, or at least an unexpected configuration (the machine was blocked from getting an IP on purpose), combined with misleading error messages.

So I checked under which circumstances this computer would not get an IP even though it had a static IP configured:

  host somehost {
    hardware ethernet 01:23:45:67:89:ab;
    fixed-address 192.0.2.123;
  }

That computer would not get an IP address in any subnet which has a different IP range and no dynamic IP addresses. And even with the “fixed-address” setting commented out, it wouldn’t get an IP in any static-IPs-only subnet either.
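
For illustration, here is a minimal sketch of the two kinds of subnet declarations involved (the addresses are RFC 5737 documentation ranges, not our real config):

```
# Mixed subnet: static host entries plus a dynamic pool.
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.100 192.0.2.200;   # dynamic leases available here
}

# Printer subnet: no "range" statement at all, so only hosts with a
# matching fixed-address in this subnet get an answer -- anyone else
# triggers "peer holds all free leases" on both failover peers.
subnet 198.51.100.0 netmask 255.255.255.0 {
}
```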

And *bingo*: that computer was plugged into the printer subnet, which has only static IPs, e.g. in the 198.51.100.x range.

So if you get the “peer holds all free leases” error message from both your DHCP servers, chances are very high that the mentioned MAC address really should not get an IP address on this network (and indeed it doesn’t :-). The error messages are just somewhat misleading.

Hope, this saves someone some time. :-)

About...

This is the blog or weblog of Axel Stefan Beckert (aka abe or XTaran) who thought, he would never start blogging... (He also once thought, that there is no reason to switch to this new ugly Netscape thing because Mosaïc works fine. That was about 1996.) Well, times change...

He was born 1975 at Villingen-Schwenningen, made his Abitur at Schwäbisch Hall, studied Computer Science with minor Biology at University of Saarland at Saarbrücken (Germany) and now lives in Zürich (Switzerland), working at the IT Support Group (ISG) of the Departement of Physics at ETH Zurich.

Links to internal pages are orange, links to related pages are blue, links to external resources are green and links to Wikipedia articles, Internet Movie Database (IMDb) entries or similar resources are bordeaux. Times are CET respective CEST (which means GMT +0100 respective +0200).

