Wednesday·21·November·2012
Suggestions for the GNOME Team //at 23:01 //by abe
Thanks to Erich Schubert’s blog posting on Planet Debian I became aware of the 2012 GNOME User Survey at Phoronix.
Like back in 2006 I still use some GNOME applications, so I do consider myself a “GNOME user” in the widest sense and hence I filled out that survey. Additionally I have to live with GNOME 3 as a system administrator of workstations, and that’s some kind of usage, too. ;-)
The last question in the survey was Do you have any comments or suggestions for the GNOME team? — Sure I have. And since I tried to give constructive feedback instead of only ranting, here’s my answer to that question as I submitted it in the survey, too, just spiced up with some hyperlinks and highlighting:
Don’t try to change the users. Give the users more possibilities to change GNOME if they don’t agree with your own preferences and decisions. (The trend to castrate the user already started with GNOME 2, and GNOME 3 made that worse IMHO.)
If you really think that you need less configurability because some non-power-users are confused or challenged by too many choices, then please give the other users at least the chance to enable more configuration options. A very good example in that regard was Kazehakase (RIP), which offered several user interfaces (novice, intermediate and power user or such). The popular text-mode web browser Lynx does the same, btw.
GNOME lost me mostly with the change to GNOME 2. The switch from Galeon 1.2 to 1.3/2.0 was horrible and the later switch to Epiphany made things even worse on the browser side. My short trip to GNOME as desktop environment ended with moving back to FVWM (configurable without tons of clicking, especially after moving to some other computer) and for the browser I moved on to Kazehakase back then. Nowadays I’m living very well with Awesome and Ratpoison as window managers, Conkeror as web browser (all of them very configurable) and a few selected GNOME applications like Liferea (luckily still quite configurable, although I miss Gecko’s about:config since the switch to WebKit), GUCharmap and Gnumeric.

For people switching from Windows I nowadays recommend XFCE, or maybe LXDE on low-end computers. I would likely recommend GNOME 2, too, if it still existed. With regards to MATE I’m skeptical about its persistence and future, but I’m glad it exists as it solves a lot of problems and brings in just a few new ones. Cinnamon as well as SolusOS are based on the current GNOME libraries and are very likely the more persistent projects, but also very likely have the very same multi-head issues we’re all barfing about at work with Ubuntu Precise. (Heck, am I glad that I use Awesome at work, too, and all four screens work perfectly as they did with FVWM before.)
Thanks to Dirk Deimeke for his (German-written) pointer to Marcus Moeller’s interview with Ikey Doherty (in German, too) about his Debian-/GNOME-based distribution SolusOS.
Tagged as: awesome, Cinnamon, Debian, Desktop, Epiphany, FVWM, Galeon, GNOME, Gnumeric, GUCharmap, Kazehakase, Liferea, LXDE, MATE, Other Blogs, Phoronix, Planet Debian, Precise, Rant, ratpoison, SolusOS, survey, Ubuntu, XFCE
zutils: zcat and friends on Steroids //at 01:18 //by abe
I recently wrote about tools to handle archives conveniently. If you just have to handle compressed text files, there are some widely known shortcut commands to mimic common commands on files compressed with a specific compression format.
|       | gzip   | bzip2   | lzma    | xz      |
|-------|--------|---------|---------|---------|
| cat   | zcat   | bzcat   | lzcat   | xzcat   |
| cmp   | zcmp   | bzcmp   | lzcmp   | xzcmp   |
| diff  | zdiff  | bzdiff  | lzdiff  | xzdiff  |
| grep  | zgrep  | bzgrep  | lzgrep  | xzgrep  |
| egrep | zegrep | bzegrep | lzegrep | xzegrep |
| fgrep | zfgrep | bzfgrep | lzfgrep | xzfgrep |
| more  | zmore  | bzmore  | lzmore  | xzmore  |
| less  | zless  | bzless  | lzless  | xzless  |
In Debian and derivatives, those tools are part of the corresponding package for that compression utility, i.e. the zcat command is part of the gzip package and the xzfgrep command is part of the xz-utils package.
But although this matrix is quite easy to remember, the situation has a few drawbacks:
- Those tools can only handle the format they’re written for. (Which btw. means that all xz tools can also handle lzma-compressed files, as lzma is xz’s predecessor.)
- zcat and the other cat variants can’t even recognize non-compressed files and throw an error instead of just showing their contents.
- I always tend to think that lzcat and friends are for lzip-based compression, as xzcat can handle lzma-compressed files anyway.
This is where the zutils project comes in: zutils provides the functionality of most of these utilities, too, but with one big difference: You don’t have to remember, think about or type which compression method has been used for your data. Just use zcat, zcmp, zdiff, zgrep, zegrep, or zfgrep and it works — independently of which compression method has been used — if any — or whether different compression types are mixed in the parameters to the same command:
$ zfgrep foobar bla.txt fnord.gz hurz.xz quux.lz bar.lzma
Especially if you use logrotate and let logrotate compress old logs, it’s very convenient that one command suffices to concatenate all the available logfiles, including the current uncompressed one:
$ zcat /var/log/syslog* | …
Additionally, zutils’ versions of these tools also support lzip-compressed files.
The zutils package is available in Debian starting with Wheezy and in Ubuntu since Oneiric. When installed, it replaces the original z* utilities from the gzip package by diverting them away.
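If you’re curious which files have actually been diverted on your system, dpkg-divert can tell you (just a quick check; the exact list may vary with the zutils version):

$ dpkg-divert --list | grep zutils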
The only drawback so far is that there is neither a zless nor a zmore utility from the zutils project, so zless bla.txt fnord.gz hurz.xz quux.lz bar.lzma will not work as expected even after installing zutils: zless is still the one from the gzip package and hence will show you just the first two files in plain text, but not the remaining ones.
Tagged as: bzip2, Debian, DWIM, gzip, logrotate, lzip, lzma, UUUT, xz, zcat, zcmp, zdiff, zgrep, ztest, zutils
Saturday·17·November·2012
deepgrep: grep nested archives with one command //at 02:00 //by abe
Several months ago, I wrote about grep everything and listed grep-like tools which can grep through compressed files or specific data formats. The blog posting sparked several magazine articles and talks by Frank Hofmann and me.
Frank recently noticed that we had missed one more or less mighty tool so far. We missed it because it’s mostly unknown, undocumented and hidden behind a package name which doesn’t suggest a real recursive “grep everything”:
deepgrep
deepgrep is part of the Debian package strigi-utils, a package which contains utilities related to the KDE desktop search Strigi.
deepgrep especially eases searching through tarballs, even nested ones, but can also search through zip files and OpenOffice.org/LibreOffice documents (which are actually zip files).
deepgrep seems to support at least the following archive and compression formats:
- tar
- ar, and hence deb
- rpm (but not cpio)
- gzip/gz
- bzip2/bz2
- zip, and hence jar/war and OpenOffice.org/LibreOffice documents
- MIME messages (i.e. files attached to e-mails)
A search in an archive which is deeply nested looks like this:
$ deepgrep bar foo.ar
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2/foo.txt.gz/foo.txt:foobar
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2/foo.txt.gz/foo.txt:bar
deepgrep, though, neither seems to support any LZMA-based compression (lzma, xz, lzip, 7z), nor does it support lzop, rzip, compress (.Z suffix), cab, cpio, xar, or rar.
Further current drawbacks of deepgrep:
- Nearly no commandline options, especially none of the common grep options
- No man-page or other documentation
- The exit code is not related to the search results; you have to check the output to see if something has been found
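Until that changes, a hedged workaround in shell scripts is to test whether deepgrep printed anything at all, e.g. with the example archive from below:

$ [ -n "$(deepgrep bar foo.ar)" ] && echo "match found"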
deepfind
If you just need the file names of the files in nested archives, the package also contains the tool deepfind, which does nothing else than list all files and directories in a given set of archives or directories:
$ deepfind foo.ar
foo.ar
foo.ar/foo.tar
foo.ar/foo.tar/foo.tar.gz
foo.ar/foo.tar/foo.tar.gz/foo.zip
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2/foo.txt.gz
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2/foo.txt.gz/foo.txt
As with deepgrep, deepfind does not implement any common options of its normal sister tool find.
[The following part has been added on 17-Nov-2012]
As with deepgrep, it also doesn’t seem to support any of the more modern or more exotic compression formats, i.e. it fails on modern Debian binary packages which use xz compression for the data part:
$ deepfind xulrunner-18.0_18.0\~a2+20121109042012-1_amd64.deb
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb/debian-binary
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb/control.tar.gz
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb/control.tar.gz/triggers
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb/control.tar.gz/preinst
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb/control.tar.gz/md5sums
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb/control.tar.gz/postinst
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb/control.tar.gz/control
xulrunner-18.0_18.0~a2+20121109042012-1_amd64.deb/data.tar.xz
[End of part added at 17-Nov-2012]
Dependencies
The package strigi-utils doesn’t pull in the complete Strigi framework (i.e. no daemon), just a few libraries (libstreams, libstreamanalyzer, and libclucene). On Wheezy it also pulls in some audio/video decoding libraries which may make some server administrators less happy.
Conclusion
Both tools are quite limited to some basic use cases, but can be worth a fortune if you have to work with nested archives. Nevertheless, the claim in the Debian package description of strigi-utils that they’re “enhanced” versions of their well-known counterparts is IMHO disproportionate.
Most of the missing features and documentation can be explained by the primary purpose of these tools: being a backend for desktop searches. I guess there wasn’t much need for proper commandline usage yet. Until now. ;-)
42.zip
And yes, I was curious enough to let deepfind have a look at 42.zip (the one from SecurityFocus; unzip seems unable to unpack 42.zip from unforgettable.dk due to a missing version compatibility), and since it just traverses the archive sequentially, it has no problem with that, needing just about 5 MB of RAM and a lot of time:
[…]
42.zip/lib f.zip/book f.zip/chapter f.zip/doc f.zip/page e.zip
42.zip/lib f.zip/book f.zip/chapter f.zip/doc f.zip/page e.zip/0.dll
42.zip/lib f.zip/book f.zip/chapter f.zip/doc f.zip/page f.zip
42.zip/lib f.zip/book f.zip/chapter f.zip/doc f.zip/page f.zip/0.dll
deepfind 42.zip  11644.12s user 303.89s system 97% cpu 3:24:02.46 total
I won’t try deepgrep on 42.zip, though. ;-)
Tagged as: 42.zip, ar, bzip2, CLI, CLucene, deb, deepfind, deepgrep, efho, find, grep, gzip, jar, KDE, LibreOffice, Lucene, odt, OpenOffice.org, Rant, rpm, strigi, tar, UUUT, war, zip
Friday·16·November·2012
Useful but Unknown Unix Tools: dwdiff better than wdiff + colordiff //at 01:18 //by abe
A year ago I wrote in Useful but Unknown Unix Tools: How wdiff and colordiff help to choose the right Swiss Army Knife about using wdiff and colordiff together. Colordiff’ed wdiff output looks like this:
$ wdiff foobar.txt barfoo.txt | colordiff
[-foo-]bar fnord gnarz hurz quux bla {+foo+} fasel
But if you have colour, why keep these hard-to-read wdiff markers in the text?
There exists a tool named dwdiff which can do word diffs in colour without textual markers and with even less to type (and without being git diff --color-words ;-). Actually it looks like git diff --color-words, just without the git:
$ dwdiff -c foobar.txt barfoo.txt
foo bar fnord gnarz hurz quux bla foo fasel
Another cool thing about dwdiff (and its name-giving feature) is that you can define what you consider whitespace, i.e. which character(s) delimit the words. So let’s do the example above again, but this time declare that “f” is considered the only whitespace character:
$ dwdiff -W f -c foobar.txt barfoo.txt
foo bar bar fnord gnarz hurz quux bla foo fasel
dwdiff can also show line numbers:
$ dwdiff -c -L foobar.txt barfoo.txt
1:1 foo bar fnord
2:2 gnarz hurz quux
3:3 bla foo fasel
$ dwdiff -c -L foobar.txt quux.txt
1:1 foo bar fnord
1:2 foobar floedeldoe
2:3 gnarz hurz quux
3:4 bla foo fasel
(coloured shell screenshots by aha)
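Btw., aha reads ANSI escape codes on standard input and emits HTML, so turning such a coloured diff into an HTML snippet should work roughly like this (the output filename is just an example):

$ dwdiff -c foobar.txt barfoo.txt | aha > dwdiff-example.html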
Tagged as: aha, colordiff, dwdiff, git, UUUT, wdiff
Thursday·15·November·2012
Tools to handle archives conveniently //at 01:42 //by abe
TL;DR: There’s a summary at the end of the article.
Today I wanted to see why a dependency in a .deb package from an external APT repository changed so that it became uninstallable. While dpkg-deb --info foobar.deb easily shows the control information, the changelog is in the filesystem part of the package.
I could extract that one with dpkg-deb, too, but I’d have to extract it either to some temporary directory or pipe it into tar, which then can extract a single file from the archive and send it to STDOUT:
dpkg-deb --fsys-tarfile foobar.deb | tar xOf - ./usr/share/doc/foobar/changelog.Debian.gz | zless
But that’s tedious to type. The following command is clearly less to type and way easier to remember:
acat foobar.deb ./usr/share/doc/foobar/changelog.Debian.gz | zless
acat stands for “archive cat” and is part of the atool suite of commands:
- als: lists files in an archive.

$ als foobar.tgz
drwxr-xr-x abe/abe           0 2012-11-15 00:19 foobar/
-rw-r--r-- abe/abe          13 2012-11-15 00:20 foobar/bar
-rw-r--r-- abe/abe          13 2012-11-15 00:20 foobar/foo
- acat: extracts files in an archive to standard out.

$ acat foobar.tgz foobar/foo foobar/bar
foobar/bar
bar contents
foobar/foo
foo contents
- adiff: generates a diff between two archives using diff(1).

$ als quux.zip
Archive:  quux.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2012-11-15 00:23   quux/
       16  2012-11-15 00:22   quux/foo
       13  2012-11-15 00:20   quux/bar
---------                     -------
       29                     3 files
$ adiff foobar.tgz quux.zip
diff -ru Unpack-3594/foobar/foo Unpack-7862/quux/foo
--- Unpack-3594/foobar/foo      2012-11-15 00:20:46.000000000 +0100
+++ Unpack-7862/quux/foo        2012-11-15 00:22:56.000000000 +0100
@@ -1 +1 @@
-foo contents
+foobar contents
- arepack: repacks archives to a different format. It does this by first extracting all files of the old archive into a temporary directory, then packing all files extracted to that directory to the new archive. Use the --each (-e) option in combination with --format (-F) to repack multiple archives using a single invocation of atool. Note that arepack will not remove the old archive.

$ arepack foobar.tgz foobar.txz
foobar.tgz: extracted to `Unpack-7121/foobar'
foobar.txz: grew 36 bytes
- apack: creates archives (or compresses files). If no file arguments are specified, filenames to add are read from standard in.
- aunpack: extracts files from an archive. Often one wants to extract all files in an archive to a single subdirectory. However, some archives contain multiple files in their root directories. The aunpack program overcomes this problem by first extracting files to a unique (temporary) directory, and then moving its contents back if possible. This also prevents local files from being overwritten by mistake. (Examples for apack and aunpack follow below.)
(atool subcommand descriptions from the atool man page which is licensed under GPLv3+. Examples by me.)
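Since the man page ships no examples for the last two subcommands, here’s a quick sketch using the archives from above (the comments describe atool’s usual behaviour):

$ apack quux.zip quux/     # create quux.zip from the directory quux
$ aunpack foobar.tgz       # extract foobar.tgz, ending up in ./foobar/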
I do miss the existence of an agrep subcommand, though. Guess why?
atool supports a wealth of archive types: tar (gzip-, bzip-, bzip2-, compress-/Z-, lzip-, lzop-, xz-, and 7zip-compressed), zip, jar/war, rar, lha/lzh, 7zip, alzip/alz, ace, ar, arj, arc, rpm, deb, cab, gzip, bzip, bzip2, compress/Z, lzip, lzop, xz, rzip, lrzip and cpio. (Not all subcommands support all archive types.)
Similar Utilities
There are some utilities which cover parts of what atool does, too:
Tools from the mtools package
Yes, they come from the “handle MS-DOS floppy disks tool” package, don’t ask me why. :-)
- uz: gunzips and extracts a gzip’d tar’d archive.
  - Advantage over aunpack: Less to type. :-)
  - Disadvantage compared to aunpack: Supports only one archive format.
- lz: gunzips and shows a listing of a gzip’d tar’d archive.
  - Advantage over als: One character less to type. :-)
  - Disadvantage compared to als: Supports only one archive format.
unp
unp extracts one or more files given as arguments on the command line.
$ unp -s
Known archive formats and tools:
7z:            p7zip or p7zip-full
ace:           unace
ar,deb:        binutils
arj:           arj
bz2:           bzip2
cab:           cabextract
chm:           libchm-bin or archmage
cpio,afio:     cpio or afio
dat:           tnef
dms:           xdms
exe:           maybe orange or unzip or unrar or unarj or lha
gz:            gzip
hqx:           macutils
lha,lzh:       lha
lz:            lzip
lzma:          xz-utils or lzma
lzo:           lzop
lzx:           unlzx
mbox:          formail and mpack
pmd:           ppmd
rar:           rar or unrar or unrar-free
rpm:           rpm2cpio and cpio
sea,sea.bin:   macutils
shar:          sharutils
tar:           tar
tar.bz2,tbz2:  tar with bzip2
tar.lzip:      tar with lzip
tar.lzop,tzo:  tar with lzop
tar.xz,txz:    tar with xz-utils
tar.z:         tar with compress
tgz,tar.gz:    tar with gzip
uu:            sharutils
xz:            xz-utils
zip,cbz,cbr,jar,war,ear,xpi,adf: unzip
zoo:           zoo
So it’s very similar to aunpack, just a shorter command, and it supports some more exotic archive formats which atool doesn’t support.
Also part of the unp package is ucat, which does more or less the same as acat, just with unp as backend.
dtrx
From the man page of dtrx:
In addition to providing one command to extract many different archive types, dtrx also aids the user by extracting contents consistently. By default, everything will be written to a dedicated directory that’s named after the archive. dtrx will also change the permissions to ensure that the owner can read and write all those files.

Supported archive formats: tar, zip (including self-extracting .exe files), cpio, rpm, deb, gem, 7z, cab, rar, and InstallShield. It can also decompress files compressed with gzip, bzip2, lzma, or compress.
dtrx -l lists the contents of an archive, i.e. works like als or lz.
dtrx has two features not present in the other tools mentioned so far:
- It can extract metadata instead of the normal contents from .deb and .gem files.
- It can extract archives recursively, i.e. can extract archives inside of archives.
Unfortunately you can’t mix those two features. But you can use the following tool for that purpose:
deepfind
deepfind is a command from the package strigi-utils and recursively lists files in archives, including archives in archives. I’ve already written a detailed blog-posting about deepfind and its friend deepgrep.
tardiff
tardiff was written to check what changed in source code tarballs from one release to another. By default it just lists the differences in the file lists, not in the files’ contents, and hence works differently than adiff.
Summary
atool and friends are probably the first choice when it comes to DWIM archive handling, also because they have an easy-to-remember subcommand scheme.
uz and lz are the shortest way to extract or list the contents of a .tar.gz file. But nothing more. And you have to install mtools even if you don’t have a floppy drive.
unp comes in handy for exotic archive formats atool doesn’t support. And it’s way easier to remember and type than aunpack.
dtrx is neat if you want to extract archives in archives or if you want to extract metadata from some package files with just a few keystrokes.
For listing all files in recursive archives, use deepfind.
Tagged as: 7zip, acat, adiff, als, apack, archives, atool, aunpack, bzip, bzip2, deb, deepfind, dtrx, floppy, gem, grep, gzip, lha, lrzip, lz, lzip, lzop, MS-DOS, mtools, rar, rpm, rzip, strigi-utils, tar, tardiff, ucat, unp, UUUT, uz, xz, zip
Thursday·30·August·2012
Finding similar but not identical files //at 17:10 //by abe
There are quite some tools to find duplicate files in Debian (Ua is not even packaged for Debian!!!1!eleven! SCNR — via Chrütertee) and depending on the task I use either hardlink (see this blog posting), fdupes (if I need output with all identical files on one line; see example below), or duff (if it has to be performant).
But for code deduplication in historically grown code you sometimes need a tool which does not only find identical files, but also those which just differ in a few blanks or blank lines.
I found two tools in Debian which can give you some kind of percentage of similarity: simhash (which is btw. orphaned; upstream homepage) and similarity-tester (upstream homepage).
simhash has the shorter name and hence sounds more usable on the command-line. But it seems to only be able to compare two files at once, and only after first computing and writing down its similarity hash to a file. Not really usable for those one-liner cases on the command-line.
similarity-tester has the longer name (and one which made me suspect that it may be a GUI tool), but provides what I was looking for:
$ find . -type f | sim_text -ipTt 75
This lists all files in the current directory which have at least 75% (“-t 75”) in common with another file in the list of files. The option “-i” causes sim_text to read the files to compare from standard input; “-p” causes sim_text to just output the similarity percentage; and “-T” suppresses the per-file list of found tokens.
I used similarity-tester’s “sim_text” tool, which compares natural language, as most of the files I had to test are shell scripts. But similarity-tester also provides tools to test the similarity of code in specific programming languages, namely C, Java, Pascal, Modula-2, Lisp and Miranda.
Example output from the xen-tools project (after I already did a lot of code deduplication):
./intrepid/30-disable-gettys consists for 100 % of ./edgy/30-disable-gettys material
./edgy/30-disable-gettys consists for 100 % of ./intrepid/30-disable-gettys material
./common/90-make-fstab-rpm consists for 98 % of ./centos-5/90-make-fstab material
./centos-5/90-make-fstab consists for 98 % of ./common/90-make-fstab-rpm material
./gentoo/55-create-dev consists for 91 % of ./dapper/55-create-dev material
./dapper/55-create-dev consists for 90 % of ./gentoo/55-create-dev material
./gentoo/55-create-dev consists for 88 % of ./common/55-create-dev material
./common/90-make-fstab-deb consists for 87 % of ./common/90-make-fstab-rpm material
./common/90-make-fstab-rpm consists for 85 % of ./common/90-make-fstab-deb material
./common/30-disable-gettys consists for 81 % of ./karmic/30-disable-gettys material
./intrepid/80-install-kernel consists for 78 % of ./edgy/80-install-kernel material
./edgy/30-disable-gettys consists for 76 % of ./karmic/30-disable-gettys material
./karmic/30-disable-gettys consists for 76 % of ./edgy/30-disable-gettys material
./common/50-setup-hostname-rpm consists for 76 % of ./gentoo/50-setup-hostname material
Depending on the length of possible filenames and the amount of files, this can be made more readable using the column utility from the bsdmainutils package and reversed by using tac from the coreutils package:
$ find . -type f | sim_text -ipTt 75 | tac | column -t
./common/50-setup-hostname-rpm consists for 76 % of ./gentoo/50-setup-hostname material
./karmic/30-disable-gettys consists for 76 % of ./edgy/30-disable-gettys material
./edgy/30-disable-gettys consists for 76 % of ./karmic/30-disable-gettys material
./intrepid/80-install-kernel consists for 78 % of ./edgy/80-install-kernel material
./common/30-disable-gettys consists for 81 % of ./karmic/30-disable-gettys material
./common/90-make-fstab-rpm consists for 85 % of ./common/90-make-fstab-deb material
./common/90-make-fstab-deb consists for 87 % of ./common/90-make-fstab-rpm material
./gentoo/55-create-dev consists for 88 % of ./common/55-create-dev material
./dapper/55-create-dev consists for 90 % of ./gentoo/55-create-dev material
./gentoo/55-create-dev consists for 91 % of ./dapper/55-create-dev material
./centos-5/90-make-fstab consists for 98 % of ./common/90-make-fstab-rpm material
./common/90-make-fstab-rpm consists for 98 % of ./centos-5/90-make-fstab material
./edgy/30-disable-gettys consists for 100 % of ./intrepid/30-disable-gettys material
./intrepid/30-disable-gettys consists for 100 % of ./edgy/30-disable-gettys material
Compared to that, fdupes only finds the two 100% identical files:
$ fdupes -r1 .
./intrepid/30-disable-gettys ./edgy/30-disable-gettys
But fdupes already helped me a lot to find the first bunch of identical files in xen-tools. :-)
Tagged as: bsdmainutils, C, cleanup, column, coreutils, Debian, deduplication, duff, duplicates, fdupes, find, hardlink, Java, Lisp, Miranda, Modula-2, Ohloh, Pascal, recursive, sim_text similarity-tester, simhash, similarity, tac, UUUT, xen-tools
Tuesday·05·June·2012
Automatically hardlinking duplicate files under /usr/share/doc with APT //at 20:43 //by abe
On my everyday netbook (a very reliable first generation ASUS EeePC 701 4G) the disk (4 GB as the product name suggests :-) is nearly always close to full.
TL;DWTR? Jump directly to the HowTo. :-)
So I came up with a few techniques to save some more disk space. Installing localepurge was one of the earliest. Another one was to implement aptitude filters to do interactively what deborphan does non-interactively. Yet another one is to use du and friends a lot – ncdu is definitely my favourite du-like tool in the meantime.
Using du and friends I often noticed how much disk space /usr/share/doc takes up. But since I value the contents of /usr/share/doc a lot, I condemn how Nokia solved that on the N900: they let APT delete all files and directories under /usr/share/doc (including the copyright files!) via some package named docpurge. I also dislike Ubuntu’s “solution” of truncating the shipped changelog files (you can still get the remainder of the files on the web somewhere) as they’re an important source of information for me.
So when aptitude showed me that some package suddenly wanted to use up quite some more disk space, I noticed that the new package version included the upstream changelog twice. So I started searching for duplicate files under /usr/share/doc.
There are quite some tools to find duplicate files in Debian. hardlink seemed most appropriate for this case.
First I just looked for duplicate files per package, which even on that less than four gigabytes installation on my EeePC found nine packages which shipped at least one file twice.
As recommended, I rather opted for a corresponding Lintian check (see the related bug reports). Niels Thykier kindly implemented such a check in Lintian, and its findings are reported as the tags “duplicate-changelog-files” (Severity: normal, from Lintian 2.5.2 on) and “duplicate-files” (Severity: minor, experimental, from Lintian 2.5.0 on).
Nevertheless, some source packages generate several binary packages and all of them (of course) ship the same, in some cases quite large, (Debian) changelog file. So I found myself running hardlink /usr/share/doc now and then to gain some more free disk space. But as I run Sid and package upgrades happen more than daily, I came to the conclusion that I should run this command more or less after each aptitude run, i.e. automatically.
Having taken localepurge’s APT hook as an example, I added the following content as /etc/apt/apt.conf.d/98-hardlink-doc to my system:
// Hardlink identical docs, changelogs, copyrights, examples, etc
DPkg {
    Post-Invoke {"if [ -x /usr/bin/hardlink ]; then /usr/bin/hardlink -t /usr/share/doc; else exit 0; fi";};
};
So now installing a package which contains duplicate files looks like this:
~ # aptitude install perl-tk
The following NEW packages will be installed:
  perl-tk
0 packages upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,522 kB of archives. After unpacking 6,783 kB will be used.
Get: 1 http://ftp.ch.debian.org/debian/ sid/main perl-tk i386 1:804.029-1.2 [2,522 kB]
Fetched 2,522 kB in 1s (1,287 kB/s)
Selecting previously unselected package perl-tk.
(Reading database ... 121849 files and directories currently installed.)
Unpacking perl-tk (from .../perl-tk_1%3a804.029-1.2_i386.deb) ...
Processing triggers for man-db ...
Setting up perl-tk (1:804.029-1.2) ...
Mode:     real
Files:    15423
Linked:   3 files
Compared: 14724 files
Saved:    7.29 KiB
Duration: 4.03 seconds
localepurge: Disk space freed in /usr/share/locale: 0 KiB
localepurge: Disk space freed in /usr/share/man: 0 KiB
localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB
localepurge: Disk space freed in /usr/share/omf: 0 KiB
Total disk space freed by localepurge: 0 KiB
Sure, that wasn’t the most space-saving example, but on some installations I saved around 100 MB of disk space that way – and I still haven’t found a case where this caused unwanted damage. (Use this advice at your own risk, though. Pointers to potential problems welcome. :-)
Tagged as: APT, aptitude, ASUS, changelog, docpurge, du, duff, duplicate, duplicates, EeePC, hardlink, HowTo, Lintian, localepurge, N900, ncdu, nemo, Netbook, Nokia, recursive, Ubuntu
Saturday·05·May·2012
unburden-home-dir uploaded to Sid //at 02:54 //by abe
Most popular web browsers cause quite a lot of I/O on a user’s home directory and their caches also take up quite some disk space – with Google’s Chrome/Chromium you can’t even configure how much disk space should be used for the cache.
This causes unnecessary network traffic and makes no sense anymore if the home directory itself comes over the network, e.g. via NFS or Samba. And on laptops it spins up the disks, unnecessarily costs battery power and therefore lowers the potential battery life.
Such caches also cost scarce disk space on SSDs or flash cards as common in laptops, netbooks and other mobile devices, and often get backed up without any real use.
To take some of this burden off our NFS servers at work I started to develop an Xsession.d hook which moves off such caches to the local disk and puts in symbolic links instead into the user’s home directory when the user locally logs in.
This hook quickly became a standalone Perl script named unburden-home-dir and the Xsession.d hook just a wrapper around it. Due to some unsolved issues I didn’t feel it’s good enough for Debian Unstable, so I uploaded it just to Debian Experimental back then.
Pietro Abate’s recent blog posting about unburden-home-dir on Planet Debian gave me the right kick to make another try to solve the remaining issues.
And the mental distance gained over time indeed helped, and I could fix the remaining issues. So I added some polish to the package and uploaded it to Debian Unstable.
If you used the previous version from experimental, you have to take care of a few things:
- Previously some configuration files sported unburden_home_dir as base name while others used unburden-home-dir, as that’s also the package name. Now all configuration files use the package name, i.e. unburden-home-dir, as base name.
- “Conffiles” under /etc/ should be renamed by dpkg automatically, but per-user configuration files ($HOME/.unburden_home_dir and $HOME/.unburden_home_dir_list) must be manually renamed to $HOME/.unburden-home-dir and $HOME/.unburden-home-dir.list; see the commands below.
- By adding UNBURDEN_HOME=yes to $HOME/.unburden-home-dir every user can decide himself if he wants the Xsession.d hook to be used when he logs in under X. On managed workstations with many users this eases testing of unburden-home-dir with just a few users a lot.
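The per-user rename from the second point boils down to these two commands:

$ mv ~/.unburden_home_dir      ~/.unburden-home-dir
$ mv ~/.unburden_home_dir_list ~/.unburden-home-dir.list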
You can follow the development of unburden-home-dir also on GitHub and on Gitorious as well as on Ohloh.
Enjoy!
Tagged as: $HOME, cache, Chrome, Chromium, Conkeror, Debian, Epiphany, Experimental, Firefox, Galeon, Google, I/O, Icedove, Iceweasel, Kazehakase, Mozilla, NFS, Opera, performance, Planet Debian, Sid, symlinks, Thumbnails, Thunderbird, Trash, unburden-home-dir, Unstable, X
Wednesday·11·April·2012
Tools for CLI Road Warriors: Remote Shells //at 19:44 //by abe
Most of my private online life happens on netbooks, and besides the web browser, SSH is my most used program there. Accordingly I also have hosts on the net to which I connect via SSH. My most used program on those hosts is GNU Screen.
So yes, for things like e-mail, IRC, and Jabber I connect to a running screen session on some host with a permanent internet connection. On those hosts there is usually one GNU Screen instance running permanently with either mutt or irssi (which is also my Jabber client via a Bitlbee gateway).
But there are some other less well-known tools which I regard as useful in such a setup. The following two tools can both be seen as SSH for special occasions.
autossh
I already blogged about autossh, even twice, so I’ll just recap the most important features here:
autossh is a wrapper around SSH which regularly checks, via two tunnels connected to each other on the remote side, whether the connection is still alive; if not, it kills the ssh process and starts a new one with the same parameters (i.e. tunnels, port forwardings, commands to call, etc.).
It’s quite obvious that this is perfect to be combined with screen’s -R and -d options.
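A typical combined invocation might look roughly like this (the hostname is made up; -M specifies the base port autossh uses for its connection checks):

$ autossh -M 20000 -t shellhost.example.org 'screen -dR'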
I use autossh so often that I even adopted its Debian package.
mosh
Since last week there’s a new kid in town^W Debian Unstable: mosh targets the same problems as autossh (unreliable networks, roaming, suspending the computer, etc.), just with a completely different approach which partially even obsoletes the usage of GNU Screen or tmux:
While mosh uses plain SSH for authentication, authorization and key exchange the final connection is an AES-128 encrypted UDP connection on a random port and is independent of the client’s IP address.
This allows mosh to have the following advantages: The connection stays up even if you’re switching networks or suspending your netbook. So if you’re just running a single text-mode application you don’t even need GNU Screen or tmux. (You still do if you want the terminal multiplexing feature of GNU Screen or tmux.)
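Usage is deliberately ssh-like; a quick sketch with a made-up hostname, with the second variant directly starting a single text-mode application like irssi:

$ mosh user@example.org
$ mosh user@example.org -- irssi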
Another nice feature, especially on unreliable WLAN connections or laggy GSM or UMTS connections is mosh’s output prediction based on its input (i.e. what is typed). Per line it tries to guess which server reaction a key press would cause and if it detects a lagging connection, it shows the predicted result underlined until it gets the real result from the server. This eases writing mails in a remote mutt or chatting in a remote irssi, especially if you noticed that you made a typo, but can’t remember how many backspaces you would have to type to fix it.
Mosh needs to be installed on both client and server, but the server is only activated via SSH, so it has no open port unless a connection is started. And although mosh is currently available in Debian only in Unstable, the package builds fine on Squeeze, too. There’s also a PPA for Ubuntu, and of course you can also get the source code, e.g. as a git checkout from GitHub.
mosh is still under heavy development and new features and bug fixes get added nearly every day.
Thanks to Christine Spang for sponsoring and mentoring Keith’s mosh package in Debian.
Update: I gave a lightning talk about Mosh and AutoSSH in German at Easterhegg 2012. The slides are available online.
Tagged as: autossh, Bitlbee, Debian, GitHub, GNU Screen, IRC, irssi, Jabber, mosh, mutt, PPA, Squeeze, ssh, SSH, Testing, Ubuntu, Unstable
Wednesday·04·April·2012
Tools for CLI Road Warriors: Hidden Terminals //at 00:57 //by abe
Some networks have no connection to the outside except that they allow surfing through an HTTP(S) proxy. Sometimes you are happy and the HTTPS port (443) is unrestricted. The following server-side tools allow you to exploit these weaknesses and get you a shell on your server.
sslh
sslh is an SSH/SSL multiplexor. If a client connects to sslh, it checks whether the client speaks the SSH or the SSL protocol and then passes the connection on to the real port of the SSH daemon or of some SSL-enabled service, e.g. an HTTPS, OpenVPN, Tinc or XMPP server. That way it’s possible to offer SSH and one of these services on the same port.
The usual scenario where this daemon is useful are firewalls which block SSH, force HTTP to go through a proxy, but allow HTTPS connections without restriction. In that case you let sslh listen on the HTTPS port (443) and move the real HTTPS server (e.g. Apache) to listen on either a different port number (e.g. 442, 444 or 8443) or on another IP address, e.g. on localhost, port 443.
On a Debian or Ubuntu based Apache HTTPS server, you just have to do the following to run Apache on port 442 and sslh on port 443 instead:
- Run apt-get install sslh as root.
- Edit /etc/default/sslh, change RUN=no to RUN=yes and --ssl 127.0.0.1:443 to --ssl 127.0.0.1:442.
- Edit /etc/apache2/ports.conf and all files in /etc/apache2/sites-available/ which contain a reference to port 443 (which is only /etc/apache2/sites-available/default-ssl.conf in the default configuration) and change all occurrences of 443 to 442.
- Run service apache2 restart.
- Run service sslh start.
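After the /etc/default/sslh edit, the relevant part of that file might look roughly like this (a sketch; the exact set of options in the shipped DAEMON_OPTS may differ between sslh versions):

RUN=yes
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:442 --pidfile /var/run/sslh/sslh.pid"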
Now you should be able to ssh to your server on port 443 (ssh -p 443 your.server.example.org) while still being able to surf to https://your.server.example.org/.
sslh works as a threaded or a preforking daemon, or via inetd. It also honors tcpwrapper configurations for sshd in /etc/hosts.allow and /etc/hosts.deny.
sslh is available as port or package at least in Gentoo, in FreeBSD, in Debian and in Ubuntu.
AjaxTerm
AjaxTerm takes a completely different approach. It provides a terminal inside a web browser, with login and ssh being its server-side backend.
Properly safeguarded by HTTPS plus maybe HTTP-based authentication, this can be an interesting emergency alternative to the more common — but also more often blocked — remote login mechanisms.
AjaxTerm is available as package at least in Debian and in Ubuntu.
Happily, I was never forced to use either of them myself. :-)
Tagged as: AJAX, AjaxTerm, Apache, Debian, HTTPS, libwrap, OpenVPN, SSH, SSL, sslh, tcpd, tcpwrapper, Ubuntu, XMPP
Thursday·22·March·2012
Tools for CLI Road Warriors: Tunnels //at 19:49 //by abe
Sometimes the network you’re connected to is either untrusted (e.g. wireless) or castrated in some way. In both cases you want a tunnel to your trusted home base.
In the following I’ll show you three completely different tunneling tools which may be helpful while travelling.
sshuttle
sshuttle is a tool somewhere in between automatic port forwarding and a VPN. It tunnels arbitrary TCP connections and DNS through an SSH tunnel without requiring root access on the remote end of the SSH connection.
So it’s perfect for redirecting most of your traffic through an SSH tunnel to your favourite SSH server, e.g. to ensure your local privacy when you are online via a public, unencrypted WLAN (i.e. easy to sniff for everyone).
It runs on Linux and MacOS X and only needs a Python interpreter on the remote side. Requires root access (usually via sudo) on the client side, though.
It’s currently available at least in Debian Unstable and Testing (Wheezy) as well as in Ubuntu since 11.04 Natty.
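A sketch of a typical invocation (hostname made up): tunnel all TCP traffic plus DNS through your SSH server:

$ sshuttle --dns -r user@example.org 0/0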
Miredo
Miredo is a free and open-source implementation of Microsoft’s NAT-traversing Teredo IPv6 tunneling protocol for at least Linux, FreeBSD, NetBSD and MacOS X.
Miredo includes not only a Teredo client but also a Teredo server implementation. The developer of Miredo also runs a public Miredo server, so you don’t even need to install a server somewhere. If you run Debian or Ubuntu you just need to run apt-get install miredo as root and you have IPv6 connectivity. It’s that easy.
So it’s perfect to get a dynamic IPv6 tunnel for your laptop or mobile phone independently where you are and without the need to register any IPv6 tunnel or configure the Miredo client.
I usually use Miredo on my netbooks to be able to access my boxes at home (which are behind an IPv4 NAT router which is also a SixXS IPv6 tunnel endpoint) from wherever I am.
iodine
iodine is likely the most undermining tool in this set. It tunnels IPv4 over DNS, allowing you to make arbitrary network connections if you are on a network where nothing but DNS requests is allowed (i.e. only DNS packets reach the internet).
This is often the case on wireless LANs with a landing page. They redirect all web traffic to the landing page. But the network’s routers try to avoid poisoning the client’s DNS cache with replies different from those the client would get after the user has logged in. So DNS packets usually pass even the local network’s DNS servers unchanged; just TCP and other UDP packets are redirected until logging in.
With an iodine tunnel, it is possible to get a network connection to the outside on such a network anyway. On startup iodine tries to automatically find the best parameters (MTU, request type, etc.) for the current environment. However, that may fail if any DNS server in between imposes DNS request rate limits.
To be able to start such a tunnel you need to set up an iodine daemon somewhere on the internet. Choose a server which is not already a DNS server.
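A rough sketch of such a setup, assuming the made-up domain t.example.org has been delegated to the host running the daemon, and with a made-up password and tunnel address:

# on the server: answer DNS queries for t.example.org, tunnel IP 10.99.0.1
$ sudo iodined -f -P secret 10.99.0.1 t.example.org
# on the client inside the restricted network
$ sudo iodine -f -P secret t.example.org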
iodine is available in many distributions, e.g. in Debian and in Ubuntu.
Tagged as: autossh, Debian, GitHub, iodine, IPv6, Miredo, NAT, Python, Squeeze, SSH, sshuttle, Testing, Ubuntu, Unstable, VPN
Wednesday·21·March·2012
aptitude-gtk will likely vanish //at 01:06 //by abe
As Christian already wrote, there’s an Aptitude revival ongoing. We already saw this young team releasing aptitude 0.6.5 about 6 weeks ago, more commits have been made, and now we’re heading towards an 0.6.6 release quickly.
But this revival mostly covers the well-known and loved curses interface (TUI) of aptitude, not the seldom installed GTK interface, which unfortunately never really took off:
While aptitude itself (i.e. the curses and commandline interface) is installed on nearly 99% of all Debian installations which take part in Debian’s “Popularity Contest” statistics, aptitude-gtk is only installed on 0.42% of all these installations.
One reason is likely that aptitude-gtk still doesn’t have all the neat features of the curses interface. And another reason is probably that it’s still quite buggy.
Since nobody from the current Aptitude Team has the experience, leisure or time to resurrect (or even complete) aptitude-gtk, the plan is to stop building aptitude-gtk from the aptitude source package soon, i.e. to remove it from Debian for now.
Like the even less finished Qt interface of aptitude, its code will stay in the VCS, but will be unmaintained unless someone steps up to continue aptitude-gtk (or aptitude-qt, or both), maybe even as its own source package.
So if you like aptitude-gtk so much that you’re still using it and want to continue using it, please think about contributing by joining the Aptitude Team and getting aptitude’s GUI interface(s) back in shape.
Another option would be to find a mentor so that resurrecting (one of) aptitude’s GUI interfaces could become (again) a potential project at Debian’s participation at Google’s Summer of Code.
Please direct any questions about aptitude-gtk or aptitude-qt to the Aptitude Development Mailing List. Or even better, join the discussion in this thread.
Tagged as: aptitude, aptitude-gtk, Debian, Google, GSoC, Planet Debian, removal, Summer of Code, Wheezy
Tuesday·20·March·2012
Happy Birthday GNU Screen! //at 23:46 //by abe
According to this Usenet posting, GNU Screen became 25 years old today. (Found via Fefe.)
And no, it’s not dead. On the contrary, the reaction on the mailing list to bug fixes with patches is usually impressively prompt. :-)
I took this occasion and uploaded a current git snapshot of GNU Screen to Debian Experimental.
Bug #644788 (screen 4.1.0 can’t attach to a running or detached screen 4.0.3 session) is still an issue with that snapshot, but gladly upstream seems to work on a solution for it. There’s even talk about a 4.1.0 beta release soon — although that hasn’t happened yet.
Have fun!
Tagged as: anniversary, birthday, Debian, Experimental, Git, GNU, GNU Screen, screen, snapshot, upload
Wednesday·14·March·2012
SSH Multiplexer: parallel-ssh //at 03:10 //by abe
There are many SSH multiplexers in Debian and most of them have one or two features which make them unique and especially useful for one specific use case. I use some of them regularly (I even maintain the Debian package of one of them, namely pconsole :-) and I’ll present one of them here now and then.
For non-interactive purposes I really like parallel-ssh aka pssh. It takes a file of hostnames and a bunch of common ssh parameters as parameters, executes the given command in parallel in up to 32 threads (by default, adjustable with -p) and waits by default for 60 seconds (adjustable with -t). For example, to restart hobbit-client on all hosts in kiva.txt, the following command is suitable:
$ parallel-ssh -h kiva.txt -l root /etc/init.d/hobbit-client restart
[1] 19:56:03 [FAILURE] kiva6 Exited with error code 127
[2] 19:56:04 [SUCCESS] kiva
[3] 19:56:04 [SUCCESS] kiva4
[4] 19:56:04 [SUCCESS] kiva2
[5] 19:56:04 [SUCCESS] kiva5
[6] 19:56:04 [SUCCESS] kiva3
[7] 19:57:03 [FAILURE] kiva1 Timed out, Killed by signal 9
(Coloured “Screenshots” done with ANSI HTML Adapter from the package aha.)
You easily see on which hosts the command failed and partially also why: On kiva6 hobbit-client is not installed and therefore the init.d script is not present. kiva1 is currently offline so the ssh connection timed out.
If you want to see the output of the commands, you have two choices. Which one to choose depends on the expected amount of output:
If you don’t expect a lot of output, the -i (or --inline) option for inline aggregated output is probably the right choice:
$ parallel-ssh -h kiva.txt -l root -t 10 -i uptime
[1] 20:30:20 [SUCCESS] kiva
 20:30:20 up 7 days, 5:51, 0 users, load average: 0.12, 0.08, 0.06
[2] 20:30:20 [SUCCESS] kiva2
 20:30:20 up 7 days, 5:50, 0 users, load average: 0.19, 0.08, 0.02
[3] 20:30:20 [SUCCESS] kiva3
 20:30:20 up 7 days, 5:49, 0 users, load average: 0.10, 0.06, 0.06
[4] 20:30:20 [SUCCESS] kiva4
 20:30:20 up 7 days, 5:49, 0 users, load average: 0.25, 0.17, 0.14
[5] 20:30:20 [SUCCESS] kiva6
 20:30:20 up 7 days, 5:49, 10 users, load average: 0.16, 0.08, 0.02
[6] 20:30:21 [SUCCESS] kiva5
 20:30:21 up 7 days, 5:49, 0 users, load average: 3.11, 3.36, 3.06
[7] 20:30:29 [FAILURE] kiva1 Timed out, Killed by signal 9
If you expect a lot of output you can give directories with the -o (or --outdir) and -e (or --errdir) options:
$ parallel-ssh -h kiva.txt -l root -t 20 -o kiva-output lsb_release -a
[1] 20:36:51 [SUCCESS] kiva
[2] 20:36:51 [SUCCESS] kiva2
[3] 20:36:51 [SUCCESS] kiva3
[4] 20:36:51 [SUCCESS] kiva4
[5] 20:36:53 [SUCCESS] kiva6
[6] 20:36:54 [SUCCESS] kiva5
[7] 20:37:10 [FAILURE] kiva1 Timed out, Killed by signal 9
$ ls -l kiva-output
total 24
-rw-r--r-- 1 abe abe  98 Aug 28 20:36 kiva
-rw-r--r-- 1 abe abe   0 Aug 28 20:36 kiva1
-rw-r--r-- 1 abe abe  98 Aug 28 20:36 kiva2
-rw-r--r-- 1 abe abe  98 Aug 28 20:36 kiva3
-rw-r--r-- 1 abe abe  98 Aug 28 20:36 kiva4
-rw-r--r-- 1 abe abe 102 Aug 28 20:36 kiva5
-rw-r--r-- 1 abe abe 100 Aug 28 20:36 kiva6
$ cat kiva-output/kiva5
Distributor ID: Debian
Description:    Debian GNU/Linux 6.0.2 (squeeze)
Release:        6.0.2
Codename:       squeeze
The only annoying thing IMHO is that the host list needs to be in a file. With zsh, bash and the original ksh (but neither tcsh, pdksh nor mksh), you can circumvent this restriction with one of the following command lines:
$ parallel-ssh -h <(printf "host1\nhost2\nhost3\n…") -l root uptime
[…]
$ parallel-ssh -h <(echo host1 host2 host3 … | xargs -n1) -l root uptime
[…]
And in zsh there’s an even easier way to type this:
$ parallel-ssh -h <(print -l host1 host2 host3 …) -l root uptime
[…]
In addition to parallel-ssh, the pssh package also contains some more ssh-based tools:
- parallel-scp and parallel-rsync for copying files onto a set of hosts in parallel (example below).
- parallel-slurp for fetching files in parallel from a list of hosts.
- parallel-nuke to kill a bunch of processes in parallel on a set of machines.
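For example, pushing a file to all hosts in kiva.txt works analogously to parallel-ssh (a sketch; the file is just an example):

$ parallel-scp -h kiva.txt -l root /etc/hosts /etc/hosts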
I do think, though, that parallel-ssh is by far the most useful tool from the pssh package. (Probably no wonder, as it’s the most generic one. :-)
Tagged as: aha, Multiplexer, parallel-ssh, pconsole, pssh, SSH, UUUT
Monday·20·February·2012
Git Snapshot of GNU Screen in Debian Experimental //at 01:09 //by abe
I just uploaded a snapshot of GNU Screen to Debian Experimental. The package (4.1.0~20110819git450e8f3-1) is based on upstream’s HEAD whose most recent commit currently dates to the 19th of August 2011.
While the upload fixes tons of bugs which accumulated over the past two years in Debian’s, Ubuntu’s and upstream’s bug tracker, I don’t yet regard it as suitable for the next stable release (and hence for Debian Unstable) since there’s one not so nice issue about it:
- #644788: screen 4.1.0 can’t attach to a running/detached screen 4.0.3 session
Nevertheless it fixes a lot of open issues (of which the oldest is a wishlist bug report dating back to 1998 :-) and I didn’t want to withhold it from the rest of the Debian community so I uploaded it to Debian Experimental.
Issues closed in Debian Experimental
- #25096: digraph table should be run-time configurable
- #152961: lacks tsl/fsl/dsl caps
- #176626: mini-curses type of interface for screen -r w/ multiple screens? (Fixed by suggesting iselect, screenie or byobu)
- #223320: does not switch mouse mode
- #344759: mishandles xterm control string to set window title
- #353090: please enable the built-in telnet
- #361274: cannot reattach to sessionname if there is another session with similar sessionname
- #450421: please raise MAXWIN to at least 100 (merged with #499273)
- #461107: Requires test -t 0 even when opening a new window on existing screen
- #481411: window created with ‘-d -m’ silently ignores ‘-X exec’
- #488619: Session name string escape
- #496750: screen -d -m and -D -m segfault if setenv given with no value in a configuration file
- #532240: screen with caption SEGVs when resized to 1 line tall
- #541793: “C-a h” (mis)documented twice
- #558724: breaks altscreen
- #560231: Please remove restriction on user/login name length
- #578729: outputs spaces when refreshing/attaching a window with “defbce on”
- #591624: segfault when running “screen -d -m” with “layout save default” in .screenrc
- #603009: Updating the screen Uploaders list
- #612990: /etc/init.d/screen-cleanup: should check for existence of screen binary
- #621704: Fix slow scrolling in vertical splits
- #630535: manpage typo
- #641867: version bump (this bug report sparked the upload :-)
Update: Issues also closed in Debian Experimental, but not (yet) mentioned in the Debian changelog
- #238535: screen lock can no more be bypassed by reattaching.
- #446082: Shows cursor in front of the selected window in “windowlist -b”.
- #522689: Passes signals to programs running inside screen on kfreebsd.
- #526002: Adds focus left/right commands.
- #611453: Documents vertical split in man-page.
- #621804 and #630976: Allows longer $TERM than 20 characters
Issues which will be closed in Ubuntu
- #183849: update to git version of screen
- #315237: crashes with certain options and terminal sizes
- #582153: doesn’t accept login names longer than 20 chars
- #588846: slow when using vertical split
- #702094: Copying and pasting from mutt includes many trailing spaces
- #786292: segfaults if using layout saving with “-D -m”
- #788670: segfault in screen/byobu in natty
Please test the version from Experimental
If you are affected by one of the issues mentioned above, please try the version from Debian Experimental and check if they’re resolved for you, too.
Thanks to all who contributed!
A lot of the fixes have been made or applied upstream by Sadrul Habib Chowdhury who also industriously tagged Debian bug reports as “fixed-upstream”. Thanks!
Thanks also to Brian P Kroth, who gave the initial spark to this upload by packaging Fedora 15’s git snapshot for Debian and filing a bug, although the upload is based on the current HEAD version of GNU Screen as this fixes some more important issues than the snapshot Fedora 15 includes. That way two patches from Fedora/RedHat’s screen package are also included in this upload.
(Co-) Maintainer wanted!
Oh, and if you care about the state of GNU Screen in Debian, I’d really appreciate if you’d join in and contribute to our collab-maint git repository – there are still a lot of issues unresolved and I know that I won’t be able to fix all of them myself. And since Hessophanes unfortunately currently has not enough time for the package, we definitely need more people maintaining this package.
P.S.
Yes, I know about tmux and tried to get some of my setups working with it, too. But I still prefer screen over tmux. :-)
Tagged as: byobu, Debian, Experimental, git, GNU, GNU Screen, iselect, screen, screenie, snapshot, tmux, Ubuntu, upload
Tuesday·10·January·2012
Illegal attempt to re-initialise SSL for server (theoretically shouldn’t happen!) //at 02:52 //by abe
After dist-upgrading my main Hetzner server from Lenny to Squeeze, Apache failed to come up, barfing the following error message in the alphabetically last defined and enabled virtual host’s error log:
[error] Illegal attempt to re-initialise SSL for server (theoretically shouldn't happen!)
Well, this is not theory but the real world and it did happen — and it took me a while to find out what was wrong with the configuration even though it had worked with Lenny’s Apache version.
To avoid that others have to search as long as I had to, here’s the solution:
Look at all enabled sites, pick out those which have a VirtualHost on port 443 defined and verify that all these VirtualHost containers do have their own “SSLEngine On” statement. If at least one is missing, you’ll run into the above mentioned error message.
And it won’t necessarily show up in the error log of those VirtualHosts which are missing the statement but only in the last VirtualHost (or the last VirtualHost on port 443).
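In other words, every VirtualHost on port 443 needs its own copy of the directive, roughly like this (a minimal sketch; the server name and the certificate directives are placeholders):

<VirtualHost *:443>
    ServerName www1.example.org
    SSLEngine On
    # SSLCertificateFile, SSLCertificateKeyFile etc. as usual
</VirtualHost>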
To find the relevant site files, I used the following one-liner:
grep -lE 'VirtualHost.*443' sites-enabled/*[^~] | \
    xargs grep -ci "SSLEngine On" | \
    grep :0
Should work for all sites which have defined just one VirtualHost on port 443 per file.
I suspect that the raise of SNI made Apache’s SSL implementation more picky with regards to VirtualHosts.
Oh, and kudos to this comment to an article on Debian-Administration.org because it finally pointed me in the right direction. :-)
Tagged as: Apache, CLI, commandline, Debian, error, experience, grep, HTTPS, KMMR, Lenny, Squeeze, SSL, xargs