Wednesday·21·November·2012
Suggestions for the GNOME Team //at 23:01 //by abe
Thanks to Erich Schubert’s blog posting on Planet Debian I became aware of the 2012 GNOME User Survey at Phoronix.
Like back in 2006 I still use some GNOME applications, so I do consider myself a “GNOME user” in the widest sense and hence I filled out that survey. Additionally I have to live with GNOME 3 as a system administrator of workstations, and that’s some kind of usage, too. ;-)
The last question in the survey was “Do you have any comments or suggestions for the GNOME team?” — Sure I have. And since I tried to give constructive feedback instead of only ranting, here’s my answer to that question as I submitted it in the survey, too, just spiced up with some hyperlinks and highlighting:
Don’t try to change the users. Give the users more possibilities to change GNOME if they don’t agree with your own preferences and decisions. (The trend to castrate the user already started with GNOME 2, and GNOME 3 made that worse IMHO.)
If you really think that you need less configurability because some non-power-users are confused or challenged by too many choices, then please at least give the other users the chance to enable more configuration options. A very good example in that regard was Kazehakase (RIP), which offered several user interfaces (novice, intermediate and power user or such). The popular text-mode web browser Lynx does the same, too, btw.
GNOME lost me mostly with the change to GNOME 2. The switch from Galeon 1.2 to 1.3/2.0 was horrible, and the later switch to Epiphany made things even worse on the browser side. My short trip to GNOME as desktop environment ended with moving back to FVWM (configurable without tons of clicking, especially after moving to some other computer), and for the browser I moved on to Kazehakase back then. Nowadays I’m living very well with Awesome and Ratpoison as window managers, Conkeror as web browser (all of them very configurable) and a few selected GNOME applications like Liferea (luckily still quite configurable, although I miss Gecko’s about:config since the switch to WebKit), GUCharmap and Gnumeric.

For people switching from Windows I nowadays recommend XFCE, or maybe LXDE on low-end computers. I would likely recommend GNOME 2, too, if it still existed. With regard to MATE I’m skeptical about its persistence and future, but I’m glad it exists, as it solves a lot of problems and brings in just a few new ones. Cinnamon as well as SolusOS are based on the current GNOME libraries and are very likely the more persistent projects, but also very likely have the very same multi-head issues we’re all barfing about at work with Ubuntu Precise. (Heck, am I glad that I use Awesome at work, too, and all four screens work perfectly as they did with FVWM before.)
Thanks to Dirk Deimeke for his pointer (in German) to Marcus Moeller’s interview (also in German) with Ikey Doherty about his Debian-/GNOME-based distribution SolusOS.
Tagged as: awesome, Cinnamon, Debian, Desktop, Epiphany, FVWM, Galeon, GNOME, Gnumeric, GUCharmap, Kazehakase, Liferea, LXDE, MATE, Other Blogs, Phoronix, Planet Debian, Precise, Rant, ratpoison, SolusOS, survey, Ubuntu, XFCE
Tuesday·05·June·2012
Automatically hardlinking duplicate files under /usr/share/doc with APT //at 20:43 //by abe
On my everyday netbook (a very reliable first generation ASUS EeePC 701 4G) the disk (4 GB as the product name suggests :-) is nearly always close to full.
TL;DWTR? Jump directly to the HowTo. :-)
So I came up with a few techniques to save some more disk space. Installing localepurge was one of the earliest. Another one was to implement aptitude filters to do interactively what deborphan does non-interactively. Yet another one is to use du and friends a lot – ncdu is definitely my favourite du-like tool in the meantime.
Using du and friends I often noticed how much disk space /usr/share/doc takes up. But since I value the contents of /usr/share/doc a lot, I condemn how Nokia solved that on the N900: they let APT delete all files and directories under /usr/share/doc (including the copyright files!) via some package named docpurge. I also dislike Ubuntu’s “solution” of truncating the shipped changelog files (you can still get the remainder of the files on the web somewhere), as they’re an important source of information for me.

So when aptitude showed me that some package suddenly wanted to use up quite some more disk space, I noticed that the new package version included the upstream changelog twice. So I started searching for duplicate files under /usr/share/doc.
There are quite some tools to find duplicate files in Debian. hardlink seemed most appropriate for this case.
First I just looked for duplicate files per package, which even on that less-than-four-gigabytes installation on my EeePC found nine packages shipping at least one file twice.
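A quick way to reproduce such a per-package check is a dry run of hardlink over every package’s doc directory — a rough sketch only, assuming the output format of Debian’s hardlink tool as shown in the transcript further below:

# Sketch: report doc directories in which hardlink would link at least one
# file, i.e. packages shipping identical files more than once.
for dir in /usr/share/doc/*/; do
    linked=$(hardlink -n -t "$dir" | awk '/^Linked:/ {print $2}')
    [ "${linked:-0}" -gt 0 ] && echo "$linked duplicate file(s) in $dir"
done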
As recommended, I rather opted for an according Lintian check (see the corresponding bug reports). Niels Thykier kindly implemented such a check in Lintian, and its findings are reported as the tags “duplicate-changelog-files” (Severity: normal, from Lintian 2.5.2 on) and “duplicate-files” (Severity: minor, experimental, from Lintian 2.5.0 on).
Nevertheless, some source packages generate several binary packages, and all of them (of course) ship the same, in some cases quite large, (Debian) changelog file. So I found myself running hardlink /usr/share/doc now and then to gain some more free disk space. But as I run Sid and package upgrades happen more than daily, I came to the conclusion that I should run this command more or less after each aptitude run, i.e. automatically.
Having taken localepurge’s APT hook as example, I added the following content as /etc/apt/apt.conf.d/98-hardlink-doc to my system:
// Hardlink identical docs, changelogs, copyrights, examples, etc
DPkg {
  Post-Invoke {"if [ -x /usr/bin/hardlink ]; then /usr/bin/hardlink -t /usr/share/doc; else exit 0; fi";};
};
So now installing a package which contains duplicate files looks like this:
~ # aptitude install perl-tk
The following NEW packages will be installed:
  perl-tk
0 packages upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,522 kB of archives. After unpacking 6,783 kB will be used.
Get: 1 http://ftp.ch.debian.org/debian/ sid/main perl-tk i386 1:804.029-1.2 [2,522 kB]
Fetched 2,522 kB in 1s (1,287 kB/s)
Selecting previously unselected package perl-tk.
(Reading database ... 121849 files and directories currently installed.)
Unpacking perl-tk (from .../perl-tk_1%3a804.029-1.2_i386.deb) ...
Processing triggers for man-db ...
Setting up perl-tk (1:804.029-1.2) ...
Mode:     real
Files:    15423
Linked:   3 files
Compared: 14724 files
Saved:    7.29 KiB
Duration: 4.03 seconds
localepurge: Disk space freed in /usr/share/locale: 0 KiB
localepurge: Disk space freed in /usr/share/man: 0 KiB
localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB
localepurge: Disk space freed in /usr/share/omf: 0 KiB
Total disk space freed by localepurge: 0 KiB
Sure, that wasn’t the most space-saving example, but on some installations I saved around 100 MB of disk space that way – and I still haven’t found a case where this caused unwanted damage. (Use of this advice at your own risk, though. Pointers to potential problems welcome. :-)
Tagged as: APT, aptitude, ASUS, changelog, docpurge, du, duff, duplicate, duplicates, EeePC, hardlink, HowTo, Lintian, localepurge, N900, ncdu, nemo, Netbook, Nokia, recursive, Ubuntu
Wednesday·11·April·2012
Tools for CLI Road Warriors: Remote Shells //at 19:44 //by abe
Most of my private online life happens on netbooks, and besides the web browser, SSH is my most used program there. Accordingly I also have hosts on the net to which I connect via SSH. My most used program on those hosts is GNU Screen.
So yes, for things like e-mail, IRC, and Jabber I connect to a running screen session on some host with a permanent internet connection. On those hosts there is usually one GNU Screen instance running permanently with either mutt or irssi (which is also my Jabber client via a Bitlbee gateway).
But there are some other less well-known tools which I regard as useful in such a setup. The following two tools can both be seen as SSH for special occasions.
autossh
I already blogged about autossh, even twice, so I’ll just recap the most important features here:
autossh is a wrapper around SSH which regularly checks, via two tunnels connected to each other on the remote side, whether the connection is still alive. If not, it kills the ssh process and starts a new one with the same parameters (i.e. tunnels, port forwardings, commands to call, etc.).
It’s quite obvious that this is perfect to be combined with screen’s -R and -d options.
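A minimal invocation combining the two could look like this (the host name is just a placeholder; -M picks the local base port for autossh’s pair of monitoring tunnels):

# Keep an SSH connection alive and (re)attach to a remote GNU Screen session:
# -M 20000 uses ports 20000/20001 for autossh's monitoring loop,
# -t allocates a TTY so screen works interactively.
autossh -M 20000 -t user@shellhost.example.org 'screen -R -d'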
I use autossh so often that I even adopted its Debian package.
mosh
Since last week there’s a new kid in town^W Debian Unstable: mosh targets the same problems as autossh (unreliable networks, roaming, suspending the computer, etc.), just with a completely different approach which partially even obsoletes the usage of GNU Screen or tmux:
While mosh uses plain SSH for authentication, authorization and key exchange, the final connection is an AES-128-encrypted UDP connection on a random port and is independent of the client’s IP address.

This allows mosh to have the following advantages: the connection stays up even if you’re switching networks or suspending your netbook. So if you’re just running a single text-mode application, you don’t even need GNU Screen or tmux. (You still do if you want the terminal multiplexing feature of GNU Screen or tmux.)
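Usage is intentionally close to plain ssh — a hypothetical example (placeholder host name; the --ssh option is only needed if your sshd listens on a non-standard port):

# Connect; authentication happens over SSH, then mosh switches to its
# encrypted UDP channel:
mosh user@shellhost.example.org

# With sshd on a non-standard port (e.g. 2222), tell mosh how to reach it:
mosh --ssh='ssh -p 2222' user@shellhost.example.org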
Another nice feature, especially on unreliable WLAN connections or laggy GSM or UMTS connections is mosh’s output prediction based on its input (i.e. what is typed). Per line it tries to guess which server reaction a key press would cause and if it detects a lagging connection, it shows the predicted result underlined until it gets the real result from the server. This eases writing mails in a remote mutt or chatting in a remote irssi, especially if you noticed that you made a typo, but can’t remember how many backspaces you would have to type to fix it.
Mosh needs to be installed on both client and server, but the server side is only activated via SSH, so no port is open unless a connection has been started. And although mosh is (in Debian) currently only available in Unstable, the package builds fine on Squeeze, too. There’s also a PPA for Ubuntu, and of course you can also get the source code, e.g. as a git checkout from GitHub.
mosh is still under heavy development and new features and bug fixes get added nearly every day.
Thanks to Christine Spang for sponsoring and mentoring Keith’s mosh package in Debian.
Update: I gave a lightning talk (in German) about Mosh and AutoSSH at Easterhegg 2012. The slides are available online.
Tagged as: autossh, Bitlbee, Debian, GitHub, GNU Screen, IRC, irssi, Jabber, mosh, mutt, PPA, Squeeze, ssh, SSH, Testing, Ubuntu, Unstable
Wednesday·04·April·2012
Tools for CLI Road Warriors: Hidden Terminals //at 00:57 //by abe
Some networks have no connection to the outside except that they allow surfing through an HTTP(S) proxy. Sometimes you are lucky and the HTTPS port (443) is unrestricted. The following server-side tools allow you to exploit these weaknesses and get a shell on your server.
sslh
sslh is an SSH/SSL multiplexer. If a client connects to sslh, it checks whether the client speaks the SSH or the SSL protocol and then passes the connection on to the real SSH daemon or to some SSL-enabled service, e.g. an HTTPS, OpenVPN, Tinc or XMPP server. That way it’s possible to serve SSH and one of these services on the same port.
The usual scenario where this daemon is useful is firewalls which block SSH, force HTTP to go through a proxy, but allow HTTPS connections without restriction. In that case you let sslh listen on the HTTPS port (443) and move the real HTTPS server (e.g. Apache) to either a different port number (e.g. 442, 444 or 8443) or another IP address, e.g. localhost, port 443.
On a Debian or Ubuntu based Apache HTTPS server, you just have to do the following to run Apache on port 442 and sslh on port 443 instead:

- Run apt-get install sslh as root.
- Edit /etc/default/sslh, change RUN=no to RUN=yes and --ssl 127.0.0.1:443 to --ssl 127.0.0.1:442 (see the sketch after this list).
- Edit /etc/apache2/ports.conf and all files in /etc/apache2/sites-available/ which contain a reference to port 443 (which is only /etc/apache2/sites-available/default-ssl.conf in the default configuration) and change all occurrences of 443 to 442.
- Run service apache2 restart.
- Run service sslh start.
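For illustration, the relevant part of /etc/default/sslh might then look roughly like this — a sketch only, since the exact DAEMON_OPTS shipped by the package may differ between versions:

# /etc/default/sslh (sketch)
RUN=yes

# Accept connections on the public HTTPS port; hand SSH clients to the
# local sshd and SSL clients to Apache, which now listens on port 442:
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:442 --pidfile /var/run/sslh/sslh.pid"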
Now you should be able to ssh to your server on port 443 (ssh -p 443 your.server.example.org) while still being able to surf to https://your.server.example.org/.
sslh works as a threaded or as a preforking daemon, or via inetd. It also honors tcpwrapper configurations for sshd in /etc/hosts.allow and /etc/hosts.deny.
sslh is available as port or package at least in Gentoo, in FreeBSD, in Debian and in Ubuntu.
AjaxTerm
AjaxTerm takes a completely different approach: it provides a terminal inside a web browser, with login and ssh being its server-side backend.
Properly safeguarded by HTTPS plus maybe HTTP-based authentication, this can be an interesting emergency alternative to the more common — but also more often blocked — remote login mechanisms.
AjaxTerm is available as package at least in Debian and in Ubuntu.
Happily, I have never been forced to use either of them myself. :-)
Tagged as: AJAX, AjaxTerm, Apache, Debian, HTTPS, libwrap, OpenVPN, SSH, SSL, sslh, tcpd, tcpwrapper, Ubuntu, XMPP
Thursday·22·March·2012
Tools for CLI Road Warriors: Tunnels //at 19:49 //by abe
Sometimes the network you’re connected to is either untrusted (e.g. wireless) or castrated in some way. In both cases you want a tunnel to your trusted home base.

In the following I’ll show you three completely different tunneling tools which may be helpful while travelling.
sshuttle
sshuttle is a tool somewhere in between automatic port forwarding and a VPN. It tunnels arbitrary TCP connections and DNS through an SSH tunnel without requiring root access on the remote end of the SSH connection.
So it’s perfect for redirecting most of your traffic through an SSH tunnel to your favourite SSH server, e.g. to ensure your local privacy when you are online via a public, unencrypted WLAN (i.e. easy to sniff for everyone).
It runs on Linux and Mac OS X and only needs a Python interpreter on the remote side. It requires root access (usually via sudo) on the client side, though.
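A typical invocation might look like this (the host name is a placeholder; 0/0 means “route everything”):

# Tunnel all IPv4 TCP traffic plus DNS lookups through shellhost.example.org;
# sshuttle asks for sudo locally, the remote side only needs Python.
sshuttle --dns -r user@shellhost.example.org 0/0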
It’s currently available at least in Debian Unstable and Testing (Wheezy) as well as in Ubuntu since 11.04 Natty.
Miredo
Miredo is a free and open-source implementation of Microsoft’s NAT-traversing Teredo IPv6 tunneling protocol for at least Linux, FreeBSD, NetBSD and Mac OS X.
Miredo includes not only a Teredo client but also a Teredo server implementation. The developer of Miredo also runs a public Miredo server, so you don’t even need to install a server somewhere. If you run Debian or Ubuntu, you just need to run apt-get install miredo as root and you have IPv6 connectivity. It’s that easy.
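After installing it, a quick check could look like this — a sketch assuming Miredo’s default tunnel interface name teredo:

# The Teredo interface should have an address from the 2001:0::/32 prefix:
ip -6 addr show dev teredo

# And IPv6 hosts should now be reachable:
ping6 -c 3 ipv6.google.com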
So it’s perfect to get a dynamic IPv6 tunnel for your laptop or mobile phone independently of where you are and without the need to register any IPv6 tunnel or configure the Miredo client.
I usually use Miredo on my netbooks to be able to access my boxes at home (which are behind an IPv4 NAT router which is also a SixXS IPv6 tunnel endpoint) from wherever I am.
iodine
iodine is likely the most subversive tool in this set. It tunnels IPv4 over DNS, allowing you to make arbitrary network connections if you are on a network where nothing but DNS requests is allowed (i.e. only DNS packets reach the internet).
This is often the case on wireless LANs with a landing page: they redirect all web traffic to the landing page, but the network’s operators try to avoid poisoning the clients’ DNS caches with replies different from those they would get after logging in. So DNS packets usually pass through even the local network’s DNS servers unchanged; only TCP and other UDP packets are redirected until you log in.
With an iodine tunnel, it is possible to get a network connection to the outside on such a network anyway. On startup iodine tries to automatically find the best parameters (MTU, request type, etc.) for the current environment. However, that may fail if any DNS server in between imposes DNS request rate limits.
To be able to start such a tunnel you need to set up an iodine daemon somewhere on the internet. Choose a server which is not already a DNS server.
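Setting it up also requires delegating a (sub)domain to the machine running the iodine daemon; a rough sketch with placeholder names:

# On the server: "t.example.org" is a placeholder and must be delegated via
# an NS record to the host running iodined; 10.53.0.1 becomes the server's
# end of the tunnel network:
iodined -f 10.53.0.1 t.example.org

# On the restricted client: iodine asks for the same tunnel password and
# brings up a tunnel interface with an address from the same network:
iodine -f t.example.org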
iodine is available in many distributions, e.g. in Debian and in Ubuntu.
Tagged as: autossh, Debian, GitHub, iodine, IPv6, Miredo, NAT, Python, Squeeze, SSH, sshuttle, Testing, Ubuntu, Unstable, VPN
Monday·20·February·2012
Git Snapshot of GNU Screen in Debian Experimental //at 01:09 //by abe
I just uploaded a snapshot of GNU Screen to Debian Experimental. The package (4.1.0~20110819git450e8f3-1) is based on upstream’s HEAD whose most recent commit currently dates to the 19th of August 2011.
While the upload fixes tons of bugs which accumulated over the past two years in Debian’s, Ubuntu’s and upstream’s bug trackers, I don’t yet regard it as suitable for the next stable release (and hence for Debian Unstable), since there’s one not-so-nice issue about it:
- #644788: screen 4.1.0 can’t attach to a running/detached screen 4.0.3 session
Nevertheless it fixes a lot of open issues (of which the oldest is a wishlist bug report dating back to 1998 :-) and I didn’t want to withhold it from the rest of the Debian community so I uploaded it to Debian Experimental.
Issues closed in Debian Experimental
- #25096: digraph table should be run-time configurable
- #152961: lacks tsl/fsl/dsl caps
- #176626: mini-curses type of interface for screen -r w/ multiple screens? (Fixed by suggesting iselect, screenie or byobu)
- #223320: does not switch mouse mode
- #344759: mishandles xterm control string to set window title
- #353090: please enable the built-in telnet
- #361274: cannot reattach to sessionname if there is another session with similar sessionname
- #450421: please raise MAXWIN to at least 100 (merged with #499273)
- #461107: Requires test -t 0 even when opening a new window on existing screen
- #481411: window created with ‘-d -m’ silently ignores ‘-X exec’
- #488619: Session name string escape
- #496750: screen -d -m and -D -m segfault if setenv given with no value in a configuration file
- #532240: screen with caption SEGVs when resized to 1 line tall
- #541793: “C-a h” (mis)documented twice
- #558724: breaks altscreen
- #560231: Please remove restriction on user/login name length
- #578729: outputs spaces when refreshing/attaching a window with “defbce on”
- #591624: segfault when running “screen -d -m” with “layout save default” in .screenrc
- #603009: Updating the screen Uploaders list
- #612990: /etc/init.d/screen-cleanup: should check for existence of screen binary
- #621704: Fix slow scrolling in vertical splits
- #630535: manpage typo
- #641867: version bump (this bug report sparked the upload :-)
Update: Issues also closed in Debian Experimental, but not (yet) mentioned in the Debian changelog
- #238535: screen lock can no more be bypassed by reattaching.
- #446082: Shows cursor in front of the selected window in “windowlist -b”.
- #522689: Passes signals to programs running inside screen on kfreebsd.
- #526002: Adds focus left/right commands.
- #611453: Documents vertical split in man-page.
- #621804 and #630976: Allows longer $TERM than 20 characters
Issues which will be closed in Ubuntu
- #183849: update to git version of screen
- #315237: crashes with certain options and terminal sizes
- #582153: doesn’t accept login names longer than 20 chars
- #588846: slow when using vertical split
- #702094: Copying and pasting from mutt includes many trailing spaces
- #786292: segfaults if using layout saving with “-D -m”
- #788670: segfault in screen/byobu in natty
Please test the version from Experimental
If you are affected by one of the issues mentioned above, please try the version from Debian Experimental and check if they’re resolved for you, too.
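A quick way to pull it in might be the following (a sketch; it assumes you don’t already have an experimental entry in your APT sources, and everything runs as root):

# Add the experimental suite; packages from it are only installed on request:
echo 'deb http://ftp.debian.org/debian experimental main' \
    > /etc/apt/sources.list.d/experimental.list
apt-get update
apt-get install -t experimental screen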
Thanks to all who contributed!
A lot of the fixes have been made or applied upstream by Sadrul Habib Chowdhury who also industriously tagged Debian bug reports as “fixed-upstream”. Thanks!
Thanks also to Brian P Kroth who gave the initial spark for this upload by packaging Fedora 15’s git snapshot for Debian and filing a bug about it. The upload is based on the current HEAD version of GNU Screen though, as that fixes some more important issues than the snapshot Fedora 15 includes. That way two patches from Fedora/Red Hat’s screen package also made it into this upload.
(Co-) Maintainer wanted!
Oh, and if you care about the state of GNU Screen in Debian, I’d really appreciate it if you’d join in and contribute to our collab-maint git repository – there are still a lot of unresolved issues and I know that I won’t be able to fix all of them myself. And since Hessophanes unfortunately doesn’t have enough time for the package at the moment, we definitely need more people maintaining this package.
P.S.
Yes, I know about tmux and tried to get some of my setups working with it, too. But I still prefer screen over tmux. :-)
Tagged as: byobu, Debian, Experimental, git, GNU, GNU Screen, iselect, screen, screenie, snapshot, tmux, Ubuntu, upload
Friday·28·October·2011
Conkeror usable on Ubuntu again despite XULRunner removal //at 00:08 //by abe
Because of the very annoying new Mozilla release policy (which looks like a pissing contest with the similarly annoying Google Chrome/Chromium release schedule), Ubuntu kicked out Mozilla XULRunner with its recent release of 11.10 Oneiric. And with XULRunner, Ubuntu also kicked out Conkeror and all other XULRunner reverse dependencies. Meh.
Sparked by this thread on the Conkeror mailing list, I extended the Debian package’s /usr/bin/conkeror wrapper script so that it also looks for firefox in the search path if no xulrunner* is found, and added an alternative dependency on firefox versions greater than or equal to 3.5.
From now on, if the wrapper script finds no xulrunner but firefox in the search path, it calls firefox -app instead of xulrunner-$VERSION to start Conkeror.
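In shell terms, the fallback boils down to something like the following — a simplified sketch, not the actual wrapper script; the application.ini path is an assumption based on where the Debian package installs Conkeror:

#!/bin/sh
# Simplified fallback logic of the conkeror wrapper (sketch).
APP_INI=/usr/share/conkeror/application.ini

XULRUNNER=$(ls /usr/bin/xulrunner* 2>/dev/null | head -n 1)
if [ -n "$XULRUNNER" ]; then
    exec "$XULRUNNER" "$APP_INI" "$@"
elif command -v firefox >/dev/null 2>&1; then
    # Firefox >= 3.5 can run a XULRunner application via its -app switch:
    exec firefox -app "$APP_INI" "$@"
else
    echo "conkeror: neither xulrunner nor firefox found in \$PATH" >&2
    exit 1
fi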
With the exception of the about: page showing the orange-blue Firefox logo and claiming that this is “Firefox $CONKEROR_VERSION”, it works as expected on my Toshiba AC100 netbook running the armel port of Ubuntu 11.10.
From version 1.0~~pre+git1110272207-~nightly1 on, the Conkeror nightly-built Debian packages will be installable on Ubuntu 11.10 Oneiric again without the need to install or keep the XULRunner version from Ubuntu 11.04 Natty.
For those who don’t want to use the nightly builds, I created a (currently still empty) specific PPA for Conkeror where I’ll probably upload all the conkeror packages I upload to Debian Unstable.
Tagged as: .deb, 11.04, 11.10, AC100, armel, Browser, build, Chrome, Chromium, Conkeror, Debian, Dependencies, Dynabook, FAIL, Firefox, Google, Natty, netbook, nightly, Oneiric, packaging, PPA, Rant, Tech Babble, Toshiba, Ubuntu, XULRunner