Getting more screen real estate with x2x

x2x is a program that transfers keyboard and mouse input from one X display to another. Since X displays can already be driven over the network (using X forwarding), what x2x really gives you is the ability to control a desktop remotely.

One particularly neat feature of x2x is the directional mode, which can essentially make two monitors behave like a single large screen: when you move your mouse off one edge of the first display, it appears on the second, even if the displays are running on two separate computers. This is really handy if you have two computers near each other and you need to use both of them at the same time, or if you just want some more screen area for certain tasks (for example, writing on one display while reading on another). It's actually easy enough that you can set up this kind of sharing ad hoc, whenever you need it.

If you can SSH from one computer to the other, then x2x is very easy to configure. Suppose you have two computers named LAPTOP and DESKTOP. Then on LAPTOP do the following:

laptop$ ssh -X desktop
desktop$ x2x -east -from $DISPLAY -to :0

Now LAPTOP's mouse and keyboard can control DESKTOP when you move the mouse off the right-hand edge of the screen. To make DESKTOP's mouse and keyboard control LAPTOP instead, do the following on LAPTOP:

laptop$ ssh -X desktop
desktop$ x2x -west -from :0 -to $DISPLAY

If neither of the computers you're using can SSH to the other, you'll need a third computer that both can reach. Suppose you want to connect the displays of LAPTOP1 and LAPTOP2. The key is to connect both to SERVER, determine which display numbers their X-forwarded connections have been assigned, and then invoke x2x on one of them. On LAPTOP1, do this:

laptop1$ ssh -X server
server$ echo $DISPLAY

Then on LAPTOP2, running

laptop2$ ssh -X server
server$ x2x -east -from :10 -to $DISPLAY

allows LAPTOP1 to control LAPTOP2 (provided you have replaced :10 with what LAPTOP1's $DISPLAY variable actually is). To make LAPTOP2 control LAPTOP1:

laptop2$ ssh -X server
server$ x2x -west -from $DISPLAY -to :10

Of course, you can use -north or -south in place of -east and -west, depending on how the monitors are actually arranged relative to each other.

Everything is a text file

One of the things which makes UNIX systems so powerful is the ease with which one can move data around. What makes this possible is the fact that, with few exceptions, everything is a text stream:

  • Configuration files are plain text.
  • Data are usually stored as flat text files.
  • Even executables, which most Windows users consider to be synonymous with "binaries", are frequently text files: shell scripts, Perl scripts, Python scripts, and the like.
  • Most of the UNIX core programs produce output as text streams or text files.
(On Windows, all of the above, except configurations, are usually represented as binary data, and configuration data, as stored in the registry, is not really amenable to editing by hand.)

What this means is that any text tool you learn, from less to emacs, can be put to use in almost any situation; you don't have to learn specialized tools for every new task you want to perform. Moreover, it means that applications which understand text automatically get an interface they can use to talk to each other. Text is the universal language of computing.
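As a tiny illustration, here is a pipeline that treats colon-delimited data in the style of /etc/passwd as plain text; the sample file and its contents are made up for the example, but the same tools work on the real thing:

```shell
# Fabricate a small /etc/passwd-style file (sample data for illustration).
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin' \
  'phil:x:1000:1000:Phil:/home/phil:/bin/bash' > passwd.sample

# Count the distinct login shells, most common first: cut slices out
# field 7, sort groups duplicates, uniq -c counts them, sort -nr ranks.
cut -d: -f7 passwd.sample | sort | uniq -c | sort -nr
```

None of these four programs knows anything about passwd files; they cooperate purely because the data is text.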

This model is so useful that Linux even creates text interfaces for many system internals which are not naturally represented as text files or streams. The /proc filesystem, inspired by Plan 9, is one such "virtual filesystem" which exposes certain system vital signs. For example, /proc/cpuinfo provides information about the CPU(s):

prompt$ head /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 9
model name      : Intel(R) Pentium(R) M processor 1400MHz
stepping        : 5
cpu MHz         : 1400.000
cache size      : 1024 KB
fdiv_bug        : no
hlt_bug         : no

The /proc filesystem contains a wealth of system information (current processes, memory usage, hardware and software configuration) all in the guise of text files that you can read. You can write to some of these files to change configurations as well. For example, when executed as root,

echo newhostname > /proc/sys/kernel/hostname
changes the host name to "newhostname", and
echo 1 > /proc/sys/vm/drop_caches
drops the system page cache.

So what? It means that when you want to write a program to read or manipulate some aspect of the system, you don't have to rely on bindings which are fragile, or require special headers, or are unavailable in your language of choice. All your program needs to do is read from or write to a file, which is (usually) a piece of cake.
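For instance, on a Linux system (where /proc is available), a shell one-liner can inspect kernel state with no bindings, headers, or libraries at all:

```shell
# Assuming a Linux system: kernel state is readable with ordinary file tools.
cat /proc/sys/kernel/hostname     # the same value hostname(1) reports
grep MemTotal /proc/meminfo       # total RAM, straight from a text file
```

The same two lines could just as well be a Perl script, a Python script, or a C program; anything that can open a file can do this.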

Sometimes flat text isn't enough. What if you need structured or hierarchical data? As it turns out, a filesystem provides a fine hierarchical storage mechanism in the form of directories. For example, the /proc filesystem stores information about the process with PID N inside files in /proc/N/. But when structure is stored in directories instead of, say, nested XML elements, or in the keys of a Windows registry file, you can bring to bear all the tools you already know that operate on files. If you're deploying an application, it's trivial to copy or extract a configuration directory to ~/.appname/. It's not quite as easy to unzip an XML configuration fragment into a larger XML configuration file.
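A quick sketch of the directory-as-hierarchy idea, using a hypothetical application's configuration (all names here are made up for the example):

```shell
# A hypothetical app stores its config as a directory tree, not one big file.
mkdir -p appconf/plugins
echo 'theme = dark'  > appconf/settings
echo 'enabled = yes' > appconf/plugins/spellcheck

# Deploying is just a recursive copy; inspecting is just find and grep.
cp -r appconf deployed-appconf
find deployed-appconf -type f
grep -r 'theme' deployed-appconf
```

Every step uses a tool you already know; nothing needs to parse a registry or splice XML.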

The idea of universal interfaces has found traction even as flat text files have lost ground. In the OpenDocument format, each document (extensions .odt, .odp, etc.) is really a ZIP file containing an XML description of the file and any extra media. (On the other hand, prior to Microsoft Office 2007, essentially all knowledge of the Office file format was obtained through reverse-engineering.)

The other week, I found myself in a situation where I had to save all the (numerous) images in a Word document to individual files. The fastest way was in fact to use OpenOffice to convert to an OpenDocument; when I unzipped that, one of the directories inside contained all the pictures which were in the original document. Common interfaces and tools help you to break free from the limitations of specialized tools when necessary.

Further reading: Unix philosophy, proc filesystem. The /sys filesystem is worth a look too.

The Unix Pipe

Any Unix installation comes with a collection of small, sharp programs: grep, tar, and scores of others.

Here are some examples of using these tools in combination by using the Unix pipe. These combinations really amount to one-line programs. (The theme is IM log analysis.)

(Gaim saves IM logs in files named with the date and time.)

cd ~/.gaim/logs/aim/my_sn/; find . | egrep '2006-08-26' | xargs head

"Show me the beginning of every conversation I had yesterday."

(Within each account, Gaim stores the logs for each of your buddies in a directory named using the buddy's screen name.)

cd ~/.gaim/logs/aim/my_sn/; du -sk * | sort -gr | cut -f2 | head

"Print the screen names of the people I talk to the most, in descending order of total log size."

cd ~/.gaim/logs/aim/my_sn/; egrep -i -r "my_sn:.*linux" * | wc -l

"How many times have I mentioned 'linux' in conversation?" (Answer: 83.)

Here is some stuff I actually use on a regular basis:

wget -O - http://example.com/file.tar.gz | tar -xz

Download and decompress a file in one step. This way, I don't have to make a temporary directory in which to download the file, I don't have to remember where I put the file, and I don't have to delete it when I'm done.

ssh phil@remotehost 'cat ~/filelist | xargs tar -C ~/ -cz' > ./backup`date +%Y%m%d`.tar.gz

Make backups of selected files over the network from a remote host. This command reads a file I keep (named filelist) which contains a list of all the files/directories I want to back up, one per line.

For your edification, or if you're insomniac, here are full explanations for what the above commands are doing:

  • find recursively lists every file in the log directory; egrep does filtering to pass on only those lines (filenames) which contain that particular date; xargs constructs and executes the command "head file1 file2 ..." where file1, file2, etc. are the lines it gets from stdin; head, in turn, prints the beginning (first 10 lines) of each file argument.
  • du prints each directory named, preceded by its disk usage; sort sorts all the rows by the first column (the disk usage) in reverse (decreasing) order; cut trims off the size so that only the names remain (the second column, hence -f2) and head limits the output to the first 10 lines.
  • egrep -i -r searches recursively over all lines in all files contained in the directory; wc -l takes in all the lines and prints as output only the number of lines.
  • wget -O - downloads the file and outputs to stdout instead of to a file; tar -xz extracts a .tar.gz from stdin to the current directory (the absence of -f FILE means use stdin instead of reading from a file).
  • The quoted command is run on the remote host: "cat ~/filelist | xargs tar -cz" constructs and executes the command "tar -cz file1 file2 ...", supplying to tar all the files I've listed in filelist. This compresses all the files I named and writes the archive to stdout. The archive is then written to a file named something like backup20060827.tar.gz. (The date command is executed and outputs something like "20060827"; this string is then pasted in to the command.)

Bonus: In the last example, the stdout of the last command on the remote host can be immediately redirected to a file on the local host. ssh is in general capable of connecting pipes between programs on different hosts. It automatically streams that information over the network (encrypted, of course) so that the connection is transparent to everyone involved!
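The pipe pattern itself is purely local; ssh just splices the network into the middle of it. Here is a local sketch (the commented line shows the networked variant, assuming a reachable host named remotehost):

```shell
# Copy a directory tree through a pipe: the first tar writes an archive
# to stdout, the second reads it from stdin and unpacks it elsewhere.
mkdir -p src dest
echo 'hello' > src/file.txt
tar -C src -cz . | tar -C dest -xz

# The networked version is identical, with ssh spliced into the middle:
#   tar -C src -cz . | ssh phil@remotehost 'tar -C dest -xz'
```

Because both ends speak the same text-and-byte-stream interface, neither tar knows or cares whether the other end is local or remote.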

Further reading: on a GNU system, typing info coreutils will bring up information about the base GNU tools, like cat, head, and more.