What Unicode Means (or, why the Content-Type header is important)

Back in the day, when all computer engineers spoke English and lived in the USA, there was a wonderful standard called ASCII for representing text. It mapped each element of the English alphabet, digits, and punctuation (as well as some special symbols) to numbers between 0 and 127. For example, the letter "a" was 97. Since a byte can hold any of 256 values, text could be encoded using just one byte per character, and there were even plenty of unused codes left over.
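You can poke at this mapping directly from a Python prompt:

>>> ord('a')   # the ASCII code for "a"
97
>>> chr(65)    # and back again
'A'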

Computer manufacturers all had their own ideas about what meaning to assign to the unused codes. They would represent accented Latin characters if you bought a computer in Western Europe, Hebrew characters if you bought a computer in Israel, etc. For people writing in East Asian languages, there were even more complicated schemes which required using two bytes for each character, because those languages have more than 128 additional characters that need to be represented (thousands, in fact).

Eventually ANSI standardized these extensions to ASCII by creating the concept of "code pages": each code page was an encoding that mapped some set of characters to the codes between 128 and 255, although the codes from 0 to 127 were all the same (that is, the same as ASCII). If you are using Windows in any Western European language, you may have noticed your computer mentioning that it is using the Windows-1252 encoding, which is one such code page (it includes, mostly, accented Latin characters like è).

Just because the code pages were standardized did not mean that they were not a mess. Sharing files with others was problematic, because your messages would appear garbled if the recipient used a different encoding while opening the file. (This is what you are observing whenever you get emails which contain question marks or rectangles or characters like çëêèîôö until you tell your email program or web browser to use a different encoding.) The problem is that the meaning of a sequence of bytes is ambiguous unless you know what mapping between characters and bytes the sender used. Even worse, you could only use one code page at a time: for example, a document could not contain both Hebrew and Cyrillic letters; a database could not contain text from languages requiring two different code pages unless it stored the associated code page alongside each chunk of text.
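You can watch the ambiguity happen in a Python 2 session. Python names these code pages cp1252 (Western European) and cp1255 (Hebrew), and the very same byte decodes to two different characters:

>>> '\xe8'.decode('cp1252')  # Windows-1252: Latin small "e" with grave
u'\xe8'
>>> '\xe8'.decode('cp1255')  # Windows-1255: the Hebrew letter tet
u'\u05d8'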

Unicode was designed to solve this last problem by creating a single character set that could represent characters from any language. In the Unicode standard, each character (of which about 100,000 have been standardized now) is mapped to a "code point", which is just a number. For example, code point 65 is the letter "A" and code point 33865 is the Chinese character 葉 (usually, code points are represented in hex so people would typically refer to this character as U+8449).
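Code points are easy to inspect from Python; ord hands you the code point of a character as a plain integer:

>>> ord(u"A")
65
>>> ord(u"\u8449")      # the character 葉
33865
>>> hex(ord(u"\u8449")) # the same number, written the usual way
'0x8449'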

Notice that we've only been talking about characters and their corresponding integers (code points): we've said nothing about how these integers are to be represented. As a first attempt you might choose to represent the code points using fixed-length sequences of bytes: representing code points 0, 1, 2, etc., using 0x00000000, 0x00000001, 0x00000002, etc., respectively. (With three bytes you could actually represent up to 16,777,216 characters, which is plenty for now, but four is a nice round number.) This encoding (way of mapping code points to byte sequences) is called UTF-32. It's very simple, but people whose text consisted mostly of characters found in ASCII had to pay a 4x space overhead to encode all their data in UTF-32. So UTF-32 was a hard sell, and consequently it is rarely used.
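You can see the 4x overhead for yourself in Python (2.6 or later, which is when the codec appeared); 'utf-32-be' here is UTF-32 without the byte-order mark that the plain 'utf-32' codec prepends:

>>> u"AB".encode('utf-32-be')  # four bytes per character
'\x00\x00\x00A\x00\x00\x00B'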

There is another encoding called UTF-8 which maps code points to sequences of varying lengths: some code points can be written using just one byte, and some code points need up to four bytes to write. UTF-8 has the interesting property that Unicode code points 0-127, which are the same as ASCII characters, are mapped to the one-byte sequences 0x00-0x7f (0-127). What this means is that every ASCII text means exactly the same thing when interpreted as UTF-8! It is because of this backwards-compatibility that UTF-8 has become the standard for representing text on the web, in Java, and as a part of many other standards. There is another encoding called UTF-16 which uses two or four bytes to represent each character (Windows uses it a lot internally). The UTF-* encodings can represent any character in Unicode, which is a superset of pretty much every other character set out there. So text in any encoding scheme can be reversibly converted to UTF-8 or UTF-16.
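The variable lengths are easy to observe in a Python 2 session:

>>> u"A".encode('utf-8')       # an ASCII character: one byte, unchanged
'A'
>>> u"\u8449".encode('utf-8')  # 葉: three bytes
'\xe8\x91\x89'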

To understand Unicode, it's important to make the distinction between a sequence of Unicode code points (abstractly, characters) and an encoding of those characters, which is just a sequence of bytes. With that in mind we can start to understand some of the mysteries of text.

Unicode in Python

[In Python 3.x, sequences of Unicode code points are represented by str, the default string type, so you don't need the u prefix on string literals shown below. String data that has been encoded, and thus no longer carries the implication that it is a sequence of Unicode code points, is represented using bytes (as is data that is just arbitrary binary data). See the Python documentation for more information about str and bytes.]
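[A quick illustration of that distinction in a Python 3 session:

>>> type(u"ABC")            # a sequence of Unicode code points
<class 'str'>
>>> u"ABC".encode('utf-8')  # encoding yields raw bytes
b'ABC'
]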

In Python 2.x, "sequences of Unicode code points" are represented by unicode objects; you can make Unicode literals like so:

z = u"\u0041\u0042C" # Equivalent to u"ABC"

To do any I/O on a Unicode string (printing it, writing it to a file, or sending it out over the network), you have to choose an encoding for it. The process of encoding always transforms your unicode string into a str string, which is often assumed to mean "ASCII text" but really just means "sequence of bytes in some encoding".

Obviously, the encoding you choose has to be the same one that will be used to decode it on the other end (by your terminal, the program that will open the file, or the computer on the other end of the network connection). If you don't specify an encoding when a byte sequence is needed, Python will silently use your default encoding, which is usually ASCII. If your text contains only ASCII characters, you probably won't notice anything funny going on. But if it contains any international (non-ASCII) characters, Python will raise a UnicodeEncodeError, which basically means "I tried to do the encoding, but one of the characters doesn't have an equivalent in ASCII!" There are two ways around this, both of which involve explicitly specifying the encoding using the encode method:

  1. If you intend for your recipient to be able to read the non-ASCII characters, you need to choose an encoding that can actually represent them. What you probably want here is to encode to UTF-8:

    >>> u"\u0041\u0042\u8449".encode('utf-8')

    Observe that the last character was encoded using three bytes.

  2. If your recipient really is expecting ASCII text, then they just can't view international characters. You can tell Python to attempt the encoding but substitute a question mark for any character that the target character set can't represent:

    >>> u"\u0041\u0042\u8449".encode('ascii', 'replace')

As you can see, converting to ASCII (and other 1-byte encodings) is lossy, but conversions to and from UTF-8, UTF-32, etc. are reversible.
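For instance, in a Python 2 session, a UTF-8 round trip hands back exactly the string you started with, while the ASCII detour does not:

>>> u"\u8449".encode('utf-8').decode('utf-8') == u"\u8449"
True
>>> u"\u8449".encode('ascii', 'replace').decode('ascii')  # the character is gone for good
u'?'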

Unicode in HTML documents

Data served over HTTP also has to have an associated encoding to be interpreted unambiguously. Usually the Content-Type header is used to identify the encoding of the document; you can also embed an equivalent declaration in the document itself using a <meta> tag. The W3C's HTML validator will yell at you if you do not specify the encoding:

  <meta http-equiv="Content-Type" content="text/html;charset=utf-8">

But, you may have noticed a weird circularity here... how can a client read any data on the page, much less the encoding declaration, if it doesn't already know which encoding to use to interpret the contents of the page? Well, you're supposed to put the Content-Type <meta> tag as the first thing after <head>, and you can usually get that far without using any non-ASCII characters; ASCII, UTF-8, and all the ANSI code pages agree on the meaning of the ASCII characters (code points 0-127). So it works.

In XML, you are supposed to identify the charset on the first line, in the XML declaration:

<?xml version="1.0" encoding="UTF-8"?>

Unicode in text files

Even text files are not immune. In fact, when you have multiple encodings, there is no longer any such thing as "plain text" (by which we mean, text data that can be read unambiguously).

In Emacs, you set the coding system for a file via the coding variable; add this magic incantation anywhere in the first line (or the second line, if the first line is a shebang line):

-*- coding: utf-8 -*-
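Python honors the same incantation (it standardized it as PEP 263): put it in a comment on the first or second line of a source file, and the interpreter will use that encoding to decode the file, which Python 2 requires before it will accept non-ASCII string literals at all:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Without the coding line above, Python 2 rejects this file with a SyntaxError.
greeting = u"tête-à-tête"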

Further reading: Joel on Software: The Absolute Minimum Every Software Developer Absolutely, Positively, Must Know About Unicode and Character Sets, Unicode FAQ, Python: Overview of Encodings and Unicode, Emacs manual: (info "(emacs)Specifying File Variables")

Using git-svn to interact with SVN repositories

Imagine you are a Git user and you have to work with an SVN repository. Tragically, you are now incapable of working without cheap and easy branching, disconnected operation, and all the other things that make Git great.

git-svn is a gateway that lets you interact with SVN repositories while using Git for all your local commits. It has the advantage of needing no extra configuration on the SVN server end.

To create a repository:

$ git-svn clone svn+ssh://me@remotehost/path/to/svn/repo myproject
$ cd myproject

This may take a while, but you now have a bona fide Git repository with the full project history (which is not something that an SVN checkout contains).

Note: this is for a repository without the standard trunk/branches/tags layout; if your repository uses that layout, pass the -s (--stdlayout) flag to git-svn clone.

Your typical workflow will now look like this.

Get remote changes:

$ git-svn rebase

Do all your Git business locally: edit, diff, commit, branch, and merge.

To push your changes back to the SVN repo:

$ git-svn dcommit

Individual commits in Git are pushed in order as separate commits to the SVN repo.

Further reading: git-svn documentation

Writing a Raytracer From Scratch

I spent part of last week writing a raytracer from scratch in C++.

Here's my account of it, including screenshots and code.

Perhaps the only person who makes more extravagant claims about Git than I do

"Git is the next Unix":

Git was originally not a version control system; it was designed to be the infrastructure so that someone else could build one on top. And they did; nowadays there are more than 100 git-* commands installed along with git. It's scary and confusing and weird, but what that means is git is a platform. It's a new set of nouns and verbs that we never had before. Having new nouns and verbs means we can invent entirely new things that we previously couldn't do.

With git, we've invented a new world where revision history, checksums, and branches don't make your filesystem slower: they make it faster. They don't make your data bigger: they make it smaller. They don't risk your data integrity; they guarantee integrity. They don't centralize your data in a big database; they distribute it peer to peer.

Much like Unix itself, git's actual software doesn't matter; it's the file format, the concepts, that change everything.

The author describes a whole bunch of projects he worked on or heard about where Git would basically have satisfied all the needs of the project while being faster, smaller, and more secure.

Essentially, Git could serve as the base for many more version-control-style tools than just the ones that are called Git today. Myself, I'm looking forward to version control that's suitable for entire disks (perhaps with automatic history pruning) and better ways to deal with large projects that contain smaller modules. But there are undoubtedly better ways of doing things than our untrained minds are capable of even imagining right now.

Interview with Eben Moglen

Eben Moglen, in an interview:

The primary desire that businesses have is for control over their own destinies, for avoidance of autonomy bottlenecks which put the fate of their business into the hands of someone else. The difficulty that they experience — that they call vendor lock-in, or noninteroperability — is a difficulty which is really a businessman's equivalent of Stallman's frustration at unfreedom. They are essentially the same recognition: In a world of complex, interdependent technology, if I don't control my technology, it will control me. Stallman's understanding of that proposition and Goldman Sachs' understanding [for example] needn't be as far apart as one might think. The desire to maintain autonomy — the desire to avoid control of destiny by outside parties — is as fierce in both cases as it can get.

Indeed, I still find it difficult to argue that unfree software is immoral (except on rather theoretical grounds), but it is certainly bad business. Using proprietary software is like handing the reins of your company's infrastructure to an outsider.

What is interesting about free software is that it is a commons which does not suffer from the tragedy of the commons. It is valuable because of its freedom, not despite its freedom.