Alue status report

In May, I posted about the discussion forum software I am writing, Alue. Since then –

What works:

  • The NNTP interface is essentially complete.
  • There is a rudimentary HTTPS interface for reading and posting.
  • Users are able to self-register, manage their own accounts and reset lost passwords using an email challenge system.

What is broken:

  • NNTP control messages.
  • MIME message display in HTTPS (including character set conversions).
  • Web design. Although – all HTML is template-generated, so it’s more a problem with the test installation than with the actual software.

What is missing:

  • HTTPS-based administration
  • Moderation
  • Spam control
  • Email distribution of messages
  • Posting by email
  • Packaging (including proper installation and upgrade procedures)

And the code could probably use a proper security review.

If you are interested, go check out the test installation. The code and the test installation templates are available through Git. If you are really brave (and are a skilled system administrator), you might try creating your own installation – if you do, let me know.

This is Alue

I have made a couple of references in my blog to the new software suite I am writing, which I am calling Alue. It is time to explain what it is all about.

Alue will be a discussion forum system providing a web-based forum interface, an NNTP (Netnews) interface and an email interface, all with equal status. What will be unusual compared to most of the competition is that all these interfaces will be coequal views of the same abstract discussion, instead of the system being primarily one of these things and providing the others as bolted-on gateways. (I am aware of at least one other such system, but it is proprietary and thus not useful for my needs. Besides, I get to learn all kinds of fun things while doing this.)

Over several years I have many times come across the need for such a system, and I have never found a good, free implementation. I am now building this software for the use of one new discussion site that is being formed (and which is graciously willing to serve as my guinea pig), but I hope it will eventually be of use to many other places as well.

I now have the first increment ready for beta testing. Note that this is not even close to being what I described above; it is merely a start. It currently provides a fully functional NNTP interface to a rudimentary (unreliable and unscalable) discussion database.

The NNTP server implements most of RFC 3977 (the base NNTP spec – IHAVE, MODE-READER, NEWNEWS and HDR are missing), all of RFC 4642 (STARTTLS) and a part of RFC 4643 (AUTHINFO USER – the SASL part is missing). The article database is intended to support – with certain deliberate omissions – the upcoming Netnews standards (USEFOR and USEPRO), but currently omits most of the mandatory checks.

There is a test installation at verbosify.org (port 119), which allows anonymous reading but requires identification and authentication for posting. I am currently handing out accounts only by invitation.

Code can be browsed in a Gitweb; git clone requests should be directed to git://git.verbosify.org/git/alue.git/.

There are some tweaks to be done to the NNTP frontend, but after that I expect to be rewriting the message filing system to be at least reliable if not scalable. After that, it is time for a web interface.

Asynchronous transput and gnutls

CC0
To the extent possible under law,
Antti-Juhani Kaijanaho has waived all copyright and related or neighboring rights to
Asynchronous transput and gnutls. This work is published from Finland.

GnuTLS is a wonderful thing. It even has a thick manual – but nevertheless its documentation is severely lacking from the programmer’s point of view (and there do not even seem to be independent howtos floating around the net). My hope is to remedy that problem, in some small part, with this post.

I spent the weekend adding STARTTLS support to the NNTP (reading) server component of Alue. Since Alue is written in C++ and uses the Boost ASIO library as its primary concurrency framework, it seemed prudent to use ASIO’s SSL sublibrary. However, the result wasn’t stable and debugging it looked unappetizing. So, I wrote my own TLS layer on top of ASIO, based on gnutls.

Now, the gnutls API looks like it works only with synchronous transput: all TLS network operations are of the form “do this and return when done”; for example gnutls_handshake returns once the handshake is finished. So how does one adapt this to asynchronous transput? Fortunately, there are (badly documented) hooks for this purpose.

An application can tell gnutls to call application-supplied functions instead of the read(2) and write(2) system calls. Thus, when setting up a TLS session but before the handshake, I do the following:

                gnutls_transport_set_ptr(gs, this);
                gnutls_transport_set_push_function(gs, push_static);
                gnutls_transport_set_pull_function(gs, pull_static);
                gnutls_transport_set_lowat(gs, 0);

Here, gs is my private copy of the gnutls session structure, and push_static and pull_static are static member functions in my session wrapper class. The first line tells gnutls to give the current this pointer (a pointer to the current session wrapper) as the first argument to them. The last line tells gnutls not to try treating the this pointer as a Berkeley socket.

The pull_static static member function just passes control on to a non-static member, for convenience:

ssize_t session::pull_static(void * th, void *b, size_t n)
{
        return static_cast<session *>(th)->pull(b, n);
}

The basic idea of the pull function is to try to return immediately with data from a buffer, and if the buffer is empty, to fail with an error code signalling the absence of data with the possibility that data may become available later (the POSIX EAGAIN code):

class session
{
        [...]
        std::vector<unsigned char> ins;
        size_t ins_low, ins_high;
        [...]
};
ssize_t session::pull(void *b, size_t n_wanted)
{
        unsigned char *cs = static_cast<unsigned char *>(b);
        if (ins_high - ins_low == 0)
        {
                errno = EAGAIN;
                return -1;
        }
        size_t n = ins_high - ins_low < n_wanted 
                ?  ins_high - ins_low 
                :  n_wanted;
        for (size_t i = 0; i < n; i++)
        {
                cs[i] = ins[ins_low+i];
        }
        ins_low += n;
        return n;
}

Here, ins_low is an index to the ins vector specifying the first byte which has not already been passed on to gnutls, while ins_high is an index to the ins vector specifying the first byte that does not contain data read from the network. The assertions 0 <= ins_low, ins_low <= ins_high and ins_high <= ins.size() are obvious invariants in this buffering scheme.

The push case is simpler: all one needs to do is buffer the data that gnutls wants to send, for later transmission:

class session
{
        [...]
        std::vector<unsigned char> outs;
        size_t outs_low;
        [...]
};
ssize_t session::push(const void *b, size_t n)
{
        const unsigned char *cs = static_cast<const unsigned char *>(b);
        for (size_t i = 0; i < n; i++)
        {
                outs.push_back(cs[i]);
        }
        return n;
}

The low water mark outs_low (indicating the first byte that has not yet been sent to the network) is not needed in the push function. It would be possible for the push callback to signal EAGAIN, but it is not necessary in this scheme (assuming that one does not need to establish hard buffer limits).

Once gnutls receives an EAGAIN condition from the pull callback, it suspends the current operation and returns to its caller with the gnutls condition GNUTLS_E_AGAIN. The caller must arrange for more data to become available to the pull callback (in this case by scheduling an asynchronous write of the data in the outs buffer and an asynchronous read into the ins buffer) and then call the operation again, allowing it to resume.

The code so far does not actually perform any network transput. For this, I have written two auxiliary methods:

class session
{
        [...]
        bool read_active, write_active;
        [...]
};
void session::post_write()
{
        if (write_active) return;
        if (outs_low > 0 && outs_low == outs.size())
        {
                outs.clear();
                outs_low = 0;
        }
        else if (outs_low > 4096)
        {
                outs.erase(outs.begin(), outs.begin() + outs_low);
                outs_low = 0;
        }
        if (outs_low < outs.size())
        {
                stream.async_write_some
                        (boost::asio::buffer(outs.data()+outs_low,
                                             outs.size()-outs_low),
                         boost::bind(&session::sent_some,
                                     this, _1, _2));
                write_active = true;
        }
}

void session::post_read()
{
        if (read_active) return;
        if (ins_low > 0 && ins_low == ins.size())
        {
                ins.clear();
                ins_low = 0;
                ins_high = 0;
        }
        else if (ins_low > 4096)
        {
                ins.erase(ins.begin(), ins.begin() + ins_low);
                ins_high -= ins_low;
                ins_low = 0;
        }
        
        if (ins_high + 4096 >= ins.size()) ins.resize(ins_high + 4096);

        stream.async_read_some(boost::asio::buffer(ins.data()+ins_high,
                                                   ins.size()-ins_high),
                               boost::bind(&session::received_some,
                                           this, _1, _2));
        read_active = true;
}

Both helpers prune the buffers when necessary. (I should really remove those magic 4096s and make them a symbolic constant.)

The data members read_active and write_active ensure that at most one asynchronous read and at most one asynchronous write is pending at any given time. My first version did not have this safeguard (instead trying to rely on the ASIO stream reset method to cancel any outstanding asynchronous transput at need), and the code sent some TLS records twice – which is not good: sending the ServerHello twice is guaranteed to confuse the client.

Once ASIO completes an asynchronous transput request, it calls the corresponding handler:

void session::received_some(boost::system::error_code ec, size_t n)
{
        read_active = false;
        if (ec) { pending_error = ec; return; }
        ins_high += n;
        post_pending_actions();
}
void session::sent_some(boost::system::error_code ec, size_t n)
{
        write_active = false;
        if (ec) { pending_error = ec; return; }
        outs_low += n;
        post_pending_actions();
}

Their job is to update the bookkeeping and to trigger the resumption of suspended gnutls operations (which is done by post_pending_actions).

Now we have all the main pieces of the puzzle. The remaining pieces are obvious but rather messy, and I’d rather not repeat them here (not even in a cleaned-up form). But their essential idea goes as follows:

When called by the application code or when resumed by post_pending_actions, an asynchronous wrapper of a gnutls operation first examines the session state for a saved error code. If one is found, it is propagated to the application using the usual ASIO techniques, and the operation is cancelled. Otherwise, the wrapper calls the actual gnutls operation. When it returns, the wrapper examines the return value. If successful completion is indicated, the handler given by the application is posted in the ASIO io_service for later execution. If GNUTLS_E_AGAIN is indicated, post_read and post_write are called to schedule actual network transput, and the wrapper is suspended (by pushing it into a queue of pending actions). If any other kind of failure is indicated, it is propagated to the application using the usual ASIO techniques.

The post_pending_actions merely empties the queue of pending actions and schedules the actions that it found in the queue for resumption.

The code snippets above are not my actual working code. I have mainly removed from them some irrelevant details (mostly certain template parameters, debug logging and mutex handling). I don’t expect the snippets to compile. I expect I will be able to post my actual git repository to the web in a couple of days.

Please note that my (actual) code has received only rudimentary testing. I believe it is correct, but I won’t be surprised to find it contains bugs in the edge cases. I hope this is, still, of some use to somebody :)

Star Trek

It is curious to see that the eleventh movie in a series is the first to bear the series name with no adornment. It is apt, however: Star Trek is a clear attempt at rebooting the universe and basically forgetting most of the decades-heavy baggage. It seems to me that the reboot was fairly well done, too.

The movie opens with the birth of James Tiberius Kirk, and follows his development into the Captain of the Enterprise. Along the way, we also see the growth of Spock from adolescence into Kirk’s trusted sidekick and also into … well. Despite the fact that the action plot macguffins are time travel and planet-killer weaponry, it is mainly a story of personal vengeance, personal tragedy, and personal growth. Curiously enough, although Kirk gets a lot of screen time, it is really the personal story of Spock.

Besides Kirk and Spock, we also get to meet reimagined versions of Uhura (I like!), McCoy, Sulu, Chekov and Scott. And Christopher Pike, the first Captain of the Enterprise. The appearance of Leonard Nimoy as the pre-reboot Spock merits a special mention and a special thanks.

I overheard someone say in the theatre, after the movie ended, that the movie was a ripoff and had nothing to do with anything that had gone before. I respectfully disagree. The old Star Trek continuum had been weighed down by all its history into a 600-pound elderly man who is unable to leave the couch on his own. This movie provided a clean reboot, ripping out most of the baggage, retaining the essence of classic Star Trek and giving a healthy new platform for good new stories. One just hopes Paramount is wise enough not to foul it up again.

It was worth it, I thought.

dpkg tip

If your dpkg runs seem to take a long time in the “reading database” step, try this:

Step One: Clear the available file: dpkg --clear-avail

Step Two: Forget old unavailable packages: dpkg --forget-old-unavail

Step Three: If you use grep-available or other tools that rely on a useful available file, update the available file using sync-available (in the dctrl-tools package).

The few times I’ve tried it (all situations where the “reading database” step seemed to take ages), it has always sped the process up dramatically. There probably are situations where it won’t make much difference, but I haven’t run into them.

Initramfs problems with the new kernel-package, and a solution

I’ve been using Manoj’s new kernel-package for some weeks now, and used it to compile two kernels (a reconfigured 2.6.29.1 and the new 2.6.29.2). Both times I’ve had trouble with initrd.

As the documentation says, kernel-package kernel packages no longer do their own initramfs generation. One must copy the example scripts at /usr/share/doc/kernel-package/examples/etc.kernel/postinst.d/initramfs and /usr/share/doc/kernel-package/examples/etc.kernel/postrm.d/initramfs to the appropriate subdirectories of /etc/kernel/. However, this is not enough.

My /etc/kernel-img.conf file had the usual posthook and prehook lines calling update-grub. Unfortunately, those hooks are called before the postinst.d hooks, and so update-grub never saw my initramfs images.

Fix? I removed those lines from /etc/kernel-img.conf and created a very simple postinst.d and postrm.d script:

#!/bin/sh

update-grub

I call the script zzz-grub-local, to ensure that it runs last.

Some things to avoid when triaging other people’s bugs

DO NOT send your query for more information only to nnn@bugs.debian.org. That address (almost) never reaches the submitter. (The correct address is nnn-submitter@bugs.debian.org – or you can CC the submitter directly.)

DO NOT close a bug just because your query “can you still reproduce it” has not been promptly answered.

And, actually:

DO NOT close a bug if you do not have the maintainer’s permission to do so. You may, if you wish, state in the bug logs that you think the bug should be closed.

This ends today’s public service announcement. Thank you for your attention.

grep-dctrl is ten years old

My message Intent to package a Debian control file grepper to WNPP and debian-devel is today a decade old. Apparently, the message predates the invention of the acronym ITP for Intent to Package (the first instance I can find is from May 1999).

The changelog reveals that the first upload was on March 1st, 1999. There is unfortunately no record of when the package hit unstable the first time, since dinstall did not send Installed (or even the later Accepted) mails to an archived mailing list at that time. I suppose I might have it in some old private email archive, but more likely it’s just gone. Similarly, the first fixed bug (#35527) predates BTS archiving and is now lost.

The grep-dctrl program came out of repeated awkward grepping of the dpkg available and status files. Eventually I decided there must be a better way, and failed to find any canned solutions (I was later pointed to sgrep, which didn’t look useful enough, and even later to dpkg-awk, but I had already committed to my own solution by then). I wrote a simple C program that processed these files as a sequence of records and did simple substring searches in each record. I rapidly added support for field selection, regular expressions and output field selection. By version 1.3a of March 2000 (which was released with Debian 2.2 ‘potato’), the program was as good as it was going to get – with one exception.

“Make disjunctive searches possible,” said my TODO file in those days. Conjunctive searches (that is, AND searches) were possible even then by using more than one grep-dctrl command in a pipeline. Disjunctive (OR) searches were not. The problem was not so much that it would be hard to program (although the program’s internal structure wasn’t very good, to be honest, making extensive changes difficult); it was more a problem of coming up with a good command-line syntax.

Another thing that bothered me with the old grep-dctrl was how to implement Ben Armstrong’s feature request. Again, the programming part wasn’t the problem, the problem was coming up with a good, clean semantics for the feature.

It was finally the appearance of ara in 2003 that got me moving again. Ara’s author proudly compared eir program to grep-dctrl, claiming that my program did not do disjunctive searches while ara does. Competition being good for the soul, I took it as a challenge. In April 2003 I announced a complete rewrite of grep-dctrl, which was completed in January 2004 (the 2.0 release).

The rewrite changed the way the command line was handled – even though the usual Unix switch style is still used, the command line is regarded as a language with a parser (first an operator-precedence parser, then a recursive-descent one). The command line is transformed into an interpreted stack-based language which drives the actual grepping.

The rewrite also generalised the internal data structures into an internal library which could be used to write other tools. The first such tool was sort-dctrl (introduced in 2.7, June 2005), which was soon followed by tbl-dctrl (2.8, July 2005). The later appearance of join-dctrl (2.11, August 2007) finally allowed me to close Ben Armstrong’s longstanding feature request mentioned above.

The unpronounceable part of the names, “dctrl”, is an abbreviation for “Debian control”, which is what I decided to call the file format used by dpkg. Some people call it an RFC 822 format, but that really is a misnomer, since the differences between dctrl and RFC 822 outweigh the mainly superficial similarities (and the historical connection). I did consider calling my program dpkg-grep, but I didn’t feel like I had the right to invade the dpkg namespace. The later rename to dctrl-tools reflects the fact that there are now several tools, grep-dctrl being just the oldest.

I have several plans for the dctrl-tools suite, but my time and energy are mostly claimed by other responsibilities. The suite is currently team-maintained, but unfortunately the team is not very active. I would love it if I weren’t the most active one with my busy schedule! Feel free to pop in on the dctrl-tools-devel mailing list, and to look at the Git repository and the todo list. If you decide to participate, please follow the rules.