Asynchronous transput and gnutls

CC0
To the extent possible under law,
Antti-Juhani Kaijanaho has waived all copyright and related or neighboring rights to
Asynchronous transput and gnutls. This work is published from Finland.

GnuTLS is a wonderful thing. It even has a thick manual – but nevertheless its documentation is severely lacking from the programmer’s point of view (and there don’t even seem to be independent howtos floating around on the net). My hope is that this post will, in some small part, remedy that problem.

I spent the weekend adding STARTTLS support to the NNTP (reading) server component of Alue. Since Alue is written in C++ and uses the Boost ASIO library as its primary concurrency framework, it seemed prudent to use ASIO’s SSL sublibrary. However, the result wasn’t stable and debugging it looked unappetizing. So, I wrote my own TLS layer on top of ASIO, based on gnutls.

Now, the gnutls API looks like it works only with synchronous transput: all TLS network operations are of the form “do this and return when done”; for example gnutls_handshake returns once the handshake is finished. So how does one adapt this to asynchronous transput? Fortunately, there are (badly documented) hooks for this purpose.

An application can tell gnutls to call application-supplied functions instead of the read(2) and write(2) system calls. Thus, when setting up a TLS session but before the handshake, I do the following:

                gnutls_transport_set_ptr(gs, this);
                gnutls_transport_set_push_function(gs, push_static);
                gnutls_transport_set_pull_function(gs, pull_static);
                gnutls_transport_set_lowat(gs, 0);

Here, gs is my private copy of the gnutls session structure, and push_static and pull_static are static member functions in my session wrapper class. The first line tells gnutls to give the current this pointer (a pointer to the current session wrapper) as the first argument to them. The last line tells gnutls not to try treating the this pointer as a Berkeley socket.

The pull_static static member function just passes control on to a non-static member, for convenience:

ssize_t session::pull_static(void * th, void *b, size_t n)
{
        return static_cast<session *>(th)->pull(b, n);
}

The basic idea of the pull function is to try to return immediately with data from a buffer, and if the buffer is empty, to fail with an error code signalling the absence of data with the possibility that data may become available later (the POSIX EAGAIN code):

class session
{
        [...]
        std::vector<unsigned char> ins;
        size_t ins_low, ins_high;
        [...]
};
ssize_t session::pull(void *b, size_t n_wanted)
{
        unsigned char *cs = static_cast<unsigned char *>(b);
        if (ins_high - ins_low == 0)
        {
                errno = EAGAIN;
                return -1;
        }
        size_t n = ins_high - ins_low < n_wanted 
                ?  ins_high - ins_low 
                :  n_wanted;
        for (size_t i = 0; i < n; i++)
        {
                cs[i] = ins[ins_low+i];
        }
        ins_low += n;
        return n;
}

Here, ins_low is an index to the ins vector specifying the first byte which has not already been passed on to gnutls, while ins_high is an index to the ins vector specifying the first byte that does not contain data read from the network. The assertions 0 <= ins_low, ins_low <= ins_high and ins_high <= ins.size() are obvious invariants in this buffering scheme.

The push case is simpler: all one needs to do is buffer the data that gnutls wants to send, for later transmission:

class session
{
        [...]
        std::vector<unsigned char> outs;
        size_t outs_low;
        [...]
};
ssize_t session::push(const void *b, size_t n)
{
        const unsigned char *cs = static_cast<const unsigned char *>(b);
        for (size_t i = 0; i < n; i++)
        {
                outs.push_back(cs[i]);
        }
        return n;
}

The low water mark outs_low (indicating the first byte that has not yet been sent to the network) is not needed in the push function. It would be possible for the push callback to signal EAGAIN, but it is not necessary in this scheme (assuming that one does not need to establish hard buffer limits).

Once gnutls receives an EAGAIN condition from the pull callback, it suspends the current operation and returns to its caller with the gnutls condition GNUTLS_E_AGAIN. The caller must then arrange for more data to become available to the pull callback (in this case by scheduling an asynchronous write of the data in the outs buffer and an asynchronous read into the ins buffer) and call the operation again, allowing it to resume.

The code so far does not actually perform any network transput. For this, I have written two auxiliary methods:

class session
{
        [...]
        bool read_active, write_active;
        [...]
};
void session::post_write()
{
        if (write_active) return;
        if (outs_low > 0 && outs_low == outs.size())
        {
                outs.clear();
                outs_low = 0;
        }
        else if (outs_low > 4096)
        {
                outs.erase(outs.begin(), outs.begin() + outs_low);
                outs_low = 0;
        }
        if (outs_low < outs.size())
        {
                stream.async_write_some
                        (boost::asio::buffer(outs.data()+outs_low,
                                             outs.size()-outs_low),
                         boost::bind(&session::sent_some,
                                     this, _1, _2));
                write_active = true;
        }
}

void session::post_read()
{
        if (read_active) return;
        if (ins_low > 0 && ins_low == ins.size())
        {
                ins.clear();
                ins_low = 0;
                ins_high = 0;
        }
        else if (ins_low > 4096)
        {
                ins.erase(ins.begin(), ins.begin() + ins_low);
                ins_high -= ins_low;
                ins_low = 0;
        }
        
        if (ins_high + 4096 >= ins.size()) ins.resize(ins_high + 4096);

        stream.async_read_some(boost::asio::buffer(ins.data()+ins_high,
                                                   ins.size()-ins_high),
                               boost::bind(&session::received_some,
                                           this, _1, _2));
        read_active = true;
}

Both helpers prune the buffers when necessary. (I should really remove those magic 4096s and make them a symbolic constant.)

The data members read_active and write_active ensure that at most one asynchronous read and at most one asynchronous write are pending at any given time. My first version did not have this safeguard (it tried instead to rely on the ASIO stream reset method to cancel any outstanding asynchronous transput when needed), and the code sent some TLS records twice – which is not good: sending the ServerHello twice is guaranteed to confuse the client.

Once ASIO completes an asynchronous transput request, it calls the corresponding handler:

void session::received_some(boost::system::error_code ec, size_t n)
{
        read_active = false;
        if (ec) { pending_error = ec; return; }
        ins_high += n;
        post_pending_actions();
}
void session::sent_some(boost::system::error_code ec, size_t n)
{
        write_active = false;
        if (ec) { pending_error = ec; return; }
        outs_low += n;
        post_pending_actions();
}

Their job is to update the bookkeeping and to trigger the resumption of suspended gnutls operations (which is done by post_pending_actions).

Now we have all the main pieces of the puzzle. The remaining pieces are obvious but rather messy, and I’d rather not repeat them here (not even in a cleaned-up form). But their essential idea goes as follows:

When called by the application code or when resumed by post_pending_actions, an asynchronous wrapper of a gnutls operation first examines the session state for a saved error code. If one is found, it is propagated to the application using the usual ASIO techniques, and the operation is cancelled. Otherwise, the wrapper calls the actual gnutls operation. When it returns, the wrapper examines the return value. If successful completion is indicated, the handler given by the application is posted in the ASIO io_service for later execution. If GNUTLS_E_AGAIN is indicated, post_read and post_write are called to schedule actual network transput, and the wrapper is suspended (by pushing it into a queue of pending actions). If any other kind of failure is indicated, it is propagated to the application using the usual ASIO techniques.
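To make that control flow concrete, here is a toy model of such a wrapper. The gnutls operation is replaced by a stub that succeeds once enough input is buffered, and the names (toy_session, toy_handshake, async_handshake) are my own for this sketch, not gnutls’s or ASIO’s:

```cpp
#include <cstddef>
#include <functional>
#include <queue>

enum result { OK, AGAIN };

struct toy_session
{
        std::size_t ins_available;                  // bytes buffered from the network
        std::queue<std::function<void()> > pending; // suspended wrapper invocations

        toy_session() : ins_available(0) {}

        // Stub for a gnutls operation: the "handshake" completes
        // once ten bytes of input have arrived.
        result toy_handshake()
        {
                return ins_available >= 10 ? OK : AGAIN;
        }

        void async_handshake(std::function<void(result)> handler)
        {
                result r = toy_handshake();
                if (r == OK)
                        handler(OK);   // real code: post the handler to the io_service
                else
                        // The GNUTLS_E_AGAIN case: in the real code this is where
                        // post_read and post_write are called; the wrapper then
                        // suspends itself by queueing a thunk that retries it.
                        pending.push([this, handler] { async_handshake(handler); });
        }
};
```

When data later arrives, the queued thunk is popped and rerun, and the retried operation now completes.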

The post_pending_actions merely empties the queue of pending actions and schedules the actions that it found in the queue for resumption.
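As a sketch (simplified again: in my real code the resumed actions are posted into the ASIO io_service rather than invoked directly, and there is mutex handling around the queue; the names here are assumptions):

```cpp
#include <functional>
#include <queue>

struct action_queue
{
        std::queue<std::function<void()> > pending_actions;

        void post_pending_actions()
        {
                // Swap the queue out first, so that an action which
                // suspends itself again is not resumed in the same round.
                std::queue<std::function<void()> > q;
                std::swap(q, pending_actions);
                while (!q.empty())
                {
                        q.front()();   // real code: io_service.post(q.front())
                        q.pop();
                }
        }
};
```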

The code snippets above are not my actual working code. I have mainly removed from them some irrelevant details (mostly certain template parameters, debug logging and mutex handling). I don’t expect the snippets to compile. I expect I will be able to post my actual git repository to the web in a couple of days.

Please note that my (actual) code has received only rudimentary testing. I believe it is correct, but I won’t be surprised to find it contains bugs in the edge cases. I hope this is, still, of some use to somebody 🙂

Some things to avoid when triaging other people’s bugs

DO NOT send your query for more information only to nnn@bugs.debian.org. That address (almost) never reaches the submitter. (The correct address is nnn-submitter@bugs.debian.org – or you can CC the submitter directly.)

DO NOT close a bug just because your query “can you still reproduce it” has not been promptly answered.

And, actually:

DO NOT close a bug if you do not have the maintainer’s permission to do so. You may, if you wish, state in the bug logs that you think the bug should be closed.

This ends today’s public service announcement. Thank you for your attention.

grep-dctrl is ten years old

My message Intent to package a Debian control file grepper to WNPP and debian-devel is today a decade old. Apparently, the message predates the invention of the acronym ITP for Intent to Package (the first instance I can find is from May 1999).

The changelog reveals that the first upload was on March 1st, 1999. There is unfortunately no record of when the package hit unstable the first time, since dinstall did not send Installed (or even the later Accepted) mails to an archived mailing list at that time. I suppose I might have it in some old private email archive, but more likely it’s just gone. Similarly, the first fixed bug (#35527) predates BTS archiving and is now lost.

The grep-dctrl program came out of repeated awkward grepping of the dpkg available and status files. Eventually I decided there must be a better way, and failed to find any canned solutions (I was later pointed to sgrep, which didn’t look useful enough, and even later to dpkg-awk, but I had already committed to my own solution by then). I wrote a simple C program that processed these files as a sequence of records and did simple substring searches in each record. I rapidly added support for field selection, regular expressions and output field selection. By version 1.3a of March 2000 (which was released with Debian 2.2 ‘potato’), the program was as good as it was going to get – with one exception.

“Make disjunctive searches possible,” said my TODO file in those days. Conjunctive (AND) searches were possible even then by using more than one grep-dctrl command in a pipeline; disjunctive (OR) searches were not. The problem was not so much that it would be hard to program (although the program’s internal structure wasn’t very good, to be honest, making extensive changes difficult); it was more a problem of coming up with a good command line syntax.

Another thing that bothered me with the old grep-dctrl was how to implement Ben Armstrong’s feature request. Again, the programming part wasn’t the problem; the problem was coming up with a good, clean semantics for the feature.

It was finally the appearance of ara in 2003 that got me moving again. Ara’s author proudly compared eir program to grep-dctrl, claiming that my program did not do disjunctive searches while ara does. Competition being good for the soul, I took it as a challenge. In April 2003 I announced a complete rewrite of grep-dctrl, which was completed in January 2004 (the 2.0 release).

The rewrite changed the way the command line was handled – even though the usual Unix switch style is still used, the command line is regarded as a language with a parser (first an operator-precedence parser, then a recursive-descent one). The command line is transformed into an interpreted stack-based language which drives the actual grepping.

The rewrite also generalised the internal data structures into an internal library which could be used to write other tools. The first such tool was sort-dctrl (introduced in 2.7, June 2005), which was soon followed by tbl-dctrl (2.8, July 2005). The later appearance of join-dctrl (2.11, August 2007) finally allowed me to close Ben Armstrong’s longstanding feature request mentioned above.

The unpronounceable part of the names, “dctrl”, is an abbreviation of “Debian control”, which is what I decided to call the file format used by dpkg. Some people call it an RFC-822 format, but that really is a misnomer, since the differences between dctrl and RFC-822 outweigh the mainly superficial similarities (and the historical connection). I did consider calling my program dpkg-grep, but I didn’t feel like I had the right to invade the dpkg namespace. The later rename to dctrl-tools reflects the fact that there are now several tools, grep-dctrl being just the oldest.

I have several plans for the dctrl-tools suite, but my time and energy are mostly claimed by other responsibilities. The suite is currently team-maintained, but unfortunately the team is not very active. I would love it if I weren’t the most active one with my busy schedule! Feel free to pop in on the dctrl-tools-devel mailing list, and to look at the Git repository and the todo list. If you decide to participate, please follow the rules.

RFH: dctrl-tools — Command-line tools to process Debian package information

I request assistance with maintaining the dctrl-tools package.

There are several tasks that could use more manpower (in no particular order):

  • Writing test cases
    One could mine the BTS for past bug reports and create regression tests for them.
    One could use standard black-box and white-box testing techniques to generate general tests.
  • Writing documentation
    The whole suite of tools could use a unified tutorial manual on how to best use it. The current documentation is reference material in the man pages.
  • Internationalise the man pages
    Use po4a?
  • Swatting the BTS wishlist entries
    I’ve kept the BTS clean of actual bugs pretty well, but there are a number of wishlist reports still outstanding.
  • Take over maintaining the debian/ directory
    If you commit to maintaining it (and I trust your judgment), you’ll get last say in that part of the package (including deciding what helper to use).
  • Whatever you wish 🙂
    Discuss on the dctrl-tools-devel mailing list first though.

Eventually I’d like to pass the package on to competent successors, but I have too much emotional attachment to the package to do that without a transitional period where I still retain a veto on what goes in the package. I also have some ideas for future tools that I’d like to be able to concentrate on, and having co-maintainers might allow that.

The package is now under Git in collab-maint. See the new README.Debian for information and a push-access code of conduct.

It’s time to fix the ABI

SELinux is entirely correct about disallowing dynamic code generation, as it is a major security risk.

Disregarding Just-In-Time compilation, the main legitimate need for dynamic code generation is to support (downward) closures that are ABI-compatible with normal C functions. GCC’s local functions extension of C is one example, and many non-C languages need them badly in their foreign-function interfaces (Haskell is one, Ada I’m told is another).

A closure is a function pointer that refers to a function that is local to another function. That function has access to the local variables of the parent function, and this access is provided by having the caller give the callee a static link (a pointer to the parent function’s stack frame) as a hidden parameter. If the call is direct (that is, not through a function pointer), the caller knows the appropriate static link and can just pass it. The trouble comes with function pointers, as we need some way of including the static link in the function pointer.

The simplest way is to have function pointers be two words long; one of the words contains the classical function pointer (that is, the entry point address), and the other contains the static link. Unfortunately, the prevalent C ABIs, including the SYSV ABI used by GNU/Linux, mandate that function pointers are one word long. The only way I know to work around this is to dynamically generate, when the function pointer is created, a small snippet of code that loads the correct static link to the appropriate place and jumps to the actual code of the function, and use the address of this snippet (usually called a trampoline) as the function pointer. The snippet is generated on the stack or in the heap, and thus requires an executable stack or executable dynamic memory.
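For illustration, this is roughly what a two-word function pointer would look like. The struct, the names and the explicit call helper are my invention for this sketch – a real closure-aware ABI would pass the static link as a hidden argument in a designated register:

```cpp
// Hypothetical "fat" function pointer: entry point plus static link.
struct fat_fn
{
        long (*entry)(void *static_link, long arg);
        void *static_link;   // pointer into the parent function's frame
};

// What an indirect call through a fat pointer would compile to:
// the static link is passed as a hidden extra argument.
static long call(fat_fn f, long arg)
{
        return f.entry(f.static_link, arg);
}

// A compiler-lowered "local function": the parent's variable is
// reached through the static link, so no trampoline is needed.
static long add_base(void *static_link, long arg)
{
        long *base = static_cast<long *>(static_link);
        return *base + arg;
}

static fat_fn make_adder(long *base)
{
        fat_fn f = { add_base, base };
        return f;
}
```

With this representation no code is generated at run time, so stacks and heaps can stay non-executable; the cost is that every function pointer grows to two words, which is why it needs an ABI change rather than a local fix.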

It’s time to fix the ABI to allow for proper two-word function pointers.

New C standard in 2012?

The C standards committee is apparently preparing to revise the C standard. The proposed C1x charter sets the goal of the committee draft’s completion in 2009, with publication of the new standard in 2012. The proposed charter adds security as a significant new goal for the standard:

Trust the programmer, as a goal, is outdated in respect to the security and safety programming communities.

The proposed charter also sets the requirement that all new features must have a history in non-experimental compilers and be in common use.

The Dying Philosophers Problem

Reading a master’s thesis draft that mentions the dining philosophers problem – a parable about the difficulties of process synchronization, very well known in computer science – it occurred to me that it must not be a very good idea to eat just spaghetti (or just rice). I asked a nutritionist about it, and here is her answer. Even if they manage to avoid deadlock or livelock, dying of malnutrition is not going to be their first problem. Go read the full story!

[Typos and thinkos corrected after initial publication]