
Real-time Linux Kernel drivers – Part 1, the backstory

At work I recently had to create a set of real-time Linux drivers for our custom hardware board. I’ve found it to be an interesting experience that I’d like to share – especially since it took some help from the PREEMPT_RT community to get the real-time stuff working correctly.

I’ll divide the story into a few parts, starting with a quick description of the setup.

The trusty old platform

Our current generation of machines runs on an AT91RM9200 (ARM) platform, running Linux with the PREEMPT_RT patch. It interfaces to our custom hardware (mainly digital and analog I/O) through a parallel, in-house-made, legacy communication bus driven over some of the microcontroller’s GPIO pins. The I/O is scanned at a rate of 100 Hz by a kernel module that uses one of the ARM hardware timers inside the microcontroller and runs with real-time scheduling, to guarantee that we don’t stray too far from the desired 10 ms period.
(Also note that my company makes machines for handling, weighing and packing potatoes, onions and similar products, so we’re not exactly talking life-critical systems. A couple of milliseconds more or less than expected is thus not going to do any harm.)

This platform has worked well for us, and still does. However, the AT91RM9200 is not exactly the fastest system around, and with only a moderate amount of memory and not-at-all-fantastic network throughput, we were quick to jump on board when we got the chance to upgrade to a more modern platform.

The shiny new platform

We now have a 1.6 GHz, dual-core x86-based platform with 2 GB of memory and gigabit Ethernet. This has given us the hardware headroom to implement some interesting new features in our system, but it also gave us some work porting our existing application and drivers to it (or well, gave *me* some work). Even though the new platform is faster on all accounts, we still want to be able to use our existing application and I/O hardware. As we are still running Linux, the high-level application was ported with only a few hitches caused by a newer compiler. Our legacy communication bus was implemented in an FPGA attached over the LPC bus available on the CPU, and our drivers were quickly ported to it, as it was simply a matter of replacing some ARM GPIO register calls with inb/outb calls.
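To give an idea of what that kind of port looks like, here is a minimal sketch of legacy port I/O against an FPGA sitting behind the LPC bus. The base address, register layout and names are hypothetical placeholders for illustration, not our actual FPGA interface:

```c
/* Hypothetical port I/O access to an FPGA behind the LPC bus.
 * FPGA_IO_BASE and the register layout are made-up placeholders. */
#include <linux/module.h>
#include <linux/ioport.h>
#include <linux/io.h>

#define FPGA_IO_BASE	0x0300	/* hypothetical legacy I/O port base */
#define FPGA_IO_LEN	8

static u8 fpga_bus_read(unsigned int reg)
{
	/* replaces a direct ARM GPIO register access in the old driver */
	return inb(FPGA_IO_BASE + reg);
}

static void fpga_bus_write(unsigned int reg, u8 val)
{
	outb(val, FPGA_IO_BASE + reg);
}

static int __init fpga_bus_init(void)
{
	/* claim the I/O port range so nothing else grabs it */
	if (!request_region(FPGA_IO_BASE, FPGA_IO_LEN, "fpga-bus"))
		return -EBUSY;
	return 0;
}

static void __exit fpga_bus_exit(void)
{
	release_region(FPGA_IO_BASE, FPGA_IO_LEN);
}

module_init(fpga_bus_init);
module_exit(fpga_bus_exit);
MODULE_LICENSE("GPL");
```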

However – x86 processors are not exactly famous for their built-in ARM hardware timers, so our 100 Hz I/O scanner kernel module had to be reimplemented from scratch. And instead of attempting to access hardware timers directly once again, I wanted to do something slightly more portable.

Fumbling in the dark

Creating a real-time, 100 Hz loop in a kernel module was not as easy as I’d first imagined, though. Mainly because I had a hard time figuring out which timing APIs were available, and which would actually support real-time behavior. After lots of googling around, I ended up settling on the High-Resolution Timers (hrtimer) API, and got a kernel module set up with a separate real-time thread (or so I thought). During the initial phase, testing was pretty light, and everything looked good – we had a 100 Hz loop, and our I/O cards were being polled once every 10 ms.
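For reference, here is a minimal, hypothetical sketch of that kind of setup – an hrtimer waking a dedicated kernel thread every 10 ms. It is not our actual driver; scan_io() stands in for the real I/O polling code, and note that nothing here yet deals with the thread’s scheduling priority:

```c
/* Hypothetical sketch of a 100 Hz scan loop: an hrtimer wakes a
 * dedicated kernel thread every 10 ms. scan_io() is a placeholder
 * for the real I/O polling code. */
#include <linux/module.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/kthread.h>
#include <linux/wait.h>

#define SCAN_PERIOD_NS	(10 * NSEC_PER_MSEC)	/* 10 ms -> 100 Hz */

static struct hrtimer scan_timer;
static struct task_struct *scan_task;
static DECLARE_WAIT_QUEUE_HEAD(scan_wq);
static atomic_t scan_pending = ATOMIC_INIT(0);

static void scan_io(void)
{
	/* poll the I/O cards here */
}

static enum hrtimer_restart scan_timer_fn(struct hrtimer *t)
{
	/* signal the scan thread, then re-arm for the next period */
	atomic_set(&scan_pending, 1);
	wake_up(&scan_wq);
	hrtimer_forward_now(t, ns_to_ktime(SCAN_PERIOD_NS));
	return HRTIMER_RESTART;
}

static int scan_thread(void *data)
{
	while (!kthread_should_stop()) {
		wait_event_interruptible(scan_wq,
				atomic_read(&scan_pending) ||
				kthread_should_stop());
		if (kthread_should_stop())
			break;
		atomic_set(&scan_pending, 0);
		scan_io();
	}
	return 0;
}

static int __init scan_init(void)
{
	scan_task = kthread_run(scan_thread, NULL, "io-scan");
	if (IS_ERR(scan_task))
		return PTR_ERR(scan_task);

	hrtimer_init(&scan_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	scan_timer.function = scan_timer_fn;
	hrtimer_start(&scan_timer, ns_to_ktime(SCAN_PERIOD_NS), HRTIMER_MODE_REL);
	return 0;
}

static void __exit scan_exit(void)
{
	hrtimer_cancel(&scan_timer);
	kthread_stop(scan_task);
}

module_init(scan_init);
module_exit(scan_exit);
MODULE_LICENSE("GPL");
```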

And so, the story could have ended here. But since our application used to run on a 180 MHz ARM processor, we were not exactly pushing the new CPU to its limits. Once we tried that, things started to look quite a bit worse…

– to be continued in part 2.
