Embedded Linux

If you come from the world of small microcontrollers, embedded Linux can be a challenge. Even if you are used to Linux from the server world, embedded Linux has challenges of its own.

This page will walk you through the traits of Linux and what they mean to an embedded developer.

Licensing

Linux is built around the GNU General Public License – the kernel itself uses GPL v2, while many userland tools use v2 or v3. This means that you cannot take the source code from a popular open source Linux program, modify it and ship it as your own closed product – UNLESS you make your modified source open as well. This is known as “copyleft”.

Many libraries – most importantly the C library – use the “GNU Lesser General Public License” (LGPL). This means that you are allowed to write your own program from scratch, link it against an LGPL-based library and include the h-files that go with the library, without making your own source public. You must, however, either make your object files available to users OR use dynamic linking, so that others can run your program with newer versions of the open source libraries later on.

File System

Due to this licensing model, any Linux-based product contains a file system with a number of standard, recognizable folders and files. The developers of the product stand on the shoulders of the open source developers, and in order to use the capabilities of existing daemons and programs, the files need to be there “as usual” – compiled for the given platform. You will recognize folders like:

  • /bin – programs needed to perform basic tasks, e.g. change a directory or copy a file
  • /boot – boot configuration and images 
  • /dev – special files that represent hardware devices
  • /etc – configuration files
  • /home – contains private directories of users
  • /media or /mnt – mount points for external drives connected to this computer, e.g. CDs or USB keys
  • /tmp – temporary files
  • /usr – programs installed on the computer
  • /var – variable data produced by programs, like error logs
  • /proc – not really files, but live data on processes and the kernel

As stated earlier, developers are not allowed to turn any of the main binaries into their own “closed source” code. It is therefore extremely practical that we can call the existing binaries from our own programs and scripts – exactly as we do in a shell – and that they are often designed to do one thing only, but do it well.

Linux files on Beagle-Bone Black

Shell

As mentioned above, there is always a shell in a Linux system – e.g. “bash”. Embedded Linux systems may not have shells as advanced as those on PC-based systems. On the downloads page you can find a “Cheat-Sheet” with the most important shell commands. It also shows the file-system figure above in better resolution.

Busybox

In classic Unix, even the simplest command has its own binary program. This is one of the secrets behind the simple, yet efficient, piping of commands that lets you e.g. use one program to filter the output of another in a single line. Typically these system programs share a lot of code like command-line parsing, error-handling etc. For embedded Linux, a configurator allows you to build your chosen set of these “system commands” into a single program – Busybox – saving system resources. The system commands only exist as links to Busybox in the Linux file system.

When called, Busybox checks the first argument – which is the name of the original program – and then decides what to do with the other arguments.

Kernel

This is the “core” of Linux, where processes and threads are scheduled and interrupts are serviced. Linux supports “preemption”, which means that the kernel can decide that it is time for one running process or thread to take a break and make room for another – typically based on priorities.

Originally the kernel had to be built together with the drivers in one big “monolith”. Then came “kernel modules” – allowing developers to swap parts of the kernel – e.g. a specific driver – with another version. Even later the “Device Tree” was invented. We will get back to this.

The Linux kernel does a great job of protecting processes from each other – a great help for the developer. This requires the CPU to have an MMU – a Memory Management Unit. Many small CPUs do not have an MMU and therefore cannot run standard Linux.

Real-Time

“Real-time” normally means that an interrupt is serviced within a guaranteed time-frame. Standard Linux does not give such guarantees; instead it is designed for high throughput. You will probably find that you very rarely need hard real-time, and that high throughput is actually what you need.

When you really do need real-time, the so-called “PREEMPT_RT” patch set allows you to use Linux for real-time work – at the cost of various compatibility issues. Alternatively, you can use another CPU for the things that need to be done in real-time. Some CPUs are “heterogeneous”, meaning that they come with multiple, different “cores”. One or more cores run embedded Linux, while one or more run a real-time kernel, or no kernel at all. See Linux and non-Linux in one system. A third alternative is to use dedicated hardware – which is often built into modern embedded CPUs. This could e.g. be an SPI port that receives samples from a hardware clock-driven A/D converter and buffers them – all without software interference. A DMA process may then empty the small hardware buffer into main memory.

Device-Tree

Not so long ago, the kernel needed to include hard-coded descriptions of all kinds of new hardware. With the ARM architecture, Linux adopted the so-called “Device Tree” (a concept originating in Open Firmware). This allows the developer to declare which hardware ports etc. are enabled, what their names will be in the software and which drivers are used. The Device Tree is loaded by the boot-loader and handed to the kernel. This means that you can make many changes to the hardware I/O map without changing anything in the kernel.
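A Device Tree source fragment might look like the following. This is a hypothetical example – the node label, the address and the sensor binding are illustrative; the real names come from your SoC vendor's bindings:

```
/* Enable an I2C controller and declare a temperature sensor on it. */
&i2c1 {
    status = "okay";

    temp_sensor: sensor@48 {
        compatible = "ti,tmp102";
        reg = <0x48>;
    };
};
```

At boot, the kernel matches the “compatible” string against its drivers – no kernel rebuild needed to move the sensor to another bus or address.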

A consequence of this is that the open source kernel doesn’t have to be updated every time a new device sees the light of day.

See example from Toradex.

Boot-loader

The kernel cannot be directly started at power-up. Enter the boot-loader – a nice stepping stone that e.g. allows us to select between “updated version” and “fallback version” of the kernel. Or boot from several different media – like USB, Flash, DVD etc.

In embedded systems the boot-loader is often “Das U-boot”, which has a lot of bells & whistles. U-boot also loads the Device-Tree.
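A hypothetical U-Boot environment for the “updated version / fallback version” scheme could look like this – the variable names and file names are illustrative, and the exact commands depend on your board:

```
setenv boot_new 'load mmc 0:1 ${loadaddr} zImage-new && run boot_go'
setenv boot_old 'load mmc 0:1 ${loadaddr} zImage-old && run boot_go'
setenv boot_go  'load mmc 0:1 ${fdtaddr} my-board.dtb; bootz ${loadaddr} - ${fdtaddr}'
setenv bootcmd  'run boot_new; run boot_old'
saveenv
```

Note how the environment also loads the Device Tree blob (my-board.dtb) and passes its address to the kernel.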

Toolchain

Cross-debugging with Eclipse

Developers need to be able to compile and link programs. This is typically done on a Linux-based PC. Since the target (the embedded system) and the host (the PC) are different, we need to use a “cross-compiler”. The compiler must know the architecture of the CPU, and we might also need a floating-point library if the CPU does not natively support floating point operations. We also need the various configuration tools for BusyBox, the kernel and other things. It can be a complex task to get all these tools right, and even more complicated if you want to be able to update the tools in a controlled fashion.

Yocto and Buildroot

As stated above, it can be quite tedious to maintain and update a well-oiled toolchain.

Also stated earlier: it is very productive that you can develop individual processes, copy a single binary to the target and run it. However, when you release a product to the market you need more control. You want to release a coherent, well-defined system. With small embedded non-Linux systems you build a single executable. Not so with Linux.

From the PC world we know the concept of Linux “distributions” – aka “distros”. If you are maintaining an embedded product over time, it makes sense to have your own distro that handles versions of the toolchain in one or more “layers”, as well as the many git repos that you may fetch open or closed source from in other layers.

This is where “Yocto” comes in. Yocto is a tool maintained by many of the larger vendors that depend on Linux. It is very good – but also very complex. As you might expect by now, Yocto has a “layer” concept. You can e.g. build for two different versions of the actual hardware. You can use one layer for the actual target and another – parallel and interchangeable – layer for PC hardware. This way it is possible to test many algorithms etc. on PCs in parallel each night – possibly before you even have target hardware. Similarly you can run on a Raspberry Pi or BeagleBone until you have plenty of your “real” hardware.
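As a sketch, the layer stack for such a product is declared in conf/bblayers.conf – the paths and the vendor/product layer names here are hypothetical:

```
# conf/bblayers.conf (hypothetical)
BBLAYERS ?= " \
  /work/poky/meta \
  /work/poky/meta-poky \
  /work/meta-openembedded/meta-oe \
  /work/meta-vendor-bsp \
  /work/meta-myproduct \
"
```

Swapping the vendor BSP layer for a PC or emulator BSP layer is what gives you the parallel, interchangeable PC build described above.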

Since Yocto “knows” the source of your source, it can also keep track of which license is used. You may e.g. set Yocto to warn if GPL v2 is used instead of v3.

There is also a simpler tool called “Buildroot”. If you e.g. buy pre-built hardware it may come with Buildroot support, and it may be all you need.

Writing Embedded Linux Applications

If you come from smaller embedded systems, you are used to creating an architecture that is maintained as a single binary build. You might have tried to use multiple cores and found it to be a very “hand-held” process – requiring a lot of manual assistance. You may have designed an “init” process, and had to rethink code to handle reentrancy and multiple users. Adding new peripherals used to be a lot of work.

This is all very different when it comes to Embedded Linux. Now you can write, debug and test each process as an independent mini-project. You have network-managers, webservers and a choice of databases if you want. You can update peripherals and fetch updated drivers for them.

Using multiple cores is almost a free performance boost, with close to no administration, thanks to SMP – Symmetric Multiprocessing. This is extremely important in modern CPUs, since getting more performance by raising the clock frequency has become harder than ever. Linux has turned the use of multi-core CPUs into a simple upgrade.

Compilers and libraries robustly support reentrancy, and existing Linux code is generally written for multiple users. You also have ready-made initialization concepts.

This is all much like working on a PC. You can test much of your code on a PC version – even before the first prototype hardware is on the table.

When processes need to communicate, you can choose among many different mechanisms – e.g. pipes and sockets. You might want to consider using sockets if there is even a slim chance that you will later want to spread the processes across more (physical) CPUs – or even separate devices. Socket communication between processes inside a modern Linux CPU is quite efficient, and it becomes a simple task to later move one or more processes to other CPUs. During development you may even run some processes on the PC and others on the target. This is very practical, as some processes might depend on the I/O in the target, while others are more algorithmic and may benefit from the good tools and fast turnaround time on your PC. In other words: the hardware serves as a gateway to the real world, while you develop code on the PC. Please see my IoT book to learn about socket communication.

Naturally, you still need an architecture. The point is, however, that you get a lot for free from standard Linux processes and concepts. Even the things that you don't get right out of the box can be molded from the existing Linux ways of working.