Containerized applications have been a common solution in the server and even desktop space for quite a while. More recently they’ve also been gaining interest in embedded projects.
Containers can help decouple application development from the development of the embedded platform itself in timelines, teams, and tools. They can also allow application developers to work on desktop or workstation targets, then later deploy to the actual target hardware.
Containerized applications and services are an attractive solution on embedded Linux devices where:
- the application architecture is independent microservices
- the application development team is separate from the platform development team and is unfamiliar with the platform development tools
- the applications are portable, intended to run on both desktop/server and embedded devices
- legacy applications need to run on newer embedded targets without pulling multiple, outdated versions of their dependencies into a standardized platform software design
Tech Overview
There are a few different solutions for running programs in containers on Linux. In this example, we will look at using Docker as the container engine.
The operating system (OS) for our target device is being built with Yocto. We start with an existing project already set up for the target.
While it’s fairly straightforward to pull images onto a running target, even programmatically, here we consider the case where network connectivity isn’t reliably available at first boot, and we want to have all of the device’s software already in place as it boots for the first time.
We’ll use Docker on the host as part of the Yocto build process to assemble the image that will be used to run the container.
Once created, the image will be archived to a tarball using docker save. This tarball will be embedded in the root filesystem of the target. When it first boots, it will import this archive using docker load, and then run a container based on that image.
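At a high level, the round trip looks like this (the image and archive names here mirror the example developed below):

# On the build host, after the image has been built
docker save -o alpine-arm-app.tar tsdemo/alpine-arm-app

# On the target, at first boot
docker load -i alpine-arm-app.tar
docker run --rm -d tsdemo/alpine-arm-app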
Considerations
Boot Time
Pulling images from a registry, or even loading them from a local archive as we’ll show in this example, can take a non-trivial amount of time.
As long as the docker engine store (`/var/lib/docker` by default) is located in persistent storage, this delay can be kept to just the first boot; startup times for subsequent boots should be minimally affected by the container(s) initializing.
Tooling
Yocto currently has support for Docker, but only builds it for the target; it doesn’t provide docker as a native tool, so it will need to be added as a host requirement. In local.conf (or your distro.conf, if you are creating a full distro layer), add
HOSTTOOLS_NONFATAL += "docker"
and make sure that the host has docker installed, that the current user is in the ‘docker’ group, and that docker can be used without elevated privileges (alternatively, all the docker commands in the recipe could be run with ‘sudo’ under a restricted sudoers setup).
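For example, on most distributions the non-root setup amounts to something like the following (group and service management details can vary by distribution):

# Add the current user to the 'docker' group (log out and back in for it to take effect)
sudo usermod -aG docker "$USER"

# Verify docker can be used without sudo
docker run --rm hello-world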
Beyond that, all that should be required for this approach is the standard setup and requirements for a Yocto host.
Getting Started With Containers in Yocto
Building Cross-Platform Docker Images
Similar to the way programs must be cross-compiled to match the target they’ll run on, container images must be built for the target architecture as well. And it’s not simply a matter of copying the right files into the image; usually programs inside the image must be run as part of the build process (‘adduser’, ‘sh’, development tools, etc.). So we use a kernel feature, binfmt_misc, which lets the kernel hand binaries compiled for other architectures to a registered handler, in this case qemu user-mode emulation. This way an amd64 host can, for example, run ARM or ARM64 binaries inside the temporary containers as it builds the images.
An easy way to enable this functionality, if your host is Debian/Ubuntu or a derivative, is to install the ‘qemu-user-static’ package. It installs the necessary statically-linked qemu emulators and registers them as binfmt_misc handlers. It can also be set up by hand; one example is the script provided by the qemu project.
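On a Debian/Ubuntu host, for example, the setup and a quick sanity check might look like the following (the Alpine tag used for the check is just an example):

# Install statically-linked qemu emulators and register them with binfmt_misc
sudo apt-get install qemu-user-static

# Sanity check: run an arm64 image on an amd64 host; it should print "aarch64"
docker run --rm --platform=linux/arm64 alpine:3.16.0 uname -m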
Recent versions of Docker Desktop include this functionality as part of their buildx engine as well, requiring no additional setup.
Add Container Sources
Now add the necessary Docker image sources and other required resources to the recipe. To demonstrate, we have a simple Dockerfile and C source file in the ‘files’ directory:
.
  container-archive.bb
  files/
    dockerproj/
      app.c
      Dockerfile
and have added them to the container-archive.bb recipe:
SRC_URI = "file://dockerproj/Dockerfile \
           file://dockerproj/app.c \
          "
The Dockerfile is a basic one that compiles the C file and adds the resulting binary to a bare Alpine Linux base image:
# Dockerfile
FROM alpine:3.16.0 as build
RUN apk add --no-cache gcc musl-dev
COPY app.c .
RUN gcc app.c -o app

FROM alpine:3.16.0
COPY --from=build app .
CMD ["./app"]
Next we’ll add the docker image preparation to the Yocto recipe.
Build Image for the Target
First we construct the image by building it from the Dockerfile in the “build_image” task:
do_build_image() {
    # Build the image from Dockerfile
    if ! /usr/bin/docker build --platform=arm64 -f ${WORKDIR}/dockerproj/Dockerfile -t tsdemo/alpine-arm-app ${WORKDIR}/dockerproj ; then
        bbfatal "Error: could not run docker"
    fi
}
Note here the ‘--platform=arm64’ switch to ‘docker build’. This tells the docker engine specifically to set the platform to arm64 rather than, in this case, the detected amd64 of the host.
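As a quick manual check outside the recipe, the resulting image can be inspected to confirm it was built for the intended architecture:

# Should print "arm64"
docker image inspect tsdemo/alpine-arm-app --format '{{.Architecture}}'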
Archive Image
The next step is to export the image into a .tar archive in the “archive_image” task:
do_archive_image() {
    # Save the Docker image out to an archive
    if ! /usr/bin/docker save -o ${WORKDIR}/alpine-arm-app.tar tsdemo/alpine-arm-app ; then
        bbfatal "Error saving archive"
    fi
}
This leaves the alpine-arm-app.tar archive sitting in ${WORKDIR} to be installed.
We also need to register these user-added tasks:
addtask do_build_image before do_archive_image after do_fetch
addtask do_archive_image before do_install after do_build_image
Install Image
Next we need to install that archive into the RFS:
ARCHIVE_DIR = "container-archives"
do_install() {
    install -d "${D}${datadir}/${ARCHIVE_DIR}"
    install -m 0400 "${WORKDIR}/alpine-arm-app.tar" "${D}${datadir}/${ARCHIVE_DIR}/alpine-arm-app.tar"
}
FILES:${PN} = "${datadir}/${ARCHIVE_DIR}"
Target Setup
Docker is already enabled in recent i.MX 8 Linux BSPs. If it is not already in your target platform, you will need to add `" docker"` to IMAGE_INSTALL for your image (docker is provided by the meta-virtualization layer) and make any kernel configuration changes that may be needed to support it, like cgroups, etc.
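For example, assuming the meta-virtualization layer has already been added to bblayers.conf, the image configuration would need something along these lines (depending on the layer version, the engine may be provided by the docker-moby or docker-ce recipe):

# Pull the docker engine into the target image
IMAGE_INSTALL:append = " docker"

# meta-virtualization expects this distro feature to be enabled
DISTRO_FEATURES:append = " virtualization"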
If the RFS is read-only, the docker store must be located somewhere else on a writable partition. This can be done with a bind mount, or by simply changing the location of the store in ‘/etc/docker/daemon.json’ (via the `data-root` key).
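For example, a minimal ‘/etc/docker/daemon.json’ relocating the store to a writable data partition might look like the following (the ‘/data/docker’ path is just a placeholder for whatever writable mount your image provides):

{
    "data-root": "/data/docker"
}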
Init Scripts
Before trying to start the container on the target, the image has to exist in the docker engine store. We could check the output of `docker images` to see if the image has already been loaded, and load it only if not.
But we can simply try to load it on every boot at very little cost, since `docker load` already checks the manifest in the archive and quickly returns if the image is already loaded. Something like
image_archive=/usr/share/container-archives/alpine-arm-app.tar
# …
docker load -i ${image_archive}
# …
docker run ${OTHER_DOCKER_OPTS} --rm -d tsdemo/alpine-arm-app
will load the image if necessary, then start the container from it. The only cost is the time to scan through the archive for the manifest file, and this approach is arguably more robust than inferring state from the filename itself (a version number, for example).
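A slightly fuller sketch of such a startup script (the wait loop and paths are illustrative, and in practice this would be hooked into whatever init system the target uses) might be:

#!/bin/sh
image_archive=/usr/share/container-archives/alpine-arm-app.tar

# Wait until the docker daemon is ready to accept commands
until docker info > /dev/null 2>&1; do
    sleep 1
done

# A no-op (beyond scanning the manifest) if the image is already in the store
docker load -i "${image_archive}"

# Start the application container
docker run --rm -d tsdemo/alpine-arm-app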
Where to Next?
This is just a basic example to demonstrate the approach; replacing the static Dockerfile and source file with a git repository of the containerized application turns this into a one-stop project that builds the entire image to flash to the embedded device.
If necessary, first-boot time can be reduced by pre-populating the docker engine store (‘/var/lib/docker’ by default) as part of the RFS construction rather than loading the image from the embedded archive, at the expense of some build-time complexity (especially if the Yocto build itself is taking place inside a container). After constructing the image, the docker service on the host is stopped and ‘/var/lib/docker’ is copied directly into the RFS.
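A rough sketch of that variant, assuming the host’s docker store can be dedicated to this build and that ${IMAGE_ROOTFS} points at the root filesystem being assembled:

# Stop the host docker daemon so the store is quiescent
sudo systemctl stop docker

# Copy the engine store directly into the root filesystem
sudo cp -a /var/lib/docker "${IMAGE_ROOTFS}/var/lib/docker"

sudo systemctl start docker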
Conclusion
With VigiShield Secure by Design, your build process can easily be updated to include containerized apps and services as part of the manufacturing image, minimizing the number of dependencies and human interaction points in your builds. VigiShield benefits include running app containers on different platforms, rapidly updating apps without needing to update the base operating system, support for legacy apps with outdated dependencies, and more.
Contact Timesys for more information on how to optimize and better secure container image integration in your embedded development process with the VigiShield container security add-on.
Acknowledgements
Special thanks to Savoir-faire and Toradex for some helpful tips and ideas to include in this introduction.