Build a Docker image that can use LLVM to compile ROS projects

I intend to build a Docker image that can use LLVM to compile ROS project code. According to Docker's official documentation (https://docs.docker.com/config/containers/multi-service_container/), it is better to avoid running multiple services in one container. LLVM and ROS each have their own Docker image; how can I compose the two into a single image and ship it together?

Update (2019.12.13):
If all you want to do is use clang within the official ROS Docker images, you can do something like this:
FROM ros:melodic
RUN apt-get update && apt-get install -y clang-6.0
RUN update-alternatives --install /usr/bin/c++ c++ $(command -v clang++-6.0) 1000
RUN update-alternatives --install /usr/bin/cc cc $(command -v clang-6.0) 1000
You can do this because the official ROS images inherit from Ubuntu images which use update-alternatives to manage how generic commands (e.g., C/C++ compilers) map to the packages that provide them. In brief, the calls to update-alternatives will install various symlinks such that /usr/bin/cc and /usr/bin/c++ both (eventually) point to clang-6.0 and clang++-6.0, respectively.
For details on how update-alternatives works, refer to its man page.
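If you want to sanity-check the result, here is a quick sketch (the image tag my-ros-clang is just a placeholder):
docker build -t my-ros-clang .
docker run --rm my-ros-clang update-alternatives --display cc
docker run --rm my-ros-clang cc --version
The last command should report a clang version rather than gcc.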
Original answer follows.
You have a few strategies available to you:
Combine Dockerfiles
If you have access to both Dockerfiles, then try combining the content therein into one Dockerfile. You'll have to choose a single FROM instruction, but other instructions can be combined in the manner of your choosing.
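For instance, a minimal sketch that keeps the ROS image as the single FROM and folds in a Clang/LLVM installation from the Ubuntu repositories (the package names are illustrative; in practice you would copy over whatever instructions the LLVM Dockerfile actually runs):
FROM ros:melodic
# Pull in LLVM/Clang from the distribution repositories
RUN apt-get update && \
    apt-get install -y llvm clang && \
    rm -rf /var/lib/apt/lists/*
# ...remaining instructions from both Dockerfiles, merged as appropriate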
Change the FROM on one
If you have access to only one Dockerfile, then try changing its FROM instruction to inherit from the other image. E.g., the ros:kinetic-ros-core-xenial image inherits from ubuntu:xenial; try changing it to inherit from, say, reaverproject/llvm.
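A minimal sketch of that change, assuming (hypothetically) that the ROS Dockerfile starts from ubuntu:xenial and that reaverproject/llvm is itself Ubuntu-based, so the subsequent apt-get instructions keep working:
# Before
FROM ubuntu:xenial
# After
FROM reaverproject/llvm
# ...rest of the ROS Dockerfile unchanged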
ADD both tarballs
If you have access to neither Dockerfile, then you'll have to reverse-engineer a bit. Start by creating a "noop" container from each image and exporting a filesystem tarball therefrom. I.e., do...
$ docker container run --name noop-foobar foobar sh -c 'exit 0'
$ docker container export --output foobar.tar noop-foobar
$ docker container rm noop-foobar
...substituting "foobar" as necessary.
Once you have exported both filesystem tarballs, create a "base image" by ADDing them to a scratch image:
FROM scratch
ADD llvm.tar /
ADD ros.tar /
...
It's very likely you'll have to manually resolve conflicts between filesystem tarballs to get the base image working as intended.
References:
https://docs.docker.com/engine/reference/builder/
https://hub.docker.com/_/ros
https://docs.docker.com/engine/reference/commandline/container_export/
https://docs.docker.com/develop/develop-images/baseimages/#create-a-simple-parent-image-using-scratch

Related

Set ldconfig LD_LIBRARY_PATH in a docker container

I have a Docker container that I use to build software and generate shared libraries. I would like to use those libraries in another Docker container for actually running applications. To do this, I run the build container with a mounted volume so that those libraries end up on the host machine.
My Dockerfile for the RUNTIME container looks like this:
FROM openjdk:8
RUN apt update
ENV LD_LIBRARY_PATH /build/dist/lib
RUN ldconfig
WORKDIR /build
and when I run with the following:
docker run -u $(id -u ${USER}):$(id -g ${USER}) -it -v $(realpath .):/build runtime_docker bash
I do not see any of the libraries from /build/dist/lib in the ldconfig -p cache.
What am I doing wrong?
You need to COPY the libraries into the image before you RUN ldconfig; volumes won't help you here.
Remember that first you run a docker build command. That runs all of the commands in the Dockerfile, without any volumes mounted. Then you take that image and docker run a container from it. Volume mounts only happen when the docker run happens, but the RUN ldconfig has already happened.
In your Dockerfile, you should COPY the files into the image. There's no particular reason to not use the "normal" system directories, since the image has an isolated filesystem.
FROM openjdk:8
# Copy shared-library dependencies in
COPY dist/lib/libsomething.so.1 /usr/lib
RUN ldconfig
# Copy the actual binary to run in and set it as the default container command
COPY dist/bin/something /usr/bin
CMD ["something"]
If your shared libraries are only available at container run-time, the conventional solution (as far as I can tell) would be to include the ldconfig command in a startup script, and use the Dockerfile ENTRYPOINT directive to make your runtime container execute this script every time the container runs.
This should achieve your desired behaviour, and (I think) should avoid needing to generate a new container image every time you rebuild your code. This is slightly different from the common Docker use case of generating a new image for every build by running docker build at build-time, but I think it's a perfectly valid use case, and quite compatible with the way Docker works. Docker has historically been used as a CI/CD tool to streamline post-build workflows, but it is increasingly being used for other things, such as the build step itself. This naturally means people are coming up with slightly different ways of using Docker to facilitate various new and different types of workflow.
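A minimal sketch of that approach (the script name docker-entrypoint.sh is a placeholder, and registering the directory under /etc/ld.so.conf.d is just one way to make ldconfig pick it up):
#!/bin/sh
# docker-entrypoint.sh -- runs on every container start, after volumes are mounted
set -e
# Register the mounted library directory and refresh the linker cache
echo "/build/dist/lib" > /etc/ld.so.conf.d/build-libs.conf
ldconfig
# Hand off to whatever command the container was asked to run
exec "$@"
And the corresponding Dockerfile:
FROM openjdk:8
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
WORKDIR /build
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["bash"]
Note that ldconfig needs to write /etc/ld.so.cache, so the container must start as root for this to work; the -u flag from the question's docker run command would prevent that.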

How to share apt packages across Docker containers

I want to use docker to help me stay organized with developing and deploying package/systems using ROS (Robot Operating System).
I want to have multiple containers/images for various pieces of software, and a single image that has all of the ROS dependencies. How can I have one container use the apt packages from my dependency master container?
For example, I may have the following containers:
MyRosBase: sudo apt-get install all of the ros dependencies I need (There are many). Set up some other Environment variables and various configuration items.
MyMoveApplication: Use the dependencies from MyRosBase and install any extra and specific dependencies to this image. Then run software that moves the robot arm.
MySimulateApplication: Use the dependencies from MyRosBase and install any extra and specific dependencies to this image. Then run software that simulates the robot arm.
How do I use apt packages from one container in another container without reinstalling them in each container every time?
You can create your own images that serve as base images using Dockerfiles.
Example:
mkdir ROSDocker
cd ROSDocker
vim Dockerfile-base
FROM debian:stretch-slim
RUN apt-get update && apt-get install -y dep1 dep2 depn
sudo docker build -t yourusername/ros-base:0.1 -f Dockerfile-base .
After the build is complete you can create another Dockerfile (call it, say, Dockerfile-move) that builds on this base image.
FROM yourusername/ros-base:0.1
RUN apt-get update && apt-get install -y dep1 dep2 depn
Now build the second image:
sudo docker build -t yourusername/my-move-application:0.1 -f Dockerfile-move .
Now you have an image for your move application; each container you run from this image will have all of the dependencies installed.
You can use a Docker image registry for managing your built images and sharing them between people/environments.
This example can be expanded multiple times.
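For instance, a simulate-application image would follow the same pattern (Dockerfile-simulate and the package names here are placeholders):
FROM yourusername/ros-base:0.1
RUN apt-get update && apt-get install -y sim-dep1 sim-dep2
sudo docker build -t yourusername/my-simulate-application:0.1 -f Dockerfile-simulate .
Because both application images are built FROM the same base image, Docker reuses the base layers rather than reinstalling the ROS dependencies for each image.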

package manager for docker container running image busybox:uclibc

I want to install net-tools on one of my running containers, which is running busybox:uclibc image. But this image doesn't have any package manager like apt-get or apk. Is there a way to do it or should I just make changes to my image?
Anything based on Busybox doesn't have a package manager. It's a single binary with a bunch of symlinks into it, and the way to add software to it is to write C code and recompile. That is, /bin/busybox literally is ls and sed and sh and cp and ...
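You can see this for yourself (a quick sketch; depending on how the image was built, the applets show up as symlinks or hard links to /bin/busybox):
docker run --rm busybox:uclibc ls -l /bin/ls
docker run --rm busybox:uclibc sh -c 'busybox --list | head'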

Docker: Run an arbitrary command from inside a privileged container on the host

I'm trying to run a command from a privileged Docker container; specifically, the command nmcli.
When I try to add nmcli as a volume, it complains that it's missing other files.
When I add the entire /usr/bin it complains about python being unable to add site-packages.
Is there a way I can run a command on the host machine from a child container?
The majority of tools that are installed with a package manager like yum or apt will make use of shared libraries to reduce the overall size of the install.
The container would either need to be the same distro and have the same package dependencies installed, or mount all the dependencies of the binary into the container.
Something like this should work:
docker run \
-v /lib:/lib \
-v /usr/lib/:/usr/lib \
-v /usr/bin/nmcli:/usr/bin/nmcli \
busybox \
/usr/bin/nmcli
But you might need to be more specific about the library mounts if you want the container to use its own shared libraries as well.
Some packages provide a "static binary" that includes all of its dependencies in the executable. I doubt this exists for nmcli, as it's a RHEL-specific tool for managing a RHEL box, and RHEL's policy is to manage shared libraries with yum.
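To see which shared libraries the binary actually needs on the host (and therefore which paths to mount), you can inspect it with ldd:
# List the shared libraries nmcli is linked against on the host
ldd /usr/bin/nmcli
Each directory that appears in the output is a candidate for an additional -v mount in the docker run command above.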

Compatibility of Dockerfile RUN Commands Cross-OS (apt-get)

A beginner's question: how does Docker handle underlying operating system variations when using the RUN command?
Let's take, for example, a very simple Official Docker Hub Dockerfile, for JRE 1.8. When it comes to installing the packages for java, the Dockerfile uses apt-get:
RUN apt-get update && apt-get install -y --no-install-recommends ...
To the untrained eye, this appears to be a platform-specific instruction that will only work on Debian-based operating systems (or at least ones with APT installed).
How exactly would this work on a CentOS installation, for example, where the package manager would be yum? Or god forbid, something like Solaris.
If this pattern of using RUN to fork arbitrary shell commands is prevalent in docker, how does one avoid inter-platform, or even inter-version, dependencies?
i.e. what if the Dockerfile writer has a newer version of (say) grep than I do, and they've used some new CLI flag that isn't available on earlier versions?
The only two outcomes I can see are: (1) the RUN command exits with a non-zero exit code, or (2) the Dockerfile changes the installed version of grep before running the command.
The common point shared by all Dockerfiles is the FROM statement. It is normally the first line in the file and indicates the parent Docker image you're building on. A typical base image could be one with Ubuntu (e.g., https://hub.docker.com/_/ubuntu/). The snippet you share in your question would fit well in an Ubuntu image (with apt-get) but not in a CentOS image.
In summary: you may be running Docker on a CentOS host, but the RUN instructions execute inside the image being built (Ubuntu here), not on the host, so apt-get is always available to them.
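A quick way to convince yourself of this, regardless of what the host is running (a sketch assuming a local Docker install):
docker run --rm ubuntu:xenial cat /etc/os-release
docker run --rm centos:7 cat /etc/os-release
Both commands work on the same host; each prints the OS of the image, not of the machine running Docker.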
As I commented on your question, you can use the FROM statement to specify which underlying OS you want. For example:
FROM docker.io/centos:latest
RUN yum update -y
RUN yum install -y java
...
Now build the image with:
docker build -t <image-name> .
The idea is that you use the OS you are familiar with (for example, CentOS) inside the image you build. You can then take this image and run it on top of an Ubuntu/CentOS/RHEL/whatever host with
docker run -it <image-name> bash
(You just need to install Docker on the host OS.)
