Install packages from DVD repository in Docker image

I am trying to build a Docker image that needs to install some packages from a DVD iso but I cannot mount the iso inside the container.
My Dockerfile is:
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest
USER root
WORKDIR /home
COPY tools.iso ./
COPY tools.repo /etc/yum.repos.d/
RUN mkdir /mnt/tools && \
mount -r ./tools.iso /mnt/tools && \
yum -y install make && \
umount /mnt/tools && \
rm tools.iso
CMD /bin/bash
When I run docker build it returns the following error:
mount: /home/tools.iso: failed to setup loop device: No such file or directory
I also tried to add the command modprobe loop before mounting the iso, but the logs say it returned code=1.
Is this the correct way to install packages from a DVD in Docker?

In general Docker containers can't access host devices and shouldn't mount additional filesystems. These restrictions are even tighter during a docker build sequence, because the various options that would let you circumvent them aren't available to you.
The most straightforward option is to write a wrapper script that does the mount and unmount for you, something like:
#!/bin/sh
if [ ! -d tools ]; then mkdir tools; fi
mount -r tools.iso tools
docker build "$@" .
umount tools
Then you can have a two-stage Docker image where the first stage has access to the entire DVD contents and runs its installer, and the second stage actually builds the image you want to run. That would look something like (totally hypothetically)
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest AS install
COPY tools tools
RUN cd tools && ./install.sh /opt/whatever
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest
COPY --from=install /opt/whatever /opt/whatever
EXPOSE 8888
CMD ["/opt/whatever/bin/whateverd", "--bind", "0.0.0.0:8888", "--foreground"]
The obvious problem with this is that the entire contents of the DVD get sent from the host to the Docker daemon as part of the docker build sequence, and then copied again during the COPY step; once that gets into the gigabyte range it starts to get unwieldy. A .dockerignore file that excludes the parts of the DVD you don't need can speed this up a little.
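For example, a .dockerignore along these lines (the directory names are hypothetical; a real DVD layout will differ) keeps the raw ISO and the unneeded parts of the mounted tree out of the build context:
# the raw ISO itself is no longer needed in the build context
tools.iso
# skip the parts of the mounted tree that yum does not need (hypothetical names)
tools/isolinux/
tools/images/
tools/EFI/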
Depending on what the software is, you should also consider whether it can run successfully in a Docker container (does it expect to be running multiple services with a fairly rigid communication pattern?); a virtual machine may prove to be a better deployment option, and "mount a DVD to a VM" is a much better-defined operation.


Mount-ing a CDROM repo during docker build

I'm building a docker image which also involves a small yum install. I'm currently in a location where firewalls and access controls make docker pull, yum install etc. extremely slow.
In my case, it's a JRE8 Docker image using this official image script
My problem:
Building the image requires just two packages (gzip + tar), which together are only about 1 MB (132 kB + 865 kB). But yum inside the docker build will first download the repo metadata, which is over 80 MB. While 80 MB is generally small, here it took over an hour just to download. If my colleagues need to build, this would be a sheer waste of productive time, not to mention frustration.
Workarounds I'm aware of:
Since this image may not need the full power of yum, I can simply grab the *.rpm files, COPY them into the image and use rpm -i instead of yum
I can save the built image and distribute it locally
I could also find the closest mirror for Docker Hub, but not for yum
My bet:
I have a copy of the Linux CD with about the same version
I can add commands in the Dockerfile to rename the existing *.repo files to *.repo.old
Add a cdrom.repo in /etc/yum.repos.d/ inside the container
Use yum to install the most common packages from the CDROM instead of the internet
My problem:
I'm not able to work out how to mount a CDROM repo from inside the container build without using httpd.
In plain linux I do this:
mkdir /cdrom
mount /dev/cdrom /cdrom
cat > /etc/yum.repos.d/cdrom.repo <<EOF
[cdrom]
name=CDROM Repo
baseurl=file:///cdrom
enabled=1
gpgcheck=1
gpgkey=file:///cdrom/RPM-GPG-KEY-oracle
EOF
Any help appreciated.
Docker containers cannot access host devices. I think you will have to write a wrapper script around the docker build command to do the following:
First mount the CD-ROM to a directory within the Docker build context (that would be a sub-directory of wherever your Dockerfile lives).
Call the docker build command using the contents of this directory.
Unmount the CD-ROM.
so,
cd docker_build_dir
mkdir cdrom
mount /dev/cdrom cdrom
docker build "$@" .
umount cdrom
In the Dockerfile, you would simply do this:
RUN cd cdrom && rpm -ivh rpms_you_need
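If you'd rather keep using yum against the CD contents (as the question intends), the same copied directory can back a file:// repo; a sketch, assuming the cdrom.repo from the question (baseurl=file:///cdrom) sits next to the Dockerfile:
COPY cdrom /cdrom
COPY cdrom.repo /etc/yum.repos.d/cdrom.repo
RUN yum -y --disablerepo='*' --enablerepo=cdrom install gzip tar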

Docker ROS automatic start of launch file

I developed a few ROS packages and I want to put the packages in a docker container because installing all the ROS packages all the time is tedious. Therefore I created a dockerfile that uses a base ROS image, installed all the necessary dependencies, copied my workspace, built the workspace in the docker container and sourced everything afterward. You can find the docker file here:
FROM ros:kinetic-ros-base
RUN apt-get update && apt-get install locales
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN apt-get update && apt-get install -y \
&& rm -rf /var/lib/apt/lists/*
COPY . /catkin_ws/src/
WORKDIR /catkin_ws
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; catkin_make'
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; source devel/setup.bash'
CMD ["roslaunch", "master_launch sim_perception.launch"]
The problem is: when I run the docker container with the "run" command, Docker doesn't seem to know that I sourced my new ROS workspace, and therefore it cannot automatically launch my launch file. If I run the container interactively with "run -it bash", I can source my workspace again and then roslaunch my .launch file.
So can someone tell me how to write my dockerfile correctly so I launch my .launch file automatically when I run the container? Thanks!
From the Docker docs: each RUN instruction runs independently and doesn't affect the next instruction, so by the time the last line runs, nothing set up by source in an earlier RUN (PATH, ROS variables) is still in effect.
You need to source .bashrc, or whatever environment setup you need, with source first.
You can wrap everything you want (the source command and the roslaunch command) in a shell script, then just run that file at the end.
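For example (a sketch; start.sh is a hypothetical name, the paths and launch file are the ones from the question):
#!/bin/bash
# source the ROS base install and the built workspace, then launch
source /opt/ros/kinetic/setup.bash
source /catkin_ws/devel/setup.bash
exec roslaunch master_launch sim_perception.launch
Copy it in with COPY start.sh /, make it executable, and use CMD ["/start.sh"] instead of the roslaunch CMD.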
If you review the convention of ros_entrypoint.sh you can see how best to source the workspace you would like inside the container. We're all so busy learning how to make Docker and ROS do the real things that it's easy to skip over some of the nuance of this interplay. This sucked forever for me; hope this is helpful for you.
I looked forever and found what seemed like only bad advice, and in the absence of an explicit standard or clear guidance I've settled on what seems like a sane approach that also lets you control what launches at runtime with environment variables. I now consider this the right solution for my needs.
In the Dockerfile for the image, set the start/launch behavior towards the end: use a COPY (or ADD) line to insert your own ros_entrypoint.sh (example included), set it as the ENTRYPOINT, and then add a CMD to run something by default when the container starts.
Note: you'll (obviously?) need to re-run the docker build process for these changes to take effect.
Dockerfile looks like this:
# all your other Dockerfile lines ^^
# .....
# towards the end
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
Example ros_entrypoint.sh:
#!/bin/bash
set -e
# setup ros environment
if [ -z "${SETUP}" ]; then
    # basic ros environment
    source "/opt/ros/$ROS_DISTRO/setup.bash"
else
    # from environment variable; should be an absolute path to the appropriate workspace's setup.bash
    source "$SETUP"
fi
exec "$@"
Used this way, the container will automatically source either the basic ROS bits, or, if you provide another workspace's setup.bash path in the $SETUP environment variable, that workspace instead.
So a few ways to work with this:
From the command line prior to running docker
export SETUP=/absolute/path/to/the/setup.bash
docker run -it your-docker-image
From the command line (inline)
docker run --env SETUP=/absolute/path/to/the/setup.bash your-docker-image
From docker-compose
service-name:
  network_mode: host
  environment:
    - SETUP=/absolute/path/to/the_workspace/devel/setup.bash # or whatever
  command: roslaunch package_name launchfile_that_needed_to_be_sourced.launch
  # command: /bin/bash # wake up and do something else

Docker container with build output and no source

I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the required tools to build your code, one that can clone the code and build it. After the build, it has to copy the build output
into a Docker volume, for example a volume mounted at /opt/webapp.
Launch a build container using the build image from step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your built code to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if an Appserver container fails or stops, the code is still safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't need to install the required tools every time you launch a container.
You can also build your code on the host machine, but if you want it built in a fresh environment every time, this approach is better; reusing the same host machine for every build can leave older source code around and cause problems such as git clone errors.
EDIT:
You can append :ro (read-only) to the volume so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
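For example, the nginx container above with the shared volume mounted read-only:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name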
The latest version of docker supports multi-stage builds, where build products can be copied from one container to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app/
CMD java -jar /app/result.jar
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup step into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
&& git clone https://github.com/myproj \
&& cd myproj \
&& make \
&& make install \
&& cd .. \
&& apt-get remove -y tools && apt-get clean \
&& rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard docker tools are not designed to do this, but you can use a third-party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile to run your build process, then run cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker compose file
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"

How to package files with docker image

I have an application that requires some binaries on the host machine for a Docker-based application to work. I can ship the image using a Docker registry, but how do I ship those binaries to the host machine? Creating a deb/rpm seems like one option, but that would go against Docker's platform-independent philosophy.
If you need them on the host machine, outside the Docker image, what you can do is this:
Add them to your Dockerfile with ADD or COPY
Also add an installation script which calls cp -f src dest
Then bind mount an installation directory from the host to dest in the container.
Something like the following example:
e.g. Dockerfile
FROM ubuntu:16.04
COPY file1 /src/
COPY file2 /src/
COPY install /src/
CMD /src/install
Build it:
docker build -t installer .
install script:
#!/bin/bash
cp -f /src/file1 /src/file2 /dist/
Installation:
docker run -v /opt/bin:/dist installer
Will result in file1 & file2 ending up in /opt/bin on the host.
If your image is based off of an image with a package manager, you could use the package manager to install the required binaries, e.g.
RUN apt-get update && apt-get install -y required-package
Alternatively, you could download the statically linked binaries from the internet and extract them, e.g.
RUN curl -s -L https://example.com/some-bin.tar.gz | tar -C /opt -zx
If the binaries are created as part of the build process, you'd want to COPY them over
COPY build/target/bin/* /usr/local/bin/

debootstrap inside a docker container

Here's my problem: I want to build a chroot environment inside a docker container. The problem is that debootstrap cannot run, because it cannot mount proc in the chroot:
W: Failure trying to run: chroot /var/chroot mount -t proc proc /proc
(in the log the problem turns out to be: mount: permission denied)
If I run the container with --privileged, it (of course) works...
I'd really really really like to debootstrap the chroot in the Dockerfile (much much cleaner). Is there a way I can get it to work?
Thanks a lot!
You could use the fakechroot variant of debootstrap, like this:
fakechroot fakeroot debootstrap --variant=fakechroot ...
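In a Dockerfile that could look roughly like this (a sketch; the base image, suite and target path are placeholders, not taken from the answer):
FROM debian:stable-slim
RUN apt-get update && apt-get install -y debootstrap fakechroot fakeroot
# build the chroot without needing real root or mounts
RUN fakechroot fakeroot debootstrap --variant=fakechroot stable /var/chroot http://deb.debian.org/debian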
Cheers!
No, this is not currently possible.
Issue #1916 (which concerns running privileged operations during docker build) is still open. There was discussion at one point of adding a command-line flag and a RUNP command, but neither of these has been implemented.
Adding --cap-add=SYS_ADMIN --security-opt apparmor:unconfined to the docker run command works for me.
See moby/moby issue 16429
This still doesn't work (2018-05-31).
Currently the only option is running debootstrap on the host and then using docker import ("Import from a local directory"):
# mkdir /path/to/target
# debootstrap bionic /path/to/target
# tar -C /path/to/target -c . | docker import - ubuntu:bionic
debootstrap version 1.0.107, which has been available since Debian 10 Buster (July 2019) or from Debian 9 Stretch-Backports, has native support for Docker and allows building a Debian root image without requiring privileges.
Dockerfile:
FROM debian:buster-slim AS builder
RUN apt-get -qq update \
&& apt-get -q install --assume-yes debootstrap
ARG MIRROR=http://deb.debian.org/debian
ARG SUITE=sid
RUN debootstrap --variant=minbase "$SUITE" /work "$MIRROR"
RUN chroot /work apt-get -q clean
FROM scratch
COPY --from=builder /work /
CMD ["bash"]
docker build -t my-debian .
docker build -t my-debian:bullseye --build-arg SUITE=bullseye .
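The resulting image can then be run like any other, e.g.:
docker run --rm -it my-debian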
There is a fun workaround, but it involves running Docker twice.
The first time, using a standard docker image like ubuntu:latest, only run the first stage of debootstrap by using the --foreign option.
debootstrap --foreign bionic /path/to/target
Then stop it from doing anything that would require privileges (and isn't needed anyway) by modifying the functions that will be used in the second stage:
sed -i '/setup_devices ()/a return 0' /path/to/target/debootstrap/functions
sed -i '/setup_proc ()/a return 0' /path/to/target/debootstrap/functions
The last step of that docker run is to have the container tar up the target directory into a directory that is mounted as a volume:
tar --exclude='dev/*' -cvf /guestpath/to/volume/rootfs.tar -C /path/to/target .
Ok, now prep for a second run. First load your tar file as a docker image.
cat /hostpath/to/volume/rootfs.tar | docker import - my_image:latest
Then, run docker using FROM my_image:latest and run the second debootstrap stage.
/debootstrap/debootstrap --second-stage
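Put into Dockerfile form (using the image tag from the import step above), that second run is just:
FROM my_image:latest
RUN /debootstrap/debootstrap --second-stage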
That might be obtuse, but it does work without requiring --privileged. You are effectively replacing running chroot with running a second docker container.
This does not address the OP's requirement of running chroot in a container without --privileged set, but it is an alternative method that may be of use.
See Docker Moby for heterogeneous rootfs builds. It creates a temp directory on the host and creates a rootfs in it using debootstrap, which needs sudo. THEN it creates a docker image using:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/bash"]
This is a common recipe for running a pre-made rootfs in a docker image. Once the image is built, it does not need special permissions. AND it's supported by the docker devel team.
Short answer: without privileged mode, no, there isn't a way.
Docker is targeted at microservices and is not a drop-in replacement for virtual machines. Having multiple installations in one container is definitely not congruent with that. Why not use multiple Docker containers instead?
