Mounting a CDROM repo during docker build

I'm building a Docker image that also involves a small yum install. I'm currently in a location where firewalls and access controls make docker pull, yum install, etc. extremely slow.
In my case, it's a JRE8 Docker image using this official image script.
My problem:
Building the image requires just two packages (gzip + tar), which together are under 1 MB (132 kB + 865 kB). But yum, run inside the docker build, will first download the repo metadata, which is over 80 MB. While 80 MB is generally small, here it took over an hour to download. If my colleagues need to build, this would be a sheer waste of productive time, not to mention the frustration.
Workarounds I'm aware of:
Since this image may not need the full power of yum, I can simply grab the *.rpm files, COPY them in the Dockerfile, and use rpm -i instead of yum
I can save the built image and locally distribute
I could also find the closest mirror for Docker Hub, but not for yum
My bet:
I have a copy of the Linux CD with about the same OS version
I can add commands in the Dockerfile to rename the existing *.repo files to *.repo.old
Add a cdrom.repo in /etc/yum.repos.d/ inside the container
Use yum to install the most common packages from the CDROM instead of the internet
My question:
I'm not able to figure out how to make the CDROM available as a yum repo inside the container during the build, without resorting to httpd.
On plain Linux I do this:
mkdir /cdrom
mount /dev/cdrom /cdrom
cat > /etc/yum.repos.d/cdrom.repo <<EOF
[cdrom]
name=CDROM Repo
baseurl=file:///cdrom
enabled=1
gpgcheck=1
gpgkey=file:///cdrom/RPM-GPG-KEY-oracle
EOF
Any help appreciated.

Docker containers cannot access host devices. I think you will have to write a wrapper script around the docker build command to do the following:
First mount the CD-ROM to a directory within the Docker build context (that would be a sub-directory of the directory where your Dockerfile lives).
Call docker build using the contents of this directory.
Unmount the CD-ROM.
so,
cd docker_build_dir
mkdir cdrom
mount /dev/cdrom cdrom
docker build "$@" .
umount cdrom
In the Dockerfile, you would copy the mounted contents into the image and install from there:
COPY cdrom /cdrom
RUN cd /cdrom && rpm -ivh rpms_you_need
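If you would rather keep using yum with the CD as a repository (the approach sketched in the question), a minimal Dockerfile sketch could look like the one below. The base image name is an assumption, the copied CD contents must contain repodata for yum to treat them as a repo, and the GPG key path is taken from the question:
FROM oraclelinux:7
# CD contents were mounted into the build context as ./cdrom by the wrapper script
COPY cdrom /cdrom
# Temporarily replace the network repos with a file:// repo pointing at the copied CD,
# install what we need, then restore the original repo configuration
RUN mv /etc/yum.repos.d /etc/yum.repos.d.old && \
mkdir /etc/yum.repos.d && \
printf '[cdrom]\nname=CDROM Repo\nbaseurl=file:///cdrom\nenabled=1\ngpgcheck=1\ngpgkey=file:///cdrom/RPM-GPG-KEY-oracle\n' > /etc/yum.repos.d/cdrom.repo && \
yum -y install gzip tar && \
rm -rf /cdrom /etc/yum.repos.d && \
mv /etc/yum.repos.d.old /etc/yum.repos.d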


Docker - how to ensure commit will persist a file?

I keep doing a pull, run, <UPLOAD FILE>, commit, tag, push cycle only to be dismayed that my file is gone when I pull the pushed container. My goal is to include an ipynb file with my image that serves as a README/ tutorial for my users.
Reading other posts, I see that commit is/isn't the way to add a file. What causes commit to persist/disregard a file? Am I supposed to use docker cp to add the file before committing?
If you need to publish your notebook file in a Docker image, use a Dockerfile, something like this:
FROM jupyter/datascience-notebook
COPY mynotebook.ipynb /home/jovyan/work
Then, once you have your notebook the way you want it, just run docker build, docker push. To try and help you a bit more, the reason you are having your problem is that the jupyter images store the notebooks in a volume. Data in a volume is not part of the image, it lives on the filesystem of the host machine. That means that a commit isn't going to save anything in the work folder.
Really, an ipynb is a data file, not an application. The right way to do this is probably to just upload the ipynb file to a file store somewhere and tell your users to download it, since they could use one docker image to run many data files. If you really want a prebuilt image using the workflow you described, you could just put the file somewhere else that isn't in a volume so that it gets captured in your commit.
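Concretely, with that two-line Dockerfile next to your notebook, publishing is just (the image name is a placeholder):
docker build -t yourusername/notebook-readme:latest .
docker push yourusername/notebook-readme:latest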
For those of you looking for somewhere to start with docker build, below is the Dockerfile I built with docker build -t your-image-name:your-new-tag .
Dockerfile
FROM jupyter/datascience-notebook:latest
MAINTAINER name <email>
# ====== PRE SUDO ======
ENV JUPYTER_ENABLE_LAB=yes
# If you run pip as sudo it continually prints errors.
# Tidyverse is already installed, and installing gorpyter installs the correct versions of other Python dependencies.
RUN pip install gorpyter
# commenting out our public repo
ENV R_HOME=/opt/conda/lib/R
# https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s09.html
# Looks like /usr/local/man is symlinking all R/W toward /usr/local/share/man instead
COPY python_sdk.ipynb /usr/local/share/man
COPY r_sdk.ipynb /usr/local/share/man
ENV NOTEBOOK_DIR=/usr/local/share/man
WORKDIR /usr/local/share/man
# ====== SUDO ======
USER root
# Spark requires Java 8.
RUN sudo apt-get update && sudo apt-get install openjdk-8-jdk -y
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
# If you COPY files into the same VOLUME that you mount in docker-compose.yml, then those files will disappear at runtime.
# `user_notebooks/` is the folder that gets mapped as a VOLUME to the user's local folder during runtime.
RUN mkdir /usr/local/share/man/user_notebooks
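To build and run the image above locally, something like this should work (the tag is a placeholder; the jupyter images serve the notebook server on port 8888):
docker build -t your-image-name:your-new-tag .
docker run -p 8888:8888 your-image-name:your-new-tag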

Install packages from DVD repository in Docker image

I am trying to build a Docker image that needs to install some packages from a DVD iso but I cannot mount the iso inside the container.
My Dockerfile is:
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest
USER root
WORKDIR /home
COPY tools.iso ./
COPY tools.repo /etc/yum.repos.d/
RUN mkdir /mnt/tools && \
mount -r ./tools.iso /mnt/tools && \
yum -y install make && \
umount /mnt/tools && \
rm tools.iso
CMD /bin/bash
When I run docker build it returns the following error:
mount: /home/tools.iso: failed to setup loop device: No such file or directory
I also tried adding the command modprobe loop before mounting the iso, but the logs say it returned code=1.
Is this the correct way to install packages from a DVD in Docker?
In general Docker containers can't access host devices and shouldn't mount additional filesystems. These restrictions are even tighter during a docker build sequence, because the various options that would let you circumvent them aren't available to you.
The most straightforward option is to write a wrapper script that does the mount and unmount for you, something like:
#!/bin/sh
if [ ! -d tools ]; then mkdir tools; fi
mount -r tools.iso tools
docker build "$@" .
umount tools
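You would run the wrapper from the directory that contains the Dockerfile and tools.iso, passing any docker build options through it, for example (the script name is hypothetical):
./build-with-dvd.sh -t mytools:latest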
Then you can have a two-stage Docker image where the first stage has access to the entire DVD contents and runs its installer, and the second stage actually builds the image you want to run. That would look something like (totally hypothetically)
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest AS install
COPY tools tools
RUN cd tools && ./install.sh /opt/whatever
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest
COPY --from=install /opt/whatever /opt
EXPOSE 8888
CMD ["/opt/whatever/bin/whateverd", "--bind", "0.0.0.0:8888", "--foreground"]
The obvious problem with this is that the entire contents of the DVD will be sent across a networked channel from the host to itself as part of the docker build sequence, and then copied again during the COPY step; if it does get into the gigabyte range then this starts to get unwieldy. You can use a .dockerignore file to exclude parts of it from the build context and speed this up a little.
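For example, a .dockerignore along these lines keeps the parts of the DVD the installer never reads out of the build context (the directory names are purely illustrative):
tools/images
tools/isolinux
tools/EFI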
Depending on what the software is, you should also consider whether it can run successfully in a Docker container (does it expect to be running multiple services with a fairly rigid communication pattern?); a virtual machine may prove to be a better deployment option, and "mount a DVD to a VM" is a much better-defined operation.

How to share apt packages across Docker containers

I want to use docker to help me stay organized with developing and deploying package/systems using ROS (Robot Operating System).
I want to have multiple containers/images for various pieces of software, and a single image that has all of the ROS dependencies. How can I have one container use the apt packages from my dependency master container?
For example, I may have the following containers:
MyRosBase: sudo apt-get install all of the ROS dependencies I need (there are many). Set up some other environment variables and various configuration items.
MyMoveApplication: Use the dependencies from MyRosBase and install any extra and specific dependencies to this image. Then run software that moves the robot arm.
MySimulateApplication: Use the dependencies from MyRosBase and install any extra and specific dependencies to this image. Then run software that simulates the robot arm.
How do I use apt packages from container in another container without reinstalling them on each container each time?
You can create your own images that serve as base images, using Dockerfiles.
Example:
mkdir ROSDocker
cd ROSDocker
vim Dockerfile-base
FROM debian:stretch-slim
RUN apt-get update && apt-get install -y dep1 dep2 depn
sudo docker build -t yourusername/ros-base:0.1 -f Dockerfile-base .
After the build is complete you can create another Dockerfile (for example Dockerfile-move) that starts from this base image.
FROM yourusername/ros-base:0.1
RUN apt-get update && apt-get install -y dep1 dep2 depn
Now build the second image:
sudo docker build -t yourusername/my-move-application:0.1 -f Dockerfile-move .
Now you have an image for your move application; each container you run from this image will have all the dependencies installed.
You can use a Docker image registry to manage your built images and share them between people/environments.
This pattern can be repeated for as many application images as you need.
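For instance, the simulation image from the question would follow the same shape; the file name, dependency names, and start command below are placeholders:
FROM yourusername/ros-base:0.1
# Extra dependencies that only the simulation application needs
RUN apt-get update && apt-get install -y sim-dep1 sim-dep2
# Default command that starts the simulation
CMD ["run-arm-simulation"]
Built the same way:
sudo docker build -t yourusername/my-simulate-application:0.1 -f Dockerfile-simulate .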

package manager for docker container running image busybox:uclibc

I want to install net-tools on one of my running containers, which is running the busybox:uclibc image. But this image doesn't have any package manager like apt-get or apk. Is there a way to do it or should I just make changes to my image?
Anything based on Busybox doesn't have a package manager. It's a single binary with a bunch of symlinks into it, and the way to add software to it is to write C code and recompile. That is, /bin/busybox literally is ls and sed and sh and cp and ...
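A quick way to see that from the host, for example (depending on the build the applets may be hard links or symlinks, but they all resolve to the single busybox binary):
docker run --rm busybox:uclibc ls -l /bin/ls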

How do I dockerize an existing application...the basics

I am using Windows and have boot2docker installed. I've downloaded images from Docker Hub and run basic commands. BUT
How do I take an existing application sitting on my local machine (let's just say it has one file, index.php, for simplicity), put it into a Docker image, and run it?
Imagine you have an existing Python 2 application, hello.py, with the following content:
print "hello"
You have to do the following things to dockerize this application:
Create a folder where you'd like to store your Dockerfile.
Create a file named "Dockerfile"
The Dockerfile consists of several parts which you have to define as described below:
Like a VM, an image has an operating system. In this example, I use Ubuntu 16.04. Thus, the first part of the Dockerfile is:
FROM ubuntu:16.04
Imagine you have a fresh Ubuntu VM: now you have to install some things to get your application working, right? This is done by the next part of the Dockerfile:
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
For Docker, you now have to create a working directory in the image. The commands that you want to execute later on to start your application will look for files (in our case, the Python file) in this directory. Thus, the next part of the Dockerfile creates a directory and defines it as the working directory:
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
As a next step, you copy the content of the folder where the Dockerfile is stored into the image. In our example, the hello.py file is copied to the directory we created in the step above.
COPY . /usr/src/app
Finally, the following line defines the command "python hello.py" that runs when a container is started from your image:
CMD [ "python", "hello.py" ]
The complete Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD [ "python", "hello.py" ]
Save the file and build the image by typing in the terminal:
$ docker build -t hello .
This will take some time. Afterwards, check whether the image "hello", as we named it in the build command, has been built successfully:
$ docker images
Run the image:
$ docker run hello
The output should be "hello" in the terminal.
This is a first start. When you use Docker for web applications, you have to configure ports etc.
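For example, a web application listening on port 5000 inside the container would add an EXPOSE 5000 line to its Dockerfile and be started with that port published; the image name and port numbers here are only an illustration:
$ docker run -p 8080:5000 my-web-app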
Your index.php is not really an application. The application is your Apache or nginx or even PHP's own server.
Because Docker uses features not available in the Windows core, you are running it inside an actual virtual machine. The only purpose for that would be training or preparing images for your real server environment.
There are two main concepts you need to understand for Docker: Images and Containers.
An image is a template composed of layers. Each layer contains only the differences from the previous layer, plus some metadata; each layer is in fact an image itself. You should always make your image from an existing base, using the FROM directive in the Dockerfile (see the Dockerfile reference docs).
A container is an instance of an image that has run or is currently running. When creating a container (a.k.a. running an image), you can map an internal directory from it to the outside. If there are files in both locations, the external directory overrides the one inside the image, but those files are not lost. To recover them, you can commit a container to an image (preferably after stopping it), then launch a new container from the new image, without mapping that directory.
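That recovery step looks roughly like this (the container and image names are placeholders):
$ docker stop my_container
$ docker commit my_container my_image:recovered
$ docker run -it my_image:recovered /bin/sh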
You'll need to build a Docker image first, using a Dockerfile. You'd probably set up Apache on it, tell the Dockerfile to copy your index.php file into Apache's document root, and expose a port.
See http://docs.docker.com/reference/builder/
See my other question for an example of a Dockerfile:
Switching users inside Docker image to a non-root user (this is for copying over a .war file into tomcat, similar to copying a .php file into apache)
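As a minimal sketch of that idea for the index.php case, using the official php image with Apache (the tag and host port are just examples):
FROM php:7.4-apache
# This image's Apache document root is /var/www/html
COPY index.php /var/www/html/
EXPOSE 80
Build and run it with:
docker build -t my-php-app .
docker run -p 8080:80 my-php-app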
First off, you need to choose a platform to run your application (for instance, Ubuntu), then install all the system tools/libraries necessary to run it. This can be achieved with a Dockerfile. Then push the Dockerfile and app to GitHub or Bitbucket. Later, you can set up automated builds on Docker Hub from GitHub or Bitbucket. The later part of this tutorial has more on that; if you know the basics, just fast-forward to 50:00.
