Dockerfile Preload Files & Volumes - docker

I'm not sure that I am using the correct terminology here but I thought I would at least try to explain my issue:
Goal:
I want to create a docker image that people can use that has all the files preloaded and ready to go, because basically they will only have to edit a config.json file. So I want people to be able to map something like "/my/host/path/config:/config" in their docker-compose file (or on the CLI) and, when they spin up the image, have all the files available at that location on their host machine in persistent storage.
Issue:
When you spin up the image for the first time, the directories get created on the host, but there are no files for the end user to modify. So they are left manually copying files into this folder to make it work, and that is not acceptable in my humble opinion.
Quick overview of the image:
Python script that uses Selenium to perform some actions on a website
Dockerfile:
FROM python:3.9.2
RUN apt-get update
RUN apt-get install -y cron wget apt-transport-https
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# Set TZ
RUN apt-get install -y tzdata
ENV TZ=America/New_York
RUN mkdir -p /config
COPY . /config
WORKDIR /config
RUN pip install --no-cache-dir -r requirements.txt
# set display port to avoid crash
ENV DISPLAY=:99
# Custom Env Vars
ENV DOCKER_IMAGE=true
# Setup Volumes
VOLUME [ "/config" ]
# Run the command on container startup
CMD ["python3", "-u", "/config/RunTests.py"]
Any help would be greatly appreciated.

This is not how Docker bind mounts work. Docker bind mounts use the mount system call under the hood, and mounting a host folder over a container path hides whatever content the image has at that path.
Running docker run -v /my/config:/config container will therefore always hide (override) the content inside your container.
By contrast, if you use an empty Docker volume (created by the docker volume command), Docker will copy the files from the image into the volume before binding it.
So docker run -v config_volume:/config container will copy your config files into the volume the first time. Then you can reuse the volume with --volumes-from or mount it on another container.
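For instance, a minimal sketch of that (config_volume and your-image are placeholder names, not taken from the question):
# create an empty named volume; Docker pre-populates it from the image's /config on first mount
docker volume create config_volume
docker run -v config_volume:/config your-image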
To learn more about this, take a look at this issue.
Another workaround is to bind your volume to a folder while creating it. More info here.
docker volume create --driver local \
--opt type=none \
--opt device=$configVolumePath \
--opt o=bind \
config_vol
For me the best solution is to copy or symlink your configuration files on container startup.
You can do so by adding the cp or ln command to your entrypoint script.
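A minimal sketch of that approach, assuming the image keeps its defaults in a hypothetical /defaults directory (so the Dockerfile would COPY . /defaults instead of COPY . /config) and uses this script as its entrypoint:
#!/bin/sh
# entrypoint.sh (hypothetical): seed the bind-mounted /config on first start
if [ ! -f /config/config.json ]; then
    cp -r /defaults/. /config/
fi
exec python3 -u /config/RunTests.py
That way an empty host folder mounted at /config gets populated with the default files the first time the container starts, and the user's edits are left untouched on later starts.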

Related

Copy container working directory data to local host directory

I am new to Docker. I built a docker image using the code below:
FROM ubuntu
ADD requirements.txt .
RUN apt-get update && \
apt-get install -y python3 && \
apt install python3-pip -y && \
apt-get install -y libglib2.0-0 \
libsm6 \
libxrender1 \
libxext6
RUN python3 -m pip install -r requirements.txt
COPY my_code /container/home/user
ENV PYTHONPATH /container/home/user/program_dir_1
RUN apt install -y libgl1
WORKDIR /container/home/user
CMD python3 program_dir_1/program_dir_2/program_dir_3/main.py
Now I have a local dir /home/host/local_dir. I want to write/copy all the file that the program creates during the runtime to this local dir.
I am using the below command to bind the volume
docker run -it --volume /home/host/local_dir:/container/home/user my_docker_image
It is giving me an error that
program_dir_1/program_dir_2/program_dir_3/main.py [Errno 2] No such file or directory
When I run the below command
docker run -it --volume /home/host/local_dir:/container/home/user my_docker_image pwd
It is giving the path to the host dir which I linked. It seems like it is also switching the working directory to the host volume to which I am linking.
Can anyone please help me to understand how to copy all the files and data generated using the working directory of the container to the directory of the host?
PS: I went through the StackOverflow questions below and tried to understand them, but didn't have any success.
How to write data to host file system from Docker container # I found one solution there but got an error when running:
docker run -it --rm --volume v_mac:/home/host/local_dir --volume v_mac:/container/home/user my_docker_image cp -r /home/host/local_dir /container/home/user
Docker: Copying files from Docker container to host # This is not of much use, as I assume the container should be in a running state. In my case it exited after the program completed.
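A common workaround, sketched below under the assumption that the program can write its results into a dedicated output directory (output/ is a hypothetical name), is to bind-mount only that subdirectory so the code copied to /container/home/user is not hidden:
docker run -it --volume /home/host/local_dir:/container/home/user/output my_docker_image
The bind mount then only covers the output folder, so main.py and the rest of the working directory stay visible inside the container, while anything written to output/ lands in /home/host/local_dir on the host.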

Installing Kubernetes in Docker container

I want to use Kubeflow to check it out and see if it fits my projects. I want to deploy it locally as a development server so I can try it, but I have Windows on my computer and Kubeflow only works on Linux. I'm not allowed to dual boot this computer. I could install a virtual machine, but I thought it would be easier to use Docker, and oh boy was I wrong. So, the problem is, I want to install Kubernetes in a Docker container. Right now this is the Dockerfile I've written:
# Docker file with local deployment of Kubeflow
FROM ubuntu:18.04
ENV USER=Joao
ENV PASSWORD=Password
ENV WK_DIR=/home/${USER}
# Setup Ubuntu
RUN apt-get update -y
RUN apt-get install -y conntrack sudo wget
RUN useradd -rm -d /home/${USER} -s /bin/bash -g root -G sudo -u 1001 -p ${PASSWORD} ${USER}
WORKDIR ${WK_DIR}
# Installing Docker CE
RUN apt-get install -y apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
# Installing Kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
# Installing Minikube
RUN curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
RUN install minikube-linux-amd64 /usr/local/bin/minikube
ENV PATH="${PATH}:${WK_DIR}"
COPY start.sh start.sh
CMD sh start.sh
With this, just to make the deployment easier, I also have a docker-compose.yaml that looks like this:
services:
  kf-local:
    build: .
    volumes:
      - path/to/folder:/usr/kubeflow
    privileged: true
And start.sh looks like this:
service docker start
minikube start \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-api-audiences=api \
--driver=docker
The problem is, whenever I try running this I get the error:
X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
I've also tried creating a user and running it from there, but then I'm not able to run sudo. Any idea how I could install Kubernetes in a Docker container?
As you thought, you are right that using a VM would be the easy way to test this out.
Instead of setting up Kubernetes inside Docker, you can use a Linux system container for development and testing.
There is a Linux container technology called LXC. Docker is a kind of application container, while, in simple terms, LXC is like a VM for local development and testing: you can install your stuff into it directly rather than setting up an application inside a Docker image.
Read some details about LXC: https://medium.com/@harsh.manvar111/lxc-vs-docker-lxc-101-bd49db95933a
You can also run it on Windows and try it out at: https://linuxcontainers.org/
If you have read the documentation of Kubeflow, there is also another option: Multipass.
Multipass creates a Linux virtual machine on Windows, Mac or Linux
systems. The VM contains a complete Ubuntu operating system which can
then be used to deploy Kubernetes and Kubeflow.
Learn more about Multipass : https://multipass.run/#install
Insufficient user permissions on the docker group and the minikube directory cause this error ("X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.").
You can fix that error by adding your user to the docker group and setting permissions to the minikube profile directory (change the $USER with your username in the two commands below):
sudo usermod -aG docker $USER && newgrp docker
sudo chown -R $USER $HOME/.minikube; chmod -R u+wrx $HOME/.minikube
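In the Dockerfile from the question, that translates roughly into something like the following sketch (the docker group exists because docker-ce is installed earlier; the passwordless-sudo line is one assumed way around the "not able to run sudo" problem):
# add the build-time user to the docker group and allow sudo without a password
RUN usermod -aG docker ${USER} && \
    echo "${USER} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
USER ${USER}
CMD sh start.sh
minikube then runs as that user instead of root, which avoids the DRV_AS_ROOT error; start.sh would also need to start the daemon with sudo service docker start.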

How to exchange files between docker container and local filesystem?

I have a TypeScript code that reads the contents of a directory and has to delete them one by one at some intervals.
Everything works fine locally. I made a Docker container for my code and wanted to achieve the same thing; however, I realized that the directory contents are the same ones that existed at the time the image was built.
From my understanding, the connection between the Docker container and the local file system is missing.
I have been wandering around bind and volume options, and I came across the following simple tutorial:
How To Share Data Between the Docker Container and the Host
According to the previous tutorial, theoretically, I would be able to achieve my goal:
If you make any changes to the ~/nginxlogs folder, you’ll be able to see them from inside the Docker container in real-time as well.
However, I followed exactly the same steps but still couldn't see the changes made locally reflected in the docker container, or vice versa.
My question is: How can I access my local file system from a docker container to read/write/delete files?
Update
This is my dockerfile
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
nodejs; \
apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
An easy way is to volume mount on the docker run command:
docker run -it -v /<Source Dir>/:/<Destination Dir> <image_name> bash
Another way is using docker-compose.
Let's try it with docker-compose. Put your Dockerfile and docker-compose file in the same directory.
The main focus is:
volumes:
  - E:\dirToMap:/work
docker-compose.yaml
version: "3"
services:
ampervue:
build:
context: ./
image: <Image Name>
container_name: ampervueservice
volumes:
- E:\dirToMap:/vol1
ports:
- 8080:8080
And add a VOLUME instruction in the Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
nodejs; \
apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
VOLUME /vol1
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
and run the following command to bring the container up:
docker-compose -f "docker-compose-sr.yml" up -d --build
The examples below come directly from the docs.
The --mount and -v examples below produce the same result. You can't run them both unless you remove the devtest container after running the first one.
with -v:
docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest
with --mount:
docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app nginx:latest
This is where you have to type your 2 different paths:
-v /path/from/your/host:/path/inside/the/container
   <-------host------->:<--------container------->
--mount type=bind,source=/path/from/your/host,target=/path/inside/the/container
                         <-------host------->        <--------container------->

Permissions in Docker volume

I am struggling with permissions on a Docker volume; I get access denied when writing.
This is a small part of my docker file
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user1 /node_api/* /home/user1/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker builds command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script creates the folder /home/user01/.client and copies some files into it. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. And as soon as I map my volume, I get permission denied, so the Python script is not able to write anymore.
So, at the end of my Dockerfile, this instruction combined with the mapping in the docker run command gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are some more permissions needed for "user01"?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping at the moment of executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
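If the permission error comes back, another common pattern (a sketch, not part of the answer above) is to create the target directory in the Dockerfile and chown it to the non-root user before the USER instruction, so that a named volume mounted there inherits that ownership on first use:
# create the mount point owned by the unprivileged user before switching to it
RUN mkdir -p /home/$USER/.client && chown -R $USER:$USER /home/$USER/.client
USER $USER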

How to build my own custom Ubuntu ISO with docker

As a background, I have a custom Ubuntu LiveUSB that will automatically boot into "Try it" and the OS will have pre-installed apps that I have burned into the ISO itself.
It works great, but I keep running into problems automating the process.
Rather than doing it by hand every time (because my bash scripts keep giving different results when I try them again for the first time in a while), I was thinking of generating a Docker image with the unpacked files from the ISO ready for modification, then running a container with a script in a volume (docker run -v $(pwd)/bin:/data myimage /data/myscript.sh) that would modify the contents, pack them back up into an ISO, and save the ISO in /data for me to grab and distribute.
FROM ubuntu:16.04
MAINTAINER Myself
ENV ISO_FILE="ubuntu-16.04.3-desktop-amd64.iso" \
OS_VERSION="16.04.3"
RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y squashfs-tools genisoimage gnupg2 \
nodejs rsync build-essential libc6-dev-i386 \
wget
# Make directories
RUN mkdir /data
RUN mkdir -p /root/workspace
# Download ubuntu iso
WORKDIR /root/workspace
RUN wget http://releases.ubuntu.com/$OS_VERSION/$ISO_FILE
RUN wget http://releases.ubuntu.com/$OS_VERSION/SHA256SUMS
RUN wget http://releases.ubuntu.com/$OS_VERSION/SHA256SUMS.gpg
# Check hash (default /bin/sh errors out)
RUN /bin/bash -c "sha256sum -c <(grep $ISO_FILE SHA256SUMS)"
# Check signatures
RUN gpg2 --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xFBB75451 0xEFE21092
RUN gpg2 --verify SHA256SUMS.gpg SHA256SUMS
# Create mount
RUN mkdir mnt
# Here is where the docker build fails
RUN mount -o loop $ISO_FILE mnt
# Extract official DVD
RUN mkdir extract-cd
RUN rsync --exclude=/casper/filesystem.squashfs -a mnt/ extract-cd
RUN unsquashfs mnt/casper/filesystem.squashfs
RUN mv squashfs-root edit
RUN umount mnt
# Insert buildscript and make it executable
COPY bin/buildscript.sh /root/workspace/edit/buildscript.sh
RUN chmod +x edit/buildscript.sh
# Prepare to chroot into unsquashed ubuntu image, and run buildscript.sh
RUN mount -o bind /run/ edit/run
RUN mount --bind /dev/ edit/dev
RUN chroot edit/ ./buildscript.sh
# unmount the mountpoints and delete the buildscript.
RUN umount edit/dev
RUN umount edit/run
RUN rm edit/buildscript.sh
And the buildscript.sh I run in chroot inside the builder (or fail to run) is:
#!/bin/bash
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devpts none /dev/pts
export HOME=/root
export LC_ALL=C
add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe multiverse"
curl -sL https://deb.nodesource.com/setup_8.x | bash -
apt install -y nodejs
apt upgrade -y
apt install -y chromium-browser git
apt install -y language-pack-ja language-pack-gnome-ja language-pack-ja-base language-pack-gnome-ja-base
localectl set-locale LANG=ja_JP.UTF-8 LANGUAGE="ja_JP:ja"
source /etc/default/locale
mkdir src
apt autoclean
rm -rf /tmp/* ~/.bash_history
umount /proc || umount -lf /proc
umount /sys
umount /dev/pts
exit
Since this didn't work, I found online that the build-run-commit method might work, so I changed the end of the Dockerfile to the following:
# Create mount
RUN mkdir mnt
RUN mkdir extract-cd
COPY bin/buildscript.sh /root/workspace/buildscript.sh
COPY bin/build_run_step2.sh /root/workspace/build_run_step2.sh
RUN chmod +x buildscript.sh
RUN chmod +x build_run_step2.sh
and then the "run" step of build run commit is the build_run_step2.sh which has the following (run with --privileged)
#!/bin/bash
cd /root/workspace
mount -o loop $ISO_FILE mnt
# Extract official DVD
rsync --exclude=/casper/filesystem.squashfs -a mnt/ extract-cd
unsquashfs mnt/casper/filesystem.squashfs
mv squashfs-root edit
umount mnt
mv ./buildscript.sh edit/buildscript.sh
# Prepare to chroot into unsquashed ubuntu image, and run buildscript.sh
mount -o bind /run/ edit/run
mount --bind /dev/ edit/dev
chroot edit/ ./buildscript.sh
# unmount the mountpoints and delete the buildscript.
umount edit/dev
umount edit/run
rm edit/buildscript.sh
Which works... but then I run into a problem:
Running apt-get update gets errors:
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial/InRelease Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/xenial-security/InRelease Temporary failure resolving 'security.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial-updates/InRelease Temporary failure resolving 'archive.ubuntu.com'
and checking ping gets me "no host found" while chrooted.
So, one major question and one smaller question (if the major question has no answer):
How can I use docker to create an image with an opened up liveCD ready for customizing and then use docker run on that image to chroot, modify, repackage, and extract the new iso? (I know the commands to do that normally, so rather, I am wondering if/why all these things are not working in docker... aka what are the limitations of chrooting in docker?)
How can I get the chroot system within the container to reach dns so it can run the updates via URLS? (I attempted ping 8.8.8.8 from within the chroot in the container and the pings were coming back fine.)
So in case anyone finds this post: the way to resolve the DNS issue is to make sure the resolv.conf file inside the chroot actually points to working DNS servers. Some tools like Cubic already do this for you.
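In the build_run_step2.sh script above, that amounts to copying the container's resolver configuration into the chroot before entering it (a sketch):
# make DNS work inside the chroot by giving it the container's resolv.conf
cp /etc/resolv.conf edit/etc/resolv.conf
chroot edit/ ./buildscript.sh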
This is likely to happen if you have not updated the sources list. Try:
sudo apt update
