gcsfuse cannot mount when building a Docker container

I am trying to mount a storage bucket inside my container during docker build. I've read other threads (here, here) and understood that this may be a privileges problem that can be solved by adding the --privileged flag to docker run, but I would like to get the bucket mounted right at the build stage.
I attached to the container and checked that both fuse and gcsfuse are installed. GOOGLE_APPLICATION_CREDENTIALS is set and there is no problem accessing Google APIs. Here's the error I am getting:
Opening GCS connection...
Opening bucket...
Mounting file system...
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first
Dockerfile
FROM gcr.io/google-appengine/python
.
.
.
ENV GCSFUSE_REPO=gcsfuse-jessie
RUN apt-get update && apt-get install -y ca-certificates \
&& echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" > /etc/apt/sources.list.d/gcsfuse.list \
&& curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& apt-get update && apt-get install -y gcsfuse
# Config fuse
RUN chmod a+r /etc/fuse.conf
RUN perl -i -pe 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
# Alter permission
RUN chmod a+w mount-folder
RUN gcsfuse --implicit-dirs bucket mount-folder

For mounting a bucket using gcsfuse, you need to run docker with the --privileged flag. The question then becomes: how do you build the docker image without errors in that case?
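For reference, a docker run invocation that grants the container access to the FUSE device usually looks like one of the following (the image name is a placeholder); the second form is a narrower alternative to full --privileged mode:
docker run --privileged <image_name>
docker run --cap-add SYS_ADMIN --device /dev/fuse <image_name>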
Scenario:
When in Dockerfile you have the line:
RUN chmod +x launcher.sh
USER ubuntu
RUN mkdir mypath
RUN gcsfuse mybucket /home/username/mypath
EXPOSE 8000
CMD ["python3.6","manage.py","runmodwsgi"]
You see
Step 8/12 : USER ubuntu
---> Running in 0d880396010b
Removing intermediate container 0d880396010b
---> 390ed28965e1
Step 9/12 : RUN mkdir mypath
---> Running in b1f657562f24
Removing intermediate container b1f657562f24
---> 002ca425ac4c
Step 10/12 : RUN gcsfuse mybucket /home/username/mypath
---> Running in cae87fd47c1d
Using mount point: /home/username/mypath
Opening GCS connection...
Opening bucket...
Mounting file system...
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first
The command '/bin/sh -c gcsfuse mybucket /home/username/mypath' returned a non-zero code: 1
Solution:
Remove the line
RUN gcsfuse mybucket /home/username/mypath
from the Dockerfile and instead create a launcher script with the following content:
#!/bin/bash
gcsfuse mybucket /home/username/mypath
python3.6 manage.py runmodwsgi
and your new Dockerfile should look like this:
RUN chown $USER:$USER $HOME
RUN chmod +x launcher.sh
USER ubuntu
RUN mkdir mypath
EXPOSE 8000
CMD ["./launcher.sh"]
Output:
Step 8/11 : USER ubuntu
---> Running in 83c0e73295c2
Removing intermediate container 83c0e73295c2
---> fad81733c14d
Step 9/11 : RUN mkdir mypath
---> Running in 5d83416f035e
Removing intermediate container 5d83416f035e
---> 4575ba28dc44
Step 10/11 : EXPOSE 8000
---> Running in 081dc094c046
Removing intermediate container 081dc094c046
---> 546c8edd43c3
Step 11/11 : CMD ["./launcher.sh"]
---> Running in 6780900e1b2d
Removing intermediate container 6780900e1b2d
---> 9ef9fd5af68c
Successfully built 9ef9fd5af68c
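The image now builds cleanly because the mount is deferred to container start. The launcher still needs FUSE access at that point, so the built image has to be run with the privileged flag (the image name is a placeholder), for example:
docker run --privileged -p 8000:8000 <image_name>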

Related

How to create directory in docker image?

I tried mkdir -p but it didn't work.
I have the following Dockerfile:
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
RUN echo $PLUGIN_DIR
RUN mkdir -p $PLUGIN_DIR
RUN ls $PLUGIN_DIR
# WORKDIR /var/jenkins_home/plugins # Can't use this, as it changes the permission to root
# which breaks the plugin installation step
# # COPY plugins.txt /usr/share/jenkins/plugins.txt
# # RUN jenkins-plugin-cli -f /usr/share/jenkins/plugins.txt --verbose
#
#
# # disable the setup wizard as we will set up jenkins as code
# ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
#
# ENV CASC_JENKINS_CONFIG /configs/jcasc.yaml
The build fails!
docker build -t jenkins:test.1 .
Sending build context to Docker daemon 51.2kB
Step 1/5 : FROM jenkins/jenkins:2.363-jdk11
---> 90ff7cc5bfd1
Step 2/5 : ENV PLUGIN_DIR /var/jenkins_home/plugins
---> Using cache
---> 0a158958aab0
Step 3/5 : RUN echo $PLUGIN_DIR
---> Running in ce56ef9146fc
/var/jenkins_home/plugins
Step 4/5 : RUN mkdir -p $PLUGIN_DIR
---> Using cache
---> dbc4e12b9808
Step 5/5 : RUN ls $PLUGIN_DIR
---> Running in 9a0edb027862
I need this because Jenkins deprecated the old plugin installation method; the new CLI installs plugins to /usr/share/jenkins/ref/plugins instead.
Also:
$ docker run -it --rm --entrypoint /bin/bash --name jenkins jenkins:test.1
jenkins@7ad71925f638:/$ ls /var/jenkins_home/
jenkins@7ad71925f638:/$
The official Jenkins image on Docker Hub declares VOLUME /var/jenkins_home, so subsequent changes to that directory (even in derived images) are discarded.
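You can confirm the volume declaration on the base image yourself, for example (the exact output formatting may vary by Docker version):
docker image inspect jenkins/jenkins:2.363-jdk11 --format '{{ .Config.Volumes }}'
map[/var/jenkins_home:{}]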
To work around this, you can run the mkdir from the ENTRYPOINT instead.
To verify that it is working, you can add a sleep so you can enter the container and check. It works!
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
RUN echo $PLUGIN_DIR
USER root
RUN echo "#!/bin/sh \n mkdir -pv $PLUGIN_DIR && sleep inf" > ./mkdir.sh
RUN chmod a+x ./mkdir.sh
USER jenkins
ENTRYPOINT [ "/bin/sh", "-c", "./mkdir.sh"]
After that, run:
docker build . -t <image_name>
docker run -d --name <container_name> <image_name>
docker exec -it <container_name> bash
and you will see your directory inside the container.
Sources:
https://forums.docker.com/t/simple-mkdir-p-not-working/42179
https://hub.docker.com/_/jenkins

Docker build creates and tags an image that docker run cannot find

I have been given a project that is in a Docker container. I have managed to build the Docker container image and tag it, but when I run it I have problems.
bash-5.1$ docker build -t game:0.0.1 -t game:latest .
Sending build context to Docker daemon 2.584MB
Step 1/12 : FROM nvidia/cuda:10.2-base-ubuntu18.04
---> 84b82c2f5736
Step 2/12 : MAINTAINER me
---> Using cache
---> b8a86a8860d5
Step 3/12 : EXPOSE 5006
---> Using cache
---> fabdfc06768c
Step 4/12 : EXPOSE 8888
---> Using cache
---> a6f8585ce52d
Step 5/12 : ENV DEBIAN_FRONTEND noninteractive
---> Using cache
---> c4dd4de87fdc
Step 6/12 : ENV WD=/home/game/
---> Using cache
---> 871163f5db29
Step 7/12 : WORKDIR ${WD}
---> Using cache
---> 36678a12e551
Step 8/12 : RUN apt-get -y update && apt-get -y upgrade && apt-get -y install git ssh pkg-config python3-pip python3-opencv
---> Using cache
---> 4b83b4944484
Step 9/12 : COPY requirements.txt /requirements.txt
---> Using cache
---> 8e1db9206e80
Step 10/12 : RUN cd / && python3 -m pip install --upgrade pip && pip3 install -r requirements.txt
---> Using cache
---> e096029d458a
Step 11/12 : CMD ["start.py"]
---> Using cache
---> 795bb5a65bc8
Step 12/12 : ENTRYPOINT ["python3"]
---> Using cache
---> 59b472b693f2
Successfully built 59b472b693f2
Successfully tagged game:0.0.1
Successfully tagged game:latest
bash-5.1$ docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix game:latest
Unable to find image 'game:latest' locally
docker: Error response from daemon: pull access denied for game, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
bash-5.1$ sudo docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix game:latest
It doesn't seem to find the game:latest image even though the output of the above command says it just created it.
I also tried this after logging in to my session.
I tried to run 59b472b693f2 (what is it, a container hash?):
bash-5.1$ docker run 59b472b693f2
python3: can't open file 'start.py': [Errno 2] No such file or directory
bash-5.1$ ls
data_collection demonstrateur.ipynb demo.py Dockerfile examples README.md requirements.txt serious_game start.py test
bash-5.1$
Here is the list of available images:
bash-5.1$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
game 0.0.1 7e7ad7272cf0 15 minutes ago 1.77GB
game latest 7e7ad7272cf0 15 minutes ago 1.77GB
ubuntu latest ba6acccedd29 7 weeks ago 72.8MB
hello-world latest feb5d9fea6a5 2 months ago 13.3kB
nvidia/cuda 10.2-base-ubuntu18.04 84b82c2f5736 2 months ago 107MB
bash-5.1$ docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix game:latest
python3: can't open file 'start.py': [Errno 2] No such file or directory
bash-5.1$
I tried adding a COPY for start.py in the Dockerfile but still got the same error:
Removing intermediate container 10f2d7506d17
---> 1b776923e5a9
Step 11/13 : COPY start.py /start.py
---> 172c81ff16e9
Step 12/13 : CMD ["start.py"]
---> Running in c7217e2e0f21
Removing intermediate container c7217e2e0f21
---> eaf947ffa0b1
Step 13/13 : ENTRYPOINT ["python3"]
---> Running in 77e2e7b90658
Removing intermediate container 77e2e7b90658
---> 924d8c473e36
Successfully built 924d8c473e36
Successfully tagged seriousgame:0.0.1
Successfully tagged seriousgame:latest
bash-5.1$ docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix seriousgame:latest
python3: can't open file 'start.py': [Errno 2] No such file or directory
Here is my Dockerfile:
#############################################################################################################
#
# Container creation
#
##############################################################################################################
FROM nvidia/cuda:10.2-base-ubuntu18.04
MAINTAINER me
EXPOSE 5006
EXPOSE 8888
ENV DEBIAN_FRONTEND noninteractive
ENV WD=/home/game/
WORKDIR ${WD}
# Add git and ssh
RUN apt-get -y update && \
apt-get -y upgrade && \
apt-get -y install git ssh pkg-config python3-pip python3-opencv
# Python dependencies
COPY requirements.txt /requirements.txt
RUN cd / && \
python3 -m pip install --upgrade pip && \
pip3 install -r requirements.txt
COPY start.py /start.py
CMD ["start.py"]
ENTRYPOINT ["python3"]
Here are all the files within my project:
bash-5.1$ ls
data_collection demonstrateur.ipynb demo.py Dockerfile examples README.md requirements.txt serious_game start.py test
In the first block of code you posted it says Successfully tagged game:latest and Successfully tagged game:0.0.1, but in your docker images output those tags point to a different image ID than the one you just built. My guess is that you tried to rename the image, but the image ID didn't change.
You can try to remove the old image with the docker image rm command (docs) and then build it again; the sequence of commands to execute is in the code block below. Your data should be safe because I see that you're using volumes (I assume you know what you're doing).
docker image rm 59b472b693f2
docker build -t game:0.0.1 -t game:latest .
The string 59b472b693f2 is the unique ID of the image in your local Docker environment (you can think of it like an ID used for indexing in a database).
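If you want to double-check which image ID a tag currently points to before removing anything, something like this works (the tag name is just an example):
docker image inspect game:latest --format '{{ .Id }}'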

docker container couldn't locate file but file is present

I am trying to package gotty into a Docker container but found a weird behavior.
$ tree
.
├── Dockerfile
├── gotty
└── gotty_linux_amd64.tar.gz
Dockerfile:
FROM alpine:3.11.3
RUN mkdir -p /home/gotty
WORKDIR /home/gotty
COPY gotty /home/gotty
RUN chmod +x /home/gotty/gotty
CMD ["/bin/sh"]
The image was built without issue:
[strip...]
Removing intermediate container 0dee1ab645e0
---> b5c6957d36e1
Step 7/9 : COPY gotty /home/gotty
---> fb1a1adec04a
Step 8/9 : RUN chmod +x /home/gotty/gotty
---> Running in 90031140da40
Removing intermediate container 90031140da40
---> 609e1a5453f7
Step 9/9 : CMD ["/bin/sh"]
---> Running in 30ce65cd4339
Removing intermediate container 30ce65cd4339
---> 099bc22ee6c0
Successfully built 099bc22ee6c0
The chmod changed the file mode successfully. So /home/gotty/gotty is present.
$ docker run -itd 099bc22ee6c0
9b219a6ef670b9576274a7b82a1b2cd813303c6ea5280e17a23a917ce809c5fa
$ docker exec -it 9b219a6ef670 /bin/sh
/home/gotty # ls
gotty
/home/gotty # ./gotty
/bin/sh: ./gotty: not found
When I go into the container, the gotty binary is there and I run it with a relative path. Why the "not found" error?
You are running into one of the more notorious problems with Alpine: musl instead of glibc. Check the output of ldd gotty. Try adding libc6-compat:
apk add libc6-compat
and see if that fixes it.
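In Dockerfile terms that is one extra RUN line before the binary is used. A minimal sketch of the same image with the package added, assuming ldd shows the binary expects the glibc loader:
FROM alpine:3.11.3
# libc6-compat provides glibc compatibility symlinks on musl-based Alpine
RUN apk add --no-cache libc6-compat
RUN mkdir -p /home/gotty
WORKDIR /home/gotty
COPY gotty /home/gotty
RUN chmod +x /home/gotty/gotty
CMD ["/bin/sh"]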

Docker build and run with Miniconda environments on Ubuntu host

I am in the process of creating a Docker container that has a Miniconda environment set up with some packages (pip and conda). Dockerfile:
# Use an official Miniconda runtime as a parent image
FROM continuumio/miniconda3
# Create the conda environment.
# RUN conda create -n dev_env Python=3.6
RUN conda update conda -y \
&& conda create -y -n dev_env Python=3.6 pip
ENV PATH /opt/conda/envs/dev_env/bin:$PATH
RUN /bin/bash -c "source activate dev_env" \
&& pip install azure-cli \
&& conda install -y nb_conda
The behavior I want is that when the container is launched, it automatically switches to the "dev_env" conda environment, but I haven't been able to get this to work. Logs:
dparkar@mymachine:~/src/dev/setupsdk$ docker build .
Sending build context to Docker daemon 2.56kB
Step 1/4 : FROM continuumio/miniconda3
---> 1284db959d5d
Step 2/4 : RUN conda update conda -y && conda create -y -n dev_env Python=3.6 pip
---> Using cache
---> cb2313f4d8a8
Step 3/4 : ENV PATH /opt/conda/envs/dev_env/bin:$PATH
---> Using cache
---> 320d4fd2b964
Step 4/4 : RUN /bin/bash -c "source activate dev_env" && pip install azure-cli && conda install -y nb_conda
---> Using cache
---> 3c0299dfbe57
Successfully built 3c0299dfbe57
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57
(base) root@3db861098892:/# source activate dev_env
(dev_env) root@3db861098892:/# exit
exit
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 source activate dev_env
[FATAL tini (7)] exec source failed: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash source activate dev_env
/bin/bash: source: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash "source activate dev_env"
/bin/bash: source activate dev_env: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash -c "source activate dev_env"
dparkar@mymachine:~/src/dev/setupsdk$
As you can see above, when I am inside the container I can successfully run source activate dev_env and the environment switches over, but I want this to happen automatically when the container is launched.
I also run source activate dev_env in the Dockerfile at build time; I am not sure whether that has any effect.
You should use CMD for anything that needs to happen at runtime.
Anything after RUN is only executed at image build time, not when you actually run the container.
The shell used to run such commands is closed at the end of the image build, so the environment activation does not persist.
As such, your additional line might look like this:
CMD ["conda activate <your-env-name> && <other commands>"]
where <other commands> are other commands you might need at runtime after the environment activation.
This Dockerfile worked for me:
# start with miniconda image
FROM continuumio/miniconda3
# setting the working directory
WORKDIR /usr/src/app
# Copy the file from your host to your current location in container
COPY . /usr/src/app
# Create the conda environment from requirements.yml; the environment name is set inside that file, in this case "myenv"
RUN conda env create --file requirements.yml
# Activate the environment named "myenv" with shell command
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
# Make sure the environment is activated by testing if you can import flask or any other package you have in your requirements.yml file
RUN echo "Make sure flask is installed:"
RUN python -c "import flask"
# exposing port 8050 for interaction with local host
EXPOSE 8050
#Run your application in the new "myenv" environment
CMD ["conda", "run", "-n", "myenv", "python", "app.py"]

Docker build failed to copy a file

Hi, I am new to Docker and trying to wrap my head around how to clone a private repo from GitHub. I found some interesting discussion in issues/6396.
I followed one of the posts and my Dockerfile looks like this:
FROM python:2.7 as builder
# Deploy app's code
#RUN set -x
RUN mkdir /code
RUN mkdir /root/.ssh/
RUN ls -l /root/.ssh/
# The SSH_PRIVATE_KEY build argument must be a path or URL
# If it's a path, it MUST be in the docker build dir, and NOT in .dockerignore!
ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
RUN echo "${SSH_PRIVATE_KEY}"
# Set up root user SSH access for GitHub
ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1 | grep -i auth
# Test SSH access (this returns false even when successful, but prints results)
RUN git clone git@github.com:***********.git
COPY . /code
WORKDIR /code
ENV PYTHONPATH /datawarehouse_process
# Setup app's virtualenv
RUN set -x \
&& pip install tox \
&& tox -e luigi
WORKDIR /datawarehouse_process
# Finally, remove the ${SSH_PRIVATE_KEY} if it was a file, so it's not in /app!
# It can also be removed from /root/.ssh/id_rsa, but you're probably not going
# to COPY that directory into the runtime image.
RUN rm -vf ${SSH_PRIVATE_KEY} /root/.ssh/id*
#FROM python:2.7 as runtime
#COPY --from=builder /code /code
When I run docker build . from the correct location I get the error below. Any clues would be appreciated.
c:\Domain\Project\Docker-Images\datawarehouse_process>docker build .
Sending build context to Docker daemon 281.7MB
Step 1/15 : FROM python:2.7 as builder
---> 43c5f3ee0928
Step 2/15 : RUN mkdir /code
---> Running in 841fadc29641
Removing intermediate container 841fadc29641
---> 69fdbcd34f12
Step 3/15 : RUN mkdir /root/.ssh/
---> Running in 50199b0eb002
Removing intermediate container 50199b0eb002
---> 6dac8b120438
Step 4/15 : RUN ls -l /root/.ssh/
---> Running in e15040402b79
total 0
Removing intermediate container e15040402b79
---> 65519edac99a
Step 5/15 : ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
---> Running in 10e0c92eed4f
Removing intermediate container 10e0c92eed4f
---> 707279c92614
Step 6/15 : RUN echo "${SSH_PRIVATE_KEY}"
---> Running in a9f75c224994
C:\Users\MyUser\.ssh\id_rsa
Removing intermediate container a9f75c224994
---> 96e0605d38a9
Step 7/15 : ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
ADD failed: stat /var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or directory
From the Documentation:
ADD obeys the following rules:
The path must be inside the context of the build; you cannot ADD ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
You are passing an absolute path to ADD, but you can see from the error:
/var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or directory
It is being looked for within the build context. Again from the documentation:
Traditionally, the Dockerfile is called Dockerfile and located in the root of the context.
So, you need to place the RSA key somewhere in the directory tree whose root is the path you pass to your docker build command. If you are running docker build ., your ARG statement would change to something like:
ARG SSH_PRIVATE_KEY=.ssh/id_rsa
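Putting it together, assuming you build from c:\Domain\Project\Docker-Images\datawarehouse_process and .ssh is not excluded by .dockerignore, the host-side steps would look roughly like this (paths are illustrative):
mkdir .ssh
copy %USERPROFILE%\.ssh\id_rsa .ssh\id_rsa
docker build --build-arg SSH_PRIVATE_KEY=.ssh/id_rsa .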
