Docker image - intermediate container issue

I have a Dockerfile which downloads a file with wget, and in the next step, when I try to RUN unzip, I get the error below.
Step 10/27 : RUN wget http://artifactory.orbit8.com/artifactory/build-dependencies/7.2.0/ext-7.2.0.zip -P /var/jenkins_home/Extjs_7.2.0
---> Using cache
---> bb39b7a46fd1
Step 11/27 : RUN unzip /var/jenkins_home/Extjs_7.2.0/epa-7.2.0.zip -d /var/jenkins_home/Extjs_7.2.0
---> Running in 515e6e4e5456
unzip: cannot find or open /var/jenkins_home/Extjs_7.2.0/epa-7.2.0.zip, /var/jenkins_home/Extjs_7.2.0/epa-7.2.0.zip.zip or /var/jenkins_home/Extjs_7.2.0/epa-7.2.0.zip.ZIP.

As BMitch said, you got the file name wrong: wget downloads ext-7.2.0.zip, but your unzip step references epa-7.2.0.zip.
Also, don't forget to execute wget, unzip, and rm in a single layer, otherwise you waste image space (see Multiple RUN vs. single chained RUN in Dockerfile, which is better?).
Try this :
RUN wget http://artifactory.orbit8.com/artifactory/build-dependencies/7.2.0/ext-7.2.0.zip -P /var/jenkins_home/Extjs_7.2.0 && \
unzip /var/jenkins_home/Extjs_7.2.0/ext-7.2.0.zip -d /var/jenkins_home/Extjs_7.2.0 && \
rm /var/jenkins_home/Extjs_7.2.0/ext-7.2.0.zip
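If you want to verify the difference, docker history shows how much space each layer contributes; with separate RUN steps you would see the zip's full size baked into the wget layer even after a later rm (your-image:tag below is a placeholder for whatever you tagged the build as):
docker history your-image:tag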

Related

docker can't run vscodium

Mine is a bit of a peculiar situation: I created a Dockerfile that "works", if not for some problems.
Here is a "working" version:
ARG IMGVERS=latest
FROM bensuperpc/tinycore:${IMGVERS}
LABEL maintainer "Vinnie Costante <****#gmail.com>"
ARG DOWNDIR=/tmp/download
ARG INSTDIR=/opt/vscodium
ARG REPOAPI="https://api.github.com/repos/VSCodium/vscodium/releases/latest"
ENV LANG=C.UTF-8 LC_ALL=C PATH="${PATH}:${INSTDIR}/bin/"
RUN tce-load -wic Xlibs nss gtk3 libasound libcups python3.9 tk8.6 \
&& rm -rf /tmp/tce/optional/*
RUN sudo ln -s /lib /lib64 \
&& sudo ln -s /usr/local/etc/fonts /etc/fonts \
&& sudo mkdir -p ${DOWNDIR} ${INSTDIR} \
&& sudo chown -R tc:staff ${DOWNDIR} ${INSTDIR}
#COPY VSCodium-linux-x64-1.57.1.tar.gz ${DOWNDIR}/
RUN wget http://192.168.43.6:8000/VSCodium-linux-x64-1.57.1.tar.gz -P ${DOWNDIR}
RUN tar xvf ${DOWNDIR}/VSCodium*.gz -C ${INSTDIR} \
&& rm -rf ${DOWNDIR}
CMD ["codium"]
The issues are these:
When I start the image with the command below, vscodium does not start; but if I enter the shell (appending /bin/ash to the docker run) and then run codium manually, it starts. I have tried many things, even changing the entrypoint, always with the same result. Yet if I add any other graphical program (like firefox) and make it the argument of the CMD instruction in the Dockerfile instead, everything works as it should.
docker run -it --rm \
--net=host \
--env="DISPLAY=unix${DISPLAY}" \
--workdir /home/tc \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
--name tc \
tinycodium
The last two versions of codium (1.58.0 and 1.58.1) don't work at all in Docker, although they start normally on the same distro when not containerized. I tried installing other dependencies, but nothing worked. Right now I don't know how to figure out what's wrong with these two new versions.
I don't know how to set up a volume to persist codium's data. I tried something like --volume=/home/vinnie/docker:/home/tc, but there are always user/group permission problems. I've also tried running the container as my own user by adding it to the docker group, but the permissions are still a mess. If someone could explain how to proceed, these are the directories I want to persist:
/home/tc/.vscode-oss
/home/tc/.cache/mesa_shader_cache
/home/tc/.config/VSCodium
/home/tc/.config/glib-2.0/settings
/home/tc/.local/share
Try running codium --verbose and see if the container starts.
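A quick way to do that without modifying the image is to override the CMD on the command line; a sketch reusing the docker run flags from the question:
docker run -it --rm \
--net=host \
--env="DISPLAY=unix${DISPLAY}" \
--workdir /home/tc \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
--name tc \
tinycodium codium --verbose
For the volume permission problem, one common approach (untested here) is to run the container with your host uid/gid, e.g. adding --user "$(id -u):$(id -g)", so that files written to the bind mount belong to your host user.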

How to access root folder inside a Docker container

I am new to docker, and am attempting to build an image that involves performing an npm install. Some of our dependencies come from private repos we have, and I am hitting an SSH-related issue:
I realised I was not supplying any SSH details to the build, and came across various posts online about how to do this using build args passed to the docker build command.
So, taken from here, I have added the following to my Dockerfile before the npm install command gets run:
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
apt-get install -y \
git \
openssh-server \
libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub
Running the docker build command again with the correct args supplied, I do see further activity in the console that suggests my SSH key is being used, but I am getting 'no hostkey alg' messages and still the same 'Host key verification failed' error. I was wondering if I could view the log file it references in the error:
Do I need to get the image running in order to be able to connect to it and browse the 'root' folder?
I hope I have made sense, please be gentle I am a docker noob!
Thanks
The lines that start with ---> in the docker build output are valid Docker image IDs. You can pick any of these and docker run them:
docker run --rm -it 59c45dac474a sh
If a step is actually failing, one useful debugging trick is to launch the image built in the step before it and run the command by hand.
Remember that anyone who has your image can do this; the way you’ve built it, if you ever push your image to any repository, your ssh private key is there for the taking, and you should probably consider it compromised. That’s doubly true since it will also be there in plain text in docker history output.
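To see just how exposed it is, anyone with the image can read the key straight out of the filesystem or spot it in the build metadata; a sketch with your-image as a placeholder for whatever tag you built:
# the baked-in key is an ordinary file in the image
docker run --rm your-image cat /root/.ssh/id_rsa
# and build-arg values are recorded in the image history
docker history --no-trunc your-image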

Can see a file in Docker container, but cannot access it

I'm new to Docker and ran into the following problem:
In my Dockerfile I have these lines:
ADD dir/archive.tgz /dir/
RUN tar -xzf /dir/archive2.tar.gz -C /dir/
RUN ls -l /dir/
RUN ls -l /dir/dir1/
The first ls prints out the files correctly, and I can see that dir1 was created inside dir by the archive, with permissions drwxr-xr-x. But the second ls gives me:
ls: cannot access /dir/dir1/: No such file or directory
I thought that if Docker can see a file, it can access it. Do I need to do some special magic here?
I thought that if Docker can see a file, it can access it.
In a way you are right, but you are also missing a piece of information. Those RUN commands are not necessarily all executed on every build: Docker builds in layers and caches them, so your third RUN command can be executed while your first is skipped in favour of a cached layer. To preserve proper execution order, put them in the same RUN command so they end up in the same layer (and are rebuilt together):
RUN tar -xzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir1/
This is a common issue, most often seen when this is put in a Dockerfile:
RUN apt-get update
RUN apt-get install some-package
Instead of this:
RUN apt-get update && \
apt-get install some-package
Note: This is in line with the best practices for usage of the RUN command in a Dockerfile, documented here: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run, and avoids possible confusion with caches/layers.
To recreate your problem, here is a small test resembling your setup (depending on the actual directory structure in your archive, this may differ):
A dummy archive2.tar.gz with dir/dir1/somefile.txt is created:
mkdir -p ~/test-sowf/dir/dir1 && cd ~/test-sowf && echo "Yay" | tee --append dir/dir1/somefile.txt && tar cvzf archive2.tar.gz dir && rm -rf dir
A Dockerfile is created in ~/test-sowf with the following content:
from ubuntu:latest
COPY archive2.tar.gz /dir/
RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir/dir1/
Build it like so:
docker build -t test-sowf .
This gives the following result:
Sending build context to Docker daemon 5.632kB
Step 1/3 : from ubuntu:latest
---> 452a96d81c30
Step 2/3 : COPY archive2.tar.gz /dir/
---> Using cache
---> 852ef4f706d3
Step 3/3 : RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && ls -l /dir/ && ls -l /dir/dir/dir1/
---> Running in b2ab281190a2
dir/
dir/dir1/
dir/dir1/somefile.txt
total 8
-rw-r--r-- 1 root root 177 May 10 15:43 archive2.tar.gz
drwxr-xr-x 3 1000 1000 4096 May 10 15:43 dir
total 4
-rw-r--r-- 1 1000 1000 4 May 10 15:43 somefile.txt
Removing intermediate container b2ab281190a2
---> 05b7dfe52e36
Successfully built 05b7dfe52e36
Successfully tagged test-sowf:latest
Note that the extracted files are owned by 1000:1000, as opposed to root:root for the archive itself, so unless you are running as some other (non-root) user you should not have permission problems; but, depending on your archive, you might run into path problems (/dir/dir/dir1 as shown here).
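If that 1000:1000 ownership ever matters in your case, one option (an assumption on my part, not something the question strictly needs) is to tell tar to ignore the ownership recorded in the archive, so the extracted files belong to the extracting user (root here):
RUN tar xvzf /dir/archive2.tar.gz -C /dir/ --no-same-owner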
Test that the file is correct and contains 'Yay':
docker run --rm --name test-sowf test-sowf:latest cat /dir/dir/dir1/somefile.txt
Clean up the test mess afterwards (deliberately not using rm -rf, but removing individual files):
docker rmi test-sowf && cd && rm ~/test-sowf/archive2.tar.gz && rm ~/test-sowf/Dockerfile && rmdir ~/test-sowf
For those using docker-compose:
Sometimes when you volume-mount a folder/file from one container into another before it exists, it can end up with weird permissions after it's created.
For example, if one container is certbot and another is your webserver, certbot will take some time to generate the /etc/letsencrypt folder and its contents.
From the webserver you might be able to see the folder or its contents with ls, but not open them. You can see the behaviour with cat *, which gives back:
cat: <files in question>: No such file or directory
One solution is generating the folder at build time with a RUN mkdir -p /directory/of/choice in the Dockerfile of the container generating the folder/files. Then the folder already exists, and Docker will happily mount it into your other container or the host machine the way you want it to, as sketched below.
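A minimal sketch of that idea, with a certbot-style service and a webserver sharing a named volume (the service names and paths here are illustrative, not taken from a real setup):
# Dockerfile for the service that generates the files
FROM certbot/certbot
# create the folder at build time so the volume is populated from the start
RUN mkdir -p /etc/letsencrypt

# docker-compose.yml
services:
  certbot:
    build: ./certbot
    volumes:
      - letsencrypt:/etc/letsencrypt
  web:
    image: nginx
    volumes:
      - letsencrypt:/etc/letsencrypt:ro
volumes:
  letsencrypt: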

Using ccache in automated builds on Docker cloud

I am using automated builds on Docker cloud to compile a C++ app and provide it in an image.
Compilation is quite long (in the 2-3 hour range) and commits on github are frequent (~10 to 30 per day).
Is there a way to keep the build cache (using ccache) somehow?
As far as I understand it, Docker's layer caching is useless here, since the layer that produces the ccache will never be reused once the source code changes.
Or can we tweak things to bring some data back into an earlier layer?
Any other solution? Pushing it somewhere?
Here is the Dockerfile:
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG}
MAINTAINER Denis Rouzaud <denis.rouzaud#gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
RUN cmake \
-GNinja \
-DCMAKE_INSTALL_PREFIX=/usr \
-DBINDINGS_GLOBAL_INSTALL=ON \
-DWITH_STAGED_PLUGINS=ON \
-DWITH_GRASS=ON \
-DSUPPRESS_QT_WARNINGS=ON \
-DENABLE_TESTS=OFF \
-DWITH_QSPATIALITE=ON \
-DWITH_QWTPOLAR=OFF \
-DWITH_APIDOC=OFF \
-DWITH_ASTYLE=OFF \
-DWITH_DESKTOP=ON \
-DWITH_BINDINGS=ON \
-DDISABLE_DEPRECATED=ON \
.. \
&& ninja install \
&& rm -rf /usr/src/QGIS
WORKDIR /
You should try saving and restoring your cache data from a third-party service:
- an online object storage like Amazon S3
- a simple FTP server
- an Internet-reachable machine with ssh, to fetch it with scp
I'm assuming that your cache data is stored inside the ~/.ccache directory.
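If you want to confirm where the cache lives and whether it is actually being hit, ccache can tell you itself (assuming ccache is on the PATH; the location can also be overridden with the CCACHE_DIR environment variable):
# show hit/miss statistics and the current cache size
ccache -s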
Using a Docker multi-stage build
For some time now, Docker has supported multi-stage builds, and you can try using them to implement the solution with a single Dockerfile:
Warning: I've not tested it
# STAGE 1 - YOUR ORIGINAL DOCKER FILE CUSTOMIZED
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG} as builder
MAINTAINER Denis Rouzaud <denis.rouzaud#gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
# restore the cache from the previous build (my-object-storage is a placeholder)
RUN curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 \
 && tar -xjvf ccache.tar.bz2 -C /root
RUN cmake \
-GNinja \
-DCMAKE_INSTALL_PREFIX=/usr \
-DBINDINGS_GLOBAL_INSTALL=ON \
-DWITH_STAGED_PLUGINS=ON \
-DWITH_GRASS=ON \
-DSUPPRESS_QT_WARNINGS=ON \
-DENABLE_TESTS=OFF \
-DWITH_QSPATIALITE=ON \
-DWITH_QWTPOLAR=OFF \
-DWITH_APIDOC=OFF \
-DWITH_ASTYLE=OFF \
-DWITH_DESKTOP=ON \
-DWITH_BINDINGS=ON \
-DDISABLE_DEPRECATED=ON \
.. \
&& ninja install
# save the current cache online (WORKDIR does not expand ~, so use /root)
WORKDIR /root
RUN tar -cvjSf ccache.tar.bz2 .ccache \
 && curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
# STAGE 2
FROM alpine:latest
# YOUR CUSTOM LOGIC TO CREATE THE FINAL IMAGE WITH ONLY REQUIRED BINARIES
# USE THE FROM IMAGE YOU NEED, this is only an example
# E.g.:
# COPY --from=builder /usr/src/QGIS/build/YOUR_EXECUTABLE /usr/bin
# ...
In stage 2 you build the final image that will be pushed to your repository.
Using Docker Cloud hooks
Another, though less clean, approach could be to use a Docker Cloud pre_build hook file to download the cache data:
#!/bin/bash
echo "=> Downloading build cache data"
curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 # e.g. Amazon S3 like service
# extract into the checkout directory so the COPY below finds .ccache in the build context
tar -xjvf ccache.tar.bz2
Obviously you can use dedicated Docker images to run curl or tar, mounting the local directory as a volume in this script.
Then, copy the extracted .ccache folder into your container during the build, using a COPY command before your cmake call (note that ~ is not expanded in a COPY destination, so use /root explicitly):
WORKDIR /usr/src/QGIS/build
COPY .ccache /root/.ccache
RUN cmake ...
To make this work, you also need a way to upload your cache data after the build, which you could do with a post_build hook file:
#!/bin/bash
echo "=> Uploading build cache data"
tar -cvjSf ccache.tar.bz2 ~/.ccache
curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
But your compilation data isn't available from the outside, because it lives inside the container. So you would have to upload the cache from inside your main Dockerfile, after the compilation step (ccache is only populated while ninja runs):
RUN cmake ... \
&& ninja ... \
&& tar ... \
&& curl ... \
&& rm ...
If curl or tar aren't available, just add them to your container using the package manager (qgis/qgis3-build-deps is based on Ubuntu 16.04, so they should be available).
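For example, a guess at the obvious route on that Ubuntu base (bzip2 is what tar's -j flag calls under the hood):
RUN apt-get update && apt-get install -y curl bzip2 && rm -rf /var/lib/apt/lists/*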

How to add a file to an image in Dockerfile without using the ADD or COPY directive

I need the contents of a large *.zip file (5 gb) in my Docker container in order to compile a program. The *.zip file resides on my local machine. The strategy for this would be:
COPY program.zip /tmp/
RUN cd /tmp \
&& unzip program.zip \
&& make
After having done this, I would like to remove the unzipped directory and the original *.zip file because they are not needed any more. The problem is that the COPY (and also the ADD) directive will add a layer to the image containing the file program.zip, which is problematic as my image will then be at least 5 GB big. Is there a way to add a file to a container without using the COPY or ADD directive? wget will not work, as the mentioned *.zip file is on my local machine, and curl file://localhost/home/user/program.zip -o /tmp/program.zip will not work either.
It is not straightforward, but it can be done via wget or curl with a little support from python. (All three tools should usually be available on a *nix system.)
wget will not work when no url is given, and
curl file://localhost/home/user/program.zip -o /tmp/
will not work from within a Dockerfile's RUN instruction. Hence, we need a server from which wget and curl can download program.zip.
To do this we set up a little python server which serves our http requests, using python's http.server module. (The http.server module is python 3; on python 2 the equivalent is python -m SimpleHTTPServer.)
python -m http.server --bind 192.168.178.20 8000
The python server will serve all files in the directory it is started in, so make sure you start your server either in the directory containing the file you want to download during the image build, or in a temporary directory containing your program. For illustration purposes, let's create a file foo.txt which we will later download via wget in our Dockerfile:
echo "foo bar" > foo.txt
When starting the http server, it is important that we specify the LAN IP address of our local machine and open port 8000. Having done this, we should see the following output:
python3 -m http.server --bind 192.168.178.20 8000
Serving HTTP on 192.168.178.20 port 8000 ...
Now we write a Dockerfile to illustrate how this works (we will assume that the file foo.txt should be downloaded into /tmp):
FROM debian:latest
RUN apt-get update -qq \
&& apt-get install -y wget
RUN cd /tmp \
&& wget http://192.168.178.20:8000/foo.txt
Now we start the build with
docker build -t test .
During the build you will see the following output on our python server:
172.17.0.21 - - [01/Nov/2014 23:32:37] "GET /foo.txt HTTP/1.1" 200 -
and the build output of our image will be:
Step 2 : RUN cd /tmp && wget http://192.168.178.20:8000/foo.txt
---> Running in 49c10e0057d5
--2014-11-01 22:56:15-- http://192.168.178.20:8000/foo.txt
Connecting to 192.168.178.20:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 25872 (25K) [text/plain]
Saving to: `foo.txt'
0K .......... .......... ..... 100% 129M=0s
2014-11-01 22:56:15 (129 MB/s) - `foo.txt' saved [25872/25872]
---> 5228517c8641
Removing intermediate container 49c10e0057d5
Successfully built 5228517c8641
You can then check whether it really worked by starting and entering a container from the image you just built:
docker run -i -t --rm test bash
You can then look in /tmp for foo.txt.
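Or, skipping the interactive shell, check the file directly; the test tag and the "foo bar" content both come from the steps above:
docker run --rm test cat /tmp/foo.txt
# prints: foo bar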
We can now add any file to our image without it being persisted in any layer. Assuming you want to add a program of about 5 GB, as mentioned in the question, we could do:
FROM debian:latest
RUN apt-get update -qq \
&& apt-get install -y wget
RUN cd /tmp \
&& wget http://conventiont:8000/program.zip \
&& unzip program.zip \
&& cd program \
&& make \
&& make install \
&& cd /tmp \
&& rm -f program.zip \
&& rm -rf program
In this way we will not be left with 10 GB of cruft.
There's no way to do this. A feature request is here https://github.com/docker/docker/issues/3156.
Can you not map a local folder into the container when it is launched, and then copy the files you need?
sudo docker run -d -P --name myContainerName -v /localpath/zip_extract:/container/path/ yourContainerID
https://docs.docker.com/userguide/dockervolumes/
I have posted a similar answer here: https://stackoverflow.com/a/37542913/909579
You can use docker-squash to squash newly created layers. That will essentially remove the archive from the final image, provided you remove it in a subsequent RUN instruction.
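A sketch of that workflow, with placeholder image names (check the docker-squash docs for the exact flags in your version):
# build normally, then squash the image's layers into one
docker build -t myimage:latest .
docker-squash -t myimage:squashed myimage:latest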
