I'm trying to base a Dockerfile on another local one.
$ ls -lR
total 0
-rw-r--r-- 1 me me 42 14 avr 10:42 Dockerfile
drwxr-xr-x 3 me me 42 14 avr 10:42 prod
./prod:
total 0
-rw-r--r-- 1 me me 42 14 avr 10:42 Dockerfile
$ cat prod/Dockerfile
FROM ../Dockerfile
...
$ docker build - < prod/Dockerfile
unable to process Dockerfile: unable to parse repository info: repository name component must match "a-z0-9(?:[._]a-z0-9)*"
I know that FROM expects an image and not a path.
How can I extend the base Dockerfile from prod/Dockerfile?
Dockerfiles don't extend Dockerfiles, they extend images; the FROM line is not an "include" statement.
So, if you want to "extend" another Dockerfile, you need to build the original Dockerfile as an image and extend that image.
For example;
Dockerfile1:
FROM alpine
RUN echo "foo" > /bar
Dockerfile2:
FROM myimage
RUN echo "bar" > /baz
Build the first Dockerfile (since it's called Dockerfile1, use the -f option, as docker defaults to looking for a file called Dockerfile), and "tag" it as myimage:
docker build -f Dockerfile1 -t myimage .
# Sending build context to Docker daemon 3.072 kB
# Step 1 : FROM alpine
# ---> d7a513a663c1
# Step 2 : RUN echo "foo" > /bar
# ---> Running in d3a3e5a18594
# ---> a42129418da3
# Removing intermediate container d3a3e5a18594
# Successfully built a42129418da3
Then build the second Dockerfile, which extends the image you just built. We tag the resulting image as "myextendedimage";
docker build -f Dockerfile2 -t myextendedimage .
# Sending build context to Docker daemon 3.072 kB
# Step 1 : FROM myimage
# ---> a42129418da3
# Step 2 : RUN echo "bar" > /baz
# ---> Running in 609ae35fe135
# ---> 4ea44560d4b7
# Removing intermediate container 609ae35fe135
# Successfully built 4ea44560d4b7
To check the results, run a container from the image and verify that both files (/bar and /baz) are in the image;
docker run -it --rm myextendedimage sh -c "ls -la ba*"
# -rw-r--r-- 1 root root 4 Apr 14 10:18 bar
# -rw-r--r-- 1 root root 4 Apr 14 10:19 baz
I suggest reading the user guide, which explains how to work with images and containers.
Take a look at multi-stage builds; they could help you:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
https://blog.alexellis.io/mutli-stage-docker-builds/
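For reference, a minimal multi-stage sketch (the image names and build steps here are illustrative, not from the question); each FROM starts a new stage, and a later stage can copy just the artifacts it needs from an earlier one:
FROM golang:alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /app .

FROM alpine
COPY --from=builder /app /usr/local/bin/app
CMD ["app"]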
I wrote a simple bash script for this. It works like this:
Example structure:
|
|_Dockerfile (base)
|_prod
  |_Dockerfile (extended)
Dockerfile(extended):
FROM ../Dockerfile
...
Run script:
./script.sh prod
It merges your base Dockerfile with the extended one and builds the merged file.
Script:
#!/bin/bash
# Usage: ./script.sh <dir>  (e.g. ./script.sh prod)
# Read the first line of the extended Dockerfile, e.g. "FROM ../Dockerfile"
fromLine=$(head -n 1 "$1/Dockerfile")
# Split it on whitespace and take the path after FROM
read -a fromLineArray <<< "$fromLine"
extPath=${fromLineArray[1]}
# Drop the FROM line, prepend the base Dockerfile, and build the merged file
tail -n +2 "$1/Dockerfile" > strippedDocker
cat "$1/$extPath" strippedDocker > resDocker
rm strippedDocker
docker build - < resDocker
rm resDocker
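Note that the script assumes the FROM ../path line is the very first line of the extended Dockerfile and that the path is resolved relative to the directory passed as the first argument. Also, because the merged file is piped to docker build -, the build runs without a build context, so COPY/ADD instructions that reference local files will not work.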
I'm using conditionals:
Dockerfile
Install sudo only on local build.
FROM ubuntu:latest
ARG APP_ENVIRONMENT=local
# Persist the build arg as an env var so the CMD below can still read it at runtime
ENV APP_ENVIRONMENT=${APP_ENVIRONMENT}
RUN apt-get update && bash -c "set -ex ; \
    apt-get install -y $([ ${APP_ENVIRONMENT} = local ] \
        && echo 'curl sudo' \
        || echo 'curl' \
    )"
CMD bash -c "set -ex ; \
    [ ${APP_ENVIRONMENT} = local ] \
        && { app debug ; exit $? ; } \
        || { app start ; exit $? ; } \
    "
Build
# Production
docker build \
-t my-image \
--build-arg APP_ENVIRONMENT='prod' \
.
# Local
docker build \
-t my-image \
.
Docker Compose
version: "3.7"
services:
  app:
    build:
      context: .
      args:
        APP_ENVIRONMENT: "${APP_ENVIRONMENT:-local}"
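With that compose file, the build argument can be switched from the shell environment; a small usage sketch:
# Local (default)
docker-compose build
# Production
APP_ENVIRONMENT=prod docker-compose build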
If you use Docker 20.10+, you can do this:
# syntax = edrevo/dockerfile-plus
INCLUDE+ ../Dockerfile
RUN ...
The INCLUDE+ instruction gets imported by the first line in the Dockerfile. You can read more about the dockerfile-plus at https://github.com/edrevo/dockerfile-plus
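Note that the # syntax directive is handled by BuildKit, so the build needs BuildKit enabled (for example via DOCKER_BUILDKIT=1 on setups where it is not the default); the image name below is illustrative:
DOCKER_BUILDKIT=1 docker build -t myimage .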
Related
I need to use the host's SSH key inside Docker. For this purpose I build the image like this:
docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" -f dockerfile-dev .
If I use the docker command directly it works fine, but if I use it inside the Jenkins pipeline script I get the error below:
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 92: expecting '}', found 'ssh_prv_key' # line 92, column 116.
ev:${GIT_COMMIT} "--build-arg ssh_prv_ke
This is the step I used in the Jenkins pipeline:
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" -f dockerfile-dev ."
And the Dockerfile is used like below:
ARG ssh_prv_key
# Authorize SSH Host
# Add the keys and set permissions
RUN mkdir -p /root/.ssh
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
I solved a similar issue as follows:
Jenkins pipeline
sh "cp ~/.ssh/id_rsa id_rsa"
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} -f dockerfile-dev ."
sh "rm id_rsa"
Dockerfile
# Some instructions...
ADD id_rsa id_rsa
# Now use the "id_rsa" file inside the image...
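As a side note, the compile error in the question comes from the unescaped nested double quotes inside the Groovy string; if you prefer to keep the --build-arg approach, escaping the inner quotes and the dollar sign should also avoid it (a sketch, not tested against your pipeline):
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} --build-arg ssh_prv_key=\"\$(cat ~/.ssh/id_rsa)\" -f dockerfile-dev ."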
Could you help me? I'm trying to run a container from a Dockerfile, but it shows this warning and my container does not start.
compose.parallel.parallel_execute_iter: Finished processing: <Container: remote-host>
Starting remote-host ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing: <Service: remote_host>
compose.parallel.feed_queue: Pending: set()
Attaching to jenkinks, remote-host
compose.cli.verbose_proxy.proxy_callable: docker logs <- ('f2e305942e57ce1fe90c2ca94d3d9bbc004155a136594157e41b7a916d1ca7de', stdout=True, stderr=True, stream=True, follow=True)
remote-host | Unable to load host key: /etc/ssh/ssh_host_rsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ecdsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ed25519_key
remote-host | sshd: no hostkeys available -- exiting.
compose.cli.verbose_proxy.proxy_callable: docker events <- (filters={'label': ['com.docker.compose.project=jenkins', 'com.docker.compose.oneoff=False']}, decode=True)
My dockerfile is this:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
Start with an empty dir and put the following in that dir as a file called Dockerfile:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user
RUN echo "1234" | passwd remote_user --stdin
RUN mkdir /home/remote_user/.ssh
RUN chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user
RUN chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
# CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_centos . # creates image stens_centos
#
# docker run -d stens_centos sleep infinity # launches container and just sleeps only purpose here is to keep container running
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_centos | cut -d' ' -f1 ) bash # login to running container
#
then in that same dir put your ssh key files as per
eve#milan ~/Dropbox/Documents/code/docker/centos $ ls -la
total 28
drwxrwxr-x 2 eve eve 4096 Nov 2 15:20 .
drwx------ 77 eve eve 12288 Nov 2 15:14 ..
-rw-rw-r-- 1 eve eve 875 Nov 2 15:20 Dockerfile
-rwx------ 1 eve eve 3243 Nov 2 15:18 remote_user
-rwx------ 1 eve eve 743 Nov 2 15:18 remote_user.pub
Then cat out the Dockerfile and copy and paste the commands explained at the bottom of it ... for me all of them just worked OK.
After I copied and pasted those commands listed at the bottom of the Dockerfile, the container was built and executed:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a06ebd2752a stens_centos "sleep infinity" 7 minutes ago Up 7 minutes pedantic_brahmagupta
Keep in mind you must define at the bottom of your Dockerfile the CMD (or similar) to be just what you want executed as the container runs. Typically that is a server, which by definition runs forever; alternatively the CMD can simply be something which runs and then finishes, like a batch job, in which case the container will exit when that job finishes. With this knowledge I suggest you confirm that sshd -D will hold that command as a server, and not immediately terminate upon launch of the container.
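One extra note, going beyond the answer above: the "Unable to load host key ... no hostkeys available" warning in the question means sshd cannot find its host keys. If that persists with this Dockerfile, generating the keys at build time is a common fix (an assumption on my part, since the centos base image does not ship host keys and sshd -D does not generate them); add this line before the CMD:
RUN ssh-keygen -A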
I've just replied to this GitHub issue, but here's what I experienced and how I fixed it
I just had this issue for my Jekyll blog site, which I normally bring up using docker-compose with a mapped volume so it rebuilds when I create a new post. It was hanging, so I ran docker-compose up with the --verbose switch and saw the same compose.parallel.feed_queue: Pending: set().
I tried it on my MacBook and it was working fine.
I didn't have any experimental features turned on, but I needed to go into (on Windows) Settings -> Resources -> File Sharing and add the folder I was mapping in my docker-compose file (the root of my blog site).
Re-ran docker-compose and it's now up and running.
Hello, I want to add the at command to a Docker container. I am using Alpine Linux.
I tried to use apk add at and apk add atd; both give me the same error.
ERROR: unsatisfiable constraints: atd (missing):
required by: world[atd]
Is there a way to fix that, or is there a way to use apt-get, since at exists for apt-get?
Looks like at is just available as is: apk add at
this Dockerfile works fine for me:
FROM alpine:latest
RUN apk add at
CMD at --help
example run:
$ docker build -t at_command_line -f Dockerfile .
$ docker run at_command_line:latest
at: unrecognized option: -
Usage: at [-V] [-q x] [-f file] [-u username] [-mMlbv] timespec ...
at [-V] [-q x] [-f file] [-u username] [-mMlbv] -t time
at -c job ...
atq [-V] [-q x]
at [ -rd ] job ...
atrm [-V] job ...
batch
I would just add to #ujlbu4's answer that you need to run the at daemon (atd) once your container is up and running, or else the jobs will sit in the queue without getting executed.
Example Dockerfile:
FROM python:alpine
RUN apk add at
ENTRYPOINT ["atd"]
If you don't run atd you may see the following:
$ docker exec -it my_running_container /bin/sh
# echo "echo hi" | at now + 1 minutes
warning: commands will be executed using /bin/sh
job 6 at Mon Jun 21 18:11:00 2021
Can't open /var/run/atd.pid to signal atd. No atd running?
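Once atd is running inside the container, the same submission goes through without the atd.pid error (the container name here is illustrative):
docker exec -it my_running_container sh -c 'echo "echo hi" | at now + 1 minutes'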
I'm new to Docker and ran into the following problem:
In my Dockerfile I have these lines:
ADD dir/archive.tgz /dir/
RUN tar -xzf /dir/archive2.tar.gz -C /dir/
RUN ls -l /dir/
RUN ls -l /dir/dir1/
The first ls prints out files correctly and I can see that dir1 was created inside dir by the archive, with permissions drwxr-xr-x. But the second ls gives me:
ls: "cannot access /dir/dir1/: No such file or directory"
I thought that if Docker can see a file, it can access it. Do I need to do some special magic here?
I thought that if Docker can see a file, it can access it.
In a way you are right, but you're also missing a piece of info. Those RUN commands are not necessarily all executed afresh, since Docker operates in layers and caches them: your third RUN command may run while your first is skipped and served from the build cache. In order to preserve proper execution order you need to put them in the same RUN command, so they end up in the same layer (and are updated together):
RUN tar -xzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir1/
This is a common issue, most often seen when this is put in a Dockerfile:
RUN apt-get update
RUN apt-get install some-package
Instead of this:
RUN apt-get update && \
apt-get install some-package
Note: This is in line with the best practices for usage of the RUN command in a Dockerfile, documented here: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run and avoids possible confusion with caches/layers...
To recreate your problem, here is a small test resembling a setup similar to yours; depending on the actual directory structure in your archive this may differ:
Dummy archive 2 with dir/dir1/somefile.txt created:
mkdir -p ~/test-sowf/dir/dir1 && cd ~/test-sowf && echo "Yay" | tee --append dir/dir1/somefile.txt && tar cvzf archive2.tar.gz dir && rm -rf dir
Dockerfile created in ~/test-sowf with the following content:
from ubuntu:latest
COPY archive2.tar.gz /dir/
RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir/dir1/
Build command like so:
docker build -t test-sowf .
Gives the following result:
Sending build context to Docker daemon 5.632kB
Step 1/3 : from ubuntu:latest
---> 452a96d81c30
Step 2/3 : COPY archive2.tar.gz /dir/
---> Using cache
---> 852ef4f706d3
Step 3/3 : RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && ls -l /dir/ && ls -l /dir/dir/dir1/
---> Running in b2ab281190a2
dir/
dir/dir1/
dir/dir1/somefile.txt
total 8
-rw-r--r-- 1 root root 177 May 10 15:43 archive2.tar.gz
drwxr-xr-x 3 1000 1000 4096 May 10 15:43 dir
total 4
-rw-r--r-- 1 1000 1000 4 May 10 15:43 somefile.txt
Removing intermediate container b2ab281190a2
---> 05b7dfe52e36
Successfully built 05b7dfe52e36
Successfully tagged test-sowf:latest
Note that the extracted files are owned by 1000:1000, as opposed to root:root for the archive itself, so unless you are running as some other (non-root) user you should not have problems with ownership; but, depending on your archive, you might run into path problems (/dir/dir/dir1 as shown here).
Test that the file is correct and contains 'Yay':
docker run --rm --name test-sowf test-sowf:latest cat /dir/dir/dir1/somefile.txt
Clean up the test mess afterwards (deliberately not using rm -rf but cleaning individual files):
docker rmi test-sowf && cd && rm ~/test-sowf/archive2.tar.gz && rm ~/test-sowf/Dockerfile && rmdir ~/test-sowf
For those using docker-compose:
Sometimes when you volume mount a folder/file from one container to another before it exists, it can have weird permissions after it's created.
For example, if one container is certbot and another is your webserver, certbot will take some time to generate the /etc/letsencrypt folder and its contents.
From the webserver you might be able to see the folder or its contents with an ls, but not open them. You can see the behavior with a cat * and you'll get back:
cat: <files in question>: No such file or directory
One solution is generating the folder at build time with a RUN mkdir -p /directory/of/choice in the Dockerfile of the container that generates the folder/files. Then the folder will already exist, and Docker will happily mount it to your other container or host machine the way you want it to.
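A minimal sketch of that idea (the image name and path are illustrative, reusing the certbot example above):
# Dockerfile of the container that will later populate the folder
FROM certbot/certbot
RUN mkdir -p /etc/letsencrypt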
I have a problem with Docker: it does not persist changes made by commands launched via RUN.
Here is my Dockerfile:
FROM jenkins:latest
RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
RUN cat /var/jenkins_home/.profile
And here is the output:
Sending build context to Docker daemon 373.8 kB
Step 1 : FROM jenkins:latest
 ---> fc39417bd5fb
Step 2 : RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> c614b13d9d83
Step 3 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 8a16a0c92f67
Step 4 : RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> f6ca5d5bdc64
Step 5 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 3372c3275b1b
Step 6 : RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
 ---> Running in 79842be2c6e3
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
bar
 ---> 28559b8fe041
Removing intermediate container 79842be2c6e3
Step 7 : RUN cat /var/jenkins_home/.profile
 ---> Running in c694e0cb5866
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
 ---> b7e47d65d65e
Removing intermediate container c694e0cb5866
Successfully built b7e47d65d65e
Do you guys know why the "foo" file is not persisted at step 3? Why the ".bash_logout" file is recreated at step 5? Why "bar" is not in my ".profile" file anymore at step 7?
And of course, if I start a container based on this image, none of my modifications are persisted... so my Dockerfile is... useless. Any clue?
The reason those changes are not persisted is that they are inside a volume: the Jenkins Dockerfile marks /var/jenkins_home/ as a VOLUME.
Information inside volumes is not persisted during docker build, or more precisely: each build step creates a new volume based on the image's content, discarding the volume that was used in the previous build step.
How to resolve this?
I think the best way to resolve this is to:
Add the files you want to modify inside jenkins_home in a different location inside the image, e.g. /var/jenkins_home_overrides/
Create a custom entrypoint based on, or "wrapping", the default entrypoint script that copies the content of your jenkins_home_overrides to jenkins_home the first time the container is started.
Actually...
And just as I wrote that up: it looks like the official Jenkins image already supports this out of the box;
https://github.com/jenkinsci/docker/blob/683b0d6ed17016ee3211f247304ef2f265102c2b/jenkins.sh#L5-L23
According to the documentation, you need to add your files to the /usr/share/jenkins/ref/ directory, and those will be copied to /var/jenkins_home upon start.
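For example, a minimal sketch (the file name is reused from the question purely for illustration, and must exist in the build context):
FROM jenkins:latest
# anything placed under /usr/share/jenkins/ref/ is copied into /var/jenkins_home
# by the image's entrypoint when the container starts
COPY toto /usr/share/jenkins/ref/toto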
Also see https://issues.jenkins-ci.org/browse/JENKINS-24986