I tried mkdir -p, but it didn't work.
I have the following Dockerfile:
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
RUN echo $PLUGIN_DIR
RUN mkdir -p $PLUGIN_DIR
RUN ls $PLUGIN_DIR
# WORKDIR /var/jenkins_home/plugins # Can't use this, as it changes the permission to root
# which breaks the plugin installation step
# # COPY plugins.txt /usr/share/jenkins/plugins.txt
# # RUN jenkins-plugin-cli -f /usr/share/jenkins/plugins.txt --verbose
#
#
# # disable the setup wizard as we will set up jenkins as code
# ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
#
# ENV CASC_JENKINS_CONFIG /configs/jcasc.yaml
The build fails!
docker build -t jenkins:test.1 .
Sending build context to Docker daemon 51.2kB
Step 1/5 : FROM jenkins/jenkins:2.363-jdk11
---> 90ff7cc5bfd1
Step 2/5 : ENV PLUGIN_DIR /var/jenkins_home/plugins
---> Using cache
---> 0a158958aab0
Step 3/5 : RUN echo $PLUGIN_DIR
---> Running in ce56ef9146fc
/var/jenkins_home/plugins
Step 4/5 : RUN mkdir -p $PLUGIN_DIR
---> Using cache
---> dbc4e12b9808
Step 5/5 : RUN ls $PLUGIN_DIR
---> Running in 9a0edb027862
I need this because Jenkins deprecated the old plugin installation method. The new CLI installs plugins to /usr/share/jenkins/ref/plugins instead.
Also:
$ docker run -it --rm --entrypoint /bin/bash --name jenkins jenkins:test.1
jenkins@7ad71925f638:/$ ls /var/jenkins_home/
jenkins@7ad71925f638:/$
The official Jenkins image on Docker Hub declares VOLUME /var/jenkins_home, and subsequent changes to that directory (even in derived images) are discarded.
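A quick way to see the discard behavior in isolation (the comments describe what you would observe; tag names are illustrative):

```dockerfile
FROM jenkins/jenkins:2.363-jdk11
# /var/jenkins_home was declared as a VOLUME in the base image, so the
# changes this RUN makes under it are thrown away when the step commits:
RUN mkdir -p /var/jenkins_home/plugins \
    && touch /var/jenkins_home/plugins/marker
# `docker run --rm --entrypoint ls jenkins:volume-demo /var/jenkins_home`
# will not show plugins/ -- the mkdir never made it into the image.
```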
As a workaround, you can run the mkdir from the ENTRYPOINT instead of at build time.
To verify that it is working, you can add a sleep so you can enter the container and check. It works!
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
RUN echo $PLUGIN_DIR
USER root
# printf is more reliable than echo for embedding newlines across shells
RUN printf '#!/bin/sh\nmkdir -pv %s && sleep inf\n' "$PLUGIN_DIR" > ./mkdir.sh
RUN chmod a+x ./mkdir.sh
USER jenkins
ENTRYPOINT [ "/bin/sh", "-c", "./mkdir.sh"]
after
docker build . -t <image_name>
docker run -d --name <container_name> <image_name>
docker exec -it <container_name> bash
you will see your directory.
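For actual use (as opposed to the sleep-based check above), the generated script can create the directory and then exec the stock launcher so Jenkins still starts. A sketch, assuming /usr/local/bin/jenkins.sh is the launcher path in this image tag (verify against your base image):

```dockerfile
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
USER root
# printf sidesteps echo's shell-dependent handling of \n escapes
RUN printf '#!/bin/sh\nmkdir -pv "%s"\nexec /usr/local/bin/jenkins.sh "$@"\n' "$PLUGIN_DIR" > /usr/local/bin/mkdir.sh \
    && chmod a+x /usr/local/bin/mkdir.sh
USER jenkins
ENTRYPOINT ["/usr/local/bin/mkdir.sh"]
```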
Sources:
https://forums.docker.com/t/simple-mkdir-p-not-working/42179
https://hub.docker.com/_/jenkins
Related
I have a config.sh:
IMAGE_NAME="back_end"
APP_PORT=80
PUBLIC_PORT=8080
and a build.sh:
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build -t ${IMAGE_NAME} .
and a run.sh:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
and finally, a Dockerfile:
...
CMD ["gunicorn", "-b", "0.0.0.0:${APP_PORT}", "main:app"]
I'd like to be able to reference the APP_PORT variable in my config.sh within the Dockerfile as shown above. However, what I have does not work and it complains: Error: ${APP_PORT} is not a valid port number. So it's not interpreting APP_PORT as a variable. Is there a way to reference the variables within config.sh from within the Dockerfile?
Thanks!
EDIT: New Files based on suggested solutions (still don't work)
I have a config.sh:
IMAGE_NAME="back_end"
APP_PORT=80
PUBLIC_PORT=8080
and a build.sh:
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build --build-arg APP_PORT="${APP_PORT}" -t "${IMAGE_NAME}" .
and a run.sh:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
and finally, a Dockerfile:
FROM python:buster
LABEL maintainer="..."
ARG APP_PORT
#ENV PORT $APP_PORT
ENV APP_PORT=${APP_PORT}
#RUN echo "$PORT"
# Install gunicorn & falcon
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
# Add demo app
COPY ./app /app
COPY ./config.sh /app/config.sh
WORKDIR /app
RUN ls -a
CMD ["gunicorn", "-b", "0.0.0.0:${APP_PORT}", "main:app"]
run.sh still fails and reports: Error: '${APP_PORT} is not a valid port number.'
Define a variable in Dockerfile as follows:
FROM python:buster
LABEL maintainer="..."
ARG APP_PORT
ENV APP_PORT=${APP_PORT}
# Install gunicorn & falcon
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
# Add demo app
COPY ./app /app
COPY ./config.sh /app/config.sh
WORKDIR /app
# Note: shell form, without the exec-form ["...", "..."] array
CMD gunicorn -b 0.0.0.0:$APP_PORT main:app
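If you would rather keep the exec-form array, an equivalent sketch names the shell explicitly, which restores variable expansion:

```dockerfile
# The shell is invoked explicitly, so ${APP_PORT} is expanded at start-up
CMD ["/bin/sh", "-c", "gunicorn -b 0.0.0.0:${APP_PORT} main:app"]
```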
Pass it as build-arg, e.g. in your build.sh:
Note: passing a build argument is only necessary when the value is used while building the image. Here it is only used in CMD, so you could skip the build argument and pass the value at run time instead.
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build --build-arg APP_PORT="${APP_PORT}" -t "${IMAGE_NAME}" .
# sudo docker build --build-arg APP_PORT=80 -t back_end . -> You may omit using config.sh and directly define the value of variables
and pass value of $APP_PORT in run.sh as well when starting the container:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-e APP_PORT=$APP_PORT \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
You need a shell to replace environment variables and when your CMD is in exec form, there's no shell.
If you use the shell form, there is a shell and you can use environment variables.
CMD gunicorn -b 0.0.0.0:${APP_PORT} main:app
Read here for more information on the two forms of the CMD statement: https://docs.docker.com/engine/reference/builder/#cmd
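The difference can be reproduced without Docker at all: exec form hands the argument string to the program verbatim, while shell form wraps it in sh -c, which performs the expansion.

```shell
#!/bin/sh
# What an exec-form CMD passes along: the literal, unexpanded string.
printf 'exec form sees: %s\n' '0.0.0.0:${APP_PORT}'

# What a shell-form CMD does: run the line through `sh -c`, expanding it.
APP_PORT=80 sh -c 'printf "shell form sees: 0.0.0.0:%s\n" "$APP_PORT"'
# -> exec form sees: 0.0.0.0:${APP_PORT}
# -> shell form sees: 0.0.0.0:80
```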
I'm trying to customize a Dockerfile; I just want to create a folder and assign the owner (UID and GID) of the new folder.
Here is my full Dockerfile :
FROM linuxserver/nextcloud
COPY script /home/
RUN /home/script
The content of the script file :
#!/bin/sh
mkdir -p /data/local_data
chown -R abc:abc /data/local_data
I made it executable with chmod +x script.
At this point it doesn't create the folder, and I see no error in the logs.
Command to run the container :
docker run -d \
--name=nextcloud \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Europe/Paris \
-p 443:443 \
-p 8080:80 \
-v /home/foouser/nextcloud:/config \
-v /home/foouser/data:/data \
--restart unless-stopped \
nextcloud_custom
Logs from build :
Step 1/3 : FROM linuxserver/nextcloud
---> d1af592649f2
Step 2/3 : COPY script /home/
---> 0b005872bd3b
Step 3/3 : RUN /home/script
---> Running in 9fbd3f9654df
Removing intermediate container 9fbd3f9654df
---> 91cc65981944
Successfully built 91cc65981944
Successfully tagged nextcloud_custom:latest
You can try running the commands directly:
RUN mkdir -p /data/local_data && chown -R abc:abc /data/local_data
You may also try changing your shebang to:
#!/bin/bash
For debugging, you can add set -x to your script as well.
EDIT:
I noticed the Removing intermediate container lines in your logs; a workaround would be to use a volume with your docker run command:
-v /path/your/new/folder/HOST:/path/your/new/folder/container
You are trying to modify a folder which is specified as a VOLUME in your base image, but as per Docker documentation on Volumes:
Changing the volume from within the Dockerfile: If any build steps
change the data within the volume after it has been declared, those
changes will be discarded.
linuxserver/nextcloud does declare a volume /data which you are trying to change afterward, it's like doing:
VOLUME /data
...
RUN mkdir -p /data/local_data
The directory created will be discarded. You can, however, create your directory on container startup by modifying the image's entrypoint, so that the directory is created when the container starts. Currently linuxserver/nextcloud uses /init as its entrypoint, so you can do the following.
Your script content, which you then define as the entrypoint:
#!/bin/sh
mkdir -p /data/local_data
chown -R abc:abc /data/local_data
# Call the base image entrypoint with parameters
/init "$@"
Dockerfile:
FROM linuxserver/nextcloud
# Copy the script and call it at entrypoint instead
COPY script /home/
ENTRYPOINT ["/home/script"]
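The hand-off pattern in that script (do the setup, then forward every argument to the real entrypoint) can be sanity-checked without Docker; fake_init below is a hypothetical stand-in for /init:

```shell
#!/bin/sh
# Hypothetical stand-in for the image's /init entrypoint.
fake_init() { printf 'init received: %s\n' "$*"; }

# Wrapper mirroring the script above: setup first, then hand off "$@".
wrapper() {
    mkdir -p /tmp/local_data_demo   # stands in for mkdir -p /data/local_data
    fake_init "$@"                  # forwards all arguments unchanged
}

wrapper --option one two
# -> init received: --option one two
```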
Hi, I am new to Docker and trying to wrap my head around how to clone a private repo from GitHub, and found an interesting link: issues/6396
I followed one of the posts, and my Dockerfile looks like:
FROM python:2.7 as builder
# Deploy app's code
#RUN set -x
RUN mkdir /code
RUN mkdir /root/.ssh/
RUN ls -l /root/.ssh/
# The GITHUB_SSH_KEY Build Argument must be a path or URL
# If it's a path, it MUST be in the docker build dir, and NOT in .dockerignore!
ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
RUN echo "${SSH_PRIVATE_KEY}"
# Set up root user SSH access for GitHub
ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1 | grep -i auth
# Test SSH access (this returns false even when successful, but prints results)
RUN git clone git@github.com:***********.git
COPY . /code
WORKDIR /code
ENV PYTHONPATH /datawarehouse_process
# Setup app's virtualenv
RUN set -x \
&& pip install tox \
&& tox -e luigi
WORKDIR /datawarehouse_process
# Finally, remove the $GITHUB_SSH_KEY if it was a file, so it's not in /app!
# It can also be removed from /root/.ssh/id_rsa, but you're probably not going
# to COPY that directory into the runtime image.
RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*
#FROM python:2.7 as runtime
#COPY --from=builder /code /code
When I run docker build . from the correct location, I get the error below. Any clues would be appreciated.
c:\Domain\Project\Docker-Images\datawarehouse_process>docker build .
Sending build context to Docker daemon 281.7MB
Step 1/15 : FROM python:2.7 as builder
---> 43c5f3ee0928
Step 2/15 : RUN mkdir /code
---> Running in 841fadc29641
Removing intermediate container 841fadc29641
---> 69fdbcd34f12
Step 3/15 : RUN mkdir /root/.ssh/
---> Running in 50199b0eb002
Removing intermediate container 50199b0eb002
---> 6dac8b120438
Step 4/15 : RUN ls -l /root/.ssh/
---> Running in e15040402b79
total 0
Removing intermediate container e15040402b79
---> 65519edac99a
Step 5/15 : ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
---> Running in 10e0c92eed4f
Removing intermediate container 10e0c92eed4f
---> 707279c92614
Step 6/15 : RUN echo "${SSH_PRIVATE_KEY}"
---> Running in a9f75c224994
C:\Users\MyUser\.ssh\id_rsa
Removing intermediate container a9f75c224994
---> 96e0605d38a9
Step 7/15 : ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
ADD failed: stat /var/lib/docker/tmp/docker-
builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or
directory
From the Documentation:
ADD obeys the following rules:
The path must be inside the context of the build; you cannot ADD
../something /something, because the first step of a docker build is
to send the context directory (and subdirectories) to the docker
daemon.
You are passing an absolute path to ADD, but you can see from the error:
/var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa:
no such file or directory
It is being looked for within the build context. Again from the documentation:
Traditionally, the Dockerfile is called Dockerfile and located in the
root of the context.
So, you need to place the RSA key somewhere in the directory tree whose root is the path you specify in your docker build command. If you are running docker build ., your ARG statement would change to something like:
ARG SSH_PRIVATE_KEY=./.ssh/id_rsa
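A sketch of how the ARG and ADD lines then pair up, assuming the key has been copied to .ssh/id_rsa inside the build context (note: no spaces around = in an ARG default):

```dockerfile
# Relative paths are resolved against the build context (the `.` argument)
ARG SSH_PRIVATE_KEY=.ssh/id_rsa
ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
```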
This is a part of my dockerfile:
COPY ./startup.sh /root/startup.sh
RUN chmod +x /root/startup.sh
ENTRYPOINT ["/root/startup.sh"]
EXPOSE 3306
CMD ["/usr/bin/mysqld_safe"]
USER jenkins
I have to switch to USER jenkins at the end, and I have to run the container as jenkins.
My question is: how can I run startup.sh as the root user when the container starts?
Delete the USER jenkins line in your Dockerfile.
Change the user at the end of your entrypoint script (/root/startup.sh) instead, by adding su - jenkins (see man su).
Example:
Dockerfile
FROM debian:8
RUN useradd -ms /bin/bash exemple
COPY entrypoint.sh /root/entrypoint.sh
# exec form, so that CMD arguments reach the script's "$@"
ENTRYPOINT ["/root/entrypoint.sh"]
entrypoint.sh
#!/bin/bash
echo "I am root" && id
su - exemple
# needed to run the CMD parameters
"$@"
Now you can run
$ docker build -t so-test .
$ docker run --rm -it so-test bash
I am root
uid=0(root) gid=0(root) groups=0(root)
exemple@37b01e316a95:~$ id
uid=1000(exemple) gid=1000(exemple) groups=1000(exemple)
It's just a simple example, you can also use the su -c option to run command with changing user.
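Putting this together for the original startup.sh question, a sketch (the su line is an assumption about what startup.sh would contain, not the asker's actual file): the entrypoint starts as root, does its privileged work, then drops to jenkins to run the CMD.

```dockerfile
COPY ./startup.sh /root/startup.sh
RUN chmod +x /root/startup.sh
# No USER line: the entrypoint runs as root, and startup.sh itself ends with
#   exec su jenkins -c "$*"
# so the CMD below is executed as the jenkins user.
ENTRYPOINT ["/root/startup.sh"]
EXPOSE 3306
CMD ["/usr/bin/mysqld_safe"]
```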
Is there a way to set an ENV variable for a custom USER in a docker file?
I am trying the following:
FROM some_repo/my_base_image
ENV FOO_VAR bar_value
USER webapp
# ... continued (not important)
But my "webapp" user can not see the "FOO_VAR" variable. HOWEVER, my root user CAN.
Any help would be appreciated.
Any user can see the environment variables:
$ cat Dockerfile
FROM debian
ENV foo bar
RUN groupadd -r am && useradd -r -g am am
USER am
$ docker build -t test .
...
$ docker run test bash -c 'echo $foo'
bar
So that's not what the problem is. It may be that your process forked a new environment, but I can't be sure as you haven't shared how you're checking the value.
If you switch user context using su within the Dockerfile's ENTRYPOINT, CMD, or a docker exec ... call using the form below, you enter a new login shell for the given username, and that shell does not preserve the environment variables you set via ENV in the Dockerfile, a docker-compose YAML, or docker run -e ...
> su - username -c "run a process"
To avoid this behavior simply remove the dash - from the call like so:
> su username -c "run a process"
Your assigned docker environment variables will now persist.
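The mechanism is the login shell: su - starts a clean login environment, much like env -i, while plain su inherits the caller's variables. The analogy can be checked without root:

```shell
#!/bin/sh
export FOO=bar
# Plain `su user -c ...` behaves like an ordinary subshell: env is inherited.
sh -c 'echo "kept:  ${FOO:-<unset>}"'
# `su - user -c ...` behaves more like env -i here: the environment is reset.
env -i sh -c 'echo "reset: ${FOO:-<unset>}"'
# -> kept:  bar
# -> reset: <unset>
```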
For future reference, this also holds true within the Dockerfile (and not just for any container's user during run-time):
$ cat Dockerfile
FROM library/debian:9.5
ENV FOO="BAR"
RUN groupadd -r testuser && useradd -r -g testuser testuser
RUN mkdir -p /home/testuser && chown -R testuser /home/testuser
RUN echo "${FOO}" && echo "meh.${FOO}.blah"
USER testuser
RUN echo "${FOO}" && echo "meh.${FOO}.blah" | tee -a ~/test.xt
And docker build:
$ docker build -t test .
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM library/debian:9.5
---> be2868bebaba
Step 2/7 : ENV FOO="BAR"
---> Running in f2cd5ecca056
Removing intermediate container f2cd5ecca056
---> f6f7b3f26cad
Step 3/7 : RUN groupadd -r testuser && useradd -r -g testuser testuser
---> Running in ab9c0726cc1e
Removing intermediate container ab9c0726cc1e
---> dc9f2a35fb09
Step 4/7 : RUN mkdir -p /home/testuser && chown -R testuser /home/testuser
---> Running in 108b1c03323d
Removing intermediate container 108b1c03323d
---> 4a63e70fc886
Step 5/7 : RUN echo "${FOO}" && echo "meh.${FOO}.blah"
---> Running in 9dcdd6b73e7d
BAR
meh.BAR.blah
Removing intermediate container 9dcdd6b73e7d
---> c33504cadc37
Step 6/7 : USER testuser
---> Running in 596b0588dde6
Removing intermediate container 596b0588dde6
---> 075e2c861021
Step 7/7 : RUN echo "${FOO}" && echo "meh.${FOO}.blah" | tee -a ~/test.xt
---> Running in fb2648d8c120
BAR
meh.BAR.blah
Removing intermediate container fb2648d8c120
---> c7c1c69e200f
Successfully built c7c1c69e200f
Successfully tagged test:latest
(Yet for some reason it doesn't work for me in my own project, when I use the variables as a part of a curl URL target...)
Here's what worked for me, after browsing around the web looking for the answer.
In the Dockerfile:
...
RUN apt install sudo -y
ENV MY_VAR="some value"
...
Now, inside the container (or in my case, in the script I wrote to run inside it):
sudo -E -u my_user env # <- switch here to whatever command you want to execute
-E stands for --preserve-env, which means the root user's environment variables will be passed to my_user.
Here's my reference:
https://dev.to/pfreitag/passing-environment-variables-with-sudo-1ej6
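As a minimal Dockerfile sketch of the same idea (the base image and package setup are assumptions; my_user and MY_VAR are taken from the snippet above):

```dockerfile
FROM debian:stable-slim
RUN apt-get update && apt-get install -y sudo \
    && useradd -m my_user
ENV MY_VAR="some value"
# -E (--preserve-env) carries root's environment into my_user's process,
# so the `env` printed here includes MY_VAR.
CMD ["sudo", "-E", "-u", "my_user", "env"]
```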