dockerfile how to run shopt -s extglob - docker

I have the following Dockerfile
FROM python:3.9-slim-buster
## DO SOMETHING HERE
RUN /bin/bash -c shopt -s extglob && rm -rfv !(".env")
I am getting
Step 42/49 : RUN /bin/bash -c shopt -s extglob && rm -rfv !(".env")
---> Running in 5b4ceacb1908
/bin/sh: 1: Syntax error: "(" unexpected
How do I run this command? I need this.

Every RUN line in your Dockerfile will launch a new container with a separate shell. The extglob shell option must be set before a line of input is parsed by the shell, so one way to go about this is to launch the shell with that option enabled:
RUN /bin/bash -O extglob -c 'rm -rfv !(".env")'
The main caveat is that you have to quote the command line as a string.
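If you want to sanity-check the pattern before baking it into the build, here is a quick sketch (the extra file names a and b are made up):
docker run --rm -w /tmp python:3.9-slim-buster \
    bash -O extglob -c 'touch .env a b; ls; rm -rfv !(".env"); ls -A'
This should list a and b, remove them verbosely, and leave only .env behind.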

I'd recommend avoiding bash-specific syntax wherever possible.
In this case, the bash-specific glob pattern !(".env") means "every file except .env". So you can accomplish this specific task by moving .env out of the way, deleting the whole directory, recreating it, and moving .env back, all without worrying about which shell options are set.
RUN cd .. \
&& mv the_dir/.env . \
&& rm -rf the_dir \
&& mkdir the_dir \
&& mv .env the_dir
You also might consider whether you need a broad rm -rf at all. Because of Docker's layer system, the previous content of the directory is still "in the image". If you use a multi-stage build then the later stage will start from a clean slate, and you can COPY in whatever you need.
FROM python:3.9-slim-buster AS build
...
FROM python:3.9-slim-buster
WORKDIR /app
COPY .env .
COPY --from=build ...

The default /bin/sh parses the entire RUN line before anything executes, so it chokes on the !( syntax regardless of the /bin/bash -c at the front; and an unquoted -c only passes the first word (shopt) to bash anyway. Splitting this into two RUN lines won't work either, since each RUN starts a fresh shell and the shopt setting is lost. Quote the whole command line as a single string instead:
RUN /bin/bash -c "shopt -s extglob && rm -rfv !(\".env\")"

Related

Extending a docker image to run a diff command

I have a Dockerfile as below.
FROM registry.access.redhat.com/ubi8/ubi
ENV JAVA_HOME /usr/lib/jvm/zulu11
RUN \
set -xeu && \
yum -y -q install https://cdn.azul.com/zulu/bin/zulu-repo-1.0.0-1.noarch.rpm && \
yum -y -q install python3 zulu11-jdk less && \
yum -q clean all && \
rm -rf /var/cache/yum && \
alternatives --set python /usr/bin/python3 && \
groupadd trino --gid 1000 && \
useradd trino --uid 1000 --gid 1000 && \
mkdir -p /usr/lib/trino /data/trino && \
chown -R "trino:trino" /usr/lib/trino /data/trino
ARG TRINO_VERSION
COPY trino-cli-${TRINO_VERSION}-executable.jar /usr/bin/trino
COPY --chown=trino:trino trino-server-${TRINO_VERSION} /usr/lib/trino
COPY --chown=trino:trino default/etc /etc/trino
EXPOSE 8080
USER trino:trino
ENV LANG en_US.UTF-8
CMD ["/usr/lib/trino/bin/run-trino"]
HEALTHCHECK --interval=10s --timeout=5s --start-period=10s \
CMD /usr/lib/trino/bin/health-check
I would like to extend this Dockerfile and run a couple of instructions before running the main command in the Dockerfile. I'm not sure how to do that.
If you want to run those commands when the container starts, you can use an entrypoint to leave the original command untouched.
The exec "$@" will execute the arguments that were passed to the entrypoint with PID 1. Whatever arguments were provided as CMD end up in $@, so you essentially execute the CMD after the ENTRYPOINT, no matter what that command is.
Create an entrypoint script:
#!/usr/bin/env sh
# run some preparation
exec "$@"
And then you copy that into your build and use it.
FROM baseimage
COPY --chmod=755 ./entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
If you want to run those commands on build, use a FROM instruction and add your RUN instructions. The new image will use the CMD from the base image. So you don't need to set any CMD.
FROM baseimage
RUN more
RUN stuff
Since you can only have one CMD statement in a Dockerfile (if you have more than one, only the last one is executed), you need to get all your commands into a single CMD statement.
You can use the ability of the shell to chain commands using the && operator, like this
CMD my-first-command && \
my-second-command && \
/usr/lib/trino/bin/run-trino
That will run your commands first before running run-trino which is the current CMD.
When you build on a base image, your Dockerfile's CMD overrides the base image's CMD; the base layers themselves are untouched. So you can extend that image by writing your own Dockerfile.
For example, this is my first image (that I get from a third party, just like you):
FROM alpine:latest
CMD echo "hello"
I want to extend it to output hello world instead of hello, so I write another Dockerfile that extends it:
FROM first:latest
CMD echo "hello world"
and when I build the image and run it,
docker build -t second .
docker run second
I get my expected output
hello world
Hopefully that helps

Creating an entrypoint.sh during docker build that takes variable on docker run

I am trying to avoid using docker compose if possible, and so I'm building a Dockerfile that creates the entrypoint.sh within the Dockerfile. This works flawlessly, with one issue: I need to accept an environment variable in the entrypoint.sh file.
Here's the end of my Dockerfile:
FROM kalilinux/kali-rolling
RUN echo "\n##### Provision openvas user #####\n" >> /opt/entrypoint.sh && \
echo "useradd -rm -d /home/openvas -s /bin/bash -g _gvm openvas" >> /opt/entrypoint.sh && \
echo "chown -R openvas:_gvm /home/openvas" >> /opt/entrypoint.sh && \
echo "gvmd --user=admin --new-password=${OV_PASSWORD}" >> /opt/entrypoint.sh && \
chmod +x /opt/entrypoint.sh
ENTRYPOINT ["/opt/entrypoint.sh"]
CMD ["tail", "-f","/dev/null"]
However, when I try to start the container with docker run and passing an environment variable to it, it doesn't seem to work because of some format error. Here's an example of what I'm running and its output:
ubuntu#ip-10-20-32-116:~/openvas$ docker run -e OV_PASSWORD=password -ti 05190b668480 /bin/bash
standard_init_linux.go:228: exec user process caused: exec format error
Any thoughts as to what may be happening here? I believe the error started as soon as I added the ${OV_PASSWORD} into the echo statement. I also tried with just OV_PASSWORD and got a similar error, so not quite sure what I'm doing wrong in terms of formatting.
At a mechanical level, you may find it easier to create the entrypoint script as a separate file on your host, then simply COPY it into the image. This avoids the quoting problem that @DannyB highlights in their answer.
FROM kalilinux/kali-rolling
COPY entrypoint.sh /opt/entrypoint.sh
# RUN chmod +x /opt/entrypoint.sh # if not already executable
ENTRYPOINT ["/opt/entrypoint.sh"]
CMD ["tail", "-f","/dev/null"]
Make sure that script ends with the line exec "$@" to run the CMD, and that the ENTRYPOINT line in the Dockerfile uses JSON-array syntax. (Comments suggest your setup already has this correctly.)
There are a couple of things in the setup you show that will always be done exactly the same way every time you start the container, and these could be refactored to run only once when you build the image. So I might have instead:
FROM kalilinux/kali-rolling
... do the other things to install the application ...
# Create a non-root user
RUN useradd -rM -g _gvm openvas
# Change file ownership so the non-root user can modify the application
# at runtime (not usually recommended)
# RUN chown -R openvas:_gvm /home/openvas
# Copy in the entrypoint script
COPY entrypoint.sh /opt/entrypoint.sh
# RUN chmod +x /opt/entrypoint.sh # if not already executable
# Specify how to run the container
USER openvas
ENTRYPOINT ["/opt/entrypoint.sh"]
CMD ["openvas", "-u"]
Where the entrypoint just contains
#!/bin/sh
# /opt/entrypoint.sh
# Configure the application password
if [ -n "$OV_PASSWORD" ]; then
gvmd --user=admin "--new-password=${OV_PASSWORD}"
fi
# Run the main container command
exec "$#"
When both CMD and ENTRYPOINT are given, CMD is passed as the default arguments to the ENTRYPOINT.
I suggest you remove CMD and use docker logs <container-id> instead.
You have a couple of issues with your Dockerfile.
CMD is used as arguments to ENTRYPOINT, therefore not behaving as you expect.
When you echo with variables, and you wish the variables to NOT be evaluated during the echo, use single quotes.
Your ENTRYPOINT does not need to use the [...] syntax.
To achieve what you want, put the tail -f command inside your entrypoint.
Here is a working sample
FROM alpine
RUN echo 'echo got password $PASS' > /opt/entrypoint.sh && \
echo "tail -f /dev/null" >> /opt/entrypoint.sh && \
chmod +x /opt/entrypoint.sh
ENTRYPOINT "/opt/entrypoint.sh"
Run with:
$ docker run --rm -it -e PASS=123 your_image_name
RUN echo "\n##### Provision openvas user #####\n" >> /opt/entrypoint.sh && \
echo "useradd -rm -d /home/openvas -s /bin/bash -g _gvm openvas" >> /opt/entrypoint.sh && \
echo "chown -R openvas:_gvm /home/openvas" >> /opt/entrypoint.sh && \
echo "gvmd --user=admin --new-password=${OV_PASSWORD}" >> /opt/entrypoint.sh && \
chmod +x /opt/entrypoint.sh
Your shell script doesn't have an interpreter defined, e.g. the first line that's normally #!/bin/sh. And because you've used the JSON/exec format for your entrypoint, the script is executed directly with an exec by the kernel, rather than from within a shell that would default to using itself to interpret the script:
ENTRYPOINT ["/opt/entrypoint.sh"]
That is what is triggering the:
standard_init_linux.go:228: exec user process caused: exec format error
Which is saying your executable/binary isn't properly formatted for the kernel to run. To fix, make the first line a shell that exists in your image:
RUN echo '#!/bin/sh\n' > /opt/entrypoint.sh && \
echo '\n##### Provision openvas user #####\n' >> /opt/entrypoint.sh && \
echo 'useradd -rm -d /home/openvas -s /bin/bash -g _gvm openvas' >> /opt/entrypoint.sh && \
echo 'chown -R openvas:_gvm /home/openvas' >> /opt/entrypoint.sh && \
echo 'gvmd --user=admin --new-password=${OV_PASSWORD}' >> /opt/entrypoint.sh && \
chmod +x /opt/entrypoint.sh

How to set environment variables dynamically by script in Dockerfile?

I build my project with a Dockerfile. The project needs an installation of OpenVINO. OpenVINO needs some environment variables to be set dynamically by a script that depends on the architecture. The script is: script to set environment variables
As I understand it, a Dockerfile can't set environment variables in the image from a script.
How do I go about solving this problem?
I need to set the variables because later I install OpenCV, which looks at these environment variables.
My first thought: if I put the script into the ~/.bashrc file to set the variables when bash starts, and I have some trick to start bash for a second, that could solve my problem.
My second thought: build an OpenVINO image, create a container from it, connect to it and initialize the variables by running the script manually in the container. After that, convert the container to an image, create a new Dockerfile and continue the build steps using this image.
OpenVINO Dockerfile example and the line that runs the script
My Dockerfile:
FROM ubuntu:18.04
ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/16612/l_openvino_toolkit_p_2020.2.120.tgz
ENV INSTALLDIR /opt/intel/openvino
# openvino download
RUN curl -LOJ "${DOWNLOAD_LINK}"
# opencv download
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/4.3.0.zip && \
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.3.0.zip
RUN apt-get -y install sudo
# openvino installation
RUN tar -xvzf ./*.tgz && \
cd l_openvino_toolkit_p_2020.2.120 && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh -s silent.cfg && \
# rm -rf /tmp/* && \
sudo -E $INSTALLDIR/install_dependencies/install_openvino_dependencies.sh
WORKDIR /home/sa
RUN /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh" && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> /home/sa/.bashrc && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc && \
$INSTALLDIR/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
$INSTALLDIR/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN bash
# opencv installation
RUN unzip opencv.zip && \
unzip opencv_contrib.zip && \
# rm opencv.zip opencv_contrib.zip && \
mv opencv-4.3.0 opencv && \
mv opencv_contrib-4.3.0 opencv_contrib && \
cd ./opencv && \
mkdir build && \
cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_INF_ENGINE=ON -D ENABLE_CXX11=ON -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=OFF -D INSTALL_C_EXAMPLES=OFF -D ENABLE_PRECOMPILED_HEADERS=OFF -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=/home/sa/opencv_contrib/modules -D PYTHON_EXECUTABLE=/usr/bin/python3 -D WIDTH_GTK=ON -D BUILD_TESTS=OFF -D BUILD_DOCS=OFF -D WITH_GSTREAMER=OFF -D WITH_FFMPEG=ON -D BUILD_EXAMPLES=OFF .. && \
make && \
make install && \
ldconfig
You need to cause the shell to load that file in every RUN command where you use it, and also at container startup time.
For startup time, you can use an entrypoint wrapper script:
#!/bin/sh
# Load the script of environment variables
. /opt/intel/openvino/bin/setupvars.sh
# Run the main container command
exec "$#"
Then in the Dockerfile, you need to include the environment variable script in RUN commands, and make this script be the image's ENTRYPOINT.
RUN . /opt/intel/openvino/bin/setupvars.sh && \
/opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
/opt/intel/openvino/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN ... && \
. /opt/intel/openvino/bin/setupvars.sh && \
cmake ... && \
make && \
...
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD same as the command you set in the original image
If you docker exec debugging shells in the container, they won't see these environment variables and you'll need to manually re-read the environment variable script. If you use docker inspect to look at low-level details of the container, it also won't show the environment variables.
It looks like that script just sets a couple of environment variables (especially $LD_LIBRARY_PATH and $PYTHONPATH), if to somewhat long-winded values, and you could just set these with ENV statements in the Dockerfile.
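For example, a sketch of what those ENV lines might look like; the paths are placeholders, and the real values should be copied from whatever setupvars.sh actually exports on your architecture:
# Placeholder values; copy the real ones from the setupvars.sh output
ENV INTEL_OPENVINO_DIR /opt/intel/openvino
ENV LD_LIBRARY_PATH /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
ENV PYTHONPATH /opt/intel/openvino/python/python3.6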
If you look at the docker build output, there are lines like ---> 0123456789ab after each build step; those are valid image IDs that you can docker run. You could run
docker run --rm 0123456789ab \
env \
| sort > env-a
docker run --rm 0123456789ab \
sh -c '. /opt/intel/openvino/bin/setupvars.sh && env' \
| sort > env-b
This will give you two local files with the environment variables with and without running this setup script. Find the differences (say, with comm(1)), put ENV before each line, and add that to your Dockerfile.
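As a sketch, comm -13 env-a env-b prints the lines that appear only in env-b (the variables the script added or changed), and a sed prefix turns them into Dockerfile lines:
comm -13 env-a env-b | sed 's/^/ENV /'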
You can't really use .bashrc in Docker. Many common paths don't invoke its startup files: in the language of that documentation, neither a Dockerfile RUN instruction nor a docker run command is an "interactive shell", so those don't read dot files, and usually docker run ... command doesn't invoke a shell at all.
You also don't need sudo (you are already running as root, and an interactive password prompt will fail); RUN sh -c is redundant (Docker inserts it on its own); and source isn't a standard shell command (prefer the standard ., which will work even on Alpine-based images that don't have shell extensions).

How do I configure umask in alpine based docker container

I have a Java application that runs in docker based on the cut-down Alpine distribution. I want umask to be set to 0000 so that all files created by the application in the configured volume /music are accessible to all users.
The last thing the Dockerfile does is run a script that starts the application
CMD /opt/songkong/songkongremote.sh
This file contains the following
umask 0000
java -XX:MaxRAMPercentage=60 \
  -Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog \
  -Dorg.jboss.logging.provider=jdk \
  -Djava.util.logging.config.class=com.jthink.songkong.logging.StandardLogging \
  --add-opens java.base/java.lang=ALL-UNNAMED \
  -jar lib/songkong-6.9.jar -r
The application runs, but in the docker container logs I see the following is output to stdout
/opt/songkong/songkongremote.sh: umask: line 1: illegal mode: 0000
indicating the umask command did not work, which I do not understand since that is a valid value for umask. (I also tried umask 000, and that failed with the same error.)
I also tried adding
#!/bin/sh
as the first line to the file, but then Docker complained it could not find /bin/sh.
Full Dockerfile is:
FROM adoptopenjdk/openjdk11:alpine-jre
RUN apk --no-cache add \
ca-certificates \
curl \
fontconfig \
msttcorefonts-installer \
tini \
&& update-ms-fonts \
&& fc-cache -f
RUN mkdir -p /opt \
&& curl http://www.jthink.net/songkong/downloads/build1114/songkong-linux-docker.tgz?val=121| tar -C /opt -xzf - \
&& find /opt/songkong -perm /u+x -type f -print0 | xargs -0 chmod a+x
EXPOSE 4567
ENTRYPOINT ["/sbin/tini"]
# Config, License, Logs, Reports and Internal Database
VOLUME /songkong
# Music folder should be mounted here
VOLUME /music
WORKDIR /opt/songkong
CMD /opt/songkong/songkongremote.sh
Your /opt/songkong/songkongremote.sh script has what look like non-Linux line endings (Windows?).
You can view it by running:
$ docker run --rm -it your-image-name vi /opt/songkong/songkongremote.sh
That is also why the #!/bin/sh line did not work; it probably looked like #!/bin/sh^M as well.
You have carriage return characters in your script file:
umask 0000^M
java -XX:MaxRAMPercentage=60 -Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog -Dorg.jboss.logging.provider=jdk -Djava.util.logging.config.class=com.jthink.songkong.logging.StandardLoggi
^M
You can add RUN sed -i -e 's/\r//g' /opt/songkong/songkongremote.sh to the Dockerfile or better recreate the script.
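To confirm the carriage returns are there (or gone) without an interactive editor, something like this works; the image name is a placeholder, and od -c renders each carriage return as \r:
docker run --rm your-image-name od -c /opt/songkong/songkongremote.sh | head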

Perl installation issue using Docker

I am trying to build a docker image with a Perl installation.
Dockerfile:
FROM amazonlinux
WORKDIR /shared
RUN yum -y install gcc
ADD http://www.cpan.org/src/5.0/perl-5.22.1.tar.gz /shared
RUN tar -xzf perl-5.22.1.tar.gz
WORKDIR /shared/perl-5.22.1
RUN ./Configure -des -Dprefix=/opt/perl-5.22.1/localperl
RUN make
RUN make test
RUN make install
All these steps are executed; I can see it running the make, make test and make install commands. But when I do:
docker run -it testsh /bin/bash
Error:
when I check perl -v it says command not found.
I need to go to the perl directory ('cd perl-5.22.1') and run 'make install' again; then perl -v works.
But I want the perl installation to work when I build the docker image. Can anyone tell me what is going wrong here?
perl was indeed installed, it just wasn't added to the PATH.
export PATH=$PATH:/shared/perl-5.22.1 should do it -- but of course, you'd want to add a PATH update in the Dockerfile.
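In Dockerfile form that would be something like the line below (a sketch; the next answer corrects the directory to the install prefix's bin directory):
ENV PATH="/shared/perl-5.22.1:${PATH}"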
At first glance I thought that when you run make install the second time it adds perl's bin directory to the PATH environment variable, but when I compared the output of env before and after make install, it showed the same PATH content.
The reason perl -v works after make install in the running container is that make install puts the perl binary at /usr/bin/perl. I don't know why it works that way, but it does. Also, it's almost useless to store sources inside your image.
Anyway, I agree with @belwood's suggestion about adding your perl's bin directory to the PATH environment variable. I just want to correct the path: /opt/perl-5.22.1/localperl/bin
You need to add it in your Dockerfile (basically I've rewritten your file to make it produce more efficient image), for example:
FROM amazonlinux
RUN mkdir -p /shared/perl-5.22.1
WORKDIR /shared/perl-5.22.1
RUN yum -y install gcc \
&& curl -SL http://www.cpan.org/src/5.0/perl-5.22.1.tar.gz -o perl-5.22.1.tar.gz \
&& tar --strip-components=1 -xzf perl-5.22.1.tar.gz \
&& rm perl-5.22.1.tar.gz \
&& ./Configure -des -Dprefix=/opt/perl-5.22.1/localperl \
&& make -j $(nproc) \
&& make -j $(nproc) test \
&& make install \
&& rm -fr /shared/perl-5.22.1 /tmp/*
ENV PATH="/opt/perl-5.22.1/localperl/bin:$PATH"
WORKDIR /root
CMD ["perl","-de0"]
When you simply run a container with this image, you'll immediately get into perl's shell. If you need bash, then use docker run -it --rm amazon-perl /bin/bash
It would also be good to look at the Environment replacement section in the Dockerfile reference documentation, just to figure out how things work. For example, it isn't best practice to have that many RUN lines in your Dockerfile, because each RUN instruction executes its commands in a new layer on top of the current image and commits the result, so you end up with many unnecessary layers.
