Pass input parameter to docker container at runtime - docker

I have the below working docker file:
# Use the Oracle SQLcl image as the base image
FROM container-registry.oracle.com/database/sqlcl:latest
# Set the current working directory
WORKDIR /app/
# Set the TNS_ADMIN environment variable
ENV TNS_ADMIN /opt/oracle/network/admin/
# Copy the TNSNAMES.ORA file into the container
COPY TNSNAMES.ORA $TNS_ADMIN/TNSNAMES.ORA
COPY scripts/ scripts/
# Login with AQTOPDATA
ENTRYPOINT ["sql", "mycredentials", "#scripts/script1.sql"]
However, I would like to parameterize #scripts/script1.sql so that scripts can be taken from a public repository, instead of being hardcoded like that.
For example, let's say there are the below files in an open repository:
script1.sql
script2.sql
I would like to be able to do something like (pseudo code) when I run the container:
docker run mycontainer --parameter https://github.com/repo/script2.sql
How can I achieve this?

Just remove #scripts/script1.sql from your ENTRYPOINT, and put it in CMD, so that you have:
ENTRYPOINT ["sql", "mycredentials"]
CMD ["#scripts/script1.sql"]
If you just docker run myimage, it will run:
sql mycredentials #scripts/script1.sql
But if you run instead docker run myimage https://github.com/repo/script2.sql, then it will run:
sql mycredentials https://github.com/repo/script2.sql
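For completeness, the resulting Dockerfile would look like this; only the last two lines change from the version in the question:
# Use the Oracle SQLcl image as the base image
FROM container-registry.oracle.com/database/sqlcl:latest
WORKDIR /app/
ENV TNS_ADMIN /opt/oracle/network/admin/
COPY TNSNAMES.ORA $TNS_ADMIN/TNSNAMES.ORA
COPY scripts/ scripts/
# Fixed part of the command line
ENTRYPOINT ["sql", "mycredentials"]
# Default argument; anything you pass after the image name on docker run replaces it
CMD ["#scripts/script1.sql"]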

Related

Best way to update config file in Docker with environment variables

I'm unable to find an easy solution, but probably I'm just searching for the wrong things:
I have a docker-compose.yml which contains a tomcat that is built by the contents of the /tomcat folder. In /tomcat there is a Dockerfile, a .war and a server.xml.
The Dockerfile is based on tomcat:9, and copies the server.xml and .war files into the right directories.
If I do docker-compose up, everything runs fine. But I would love to find a way to update the connectors within the server.xml without pruning the image, adjusting the server.xml and starting it again.
It would be perfect to put a $CONNECTOR_CONFIG placeholder in the server.xml, and provide a variables.env to docker-compose where the $CONNECTOR_CONFIG variable is set to the desired connector configuration.
I know I could adjust the server.xml within the Dockerfile with sed, but that way the image must be pruned every time I want to change something, right?
Is there a way that I can later just edit the variables.env and docker-compose down/up?
Regards,
EdFred
A useful pattern here is to use the image's ENTRYPOINT as a wrapper script that does first-time setup. If that script ends with exec "$@" then it will go on to execute the image's CMD as normal. You can use this to do things like rewrite configuration files based on environment variables.
#!/bin/sh
# docker-entrypoint.sh
# Replace any environment variable references in server.xml.tmpl.
# (Assumes the image has the full GNU tool set.)
envsubst <"$CATALINA_BASE/conf/server.xml.tmpl" >"$CATALINA_BASE/conf/server.xml"
# Run the standard container command.
exec "$@"
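As a sketch of what the template might contain (this fragment is illustrative, not taken from the question): server.xml.tmpl is just an ordinary server.xml with an environment-variable reference where the connector settings should go. Note that envsubst with no arguments substitutes every environment-variable reference it finds, so keep the rest of the template free of stray $NAME-style text you don't want rewritten.
<!-- conf/server.xml.tmpl (fragment, illustrative only) -->
<Service name="Catalina">
  <!-- envsubst replaces $CONNECTOR_CONFIG with the value from the container environment -->
  <Connector port="8080" protocol="HTTP/1.1" $CONNECTOR_CONFIG />
</Service>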
Normally in a tomcat image you wouldn't include a CMD, since the base image already knows how to start Tomcat. The Docker Hub tomcat image page mentions the command, or you can click through to find the original Dockerfile. You need to know it here because specifying an ENTRYPOINT in a derived Dockerfile resets the CMD.
Your Dockerfile then needs to COPY this script in and set up the ENTRYPOINT and CMD.
# Dockerfile
FROM tomcat:9
COPY myapp.war /usr/local/tomcat/webapps/
COPY server.xml.tmpl /usr/local/tomcat/conf/
COPY docker-entrypoint.sh /usr/local/tomcat/bin/
# ENTRYPOINT _MUST_ be JSON-array form
ENTRYPOINT ["docker-entrypoint.sh"]
# Duplicate from base image
CMD ["catalina.sh", "run"]
You can verify this by hand using a docker run command. Any command you specify after the image name gets run instead of the CMD; but the main container command is still constructed by passing that command as arguments to the ENTRYPOINT, so your wrapper script will still run.
docker run --rm \
-e CONNECTOR_CONFIG=test-connector-config \
my-image \
cat /usr/local/tomcat/conf/server.xml
In your final Compose setup, you can include the configuration as an environment: variable.
version: '3.8'
services:
  myapp:
    build: .
    ports: ['8080:8080']
    environment:
      CONNECTOR_CONFIG: ...
envsubst is a GNU tool that replaces $ENVIRONMENT_VARIABLE references in text files. It's very useful for this specific case, but you can do the same work with sed or another text-processing tool, especially if you don't have the GNU tools available (in particular if you have an Alpine-based image).
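As a rough sketch of the sed route (the @CONNECTOR_CONFIG@ placeholder is an assumption of mine, not from the question, and this assumes the value contains no characters special to sed's replacement text, such as | or &): it's simplest to give the template a distinctive marker rather than shell-style $VAR syntax and substitute just that one value.
# Alternative to envsubst: substitute only the one placeholder.
# Assumes server.xml.tmpl contains the literal text @CONNECTOR_CONFIG@.
sed "s|@CONNECTOR_CONFIG@|$CONNECTOR_CONFIG|g" \
  "$CATALINA_BASE/conf/server.xml.tmpl" \
  >"$CATALINA_BASE/conf/server.xml"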

How can I copy files from the GitLab Runner helper container to the build container?

Set up
I set up GitLab Runner with a KubernetesExecutor and want to create a custom helper image which adds some extra files to the build container.
The current set up is quite basic:
A Dockerfile, which adds some files (start.sh and Dockerfile) into the container.
A start.sh file which is present in the helper image. This should be executed when the helper is run.
Code
start.sh
#!/bin/bash
printenv > /test.out # Check whether the script is run.
cp /var/Dockerfile /builds/Dockerfile # Copy file to shared volume.
exec "$@"
Dockerfile
FROM gitlab/gitlab-runner-helper:x86_64-latest
ADD templates/Dockerfile /var/Dockerfile
ADD start.sh /var/run/start.sh
ENTRYPOINT ["sh", "/var/run/start.sh"]
The shared volume between the containers is: /builds. As such, I'd like to copy /var/Dockerfile to /builds/Dockerfile.
Problem
I can't seem to find a way to (even) run start.sh when the helper image is executed. Using kubectl exec -it pod-id -c build bash and kubectl exec -it pod-id -c helper bash, I verify whether the files are created. When I run start.sh (manually) from the latter command, the files are copied. However, neither /test.out nor /builds/Dockerfile are available when logging in to the helper image initially.
Attempts
I've tried setting up a different CMD (/var/run/start.sh), but it seems like it simply doesn't run the sh file.

How to copy a file from the host into a container while starting?

I am trying to build a Docker image using a Dockerfile. My purpose is to copy a file into a specific folder when I run the docker run command.
This is my Dockerfile:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
ENTRYPOINT ["cp"]
CMD ["/usr/src/myapp"]
CMD ls /usr/src/myapp
After building my image without any error (using the docker build command), I tried to run my new image:
docker run myjavaimage MainClass.java
I got this error: ** cp: missing destination file operand after ‘MainClass.java’ **
How can I resolve this? Thanks.
I think you want this Dockerfile:
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY MainClass.java .
RUN javac MainClass.java
ENV CLASSPATH=/usr/src/myapp
CMD java MainClass
When you docker build this image, it COPYs your Java source file from your local directory into the image, compiles it, and sets some metadata telling the JVM where to find the resulting .class files. Then when you launch the container, it will run the single application you've packaged there.
It's common enough to use a higher-level build tool like Maven or Gradle to compile multiple files into a single .jar file. Make sure to COPY all of the source files you need in before running the build. In Java it seems to be common to build the .jar file outside of Docker and just COPY that in without needing a JDK, and that's a reasonable path too.
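For example, a multi-stage build along these lines keeps the JDK and Maven out of the final image (the image tags, file names, and standard src/ layout here are assumptions for illustration, not taken from the question):
# Stage 1: compile with Maven
FROM maven:3-openjdk-8 AS build
WORKDIR /build
COPY pom.xml .
COPY src/ src/
RUN mvn -q package
# Stage 2: copy only the built jar into a runtime-only image
FROM openjdk:8-jre
WORKDIR /app
COPY --from=build /build/target/myapp.jar ./myapp.jar
CMD ["java", "-jar", "myapp.jar"]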
In the Dockerfile you show, Docker combines ENTRYPOINT and CMD into a single command and runs that command as the single main process of the container. If you provide a command after the image name on the docker run command line, it overrides CMD but does not override ENTRYPOINT. You only get one ENTRYPOINT and one CMD, and the last one of each in the Dockerfile wins. So you're trying to run container processes like
# What's in the Dockerfile
cp /bin/sh -c "ls /usr/src/myapp"
# Via your docker run command
cp MainClass.java
As @QuintenScheppermans suggests in their answer, you can use a docker run -v option to inject the file at run time, but this happens after commands like RUN javac have already run. You don't really want a workflow where the entire application gets rebuilt every time you docker run the container. Build the image during docker build time, or before.
Two things.
You have used CMD twice.
CMD is effectively only used once: only the last CMD in the Dockerfile takes effect, and it is what the container executes by default every time it is run. Think of it as the purpose of your Docker image. If you want multiple commands, use RUN for the earlier steps and then, lastly, a single CMD.
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/
ENTRYPOINT ["cp"]
RUN /usr/src/myapp
RUN ls /usr/src/myapp
Copying stuff into image
There is a simple command, COPY, with the syntax COPY <from-here> <to-here>
Seems like you want to run myjavaimage, so what you will do is
COPY /path/to/myjavaimage /myjavaimage
CMD myjavaimage MainClass.java
Where you see the arrows, I've just written dummy code. Replace that with the correct code.
Also, your Dockerfile is badly created.
ENTRYPOINT -> not sure why you'd do "cp", but it's an actual entrypoint. Could point to the root dir of your project or to an app that will be run.
I don't understand why you want to do ls /usr/src/myapp, but if you do want to do it, use RUN and not CMD.
Lastly,
The best way to debug Docker containers is in interactive mode. That means getting a shell inside your container, having a look around, running code, and seeing what the problem is.
Run this: docker run -it <image-name> /bin/bash, then have a look inside; it's usually the best way to see what causes issues.
This stackoverflow page perfectly answers your question.
COPY foo.txt /data/foo.txt
# where foo.txt is the relative path on host
# and /data/foo.txt is the absolute path in the image
If you need to mount a file when running the command:
docker run --name=foo -d -v ~/foo.txt:/data/foo.txt -p 80:80 image_name

Passing multiple classFile as argument to Dockerfile

I have a Dockerfile like this:
FROM java:8
ARG cName
ADD target/jar1.jar p2p.jar
ADD ci/docker_entrypoint.sh .
CMD ["bash", "docker_entrypoint.sh" , "$cName"]
I have a docker_entrypoint.sh which look :
java -cp p2p.jar $1
I have multiple classes to run, and I am providing the class name as an input parameter to the Dockerfile. I am running a couple of commands to build and run Docker.
docker build -f Dockerfile -t docker-p2p --build-arg cName=com.HelloWorld .
docker run docker-p2p
after running the second command I am getting below error:
Error: Could not find or load main class $cName
I am new to Docker and I am not able to parameterize my Dockerfile. When I mention a class name "HelloWorld" in the Dockerfile, it runs well, but when I try to pass parameters, it throws me out with this error.
You have to distinguish between docker run, CMD and ENTRYPOINT.
For your example you can use an ENTRYPOINT and set the parameter via an environment variable.
One simple and easy Dockerfile example could be:
FROM java:8
ENV NAME="John Dow"
ENTRYPOINT ["/bin/bash", "-c", "echo Hello, $NAME"]
with docker build . -t test and docker run -e NAME="test123" test
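For example, building and running it could look like this (the expected output is shown in comments):
docker build . -t test
docker run -e NAME="test123" test   # prints: Hello, test123
docker run test                     # prints: Hello, John Dow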
Also have a look at some further documentation: docker-run-vs-cmd-vs-entrypoint.
If you do wind up with a Docker image that can do multiple things, it's a little unusual to create one image per task the way you're describing. You can pass additional command-line parameters in docker run or most other ways to start a container, and you can use that to control what the image does.
For example, you might want to set up your image so that you can run
docker run ... docker-p2p com.HelloWorld
passing the class name as an argument. I'd write an entrypoint script that wrapped this in a java call if appropriate (but passed through non-class names, like docker run ... sh):
#!/bin/sh
set -e
case "$1" in
  com.*) exec java "$@" ;;
  *) exec "$@" ;;
esac
The corresponding Dockerfile doesn't take any ARGs; it could be
FROM java:8
# I prefer COPY to ADD, unless you explicitly want automatic
# HTTP fetches and/or tar file extraction.
COPY target/jar1.jar /p2p.jar
COPY ci/docker_entrypoint.sh /
# Globally set the class path. (A Docker image only does one thing.)
ENV CLASSPATH /p2p.jar
# Always launch the entrypoint script.
ENTRYPOINT ["/docker_entrypoint.sh"]
# Give a default command, which with our script is a class name.
CMD ["com.HelloWorld"]
If you actually want a container per task, you could create a base image that contained everything up to the ENTRYPOINT line, and then created derived images FROM that base image that just set a different CMD.
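A minimal sketch of that base-plus-derived layout, assuming the image above was built and tagged as docker-p2p-base (that tag and the second class name are made up for illustration):
# Derived image: reuses everything from the base, only changes the default class to run
FROM docker-p2p-base
CMD ["com.AnotherWorker"]
Everything else (the jar, the entrypoint script, the CLASSPATH) is inherited from the base image.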

Docker mount happens before or after entrypoint execution

I'm building a Docker image to run my Spring Boot based application. I want the user to be able to supply a runtime properties file by mounting the folder containing application.properties into the container. Here is my Dockerfile,
FROM java:8
RUN mkdir /app
RUN mkdir /app/config
ADD myapp.jar /app/
ENTRYPOINT ["java","-jar","/app/myapp.jar"]
When kicking off container, I run this,
docker run -d -v /home/user/config:/app/config myapp:latest
where /home/user/config contains the application.properties I want the jar file to pick up during run time.
However this doesn't work: the app doesn't pick up the mounted properties file; it uses the default one packed inside the jar. But when I exec into the started container and manually run the entrypoint command again, it works as expected and picks up the file I mounted in. So I'm wondering: is this related to how mounts work with ENTRYPOINT, or did I just not write the Dockerfile correctly for this case?
Spring Boot searches for application.properties inside a /config subdirectory of the current directory (among other locations). In your case, the current directory is / (Docker's default), so you need to change it to /app. To do that, add
WORKDIR /app
before the ENTRYPOINT line.
And to answer your original question: mounts are done before anything inside the container is run.
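For reference, a minimal sketch of the question's Dockerfile with that change applied:
FROM java:8
RUN mkdir -p /app/config
ADD myapp.jar /app/
# Run from /app so Spring Boot also reads /app/config/application.properties,
# which is where the host directory is mounted at docker run time.
WORKDIR /app
ENTRYPOINT ["java","-jar","/app/myapp.jar"]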
