How to pass arguments in a Dockerfile during the docker run command

I am new to Docker. For learning purposes, I'm working on a code submission platform (online judge). I know at a high level that whenever a user submits code, it hits an API that receives the code, a language ID, and inputs (if any); the code runs in a Docker container and the output (or any error) is returned to the client.
Dockerfile:
FROM gcc:latest
COPY main.cpp /usr/src/cpp_test/prog.cpp
WORKDIR /usr/src/cpp_test/
CMD [ "sh", "-c", "g++ -o Test prog.cpp && ./Test" ]
So whenever a user submits code, I first build this Dockerfile (docker build), because main.cpp is different every time, and then run the docker run command.
My question is: is there any way to build this Dockerfile only once (by making it more general), so that whenever a user submits code I only need to run the docker run command?
Remember, there are three arguments I have to pass, i.e., the code, the language ID, and the inputs (if any).
Any help will be appreciated.
Thank you.

First, looking at your existing Dockerfile: the output of docker build should be an image containing a compiled, ready-to-run binary, but the image you have now contains a compiler plus an uncompiled source file. (If you're used to a compiler like g++, think of docker build as the same sort of step: it produces a directly runnable artifact that doesn't need a copy of the source and doesn't need to be rebuilt to run.) So I'd typically RUN g++ at image-build time, not defer it until container startup:
FROM gcc:latest
WORKDIR /usr/src/cpp_test
COPY main.cpp .
RUN g++ -o Test main.cpp
CMD [ "./Test" ]
Even for the higher-level workflow you describe, I'm not sure I'd stray too far from this Dockerfile. It doesn't do a whole lot beyond running the compiler and producing a runnable image; as long as the source file is named main.cpp and has no library dependencies, it will work for any C++ source file.
If you really wanted to build and run an arbitrary source file without building an image per submission, you can use the unmodified gcc image; a custom image wouldn't add anything here. I'd suggest writing a launcher shell script:
#!/bin/sh
set -e
g++ -o Test main.cpp
./Test
Create an execution directory:
mkdir run_1
cp -a launcher.sh run_1
cp submission_1.cpp run_1/main.cpp
Then run the container against that tree:
docker run --rm \
-v "$PWD/run_1:/code" \
-w /code \
gcc:latest \
./launcher.sh
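To glue those steps together per submission, a small wrapper on the API side might look like this sketch. The run_N directory layout and launcher.sh follow the example above; build_run_cmd only assembles the command string, so the same gcc:latest image is reused for every submission:

```shell
#!/bin/sh
# Sketch of a per-submission wrapper. Assumes launcher.sh from above
# sits next to this script; run directories are named run_<id>.

# Assemble the docker run invocation for a given submission id.
build_run_cmd() {
    printf 'docker run --rm -v %s/run_%s:/code -w /code gcc:latest ./launcher.sh' \
        "$PWD" "$1"
}

# Stage a submission: copy the launcher and the submitted source in.
prepare_run_dir() {
    mkdir -p "run_$1"
    cp launcher.sh "run_$1/"
    cp "$2" "run_$1/main.cpp"
}

# Usage (per submission):
#   prepare_run_dir 42 submission_42.cpp
#   eval "$(build_run_cmd 42)"
```

The inputs the user supplies (your third argument) could be staged into the same directory and redirected into ./Test from launcher.sh.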

Related

Docker approach for multi-conditional pipeline

I'm completely new to Docker. I have the following idea in mind: I need to provide a single image that, based on runtime arguments like profile/stage and whether Python is included, performs different scripts.
These scripts use lots of params that can be overridden from outside. I searched through similar issues but didn't find anything comparable.
I have the following idea in mind, but it seems quite difficult to support and ugly; I hope someone can provide a better solution.
The rough image content:
FROM openjdk:8
#ARG py_ver=2.7
#RUN if [-z "$py_ver" ] ; then echo python version not provided ; else echo python version is $py_ver ; fi
#FROM python:${py_ver}
# set the working directory in the container
WORKDIR /models
# copy the dependencies file to the working directory
COPY training.sh execution.sh requirements/ ./
#RUN pip install --no-cache-dir -r requirements.txt
ENV profile="training"
ENV pythonEnabled=false
RUN if [ "$profile" = "training" ]; then \
command="java=/usr/bin/java training.sh"; \
else \
command="java=/usr/bin/java execution.sh"; \
fi
ENTRYPOINT ["${command}"]
I suppose I have several issues: 1) I need one image, but based on runtime parameters I need to choose the appropriate run script; 2) I have to pass a lot of args to the training and execution scripts (approx. 6-7 params), which is awkward to do with "-e"; 3) my image should be able to download any Python version and, at runtime, use the version specified in the args.
I looked at docker-compose, but it helps when you need to manage several services, which isn't my case: I have a single service with different setup params and preparation flow. Could someone suggest a better approach than spaghetti if-else conditions for the runtime-selected Python version and profile?
It might help to look at this question in two parts. First, how can you control what runtime you're using; and second, how can you control what happens when the container runs?
A Docker image typically contains a single application, but if there's a substantial code base and several ways to invoke it, you can package that all together. In Python land, a Flask application and an associated Celery worker might be bundled together, for example. Regardless, the image still contains a single language interpreter and the associated libraries: you wouldn't build an image with three versions of Python and four versions of supporting libraries.
For things that control the single language interpreter and library stack that get built into an image, ARG as you've shown it is the correct way to do it:
ARG py_ver=3.9
RUN apt-get update && apt-get install -y python${py_ver}
ARG requirements=requirements.txt
RUN pip install -r ${requirements}
If you need to build the same image for multiple language versions, you can build it using a shell loop, or similar automation:
for py_ver in 3.6 3.7 3.8 3.9; do
  docker build --build-arg py_ver="$py_ver" -t "myapp:python${py_ver}" .
done
docker run ... myapp:python3.9
As far as what gets run when you launch the container, you have a couple of choices. You can provide an alternate command when you start the container, and the easiest thing to do is to discard the entire "profile" section at the end of the Dockerfile and just provide that:
docker run ... myapp:python3.9 \
training.sh
You mention that a couple of the invocations are more involved. You can wrap these in shell scripts
#!/bin/sh
java -Dfoo=bar -Dbaz.quux.meep=peem ... \
-jar myapp.jar \
arg1 arg2 arg3
and then COPY them into your image into one of the usual executable paths
COPY training-full.sh /usr/local/bin
and then you can just run that script as the main container command
docker run ... myapp:python3.9 training-full.sh
You can, with some care, use ENTRYPOINT here. The important detail is that the CMD gets passed to the ENTRYPOINT as command-line arguments, and in your Dockerfile the ENTRYPOINT generally must have JSON-array syntax. You could in principle use this to create artificial "commands":
#!/bin/sh
case "$1" in
  training)
    shift
    exec training.sh foo bar "$@"
    ;;
  execution)
    shift
    exec execution.sh "$@"
    ;;
  *)
    exec "$@"
    ;;
esac
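To wire a dispatcher like that into the image, the tail of the Dockerfile might look like the sketch below. The names are assumptions: the script saved as entrypoint.sh, and the app scripts copied somewhere on the PATH so the dispatcher can find them.

```dockerfile
COPY entrypoint.sh training.sh execution.sh /usr/local/bin/
ENTRYPOINT ["entrypoint.sh"]
# Default "command" when none is given on the docker run command line
CMD ["training"]
```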
Then you can launch the container in a couple of ways
docker run --rm myapp:python3.9 training
docker run --rm myapp:python3.9 execution 'hello world'
docker run --rm myapp:python3.9 ls -l /
docker run --rm -it myapp:python3.9 bash

How to use an executable file created in a Dockerfile in the same Dockerfile

What I want: I want to run a C++ file which uses OpenCV inside a container.
What I've done:
installed an OpenCV image:
docker pull spmallick/opencv-docker:opencv
created a Dockerfile:
FROM spmallick/opencv-docker:opencv
RUN ["g++ a.cpp -o a.out"]
COPY . .
CMD ["./a.out"]
Build command:
sudo docker build -t project_opencv .
OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "g++ a.cpp -o a.out": executable file not found in $PATH": unknown
First I tried this with CMD instead of RUN (how to use cmd inside a Dockerfile to compile a file). It couldn't find a.out although I'd done COPY . . Now it seems there is a problem creating a.out.
When you use the JSON-array form of RUN (or CMD or ENTRYPOINT), you explicitly provide the set of "words" that make up the command. As you've shown it, it is a single word, and Docker tries to run it as such, including the spaces in the command name it's looking up. You need to split it into its constituent words yourself.
RUN ["g++", "a.cpp", "-o", "a.out"]
The reverse side of this is that there is no splitting, expansion, or interpolation that happens in this command:
# Preserves both the literal $ and the spaces
RUN ["cp", "dollars-word-$word.txt", "file name with spaces.txt"]
Especially for RUN it's common to use the shell form. This wraps the command in sh -c so it works like an ordinary command. (There are technical reasons you might not want this for CMD and especially ENTRYPOINT.)
RUN g++ a.cpp -o a.out
RUN cp dollars-word-\$word.txt 'file name with spaces.txt'
RUN tail env-var-$word.txt
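Applied to the original question, with the COPY moved before the RUN so the source file exists when the compiler runs, a working Dockerfile might look like this sketch (the pkg-config invocation is an assumption about how OpenCV is installed in that base image):

```dockerfile
FROM spmallick/opencv-docker:opencv
WORKDIR /app
# Copy the source in before trying to compile it
COPY . .
# Shell form, so the $(...) substitution is processed by sh
RUN g++ a.cpp -o a.out $(pkg-config --cflags --libs opencv4)
CMD ["./a.out"]
```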
First copy your a.cpp into the Docker image; then your RUN command will work. I am not sure whether your RUN ["g++ a.cpp -o a.out"] form will work or not, but try this:
FROM spmallick/opencv-docker:opencv
COPY . .
RUN ["g++ a.cpp -o a.out"]
CMD ["./a.out"]
I think you need to check the permissions of the files you have copied from your host machine into the container.
You could also check which directory you are in; if you need to change directory, you can use the WORKDIR instruction in your Dockerfile.
It would also be useful to start a bash session in the container and run the commands manually. Each successful command can then go into the Dockerfile, and if you encounter an error you can troubleshoot it on the spot, rather than changing the Dockerfile and rebuilding each time, which is time-consuming.

How to copy a file from the host into a container while starting?

I am trying to build a Docker image using a Dockerfile. My goal is to copy a file into a specific folder when I run the docker run command.
This is my Dockerfile:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
ENTRYPOINT ["cp"]
CMD ["/usr/src/myapp"]
CMD ls /usr/src/myapp
After building my image without any error (using the docker build command), I tried to run my new image:
docker run myjavaimage MainClass.java
I got this error: cp: missing destination file operand after 'MainClass.java'
How can I resolve this? Thx
I think you want this Dockerfile:
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY MainClass.java .
RUN javac MainClass.java
ENV CLASSPATH=/usr/src/myapp
CMD java MainClass
When you docker build this image, it COPYs your Java source file from your local directory into the image, compiles it, and sets some metadata telling the JVM where to find the resulting .class files. Then when you launch the container, it will run the single application you've packaged there.
It's common enough to use a higher-level build tool like Maven or Gradle to compile multiple files into a single .jar file. Make sure to COPY all of the source files you need in before running the build. In Java it seems to be common to build the .jar file outside of Docker and just COPY that in without needing a JDK, and that's a reasonable path too.
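With the Dockerfile above, the workflow is a one-time build followed by plain runs; the image name here is an arbitrary choice:

```shell
docker build -t myjavaimage .
docker run --rm myjavaimage
```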
In the Dockerfile you show, Docker combines ENTRYPOINT and CMD into a single command and runs that as the main process of the container. If you provide a command on the docker run command line, it overrides CMD but not ENTRYPOINT. You only get one ENTRYPOINT and one CMD, and the last one in the Dockerfile wins. So you're trying to run container processes like:
# What's in the Dockerfile
cp /bin/sh -c "ls /usr/src/myapp"
# Via your docker run command
cp MainClass.java
As @QuintenScheppermans suggests in their answer, you can use a docker run -v option to inject the file at run time, but this happens after commands like RUN javac have already run. You don't really want a workflow where the entire application gets rebuilt every time you docker run the container. Build the image during docker build time, or before.
Two things.
You have used CMD twice.
CMD can only be used once; think of it as the purpose of your Docker image. Every time a container is run, it will always execute CMD. If you want multiple commands, use RUN for the intermediate ones and CMD only for the last:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
RUN ls /usr/src/myapp
CMD ls /usr/src/myapp
Copying stuff into the image
There is a simple instruction, COPY, the syntax being COPY <from-here> <to-here>.
Seems like you want your class available in the image, so what you will do is:
COPY /path/to/MainClass.java /usr/src/myapp/
CMD java MainClass
Where you see the paths, I've just written dummy code. Replace them with the correct paths.
Also, your Dockerfile is badly constructed.
ENTRYPOINT: not sure why you'd use "cp", but an actual entrypoint could point to the root dir of your project or to an app that will be run.
I don't understand why you want to do ls /usr/src/myapp, but if you do, use RUN, not CMD.
Lastly,
the best way to debug Docker containers is in interactive mode. That means opening a shell in the container, having a look around, running commands, and seeing what the problem is.
Run docker run -it <image-name> /bin/bash, then have a look inside; it's usually the best way to see what causes issues.
This stackoverflow page perfectly answers your question.
COPY foo.txt /data/foo.txt
# where foo.txt is the relative path on host
# and /data/foo.txt is the absolute path in the image
If you need to mount a file when running the command:
docker run --name=foo -d -v ~/foo.txt:/data/foo.txt -p 80:80 image_name

Create multistage docker file

I have 2 different Dockerfiles (for production and test environments). I want to combine these into a single multi-stage Dockerfile.
First Dockerfile:
FROM wildfly:17.0.0
USER jboss
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jboss:jboss install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]
Second Dockerfile:
FROM wildfly:17.0.0
USER jboss
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jboss:jboss ./install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN rm /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN rm /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]
Question: How do I create a multi-stage Dockerfile from these 2 Dockerfiles?
Thanks in advance.
I'm not sure what you are trying to do requires a multi-stage build. Instead, what you may want to do is use a custom base image (for dev), which would be your first code block, and build the production image on top of that base.
If the first image was tagged as shanmukh/wildfly:dev:
FROM shanmukh/wildfly:dev
USER jboss
RUN rm /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN rm /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
This could be tagged as shanmukh/wildfly:prod.
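Concretely, with the two Dockerfiles saved under the assumed names Dockerfile.dev and Dockerfile.prod, the build order would be dev first, since the prod image builds on top of it:

```shell
docker build -t shanmukh/wildfly:dev  -f Dockerfile.dev  .
docker build -t shanmukh/wildfly:prod -f Dockerfile.prod .
```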
The reason I don't think you want a multi-stage build is that you're handling two environments (test and production), not two build stages.
Even if you did want a multi-stage build for production, from what I can see there is no reason to. If your initial build stage installed build dependencies such as a compiler, copied code in, and compiled it, then a multi-stage build would be worthwhile: the final image would not include the compiler (and other potentially dangerous dev tools) or any unneeded build artifacts.
Hope this helps.

How to Wrap an Executable in Docker?

I have an executable binary called abin that I want to containerize. All abin does is output a test string via printf. So I create a directory called test, which contains only abin and the following Dockerfile:
from alpine:3.7
copy abin /
entrypoint /abin
So I sudo docker build -t test/testy:tag . && sudo docker run --rm test/testy:tag, and get the following:
/bin/sh: /abin: not found
This baffles me for two reasons:
why is sh running despite setting the entrypoint to /abin?
why is /abin not found?
Changing the entrypoint to stat /abin then re-building and re-running gives the expected stat output, clearly indicating that there's an executable file at /abin. By the same token, removing the entrypoint and running in the container's interactive shell, I can see the abin file, and I can ls or stat and cat etc., but ./abin or /abin still give the /bin/sh: ./abin: not found error.
EDIT:
I incorrectly assumed that Alpine ran the same kind of binaries as most Linux distributions. Not so: Alpine uses musl libc rather than glibc, so my dynamically linked binary couldn't find its loader (that's what the misleading "not found" error refers to). Also, the image doesn't even ship a C toolchain or headers, which was my second mistake. Lastly, I needed to specify the entrypoint as an absolute path in exec form. Thus the following Dockerfile works:
from alpine:3.7
workdir /
copy test.c .
run apk add gcc libc-dev && gcc test.c -o abin
entrypoint ["/abin"]
If your ENTRYPOINT is not inside square brackets (the exec form), Docker wraps it as sh -c "<entrypoint>", which is why /bin/sh shows up in the error. Use ENTRYPOINT ["/abin"] instead and I believe it should work.
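If you'd rather keep building abin outside the container, as in the original attempt, another sketch is to link it statically so it doesn't depend on glibc at all; the host-side build command below is an assumption about your toolchain:

```dockerfile
# Host side (assumed build step): gcc -static test.c -o abin
FROM alpine:3.7
COPY abin /abin
ENTRYPOINT ["/abin"]
```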