Dockerfile and Docker run -w / workdir - docker

Let's take the sample Python Dockerfile as an example.
FROM python:3
WORKDIR /project
COPY . /project
and then the run command to run the tests within that container:
docker run --rm -v "$(pwd)":/project -w /project mydocker:1.0 pytest tests/
We are declaring the WORKDIR in both the Dockerfile and the run command.
Am I right in saying:
The WORKDIR in the Dockerfile is the directory in which the subsequent commands in the Dockerfile are run, but it has no impact when we run the docker run command?
Instead we need to pass in -w /project to have pytest run in the /project directory, or rather for pytest to look for the tests directory in /project.
My setup.cfg:
[tool:pytest]
addopts =
    --junitxml=results/unit-tests.xml

In the example you give, you shouldn't need either the -v or -w option.
Various options in the Dockerfile give defaults that can be overridden at docker run time. CMD in the Dockerfile, for example, will be overridden by anything in a docker run command after the image name. (...and it's better to specify it in the Dockerfile than to have to manually specify it on every docker run invocation.)
Specifically to your question, WORKDIR affects the current directory for subsequent RUN and COPY commands, but it also specifies the default directory when the container runs; if you don't have a docker run -w option it will use that WORKDIR. Specifying -w to the same directory as the final image WORKDIR won't have an effect.
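For example, with the image built from the Dockerfile at the top, this quick check should print /project even though no -w option is given:
docker run --rm mydocker:1.0 pwd
# /project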
You also COPY the code into your image in the Dockerfile (which is good). You don't need a docker run -v option to overwrite that code at run time.
More specifically looking at pytest, it won't usually write things out to the filesystem. If you are using functionality like JUnit XML or code coverage reports, you can set it to write those out somewhere other than your application directory:
docker run --rm \
  -v $PWD/reports:/reports \
  mydocker:1.0 \
  pytest --cov=myapp --cov-report=html:/reports/coverage tests
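Similarly, since the setup.cfg above writes JUnit XML to results/unit-tests.xml (a path relative to the working directory), you could mount just a results directory rather than the whole source tree; a sketch along the same lines:
docker run --rm \
  -v $PWD/results:/project/results \
  mydocker:1.0 \
  pytest tests/
The test code baked into the image runs as-is, and only the XML report lands on the host.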

Related

how to override the files in docker container

I have the Dockerfile below:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, I tried to run the image with the below command, but I still could not see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container; however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
├── Dockerfile
├── package.json
├── sum.js
├── sum.test.js
├── Test
│   └── updated_sum.test.js
└── Scripts
    └── updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
The sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
ls
docker run --rm sum \
node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js
# not found
Instead you need to rebuild the image, using docker build --build-arg options to provide the values:
docker build \
--build-arg JS_FILE=./product.js \
--build-arg JS_TEST_FILE=./product.test.js \
-t product \
.
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
Another option is to COPY the entire source tree into the image (Javascript files are pretty small compared to a complete Node installation) and use a docker run command to pick which script to run.
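A minimal sketch of that variant, assuming the directory layout from the question and a hypothetical image name all-sums:
# Dockerfile — copy the whole tree; pick the script at run time
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
CMD ["node", "./sum.js"]
Then any script in the tree can be selected at run time:
docker build -t all-sums .
docker run --rm all-sums node ./Scripts/updated_sum.js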

How to copy a file from the host into a container while starting?

I am trying to build a Docker image using a Dockerfile; my purpose is to copy a file into a specific folder when I run the docker run command.
This is my Dockerfile:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
ENTRYPOINT ["cp"]
CMD ["/usr/src/myapp"]
CMD ls /usr/src/myapp
After building my image without any error (using the docker build command), I tried to run my new image:
docker run myjavaimage MainClass.java
I got this error: cp: missing destination file operand after ‘MainClass.java’
How can I resolve this? Thanks.
I think you want this Dockerfile:
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY MainClass.java .
RUN javac MainClass.java
ENV CLASSPATH=/usr/src/myapp
CMD java MainClass
When you docker build this image, it COPYs your Java source file from your local directory into the image, compiles it, and sets some metadata telling the JVM where to find the resulting .class files. Then when you launch the container, it will run the single application you've packaged there.
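Concretely, that workflow might look like this, reusing the image name from the question:
docker build -t myjavaimage .
docker run --rm myjavaimage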
It's common enough to use a higher-level build tool like Maven or Gradle to compile multiple files into a single .jar file. Make sure to COPY all of the source files you need in before running the build. In Java it seems to be common to build the .jar file outside of Docker and just COPY that in without needing a JDK, and that's a reasonable path too.
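A minimal sketch of that prebuilt-jar variant, with a hypothetical app.jar produced by your normal build:
FROM openjdk:7
# a JRE-only base image would also be enough here
WORKDIR /usr/src/myapp
COPY app.jar .
CMD ["java", "-jar", "app.jar"]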
In the Dockerfile you show, Docker combines ENTRYPOINT and CMD into a single command and runs that command as the single main process of the container. If you provide a command of some sort on the docker run command line, it overrides CMD but does not override ENTRYPOINT. You only get one ENTRYPOINT and one CMD, and the last one in the Dockerfile wins. So you're trying to run container processes like:
# What's in the Dockerfile
cp /bin/sh -c "ls /usr/src/myapp"
# Via your docker run command
cp MainClass.java
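(Supplying a destination, e.g. docker run myjavaimage MainClass.java /usr/src/myapp, would only move the error along: the image never COPYed MainClass.java in, so cp would fail to find its source file.)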
As @QuintenScheppermans suggests in their answer you can use a docker run -v option to inject the file at run time, but this will happen after commands like RUN javac have already happened. You don't really want a workflow where the entire application gets rebuilt every time you docker run the container. Build the image during docker build time, or before.
Two things.
You have used CMD twice.
CMD is effectively used once: only the last CMD in the Dockerfile takes effect. Think of it as the purpose of your Docker image; every time a container is run, it will execute CMD. If you want multiple commands, use RUN for the intermediate steps and then, lastly, CMD.
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
ENTRYPOINT ["cp"]
RUN ls /usr/src/myapp
CMD ls /usr/src/myapp
Copying stuff into the image
There is a simple instruction, COPY, the syntax being COPY <from-here> <to-here>.
Seems like you want to run myjavaimage, so what you will do is:
COPY /path/to/myjavaimage /myjavaimage
CMD myjavaimage MainClass.java
Where you see the arrows, I've just written dummy code. Replace that with the correct code.
Also, your Dockerfile is badly constructed.
ENTRYPOINT: not sure why you'd use "cp", but it's an actual entry point. It could point to the root dir of your project or to an app that will be run.
I don't understand why you want to do ls /usr/src/myapp, but if you do, use RUN and not CMD.
Lastly,
The best way to debug Docker containers is in interactive mode. That means getting a shell inside your container, having a look around, running code, and seeing what the problem is.
Run this: docker run -it <image-name> /bin/bash, then have a look inside; it's usually the best way to see what causes issues.
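If the container is already running in the background, docker exec gets you the same kind of shell (the container name here is a placeholder):
docker exec -it <container-name> /bin/bash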
This Stack Overflow page perfectly answers your question.
COPY foo.txt /data/foo.txt
# where foo.txt is the relative path on host
# and /data/foo.txt is the absolute path in the image
If you need to mount a file when running the command:
docker run --name=foo -d -v ~/foo.txt:/data/foo.txt -p 80:80 image_name

docker -v no more needed? and Dockerfile

I've read tutorials about using Docker:
docker run -it -p 9001:3000 -v $(pwd):/app simple-node-docker
but if I use:
docker run -it -p 9001:3000 simple-node-docker
it works too? Is -v no longer needed? Or is it taking the WORKDIR line from the Dockerfile?
FROM node:9-slim
# WORKDIR specifies the directory our
# application's code will live within
WORKDIR /app
Other tutorials use mkdir ./app in the Dockerfile, others don't. Is WORKDIR enough for Docker to create the folder automatically if it does not exist?
There are two common ways to get application content into a Docker container. Many Node tutorials I've seen confusingly do both of them. You don't need docker run -v, provided you docker build your container when you make changes.
The first way is to copy a static copy of the application into the image. You'd do this via a Dockerfile, typically looking something like this:
FROM node
WORKDIR /app
# Install only dependencies now, to make rebuilds faster
COPY package.json yarn.lock ./
RUN yarn install
# NB: node_modules is in .dockerignore so this doesn't overwrite
# the previous step
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
The resulting Docker image is self-contained: if you have just the image (maybe you docker pulled it from a repository) you can run it, as you note, without any special -v option. This path has the downside that you need to re-run docker build to recreate the image if you've made any changes.
The second way is to use docker run -v to inject the current source directory into the container. For example:
docker run \
  --rm \
  -p 3000:3000 \
  -v "$PWD":/app \
  -w /app \
  node \
  yarn start
Here --rm cleans up the container after it exits, -p 3000:3000 publishes a port, -v "$PWD":/app mounts the current directory over /app, -w /app sets the default working directory, node is the image to run, and yarn start is the command to run.
This path hides everything in the /app directory in the image and replaces it in the container with whatever you have in your current directory. This requires you to have a built functional copy of the application's source tree available, and so it supports things like live reloading; helpful for development, not what you want for Docker in production.
Like I say, I've seen a lot of tutorials do both things:
# First build an image, populating /app in that image
docker build -t myimage .
# Now run it, hiding whatever was in /app
docker run --rm -p3000:3000 -v$PWD:/app myimage
You don't need the -v option, but you do need to manually rebuild things if your application changes.
$EDITOR src/file.js
yarn test
sudo docker build -t myimage .
sudo docker run --rm -p3000:3000 myimage
Note that these docker commands require root-equivalent permission (hence the sudo); but on the flip side the final docker run command is very close to what you'd run "for real" (maybe via Docker Compose or Kubernetes, but without requiring a copy of the application source).

Docker file: I want to invoke one script from docker file

I am creating one Docker image named soaphonda.
The code of the Dockerfile is below:
FROM centos:7
FROM python:2.7
FROM java:openjdk-7-jdk
MAINTAINER Daniel Davison <sircapsalot@gmail.com>
# Version
ENV SOAPUI_VERSION 5.3.0
COPY entry_point.sh /opt/bin/entry_point.sh
COPY server.py /opt/bin/server.py
COPY server_index.html /opt/bin/server_index.html
COPY SoapUI-5.3.0.tar.gz /opt/SoapUI-5.3.0.tar.gz
COPY exit.sh /opt/bin/exit.sh
RUN chmod +x /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/server.py
# Download and unarchive SoapUI
RUN mkdir -p /opt
WORKDIR /opt
RUN tar -xvf SoapUI-5.3.0.tar.gz .
# Set working directory
WORKDIR /opt/bin
# Set environment
ENV PATH ${PATH}:/opt/SoapUI-5.3.0/bin
EXPOSE 3000
RUN chmod +x /opt/SoapUI-5.3.0/bin/mockservicerunner.sh
CMD ["/opt/bin/entry_point.sh","exit","pwd", "sh", "/Users/ankitsrivastava/Documents/SametimeFileTransfers/Honda/files/hondascript.sh"]
My image builds successfully. Once the image is built, I want it to be retagged and pushed to Docker Hub. For that I have created the script below:
docker tag soaphonda ankiksri/soaphonda
docker push ankiksri/soaphonda
docker login
docker run -d -p 8089:8089 --name demo ankiksri/soaphonda
containerid=`docker ps -aqf "name=demo"`
echo $containerid
docker exec -it $containerid bash -c 'cd ../SoapUI-5.3.0;sh /opt/SoapUI-5.3.0/bin/mockservicerunner.sh "/opt/SoapUI-5.3.0/Honda-soapui-project.xml"'
Please help me with how I can call the second script from the Dockerfile; also, the exit command is not working in the Dockerfile.
What you have to understand here is that what you specify within the Dockerfile are the commands that get executed when you build the image and then run a container from it.
So tagging, pushing, and running the Docker image should all be done after you have built the image from the Dockerfile. It cannot be done within the Dockerfile itself.
To achieve this kind of thing you would have to use a build tool like Maven (as an example) and automate the process of tagging and pushing the image. Also, looking at your image, I don't see any necessity to keep tagging and pushing it unless you are continuously updating it. And there is no point in using three FROM commands: each FROM starts a new build stage, so only the last one determines what ends up in your final image.
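As a sketch, the outer script could look like this, reusing the names from your own script (note docker login has to happen before docker push, not after):
#!/bin/sh
set -e
# build the image from the Dockerfile in the current directory
docker build -t soaphonda .
# retag and publish it
docker tag soaphonda ankiksri/soaphonda
docker login
docker push ankiksri/soaphonda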

Docker ONBUILD COPY doesn't seem to copy files

I am trying to copy files into a Docker image when I execute the docker build command. I am not sure what I am doing wrong, because this seems to work for the Docker rails onbuild image but doesn't work for my custom Dockerfile.
Here is my Dockerfile:
FROM ubuntu:14.04
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY Gemfile /usr/src/app
CMD ["tail", "-f", "/dev/null"]
The docker commands that I run are the following.
docker build -t copy-test .
docker run --name run-test -d copy-test
docker exec -i -t 2e9adebae0fc /bin/bash
When I connect to the container with docker exec, it starts in /usr/src/app, but the Gemfile is not there. I don't understand why the mkdir and WORKDIR instructions seem to work but the ONBUILD COPY does not. (And yes, the directory I am invoking these commands in does have a Gemfile present.)
You have to use a plain COPY to build your image. Use ONBUILD only if your image is a kind of template for building other images.
See the Docker documentation:
The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.
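A minimal sketch of that two-image pattern, using the question's Dockerfile as the template image (the image names here are made up):
# Dockerfile for the template image: docker build -t gem-template .
FROM ubuntu:14.04
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY Gemfile /usr/src/app
CMD ["tail", "-f", "/dev/null"]

# Dockerfile for the downstream image: the ONBUILD COPY fires during
# *this* build, copying the Gemfile from this build's context
FROM gem-template
For the question as asked, though, simply changing ONBUILD COPY to COPY in the single Dockerfile is the direct fix.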
