I am trying to copy files into a Docker image when I execute the docker build command. I am not sure what I am doing wrong, because this seems to work for the official rails onbuild Dockerfile but doesn't work for my custom Dockerfile.
Here is my Dockerfile:
FROM ubuntu:14.04
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY Gemfile /usr/src/app
CMD ["tail", "-f", "/dev/null"]
The docker commands that I run are the following.
docker build -t copy-test .
docker run --name run-test -d copy-test
docker exec -i -t 2e9adebae0fc /bin/bash
When I connect to the container with docker exec, it starts in /usr/src/app, but the Gemfile is not there. I don't understand why the mkdir and WORKDIR instructions seem to work but the ONBUILD COPY does not. (And yes, the directory I am invoking these commands in does have a Gemfile present.)
You have to use a plain COPY to build your image. Use ONBUILD only if your image is a kind of template for building other images.
See the Docker documentation:
The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.
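For the Dockerfile in the question, that means dropping the ONBUILD prefix so the COPY runs as part of this build instead of being deferred to a downstream build. A minimal sketch:
FROM ubuntu:14.04
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Runs now, during docker build, copying Gemfile from the build context
COPY Gemfile /usr/src/app
CMD ["tail", "-f", "/dev/null"]
With this change, the same docker build -t copy-test . and docker run --name run-test -d copy-test commands should leave the Gemfile visible in /usr/src/app.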
Here is my Dockerfile:
FROM golang:1.17.5 as builder
WORKDIR /go/src/github.com/cnosdb/cnosdb
COPY . /go/src/github.com/cnosdb/cnosdb
RUN go env -w GOPROXY=https://goproxy.cn,direct
RUN go env -w GO111MODULE=on
RUN go install ./...
FROM debian:stretch
COPY --from=builder /go/bin/cnosdb /go/bin/cnosdb-cli /usr/bin/
COPY --from=builder /go/src/github.com/cnosdb/cnosdb/etc/cnosdb.sample.toml /etc/cnosdb/cnosdb.conf
EXPOSE 8086
VOLUME /var/lib/cnosdb
COPY docker/entrypoint.sh /entrypoint.sh
COPY docker/init-cnosdb.sh /init-cnosdb.sh
RUN chmod +x /entrypoint.sh /init-cnosdb.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["cnosdb"]
Here is the configuration of my Jenkins job:
But the image Docker built didn't have a name. Why?
I haven't used Jenkins for this, as I build from the command line. But are you expecting your name arg to show up in the REPOSITORY and TAG columns? If that is the case, docker has a docker tag command, like so:
~$ docker tag <image-id> repo/name:tag
When I build from the command line, I do it like this:
~$ docker build -t repo/name:0.1 .
Then if I check images:
❯ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
repo/name 0.1 689459c139ef 2 days ago 187MB
Adding to what @rtl9069 mentioned, the docker build command can be run as part of a pipeline. Please take a look at this article, https://www.liatrio.com/blog/building-with-docker-using-jenkins-pipelines, as it describes this with examples.
I have the Dockerfile below:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, if I try to run it with the command below, I still cannot see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container, however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
├── Dockerfile
├── package.json
├── sum.js
├── sum.test.js
├── Test
│   └── updated_sum.test.js
└── Scripts
    └── updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
The sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
  ls
docker run --rm sum \
  node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js
# not found
Instead, you need to rebuild the image, using docker build --build-arg options to provide the values:
docker build \
  --build-arg JS_FILE=./product.js \
  --build-arg JS_TEST_FILE=./product.test.js \
  -t product \
  .
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
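To use it, you'd point docker build at this file (assuming it is saved as Dockerfile.sum next to the sources):
docker build -t sum -f Dockerfile.sum .
docker run --rm sum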
Another option is to COPY the entire source tree into the image (JavaScript files are pretty small compared to a complete Node installation) and use the docker run command to pick which script to run.
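A minimal sketch of that approach, assuming the directory layout from the question (the all-scripts image name is made up for this example):
FROM node:16.7.0
WORKDIR /app
COPY package.json ./
RUN npm install
# Copy everything, including the Test/ and Scripts/ directories
# (add node_modules to .dockerignore so it isn't copied in)
COPY . ./
CMD ["node", "./sum.js"]
Then choose a script at run time:
docker build -t all-scripts .
docker run --rm all-scripts node ./Scripts/updated_sum.js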
I'm trying to run Cypress tests from a Docker image.
I'm using this Dockerfile:
FROM cypress/included:4.8.0
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY cypress.json /usr/src/app/cypress.json
In the docs I found I can run tests like this:
docker run -it -v $PWD:/services/cypress -w /usr/src/app --entrypoint=cypress cypress/included:4.8.0 run
This gives me an error saying it can't find cypress.json in /usr/src/app.
services/cypress is where the Dockerfile is located.
I copied the file over to /usr/src/app in Dockerfile so I don't understand why it complains it can't find it.
Can I get any feedback on this?
The command you are using (docker run -it -v $PWD:/services/cypress -w /usr/src/app --entrypoint=cypress cypress/included:4.8.0 run) runs the stock cypress/included:4.8.0 image, which doesn't have your cypress.json file.
First, you need to build your own image using the Dockerfile you've written. That Dockerfile contains the instruction that copies your cypress.json file into a layer of your image. Note that image names must be lowercase:
docker build -t the-name-of-your-image .
Then you can run this image:
docker run -it -v $PWD:/services/cypress -w /usr/src/app --entrypoint=cypress the-name-of-your-image run
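Since your Dockerfile already copies cypress.json into /usr/src/app, the bind mount is probably unnecessary once you're running your own image; a minimal sketch, assuming nothing else is needed from the host:
docker run -it --entrypoint=cypress the-name-of-your-image run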
Let's take the sample python dockerfile as an example.
FROM python:3
WORKDIR /project
COPY . /project
and then the run command to run the tests within that container:
docker run --rm -v "$(pwd)":/project -w /project mydocker:1.0 pytest tests/
We are declaring the working directory both in the Dockerfile and in the docker run command.
Am I right in saying:
The WORKDIR in the Dockerfile is the directory in which the subsequent commands in the Dockerfile are run, but it has no impact when we run the docker run command?
Instead we need to pass in -w /project to have pytest run in the /project directory, or rather for pytest to look for the tests directory in /project.
My setup.cfg:
[tool:pytest]
addopts =
    --junitxml=results/unit-tests.xml
In the example you give, you shouldn't need either the -v or -w option.
Various options in the Dockerfile give defaults that can be overridden at docker run time. CMD in the Dockerfile, for example, will be overridden by anything in a docker run command after the image name. (...and it's better to specify it in the Dockerfile than to have to manually specify it on every docker run invocation.)
Specifically to your question, WORKDIR affects the current directory for subsequent RUN and COPY commands, but it also specifies the default directory when the container runs; if you don't have a docker run -w option it will use that WORKDIR. Specifying -w to the same directory as the final image WORKDIR won't have an effect.
You also COPY the code into your image in the Dockerfile (which is good). You don't need a docker run -v option to overwrite that code at run time.
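A quick way to see this, assuming the image was built from the question's Dockerfile:
docker run --rm mydocker:1.0 pwd
# prints /project, the WORKDIR from the Dockerfile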
More specifically looking at pytest, it won't usually write things out to the filesystem. If you are using functionality like JUnit XML or code coverage reports, you can set it to write those out somewhere other than your application directory:
docker run --rm \
-v $PWD/reports:/reports \
mydocker:1.0 \
pytest --cov=myapp --cov-report=html:/reports/coverage.html tests
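Likewise, with the setup.cfg from the question, pytest writes results/unit-tests.xml relative to the working directory, so you could mount just that directory to get the report back on the host (a sketch, assuming the image's WORKDIR is /project):
docker run --rm \
  -v "$PWD/results":/project/results \
  mydocker:1.0 \
  pytest tests/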
I've read tutorials about using Docker:
docker run -it -p 9001:3000 -v $(pwd):/app simple-node-docker
but if I use:
docker run -it -p 9001:3000 simple-node-docker
it works too? Is -v no longer needed? Or is it taking the WORKDIR line from the Dockerfile?
FROM node:9-slim
# WORKDIR specifies the directory our
# application's code will live within
WORKDIR /app
Other tutorials run mkdir ./app in the Dockerfile, and others don't. Is WORKDIR enough for Docker to create the folder automatically if it does not exist?
There are two common ways to get application content into a Docker container. Many Node tutorials I've seen confusingly do both of them. You don't need docker run -v, provided you docker build your container when you make changes. (And to your last question: yes, WORKDIR creates the directory if it doesn't already exist, so a separate mkdir is unnecessary.)
The first way is to copy a static copy of the application into the image. You'd do this via a Dockerfile, typically looking something like this:
FROM node
WORKDIR /app
# Install only dependencies now, to make rebuilds faster
COPY package.json yarn.lock ./
RUN yarn install
# NB: node_modules is in .dockerignore so this doesn't overwrite
# the previous step
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
The resulting Docker image is self-contained: if you have just the image (maybe you docker pulled it from a repository) you can run it, as you note, without any special -v option. This path has the downside that you need to re-run docker build to recreate the image if you've made any changes.
The second way is to use docker run -v to inject the current source directory into the container. For example:
# --rm: clean up after we're done
# -p: publish a port
# -v: mount the current directory over /app
# -w: set the default working directory
docker run --rm \
  -p 3000:3000 \
  -v "$PWD":/app \
  -w /app \
  node \
  yarn start
This path hides everything in the /app directory in the image and replaces it in the container with whatever you have in your current directory. It requires you to have a functional, built copy of the application's source tree available on the host, and so it supports things like live reloading; helpful for development, but not what you want for Docker in production.
Like I say, I've seen a lot of tutorials do both things:
# First build an image, populating /app in that image
docker build -t myimage .
# Now run it, hiding whatever was in /app
docker run --rm -p3000:3000 -v$PWD:/app myimage
You don't need the -v option, but you do need to manually rebuild things if your application changes.
$EDITOR src/file.js
yarn test
sudo docker build -t myimage .
sudo docker run --rm -p3000:3000 myimage
As I note here, the docker commands require root-equivalent permission; but on the flip side, the final docker run command is very close to what you'd run "for real" (maybe via Docker Compose or Kubernetes, but without requiring a copy of the application source).