How to override the files in a Docker container - docker

I have the below Dockerfile:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, I tried to run it with the below command, but I still could not see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container; however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
├── Dockerfile
├── package.json
├── sum.js
├── sum.test.js
├── Test
│   └── updated_sum.test.js
└── Scripts
    └── updated_sum.js

Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
the sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
ls
docker run --rm sum \
node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js
# not found
Instead you need to rebuild the image, using docker build --build-arg options to provide the values:
docker build \
--build-arg JS_FILE=./product.js \
--build-arg JS_TEST_FILE=./product.test.js \
-t product \
.
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
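Because that file isn't named Dockerfile, point docker build at it with the -f option:
docker build -t sum -f Dockerfile.sum .
docker run --rm sum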
Another option is to COPY the entire source tree into the image (Javascript files are pretty small compared to a complete Node installation) and use a docker run command to pick which script to run.
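For example, a minimal sketch of that approach, based on the Dockerfile in the question (the sum tag is illustrative):
FROM node:16.7.0
WORKDIR /app
COPY package.json ./
RUN npm install
# Copy everything, including Scripts/ and Test/
COPY . ./
CMD ["sh", "-c", "tail -f /dev/null"]
Then pick the script at run time:
docker build -t sum .
docker run --rm sum node ./Scripts/updated_sum.js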

Related

The image Docker built through Jenkins didn't have a name and tag

Here is my Dockerfile:
FROM golang:1.17.5 as builder
WORKDIR /go/src/github.com/cnosdb/cnosdb
COPY . /go/src/github.com/cnosdb/cnosdb
RUN go env -w GOPROXY=https://goproxy.cn,direct
RUN go env -w GO111MODULE=on
RUN go install ./...
FROM debian:stretch
COPY --from=builder /go/bin/cnosdb /go/bin/cnosdb-cli /usr/bin/
COPY --from=builder /go/src/github.com/cnosdb/cnosdb/etc/cnosdb.sample.toml /etc/cnosdb/cnosdb.conf
EXPOSE 8086
VOLUME /var/lib/cnosdb
COPY docker/entrypoint.sh /entrypoint.sh
COPY docker/init-cnosdb.sh /init-cnosdb.sh
RUN chmod +x /entrypoint.sh /init-cnosdb.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["cnosdb"]
Here is the configuration of my Jenkins job:
But the image Docker built didn't have a name.
Why?
I haven't used Jenkins for this, as I build from the command line. But are you expecting your name arg to show up in the REPOSITORY and TAG columns? If that is the case, docker has a docker tag command, like so:
~$ docker tag <image-id> repo/name:tag
When I build from the command line, I do it like this:
~$ docker build -t repo/name:0.1 .
Then if I check images:
❯ docker image ls
REPOSITORY   TAG   IMAGE ID       CREATED      SIZE
repo/name    0.1   689459c139ef   2 days ago   187MB
Adding to what @rtl9069 mentioned, the docker build command can be run as part of a pipeline. Please take a look at this article, https://www.liatrio.com/blog/building-with-docker-using-jenkins-pipelines, as it describes this with examples.
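For example, a minimal declarative pipeline sketch (the stage name and the repo/name tag are illustrative, not from the question):
pipeline {
    agent any
    stages {
        stage('Build and push image') {
            steps {
                // BUILD_NUMBER is set by Jenkins for every build
                sh 'docker build -t repo/name:${BUILD_NUMBER} .'
                sh 'docker push repo/name:${BUILD_NUMBER}'
            }
        }
    }
}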

Dockerfile and Docker run -w / workdir

Let's take the sample Python Dockerfile as an example.
FROM python:3
WORKDIR /project
COPY . /project
and then the run command to run the tests within that container:
docker run --rm -v $(pwd):/project -w /project mydocker:1.0 pytest tests/
We declare WORKDIR both in the Dockerfile and in the run command.
Am I right in saying:
The WORKDIR in the Dockerfile is the directory in which the subsequent commands in the Dockerfile are run, but it has no impact when we run the docker run command?
Instead we need to pass in -w /project to have pytest run in the /project directory, or rather for pytest to look for the tests directory in /project?
My setup.cfg
[tool:pytest]
addopts =
--junitxml=results/unit-tests.xml
In the example you give, you shouldn't need either the -v or -w option.
Various options in the Dockerfile give defaults that can be overridden at docker run time. CMD in the Dockerfile, for example, will be overridden by anything in a docker run command after the image name. (...and it's better to specify it in the Dockerfile than to have to manually specify it on every docker run invocation.)
Specifically to your question, WORKDIR affects the current directory for subsequent RUN and COPY commands, but it also specifies the default directory when the container runs; if you don't have a docker run -w option it will use that WORKDIR. Specifying -w to the same directory as the final image WORKDIR won't have an effect.
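For instance, a quick way to confirm the default directory (assuming the image from the question is tagged mydocker:1.0):
docker run --rm mydocker:1.0 pwd
# prints /project, the WORKDIR baked into the image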
You also COPY the code into your image in the Dockerfile (which is good). You don't need a docker run -v option to overwrite that code at run time.
More specifically looking at pytest, it won't usually write things out to the filesystem. If you are using functionality like JUnit XML or code coverage reports, you can set it to write those out somewhere other than your application directory:
docker run --rm \
-v $PWD/reports:/reports \
mydocker:1.0 \
pytest --cov=myapp --cov-report=html:/reports/coverage.html tests

Dockerfile COPY and RUN in one layer

I have a script used in the preparation of a Docker image. I have this in the Dockerfile:
COPY my_script /
RUN bash -c "/my_script"
The my_script file contains secrets that I don't want in the image (it deletes itself when it finishes).
The problem is that the file remains in the image despite being deleted because the COPY is a separate layer. What I need is for both COPY and RUN to affect the same layer.
How can I COPY and RUN a script so that both actions affect the same layer?
Take a look at multi-stage builds:
Use multi-stage builds
With multi-stage builds, you use multiple FROM statements in your
Dockerfile. Each FROM instruction can use a different base, and each
of them begins a new stage of the build. You can selectively copy
artifacts from one stage to another, leaving behind everything you
don’t want in the final image. To show how this works, let’s adapt the
Dockerfile from the previous section to use multi-stage builds.
Dockerfile:
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
As of Docker 18.09 you can use docker build --secret to use secret information during the build process. The secrets are mounted into the build environment and aren't stored in the final image.
RUN --mount=type=secret,id=script,dst=/my_script \
bash -c /my_script
$ DOCKER_BUILDKIT=1 docker build --secret id=script,src=my_script.sh .
The script wouldn't need to delete itself.
This can be handled by BuildKit:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=bind,target=/my_script,source=my_script,rw \
bash -c "/my_script"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image .
This also sounds like you are trying to inject secrets into the build, e.g. to pull from a private git repo. BuildKit also allows you to specify:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=secret,target=/creds,id=cred \
bash -c "/my_script -i /creds"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image --secret id=creds,src=./creds .
With both of the BuildKit options, the mount command never actually adds the file to your image. It only makes the file available as a bind mount during that single RUN step. As long as that RUN step does not output the secret to another file in your image, the secret is never injected in the image.
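If you want to sanity-check this, the mounted path simply doesn't exist in the finished image (my_image is the tag from the build commands above):
docker run --rm my_image ls -l /my_script
# fails: /my_script was only mounted during that single RUN step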
For more on the BuildKit experimental syntax, see: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
I guess you can use a workaround to do this:
Put my_script on a local HTTP server, for example using python -m SimpleHTTPServer; the file can then be accessed at http://http_server_ip:8000/my_script
Then, in the Dockerfile, use:
RUN curl http://http_server_ip:8000/my_script > /my_script && chmod +x /my_script && bash -c "/my_script"
This workaround ensures the file is added and deleted in the same layer; of course, you may need to install curl in the Dockerfile.
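For instance, on a Debian-based image the curl install can be folded into the same RUN so everything stays in one layer (http_server_ip is the placeholder from above):
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && curl http://http_server_ip:8000/my_script > /my_script \
 && chmod +x /my_script \
 && bash -c "/my_script"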
I think RUN --mount=type=bind,source=my_script,target=/my_script bash /my_script in BuildKit can solve your problem.
First, prepare BuildKit
export DOCKER_CLI_EXPERIMENTAL=enabled
export DOCKER_BUILDKIT=1
docker buildx create --name mybuilder --driver docker-container
docker buildx use mybuilder
Then, write your Dockerfile.
# syntax = docker/dockerfile:experimental
FROM debian
## something
RUN --mount=type=bind,source=my_script,target=/my_script bash -c /my_script
The first line must be # syntax = docker/dockerfile:experimental because it's an experimental feature.
This method does not work in Play with Docker, but it works on my computer...
My computer runs Ubuntu 20.04 with Docker 19.03.12.
Then, build it with
docker buildx build --platform linux/amd64 -t user/imgname -f ./Dockerfile . --push

docker -v no more needed? and Dockerfile

I've read tutorials about using Docker:
docker run -it -p 9001:3000 -v $(pwd):/app simple-node-docker
but if I use:
docker run -it -p 9001:3000 simple-node-docker
it works too? Is -v no longer needed? Or is it taking the WORKDIR line from the Dockerfile?
FROM node:9-slim
# WORKDIR specifies the directory our
# application's code will live within
WORKDIR /app
Some tutorials use mkdir ./app in the Dockerfile, others don't; is WORKDIR enough for Docker to create the folder automatically if it does not exist?
There are two common ways to get application content into a Docker container. Many Node tutorials I've seen confusingly do both of them. You don't need docker run -v, provided you docker build your container when you make changes.
The first way is to copy a static copy of the application into the image. You'd do this via a Dockerfile, typically looking something like this:
FROM node
WORKDIR /app
# Install only dependencies now, to make rebuilds faster
COPY package.json yarn.lock ./
RUN yarn install
# NB: node_modules is in .dockerignore so this doesn't overwrite
# the previous step
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
The resulting Docker image is self-contained: if you have just the image (maybe you docker pulled it from a repository) you can run it, as you note, without any special -v option. This path has the downside that you need to re-run docker build to recreate the image if you've made any changes.
The second way is to use docker run -v to inject the current source directory into the container. For example:
# --rm: clean up after we're done
# -p:  publish a port
# -v:  mount the current directory over /app
# -w:  set the default working directory
docker run \
  --rm \
  -p 3000:3000 \
  -v "$PWD:/app" \
  -w /app \
  node \
  yarn start
This path hides everything in the /app directory in the image and replaces it in the container with whatever you have in your current directory. This requires you to have a built functional copy of the application's source tree available, and so it supports things like live reloading; helpful for development, not what you want for Docker in production.
Like I say, I've seen a lot of tutorials do both things:
# First build an image, populating /app in that image
docker build -t myimage .
# Now run it, hiding whatever was in /app
docker run --rm -p3000:3000 -v$PWD:/app myimage
You don't need the -v option, but you do need to manually rebuild things if your application changes.
$EDITOR src/file.js
yarn test
sudo docker build -t myimage .
sudo docker run --rm -p3000:3000 myimage
As I note here the docker commands require root-equivalent permission; but on the flip side the final docker run command is very close to what you'd run "for real" (maybe via Docker Compose or Kubernetes, but without requiring a copy of the application source).

Docker file: I want to invoke one script from docker file

I am creating a Docker image named soaphonda.
The Dockerfile is below:
FROM centos:7
FROM python:2.7
FROM java:openjdk-7-jdk
MAINTAINER Daniel Davison <sircapsalot@gmail.com>
# Version
ENV SOAPUI_VERSION 5.3.0
COPY entry_point.sh /opt/bin/entry_point.sh
COPY server.py /opt/bin/server.py
COPY server_index.html /opt/bin/server_index.html
COPY SoapUI-5.3.0.tar.gz /opt/SoapUI-5.3.0.tar.gz
COPY exit.sh /opt/bin/exit.sh
RUN chmod +x /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/server.py
# Download and unarchive SoapUI
RUN mkdir -p /opt
WORKDIR /opt
RUN tar -xvf SoapUI-5.3.0.tar.gz .
# Set working directory
WORKDIR /opt/bin
# Set environment
ENV PATH ${PATH}:/opt/SoapUI-5.3.0/bin
EXPOSE 3000
RUN chmod +x /opt/SoapUI-5.3.0/bin/mockservicerunner.sh
CMD ["/opt/bin/entry_point.sh","exit","pwd", "sh", "/Users/ankitsrivastava/Documents/SametimeFileTransfers/Honda/files/hondascript.sh"]
The image builds successfully. I want it so that once the image is built, it is retagged and pushed to Docker Hub. For that I have created the script below:
docker tag soaphonda ankiksri/soaphonda
docker push ankiksri/soaphonda
docker login
docker run -d -p 8089:8089 --name demo ankiksri/soaphonda
containerid=`docker ps -aqf "name=demo"`
echo $containerid
docker exec -it $containerid bash -c 'cd ../SoapUI-5.3.0;sh /opt/SoapUI-5.3.0/bin/mockservicerunner.sh "/opt/SoapUI-5.3.0/Honda-soapui-project.xml"'
Please help me with how I can call the second script from the Dockerfile; also, the exit command is not working in the Dockerfile.
What you have to understand here is that what you specify within the Dockerfile are the commands that get executed when you build the image and when you run a container from it.
So tagging, pushing, and running the Docker image must all be done after you have built the image from the Dockerfile. They cannot be done within the Dockerfile itself.
To achieve this kind of thing you would have to use a build tool like Maven (as one example) or a plain script to automate tagging and pushing the image, as sketched below. Also, looking at your image, I don't see any necessity to keep tagging and pushing it unless you are continuously updating it. And there is no point in using three FROM commands here: only the final FROM defines the base of the resulting image, so the centos and python stages are simply discarded.
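For example, a sketch of that automation as a shell script, using the names from the question (note that docker login has to happen before docker push):
#!/bin/sh
set -e
# build first, then tag, log in, push, and run
docker build -t soaphonda .
docker tag soaphonda ankiksri/soaphonda
docker login
docker push ankiksri/soaphonda
docker run -d -p 8089:8089 --name demo ankiksri/soaphonda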
