I want to create a Docker image using either the git sources or the already built app. I created two Dockerfiles like these (note: this is pseudo code):
Runtime-Image:
FROM <baseimage>
EXPOSE 1234/tcp
EXPOSE 4321/tcp
VOLUME /foobar
COPY myapp.tgz .
RUN tar -xzf myapp.tgz && rm -f myapp.tgz
ENTRYPOINT ["myapp"]
myapp.tgz is created on a buildserver or maybe by compiling manually. It is available on the docker host server locally.
To build directly from source I use:
FROM <devimage> AS buildenv
ARG GIT_USER
ARG GIT_PASSWORD
RUN git clone http://${GIT_USER}:${GIT_PASSWORD}@<my.git.host>
RUN ./makefile && cp /source/build/myapp.tgz /drop/myapp.tgz
FROM <baseimage> AS runenv
EXPOSE 1234/tcp
EXPOSE 4321/tcp
VOLUME /foobar
COPY --from=buildenv /drop/myapp.tgz .
RUN tar -xzf myapp.tgz && rm -f myapp.tgz
ENTRYPOINT ["myapp"]
The instructions in the second build stage of this are obviously a duplicate of the Runtime-Image Dockerfile.
I'd like to have just ONE Dockerfile, which can build from source, or from context on the docker host, as required. I could put the duplicated commands in a custom baseimage and reuse that to build onto (FROM), but this would obfuscate the Dockerfile.
What is the recommended, most elegant way to do this?
I can't use a bind mount to get myapp.tgz into the build from the current directory on the Docker host, can I? For that I would have to start a container to build my app?
There is no IF directive in the Dockerfile for conditionals?
If there is no myapp.tgz on the docker host, COPY myapp.tgz . will fail
If there is no buildenv, COPY --from=buildenv /drop/myapp.tgz . will fail.
I could use COPY ./* . and then check with
[ -f /myapp.tgz ] && <prepare-container> || <build-from-git-source>
I guess? Or would you rather just create a separate Dockerfile just for building from source and then use something like
docker run --rm -v /SomewhereOnHost/drop:/drop my-compile-image
For the past 2 days I have been trying to figure this out; now I have a good solution to achieve a conditional build (an if in a Dockerfile):
ARG mode=local
FROM alpine as build_local
ONBUILD COPY myapp.tgz .
FROM alpine as build_remote
ONBUILD RUN git clone GIT_URL
ONBUILD RUN cd repo && ./makefile && cp /source/build/myapp.tgz .
FROM build_${mode} AS runenv
EXPOSE 1234/tcp
EXPOSE 4321/tcp
VOLUME /foobar
RUN tar -xzf myapp.tgz && rm -f myapp.tgz
ENTRYPOINT ["myapp"]
The top-level mode ARG allows you to pass the condition with docker build --build-arg mode=remote . (note the trailing build context). ONBUILD is used so the commands are only executed if the corresponding branch is selected.
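For example (the image tag myapp is just a placeholder; the default mode=local expects myapp.tgz in the build context):
# build from the pre-built archive in the build context (default)
docker build -t myapp .
# build from git source instead
docker build --build-arg mode=remote -t myapp .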
I just started learning Docker. To teach myself, I managed to containerize bandit (a Python code scanner), but I'm not able to see the output of the scan before the container destroys itself. How can I copy the output file from inside the container to the host, or otherwise save it?
Right now I'm just using bandit to scan itself, basically :)
Dockerfile
FROM python:3-alpine
WORKDIR /
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git ./code-to-scan
CMD [ "python -m bandit -r ./code-to-scan -o bandit.txt" ]
You can mount a volume from your host where you can share the output of bandit.
For example, you can run your container with:
docker run -v $(pwd)/output:/tmp/output -t your_awesome_container:latest
And in your Dockerfile:
...
CMD [ "python -m bandit -r ./code-to-scan -o /tmp/bandit.txt" ]
This way the bandit.txt file will be found in the output folder.
It's better to place the code in your image somewhere other than the root directory.
I did some adjustments to your Dockerfile.
FROM python:3-alpine
WORKDIR /usr/myapp
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git .
CMD [ "bandit","-r",".","-o","bandit.txt" ]`
This clones git in your WORKDIR.
Note the CMD: it is an array, so just divide all commands and args as in the Dockerfile above.
I put the Dockerfile in my D:\test directory (Windows).
docker build -t test .
docker run -v D:/test/:/usr/myapp test
It will generate bandit.txt in the test folder.
After the code is executed the container exits, as there is nothing else to do.
You can also pass --rm to remove the container once it finishes.
docker run --rm -v D:/test/:/usr/myapp test
I have the below Dockerfile:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, if I try to run the image with the below command, I still cannot see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container, however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
-->Dockerfile
-->package.json
-->sum.js
-->sum.test.js
-->Test
-->--->updated_sum.test.js
-->Scripts
-->--->updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
The sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
ls
docker run --rm sum \
node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js
# not found
Instead you need to rebuild the image, using docker build --build-arg options to provide the values:
docker build \
--build-arg JS_FILE=./product.js \
--build-arg JS_TEST_FILE=./product.test.js \
-t product \
.
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
Another option is to COPY the entire source tree into the image (Javascript files are pretty small compared to a complete Node installation) and use a docker run command to pick which script to run.
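For reference, a minimal sketch of that second option, assuming the whole project directory is the build context (the image tag sums is arbitrary):
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm install
# copy the entire source tree, including Scripts/ and Test/
COPY . .
CMD ["node", "./sum.js"]
You can then pick the script at run time:
docker build -t sums .
docker run --rm sums node ./Scripts/updated_sum.js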
I have a GitHub Actions workflow which uses rust-cross to perform cross-compilation for arm64 and other hardware platforms.
I already perform cross-compilation on the host machine and wish to just copy the resulting binaries and libraries into the image, to create a light Alpine container.
Caveat
In rust-cross the released binaries are under specific directories, for example:
arm64 -> target/aarch64-unknown-linux-gnu/release/
amd64 -> target/x86_64-unknown-linux-musl/release/
armv7 -> target/armv7-unknown-linux-gnueabihf/release/
Trials
I am trying to use a case statement within my Dockerfile, relying on the TARGETPLATFORM build argument that docker buildx (BuildKit) provides, based on the well-documented BretFisher/multi-platform-docker repository:
FROM alpine as base
FROM --platform=${BUILDPLATFORM} alpine as tiny-project
# Use BuildKit to help translate architecture names
ARG TARGETPLATFORM
RUN case ${TARGETPLATFORM} in \
"linux/amd64") TARGET_DIR=x86_64-unknown-linux-musl ;; \
"linux/arm64") TARGET_DIR=aarch64-unknown-linux-gnu ;; \
*) exit 1 \ # ignore other architectures for now!
esac \
WORKDIR /app
RUN cp target/<HOW TO PASS VALUE TARGET_DIR>/release/myBinary .
RUN cp target/<HOW TO PASS VALUE TARGET_DIR>/release/*.so .
FROM base as release
COPY --from=tiny-project /app/* ./
RUN echo '#!/bin/ash' > /entrypoint.sh
RUN echo 'echo " * Starting: /myBinary $*"' >> /entrypoint.sh
RUN echo 'exec /myBinary $*' >> /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 7447/udp
EXPOSE 7447/tcp
EXPOSE 8000/tcp
ENV RUST_LOG info
ENTRYPOINT ["/entrypoint.sh"]
I have tried doing a lot of variations but it seems like TARGET_DIR is not being recognized on the host machine
RUN cp ./target/$(echo $TARGET_DIR)/release/myBinary /
RUN cp ./target/$(echo $TARGET_DIR)/release/*.so /
# as well as storing the value in a file and calling it
# echo aarch64-unknown-linux-gnu > /tmp/rust_target.txt
RUN cp ./target/$(cat /tmp/rust_target.txt)/release/zenohd /
RUN cp ./target/$(cat /tmp/rust_target.txt)/release/*.so /
But it seems like neither the file nor the variable is available, and I keep getting an error in my GitHub Actions workflow log.
Requirements
I wish to keep a single Dockerfile, and based on the platform passed to the docker buildx build command, copy the binaries from the appropriate source directories to the destination directory in the image.
How does one achieve this?
Each RUN command runs in its own shell (and its own container), so you can't set variables in one RUN command that last beyond that Dockerfile line.
However, each RUN line is also implicitly wrapped in sh -c, so you can use ordinary shell constructs to run multiple commands in a single RUN instruction. As long as you haven't left that single Dockerfile line, the shell variable you set is still valid:
WORKDIR /app
# All in a single RUN line:
RUN case "${TARGETPLATFORM}" in \
"linux/amd64") TARGET_DIR=x86_64-unknown-linux-musl ;; \
"linux/arm64") TARGET_DIR=aarch64-unknown-linux-gnu ;; \
*) exit 1 ;; \
esac; \
cp target/$TARGET_DIR/release/myBinary .; \
cp target/$TARGET_DIR/release/*.so .
It would also be reasonable to put this logic into a shell script that you COPY and RUN, or to create a staging directory on the host that contains the files you want to include in the layout you want and use that directory as the docker build context directory.
rm -rf docker-build
mkdir docker-build
TARGET_DIR=...
cp "target/$TARGET_DIR/release/myBinary" docker-build/myBinary
cp Dockerfile docker-build
docker build ./docker-build
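For the first of those options (a shell script that you COPY and RUN), a sketch, assuming the script is named select-target.sh and the target/ tree has already been copied into the build stage:
#!/bin/sh
# select-target.sh: copy the per-platform build artifacts into place
set -eu
case "$TARGETPLATFORM" in
  "linux/amd64") TARGET_DIR=x86_64-unknown-linux-musl ;;
  "linux/arm64") TARGET_DIR=aarch64-unknown-linux-gnu ;;
  *) echo "unsupported platform: $TARGETPLATFORM" >&2; exit 1 ;;
esac
cp "target/$TARGET_DIR/release/myBinary" .
cp "target/$TARGET_DIR/release/"*.so .
And in the Dockerfile (ARG values are exposed as environment variables to RUN steps in the same stage):
ARG TARGETPLATFORM
WORKDIR /app
COPY select-target.sh /usr/local/bin/select-target.sh
RUN sh /usr/local/bin/select-target.sh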
I have a script used in the preparation of a Docker image. I have this in the Dockerfile:
COPY my_script /
RUN bash -c "/my_script"
The my_script file contains secrets that I don't want in the image (it deletes itself when it finishes).
The problem is that the file remains in the image despite being deleted because the COPY is a separate layer. What I need is for both COPY and RUN to affect the same layer.
How can I COPY and RUN a script so that both actions affect the same layer?
Take a look at multi-stage builds:
Use multi-stage builds
With multi-stage builds, you use multiple FROM statements in your
Dockerfile. Each FROM instruction can use a different base, and each
of them begins a new stage of the build. You can selectively copy
artifacts from one stage to another, leaving behind everything you
don’t want in the final image. To show how this works, let’s adapt the
Dockerfile from the previous section to use multi-stage builds.
Dockerfile:
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
As of 18.09 you can use docker build --secret to use secret information during the build process. The secrets are mounted into the build environment and aren't stored in the final image.
RUN --mount=type=secret,id=script,dst=/my_script \
bash -c /my_script
$ docker build --secret id=script,src=my_script.sh .
The script wouldn't need to delete itself.
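A fuller sketch of that approach (the syntax directive and DOCKER_BUILDKIT variable are needed because secret mounts are a BuildKit feature; image and file names are placeholders):
# syntax=docker/dockerfile:experimental
FROM alpine
# /my_script is only visible during this RUN step and is never stored in a layer
RUN --mount=type=secret,id=script,dst=/my_script \
    sh /my_script
Built with:
DOCKER_BUILDKIT=1 docker build --secret id=script,src=my_script.sh -t my_image .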
This can be handled by BuildKit:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=bind,target=/my_script,source=my_script,rw \
bash -c "/my_script"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image .
This also sounds like you are trying to inject secrets into the build, e.g. to pull from a private git repo. BuildKit also allows you to specify:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=secret,target=/creds,id=cred \
bash -c "/my_script -i /creds"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image --secret id=creds,src=./creds .
With both of the BuildKit options, the mount command never actually adds the file to your image. It only makes the file available as a bind mount during that single RUN step. As long as that RUN step does not output the secret to another file in your image, the secret is never injected in the image.
For more on the BuildKit experimental syntax, see: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
I guess you can use a workaround to do this:
Serve my_script from a local HTTP server, for example using python -m SimpleHTTPServer; the file can then be accessed at http://http_server_ip:8000/my_script
Then, in Dockerfile use next:
RUN curl http://http_server_ip:8000/my_script > /my_script && chmod +x /my_script && bash -c "/my_script"
This workaround ensures the file is added and deleted in the same layer; of course, you may need to install curl in the Dockerfile first.
I think RUN --mount=type=bind,source=my_script,target=/my_script bash /my_script in BuildKit can solve your problem.
First, prepare BuildKit
export DOCKER_CLI_EXPERIMENTAL=enabled
export DOCKER_BUILDKIT=1
docker buildx create --name mybuilder --driver docker-container
docker buildx use mybuilder
Then, write your Dockerfile.
# syntax = docker/dockerfile:experimental
FROM debian
## something
RUN --mount=type=bind,source=my_script,target=/my_script bash -c /my_script
The first line must be # syntax = docker/dockerfile:experimental because it's an experimental feature.
This method does not work in Play with Docker, but it works on my computer...
My computer is Ubuntu 20.04 with Docker 19.03.12.
Then, build it with
docker buildx build --platform linux/amd64 -t user/imgname -f ./Dockerfile . --push
I've got a repo set up like this:
/config
config.json
/worker-a
Dockerfile
<symlink to config.json>
/code
/worker-b
Dockerfile
<symlink to config.json>
/code
However, building the images fails, because Docker can't handle the symlinks. I should mention my project is far more complicated than this, so restructuring directories isn't a great option. How do I deal with this situation?
Docker doesn't support symlinking files outside the build context.
Here are some different methods for using a shared file in a container:
Build Time
Copy from a config image (Docker buildkit)
Recent versions of Docker allow RUN steps to bind mount from a named image or previous build stage with the --mount=type=bind,target=/dir,source=/dir,from=image-or-stage-name option.
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
FROM scratch
COPY config.json /config.json
Build and tag the config image me/worker-config
docker build -t me/worker-config:latest .
Mount the me/worker-config image during the real build
RUN --mount=type=bind,target=/worker-config,source=/,from=me/worker-config:latest \
cp /worker-config/config.json /app/config.json;
Share a base image
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
FROM <common-base-image>
COPY config.json /config.json
Build and tag the image me/worker-config
docker build -t me/worker-config:latest .
Source the base me/worker-config image for all your worker Dockerfiles
FROM me/worker-config:latest
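A worker Dockerfile might then look like this minimal sketch (the code path comes from the repo layout in the question; the run command is a placeholder):
# worker-a/Dockerfile
FROM me/worker-config:latest
# config.json is already present in the base image
COPY code /app/code
CMD ["/app/code/run-worker-a"]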
Build script
Use a script to push the common config to each of your worker containers.
./build worker-n
#!/bin/sh
set -uex
rundir=$(readlink -f "${0%/*}")
container="$1"; shift
cd "$rundir/$container"
cp ../config/config.json ./config-docker.json
docker build "$#" .
Build from URL
Pull the config from a common URL for all worker-n builds.
ADD http://somehost/config.json /
Increase the scope of the image build context
Include the symlink target files in the build context by building from a parent directory that includes both the shared files and specific container files.
cd ..
docker build -f worker-a/Dockerfile .
All the source paths you reference in a Dockerfile must also change to match the new build context:
COPY workerathing /app
becomes
COPY worker-a/workerathing /app
Using this method can make every build context large, since they all share the same parent directory as their context. It can slow down builds, especially to remote Docker build servers. Note that only the .dockerignore file from the base of the build context is referenced.
Alternate build that can mount volumes
Other projects that strive for Dockerfile compatibility may support volumes at build time. For example, podman build / buildah support a --volume option to bind mount files from the host into a build container.
podman build --volume /project/config:/worker-config:ro,Z -t me/worker-a .
Then the build can reference the mounted volume
RUN cp /worker-config/config.json /app
Run time
Mount a config directory from a named volume
Volumes like this only work as directories, so you can't specify a file like you could when mounting a file from the host to container.
docker volume create --name=worker-cfg-vol
docker run -v worker-cfg-vol:/config worker-config cp config.json /config
docker run -v worker-cfg-vol:/config worker-a
Mount config directory from data container
Again, directories only, as it's basically the same as above. This will automatically copy files from the image's destination directory into the newly created shared volume, though.
docker create --name wcc -v /config worker-config /bin/true
docker run --volumes-from wcc worker-a
Mount config file from host at runtime
docker run -v /app/config/config.json:/config.json worker-a
Node.js-specific solution
I also ran into this problem, and would like to share another method that hasn't been mentioned above. Instead of using npm link in my Dockerfile, I used yalc.
Install yalc in your container, e.g. RUN npm i -g yalc.
Build your library in Docker, and run yalc publish (add the --private flag if your shared lib is private). This will 'publish' your library locally.
Run yalc add my-lib in each repo that would normally use npm link before running npm install. It will create a local .yalc folder in your Docker container, create a symlink in node_modules that works inside Docker to this folder, and rewrite your package.json to refer to this folder too, so you can safely run install.
Optionally, if you do a two stage build, make sure that you also copy the .yalc folder to your final image.
Below is an example Dockerfile, assuming you have a monorepo with three packages: models, gui and server, where the models package must be shared and is named my-models.
# You can access the container using:
# docker run -it my-name sh
# To start it stand-alone:
# docker run -it -p 8888:3000 my-name
FROM node:alpine AS builder
# Install yalc globally (the apk add... line is only needed if your installation requires it)
RUN apk add --no-cache --virtual .gyp python make g++ && \
npm i -g yalc
RUN mkdir /packages && \
mkdir /packages/models && \
mkdir /packages/gui && \
mkdir /packages/server
COPY ./packages/models /packages/models
WORKDIR /packages/models
RUN npm install && \
npm run build && \
yalc publish --private
COPY ./packages/gui /packages/gui
WORKDIR /packages/gui
RUN yalc add my-models && \
npm install && \
npm run build
COPY ./packages/server /packages/server
WORKDIR /packages/server
RUN yalc add my-models && \
npm install && \
npm run build
FROM node:alpine
RUN mkdir -p /app
COPY --from=builder /packages/server/package.json /app/package.json
COPY --from=builder /packages/server/dist /app/dist
# Make sure you copy the yalc registry too.
COPY --from=builder /packages/server/.yalc /app/.yalc
COPY --from=builder /packages/server/node_modules /app/node_modules
COPY --from=builder /packages/gui/dist /app/dist/public
WORKDIR /app
EXPOSE 3000
CMD ["node", "./dist/index.js"]
Hope that helps...
The docker build CLI command sends the specified directory (typically .) as the "build context" to the Docker Engine (daemon). Instead of specifying the build context as /worker-a, specify the build context as the root directory, and use the -f argument to specify the path to the Dockerfile in one of the child directories.
docker build -f worker-a/Dockerfile .
docker build -f worker-b/Dockerfile .
You'll have to rework your Dockerfiles slightly, to point them at config/config.json relative to the new build context (COPY can't reference paths above the context, so ../config/config.json won't work), but that is pretty trivial to fix.
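For example, worker-a's Dockerfile, built with the repository root as the context, might contain (the destination paths are just placeholders):
COPY config/config.json /app/config.json
COPY worker-a/code /app/code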
Also check out this question/answer, which I think addresses the exact same problem that you're experiencing.
How to include files outside of Docker's build context?
Hope this helps! Cheers
An alternative solution is to upgrade all your soft links into hard links.
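For example, to replace worker-a's symlink with a hard link (paths taken from the layout above; hard links require both files to be on the same filesystem):
cd worker-a
rm config.json
ln ../config/config.json config.json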