docker build with inline Dockerfile fails - docker

I am trying to build a docker image with an inline Dockerfile. It goes well until the COPY statement, where the build fails saying it cannot find the file. When I put the same statements in a Dockerfile and run the build, it works fine.
docker build -t casspy -<<EOF
FROM alpine:latest
RUN apk -v add python3 py-pip bash && \
pip install ldap3 cassandra-driver configargparse boto3
COPY script.py .
EOF
Step 3/3 : COPY script.py .
COPY failed: stat /var/lib/docker/tmp/docker-builder451609694/script.py: no such file or directory

You need to specify a build context for docker. Passing - alone sends only the Dockerfile on stdin, with no context, so there are no files available to COPY.
Omitting the build context can be useful in situations where your Dockerfile does not require files to be copied into the image, and improves the build-speed, as no files are sent to the daemon.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
docker build -t casspy -f- . <<EOF
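In full, the corrected build looks like this (a sketch reusing the Dockerfile from the question; the trailing . supplies the current directory, which must contain script.py, as the context, while -f - reads the Dockerfile from stdin):
docker build -t casspy -f- . <<EOF
FROM alpine:latest
RUN apk -v add python3 py-pip bash && \
pip install ldap3 cassandra-driver configargparse boto3
COPY script.py .
EOF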

Related

Copy file from dockerfile build to host - bandit

I just started learning docker. To teach myself, I managed to containerize bandit (a Python code scanner), but I'm not able to see the output of the scan before the container destroys itself. How can I copy the output file from inside the container to the host, or otherwise save it?
Right now I'm basically just using bandit to scan itself :)
Dockerfile
FROM python:3-alpine
WORKDIR /
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git ./code-to-scan
CMD [ "python -m bandit -r ./code-to-scan -o bandit.txt" ]
You can mount a volume on your host where you can share the output of bandit.
For example, you can run your container with:
docker run -v $(pwd)/output:/tmp/output -t your_awesome_container:latest
And in your Dockerfile:
...
CMD [ "python -m bandit -r ./code-to-scan -o /tmp/bandit.txt" ]
This way the bandit.txt file will be found in the output folder.
Better to place the code in your image somewhere other than the root directory.
I did some adjustments to your Dockerfile.
FROM python:3-alpine
WORKDIR /usr/myapp
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git .
CMD [ "bandit","-r",".","-o","bandit.txt" ]`
This clones the repo into your WORKDIR.
Note the CMD: it is an array, so just divide the command and its args as in the Dockerfile above.
I put the Dockerfile in my D:\test directory (Windows).
docker build -t test .
docker run -v D:/test/:/usr/myapp test
It will generate the bandit.txt file in the test folder.
After the code is executed the container exits, as there is nothing else to do.
You can also add --rm to remove the container once it finishes.
docker run --rm -v D:/test/:/usr/myapp test
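On Linux or macOS the equivalent would be (a sketch, assuming the same image tag):
docker build -t test .
docker run --rm -v "$(pwd)":/usr/myapp test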

failed to compute cache key: "/requirements.txt" not found: not found when build script is in child directory

I am trying to build my Dockerfile from a child folder context.
This is my build script, build.sh:
#!/bin/bash -ex
docker build -t app:latest -f ../Dockerfile .
This is my Dockerfile
FROM app:latest
WORKDIR /app
# This path must exist as it is used as a mount point for testing
# Ensure your app is loading files from this location
FROM ubuntu:20.04
RUN apt update
RUN apt install -y python3-pip
# Install Dependencies
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
This is my requirements.txt
Flask
pandas
requests
gunicorn
pytest
When I attempt to run build.sh from within the scripts folder, I get this error:
#8 ERROR: "/requirements.txt" not found: not found
------
> [stage-1 4/7] COPY requirements.txt .:
------
failed to compute cache key: "/requirements.txt" not found: not found
When I run the docker build command directly from the command line in the app directory, I do this:
docker build -t app:latest -f Dockerfile .
This works; however, going into the child directory and attempting to build via the bash script fails with the requirements.txt cache-key error.
How do I successfully build my Docker image from the child folder?
The docker build command takes a path to a context directory
docker build [... options ...] .
# ^ this path
When the Dockerfile says COPY requirements.txt ., the source path is always resolved relative to that context directory at the end of the docker build command. It doesn't matter where the Dockerfile itself is physically located.
If you want to build an image whose Dockerfile is in a parent directory, pass that parent directory as the context path. If the Dockerfile is named Dockerfile and is in the root of the context directory (the standard, recommended location), you do not need the docker build -f option at all.
cd $HOME/testing-docker
docker build -t app .
cd $HOME/testing-docker/scripts
docker build -t app ..
# ^^ build the parent directory
# but no -f option
# the Dockerfile is in the default place
# Any way to specify the path will work
cd
docker build -t app $HOME/testing-docker
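Applied to the question, build.sh could therefore be fixed like this (a sketch, assuming the script stays in the scripts subdirectory and the Dockerfile sits in the parent directory next to requirements.txt):
#!/bin/bash -ex
# Build with the parent directory as the context; the Dockerfile is found there by default
docker build -t app:latest ..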

Docker: Copy file out of container while building it

I build the following image with docker build -t mylambda .
I am now trying to export lambdatest.zip to my local machine while building, so that the .zip file ends up on my Desktop. So far I have used docker cp <Container ID>:/var/task/lambdatest.zip ~/Desktop, but that doesn't work from inside a Dockerfile (?). Do you have any ideas?
FROM lambci/lambda:build-python3.7
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *
Dockerfile (updated):
FROM lambci/lambda:build-python3.7
RUN python3 -m venv venv
RUN . venv/bin/activate
RUN pip install --upgrade pip
RUN pip install pystan==2.18
RUN pip install fbprophet
WORKDIR /var/task/venv/lib/python3.7/site-packages
COPY lambda_function.py .
COPY .lambdaignore .
RUN echo "Package size: $(du -sh | cut -f1)"
RUN zip -9qr lambdatest.zip *
RUN cat .lambdaignore | xargs zip -9qr /var/task/lambdatest.zip * -x
The typical answer is you do not. A Dockerfile does not have access to write files out to the host, by design, just as it does not have access to read arbitrary files from outside of the build context. There are various reasons for that, including security (you don't want an image build dropping a backdoor on a build host in the cloud) and reproducibility (images should not have dependencies outside of their context).
As a result, you need to take an extra step to extract the contents of an image back to the host. Typically this involves creating a container and running a docker cp command, along the lines of the following:
docker build -t your_image .
docker create --name extract your_image
docker cp extract:/path/to/files /path/on/host
docker rm extract
Or it can involve I/O pipes, where you run a tar command inside the container to package the files, and pipe that to a tar command running on the host to save the files.
docker build -t your_image .
docker run --rm your_image tar -cC /path/in/container . | tar -xC /path/on/host
Recently, Docker has been working on buildx, which is currently experimental. Using that, you can create a stage that consists of the files you want to export to the host and use the --output option to write that stage to the host rather than to an image. Your Dockerfile would then look like:
FROM lambci/lambda:build-python3.7 as build
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *
FROM scratch as artifact
COPY --from=build /var/task/lambdatest.zip /lambdatest.zip
FROM build as release
And then the build command to extract the zip file would look like:
docker buildx build --target=artifact --output type=local,dest=$(pwd)/out/ .
I believe buildx is still marked as experimental in the latest release, so to enable that, you need at least the following json entry in $HOME/.docker/config.json:
{ "experimental": "enabled" }
And then for all the buildx features, you will want to create a non-default builder with docker buildx create.
With recent versions of the docker CLI, integration with BuildKit has exposed more options, and it is no longer necessary to run buildx to get access to the --output flag. That means the above changes to:
docker build --target=artifact --output type=local,dest=$(pwd)/out/ .
If buildkit hasn't been enabled on your version (should be on by default in 20.10), you can enable it in your shell with:
export DOCKER_BUILDKIT=1
or for the entire host, you can make it the default with the following in /etc/docker/daemon.json:
{
  "features": { "buildkit": true }
}
And for the daemon.json change to take effect, the docker engine needs to be reloaded:
systemctl reload docker
Since version 18.09, docker natively supports an alternative build backend called BuildKit:
DOCKER_BUILDKIT=1 docker build -o target/folder myimage
This allows you to copy your latest stage to target/folder. If you want only specific files and not an entire filesystem, you can add a stage to your build:
FROM XXX as builder-stage
# Your existing dockerfile stages
FROM scratch
COPY --from=builder-stage /file/to/export /
Note: You will need your docker client and engine to be compatible with Docker Engine API 1.40+, otherwise docker will not understand the -o flag.
Reference: https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs

Dockerfile COPY and RUN in one layer

I have a script used in the preparation of a Docker image. I have this in the Dockerfile:
COPY my_script /
RUN bash -c "/my_script"
The my_script file contains secrets that I don't want in the image (it deletes itself when it finishes).
The problem is that the file remains in the image despite being deleted because the COPY is a separate layer. What I need is for both COPY and RUN to affect the same layer.
How can I COPY and RUN a script so that both actions affect the same layer?
Take a look at multi-stage builds:
Use multi-stage builds
With multi-stage builds, you use multiple FROM statements in your
Dockerfile. Each FROM instruction can use a different base, and each
of them begins a new stage of the build. You can selectively copy
artifacts from one stage to another, leaving behind everything you
don’t want in the final image. To show how this works, let’s adapt the
Dockerfile from the previous section to use multi-stage builds.
Dockerfile:
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
As of 18.09 you can use docker build --secret to use secret information during the build process. The secrets are mounted into the build environment and aren't stored in the final image.
RUN --mount=type=secret,id=script,dst=/my_script \
bash -c /my_script
$ DOCKER_BUILDKIT=1 docker build --secret id=script,src=my_script.sh .
The script wouldn't need to delete itself.
This can be handled by BuildKit:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=bind,target=/my_script,source=my_script,rw \
bash -c "/my_script"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image .
This also sounds like you are trying to inject secrets into the build, e.g. to pull from a private git repo. BuildKit also allows you to specify:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=secret,target=/creds,id=cred \
bash -c "/my_script -i /creds"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image --secret id=creds,src=./creds .
With both of the BuildKit options, the mount command never actually adds the file to your image. It only makes the file available as a bind mount during that single RUN step. As long as that RUN step does not output the secret to another file in your image, the secret is never injected into the image.
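As an optional sanity check (assuming the my_image tag used above), you can confirm the script is absent from the final image:
docker run --rm my_image ls -l /my_script
# Expected: ls: cannot access '/my_script': No such file or directory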
For more on the BuildKit experimental syntax, see: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
I guess you can use a workaround to do this:
Put my_script on a local HTTP server, for example one started with python -m SimpleHTTPServer; the file can then be fetched from http://http_server_ip:8000/my_script
Then, in the Dockerfile, use:
RUN curl http://http_server_ip:8000/my_script > /my_script && chmod +x /my_script && bash -c "/my_script"
This workaround ensures the file is added and deleted in the same layer. Of course, you may need to install curl in the Dockerfile first.
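For reference, a sketch of the server side (python3 -m http.server is the Python 3 equivalent of SimpleHTTPServer; the directory and port are assumptions):
cd /dir/containing/my_script
python3 -m http.server 8000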
I think RUN --mount=type=bind,source=my_script,target=/my_script bash /my_script in BuildKit can solve your problem.
First, prepare BuildKit
export DOCKER_CLI_EXPERIMENTAL=enabled
export DOCKER_BUILDKIT=1
docker buildx create --name mybuilder --driver docker-container
docker buildx use mybuilder
Then, write your Dockerfile.
# syntax = docker/dockerfile:experimental
FROM debian
## something
RUN --mount=type=bind,source=my_script,target=/my_script bash -c /my_script
The first line must be # syntax = docker/dockerfile:experimental because it's an experimental feature.
This method does not work in Play with Docker, but it works on my computer...
My computer runs Ubuntu 20.04 with Docker 19.03.12.
Then, build it with
docker buildx build --platform linux/amd64 -t user/imgname -f ./Dockerfile . --push

Docker and symlinks

I've got a repo set up like this:
/config
config.json
/worker-a
Dockerfile
<symlink to config.json>
/code
/worker-b
Dockerfile
<symlink to config.json>
/code
However, building the images fails, because Docker can't handle the symlinks. I should mention my project is far more complicated than this, so restructuring directories isn't a great option. How do I deal with this situation?
Docker doesn't support symlinking files outside the build context.
Here are some different methods for using a shared file in a container:
Build Time
Copy from a config image (Docker buildkit)
Recent versions of Docker allow RUN steps to bind mount from a named image or previous build stage with the --mount=type=bind,target=/dir,source=/dir,from=image-or-stage-name
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
FROM scratch
COPY config.json /config.json
Build and tag the config image me/worker-config
docker build -t me/worker-config:latest .
Mount the me/worker-config image during the real build
RUN --mount=type=bind,target=/worker-config,source=/,from=me/worker-config:latest \
cp /worker-config/config.json /app/config.json;
Share a base image
Create a Dockerfile for the base me/worker-config image that includes the shared config/files (the alpine base below is a placeholder; use whichever base suits your workers).
# Placeholder base; pick one that suits your workers
FROM alpine:latest
COPY config.json /config.json
Build and tag the image me/worker-config
docker build -t me/worker-config:latest .
Source the base me/worker-config image for all your worker Dockerfiles
FROM me/worker-config:latest
Build script
Use a script to push the common config to each of your worker containers.
./build worker-n
#!/bin/sh
set -uex
rundir=$(readlink -f "${0%/*}")
# Take the container name as the first argument; remaining args pass through to docker build
container="$1"; shift
cd "$rundir/$container"
cp ../config/config.json ./config-docker.json
docker build "$@" .
Build from URL
Pull the config from a common URL for all worker-n builds.
ADD http://somehost/config.json /
Increase the scope of the image build context
Include the symlink target files in the build context by building from a parent directory that includes both the shared files and specific container files.
cd ..
docker build -f worker-a/Dockerfile .
All the source paths you reference in a Dockerfile must also change to match the new build context:
COPY workerathing /app
becomes
COPY worker-a/workerathing /app
Using this method makes every build context large, as they all share the same parent directory; that can slow down builds, especially to remote Docker build servers. Also note that only the .dockerignore file at the root of the build context is used.
Alternate build that can mount volumes
Other projects that strive for Dockerfile compatibility may support volumes at build time. For example, podman build / buildah supports a --volume option to bind mount files from the host into a build container.
podman build --volume /project/config:/worker-config:ro,Z -t me/worker-a .
Then the build can reference the mounted volume during a RUN step (the mount is only visible to RUN; COPY reads from the build context):
RUN cp /worker-config/config.json /app
Run time
Mount a config directory from a named volume
Volumes like this only work as directories, so you can't mount a single file the way you can when mounting a file from the host into a container.
docker volume create --name=worker-cfg-vol
docker run -v worker-cfg-vol:/config worker-config cp config.json /config
docker run -v worker-cfg-vol:/config worker-a
Mount config directory from data container
Again, directories only, as this is basically the same as above. This approach will, however, automatically copy files from the destination directory into the newly created shared volume.
docker create --name wcc -v /config worker-config /bin/true
docker run --volumes-from wcc worker-a
Mount config file from host at runtime
docker run -v /app/config/config.json:/config.json worker-a
Node.js-specific solution
I also ran into this problem, and would like to share another method that hasn't been mentioned above. Instead of using npm link in my Dockerfile, I used yalc.
Install yalc in your container, e.g. RUN npm i -g yalc.
Build your library in Docker, and run yalc publish (add the --private flag if your shared lib is private). This will 'publish' your library locally.
Run yalc add my-lib in each repo that would normally use npm link before running npm install. It will create a local .yalc folder in your Docker container, create a symlink in node_modules that works inside Docker to this folder, and rewrite your package.json to refer to this folder too, so you can safely run install.
Optionally, if you do a two stage build, make sure that you also copy the .yalc folder to your final image.
Below an example Dockerfile, assuming you have a mono repository with three packages: models, gui and server, and the models repository must be shared and named my-models.
# You can access the container using:
# docker run -it my-name sh
# To start it stand-alone:
# docker run -it -p 8888:3000 my-name
FROM node:alpine AS builder
# Install yalc globally (the apk add... line is only needed if your installation requires it)
RUN apk add --no-cache --virtual .gyp python make g++ && \
npm i -g yalc
RUN mkdir /packages && \
mkdir /packages/models && \
mkdir /packages/gui && \
mkdir /packages/server
COPY ./packages/models /packages/models
WORKDIR /packages/models
RUN npm install && \
npm run build && \
yalc publish --private
COPY ./packages/gui /packages/gui
WORKDIR /packages/gui
RUN yalc add my-models && \
npm install && \
npm run build
COPY ./packages/server /packages/server
WORKDIR /packages/server
RUN yalc add my-models && \
npm install && \
npm run build
FROM node:alpine
RUN mkdir -p /app
COPY --from=builder /packages/server/package.json /app/package.json
COPY --from=builder /packages/server/dist /app/dist
# Make sure you copy the yalc registry too.
COPY --from=builder /packages/server/.yalc /app/.yalc
COPY --from=builder /packages/server/node_modules /app/node_modules
COPY --from=builder /packages/gui/dist /app/dist/public
WORKDIR /app
EXPOSE 3000
CMD ["node", "./dist/index.js"]
Hope that helps...
The docker build CLI command sends the specified directory (typically .) as the "build context" to the Docker Engine (daemon). Instead of specifying the build context as /worker-a, specify the build context as the root directory, and use the -f argument to specify the path to the Dockerfile in one of the child directories.
docker build -f worker-a/Dockerfile .
docker build -f worker-b/Dockerfile .
You'll have to rework your Dockerfiles slightly to reference config/config.json relative to the new build context, but that is pretty trivial to fix.
Also check out this question/answer, which I think addresses the exact same problem that you're experiencing.
How to include files outside of Docker's build context?
Hope this helps! Cheers
An alternative solution is to upgrade all your soft links into hard links.
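For example, a sketch for worker-a, assuming config.json and the worker directories live on the same filesystem (hard links cannot span filesystems or point at directories):
# Replace the symlink with a hard link to the same file
rm worker-a/config.json
ln config/config.json worker-a/config.json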
