Docker: Copy file out of container while building it

I build the following image with docker build -t mylambda .
I now try to export lambdatest.zip to my local machine while building, so I see the .zip file on my Desktop. So far I have used docker cp <Container ID>:/var/task/lambdatest.zip ~/Desktop, but that doesn't work inside my Dockerfile. Do you have any ideas?
FROM lambci/lambda:build-python3.7
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *
Dockerfile (updated):
FROM lambci/lambda:build-python3.7
RUN python3 -m venv venv
RUN . venv/bin/activate
RUN pip install --upgrade pip
RUN pip install pystan==2.18
RUN pip install fbprophet
WORKDIR /var/task/venv/lib/python3.7/site-packages
COPY lambda_function.py .
COPY .lambdaignore .
RUN echo "Package size: $(du -sh | cut -f1)"
RUN zip -9qr lambdatest.zip *
RUN cat .lambdaignore | xargs zip -9qr /var/task/lambdatest.zip * -x

The typical answer is you do not. A Dockerfile does not have access to write files out to the host, by design, just as it does not have access to read arbitrary files from outside of the build context. There are various reasons for that, including security (you don't want an image build dropping a backdoor on a build host in the cloud) and reproducibility (images should not have dependencies outside of their context).
As a result, you need to take an extra step to extract the contents of an image back to the host. Typically this involves creating a container and running a docker cp command, along the lines of the following:
docker build -t your_image .
docker create --name extract your_image
docker cp extract:/path/to/files /path/on/host
docker rm extract
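Applied to the image from the question, that would look something like this (using the mylambda tag and the /var/task/lambdatest.zip path from the question):
docker build -t mylambda .
docker create --name extract mylambda
docker cp extract:/var/task/lambdatest.zip ~/Desktop/lambdatest.zip
docker rm extract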
Or it can involve I/O pipes, where you run a tar command inside the container to package the files, and pipe that to a tar command running on the host to save the files.
docker build -t your_image .
docker run --rm your_image tar -cC /path/in/container . | tar -xC /path/on/host
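For example, a sketch of the same pipe applied to the question's zip file (assuming the mylambda tag):
docker build -t mylambda .
docker run --rm mylambda tar -cC /var/task lambdatest.zip | tar -xC ~/Desktop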
Recently, Docker has been working on buildx, which is currently experimental. Using that, you can create a stage that consists of the files you want to export to the host and use the --output option to write that stage to the host rather than to an image. Your Dockerfile would then look like:
FROM lambci/lambda:build-python3.7 as build
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *
FROM scratch as artifact
COPY --from=build /var/task/lambdatest.zip /lambdatest.zip
FROM build as release
And then the build command to extract the zip file would look like:
docker buildx build --target=artifact --output type=local,dest=$(pwd)/out/ .
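If the build succeeds, the zip lands in the out/ directory on the host; a quick check:
ls out/
# lambdatest.zip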
I believe buildx is still marked as experimental in the latest release, so to enable that, you need at least the following json entry in $HOME/.docker/config.json:
{ "experimental": "enabled" }
And then for all the buildx features, you will want to create a non-default builder with docker buildx create.
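A minimal sketch of creating and selecting such a builder (the name mybuilder is arbitrary):
docker buildx create --name mybuilder --use
Subsequent docker buildx build invocations will then run against that builder.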
With recent versions of the docker CLI, integration with BuildKit has exposed more options, so it's no longer necessary to run buildx to get access to the output flag. That means the above changes to:
docker build --target=artifact --output type=local,dest=$(pwd)/out/ .
If buildkit hasn't been enabled on your version (should be on by default in 20.10), you can enable it in your shell with:
export DOCKER_BUILDKIT=1
or for the entire host, you can make it the default with the following in /etc/docker/daemon.json:
{
  "features": { "buildkit": true }
}
And to use the daemon.json the docker engine needs to be reloaded:
systemctl reload docker

Since version 18.09, Docker natively supports a custom build backend called BuildKit:
DOCKER_BUILDKIT=1 docker build -o target/folder myimage
This allows you to copy your latest stage to target/folder. If you want only specific files and not an entire filesystem, you can add a stage to your build:
FROM XXX as builder-stage
# Your existing dockerfile stages
FROM scratch
COPY --from=builder-stage /file/to/export /
Note: You will need your docker client and engine to be compatible with Docker Engine API 1.40+, otherwise docker will not understand the -o flag.
Reference: https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs

Related

how to override the files in docker container

I have the Dockerfile below:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, if I try to run it with the command below, I still cannot see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container, however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
-->Dockerfile
-->package.json
-->sum.js
-->sum.test.js
-->Test
-->--->updated_sum.test.js
-->Scripts
-->--->updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
The sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
  ls
docker run --rm sum \
  node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js
# not found
Instead you need to rebuild the image, using docker build --build-arg options to provide the values:
docker build \
  --build-arg JS_FILE=./product.js \
  --build-arg JS_TEST_FILE=./product.test.js \
  -t product \
  .
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
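A sketch of building and running it (the sum tag matches the earlier examples):
docker build -t sum -f Dockerfile.sum .
docker run --rm sum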
Another option is to COPY the entire source tree into the image (Javascript files are pretty small compared to a complete Node installation) and use a docker run command to pick which script to run.
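A minimal sketch of that copy-everything approach, reusing the question's layout (the default CMD here is an assumption):
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Copy the entire source tree, including the Test/ and Scripts/ directories
COPY . .
CMD ["node", "./sum.js"]
At run time you can then pick any script that was copied in:
docker run --rm sum node ./Scripts/updated_sum.js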

docker build with inline fails

I am trying to build a Docker image from an inline Dockerfile. It goes well until the COPY statement, where it fails saying it cannot find the file. When I put the same build statements in a Dockerfile and run the build, it works fine.
docker build -t casspy -<<EOF
FROM alpine:latest
RUN apk -v add python3 py-pip bash && \
pip install ldap3 cassandra-driver configargparse boto3
COPY script.py .
EOF
Step 3/3 : COPY script.py .
COPY failed: stat /var/lib/docker/tmp/docker-builder451609694/script.py: no such file or directory
You need to specify a build context for docker. Piping the Dockerfile alone via docker build - omits the build context entirely, so there are no files for COPY to copy. As the documentation puts it:
Omitting the build context can be useful in situations where your Dockerfile does not require files to be copied into the image, and improves the build-speed, as no files are sent to the daemon.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
docker build -t casspy -f- . <<EOF
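Putting it together, the full corrected inline build would look like this (the only changes from the question's command are the -f- flag and the trailing . context):
docker build -t casspy -f- . <<EOF
FROM alpine:latest
RUN apk -v add python3 py-pip bash && \
    pip install ldap3 cassandra-driver configargparse boto3
COPY script.py .
EOF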

Dockerfile COPY and RUN in one layer

I have a script used in the preapration of a Docker image. I have this in the Dockerfile:
COPY my_script /
RUN bash -c "/my_script"
The my_script file contains secrets that I don't want in the image (it deletes itself when it finishes).
The problem is that the file remains in the image despite being deleted because the COPY is a separate layer. What I need is for both COPY and RUN to affect the same layer.
How can I COPY and RUN a script so that both actions affect the same layer?
Take a look at multi-stage builds:
Use multi-stage builds
With multi-stage builds, you use multiple FROM statements in your
Dockerfile. Each FROM instruction can use a different base, and each
of them begins a new stage of the build. You can selectively copy
artifacts from one stage to another, leaving behind everything you
don’t want in the final image. To show how this works, let’s adapt the
Dockerfile from the previous section to use multi-stage builds.
Dockerfile:
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
As of 18.09 you can use docker build --secret to use secret information during the build process. The secrets are mounted into the build environment and aren't stored in the final image.
RUN --mount=type=secret,id=script,dst=/my_script \
  bash -c /my_script
$ docker build --secret id=script,src=my_script.sh .
The script wouldn't need to delete itself.
This can be handled by BuildKit:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=bind,target=/my_script,source=my_script,rw \
  bash -c "/my_script"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image .
This also sounds like you are trying to inject secrets into the build, e.g. to pull from a private git repo. BuildKit also allows you to specify:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=secret,target=/creds,id=cred \
  bash -c "/my_script -i /creds"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image --secret id=creds,src=./creds .
With both of the BuildKit options, the mount never actually adds the file to your image; it only makes the file available as a bind mount during that single RUN step. As long as that RUN step does not write the secret out to another file in your image, the secret never ends up in the image.
For more on the BuildKit experimental syntax, see: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
I guess you can use a workaround to do this:
Serve my_script from a local HTTP server, for example using python -m SimpleHTTPServer; the file can then be accessed at http://http_server_ip:8000/my_script
Then, in the Dockerfile, use:
RUN curl http://http_server_ip:8000/my_script > /my_script && chmod +x /my_script && bash -c "/my_script"
This workaround ensures the file is added and deleted in the same layer; of course, you may need to install curl in the Dockerfile as well.
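For reference, the host side of this workaround can be as simple as the following, run from the directory containing my_script (SimpleHTTPServer is the Python 2 module named above; the Python 3 equivalent is http.server):
python -m SimpleHTTPServer 8000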
I think RUN --mount=type=bind,source=my_script,target=/my_script bash /my_script in BuildKit can solve your problem.
First, prepare BuildKit
export DOCKER_CLI_EXPERIMENTAL=enabled
export DOCKER_BUILDKIT=1
docker buildx create --name mybuilder --driver docker-container
docker buildx use mybuilder
Then, write your Dockerfile.
# syntax = docker/dockerfile:experimental
FROM debian
## something
RUN --mount=type=bind,source=my_script,target=/my_script bash -c /my_script
The first line must be # syntax = docker/dockerfile:experimental because this is an experimental feature.
This method does not work in Play with Docker, but it works on my computer, which runs Ubuntu 20.04 with Docker 19.03.12.
Then, build it with
docker buildx build --platform linux/amd64 -t user/imgname -f ./Dockerfile . --push

Docker file: I want to invoke one script from docker file

I am creating a Docker image named soaphonda.
The code of the Dockerfile is below:
FROM centos:7
FROM python:2.7
FROM java:openjdk-7-jdk
MAINTAINER Daniel Davison <sircapsalot#gmail.com>
# Version
ENV SOAPUI_VERSION 5.3.0
COPY entry_point.sh /opt/bin/entry_point.sh
COPY server.py /opt/bin/server.py
COPY server_index.html /opt/bin/server_index.html
COPY SoapUI-5.3.0.tar.gz /opt/SoapUI-5.3.0.tar.gz
COPY exit.sh /opt/bin/exit.sh
RUN chmod +x /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/server.py
# Download and unarchive SoapUI
RUN mkdir -p /opt
WORKDIR /opt
RUN tar -xvf SoapUI-5.3.0.tar.gz .
# Set working directory
WORKDIR /opt/bin
# Set environment
ENV PATH ${PATH}:/opt/SoapUI-5.3.0/bin
EXPOSE 3000
RUN chmod +x /opt/SoapUI-5.3.0/bin/mockservicerunner.sh
CMD ["/opt/bin/entry_point.sh","exit","pwd", "sh", "/Users/ankitsrivastava/Documents/SametimeFileTransfers/Honda/files/hondascript.sh"]
My image builds successfully. I want that, once the image is built, it is retagged and pushed to Docker Hub. For that I have created the script below:
docker tag soaphonda ankiksri/soaphonda
docker push ankiksri/soaphonda
docker login
docker run -d -p 8089:8089 --name demo ankiksri/soaphonda
containerid=`docker ps -aqf "name=demo"`
echo $containerid
docker exec -it $containerid bash -c 'cd ../SoapUI-5.3.0;sh /opt/SoapUI-5.3.0/bin/mockservicerunner.sh "/opt/SoapUI-5.3.0/Honda-soapui-project.xml"'
Please help me with how I can call the second script from the Dockerfile; also, the exit command is not working in the Dockerfile.
What you have to understand here is that the commands you specify in the Dockerfile are what gets executed when you build the image and then run a container from it.
So tagging, pushing, and running the Docker image should all be done after you have built the image from the Dockerfile; it cannot be done within the Dockerfile itself.
To achieve this kind of thing you would have to use a build tool like Maven (as an example) and automate the process of tagging and pushing the image. Also, looking at your image, I don't see any necessity to keep tagging and pushing it unless you are continuously updating it. Also, there is no point in using three FROM commands, as it will unnecessarily make your Docker image huge.
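As a sketch, the tag-and-push automation can also live in a plain shell script wrapping the commands from the question, with docker login moved before docker push so the push is authenticated:
#!/bin/bash
set -e
docker build -t soaphonda .
docker login
docker tag soaphonda ankiksri/soaphonda
docker push ankiksri/soaphonda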

How to package files with docker image

I have an application that requires some binaries on the host machine for a Docker-based application to work. I can ship the image using a Docker registry, but how do I ship those binaries to the host machine? Creating a deb/rpm seems like one option, but that would be against the Docker platform-independent philosophy.
If you need them outside the Docker image on the host machine, what you can do is this:
Add them to your Dockerfile with ADD or COPY
Also add an installation script which calls cp -f src dest
Then bind mount an installation directory from the host to dest in the container.
Something like the following example:
e.g. Dockerfile
FROM ubuntu:16.04
COPY file1 /src
COPY file2 /src
COPY install /src
CMD install
Build it:
docker build -t installer .
install script:
#!/bin/bash
cp -f /src/* /dist
Installation:
docker run -v /opt/bin:/dist installer
Will result in file1 & file2 ending up in /opt/bin on the host.
If your image is based off of an image with a package manager, you could use the package manager to install the required binaries, e.g.
RUN apt-get update && apt-get install -y required-package
Alternatively, you could download the statically linked binaries from the internet and extract them, e.g.
RUN curl -s -L https://example.com/some-bin.tar.gz | tar -C /opt -zx
If the binaries are created as part of the build process, you'd want to COPY them over
COPY build/target/bin/* /usr/local/bin/
