I'm building a docker image as follows:
TEMP_FILE="/home/username/any/directory/temp"
touch $TEMP_FILE
<secrets> > $TEMP_FILE
export DOCKER_BUILDKIT=1
pushd $PROJECT_ROOT
docker build -t $DOCKER_IMAGE_NAME \
--secret id=netrc,src=$TEMP_FILE \
--build-arg=<...> \
-f Dockerfile .
rm $TEMP_FILE
Currently this works.
I'd now like to use $(mktemp) to create the TEMP_FILE in the /tmp directory. However, when I point TEMP_FILE outside of /home, I get the following error:
could not parse secrets: [id=netrc,src=/tmp/temp-file-name]: failed to stat /tmp/temp-file-name: stat /tmp/temp-file-name: no such file or directory
The script itself has no issue; I can easily find and view the temporary file, for example with cat $TEMP_FILE.
How do I give docker build access to /tmp?
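For reference, a sketch of what the mktemp variant might look like (the secret contents, image name, and build args are placeholders carried over from the script above; the trap-based cleanup is an illustrative addition):

```shell
# Hypothetical mktemp-based variant of the script above.
TEMP_FILE="$(mktemp)"              # lands in $TMPDIR, usually /tmp
trap 'rm -f "$TEMP_FILE"' EXIT     # remove the secret even if the build fails
printf '%s\n' "<secrets>" > "$TEMP_FILE"

export DOCKER_BUILDKIT=1
docker build -t "$DOCKER_IMAGE_NAME" \
    --secret id=netrc,src="$TEMP_FILE" \
    -f Dockerfile .
```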
I need to extract the filesystem of a debian image onto the host, modify it, then repackage it back into a docker image. I'm using the following commands:
docker export container_name > archive.tar
tar -xf archive.tar -C debian/
# modify the filesystem here
tar -cpjf archive-modified.tar debian/
docker import archive-modified.tar debian-modified
docker run -it debian-modified /bin/bash
After I try to run the new docker image I get the following error:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.
ERRO[0000] error waiting for container: context canceled
I've tried the above steps without modifying the file system at all and I get the same behavior. I've also tried importing the output of docker export directly, and this works fine. This probably means I'm creating the new tar archive incorrectly. Can anyone tell me what I'm doing wrong?
Take a look at the archive generated by docker export:
# tar tf archive.tar | sort | head
bin/
bin/bash
bin/cat
bin/chgrp
bin/chmod
bin/chown
bin/cp
bin/dash
bin/date
bin/dd
And then at the archive you generate with your tar -cpjf ... command:
# tar tf archive-modified.tar | sort | head
debian/
debian/bin/
debian/bin/bash
debian/bin/cat
debian/bin/chgrp
debian/bin/chmod
debian/bin/chown
debian/bin/cp
debian/bin/dash
debian/bin/date
You've moved everything into a debian/ top-level directory, so there is no /bin/bash in the image (it would be /debian/bin/bash, and it probably wouldn't work anyway because your shared libraries aren't in the expected location either).
You probably want to create the updated archive like this:
# tar -cpjf archive-modified.tar -C debian/ .
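The path difference can be reproduced without Docker at all. A quick local sketch (plain tar, since compression doesn't affect the member paths; directory names made up):

```shell
# Demonstrate why -C matters: member paths must start at the image root.
mkdir -p demo/debian/bin
echo fake > demo/debian/bin/bash

# Wrong: everything is prefixed with debian/, so the image has no /bin/bash.
tar -cpf demo/archive-wrong.tar -C demo debian
tar tf demo/archive-wrong.tar | grep bash    # -> debian/bin/bash

# Right: change into debian/ first, so paths start at the archive root.
tar -cpf demo/archive-right.tar -C demo/debian .
tar tf demo/archive-right.tar | grep bash    # -> ./bin/bash
```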
I'm trying to create a docker image using this command (removed the address as it's a company address):
docker build -f Dockerfile.web --build-arg _env=MTP-uat1 . -t Company/address:NlLogDownloadAl
But I keep getting this error:
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount745508724/Dockerfile.web: no such file or directory
Now I've gone through about 30 similar questions and followed what they suggest, but it makes no difference.
I have done the following:
Changed the docker engine script buildkit from true to false.
Made sure the directory I'm referring to has the Dockerfile.web file.
Removed some things mentioned from the .dockerignore file.
I still get the same error all the time. Why?
The last part of the command has to be the build context (the directory where Docker should look for files, i.e. "the dot"):
Usage: docker build [OPTIONS] PATH | URL | -
Try this one:
docker build \
-f Dockerfile.web \
--build-arg _env=MTP-uat1 \
-t Company/address:NlLogDownloadAl \
.
You are getting no such file or directory because the context wasn't specified properly: the last argument Company/address:NlLogDownloadAl (or part of it) got treated as the context directory. That directory probably doesn't exist, so the lookup for Dockerfile.web inside it failed as well.
I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or save binaries, run them and delete them in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
./binary-${INSTALLER_VERSION}.bin && \
rm binary-${INSTALLER_VERSION}.bin
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580
.ONESHELL:
.PHONY: build
build:
	# Kills the HTTP server when the build is done
	function cleanup {
		pkill -f "python3 -m http.server.*${SERVER_PORT}"
	}
	trap cleanup EXIT
	# Starts a HTTP server that makes the contents of the project directory
	# available to the image
	python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
	sleep 1
	EXTRA_ARGS=""
	# Allows skipping the build cache by setting NO_CACHE=1
	if [[ -n $$NO_CACHE ]]; then
		EXTRA_ARGS="--no-cache"
	fi
	docker build $$EXTRA_ARGS \
		--network host \
		--build-arg SERVER_PORT=${SERVER_PORT} \
		-t ${IMAGE_NAME}:latest \
		.
	docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
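As a sanity check, the serve-and-fetch half of this pattern can be exercised locally, outside any Docker build (port number and file name arbitrary):

```shell
# Serve the current directory over HTTP and fetch a file back with curl,
# mirroring what the RUN instructions above do inside the build.
echo "pretend installer" > binary-1.0.0.bin
python3 -m http.server -b 127.0.0.1 8580 &>/dev/null &
SERVER_PID=$!
trap 'kill $SERVER_PID' EXIT
sleep 1
curl -s -O "http://127.0.0.1:8580/binary-1.0.0.bin" && echo fetched
```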
I think the best way is to download the bin from a website then run it:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin && /tmp/huge-installer.bin && rm /tmp/huge-installer.bin
This way the image layer will not contain the binary you downloaded, since it is fetched, run, and deleted within a single RUN instruction.
I didn't test it thoroughly, but wouldn't an approach like this be viable? (Besides LinPy's answer, which is far easier if you have the option to do it that way.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
echo "I am an image that can run your huge installer binary!" \
&& echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
echo "installer finished, commit me now!"
sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --name foo-1 --rm -d -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should no longer contain it. You could also build another Dockerfile based on foo-2 to change the entrypoint, for example.
Cf. docker commit
I want to use wildcard to select multiple files from a directory in a container and use docker cp to copy these files from container to docker host.
I couldn't find whether docker cp supports wildcards or not.
docker cp fd87af99b650:/foo/metrics.csv* /root/metrices_testing/
This results with the error metrics.csv*: no such file or directory
I came across an example where a for loop was used to select a few files and send them to a container, but I want to transfer files from the container to the host, and I want to do this on the Docker host itself, since the script runs on the host.
Using docker exec to select the files first and then copying them with docker cp is an option, but that is a two-step process.
Can someone please help me do this in one step?
EDIT:
I tried this. A step closer, but still failing.
# for f in $(docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*");
do docker cp SPSRS:$f /root/metrices_testing/;
done
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-08:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:00
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:15
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:30
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-10:00
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-10:15
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-10:30
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-10:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-11:00
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-11:15
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-11:30
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-11:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-12:00
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-12:15
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-12:30
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-12:45
In fact your solution almost achieves your aim; it just needs a small change:
for f in $(docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp SPSRS:$f /root/metrices_testing/; done
->
for f in $(docker exec SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp SPSRS:`echo $f | sed 's/\r//g'` /root/metrices_testing/; done
This is because docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*" allocates a TTY, which leaves a \r at the end of every matched name, so docker cp cannot find the files in the container.
So we use echo $f | sed 's/\r//g' to strip the \r from every file name, which makes it work.
NOTE: for Alpine, use sh instead of bash. Also drop -it, since in Alpine the interactive TTY can introduce invisible characters such as the color escape ^[[0;0m.
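The carriage-return problem is easy to see in isolation. A minimal sketch of what the sed substitution does (file name taken from the error output above):

```shell
# Names coming back through a TTY end in "\r"; sed strips it.
name=$(printf 'metrics.csv.2018.07.10-08:45\r')
clean=$(echo "$name" | sed 's/\r//g')
printf '[%s]\n' "$clean"    # -> [metrics.csv.2018.07.10-08:45]
```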
The docker cp command can copy a folder along with all of its contents:
docker cp -a container-id:/opt/tpa/logs/ /root/testing/
The above copies the files from the container folder /opt/tpa/logs to the local machine's /root/testing/ folder; everything inside /logs/ is copied over. The trick here is using the -a option along with docker cp.
Docker cp still doesn't support wildcards. You can however use them in a Dockerfile in the following way:
COPY hom* /mydir/ # adds all files starting with "hom"
COPY hom?.txt /mydir/ # ? is replaced with any single character, e.g., "home.txt"
Reference: https://docs.docker.com/engine/reference/builder/#copy
Run this inside the container:
dcp() {
if [ "$#" -eq 1 ]; then
printf "docker cp %q .\n" "$(hostname):$(readlink -e "$1")"
else
local archive="$(mktemp -t "export-XXXXX.tgz")"
tar czf "$archive" "$@" --checkpoint=.52428800
printf "docker exec %q cat %q | tar xvz -C .\n" "$(hostname)" "$archive"
fi
}
Then select the files you want to copy out:
dcp /foo/metrics.csv*
It'll create an archive inside of the container and spit out a command for you to run. Run that command on the host.
e.g.
docker exec 1c75ed99fa42 cat /tmp/export-x9hg6.tgz | tar xvz -C .
Or, I guess you could do it without the temporary archive:
dcp() {
if [ "$#" -eq 1 ]; then
printf "docker cp %q .\n" "$(hostname):$(readlink -e "$1")"
else
printf "docker exec %q tar czC %q" "$(hostname)" "$PWD"
printf " %q" "$#"
printf " | tar xzvC .\n"
fi
}
Will generate a command for you, like:
docker exec 1c75ed99fa42 tar czC /root .cache .zcompdump .zinit .zshrc .zshrc.d foo\ bar | tar xzvC .
You don't even need the alias then, it's just a convenience.
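The streaming variant boils down to a tar-to-tar pipe. Here is a local sketch of that core, with made-up directories standing in for the container and host sides:

```shell
# Copy files between two directories through a compressed tar stream,
# the same shape as the generated "docker exec ... | tar xzvC ." command.
mkdir -p src dst
echo data > 'src/foo bar'       # a name with a space, hence the %q quoting
tar czC src . | tar xzC dst     # no intermediate archive file
ls dst                          # -> foo bar
```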
docker cp accepts either files or tar archives, so you can pack the list of files provided as arguments into a tar archive, write the archive to stdout, and pipe it to docker cp.
#!/bin/bash
if [[ "$#" -lt 2 || "$1" == "-h" || "$1" == "--help" ]]; then
printf "Copy files to docker container directory.\n\n"
echo "Usage: $(basename $0) files... container:directory"
exit 0
fi
SOURCE="${*%${!#}}"
TARGET="${!#}"
tar cf - $SOURCE | docker cp - $TARGET
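The two parameter expansions do the argument splitting: in bash, `${!#}` is the last argument and `${*%${!#}}` is everything before it. A quick standalone sketch (file names made up):

```shell
# Split "files... target" argument lists the way the script above does.
split_args() {
    SOURCE="${*%${!#}}"   # everything except the last argument
    TARGET="${!#}"        # the last argument
}
split_args a.txt b.txt container:/dir
echo $SOURCE    # -> a.txt b.txt
echo "$TARGET"  # -> container:/dir
```

Note that SOURCE relies on word splitting, so this breaks on file names containing spaces.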
How can I run docker-compose with the base docker-compose.yml and a whole directory of docker-compose files.
Like if I had this directory structure:
parentdir
docker-compose.yml
folder1/
foo.yml
bar.yml
folder2/
foo.yml
other.yml
How can I specify which folder of manifests to run when running compose?
I hope I understood your question well.
You could use the -f flag:
docker-compose -f docker-compose1.yml
Edit
To answer your comment: no, you can't point docker-compose at a whole directory with one command. You need to specify a file path, not a directory path.
What you could do is create a shell script like:
#!/bin/bash
DOCKERFILE_PATH=$DOCKER_PATH
for dockerfile in $DOCKERFILE_PATH
do
if [[ -f $dockerfile ]]; then
docker-compose -f $dockerfile
fi;
done
By calling it like DOCKER_PATH=dockerfiles/* ./script.sh, it will execute docker-compose -f with every file in DOCKER_PATH.
(docs)
My best option was to have a run.bash file in the base directory of my project.
I then put all my compose files in say compose/ directory, then run it with this command:
docker-compose $(./run.bash) up
run.bash:
#!/usr/bin/env bash
PROJECT_NAME='projectname' # need to set manually since it normally uses current directory as project name
DOCKER_PATH=$PWD/compose/*
MANIFESTS=' '
for dockerfile in $DOCKER_PATH
do
MANIFESTS="${MANIFESTS} -f $dockerfile"
done
MANIFESTS="${MANIFESTS} -p $PROJECT_NAME"
echo $MANIFESTS
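The flag-building loop can be checked on its own with a throwaway directory (file and project names hypothetical):

```shell
# Build the "-f file1 -f file2 -p project" string the same way run.bash does.
mkdir -p compose
touch compose/base.yml compose/web.yml
MANIFESTS=''
for dockerfile in compose/*.yml; do
    MANIFESTS="${MANIFESTS} -f $dockerfile"
done
MANIFESTS="${MANIFESTS} -p projectname"
echo $MANIFESTS    # -> -f compose/base.yml -f compose/web.yml -p projectname
```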
You can pass multiple files to docker-compose by using -f for each file. For example if you have N files, you can pass them as follows:
docker-compose -f file1 -f file2 ... -f fileN [up|down|pull|...]
If you have files in sub-directories and you want to pass them to docker-compose recursively, you can use the following:
docker-compose $(for i in $(find . -type f | grep -E '\.ya?ml$')
do
echo -f $i
done
) [up|down|pull|...]