Include directory of docker-compose files - docker

How can I run docker-compose with the base docker-compose.yml plus a whole directory of docker-compose files?
For example, if I had this directory structure:
parentdir
  docker-compose.yml
  folder1/
    foo.yml
    bar.yml
  folder2/
    foo.yml
    other.yml
How can I specify which folder of manifests to run when running compose?

I hope I understood your question correctly.
You could use the -f flag to point at a specific file:
docker-compose -f docker-compose1.yml up
Edit
To answer your comment: no, you can't point docker-compose at a directory. The -f flag expects a file path, not a directory path.
What you could do is create a shell script like:
#!/bin/bash
# Usage: DOCKER_PATH='dockerfiles/*' ./script.sh
for dockerfile in $DOCKER_PATH   # left unquoted on purpose so the glob expands
do
  if [[ -f "$dockerfile" ]]; then
    docker-compose -f "$dockerfile" up -d   # or whichever subcommand you need
  fi
done
Call it like DOCKER_PATH='dockerfiles/*' ./script.sh, which will execute docker-compose -f against every file in DOCKER_PATH.
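For instance, assuming dockerfiles/ contains a.yml and b.yml (hypothetical names), the loop expands to:
docker-compose -f dockerfiles/a.yml up -d
docker-compose -f dockerfiles/b.yml up -d
Note that each file is brought up by its own invocation, one after the other, rather than merged into a single compose project.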

My best option was to have a run.bash file in the base directory of my project.
I then put all my compose files in, say, a compose/ directory, and run compose with this command:
docker-compose $(./run.bash) up
run.bash:
#!/usr/bin/env bash
# Emit a "-f <file>" flag for every compose file, plus "-p <project name>".
PROJECT_NAME='projectname' # must be set manually, since compose otherwise uses the current directory as the project name
DOCKER_PATH=$PWD/compose/*
MANIFESTS=''
for dockerfile in $DOCKER_PATH   # unquoted so the glob expands
do
  MANIFESTS="${MANIFESTS} -f ${dockerfile}"
done
MANIFESTS="${MANIFESTS} -p ${PROJECT_NAME}"
echo $MANIFESTS
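With two files in compose/, the script would print something like (paths illustrative):
-f /home/user/projectname/compose/web.yml -f /home/user/projectname/compose/db.yml -p projectname
docker-compose then picks those flags up through the $(./run.bash) substitution.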

You can pass multiple files to docker-compose by repeating -f for each file. For example, if you have N files, you can pass them as follows:
docker-compose -f file1 -f file2 ... -f fileN [up|down|pull|...]
If you have files in sub-directories and you want to pass them all to docker-compose recursively, you can use the following (matching on -name catches both .yml and .yaml, whereas grepping for "yaml" would miss .yml files):
docker-compose $(for i in $(find . -type f \( -name '*.yml' -o -name '*.yaml' \))
do
echo -f "$i"
done
) [up|down|pull|...]

Related

docker build cannot find secret file outside home directory

I'm building a docker image as follows:
TEMP_FILE="/home/username/any/directory/temp"
touch $TEMP_FILE
<secrets> > $TEMP_FILE
export DOCKER_BUILDKIT=1
pushd $PROJECT_ROOT
docker build -t $DOCKER_IMAGE_NAME \
  --secret id=netrc,src=$TEMP_FILE \
  --build-arg=<...> \
  -f Dockerfile .
rm $TEMP_FILE
Currently this works.
I'd now like to use $(mktemp) to create the TEMP_FILE in the /tmp directory. However, when I point TEMP_FILE outside of /home, I get the following error:
could not parse secrets: [id=netrc,src=/tmp/temp-file-name]: failed to stat /tmp/temp-file-name: stat /tmp/temp-file-name: no such file or directory
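For reference, the only change from the working version is how the temp file is created, essentially:
TEMP_FILE="$(mktemp)"   # yields something like /tmp/tmp.XXXXXXXXXX
The rest of the script stays the same.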
The script itself has no issue: I can easily find and view the temporary file, for example with cat $TEMP_FILE.
How do I give docker build access to /tmp?

Is there any way to use hadolint for multiple Dockerfiles?

Hadolint is an awesome tool for linting Dockerfiles. I am trying to integrate it into my CI, but I am struggling to run it over multiple Dockerfiles. Does someone know what the syntax should look like? Here is how my directories look:
dir1/Dockerfile
dir2/Dockerfile
dir3/foo/Dockerfile
in gitlab-ci
stage: hadolint
image: hadolint/hadolint:latest-debian
script:
  - mkdir -p reports
  - |
    hadolint dir1/Dockerfile > reports/dir1.json \
    hadolint dir2/Dockerfile > reports/dir2.json \
    hadolint dir3/foo/Dockerfile > reports/dir3.json
But the sample above is not working.
As far as I found, hadolint accepts several Dockerfiles in one invocation (the shell expands the glob), so in my case:
- hadolint */Dockerfile > reports/all_reports.json
But the problem with this approach is that all reports end up in one file, which hampers maintenance and clarity (and */Dockerfile only matches one directory level, so it misses dir3/foo/Dockerfile).
If you want to keep all reports separated (one per top-level directory), you may want to rely on some shell snippet?
I mean something like:
- |
  find . -name Dockerfile -exec \
    sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
Explanation:
find . -name Dockerfile loops over all Dockerfiles under the current directory;
-exec sh -c '…' runs a subshell for each Dockerfile, setting:
$0 = "sh" (dummy value)
$1 = "{}" (the relative path of the Dockerfile); "{}" and \; are part of the find … -exec pattern;
src=${1#./} strips the leading ./, turning ./dir1/Dockerfile into dir1/Dockerfile;
${src%%/*} extracts the top-level directory name (dir1/Dockerfile → dir1);
| tee -a … appends hadolint's output to the report file of the matching top-level directory (appending with -a rather than truncating with > matters when several Dockerfiles live under the same top-level directory, as later output would otherwise overwrite earlier output).
I have replaced the .json extension with .txt as hadolint does not seem to output JSON data.
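With the directory layout above, this should leave one report per top-level directory, e.g. reports/dir1.txt, reports/dir2.txt and reports/dir3.txt.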

How to set a bash variable to a file name in Docker

I have a Dockerfile in which files in a directory are downloaded:
RUN wget https://www.classe.cornell.edu/~cesrulib/downloads/tarballs/ -r -l1 --no-parent -A tgz \
--cut=99 -nH -nv --show-progress --progress=bar:force:noscroll
I know that there is exactly one file here of the form "bmad_dist_YYYY_MMDD.tgz" where "YYYY_MMDD" is a date. For example, the file might be named "bmad_dist_2020_0707.tgz". I want to set a bash variable to the file name without the ".tgz" extension. If this was outside of docker I could use:
FULLNAME=$(ls -1 bmad_dist_*.tgz)
BMADDIST="${FULLNAME%.*}"
So I tried in the dockerfile:
ENV FULLNAME $(ls -1 bmad_dist_*.tgz)
ENV BMADDIST "${FULLNAME%.*}"
But this does not work. Is it possible to do what I want?
Shell expansion does not happen in a Dockerfile ENV instruction. The workaround you can try is to pass the name in during docker build.
Grab the filename before the build and discard the file, or try wget --spider to get just the filename without downloading the file.
ARG FULLNAME
ENV FULLNAME=${FULLNAME}
Then pass the full name dynamically during build time.
For example
docker build --build-arg FULLNAME=$(wget -nv https://upload.wikimedia.org/wikipedia/commons/5/54/Golden_Gate_Bridge_0002.jpg 2>&1 | cut -d\" -f2) -t my_image .
The ENV ... ... syntax is mainly for plaintext content, docker build arguments, or other environment variables. It does not support a subshell like your example.
It is also not possible to use RUN export ... and have that variable defined in downstream image layers.
The best route may be to write the name to a file in the filesystem and read from that file instead of an environment variable. Or, if an environment variable is crucial, you could set an environment variable from the contents of that file in an ENTRYPOINT script.
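A minimal sketch of that file-based approach (the /etc/bmaddist path and the entrypoint script are illustrative, not from the original post). In the Dockerfile:
# capture the name at build time, inside a single RUN step
RUN FULLNAME=$(ls -1 bmad_dist_*.tgz) && echo "${FULLNAME%.*}" > /etc/bmaddist
ENTRYPOINT ["/entrypoint.sh"]
And in /entrypoint.sh (copied into the image and made executable):
#!/bin/sh
# read the recorded name back into the environment, then hand off to the real command
export BMADDIST="$(cat /etc/bmaddist)"
exec "$@"
RUN executes a real shell, so the command substitution and the ${FULLNAME%.*} trimming work exactly as they would outside Docker.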

Use docker compose config from file and stdin

docker-compose can use a config file from stdin using -f - (Example: cat config.yml | docker-compose -f - up)
However, this does not seem to work when providing multiple config files. For example, the command:
cat config.yml | docker-compose -f ./docker-compose.yml -f - up
returns with the error: ERROR: .FileNotFoundError: [Errno 2] No such file or directory: './-'
Is there a way to use multiple config files and still provide one config through stdin?
You can use the special device /dev/stdin, as in:
cat config.yml | docker-compose -f docker-compose.yml -f /dev/stdin up
This may not work in all cases (I've encountered some oddness when -f /dev/stdin is the first file listed on the command line), but it does seem to work.
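If the extra config comes from a command rather than an existing file, bash/zsh process substitution may also be worth a try, since it hands docker-compose a /dev/fd path much like /dev/stdin (a sketch, not verified across all compose versions):
docker-compose -f docker-compose.yml -f <(cat config.yml) up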

How to copy multiple files from container to host using docker cp

I want to use wildcard to select multiple files from a directory in a container and use docker cp to copy these files from container to docker host.
I couldn't find out whether docker cp supports wildcards yet.
docker cp fd87af99b650:/foo/metrics.csv* /root/metrices_testing/
This results in the error metrics.csv*: no such file or directory
I came across an example where a for loop was used to select a few files and send them to a container, but I want to transfer files from the container to the host, and I want to do this on the docker host itself, since the script runs on the host only.
Using docker exec to select the files first and then copying them with docker cp can be an option, but that is a 2-step process.
Can someone please help me do this in one step?
EDIT:
I tried this. A step closer, but still failing.
for f in $(docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*");
do docker cp SPSRS:$f /root/metrices_testing/;
done
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-08:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:00
(… the same lstat error repeats for every matched file, from 09:15 through 12:45 …)
In fact your approach can achieve your aim; it just needs a little change:
for f in $(docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp SPSRS:$f /root/metrices_testing/; done
->
for f in $(docker exec SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp SPSRS:`echo $f | sed 's/\r//g'` /root/metrices_testing/; done
This is because the output of docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*" carries a \r at the end of every matched name (the -t flag allocates a TTY), so docker cp cannot find the files in the container.
So we pipe each name through echo $f | sed 's/\r//g' to strip the \r, which makes the loop work.
NOTE: for Alpine, use sh in place of bash; also, -it should be dropped, because the colored output it triggers in Alpine introduces invisible characters such as ^[[0;0m.
docker cp supports copying a folder with all the contents inside the folder:
docker cp -a container-id:/opt/tpa/logs/ /root/testing/
The example above copies the files from the container folder /opt/tpa/logs to the /root/testing/ folder on the local machine; all the files inside /logs/ are copied over. The trick here is using the -a option along with docker cp.
Docker cp still doesn't support wildcards. You can however use them in a Dockerfile in the following way:
COPY hom* /mydir/ # adds all files starting with "hom"
COPY hom?.txt /mydir/ # ? is replaced with any single character, e.g., "home.txt"
Reference: https://docs.docker.com/engine/reference/builder/#copy
Run this inside the container:
dcp() {
  if [ "$#" -eq 1 ]; then
    # a single file: docker cp can handle it directly
    printf "docker cp %q .\n" "$(hostname):$(readlink -e "$1")"
  else
    local archive="$(mktemp -t "export-XXXXX.tgz")"
    # pack all the arguments into an archive; print a dot every ~50 MiB as a progress indicator
    tar czf "$archive" "$@" --checkpoint=.52428800
    printf "docker exec %q cat %q | tar xvz -C .\n" "$(hostname)" "$archive"
  fi
}
Then select the files you want to copy out:
dcp /foo/metrics.csv*
It'll create an archive inside of the container and spit out a command for you to run. Run that command on the host.
e.g.
docker exec 1c75ed99fa42 cat /tmp/export-x9hg6.tgz | tar xvz -C .
Or, I guess you could do it without the temporary archive:
dcp() {
  if [ "$#" -eq 1 ]; then
    printf "docker cp %q .\n" "$(hostname):$(readlink -e "$1")"
  else
    # stream a tar archive of all the arguments instead of writing a temporary file
    printf "docker exec %q tar czC %q" "$(hostname)" "$PWD"
    printf " %q" "$@"
    printf " | tar xzvC .\n"
  fi
}
It will generate a command for you, like:
docker exec 1c75ed99fa42 tar czC /root .cache .zcompdump .zinit .zshrc .zshrc.d foo\ bar | tar xzvC .
You don't even need the alias then, it's just a convenience.
docker cp accepts either files or tar archives, so you can pack the list of files provided as arguments into a tar archive, write the archive to stdout and pipe it to docker cp.
#!/bin/bash
if [[ "$#" -lt 2 || "$1" == "-h" || "$1" == "--help" ]]; then
  printf "Copy files to docker container directory.\n\n"
  echo "Usage: $(basename $0) files... container:directory"
  exit 0
fi
SOURCE="${*%${!#}}"   # every argument except the last one
TARGET="${!#}"        # the last argument, i.e. container:directory
tar cf - $SOURCE | docker cp - $TARGET
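Usage, with hypothetical file and container names:
./dockercp.sh metrics1.csv metrics2.csv SPSRS:/root/metrices_testing
Note that the destination must be a directory inside the container, since docker cp - unpacks a tar stream into it.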
