Use docker compose config from file and stdin

docker-compose can use a config file from stdin using -f - (Example: cat config.yml | docker-compose -f - up)
However, this does not seem to work when providing multiple config files. For example, the command:
cat config.yml | docker-compose -f ./docker-compose.yml -f - up
returns with the error: ERROR: .FileNotFoundError: [Errno 2] No such file or directory: './-'
Is there a way to use multiple config files and still provide one config through stdin?

You can use the special device /dev/stdin, as in:
cat config.yml | docker-compose -f docker-compose.yml -f /dev/stdin up
This may not work in all cases (I've encountered some oddness when -f /dev/stdin is the first file listed on the command line), but it does seem to work.
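If /dev/stdin misbehaves, an alternative sketch (assuming a shell with process substitution, such as bash) is to let the shell hand compose a file-descriptor path instead of stdin itself:
# <(...) expands to a readable path like /dev/fd/63
docker-compose -f docker-compose.yml -f <(cat config.yml) up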

Related

docker build cannot find secret file outside home directory

I'm building a docker image as follows:
TEMP_FILE="/home/username/any/directory/temp"
touch $TEMP_FILE
<secrets> > $TEMP_FILE   # placeholder: write the secret material into the file
export DOCKER_BUILDKIT=1
pushd $PROJECT_ROOT
docker build -t $DOCKER_IMAGE_NAME \
  --secret id=netrc,src=$TEMP_FILE \
  --build-arg=<...> \
  -f Dockerfile .
rm $TEMP_FILE
Currently this works.
I'd now like to use $(mktemp) to create the TEMP_FILE in the /tmp directory. However, when I point TEMP_FILE outside of /home, I get the following error:
could not parse secrets: [id=netrc,src=/tmp/temp-file-name]: failed to stat /tmp/temp-file-name: stat /tmp/temp-file-name: no such file or directory
The script itself has no issue; I can easily find and view the temporary file, for example with cat $TEMP_FILE.
How do I give docker build access to /tmp?
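No answer is recorded here, but one hedged workaround, based on the observation above that paths under /home do work, is to keep using mktemp while pointing it at a directory under $HOME (GNU mktemp assumed; the template name is illustrative):
# Create the temp file under $HOME, where the build is known to see it,
# instead of mktemp's default /tmp
TEMP_FILE=$(mktemp -p "$HOME" netrc.XXXXXX)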

How to properly create a tar archive to import with docker

I need to extract the filesystem of a debian image onto the host, modify it, then repackage it back into a docker image. I'm using the following commands:
docker export container_name > archive.tar
tar -xf archive.tar -C debian/
# ... modify the filesystem here ...
tar -cpjf archive-modified.tar debian/
docker import archive-modified.tar debian-modified
docker run -it debian-modified /bin/bash
After I try to run the new docker image I get the following error:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.
ERRO[0000] error waiting for container: context canceled
I've tried the above steps without modifying the file system at all and I get the same behavior. I've also tried importing the output of docker export directly, and this works fine. This probably means I'm creating the new tar archive incorrectly. Can anyone tell me what I'm doing wrong?
Take a look at the archive generated by docker export:
# tar tf archive.tar | sort | head
bin/
bin/bash
bin/cat
bin/chgrp
bin/chmod
bin/chown
bin/cp
bin/dash
bin/date
bin/dd
And then at the archive you generate with your tar -cpjf ... command:
# tar tf archive-modified.tar | sort | head
debian/
debian/bin/
debian/bin/bash
debian/bin/cat
debian/bin/chgrp
debian/bin/chmod
debian/bin/chown
debian/bin/cp
debian/bin/dash
debian/bin/date
You've moved everything into a debian/ top-level directory, so there is no /bin/bash in the image (it would be /debian/bin/bash), and it probably wouldn't work anyway because your shared libraries aren't in the expected location, either.
You probably want to create the updated archive like this:
# tar -cpjf archive-modified.tar -C debian/ .
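As a quick sanity check (same file names as above), list the fixed archive; entries should now be rooted at ./ rather than debian/:
# Expect ./bin/bash and friends instead of debian/bin/bash
tar tf archive-modified.tar | sort | head
docker import archive-modified.tar debian-modified
docker run -it debian-modified /bin/bash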

Issues while creating a Docker image

I'm trying to create a docker image using this command (removed the address as it's a company address):
docker build -f Dockerfile.web --build-arg _env=MTP-uat1 . -t Company/address:NlLogDownloadAl
But I keep getting this error:
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount745508724/Dockerfile.web: no such file or directory
Now I've gone through some 30 similar questions and followed what they say would fix it, but it makes no difference.
I have done the following:
1. Changed the Docker engine's buildkit setting from true to false.
2. Made sure the directory I'm referring to contains the Dockerfile.web file.
3. Removed some entries from the .dockerignore file.
I still get the same error all the time. Why?
The last argument of the command has to be the build context (the directory where Docker should look for files, i.e. "the dot"):
Usage: docker build [OPTIONS] PATH | URL | -
Try this one:
docker build \
-f Dockerfile.web \
--build-arg _env=MTP-uat1 \
-t Company/address:NlLogDownloadAl \
.
You are getting no such file or directory because the context wasn't specified properly: the trailing Company/address:NlLogDownloadAl (or part of it) was treated as the context folder, which probably doesn't even exist, and Docker then failed to find Dockerfile.web inside that invalid or wrong folder.
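For reference, the same fix as a one-liner, with the context path as the final argument:
docker build -f Dockerfile.web --build-arg _env=MTP-uat1 -t Company/address:NlLogDownloadAl .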

How to pass an argument for a configuration file in JupyterHub's deployment?

I want to install envkey in my Docker image, which requires a key-value pair. I have the key-value pair, but I can't figure out how to install it in my Docker image using those arguments and then deploy the result on JupyterHub.
I looked at other deployments of mine that use envkey. Here is how they work:
1. I have a Makefile and I run the command sudo make dev config=aviral.cfg
2. The dev command in the Makefile is as follows:
dev:
	docker build -t $(IMAGE) -f Dockerfile.dev . && docker tag $(IMAGE) $(ALIAS)
	#echo "\nCreate docker container.."
	CONFIG=$(config) IMAGE=$(IMAGE) docker-compose -f docker-compose.yml up -d --scale test=0 --scale airflow_worker=0
	#echo "\n$(GREEN)Done.$(NO_COLOR)\n"
	#echo "Try airflow at http://localhost:8080."
	#echo "and flower at http://localhost:5555."
The docker-compose file is:
airflow_worker:
  image: ${IMAGE}:latest
  restart: always
  depends_on:
    - airflow_scheduler
  # ports:
  #   - 8793:8793
  # environment:
  #   - GOOGLE_APPLICATION_CREDENTIALS=/gcloud/cloud.json
  env_file:
    - ${CONFIG}
  command: worker
As you can see, the env_file is passed on.
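For context, env_file just loads KEY=value lines from the named file into the container's environment; a minimal sketch of the equivalent plain docker run invocation (file and image names from above) would be:
# Same effect as the env_file entry, outside of compose
docker run --env-file aviral.cfg ${IMAGE}:latest worker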
I can't work out how to do the same in JupyterHub.
The Helm chart is here: https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz. And my config is:
proxy:
  secretToken: "yada_yada"
singleuser:
  image:
    name: yada_yada.dkr.ecr.ap-south-1.amazonaws.com/demo
    tag: 12h
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''> aviral.py']
  imagePullSecret:
    enabled: true
    registry: yada_yada.dkr.ecr.ap-south-1.amazonaws.com
    username: aws
    email: aviral@yada_yada.com
    password: yada_yada
In my config file, I pass variables as:
ENVKEY=my_personal_envkey
I expect my configs to be passed into the Docker image, or perhaps I need to write a proper Makefile for this. As of now, I am facing this error:
Step 19/32 : RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
---> Running in 35bc1cf0e1c8
envkey-source 1.2.9 Quick Install
Copyright (c) 2019 Envkey Inc. - MIT License
https://github.com/envkey/envkey-source
Downloading envkey-source binary for linux-amd64
Downloading tarball from https://github.com/envkey/envkey-source/releases/download/v1.2.9/envkey-source_1.2.9_linux_amd64.tar.gz
envkey-source is installed in /usr/local/bin
Installation complete. Info:
bash: line 97: 29 Segmentation fault envkey-source -h
The command '/bin/sh -c curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash' returned a non-zero code: 139
Although this question alone should be enough to give you the picture, for the sake of context here are some related questions:
1. How do I make jupyter-hub access my private docker image repository?
2. Unable to run a lifecycle command from config.yaml while deploying jupyterhub
3. How to have file written automatically in the startup folder when a new user signs up/in on JuPyter hub?
You probably get this error because the install.sh script places the envkey-source binary under /usr/local/bin and then runs envkey-source -h, which fails. Check whether the user (if non-root) has permission to do that, and whether the /usr/local/bin directory exists in the container image.
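One way to check both conditions is to poke at the last successfully built layer (a sketch; works with the classic builder shown in the log above, which prints an image id after each step):
# substitute the image id printed by the last successful step
docker run --rm <last-successful-image-id> ls -ld /usr/local/bin
docker run --rm <last-successful-image-id> id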
Hope it helps!

Include directory of docker-compose files

How can I run docker-compose with the base docker-compose.yml plus a whole directory of docker-compose files?
Like if I had this directory structure:
parentdir
  docker-compose.yml
  folder1/
    foo.yml
    bar.yml
  folder2/
    foo.yml
    other.yml
How can I specify which folder of manifests to run when running compose?
I hope I understood your question correctly.
You could use the -f flag:
docker-compose -f docker-compose1.yml
Edit
To answer your comment: no, you can't point docker-compose at a directory with a single command. You need to specify file paths, not a directory path.
What you could do is create a shell script like:
#!/bin/bash
DOCKERFILE_PATH=$DOCKER_PATH
for dockerfile in $DOCKERFILE_PATH
do
  if [[ -f $dockerfile ]]; then
    # run compose for each file; append the subcommand you need (e.g. up -d)
    docker-compose -f "$dockerfile" up -d
  fi
done
By calling it like DOCKER_PATH=dockerfiles/* ./script.sh, it will execute docker-compose -f with every file in DOCKER_PATH.
My best option was to have a run.bash file in the base directory of my project.
I then put all my compose files in say compose/ directory, then run it with this command:
docker-compose $(./run.bash) up
run.bash:
#!/usr/bin/env bash
PROJECT_NAME='projectname' # need to set manually since it normally uses current directory as project name
DOCKER_PATH=$PWD/compose/*
MANIFESTS=' '
for dockerfile in $DOCKER_PATH
do
  MANIFESTS="${MANIFESTS} -f $dockerfile"
done
MANIFESTS="${MANIFESTS} -p $PROJECT_NAME"
echo $MANIFESTS
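For two hypothetical files a.yml and b.yml under compose/, the script would print something like:
-f /path/to/project/compose/a.yml -f /path/to/project/compose/b.yml -p projectname
which the outer docker-compose $(./run.bash) up invocation then consumes as its flag list.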
You can pass multiple files to docker-compose by using -f for each file. For example if you have N files, you can pass them as follows:
docker-compose -f file1 -f file2 ... -f fileN [up|down|pull|...]
If you have files in sub-directories and you want to pass them all to docker-compose recursively, you can use the following (the grep pattern matches both .yml and .yaml names):
docker-compose $(for i in $(find . -type f | grep -E '\.ya?ml$')
do
  echo -f $i
done
) [up|down|pull|...]
