docker-compose and passed-in shell env variables (-e option) - docker

I have a docker-compose.yml that references ${host_repo_dir}. Trying to run the service defined there as follows:
docker-compose run -e host_repo_dir=$(pwd) http-api
Output:
WARNING: The host_repo_dir variable is not set. Defaulting to a blank string.
What gives? I think I'm following https://docs.docker.com/compose/reference/run/.
my environment:
docker version: 18.09.3
docker-compose version: 1.24.0
docker for windows, accessed from WSL
Is this a WSL problem?
Update: setting the env vars in advance sort-of works, but seemingly not always:
echo "host repo dir: ${host_repo_dir}, repo name: ${repository_name}, docker repo: ${docker_repository}"
docker-compose run \
  -e host_repo_dir \
  -e repository_name \
  -e docker_repository \
  ${repository_name} || ( cd ${previous_directory} ; exit 3 )
output:
host repo dir: /c/Users/muellmi1/projects/payoff-2015/http-api, repo name: http-api, docker repo: localhost.localdomain
WARNING: The docker_repository variable is not set. Defaulting to a blank string.
docker_repository was set beforehand from $1 at the beginning of this script. I now understand that the variables must be exported beforehand, not just set. So what worked for me was running export [VARNAME] for each of the variables later passed to docker-compose.

Assuming host_repo_dir is already set and exported, you can do:
docker-compose run -e host_repo_dir http-api
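For completeness, a minimal end-to-end sketch (values are illustrative, reusing the variables from the question): export the variables in the calling shell, then both the ${...} substitution in docker-compose.yml and the bare -e VAR form can see them.
# export so docker-compose can substitute ${host_repo_dir} in the compose file
# and forward it into the container with a bare -e
export host_repo_dir=$(pwd)
export repository_name=http-api
docker-compose run -e host_repo_dir ${repository_name}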

Related

Inject SSH key into a Docker container

I am trying to find a "global" solution for injecting an SSH key into a container. I know that there are several solutions, including Docker BuildKit and so on, but I don't want to build an image and inject the SSH key that way. I want to inject the SSH key into an existing image using Docker Compose.
I use the following docker compose file:
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa
The init.sh is as follows:
#!/bin/bash
eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
  mkdir /root/.ssh
  ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa
If I run docker compose with the command parameter
bash -c "/root/init.sh && python3 /root/my_python.py", then SSH authentication to the appropriate remote host ($MANAGED_HOST) does not work.
An agent process is running:
root 8 1 0 12:50 ? 00:00:00 ssh-agent -s
known_hosts is OK:
root@c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....
and the agent is running, but the private key is not added:
root@c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.
Now, if I log in the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one from the command line, then the SSH authentication to the appropriate remote host ($MANAGED_HOST) is working?!?
Any idea, how I can get it working by using the docker compose?
It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.
#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
  mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
  ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
  cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
  chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$@"
A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. That will get passed the CMD as arguments, and the commented exec "$@" line at the end of the file runs that as a command. You'd set this up in your image's Dockerfile like:
FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# Can use any Dockerfile syntax
CMD ["python3", "/root/my_python.py"]
In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but when these run as a subprocess they don't get propagated back out to the host process. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to cause those environment variables to be set in the context of the parent shell:
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
The exec replaces the shell wrapper with the Python script, which you generally want. Note that the Python process will also wind up being the parent process of ssh-agent, which could surprise it if the agent happens to exit.
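A small illustration of why sourcing (rather than running init.sh as a subprocess) matters here, as a sketch not taken from the original post: variables exported inside a child shell are lost when it exits, while a sourced script sets them in the current shell.
# as a subprocess: the SSH_AUTH_SOCK set by ssh-agent inside init.sh is gone afterwards
sh /root/init.sh
echo "after subprocess: ${SSH_AUTH_SOCK:-unset}"   # prints "unset"
# sourced with '.': the agent's variables persist in this shell,
# so a later ssh/ssh-add call can actually reach the agent
. /root/init.sh
echo "after sourcing: ${SSH_AUTH_SOCK:-unset}"     # prints the socket path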

How to pass argument for a configuration file in JuPyterhub's deployment?

I want to install envkey in my Docker image, which requires a key-value pair. I have the key-value pair with me, but I am unable to figure out how to install it in my Docker image using those arguments and then deploy the same on JupyterHub.
I tried reading other deployments of mine which use envkey. Here is how it goes:
1. I have a Makefile and I run the command sudo make dev config=aviral.cfg
2. The dev command in the Makefile is as follows:
dev:
	docker build -t $(IMAGE) -f Dockerfile.dev . && docker tag $(IMAGE) $(ALIAS)
	@echo "\nCreate docker container.."
	CONFIG=$(config) IMAGE=$(IMAGE) docker-compose -f docker-compose.yml up -d --scale test=0 --scale airflow_worker=0
	@echo "\n$(GREEN)Done.$(NO_COLOR)\n"
	@echo "Try airflow at http://localhost:8080."
	@echo "and flower at http://localhost:5555."
The docker-compose file is:
airflow_worker:
  image: ${IMAGE}:latest
  restart: always
  depends_on:
    - airflow_scheduler
  # ports:
  #   - 8793:8793
  # environment:
  #   - GOOGLE_APPLICATION_CREDENTIALS=/gcloud/cloud.json
  env_file:
    - ${CONFIG}
  command: worker
As you can see, the env_file is passed on.
I am unable to figure out how to do the same in JupyterHub.
The Helm chart is here: https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz. And my config is:
proxy:
  secretToken: "yada_yada"
singleuser:
  image:
    name: yada_yada.dkr.ecr.ap-south-1.amazonaws.com/demo
    tag: 12h
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''> aviral.py']
  imagePullSecret:
    enabled: true
    registry: yada_yada.dkr.ecr.ap-south-1.amazonaws.com
    username: aws
    email: aviral@yada_yada.com
    password: yada_yada
In my config file, I pass variables as:
ENVKEY=my_personal_envkey
I expect to have my configs passed into the Docker image, or perhaps to write a proper Makefile for this. As of now, I am facing this error:
Step 19/32 : RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
---> Running in 35bc1cf0e1c8
envkey-source 1.2.9 Quick Install
Copyright (c) 2019 Envkey Inc. - MIT License
https://github.com/envkey/envkey-source
Downloading envkey-source binary for linux-amd64
Downloading tarball from https://github.com/envkey/envkey-source/releases/download/v1.2.9/envkey-source_1.2.9_linux_amd64.tar.gz
envkey-source is installed in /usr/local/bin
Installation complete. Info:
bash: line 97: 29 Segmentation fault envkey-source -h
The command '/bin/sh -c curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash' returned a non-zero code: 139
Although this question alone should be enough to give you the picture, for the sake of context here are some related questions:
1. How do I make jupyter-hub access my private docker image repository?
2. Unable to run a lifecycle command from config.yaml while deploying jupyterhub
3. How to have file written automatically in the startup folder when a new user signs up/in on JuPyter hub?
You probably get this error because the install.sh script tries to place the envkey-source binary under the /usr/local/bin directory and then run envkey-source -h, which fails. Check whether the user (if non-root) has permission to do that, and whether the /usr/local/bin directory exists in the container image.
Hope it helps!
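If you want to confirm that theory before changing anything, a hypothetical check (the base image name is a placeholder) could be:
# does /usr/local/bin exist, and is it writable for the build user?
docker run --rm <your-base-image> sh -c 'id && ls -ld /usr/local/bin'
# re-run the failing install step interactively to see where it breaks
docker run --rm -it <your-base-image> bash -c 'curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash'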

Get Host IP in Dockerfile

I have a line in my Dockerfile:
&& echo "xdebug.remote_host=192.168.0.216" >> /usr/local/etc/php/conf.d/xdebug.ini`
I want to make the IP dynamic. How would I get the host IP in there?
You need to use build-time variables (--build-arg).
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.
So, Dockerfile is modified to:
ARG IP_ADDRESS
RUN ... && echo "xdebug.remote_host=$IP_ADDRESS" >> /usr/local/etc/php/conf.d/xdebug.ini
And you just need to define build-time variable IP_ADDRESS during image building:
$ docker build --build-arg IP_ADDRESS=<IP_ADDRESS> .
If you use docker-compose:
1. Create file .env with the following content:
IP_ADDRESS="<IP_ADDRESS>"
You can generate it each time like this (example is for a Linux machine):
IP_ADDRESS=$(ip a | grep <interface> | grep inet | awk '{print $2}' | awk -F'/' '{print $1}')
echo "IP_ADDRESS=$IP_ADDRESS" > .env
2. Use the following docker-compose.yaml to build your image:
version: '3'
services:
  myservice:
    build:
      context: .
      args:
        IP_ADDRESS: ${IP_ADDRESS}
3. Build the above image:
docker-compose build
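To sanity-check that the value actually reached the image, a hypothetical verification step (the service name and xdebug.ini path are taken from the examples above):
docker-compose build
docker-compose run --rm myservice grep remote_host /usr/local/etc/php/conf.d/xdebug.ini
# expected output: xdebug.remote_host=<the IP from .env>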
There's no simple built-in way to get the Docker host IP (unless you are using Docker for Mac).
Entrypoint
It's best not to set a Docker host IP at build time, otherwise the image will be tied to the host it was built on and won't work anywhere else.
An ENTRYPOINT can be used to do the config setup based on an environment variable and then pass through all commands to the container:
#!/bin/sh
if [ -n "$IP_ADDRESS" ]; then
  echo "xdebug.remote_host=$IP_ADDRESS" >> /usr/local/etc/php/conf.d/xdebug.ini
else
  echo "No environment variable IP_ADDRESS set for xdebug"
fi
exec "$@"
Then run with:
docker run -e IP_ADDRESS=192.168.51.5 me/app-debug
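A quick way to check that the entrypoint wrote the setting (assuming me/app-debug uses the entrypoint above, which execs whatever command you pass):
docker run --rm -e IP_ADDRESS=192.168.51.5 me/app-debug cat /usr/local/etc/php/conf.d/xdebug.ini
# the last line should be: xdebug.remote_host=192.168.51.5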
Docker for Mac
On Docker for Mac 17.12+ you can use the host name docker.for.mac.host.internal
Xdebug
Another option is setting xdebug.remote_connect_back = 1 so you don't need a specific remote_host for xdebug.
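For example, a hypothetical one-liner in the entrypoint (or at image build time) that enables connect-back instead of hard-coding an address:
# let Xdebug derive the client IP from the incoming HTTP request
echo "xdebug.remote_connect_back=1" >> /usr/local/etc/php/conf.d/xdebug.ini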
Build
Nicolay's answer covers the build-time setup.

Docker-compose not passing environment variable to container

I am using Docker 17.04.0-ce, build 4845c56 with docker-compose 1.12.0, build b31ff33 on Ubuntu 16.04.2 LTS. I simply want to pass an environment variable and display it from my script running in a container. I am doing this according to the documentation https://docs.docker.com/compose/compose-file/#environment . The problem is that the variable is not passed to the container.
My docker-compose.yml file:
env-file-test:
  build: .
  dockerfile: Dockerfile
  environment:
    - DEMO_VAR
My Dockerfile:
FROM alpine
COPY docker-start.sh /
CMD ["/docker-start.sh"]
And the docker-start.sh file:
#!/bin/sh
echo "DEMO_VAR Var Passed in: $DEMO_VAR"
I try to set the variable in my current terminal session and pass it to the container:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo docker-compose up
Starting envfiletest_env-file-test_1
Attaching to envfiletest_env-file-test_1
env-file-test_1 | DEMO_VAR Var Passed in:
envfiletest_env-file-test_1 exited with code 0
So you can see that the variable DEMO_VAR is empty!
I also tried using variables in docker-compose.yml like this: DEMO_VAR=${DEMO_VAR} but then when I run sudo docker-compose up, I get a warning: "WARNING: The DEMO_VAR variable is not set. Defaulting to a blank string.".
What am I doing wrong? What should I do to pass the variable to the container?
I found a solution. Answering my own question...
The problem was with the sudo command. It turned out that it does not pass environment variables by default. There are some possible solutions:
Use sudo -E. Demo:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo -E docker-compose up
env-file-test_1 | DEMO_VAR Var Passed in: aabbdd
Use sudo VAR=value:
sudo DEMO_VAR=$DEMO_VAR docker-compose up
Add environment variables to the sudoers file (https://stackoverflow.com/a/8636711)
Use docker without sudo (https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo)
You should use ENV in your Dockerfile, and avoid export.
See the doc
https://docs.docker.com/engine/reference/builder/#env

How to get all Travis CI environment variables, excluding the system defaults?

I want to pass into docker run all the environment variables I've configured in the Travis web UI.
I'm able to run env > .env to save them to a file and then pass that into docker via --env-file .env.
Unfortunately, this also overrides system ones such as PATH that interfere with the container.
I'm able to filter out PATH using env | grep -vE "^(PATH=)" > .env but I'm wondering whether there's a way to get just the Travis ones?
Here's my .travis.yml:
language: bash
sudo: required
services:
  - docker
before_install:
  - env | grep -vE "^(PATH=)" > .env
install:
  - docker build -t mycompany/myapp .
script:
  - docker run -i --env-file .env mycompany/myapp nosetests
after_success:
  - echo "SUCCESS!"
I don't recommend passing all your environment vars, but if you whitelist them by prefixing them with something like, say, TRAVIS_, you could do something like:
export TRAVIS_WUT=foo
export TRAVIS_FOO=asdf
docker run $(printenv | grep -E '^TRAVIS_' | sed 's/TRAVIS_/-e /g')
# would run -> docker run -e FOO=asdf -e WUT=foo something
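A variant of the same idea (a sketch, not part of the original answer) that writes the whitelisted variables to a file, so it slots into the --env-file .env flow from the question:
# keep only TRAVIS_-prefixed vars, strip the prefix, and hand them to docker
printenv | grep -E '^TRAVIS_' | sed 's/^TRAVIS_//' > .env
docker run -i --env-file .env mycompany/myapp nosetests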
