I am trying to find a "global" solution for injecting an SSH key into a container. I know that there are several solutions including docker build kit and so on...but I don't want to build an image and inject the SSH key. I want to inject the SSH key by using an existing image with docker compose.
I use the following docker compose file:
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa
The init.sh is as follows:
#!/bin/bash
eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
  mkdir /root/.ssh
  ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa
If I run docker compose with the command
bash -c "/root/init.sh && python3 /root/my_python.py", then SSH authentication to the remote host ($MANAGED_HOST) does not work.
An agent process is running:
root 8 1 0 12:50 ? 00:00:00 ssh-agent -s
known_hosts is OK:
root@c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....
and the agent is running, but the private key is not added:
root@c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.
Now, if I log in to the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one on the command line, SSH authentication to the remote host ($MANAGED_HOST) works?!?
Any idea how I can get this working with docker compose?
It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.
#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
  mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
  ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
  cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
  chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$@"
A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. It gets passed the CMD as arguments, and the commented-out exec "$@" line at the end of the script runs that as a command. You'd set this up in your image's Dockerfile like:
FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# Can use any Dockerfile syntax
CMD ["python3", "/root/my_python.py"]
In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but because the script runs as a subprocess those variables don't propagate back to the parent process. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to have those environment variables set in the context of the parent shell:
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
The exec replaces the shell wrapper with the Python script, which you generally want. The Python process will also wind up as the parent of ssh-agent, which could surprise your process if the agent happens to exit.
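For reference, here is the question's compose file with only the command changed to source the script (a sketch; the image placeholder and paths are unchanged from the question):
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa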
I am using Docker 17.04.0-ce, build 4845c56 with docker-compose 1.12.0, build b31ff33 on Ubuntu 16.04.2 LTS. I simply want to pass an environment variable and display it from my script running in a container. I am doing this according to the documentation https://docs.docker.com/compose/compose-file/#environment . The problem is that the variable is not passed to the container.
My docker-compose.yml file:
env-file-test:
  build: .
  dockerfile: Dockerfile
  environment:
    - DEMO_VAR
My Dockerfile:
FROM alpine
COPY docker-start.sh /
CMD ["/docker-start.sh"]
And the docker-start.sh file:
#!/bin/sh
echo "DEMO_VAR Var Passed in: $DEMO_VAR"
I try to set the variable in my current terminal session and pass it to the container:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo docker-compose up
Starting envfiletest_env-file-test_1
Attaching to envfiletest_env-file-test_1
env-file-test_1 | DEMO_VAR Var Passed in:
envfiletest_env-file-test_1 exited with code 0
So you can see that the variable DEMO_VAR is empty!
I also tried using variables in docker-compose.yml like this: DEMO_VAR=${DEMO_VAR} but then when I run sudo docker-compose up, I get a warning: "WARNING: The DEMO_VAR variable is not set. Defaulting to a blank string.".
What am I doing wrong? What should I do to pass the variable to the container?
I found a solution. Answering my own question...
The problem was with the sudo command. It turned out that it does not pass environment variables by default. There are some possible solutions:
Use sudo -E. Demo:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo -E docker-compose up
env-file-test_1 | DEMO_VAR Var Passed in: aabbdd
Use sudo VAR=value:
sudo DEMO_VAR=$DEMO_VAR docker-compose up
Add environment variables to the sudoers file (https://stackoverflow.com/a/8636711); see the sketch after this list
Use docker without sudo (https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo)
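For the sudoers option, a minimal sketch of the entry (edit with visudo; DEMO_VAR matches this example):
Defaults env_keep += "DEMO_VAR"
With that in place, a plain sudo docker-compose up sees the variable.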
You should use ENV in your Dockerfile and avoid export.
See the doc:
https://docs.docker.com/engine/reference/builder/#env
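For reference, a minimal sketch of that suggestion applied to the question's Dockerfile (note the trade-off: ENV bakes the value in at build time, so it no longer comes from the host environment at docker-compose up time):
FROM alpine
# value fixed at build time (illustrative)
ENV DEMO_VAR=aabbdd
COPY docker-start.sh /
CMD ["/docker-start.sh"]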
I want to pass into docker run all the environment variables I've configured in the Travis web UI.
I'm able to run env > .env to save them to a file and then pass that into docker via --env-file .env.
Unfortunately, this also overrides system ones such as PATH that interfere with the container.
I'm able to filter out PATH using env | grep -vE "^(PATH=)" > .env but I'm wondering whether there's a way to get just the Travis ones?
Here's my .travis.yml:
language: bash
sudo: required
services:
  - docker
before_install:
  - env | grep -vE "^(PATH=)" > .env
install:
  - docker build -t mycompany/myapp .
script:
  - docker run -i --env-file .env mycompany/myapp nosetests
after_success:
  - echo "SUCCESS!"
I don't recommend passing all your environment vars, but if you whitelist them by prefixing them with something like, say, TRAVIS_ you could do something like:
export TRAVIS_WUT=foo
export TRAVIS_FOO=asdf
docker run $(printenv | grep -E '^TRAVIS_' | sed 's/TRAVIS_/-e /g') something
# would run -> docker run -e FOO=asdf -e WUT=foo something
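One caveat: the unquoted $( ... ) substitution will word-split any value that contains spaces. A more defensive sketch in plain bash (same TRAVIS_ whitelist; something is still the placeholder image name):
args=()
while IFS= read -r line; do
  # strip the TRAVIS_ prefix but keep the NAME=value pair
  args+=(-e "${line#TRAVIS_}")
done < <(printenv | grep -E '^TRAVIS_')
docker run "${args[@]}" something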
I have a docker-compose.yml file like this:
version: '2'
services:
  s1:
    build: .
    environment:
      HELLO: world
And a Dockerfile like this:
FROM ubuntu
RUN /bin/bash -c 'echo "$HELLO" > /txt'
How can I end up with an image with a txt file holding the text world in it?
Right now when I test the given example, the file is empty!
[UPDATE]
If I put the environment variable in the Dockerfile, it works just fine, which makes me think it's a docker-compose issue!
FROM ubuntu
ENV HELLO=world
RUN /bin/bash -c 'echo "$HELLO" > /txt'
Environment variables defined in docker-compose.yml are not accessible at build time.
To achieve that, you have to use the args option of the build field (only supported in the version 2 file format of docker-compose). It lets you define build arguments.
https://docs.docker.com/compose/compose-file/#args
Find below how to use it:
docker-compose.yml
services:
  s1:
    build:
      context: .
      args:
        HELLO: world
Also, note that you have to declare an ARG instruction with the same name in your Dockerfile.
FROM ubuntu
ARG HELLO
RUN echo "$HELLO" > /txt
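To verify, build and run the service and print the file (a sketch using the service name from the compose file above):
docker-compose build s1
docker-compose run --rm s1 cat /txt
# prints: world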
I want to set the $PS1 environment variable in the container. It helps me identify multilevel or complex docker environment setups. Currently the container prompts with:
root@container-id#
If I can change it as follows, I can identify the container by looking at the prompt itself.
[Level-1]root@container-id#
I experimented with exporting $PS1 in my own image (Dockerfile), in a .profile file, etc., but the change is not reflected.
I had the same problem but in docker-compose context.
Here is how I managed to make it work:
# docker-compose.yml
version: '3'
services:
  my_service:
    image: my/image
    environment:
      - "PS1=$$(whoami):$$(pwd) $$ "
Just pass the PS1 value as an environment variable in the docker-compose.yml configuration file.
Notice how the dollar signs need to be escaped to prevent docker-compose from interpolating the values (documentation).
This Dockerfile sets PS1 by doing:
RUN echo 'export PS1="[\u@docker] \W # "' >> /root/.bash_profile
We use a similar technique for tracking inputs and outputs in complex container builds.
https://github.com/ianmiell/shutit/blob/master/shutit_global.py#L1338
This line represents the product of hard-won experience dealing with docker/(p)expect combinations:
"SHUTIT_BACKUP_PS1_%s=$PS1 && PS1='%s' && unset PROMPT_COMMAND"
Backing up the prompt is handy if you want to revert, PS1='%s' sets the new prompt, and unsetting PROMPT_COMMAND removes any nasty surprises, such as the terminal being reset, for the expect session.
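Expanded into a plain shell session, the same idea looks like this (a sketch; shutit fills in the %s placeholders at run time):
SHUTIT_BACKUP_PS1_0="$PS1" && PS1='shutit# ' && unset PROMPT_COMMAND
# ... expect-driven interaction runs against the predictable prompt ...
PS1="$SHUTIT_BACKUP_PS1_0"   # revert to the saved prompt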
If the question is about how to ensure it's set when you run the container up (as opposed to building), then you may need to add something to your .bashrc / .profile files depending on how you run up your container. As far as I know there's no way to ensure it with a dockerfile directive and make it persist.
I normally create /home/USER/.bashrc or /root/.bashrc, depending on who the USER of the Dockerfile is. That works well. I've tried
ENV PS1 '# '
but that never worked for me.
Here's a way to set the PS1 when you run the container:
docker run -it \
python:latest \
bash -c "echo \"export PS1='[python:latest] \w$ '\" >> ~/.bashrc && bash"
I made a little wrapper script, to be able to run any image with my custom prompt:
#!/usr/bin/env bash
# ~/bin/docker-run
set -eu
image=$1
docker run -it \
  -v "$(pwd)":/opt/app \
  -w /opt/app "${image}" \
  bash -c "echo \"export PS1='[${image}] \w$ '\" >> ~/.bashrc && bash"
In Debian 9, for running bash, this worked:
RUN echo 'export PS1="[\$ENV_VAR] \W # "' >> /root/.bashrc
It's generally running as root, and I generally know I am in docker, so I wanted a prompt that indicated what the container was; hence the environment variable. And I guess the bash I use loads .bashrc preferentially.
Try setting the environment variable using docker run options.
Example:
docker run \
-ti \
--rm \
--name ansibleserver-debug \
-w /githome/axel-ansible/ \
-v /home/lordjea/githome/:/githome/ \
-e "PS1=DEBUG$(pwd)# " \
lordjea/priv:311 bash
docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
Options:
...
-e, --env list Set environment variables
...
You should set that in .profile, not .bashrc.
Just open .profile in your root or home directory and replace PS1='\u@\h:\w\$ ' with PS1='\e[33;1m\u@\h: \e[31m\W\e[0m\$ ' or whatever you want.
Note that you need to restart your container.
On my Mac I have an alias named lxsh that will start a bash shell using the ubuntu image in my current directory (details). To make the shell's prompt change, I mounted a host file onto /root/.bash_aliases. It's a dirty hack, but it works. The full alias:
alias lxsh='echo "export PS1=\"lxsh[\[$(tput bold)\]\t\[$(tput sgr0)\]\w]\\$\[$(tput sgr0)\] \"" > $TMPDIR/a5ad217e-0f2b-471e-a9f0-a49c4ae73668 && docker run --rm --name lxsh -v $TMPDIR/a5ad217e-0f2b-471e-a9f0-a49c4ae73668:/root/.bash_aliases -v $PWD:$PWD -w $PWD -it ubuntu'
The below solution assumes that you've used Dockerfile USER to set a non-root Linux user for Bash.
What you might have tried without success:
ENV PS1='[docker]$' ## may not work
Using ENV to set PS1 can fail because the value can be overridden by default settings in a preexisting .bashrc when an interactive shell is started. Some Linux distributions are opinionated about PS1 and set it in an initial .bashrc for each user (Ubuntu does this, for example).
The fix is to modify the Dockerfile to set the desired value at the end of the user's .bashrc, overriding any earlier settings in the script.
FROM ubuntu:20.04
# ...
USER myuser ## the username
RUN echo "PS1='\n[ \u#docker \w ]\n$ '" >>.bashrc