I was wondering if there is a way to use environment variables taken from the host where the container is deployed, instead of the ones taken from where the docker stack deploy command is executed. For example, imagine the following docker-compose.yml launched on a three-node Docker Swarm cluster:
version: '3.2'
services:
  kafka:
    image: wurstmeister/kafka
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    deploy:
      mode: global
    environment:
      KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=${JMX_HOSTNAME} -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=1099"
The JMX_HOSTNAME should be taken from the host where the container is actually deployed and should not be the same value for every container.
Is there a correct way to do this?
Yes, this works when you combine two concepts:
Swarm node labels, of which Hostname is one of the built-in ones.
Swarm service go templates, which also work in stack files.
This would set the ENV value of DUDE, for each container, to the hostname of the node it's running on:
version: '3.4'
services:
  nginx:
    image: nginx
    environment:
      DUDE: "{{.Node.Hostname}}"
    deploy:
      replicas: 3
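Applied to the original Kafka stack file, a minimal sketch would be (only the relevant keys shown, and the other JMX flags omitted; the template takes the place of the JMX_HOSTNAME variable):
version: '3.4'
services:
  kafka:
    image: wurstmeister/kafka
    environment:
      KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname={{.Node.Hostname}} -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.rmi.port=1099"
    deploy:
      mode: global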
It works if you run the docker command through env:
env JMX_HOSTNAME="${JMX_HOSTNAME}" docker stack deploy -c docker-compose.yml mystack
Credit to the GitHub issue that pointed me in the right direction.
I found another way that helps when you have many environment variables. The same method also works with docker-compose up:
sudo -E docker stack deploy -c docker-compose.yml mystack
instead of
env foo="${foo}" bar="${bar}" docker stack deploy -c docker-compose.yml mystack
From the sudo man page, the description of -E:
-E, --preserve-env
Indicates to the security policy that the user wishes to
preserve their existing environment variables. The
security policy may return an error if the user does not
have permission to preserve the environment.
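For example (a sketch; foo and bar stand in for whatever variables your stack file references):
# Export the variables once in your shell...
export foo=somevalue
export bar=othervalue
# ...then -E preserves them into the root environment that docker reads
sudo -E docker stack deploy -c docker-compose.yml mystack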
Related
I have a docker stack file that is deployed across my swarm, which has many nodes.
For a specific reason one of the nodes (let's call it node A) has a connection to the outside (internet) and the others don't, so when deploying a container on the other nodes I need to set the HTTP_PROXY environment variable.
Question is: how do I set this ONLY on the nodes with a specific label (and not on node A)?
docker-compose.yml
version: '3.6'
services:
  app:
    image: my_image
    ports:
      - "8033:8000"
    environment:
      - HTTP_PROXY=proxy.server.com:3128
      - HTTPS_PROXY=proxy.server.com:3128
    deploy:
      replicas: 10
      placement:
        constraints: [node.labels.app_server == app_server]
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
The only alternative so far would be to deploy the stack with different variables and place a constraint on deployment (roughly as sketched below), but I am trying to avoid that.
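Roughly, that alternative would look like this (a sketch; the internet label and its values are assumptions):
services:
  app_direct:
    image: my_image
    deploy:
      placement:
        constraints: [node.labels.internet == true]
  app_proxied:
    image: my_image
    environment:
      - HTTP_PROXY=proxy.server.com:3128
      - HTTPS_PROXY=proxy.server.com:3128
    deploy:
      placement:
        constraints: [node.labels.internet == false]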
How about setting those environment variables in the necessary hosts and passing them through to your container like so:
version: '3'
services:
  app:
    image: ubuntu
    environment:
      - HTTP_PROXY
      - HTTPS_PROXY
They will only be set in the container if they are also set in the host environment. See documentation here.
Example/More Info:
# When HTTP_PROXY is set in the host environment, value is passed through to the container.
$ HTTP_PROXY=test docker-compose run app env | grep -i proxy
Creating some-nodes-only_app_run ... done
HTTP_PROXY=test
# When HTTP_PROXY is not set in the host environment, nothing is set in container.
$ docker-compose run app env | grep -i proxy
Creating some-nodes-only_app_run ... done
You could also write an entrypoint script to set the proxy when needed. I would recommend checking the connectivity of the container and then falling back to a proxy if necessary (see the sketch after the example run below), but if you want to do it based on the hostname you could use something like this:
entrypoint.sh
#!/bin/bash
PROXY_HOSTNAME=some-host
if [ -f /etc/host_hostname ]; then
  HOST_HOSTNAME=$(cat /etc/host_hostname)
  if [ "$HOST_HOSTNAME" = "$PROXY_HOSTNAME" ]; then
    echo "Setting fallback http proxy"
    export HTTP_PROXY=${FALLBACK_HTTP_PROXY}
  fi
fi
exec "$@"
Dockerfile
# test:latest
FROM ubuntu
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
docker-compose.yml
version: '3'
services:
  app:
    image: test:latest
    environment:
      - FALLBACK_HTTP_PROXY=test
    volumes:
      - /etc/hostname:/etc/host_hostname:ro
Example run:
$ docker-compose run app env | grep -i http_proxy
Creating some-nodes-only_app_run ... done
Setting fallback http proxy
FALLBACK_HTTP_PROXY=test
HTTP_PROXY=test
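For the connectivity-based fallback mentioned above, a rough sketch of the entrypoint could be (the probe URL, timeout, and the assumption that curl exists in the image are all mine):
#!/bin/bash
# Probe a known external URL; if it is unreachable, fall back to the proxy
if ! curl --silent --fail --max-time 5 https://example.com > /dev/null 2>&1; then
  echo "No direct connectivity, setting fallback http proxy"
  export HTTP_PROXY=${FALLBACK_HTTP_PROXY}
  export HTTPS_PROXY=${FALLBACK_HTTP_PROXY}
fi
exec "$@"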
So, I'm running into an issue. Say you have a simple Docker Compose file like this:
version: "3.7"
services:
web:
image: repo.hostname.com/web:latest
environment:
port: 8080
ports:
- 8080:8080
Then, I'd run the following command to apply it:
docker stack deploy --compose-file path/to/compose.yml stack_name
Now, here's my problem. Once I've created the services via stack deploy, how do I UPDATE an existing service via the compose file?
If I just change the environment variable of "port" from "8080" to "8000" and rerun stack deploy with the new compose file, it doesn't pick up the change.
And, no, I can't use Kubernetes for reasons that are way out of the scope of this post.
I am trying to setup a Docker-based Jenkins instance. Essentially, I run the jenkins/jenkins:lts image as a container and mount a data volume to persist the data Jenkins will create.
Now, what I would like to do is share the host's ssh keys with this Jenkins instance. It's probably due to my limited Docker knowledge, but my problem is I don't know how I can mount additional files/directories to my volume, and Jenkins requires that I put the ssh keys within /var/jenkins_home/.ssh.
I tried naively creating the directories in Dockerfile and then mounting them with docker-compose. It failed, as you might expect, since the volume is the one containing Jenkins' home directory data, not the Jenkins container itself.
I have the following docker-compose.yml (not working, for the reasons mentioned above):
version: '3.1'
services:
  jenkins:
    restart: always
    build: ./jenkins
    environment:
      VIRTUAL_HOST: ${NGINX_VIRTUAL_HOST}
      VIRTUAL_PORT: 8080
      JAVA_OPTS: -Djenkins.install.runSetupWizard=false
      TZ: America/New_York
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_data:/var/jenkins_home
    networks:
      - web
      - proxy
    healthcheck:
      test: ["CMD", "curl --fail http://${NGINX_VIRTUAL_HOST}/ || exit 1"]
      interval: 1m
      timeout: 10s
      retries: 3
    secrets:
      - host_ssh_key

volumes:
  jenkins_data:

networks:
  web:
    driver: bridge
  proxy:
    external:
      name: nginx-proxy

secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
My question is: is there any way I could get this secret into my data volume?
I know this is a fairly old thread, but a lot of people get stuck on this, including me, and the claim that it can't be done is simply not true. You can indeed use secrets with docker-compose without using Swarm, provided it's a local machine or the secrets file is mounted on the host. Not saying this is secure or desirable, just that it can be done. One of the best explanations of the several ways this is possible is this blog:
Using Docker Secrets during Development
Below is an example of part of a docker-compose file used to add an api key to a Spring application. The key is then available at /run/secrets/captcha-api-key inside the Docker container. Docker compose "fakes" it by literally binding the file as a mount, which can then be accessed in whatever way. It's not secure, as in the file is still there and visible to all with access to /run/secrets, but it's definitely doable as a workaround. Great for dev servers, but I would not do it in production, though:
version: '3.6'
services:
  myapp:
    image: mmyapp
    restart: always
    secrets:
      - captcha-api-key

secrets:
  captcha-api-key:
    file: ./captcha_api_key.txt
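To verify, you can read the mounted file from inside the container (a sketch, assuming the image has a shell):
# Prints the contents of ./captcha_api_key.txt from inside the container
$ docker-compose run myapp cat /run/secrets/captcha-api-key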
EDIT: Besides that, one can simply run a one-node swarm, which only costs a tiny bit more in resources, and use secrets the way they are intended. Provided the images are already built, docker stack deploy -c mydocker-composefile.yml mystackname will do mostly the same as old docker-compose did. Note though that the yml file must be written against version 3 or higher of the compose file format.
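A minimal sketch of that one-node setup:
# Turn the local engine into a single-node swarm, then deploy the stack
docker swarm init
docker stack deploy -c mydocker-composefile.yml mystackname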
Here is a short but concise write-up on compose vs swarm: The Difference Between Docker Compose And Docker Stack
Mount the secret as given below and try:
secrets:
  - source: host_ssh_key
    target: /var/jenkins_home/.ssh/id_rsa
    mode: 0600
It can't be done. Secrets will only work with docker swarm; docker-compose is unable to use secrets.
More details in this GitHub issue.
I have a service deployed to my Docker Swarm cluster as a global service (ELK Metricbeat).
I want each instance of this service to have the same hostname as the node (host) it is running on.
In other words, how can I achieve in the yml file the same result as:
docker run -h `hostname` elastic/metricbeat:5.4.1
this is my yml file:
metricbeat:
  image: elastic/metricbeat:5.4.1
  command: metricbeat -e -c /etc/metricbeat/metricbeat.yml -system.hostfs=/hostfs
  hostname: '`hostname`'
  volumes:
    - /proc:/hostfs/proc:ro
    - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
    - /:/hostfs:ro
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - net
  user: root
  deploy:
    mode: global
I have tried:
hostname: '`hostname`'
hostname: '${hostname}'
but no success.
Any solution?
Thank you in advance.
For anyone coming here:
services:
  myservice:
    hostname: "{{.Node.Hostname}}-{{.Service.Name}}"
No need to alter the entrypoint (at least on swarm deploy).
I resolved the issue by mounting the host's hostname file under /etc/nodehostname and changing the service container to use an entrypoint that reads the file and replaces a variable (name) in metricbeat.yml:
docker-entrypoint.sh
#!/bin/sh
# Read the hostname bind-mounted from the host and substitute it into the config
export NODE_HOSTNAME=$(cat /etc/nodehostname)
envsubst '$NODE_HOSTNAME' </etc/metricbeat/metricbeat.yml.tpl > /etc/metricbeat/metricbeat.yml
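The corresponding bind mount in the stack file would look roughly like this (a sketch; the read-only flag is an assumption):
metricbeat:
  volumes:
    - /etc/hostname:/etc/nodehostname:ro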
In my docker-compose.yml file, I have the following. However the container does not pick up the hostname value. Any ideas?
dns:
  image: phensley/docker-dns
  hostname: affy
  domainname: affy.com
  volumes:
    - /var/run/docker.sock:/docker.sock
When I check the hostname in the container it does not pick up affy.
As of docker-compose version 3.0 and later, you can just use the hostname key:
version: "3.0"
services:
yourservicename:
hostname: your-name
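A quick check (assuming the service above):
$ docker-compose up -d
$ docker-compose exec yourservicename hostname
your-name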
I found that the hostname was not visible to other containers when using docker run. This turns out to be a known issue (perhaps more a known feature), with part of the discussion being:
We should probably add a warning to the docs about using hostname. I think it is rarely useful.
The correct way of assigning a hostname - in terms of container networking - is to define an alias like so:
services:
  some-service:
    networks:
      some-network:
        aliases:
          - alias1
          - alias2
Unfortunately this still doesn't work with docker run. The workaround is to assign the container a name:
docker-compose run --name alias1 some-service
And alias1 can then be pinged from the other containers.
UPDATE: As #grilix points out, you should use docker-compose run --use-aliases to make the defined aliases available.
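For example (a sketch; "other" is a hypothetical second service on some-network whose image includes ping):
# Start the service detached; --use-aliases registers alias1/alias2 on some-network
$ docker-compose run -d --use-aliases some-service
# The alias now resolves from any other container attached to that network
$ docker-compose run other ping -c 1 alias1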
This seems to work correctly. If I put your config into a file:
$ cat > compose.yml <<EOF
dns:
  image: phensley/docker-dns
  hostname: affy
  domainname: affy.com
  volumes:
    - /var/run/docker.sock:/docker.sock
EOF
And then bring things up:
$ docker-compose -f compose.yml up
Creating tmp_dns_1...
Attaching to tmp_dns_1
dns_1 | 2015-04-28T17:47:45.423387 [dockerdns] table.add tmp_dns_1.docker -> 172.17.0.5
And then check the hostname inside the container, everything seems to be fine:
$ docker exec -it tmp_dns_1 hostname
affy.affy.com
Based on docker documentation:
https://docs.docker.com/compose/compose-file/#/command
I simply put
hostname: <string>
in my docker-compose file.
E.g.:
[...]
lb01:
  hostname: at-lb01
  image: at-client-base:v1
[...]
and container lb01 picks up at-lb01 as hostname.
The simplest way I have found is to just set the container name in the docker-compose.yml; see the container_name documentation. It is applicable to docker-compose v1+. It works for container-to-container access, not from the host machine to a container.
services:
  dns:
    image: phensley/docker-dns
    container_name: affy
Now you should be able to access affy from other containers using the container name. I had to do this for multiple redis servers in a development environment.
NOTE: This solution works only as long as you don't need to scale, e.g. for consistent individual developer environments.
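A sketch of that multi-redis dev setup (service and container names are assumptions):
services:
  redis-cache:
    image: redis
    container_name: redis-cache
  redis-queue:
    image: redis
    container_name: redis-queue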
I needed to spin up a freeipa container to have a working KDC, and I had to give it a hostname, otherwise it wouldn't run.
What eventually did work for me is setting the HOSTNAME env variable in compose:
version: '2'
services:
  freeipa:
    environment:
      - HOSTNAME=ipa.example.test
Now it's working:
docker exec -it freeipa_freeipa_1 hostname
ipa.example.test