I have a service deployed to my Docker Swarm cluster as a global service (ELK Metricbeat).
I want each instance of this service to have the same hostname as the node (host) it is running on.
In other words, how can I achieve in the yml file the same result as:
docker run -h `hostname` elastic/metricbeat:5.4.1
This is my yml file:
metricbeat:
  image: elastic/metricbeat:5.4.1
  command: metricbeat -e -c /etc/metricbeat/metricbeat.yml -system.hostfs=/hostfs
  hostname: '`hostname`'
  volumes:
    - /proc:/hostfs/proc:ro
    - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
    - /:/hostfs:ro
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - net
  user: root
  deploy:
    mode: global
I have tried:
hostname: '`hostname`'
hostname: '${hostname}'
but without success.
Any solution?
Thank you in advance.
For anyone coming here:
services:
  myservice:
    hostname: "{{.Node.Hostname}}-{{.Service.Name}}"
No need to alter the entrypoint (at least on Swarm deploy).
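For example, applied to the Metricbeat service from the question, a minimal sketch might look like this (the trimmed-down service definition and the file version are assumptions; the template is expanded by Swarm at deploy time):

version: "3.4"
services:
  metricbeat:
    image: elastic/metricbeat:5.4.1
    hostname: "{{.Node.Hostname}}"
    deploy:
      mode: global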
I resolved the issue by mounting the host's hostname file under /etc/nodehostname and changing the service container to use an entrypoint that reads the file and substitutes a variable (name) in metricbeat.yml:
docker-entrypoint.sh
#!/bin/bash
export NODE_HOSTNAME=$(cat /etc/nodehostname)
envsubst '$NODE_HOSTNAME' </etc/metricbeat/metricbeat.yml.tpl >/etc/metricbeat/metricbeat.yml
exec "$@"  # then hand off to the container's original command (assumed)
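A minimal sketch of the matching service definition, assuming the host's /etc/hostname is bind-mounted to /etc/nodehostname and the entrypoint above is baked into a custom image (my-metricbeat is a hypothetical image name):

metricbeat:
  image: my-metricbeat:5.4.1
  volumes:
    - /etc/hostname:/etc/nodehostname:ro
  deploy:
    mode: global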
Here is my .yaml
version: "3.3"
services:
database:
image: mysql:8
command: --default-authentication-plugin=mysql_native_password
environment:
MYSQL_USER: ${mysql_user}
MYSQL_PASSWORD: ${mysql_password}
MYSQL_ROOT_PASSWORD: ${mysql_root_password}
ports:
- "6033:3306"
networks:
- ${network_name}
volumes:
- dbdata:/var/lib/mysql
- "./.scripts/schema.sql:/docker-entrypoint-initdb.d/1.sql"
- "./.scripts/data.sql:/docker-entrypoint-initdb.d/2.sql"
secrets:
- mysql_user
- mysql_password
- mysql_root_password
- container_name
- network_name
secrets:
mysql_user:
file: /run/secrets/mysql_user
mysql_password:
file: /run/secrets/mysql_password
mysql_root_password:
file: /run/secrets/mysql_root_password
network_name:
file: /run/secrets/network_name
networks:
${network_name}:
driver: bridge
Here is my script
#!/bin/bash
# Leave current swarm
docker swarm leave --force
# Initialize the host as a Swarm manager
docker swarm init
# Create the secrets
echo "server_user" | docker secret create mysql_user -
echo "server_password" | docker secret create mysql_password -
echo "a1128f69-e6f7-4e93-a2df-3d4db6030abc" | docker secret create mysql_root_password -
echo "template_network" | docker secret create network_name -
# Deploy the stack using the secrets
docker stack deploy -c docker-compose.yaml mynetwork
Here is the error
Node left the swarm.
Swarm initialized: current node (y46rjvlu57bibyhgwk7nthykw) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-161prfq442ha035laq1plnv1o2qfqs026dmg6aslpd4kao7o0i-bnwc5zxiwt3ctmfbxfoszbick 192.168.65.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
l3nrkqhy7ygtrb05x7c5rvavu
xrhp70n50waaas1hqha8fk2j2
wj1y8runsi8vydpzc09hp9bmp
u5b6suutp7tkt4lqd5i90bgif
Creating network mynetwork_
failed to create network mynetwork_: Error response from daemon: rpc error: code = InvalidArgument desc = name must be valid as a DNS name component
I do not get the error when I don't use Docker secrets for the variables, so I'm wondering if that has something to do with it.
I have tried restarting / clearing / destroying all the containers / networks / services / images in Docker too.
Any help or tips for improvement are also welcomed.
Delete the two networks: blocks.
The actual problem here is the top-level networks: block. It is outside the context of any particular service, so while the database service is granted a network_name secret, that doesn't happen at the top level. Compose therefore tries to expand the environment variable network_name as the network name, and it is empty.
(In Swarm mode you may not want bridge networking either.)
If you remove the networks: blocks then Compose will create a network named default and attach the container(s) to it. If you do need some non-standard settings, it is possible to configure the default network while keeping the default name and still attaching containers to it by default. More details are in Networking in Compose in the Docker documentation.
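A minimal sketch of the corrected stack file with both networks: blocks removed (the service then attaches to the automatically created default network; the command, environment, and init-script volumes from the question are elided here):

version: "3.3"
services:
  database:
    image: mysql:8
    ports:
      - "6033:3306"
    volumes:
      - dbdata:/var/lib/mysql
    secrets:
      - mysql_user
      - mysql_password
      - mysql_root_password
volumes:
  dbdata:
secrets:
  mysql_user:
    file: /run/secrets/mysql_user
  mysql_password:
    file: /run/secrets/mysql_password
  mysql_root_password:
    file: /run/secrets/mysql_root_password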
Setup
I have a setup with multiple containers, using dnsmasq as a nameserver for my virtual hosts. I want the containers to be accessible within my local network, so I need to resolve all requests to the current local IP of the machine the containers are running on (here 192.168.178.21).
version: "3"
services:
dnsmasq:
image: andyshinn/dnsmasq
ports:
- 53:53/tcp
- 53:53/udp
cap_add:
- NET_ADMIN
command: [
"--log-queries",
"--log-facility=-",
"--address=/.test/192.168.178.21"
]
apache:
...
gulp:
...
nginx-proxy:
...
Issue
What I would like to do is 'add' the current IP dynamically, conceptually like a variable that resolves to the current IP when I start docker-compose:
...
"--address=/.test/current_local_ip"
...
This way I can start a project with this setup on every development machine in the network and make it reachable for others without manually changing things in the docker-compose file. Thanks for your suggestions!
You can use a .env file and add
env_file: .env
environment:
  - IP_ADDR
and modify the command to
"--address=/.test/$IP_ADDR"
OR
map a conf file, like:
volumes:
  - .docker/dnsmasq.conf:/etc/dnsmasq.conf
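With that approach, the address rule can be regenerated with the current IP before each start (a sketch under the same macOS assumption; any other dnsmasq options you rely on would need to be written into the file as well):

echo "address=/.test/$(ipconfig getifaddr en0)" > .docker/dnsmasq.conf
docker-compose up -d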
I solved it by using a Makefile to pass an environment variable to docker-compose, like this:
Makefile
LOCAL_IP := $(shell ipconfig getifaddr en0)

all:
	make docker-start

docker-start:
	LOCAL_IP=$(LOCAL_IP) docker-compose -f dev-docker-compose.yml up --detach
dev-docker-compose.yml
version: "3"
services:
dnsmasq:
image: andyshinn/dnsmasq
ports:
- 53:53/tcp
- 53:53/udp
cap_add:
- NET_ADMIN
command: [
"--log-queries",
"--log-facility=-",
"--address=/.test/${LOCAL_IP}"
]
...
The only issue I run into is that en0 is not always the desired ethernet adapter. Does anyone know a command that always gets the local ip regardless of the active adapter?
How do I dynamically add a container IP in another Dockerfile? (I am running two containers: a) Redis, b) a Java application.)
I need to pass the Redis URL at run time to my Java arguments.
Currently I manually check the Redis IP and copy it into the Dockerfile, and later create a new image for the Java application using the Redis IP.
docker run --name my-redis -d redis
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-redis
In the Dockerfile (Java application):
CMD ["-Dspring.redis.host=172.17.0.2", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]
Can I use a script to update the Dockerfile, or can I use an environment variable?
You can assign a static IP address to your Docker container when you run it, following these steps:
1 - Create a custom network:
docker network create --subnet=172.17.0.0/16 redis-net
2 - Run the Redis container on the specified network, and assign the IP address:
docker run --net redis-net --ip 172.17.0.2 --name my-redis -d redis
By then you have the static IP address 172.17.0.2 for the my-redis container; you don't need to inspect it anymore.
3 - Now it is possible to run the Java application container, but it must use the same network:
docker run --net redis-net my-java-app
Of course you can optimize the solution by using env variables or whatever you find convenient for your setup.
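For example, the Redis host could be passed as an environment variable instead of the hardcoded -D flag (a sketch; SPRING_REDIS_HOST maps to spring.redis.host through Spring Boot's relaxed binding, and my-java-app is a hypothetical image name):

docker run --net redis-net -e SPRING_REDIS_HOST=172.17.0.2 my-java-app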
More info can be found in the official docs (search for --ip):
docker run
docker network
Edit (add docker-compose):
I just found out that it is also possible to assign static IPs using docker-compose; this answer gives an example of how.
This is a similar example just in case:
version: '3'
services:
  redis:
    container_name: redis
    image: redis:latest
    restart: always
    networks:
      vpcbr:
        ipv4_address: 172.17.0.2
  java-app:
    container_name: java-app
    build: <path to Dockerfile>
    networks:
      vpcbr:
        ipv4_address: 172.17.0.3
    depends_on:
      - redis

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 172.17.0.0/16
          gateway: 172.17.0.1
official docs: https://docs.docker.com/compose/networking/
hope this helps you find your way.
You should add your containers to the same network. Then at runtime you can refer to a container by its name: a container's name is its hostname on the network, so it will be resolved to the container's IP address.
Follow these steps:
First, create a network for the containers:
docker network create my-network
Start redis: docker run -d --network=my-network --name=redis redis
Edit the Java application's Dockerfile: replace -Dspring.redis.host=172.17.0.2 with -Dspring.redis.host=redis and build again.
Finally, start the Java application container: docker run -it --network=my-network your_image. Optionally you can define a name for the container, but it is not required, as you do not access the Java application's container from the Redis container.
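The updated CMD from the question's Dockerfile would then read (only the host value changes):

CMD ["-Dspring.redis.host=redis", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]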
Alternatively you can use a docker-compose file. By default docker-compose creates a network for running services. I am not aware of your full setup, so I will provide a sample docker-compose.yml that illustrates the main concept.
version: "3.7"
services:
redis:
image: redis
java_app_image:
image: your_image_name
Either way, you are able to access the Redis container from the Java application dynamically, using the container's hostname instead of providing a static IP.
I was wondering if there is a way to use environment variables taken from the host where the container is deployed, instead of the ones taken from where the docker stack deploy command is executed. For example imagine the following docker-compose.yml launched on three node Docker Swarm cluster:
version: '3.2'
services:
  kafka:
    image: wurstmeister/kafka
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    deploy:
      mode: global
    environment:
      KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=${JMX_HOSTNAME} -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=1099"
The JMX_HOSTNAME should be taken from the host where the container is actually deployed and should not be the same value for every container.
Is there a correct way to do this?
Yes, this works when you combine two concepts:
Swarm node labels, of which Hostname is one of the built-in ones.
Swarm service go templates, which also work in stack files.
This would set the ENV value DUDE in each container to the hostname of the node it's running on:
version: '3.4'
services:
  nginx:
    image: nginx
    environment:
      DUDE: "{{.Node.Hostname}}"
    deploy:
      replicas: 3
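To check the result after deploying (a hedged sketch; the name filter assumes the stack was deployed as mystack):

docker stack deploy -c docker-compose.yml mystack
docker exec $(docker ps -q -f name=mystack_nginx | head -n1) env | grep DUDE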
It works if you run the docker command through env.
env JMX_HOSTNAME="${JMX_HOSTNAME}" docker stack deploy -c docker-compose.yml mystack
Credit to the GitHub issue that pointed me in the right direction.
I found another way for when you have many environment variables. The same method also works with docker-compose up
sudo -E docker stack deploy -c docker-compose.yml mystack
instead of
env foo="${foo}" bar="${bar}" docker stack deploy -c docker-compose.yml mystack
From the sudo man page, the description of -E:
-E, --preserve-env
Indicates to the security policy that the user wishes to
preserve their existing environment variables. The
security policy may return an error if the user does not
have permission to preserve the environment.
In my docker-compose.yml file, I have the following. However, the container does not pick up the hostname value. Any ideas?
dns:
  image: phensley/docker-dns
  hostname: affy
  domainname: affy.com
  volumes:
    - /var/run/docker.sock:/docker.sock
When I check the hostname in the container it does not pick up affy.
As of docker-compose version 3.0 and later, you can just use the hostname key:
version: "3.0"
services:
yourservicename:
hostname: your-name
I found that the hostname was not visible to other containers when using docker run. This turns out to be a known issue (perhaps more a known feature), with part of the discussion being:
We should probably add a warning to the docs about using hostname. I think it is rarely useful.
The correct way of assigning a hostname - in terms of container networking - is to define an alias like so:
services:
  some-service:
    networks:
      some-network:
        aliases:
          - alias1
          - alias2
Unfortunately this still doesn't work with docker run. The workaround is to assign the container a name:
docker-compose run --name alias1 some-service
And alias1 can then be pinged from the other containers.
UPDATE: As #grilix points out, you should use docker-compose run --use-aliases to make the defined aliases available.
This seems to work correctly. If I put your config into a file:
$ cat > compose.yml <<EOF
dns:
  image: phensley/docker-dns
  hostname: affy
  domainname: affy.com
  volumes:
    - /var/run/docker.sock:/docker.sock
EOF
And then bring things up:
$ docker-compose -f compose.yml up
Creating tmp_dns_1...
Attaching to tmp_dns_1
dns_1 | 2015-04-28T17:47:45.423387 [dockerdns] table.add tmp_dns_1.docker -> 172.17.0.5
And then check the hostname inside the container, everything seems to be fine:
$ docker exec -it tmp_dns_1 hostname
affy.affy.com
Based on the Docker documentation:
https://docs.docker.com/compose/compose-file/#/command
I simply put
hostname: <string>
in my docker-compose file.
E.g.:
[...]
lb01:
  hostname: at-lb01
  image: at-client-base:v1
[...]
and container lb01 picks up at-lb01 as hostname.
The simplest way I have found is to just set the container name in the docker-compose.yml; see the container_name documentation. It is applicable to docker-compose v1+. It works container to container, not from the host machine to a container.
services:
  dns:
    image: phensley/docker-dns
    container_name: affy
Now you should be able to access affy from other containers using the container name. I had to do this for multiple redis servers in a development environment.
NOTE: The solution works as long as you don't need to scale, e.g. for consistent individual developer environments.
I needed to spin up a freeipa container to have a working KDC, and had to give it a hostname, otherwise it wouldn't run.
What eventually did work for me is setting the HOSTNAME env variable in compose:
version: "2"
services:
  freeipa:
    environment:
      - HOSTNAME=ipa.example.test
Now it's working:
docker exec -it freeipa_freeipa_1 hostname
ipa.example.test