The docker compose file:
version: '3'
services:
  rs:
    build: .
    ports:
      - "9090:9090"
  consul:
    image: "consul"
    ports:
      - "8500:8500"
    hostname: "abc"
rs is a Go Micro server app that accesses Consul; the Consul address is configured in a file like this:
"microservice": {
"serviceName": "hello",
"registry": "consul=abc:8500",
However, this doesn't work; rs reports this error:
register error: Put http://abc:8500/v1/agent/service/register: dial tcp: lookup abc on 127.0.0.11:53: no such host
I can access the Consul UI from the host machine at http://127.0.0.1:8500 and it works properly.
How do I configure the network so that rs can access Consul?
You have changed the hostname of the consul container, but the rs service is not aware of this: it attempts to resolve abc by querying the default DNS server, 127.0.0.11 on port 53. You can see this in the error message; that DNS server is unable to resolve abc because it has no record for it.
The easiest way to solve this and have it working in docker-compose, on a network created between the services, is the following:
version: '3'
services:
  rs:
    build: .
    # image: alpine:3.7
    ports:
      - "9090:9090"
    # command: sleep 600
    networks:
      rs-consul:
  consul:
    image: "consul"
    ports:
      - "8500:8500"
    hostname: "abc"
    networks:
      rs-consul:
        aliases:
          - abc
networks:
  rs-consul:
This will create a new network, rs-consul (check with docker network ls; it will have a prefix, in my case the working directory name followed by an underscore). On this network the Consul container has the alias abc, so your rs service should now be able to reach the Consul service via http://abc:8500/
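For example, a quick way to verify the network and the alias exist (myproject below is a placeholder for your working directory name):
docker network ls
docker network inspect myproject_rs-consul
docker run --rm --network myproject_rs-consul alpine:3.7 ping -c1 abc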
I used the commented-out lines (image: alpine:3.7 and command: sleep 600) instead of build: . to test the connection, since I don't have your rs service code to build. Once the containers were started, I used docker exec -it <container-id> sh to start a shell in the rs container, installed curl, and was able to retrieve the Consul UI page via the following command:
curl http://abc:8500/ui/
Hope this helps.
I am using an Akka HTTP server in my app, with MongoDB as the backend database. Akka HTTP uses standard input to keep the server running.
Here is how I am binding it:
val host = "0.0.0.0"
val port = 8080
val bindingFuture = Http().bindAndHandle(MainRouter.routes, host, port)
log.info("Server online ")
StdIn.readLine()
bindingFuture
.flatMap(_.unbind()) // trigger unbinding from the port
.onComplete(_ => system.terminate()) // and shutdown when done
I need to dockerize my app. Docker closes standard input by default when it starts a container; to keep the app running we need to pass the -i option, like this:
docker run -p 8080:8080 -i imagename:tag
Now the problem is that I need to use docker-compose to start my app together with Mongo.
here is my docker-compose.yml
version: '3.3'
services:
  mongodb:
    image: mongo:4.2.1
    container_name: docker-mongo
    ports:
      - "27017:27017"
  akkahttpservice:
    image: app:0.0.1
    container_name: docker-app
    ports:
      - "8080:8080"
    depends_on:
      - mongodb
How can I provide the -i option to the docker-app container?
Note: after doing docker-compose up,
docker exec -it containerid sh
did not work for me.
Any help would be appreciated
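For reference, Compose exposes the equivalents of docker run's -i and -t as the service-level keys stdin_open and tty, so a minimal sketch for the service above would be:
akkahttpservice:
  image: app:0.0.1
  container_name: docker-app
  stdin_open: true  # equivalent of docker run -i
  tty: true         # equivalent of docker run -t, optional here
  ports:
    - "8080:8080"
  depends_on:
    - mongodb
With stdin_open: true the container keeps standard input open, just as -i does, so StdIn.readLine() no longer returns immediately.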
I have a service deployed to my Docker Swarm cluster as a global service (ELK Metricbeat).
I want each instance of this service to have the same hostname as the node (host) it is running on.
In other words, how can I achieve in the yml file the same result as:
docker run -h `hostname` elastic/metricbeat:5.4.1
this is my yml file:
metricbeat:
  image: elastic/metricbeat:5.4.1
  command: metricbeat -e -c /etc/metricbeat/metricbeat.yml -system.hostfs=/hostfs
  hostname: '`hostname`'
  volumes:
    - /proc:/hostfs/proc:ro
    - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
    - /:/hostfs:ro
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - net
  user: root
  deploy:
    mode: global
I have tried:
hostname: '`hostname`'
hostname: '${hostname}'
but no success.
Any solution?
Thank you in advance.
For anyone coming here:
services:
  myservice:
    hostname: "{{.Node.Hostname}}-{{.Service.Name}}"
There is no need to alter the entrypoint (at least on Swarm deploy).
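Applied to the Metricbeat stack from the question, that might look like the sketch below (assumes a compose file version recent enough to support Go-template placeholders in hostname, and deployment via docker stack deploy):
version: '3.4'
services:
  metricbeat:
    image: elastic/metricbeat:5.4.1
    hostname: "{{.Node.Hostname}}"  # each task gets its node's hostname
    deploy:
      mode: global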
I resolved the issue by mounting the host's hostname file under /etc/nodehostname and changing the service container to use an entrypoint that reads the file and replaces a variable (name) in metricbeat.yml:
docker-entrypoint.sh
#!/bin/sh
# read the node's hostname (mounted from the host) and substitute it into the config
export NODE_HOSTNAME=$(cat /etc/nodehostname)
envsubst '$NODE_HOSTNAME' < /etc/metricbeat/metricbeat.yml.tpl > /etc/metricbeat/metricbeat.yml
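The metricbeat.yml.tpl template itself is not shown in the answer; presumably it contains an envsubst placeholder along these lines (hypothetical fragment):
# metricbeat.yml.tpl (hypothetical)
name: ${NODE_HOSTNAME}
envsubst then renders it into the final metricbeat.yml with the node's hostname filled in.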
I'm having issues getting Docker links to work when I use "host" network mode. How do you access the other linked container if it is using "host" mode?
An example:
If I use these 2 compose files below, I can run the following:
$ docker-compose up
$ docker exec -it [CONTAINER ID OF REDIS1] bash
$ redis-cli -h redis2 [OR redis-cli -h redis2-alias]
$ PING => you will get back PONG from redis2
docker-compose.yml
version: "2"
services:
redis1:
image: "redis"
ports:
- "6379"
links:
- redis2:redis2-alias
redis2:
extends:
file: docker-compose.redis2.yml
service: redis
docker-compose.redis2.yml
version: "2"
services:
redis:
image: "redis"
ports:
- "6379"
However, if you change docker-compose.redis2.yml to use host mode, then when you try to connect to redis2 (from redis1) it just hangs and never connects:
docker-compose.redis2.yml
version: "2"
services:
redis:
image: "redis"
network_mode: "host"
$ docker-compose up
$ docker exec -it [CONTAINER ID OF REDIS1] bash
$ redis-cli -h redis2 => this just hangs...never connects to redis2
how do you connect to redis2 (when it is in host mode) from redis1?
Links are not supported with --net=host, and links are deprecated (in philosophy) now anyway; prefer a user-defined network.
Looks like Docker chose not to support this use case due to its complexity, see the GitHub issue here.
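For completeness, here is a minimal sketch of the user-defined-network alternative, keeping both containers on a bridge network instead of host mode (names taken from the question):
version: "2"
services:
  redis1:
    image: "redis"
    networks:
      - redis-net
  redis2:
    image: "redis"
    networks:
      redis-net:
        aliases:
          - redis2-alias  # optional extra name, mirroring the old link alias
networks:
  redis-net:
On this network redis1 can reach redis2 at redis2:6379 (or redis2-alias:6379) without any links entries.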
Beginner Docker question here,
So I have a development environment in which I'm running a modular app; it works, using Docker Compose to run three containers: server, client, database.
The docker-compose.yml looks like this:
#############################
# Server
#############################
server:
  container_name: server
  domainname: server.dev
  hostname: server
  build: ./server
  working_dir: /app
  ports:
    - "3000:3000"
  volumes:
    - ./server:/app
  links:
    - database
#############################
# Client
#############################
client:
  container_name: client
  domainname: client.dev
  hostname: client
  image: php:5.6-apache
  ports:
    - "80:80"
  volumes:
    - ./client:/var/www/html
#############################
# Database
#############################
database:
  container_name: database
  domainname: database.dev
  hostname: database
  image: postgres:9.4
  restart: always
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=root
    - POSTGRES_DB=dbdev
    - PG_TRUST_LOCALNET=true
  ports:
    - "5432:5432"
  volumes:
    - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
You can see I'm assigning a .dev domainname to each one. This works fine for reaching one machine from another on the internal Docker network; for example, here I'm pinging server.dev from client.dev's CLI:
root@client:/var/www/html# ping server.dev
PING server.dev (127.0.53.53): 56 data bytes
64 bytes from 127.0.53.53: icmp_seq=0 ttl=64 time=0.036 ms
This works great internally, but not on my host OS network.
For convenience, I would like to assign domains on MY local network, not the Docker containers' network, so that I can for example type client.dev into my browser's URL bar and load the Docker container.
Right now, I can only access them if I use the Docker IP, which is dynamic:
client: 192.168.99.100:80
server: 192.168.99.100:3000
database: 192.168.99.100:5432
Is there an automated/convenient way to do this that doesn't involve me manually adding the IP to my /etc/hosts file ?
BTW I'm on OSX if that has any relevance.
Thanks!
Edit: I found this GitHub issue which seems to be related: https://github.com/docker/docker/issues/2335
As far as I understood, they say that this is not available out of the box, and they suggest external tools like:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Is that correct? And if so, which one should I go for in my particular scenario?
OK, so since it seems there is no native way to do this with Docker, I finally opted for this alternate solution from Ryan Armstrong, which consists of dynamically updating the /etc/hosts file.
I chose it because it works as a script, and I already had a startup script I could append this function to.
The following example creates a hosts entry named docker.local which
will resolve to your docker-machine IP:
update-docker-host(){
  # clear existing docker.local entry from /etc/hosts
  sudo sed -i '' '/[[:space:]]docker\.local$/d' /etc/hosts
  # get ip of running machine
  export DOCKER_IP="$(echo ${DOCKER_HOST} | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
  # update /etc/hosts with docker machine ip
  [[ -n $DOCKER_IP ]] && sudo /bin/bash -c "echo \"${DOCKER_IP} docker.local\" >> /etc/hosts"
}

update-docker-host
This automatically adds or updates the /etc/hosts line on my host OS when I start the Docker machine through my startup script.
Anyway, as I found out during my research, apart from editing the hosts file you could also solve this problem by setting up a custom DNS server.
I also found several projects on GitHub which apparently aim to solve this problem, although I didn't try them:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Extending @eduwass's own answer, here's what I did manually (without a script).
As mentioned in the question, define the domainname: myapp.dev and hostname: www in the docker-compose.yml file
Bring up your Docker containers as normal
Run docker-compose exec client cat /etc/hosts to get an output of the container's hosts file (where client is your service name)
(Output example: 172.18.0.6 www.myapp.dev)
Open your local (host machine) /etc/hosts file and add that line: 172.18.0.6 www.myapp.dev
If your Docker service container changes IPs or does anything fancy you will want a more complex solution, but this is working for my simple needs at the moment.
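The lookup and the hosts-file edit can also be combined into a one-liner (a sketch; service name and domain taken from the example above):
docker-compose exec client cat /etc/hosts | grep 'www.myapp.dev' | sudo tee -a /etc/hosts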
Another solution would be to use a browser with a proxy extension that sends requests through a proxy container which knows where to resolve the domains. If you are considering jwilder/nginx-proxy for production mode, then your issue can easily be solved with mitm-nginx-proxy-companion.
Here is an example based on your original stack:
version: '3.3'
services:
  server:
    build: ./server
    working_dir: /app
    volumes:
      - ./server:/app
  client:
    environment:
      - VIRTUAL_HOST=client.dev
    image: php:5.6-apache
    volumes:
      - ./client:/var/www/html
  database:
    image: postgres:9.4
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=dbdev
      - PG_TRUST_LOCALNET=true
    volumes:
      - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
  nginx-proxy:
    image: jwilder/nginx-proxy
    labels:
      - "mitmproxy.proxyVirtualHosts=true"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-proxy-mitm:
    dns:
      - 127.0.0.1
    image: artemkloko/mitm-nginx-proxy-companion
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
Run docker-compose up
Add a proxy extension to your browser, with proxy address being 127.0.0.1:8080
Access http://client.dev
The request will follow the route:
Access a local development domain in a browser
The proxy extension forwards that request to mitm-nginx-proxy-companion instead of the “real” internet
mitm-nginx-proxy-companion tries to resolve the domain name through the dns server in the same container
If the domain is not a “local” one, it will forward the request to the “real” internet
But if the domain is a “local” one, it will forward the request to the nginx-proxy
The nginx-proxy in turn forwards the request to the appropriate container that includes the service we want to access
Side notes:
links was removed, as it's outdated and has been replaced by Docker networks
you don't need to add domain names to the server and database containers: client can reach them at the server and database hostnames because they are all on the same network (similar to what links did previously)
you don't need ports on the server and database containers, because ports only publishes container ports to the host (127.0.0.1). PHP in the client container makes only back-end requests to the other containers, and since those containers are on the same network, it can already reach them at database:5432 and server:3000. The same goes for server <-> database connections (see the sketch below).
I am the author of mitm-nginx-proxy-companion
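For instance, from a shell inside the client container (a sketch; assumes the postgres client and curl are installed there):
psql -h database -p 5432 -U postgres dbdev  # service names resolve via Docker's embedded DNS
curl http://server:3000/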
To point a whole domain at localhost you can use dnsmasq. In that case, if you choose the domain .dev, any subdomain will point to your container. But you have to know about the problems with the .dev zone.
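A minimal dnsmasq sketch (assuming the docker-machine IP 192.168.99.100 from the question; the config path varies, e.g. /usr/local/etc/dnsmasq.conf with Homebrew on macOS):
# dnsmasq.conf: resolve every *.dev name to the docker-machine IP
address=/dev/192.168.99.100
On macOS you would additionally create /etc/resolver/dev containing nameserver 127.0.0.1, so that .dev lookups are sent to dnsmasq.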
Alternatively, you can use a bash script to launch your docker-compose: on start it adds a line to /etc/hosts, and when you kill the process the line is removed again:
#!/usr/bin/env bash
# add the entry at the top of /etc/hosts
sudo sed -i '1s;^;127.0.0.1 example.dev\n;' /etc/hosts
# remove it again on Ctrl-C (SIGINT)
trap 'sudo sed -i "/example.dev/d" /etc/hosts' 2
docker-compose up
My bash script, with alias support and without docker-machine:
Based on http://cavaliercoder.com/blog/update-etc-hosts-for-docker-machine.html
#!/bin/bash

# alias
declare -A aliasArr
aliasArr[docker_name]="alias1,alias2"

# clear existing *.docker.local entries from /etc/hosts
sudo sed -i '/\.docker\.local$/d' /etc/hosts

# iterate over each machine
docker ps -a --format "{{.Names}}" \
  | while read -r MACHINE; do
    MACHINE_IP="$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${MACHINE} 2>/dev/null)"
    if [[ ${aliasArr[$MACHINE]} ]]
    then
      DOMAIN_NAME=$(echo ${aliasArr[$MACHINE]} | tr "," "\n")
    else
      DOMAIN_NAME=( ${MACHINE} )
    fi
    for addr in $DOMAIN_NAME
    do
      echo "add ${MACHINE_IP} ${addr}.docker.local"
      [[ -n $MACHINE_IP ]] && sudo /bin/bash -c "echo \"${MACHINE_IP} ${addr}.docker.local\" >> /etc/hosts"
      export no_proxy=$no_proxy,$MACHINE_IP
    done
  done
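A possible usage sketch (the script filename is hypothetical, and docker_name in the alias array must be replaced with a real container name):
docker-compose up -d        # start the containers first
./update-hosts.sh           # the script above
ping alias1.docker.local    # entries now resolve via /etc/hosts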