I have the following docker-compose file. I am trying to access the service running in the container from the host, but the hostname never resolves.
version: '2'
networks:
mynet:
driver: bridge
services:
grpcserver:
image: test/image
volumes:
- ./:/var/local/git
ports:
- 50051:50051
stdin_open: true
tty: true
hostname: grpcserver
networks:
- mynet
entrypoint: bash ../var/local/git/service/start.sh
When I exec into the container I can successfully telnet grpcserver 50051 to the running service using the hostname. But from the host, I cannot.
Version
docker-compose version 1.16.1, build 6d1ac21
Docker containers cannot be resolved by name from the host; their names are only resolvable inside other containers. Which name to use depends on whether you are connecting from another service in the same compose project/network or from a different one.
If you need your containers to be discoverable from the host, you need to use a tool like dnsmasq. See the answer below for more details on how to do such a setup:
Access to container by his hostname from host-machine
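That said, the compose file above already publishes port 50051 to the host, so the service is reachable from the host via localhost and the published port rather than the container hostname. A quick check (assuming the gRPC server is actually listening on 50051):
telnet localhost 50051
# or, if telnet is not installed:
nc -zv localhost 50051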
Related
In the docker-compose file I added the API server as a service, and MongoDB is installed on my local PC. But when the API runs in a Docker container, it cannot connect to 127.0.0.1:27017.
Here is the docker-compose file.
networks:
test_network:
name: test_network
driver: bridge
services:
api:
container_name: api
build: ./api
ports:
- "3031:3031"
networks:
- test_network
Why is this problem happening, and how can I resolve it?
It happens because the address 127.0.0.1:27017 refers to the container itself.
You have to keep the same port, but address the host's IP instead, as I explained here.
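A minimal sketch of that, assuming Docker 20.10+ and that MongoDB on the host accepts connections from the Docker bridge (not only on 127.0.0.1); the MONGO_URL variable name is just an illustration of however your API reads its connection string:
services:
  api:
    container_name: api
    build: ./api
    ports:
      - "3031:3031"
    extra_hosts:
      # make the Docker host reachable as host.docker.internal from inside the container
      - "host.docker.internal:host-gateway"
    environment:
      # hypothetical variable name; point it at the host, not 127.0.0.1
      - MONGO_URL=mongodb://host.docker.internal:27017
    networks:
      - test_network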
I have a Java application that connects to an external database through a custom Docker network, and I want to connect it to a Redis container.
docker-redis github topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
Nothing works in my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I define the network in docker-compose instead of using an external one?
docker-compose:
services:
app-kotin:
build: ./app
container_name: app_server
restart: always
working_dir: /app
command: java -jar app-server.jar
ports:
- 3001:3001
links:
- app-redis
networks:
- front
app-redis:
image: redis:5.0.9-alpine
container_name: app-redis
expose:
- 6379
networks:
front:
external:
name: mynet
with the setup above how can I connect through a Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix from what you have is to also put the app-redis container onto the same network:
app-redis:
image: redis:5.0.9-alpine
networks:
- front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
app-kotin:
build: ./app
restart: always
ports:
- '3001:3001'
app-redis:
image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
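How the application learns that host name is up to its own configuration; a common pattern (the REDIS_URL variable name here is an assumption, substitute whatever your app actually reads) is to pass it through the compose file:
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
    environment:
      # "app-redis" is resolved by Docker's built-in DNS on the shared default network
      - REDIS_URL=redis://app-redis:6379
  app-redis:
    image: redis:5.0.9-alpine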
I am building a Flask application deployed with uWSGI and nginx. My problem is that when I run it, it comes up on docker.internal.host and I need to specify the host. I would like to know if there is some way to create a Docker network and, within it, specify the host where I will see my Flask application, or some other way to run a container on a specific IP.
Expected behaviour:
docker-compose up, and then when I go to the IP I selected in docker-compose.yml or in my network config, I see my Flask app.
Thanks.
You can do something like:
services:
srv:
image: image:latest
command: start
networks:
mynet:
ipv4_address: 192.168.42.11
networks:
mynet:
ipam:
config:
- subnet: 192.168.42.0/24
but it works only on Linux and only from the host where Docker is running. You should consider whether port mapping or host network mode could be options for you.
check this: https://docs.docker.com/compose/networking/
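If a fixed container IP is not strictly required, plain port mapping is usually the simpler option; a sketch assuming the Flask app listens on port 5000 inside the container (adjust both ports to your setup):
services:
  srv:
    image: image:latest
    command: start
    ports:
      # host port 8080 forwards to container port 5000
      - "8080:5000"
The app is then reachable at http://<host-ip>:8080 from any machine that can reach the Docker host.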
Hi everyone. I have created a network of type macvlan in Docker because I wanted my containers to be on the same LAN as the host. The strange thing I have noticed is that when I stop and then restart a container with docker start, the container starts but is assigned the same IP it had before it was shut down. Doesn't the IP change when containers are restarted? Furthermore, the container is now unreachable because the IP it claims as its own has since been reassigned to another machine on the network. From what I have read, a container is assigned the same IP as before, but if it cannot get that IP it fails to start; my container, however, starts just fine. What am I missing here? Ubuntu 17.10, Docker version 17.11.0-ce, API version 1.34 (both client and server).
You should not use static IPs in Docker unless you are working with something that allows routing from outside to the container, like macvlan in your case. DNS is already there for service discovery inside the container network and supports container scaling. Outside the container network, you should use ports exposed on the host.
That being said, you can achieve the above using docker-compose like below:
services:
mysql:
container_name: backend-database
image: mysql:latest
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
ports:
- "3306:3306"
networks:
mynetwork:
ipv4_address: 10.5.0.5
apache-tomcat:
container_name: apache-tomcat
build: tomcat/.
ports:
- "8080:8080"
- "8009:8009"
networks:
mynetwork:
ipv4_address: 10.5.0.6
depends_on:
- mysql
networks:
mynetwork:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
gateway: 10.5.0.1
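With this layout the apache-tomcat container can also reach the database by the service name mysql instead of its static address, since Docker's embedded DNS resolves service names on mynetwork. A sketch of what could be added under the apache-tomcat service (the DB_URL variable name and mydb database name are assumptions):
  apache-tomcat:
    environment:
      # "mysql" is the compose service name, resolved by Docker's internal DNS on mynetwork
      - DB_URL=jdbc:mysql://mysql:3306/mydb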
This question already has answers here:
Communication between multiple docker-compose projects
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) using another docker-compose application, docker-elk. Both of them are running in the same docker machine in development. In production, that will probably not be the case.
How can I configure my application's docker-compose.yml to link to the ELK stack?
Update Jun 2016
The answer below is outdated starting with docker 1.10. See this other similar answer for the new solution.
https://stackoverflow.com/a/34476794/1556338
Old answer
Create a network:
$ docker network create --driver bridge my-net
Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files, e.g.:
pg:
image: postgres:9.4.4
container_name: pg
net: ${NETWORK}
ports:
- "5432"
myapp:
image: quay.io/myco/myapp
container_name: myapp
environment:
DATABASE_URL: "http://pg:5432"
net: ${NETWORK}
ports:
- "3000:3000"
Note that pg in http://pg:5432 will resolve to the IP address of the pg service (container). There is no need to hardcode IP addresses; an entry for pg is automatically added to the /etc/hosts of the myapp container.
Call docker-compose, passing it the network you created:
$ NETWORK=my-net docker-compose -f docker-compose.yml -f other-compose.yml up -d
I've created a bridge network above which only works within one node (host). Good for dev. If you need to get two nodes to talk to each other, you need to create an overlay network. Same principle though. You pass the network name to the docker-compose up command.
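For the multi-node case the network is created as an overlay instead; a sketch, assuming the engines have been joined into swarm mode (required for overlay networks) and using --attachable so plain containers can join it:
$ docker swarm init
$ docker network create --driver overlay --attachable my-net
$ NETWORK=my-net docker-compose -f docker-compose.yml -f other-compose.yml up -d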
You could also create a network with docker outside your docker-compose:
docker network create my-shared-network
And in your docker-compose.yml:
version: '2'
services:
  pg:
    image: postgres:9.4.4
    container_name: pg
    expose:
      - "5432"
networks:
  default:
    external:
      name: my-shared-network
And in your second docker-compose.yml:
version: '2'
services:
  myapp:
    image: quay.io/myco/myapp
    container_name: myapp
    environment:
      DATABASE_URL: "http://pg:5432"
    expose:
      - "3000"
networks:
  default:
    external:
      name: my-shared-network
Both instances will be able to see each other without opening ports on the host; you just need to expose the ports, and they will see each other through the network "my-shared-network".
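To verify that both stacks really joined the shared network, inspect it after bringing them up; both containers should appear under "Containers":
docker network inspect my-shared-network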
If you set a predictable project name for the first composition, you can use external_links to reference external containers by name from a different compose file.
In the next docker-compose release (1.6) you will be able to use user-defined networks, and have both compositions join the same network.
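A minimal sketch of the external_links approach (the container name myproject_pg_1 is an assumption based on Compose's default <project>_<service>_<index> naming; replace it with the actual container name):
myapp:
  image: quay.io/myco/myapp
  external_links:
    # alias "pg" lets the app keep using http://pg:5432
    - myproject_pg_1:pg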
Take a look at multi-host docker networking
Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don’t have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are – thus enabling true distributed applications.
I didn't find any complete answer, so I decided to explain it in a complete and simple way.
To connect two docker-compose projects you need a network, and you need to put both projects in that network.
You could create the network with docker network create name-of-network,
or you could simply put a network declaration in the networks option of the docker-compose file, and when you run docker-compose (docker-compose up) the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
net-for-alpine:
name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose files and can differ between them.
test-db-net is the external name of the network and must be the same in both docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml
docker-compose.alpine.yml would be:
version: '3.8'
services:
alpine:
image: alpine:3.14
container_name: alpine
networks:
- net-for-alpine
# these two options keep the alpine container running
stdin_open: true # docker run -i
tty: true # docker run -t
networks:
net-for-alpine:
name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
db:
image: postgres:13.4-alpine
container_name: psql
networks:
- net-for-db
networks:
net-for-db:
name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
then with the following commands you can check the network:
# psql is the container name; an exit code of 0 with no output means the network is established
nc -z psql 5432
or
ping psql