Configure Docker containers to communicate through network - docker

How can a Docker container be configured to communicate with another Docker container over the network?
In my case, Redis is running in a Docker container with port 6379 open, and it can be accessed from the host machine. I have started another container that needs to access Redis, but Redis is not reachable from it.
Setting network_mode to host didn't solve the problem.

I usually connect containers using a network. In my project I have two containers, one a .NET Core application and the other a database, and I connect the two using the following docker-compose file:
services:
  coreapi:
    ports:
      - "8088:80"
    volumes:
      - ~/logs/dockerlogs/coreapi:/app/logs
    networks:
      - test
  mongodb:
    ports:
      - "27017:27017"
      - "27018:27018"
      - "27019:27019"
    networks:
      - test
networks:
  test:
    external: true
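For completeness, since the original question is about plain containers rather than Compose: the same idea works with the docker CLI alone by putting both containers on one user-defined bridge network and addressing Redis by its container name. A minimal sketch; the network name, the app image and the REDIS_HOST variable are placeholders, not something from the question:
# create a user-defined bridge network
docker network create redis-net
# Redis joins the network and is reachable as "redis" from other containers on it
docker run -d --name redis --network redis-net -p 6379:6379 redis
# the application container joins the same network and connects to redis:6379
docker run -d --name myapp --network redis-net -e REDIS_HOST=redis myapp-image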

Related

Docker-Compose, How To Connect Java Application With Custom Docker Network On Redis Container

I have a Java application that connects to an external database through a custom Docker network, and I want to connect it to a Redis container as well.
docker-redis github topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
None of them works on my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I define the network in docker-compose instead of using an external one?
docker-compose:
services:
  app-kotin:
    build: ./app
    container_name: app_server
    restart: always
    working_dir: /app
    command: java -jar app-server.jar
    ports:
      - 3001:3001
    links:
      - app-redis
    networks:
      - front
  app-redis:
    image: redis:5.0.9-alpine
    container_name: app-redis
    expose:
      - 6379
networks:
  front:
    external:
      name: mynet
With the setup above, how can I connect to the Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix from what you have is to also put the app-redis container on to the same network:
app-redis:
  image: redis:5.0.9-alpine
  networks:
    - front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
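With that change, a connection string along these lines should work from the application container (the exact format depends on the Redis client library; note the hyphen, matching the Compose service name app-redis rather than app_redis):
redis://app-redis:6379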
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
  app-redis:
    image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
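As a quick sanity check that the name resolves (assuming the stack is already running via docker-compose up -d), you can run redis-cli from a one-off container on the same Compose network:
docker-compose run --rm app-redis redis-cli -h app-redis ping
# expected output: PONG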

docker containers with mac-vlan network show wrong ip after being restarted?

Hi everyone, I have created a network of the macvlan type in Docker because I wanted my containers to be on the same LAN as the host. The strange thing I have noticed is that when I stop and then restart a container with the docker start command, the container starts, but the IP assigned to it is the one it had before it was shut down. Doesn't the IP change when containers are restarted? Furthermore, the container is now not reachable, because the IP it reports as its own has since been reassigned to another machine on the network. From what I have read, the container is assigned the same IP as before, but if the container can't get that IP it fails to start; yet my container starts just fine. What am I missing here? Ubuntu 17.10, Docker 17.11.0-ce, API version 1.34 (both client and server).
You should not use static IPs in Docker unless you are working with something that allows routing from outside to the inside container, as macvlan does in your case. DNS is already there for service discovery inside the container network and supports container scaling; outside the container network, you should use the ports exposed on the host.
That being said, you can achieve the above using docker-compose like below:
services:
  mysql:
    container_name: backend-database
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
    ports:
      - "3306:3306"
    networks:
      mynetwork:
        ipv4_address: 10.5.0.5
  apache-tomcat:
    container_name: apache-tomcat
    build: tomcat/.
    ports:
      - "8080:8080"
      - "8009:8009"
    networks:
      mynetwork:
        ipv4_address: 10.5.0.6
    depends_on:
      - mysql
networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
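If you specifically need the containers to sit on the host's LAN, as in the macvlan setup from the question, the network can also be declared in Compose rather than created by hand. A sketch under the assumption that the host's LAN interface is eth0 and the LAN subnet is 192.168.1.0/24; the service name, image, addresses and interface are placeholders to replace with your own:
services:
  app:
    image: redis
    networks:
      lan:
        ipv4_address: 192.168.1.50   # pick an address outside your DHCP range to avoid conflicts
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                   # host interface the macvlan network attaches to
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1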

Docker container issue can't communicate

I am a beginner with Docker. I wrote a simple docker-compose.yml file to run two service containers, the first for a Node app and the other for Redis. The issue is that my app server is unable to connect to the Redis container. Here is my code:
version: '3'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - test
  app_server:
    image: app_server
    depends_on:
      - redis
    links:
      - redis
    ports:
      - "4004:4004"
    networks:
      - test
networks:
  test:
Output:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
Looks like your web app is connecting to 127.0.0.1/localhost instead of redis, so this is not a Docker issue but a programming issue within your web app. You could add an environment variable (something like REDIS_HOST) to your web app and then set that parameter in the compose file. This of course requires your web application to read the Redis host from the environment variable.
Example environment variable assignment in compose:
webapp:
  image: my_web_app
  environment:
    - REDIS_HOST=redis
Again, this requires that your web app is actually utilizing REDIS_HOST environment variable in its code.
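Applied to the compose file from the question, that would look roughly like this (assuming the Node app actually reads REDIS_HOST and uses it when creating its Redis client):
app_server:
  image: app_server
  depends_on:
    - redis
  environment:
    - REDIS_HOST=redis
  ports:
    - "4004:4004"
  networks:
    - test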
127.0.0.1:6379 connects to the current container's own localhost, not to the Redis container.
With your docker-compose file you can connect to Redis via the Redis container's name, because docker-compose automatically creates a Docker bridge network, which allows you to reach other containers by their name.
Use docker inspect to see the Redis container name; for example, if the current Redis container name is redis_abc, you can connect to Redis via redis_abc:6379. Or, more simply, add container_name: redis_server to the docker-compose file to pin the container name.
https://docs.docker.com/network/bridge/
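For reference, these commands show which names a container can be reached under; the container ID below is a placeholder:
docker ps --format '{{.Names}}'                # list the names of running containers
docker inspect -f '{{.Name}}' <container-id>   # name of one specific container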

docker-compose networking - hostname not resolving

I have the following docker-compose file. I am trying to access the service running in the container, from the host.
But the hostname never resolves.
version: '2'
networks:
  mynet:
    driver: bridge
services:
  grpcserver:
    image: test/image
    volumes:
      - ./:/var/local/git
    ports:
      - 50051:50051
    stdin_open: true
    tty: true
    hostname: grpcserver
    networks:
      - mynet
    entrypoint: bash ../var/local/git/service/start.sh
When I exec into the container, I can telnet grpcserver 50051 to the running service using the hostname successfully. But from the host, I cannot.
Version
docker-compose version 1.16.1, build 6d1ac21
Docker containers are not resolvable by their names from the host; the names can only be resolved from inside other containers. Which name to use depends on whether you are connecting from another service in the same Compose file/network or from a different one.
If you need your containers to be discoverable from the host, you need to use a tool like dnsmasq. See the linked question below for more details on such a setup:
Access to container by his hostname from host-machine
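Note that because the compose file above publishes 50051:50051, the gRPC service is still reachable from the host through the published port; from the host you use localhost (or the host's IP) rather than the container hostname:
telnet localhost 50051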

rationale behind docker compose "links" order

I have a Redis - Elasticsearch - Logstash - Kibana stack in Docker which I am orchestrating using docker compose.
Redis receives the logs from a remote location and forwards them to Logstash, followed by the customary Elasticsearch and Kibana.
In the docker-compose.yml, I am confused about the order of "links"
Elasticsearch links to no one while logstash links to both redis and elasticsearch
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say that elasticsearch is linked to logstash?
Instead of using the Legacy container linking method, you could instead use Docker user defined networks. Basically you can define a network for your services and then indicate in the docker-compose file that you want the container to run on that network. If your containers all run on the same network they can access each other via their container name (DNS records are added automatically).
1) Create a user-defined network
docker network create pocnet
2) Update the docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along the lines of this:
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) Start the services
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container.
your__Machine:/ docker exec -it kibana bash
kibana#123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links:
There are legacy links, and links in user-defined networks. The legacy link provided four major functionalities to the default bridge network:
name resolution
name alias for the linked container using --link=CONTAINER-NAME:ALIAS
secured container connectivity (isolation via --icc=false)
environment variable injection
Comparing the above four functionalities with non-default user-defined networks: without any additional config, a Docker network provides
automatic name resolution using DNS
an automatically secured, isolated environment for the containers in a network
the ability to dynamically attach to and detach from multiple networks
support for the --link option to provide a name alias for the linked container
In your case, automatic DNS on a user-defined network will help you. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network; you just have to put your ELK stack + Redis containers on the ELK network and remove the link directives from the compose file, as in the sketch below.
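Concretely, the change is the same as in the pocnet example above: give each service a networks: entry pointing at the shared network and drop the links: blocks. A minimal sketch using the ELK network just created (only one service shown; the others get the same networks: block):
version: '2'
services:
  logstash:
    image: logstash
    networks:
      - ELK
networks:
  ELK:
    external: true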
Your order looks fine to me. If you have any problem regarding the order, or with waiting for services to come up in dependent containers, you can use something like the following:
version: "2"
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    entrypoint: ./wait-for-it.sh db:5432
  db:
    image: postgres
This will make the web container wait until it can connect to the db.
You can get the wait-for-it script from here.
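A related alternative, if you'd rather not ship a script, is a healthcheck on the database plus a depends_on condition; this is a sketch assuming Compose file format 2.1 or the newer Compose specification, and the stock postgres image with its bundled pg_isready tool:
version: "2.1"
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      db:
        condition: service_healthy   # wait until the db healthcheck passes
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5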
