I am trying to connect these two services by TCP (among other services in the same compose file):
autoserver:
  image: 19mikel95/pymodmikel:autoserversynchub
  container_name: autoserver
  expose:
    - 5020
  restart: unless-stopped
  networks:
    - monitor-net

clientperf:
  image: 19mikel95/pymodmikel:reloadcomp
  container_name: clientperf
  restart: unless-stopped
  networks:
    - monitor-net
  depends_on:
    - autoserver
Where monitor-net is a bridge network:
version: '2.1'
networks:
  monitor-net:
    driver: bridge
In a Python file executed by the client I use the pymodbus library to run this:
host = 'localhost'
client = ModbusTcpClient(host, port=5020)
The problem is obviously that 'localhost'. When I ran each container manually I used docker run --network host, but now that I am forced to use a bridge network I don't know what to put instead of localhost. I have tried "autoserver" and "172.18.0.5", which is the IP given to autoserver by the Docker network:
"57c6e2c366e81f59636a21b61e7935f68e6c700787b57eba572543e76f35f1ce": {
"Name": "autoserver",
"EndpointID": "56e586b875e6d2c17779e236b2448825910d330cc502dec96e2c3ec3771e5bf3",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
And other combinations, but I don't know how to actually make that connection work.
If I try with 'autoserver' as suggested, it just can't connect:
File "/usr/lib/python3/dist-packages/pymodbus/client/sync.py", line
107, in execute
raise ConnectionException("Failed to connect[%s]" % (self.str())) pymodbus.exceptions.ConnectionException: Modbus
Error: [Connection] Failed to
connect[ModbusTcpClient(autoserver:5020)] [ERROR/MainProcess] failed
to run test successfully
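For what it's worth, on a user-defined bridge network the service name (autoserver) is the correct hostname, and a refused connection at that point usually means the server process inside the container is bound to localhost rather than to all interfaces. Below is a minimal sketch of a pymodbus 2.x synchronous server bound correctly; this is an assumption about what the autoserver image runs, not its actual entrypoint:

# hypothetical server.py - the key detail is binding to 0.0.0.0, not localhost
from pymodbus.server.sync import StartTcpServer
from pymodbus.datastore import (ModbusSequentialDataBlock,
                                ModbusSlaveContext, ModbusServerContext)

# a single slave with 100 zeroed holding registers, enough for a smoke test
store = ModbusSlaveContext(hr=ModbusSequentialDataBlock(0, [0] * 100))
context = ModbusServerContext(slaves=store, single=True)

# bind to all interfaces so other containers on monitor-net can connect;
# binding to "localhost" would refuse connections from clientperf
StartTcpServer(context, address=("0.0.0.0", 5020))

With the server listening on 0.0.0.0, the client side becomes ModbusTcpClient('autoserver', port=5020), using container port 5020 directly, since both services sit on monitor-net.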
Related
I'm trying to run two Docker containers attached to a single Docker network using Docker Compose.
I'm running into the following error when I run the containers:
Error response from daemon: failed to add interface veth5b3bcc5 to sandbox:
error setting interface "veth5b3bcc5" IP to 172.19.0.2/16:
cannot program address 172.19.0.2/16 in sandbox interface because it conflicts
with existing route {Ifindex: 10 Dst: 172.19.0.0/16 Src: 172.19.0.1 Gw: <nil> Flags: [] Table: 254}
My docker-compose.yml looks like this:
version: '3'

volumes:
  dsn-redis-data:
    driver: local
  dsn-redis-conf:
    driver: local

networks:
  dsn-net:
    driver: bridge

services:
  duty-students-notifier:
    image: duty-students-notifier:latest
    network_mode: host
    container_name: duty-students-notifier
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    env_file: ../.env
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - dsn-net
    restart: always

  dsn-redis:
    image: redis:latest
    expose:
      - 5432
    volumes:
      - dsn-redis-data:/var/lib/redis
      - dsn-redis-conf:/usr/local/etc/redis/redis.conf
    networks:
      - dsn-net
    restart: always
Thanks!
The network_mode: host setting generally disables Docker networking, and can interfere with other options. In your case it looks like it might be trying to apply the networks: configuration to the host system network layer.
network_mode: host is almost never necessary, and deleting it may resolve this issue.
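A sketch of the same service with that one line removed and everything else unchanged; on the shared dsn-net bridge the notifier can then reach Redis at dsn-redis:6379 (note that the compose file exposes 5432 on dsn-redis, which is the PostgreSQL default port; Redis itself listens on 6379 unless its config overrides it):

  duty-students-notifier:
    image: duty-students-notifier:latest
    container_name: duty-students-notifier
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    env_file: ../.env
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - dsn-net
    restart: always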
I am trying to re-route a request from a gateway to a target service. With static port binding everything works fine, but that is not flexible, so I want to run containers with dynamic IPs and ports on the network, to be able to run multiple instances.
I changed docker-compose.yml from

ports:
  - "8082:8082"

to

ports:
  - "8082"

and got this error:
Get "http://c0000203.addr.dc1.consul.:60949/api/v1/users": dial tcp 192.0.2.3:60949: connect: connection refused
I am using Consul for service discovery and Registrator for registering containers.
Base docker-compose file:
version: '3.7'
services:
  postgres:
    image: postgres:13
    restart: 'always'
    environment:
      - POSTGRES_DB=user-db
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=user
    ports:
      - "5432"
    networks:
      - vpcbr

  consul:
    image: consul:latest
    restart: 'always'
    environment:
      CONSUL_LOCAL_CONFIG: |
        {
          "recursors": [
            "8.8.8.8",
            "8.8.4.4"
          ],
          "dns_config": {
            "recursor_strategy": "random"
          },
          "ports": {
            "dns": 53
          }
        }
    networks:
      vpcbr:
        ipv4_address: 192.0.2.10
    ports:
      - '8500:8500'
      - '53/tcp'
      - '53/udp'

  registrator:
    image: gliderlabs/registrator:latest
    command: "consul://consul:8500"
    container_name: registrator
    depends_on:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      vpcbr:
        ipv4_address: 192.0.2.20

  krakend_gateway:
    image: devopsfaith/krakend:2
    command: [ "run", "-d", "-c", "/krakend.json" ]
    dns: 192.0.2.10
    volumes:
      - ./krakend-gateway/krakend.json:/krakend.json
    ports:
      - "1234:1234"
      - "8080:8080"
      - "8090:8090"
    networks:
      vpcbr:
        ipv4_address: 192.0.2.23

  user-ms:
    build: user-ms/
    platform: linux/arm64
    restart: 'always'
    depends_on: [consul, krakend_gateway]
    networks:
      - vpcbr
    ports:
      - "8082"

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 192.0.2.0/24
The service sending the request is KrakenD; the target service that should accept it is user-ms. Everything works when I change the user-ms port binding back to "8082:8082".
I will be glad if someone could help me, thanks!
Use Registrator's -internal CLI flag to register services with the internal IP addresses of the Docker containers instead of the external host IP.
Per https://gliderlabs.github.io/registrator/latest/user/run/#registrator-options
-internal Use exposed ports instead of published ports.
If the -internal option is used, Registrator will register the docker0 internal IP and port instead of the host mapped ones.
The modified container definition will then be as follows.
# docker-compose.yaml
...
  registrator:
    image: gliderlabs/registrator:latest
    command: "consul://consul:8500 -internal"
    container_name: registrator
    depends_on:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      vpcbr:
        ipv4_address: 192.0.2.20
...
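To verify the change, you can query Consul's DNS directly; a hedged sketch, since the registered name depends on what Registrator derives from the image (assumed here to be user-ms):

dig @192.0.2.10 user-ms.service.consul SRV

With -internal, the SRV answer should now carry the container's vpcbr address and the exposed port 8082, both of which are reachable from the KrakenD container.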
I have two containers with different default networks and I want them to communicate with each other. First I made a common network named "my_network", then entered container1's bash. I could ping the other container by name (container2), but when I tried telnet on port 4000, I got the error:
telnet: Unable to connect to remote host: Connection refused
A curl request didn't work either. But when I replaced the container name with the host's IP address (e.g. 10.244.140.92), everything worked fine. So what am I doing wrong?
My simplified compose files (one per container):
version: "3.9"
networks:
my_network:
driver: bridge
external: true
default:
services:
container1:
image: ...
ports:
- 5000:80
- 5001:443
networks:
- default
- my_network
version: "3.9"
networks:
my_network:
services:
container2:
image: ...
ports:
- 4000:4000
networks:
- my_network
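One thing worth checking with these exact symptoms: a successful ping by name proves that DNS and routing work, so connection refused on the container port usually means the process inside container2 is bound to 127.0.0.1 rather than 0.0.0.0 (the published 4000:4000 mapping still works from the host side, which is why the host IP succeeds). A quick hedged check from inside container2, assuming the image ships iproute2:

ss -lntp
# 127.0.0.1:4000 -> only reachable from inside container2 itself
# 0.0.0.0:4000   -> reachable from other containers on my_network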
I have a docker-compose networking issue. I created my shared space with containers for Ubuntu, TensorFlow, and RStudio, which do an excellent job of sharing a volume between themselves and the host, but when it comes down to using the resources of one container inside the terminal of another, I hit a wall. I can't do as little as calling python in the terminal of a container that doesn't have it. My docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  # ubuntu (16.04)
  ubuntu:
    image: ubuntu_base
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    ports:
      - "8081:8081"
    tty: true

  # tensorflow
  tensorflow:
    image: tensorflow_jupyter
    build:
      context: .
      dockerfile: dockerfileTensorflow
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
      - .:/notebooks
    networks:
      - default
    ports:
      - "8888:8888"
    tty: true

  # rstudio
  rstudio:
    image: rstudio1
    build:
      context: .
      dockerfile: dockerfileRstudio1
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    environment:
      - PASSWORD=test
    ports:
      - "8787:8787"
    tty: true

volumes:
  ubuntu:
  tensorflow:
  rstudio:

networks:
  default:
    driver: bridge
I am quite a Docker novice, so I'm not sure about my network settings. That said, docker inspect composetest_default (the default network created for the compose project) shows the containers are connected to the network. It is my understanding that in this situation I should be able to freely call one service from each of the other containers and vice versa:
"Containers": {
"83065ec7c84de22a1f91242b42d41b293e622528d4ef6819132325fde1d37164": {
"Name": "composetest_ubuntu_1",
"EndpointID": "0dbf6b889eb9f818cfafbe6523f020c862b2040b0162ffbcaebfbdc9395d1aa2",
"MacAddress": "02:42:c0:a8:40:04",
"IPv4Address": "192.168.64.4/20",
"IPv6Address": ""
},
"8a2e44a6d39abd246097cb9e5792a45ca25feee16c7c2e6a64fb1cee436631ff": {
"Name": "composetest_rstudio_1",
"EndpointID": "d7104ac8aaa089d4b679cc2a699ed7ab3592f4f549041fd35e5d2efe0a5d256a",
"MacAddress": "02:42:c0:a8:40:03",
"IPv4Address": "192.168.64.3/20",
"IPv6Address": ""
},
"ea51749aedb1ec28f5ba56139c5e948af90213d914630780a3a2d2ed8ec9c732": {
"Name": "composetest_tensorflow_1",
"EndpointID": "248e7b2f163cff2c1388c1c69196bea93369434d91cdedd67933c970ff160022",
"MacAddress": "02:42:c0:a8:40:02",
"IPv4Address": "192.168.64.2/20",
"IPv6Address": ""
}
Some pre-history: I had tried links: inside the docker-compose file but changed to networks: on account of some deprecation warnings. Was this the right way to go about it?
Docker version 18.09.1
Docker-compose version 1.17.1
but when it comes down to using the resources of one container inside the terminal of another, I hit a wall. I can't do as little as calling python in the terminal of a container that doesn't have it.
You cannot use Linux programs that are in the bin path of one container from another container, but you can use any service that is designed to communicate over a network, from any container in your docker-compose file.
Bin path:
$ echo $PATH
/home/exadra37/bin:/home/exadra37/bin:/home/exadra37/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
So programs in this path that are not designed to communicate over a network are not usable from other containers; they need to be installed in each container where you need them, like python.
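By contrast, anything that speaks a network protocol is reachable by service name. A hedged example from inside the ubuntu container, assuming curl is installed there and Jupyter is what listens on 8888 in the tensorflow container:

$ curl -s http://tensorflow:8888

You should get at least an HTTP response (Jupyter may redirect to a login page), and the same pattern works for rstudio:8787: Compose's DNS resolves each service name to its container on the default network.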
I need to curl my API from another container.
Container 1 is called nginx
Container 2 is called fpm
I need to be able to bash into my fpm container and curl the nginx container.
Config:
#docker-compose.yaml
services:
  nginx:
    build:
      context: .
      dockerfile: ./docker/nginx/Dockerfile
    volumes:
      - ./docker/nginx/conf/dev/api.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    links:
      - fpm

  fpm:
    build:
      context: .
      dockerfile: ./docker/fpm/Dockerfile
    volumes:
      - .:/var/www/html
      - ./docker/fpm/conf/dev/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini
      - ./docker/fpm/conf/dev/api.ini:/usr/local/etc/php/conf.d/api.ini
    env_file:
      - ./docker/mysql/mysql.env
      - ./docker/fpm/conf/dev/fpm.env
    links:
      - mysql
    shm_size: 256M
    extra_hosts:
      - myapi.docker:nginx
My initial thought was to slap it in the extra_hosts option like:
extra_hosts:
  - myapi.docker:nginx
But docker-compose up fails with:
ERROR: for apiwip_fpm_1 Cannot create container for service fpm: invalid IP address in add-host: "nginx"
I have seen some examples of people using Docker's network configuration, but it seems over the top just to resolve an address.
How can I resolve/eval the IP address of the container rather than just passing it literally?
Add network aliases in the default network.
version: "3.7"
services:
nginx:
# ...
networks:
default:
aliases:
- example.local
browser-sync:
# ...
depends_on:
- nginx
command: "browser-sync start --proxy http://example.local"
services:
  nginx:
    build:
      context: .
      dockerfile: ./docker/nginx/Dockerfile
    volumes:
      - ./docker/nginx/conf/dev/api.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    networks:
      my_network:
        aliases:
          - myapi.docker
          - docker_my_network

  fpm:
    build:
      context: .
      dockerfile: ./docker/fpm/Dockerfile
    volumes:
      - .:/var/www/html
      - ./docker/fpm/conf/dev/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini
      - ./docker/fpm/conf/dev/api.ini:/usr/local/etc/php/conf.d/api.ini
    env_file:
      - ./docker/mysql/mysql.env
      - ./docker/fpm/conf/dev/fpm.env
    links:
      - mysql
    shm_size: 256M
    networks:
      - my_network

networks:
  my_network:
    driver: bridge
Add a custom network and attach both containers to it. By default, curling the nginx container from fpm by its service name will not match the server name in your nginx configuration, so we add an alias equal to that server name. With this solution you can curl myapi.docker from the fpm container.
@edward let me know if this solution worked for you
Edit:
Jack_Hu is right, I removed extra_hosts. Network alias is enough.
I solved this kind of issue by using links instead of extra_hosts. In that case, just setting a link alias does the job. In the fpm service:

links:
  - nginx:myapi.docker

See the docker-compose links documentation; the alias name can be the domain that appears in your code.
From the docker documentation,
https://docs.docker.com/compose/networking/
You should be able to use your service name (in this case nginx) as a hostname from within your Docker network. So you can bash into your fpm container and call curl nginx, and Docker will resolve it for you. Hope this helps.
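For example, a sketch (the endpoint path is made up, and the Host header is only needed if your nginx config routes by a server_name such as myapi.docker):

docker-compose exec fpm bash
curl -H 'Host: myapi.docker' http://nginx/api/health

Note that inside the network you hit nginx on its container port 80, not the published 8080.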
A quick fix is to point it at the dynamic IP generated by Docker. This may change though, so... yeah.
Find your networks:
docker network ls
NETWORK ID     NAME                 DRIVER    SCOPE
72fef1ce7a50   apiwip_default       bridge    local   <-- here
cdf9d5b885f6   bridge               bridge    local
2f4f1e7038fa   host                 host      local
a96919eea0f7   mgadmin_default      bridge    local
30386c421b70   none                 null      local
5457b953fadc   website2_default     bridge    local
1450ebeb9856   anotherapi_default   bridge    local
Copy the NETWORK ID
docker network inspect 72fef1ce7a50
"Containers": {
"345026453e1390528b2bb7eac4c66160750081d78a77ac152a912f3de5fd912c": {
"Name": "apiwip_nginx_1",
"EndpointID": "6504a3e4714a6ba599ec882b21f956bfd1b1b7d19b8e04772abaa89c02b1a686",
"MacAddress": "02:42:ac:14:00:05",
"IPv4Address": "172.20.0.5/16", <-- CIDR block
"IPv6Address": ""
},
"ea89d3089193825209d0e23c8105312e3df7ad1bea6b915ec9f9325dfd11736c": {
"Name": "apiwip_fpm_1",
"EndpointID": "dc4ecc7f0706c0586cc39dbf8a05abc9cc70784f2d44c90de2e8dbdc9148a294",
"MacAddress": "02:42:ac:14:00:04",
"IPv4Address": "172.20.0.4/16",
"IPv6Address": ""
}
},
Add the IP address from that CIDR block to the extra_hosts option:
extra_hosts:
  - myapi.docker:172.20.0.5