I am having some issues getting my containers to connect using the container name.
I have 4 different containers... app_backend, app_web, app_redis, app_db.
In my docker-compose file I have defined a network, appnet, and put all the containers on the same network. I'll show this below.
The app_backend container connects to the app_redis and app_db containers just fine with the container name as the host name. Here are example URLs that I am using: http://app_redis:6379 and http://app_db:3306.
The app_web container is refusing to connect to my app_backend unless I specify the host name as localhost. For example, http://localhost:4000 works, but http://app_backend:4000 does not.
The app_backend container is running an Express server, and I have confirmed by logging the server name/hostname that it is running at http://app_backend:4000.
If I shell into the app_web container (docker exec -it app_web bash) and run ping app_backend, I get replies. But if I include the port, ping http://app_backend:4000 or even just ping app_backend:4000, it returns Name or service not known.
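A side note on that test: ping is ICMP and has no notion of TCP ports, so ping app_backend:4000 treats the whole string as a hostname, which is why name resolution fails. A TCP-level check of the port would use something like curl instead, assuming curl is available inside the app_web image:
docker exec -it app_web curl -v http://app_backend:4000/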
Either way, my frontend is trying to request data from my Express API and I am not sure what to do to get this to work.
Previously, I'd request from the API at http://localhost:4000/api/thing-to-call. I am having some network issues with uploading files, and I think it has something to do with this being localhost. I'd like it to work like the rest of the connections, such as http://app_backend:4000/api/thing-to-call.
Thanks for taking the time to look at this and point me in the right direction.
version: '3'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_DATABASE: 'appdb'
      MYSQL_USER: 'app_user'
      MYSQL_PASSWORD: 'removed_for_this'
      MYSQL_ROOT_PASSWORD: 'removed_for_this'
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - appdb:/var/lib/mysql:Z
    networks:
      - appnet
  redis:
    build:
      context: ./Assets/Docker/redis
    image: registry.location.secret/app:redis
    command: redis-server --requirepass removed_for_this
    ports:
      - '6379:6379'
    container_name: app_redis
    volumes:
      - ./redis-data:/var/lib/redis:Z
      - ./Assets/Docker/redis/redis-dev.conf:/usr/local/etc/redis/redis.conf:Z
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - appnet
  app_backend:
    build:
      context: ./app-backend
    image: registry.location.secret/app:backend
    ports:
      - '4000:4000'
    expose:
      - '4000'
    container_name: app_backend
    volumes:
      - ./app-backend:/app:Z
      - /app/node_modules
      - ./Assets/_dev/backend/.env:/app/.env:Z
    networks:
      - appnet
  app_web:
    build:
      context: ./app-web
    image: registry.location.secret/app:web
    ports:
      - '3000:3000'
    container_name: app_web
    stdin_open: true
    volumes:
      - ./app-web:/app/:Z
      - /app/node_modules
      - ./Assets/_dev/web/.env:/app/.env:Z
    networks:
      - appnet
volumes:
  appdb:
networks:
  appnet:
Here is an example of the ping:
root@f82cc599058d:/app# ping app_backend
PING app_backend (172.19.0.2) 56(84) bytes of data.
64 bytes from app_backend.app_appnet (172.19.0.2): icmp_seq=1 ttl=64 time=0.109 ms
64 bytes from app_backend.app_appnet (172.19.0.2): icmp_seq=2 ttl=64 time=0.080 ms
^C
--- app_backend ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1056ms
rtt min/avg/max/mdev = 0.080/0.094/0.109/0.017 ms
root@f82cc599058d:/app# ping app_backend:4000
ping: app_backend:4000: Name or service not known
I have a couple of containers sitting behind an nginx reverse proxy and while nginx should and can talk to all of the containers, they shouldn't be able to talk to one another. The primary reason for wanting this is that the containers are independent of one another and in the event that one of them gets compromised, at least it won't allow access to the other ones.
Using my limited understanding of networking (even more so with Docker), I naively assumed that creating a separate network for each container and assigning all networks to the nginx container would do the job, though that doesn't seem to work. Here's what my docker-compose.yml looks like based on that attempt:
services:
  nginx:
    networks:
      - net1
      - net2
      - net3
  container1:
    networks:
      - net1
  container2:
    networks:
      - net2
  container3:
    networks:
      - net3
networks:
  net1:
    name: net1
  net2:
    name: net2
  net3:
    name: net3
Is what I'm trying to do possible? Is it even worth going through the effort of doing it for the sake of security?
Your idea really works; see the next example:
docker-compose.yaml:
version: "3.7"
services:
nginx:
image: debian:10
tty: true
stdin_open: true
networks:
- net1
- net2
container1:
image: debian:10
tty: true
stdin_open: true
networks:
- net1
container2:
image: debian:10
tty: true
stdin_open: true
networks:
- net2
networks:
net1:
name: net1
net2:
name: net2
Verify:
root@pie:~/20221014# docker-compose exec nginx ping -c 1 container1
PING container1 (172.19.0.2) 56(84) bytes of data.
64 bytes from 20221014_container1_1.net1 (172.19.0.2): icmp_seq=1 ttl=64 time=0.083 ms
--- container1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
root@pie:~/20221014# docker-compose exec nginx ping -c 1 container2
PING container2 (172.20.0.2) 56(84) bytes of data.
64 bytes from 20221014_container2_1.net2 (172.20.0.2): icmp_seq=1 ttl=64 time=0.062 ms
--- container2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
root@pie:~/20221014# docker-compose exec container1 ping -c 1 container2
ping: container2: Temporary failure in name resolution
root@pie:~/20221014# docker-compose exec container2 ping -c 1 container1
ping: container1: Temporary failure in name resolution
And the Networking in Compose documentation demonstrates an example with the same scenario as yours:
The proxy service is isolated from the db service, because they do not share a network in common - only app can talk to both.
services:
  proxy:
    build: ./proxy
    networks:
      - frontend
  app:
    build: ./app
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend
networks:
  frontend:
    # Use a custom driver
    driver: custom-driver-1
  backend:
    # Use a custom driver which takes special options
    driver: custom-driver-2
    driver_opts:
      foo: "1"
      bar: "2"
I have two docker-compose files, each containing different services. By creating an external network and assigning it in each compose file, I was able to have a shared network between the containers, and all containers are accessible via their hostnames. I can ping each container created by these separate compose files via its hostname. The problem I'm having is that I cannot curl a certain service on its exposed port. Actually, my goal is to access the REST API at myapp-backend-api:81 from myapp_frontend_php. May I know what is lacking from my setup?
Here are the results from ping and curl:
root@myapp-encryption-flask:/var/www# ping -c 3 myapp-backend-api
PING myapp-backend-api (172.18.0.7) 56(84) bytes of data.
64 bytes from abkd-proxy-1.myapp-shared-network (172.18.0.7): icmp_seq=1 ttl=64 time=0.052 ms
64 bytes from abkd-proxy-1.myapp-shared-network (172.18.0.7): icmp_seq=2 ttl=64 time=0.206 ms
64 bytes from abkd-proxy-1.myapp-shared-network (172.18.0.7): icmp_seq=3 ttl=64 time=0.060 ms
--- myapp-backend-api ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2076ms
rtt min/avg/max/mdev = 0.052/0.106/0.206/0.070 ms
root@myapp-encryption-flask:/var/www# curl myapp-backend-api:81
curl: (7) Failed to connect to myapp-backend-api port 81: Connection refused
root@myapp-encryption-flask:/var/www#
Here is compose file 1:
version: '3.7'
networks:
  default:
    external:
      name: myapp-shared-network
services:
  db:
    image: mysql:5.7
    ports:
      - "3301:3306"
    hostname: myapp-db
    restart: always
    volumes:
      - production_db_volume:/var/lib/mysql
    env_file:
      - .env.dev
  # or
  # db:
  #   image: postgres:12.5
  #   ports:
  #     - "5432:5432"
  #   restart: always
  #   volumes:
  #     - production_db_volume:/var/lib/postgresql/data/
  #   env_file:
  #     - .env.dev
  app:
    build:
      context: .
    ports:
      - "8001:8000"
    volumes:
      - production_static_data:/vol/web
    hostname: myapp-backend
    restart: always
    env_file:
      - .env.dev
    depends_on:
      - db
  proxy:
    build:
      context: ./proxy
    hostname: myapp-backend-api
    volumes:
      - production_static_data:/vol/static
    restart: always
    ports:
      - "81:80"
    depends_on:
      - app
volumes:
  production_static_data:
  production_db_volume:
And here is compose file 2:
version: '3.8'
networks:
  default:
    external:
      name: myapp-shared-network
services:
  app:
    build:
      context: ./myapp_frontend_php
      dockerfile: Dockerfile
    hostname: myapp-app
    container_name: app
    restart: always
    working_dir: /var/www/
    volumes:
      - ../src:/var/www
  nginx:
    image: nginx:1.19-alpine
    hostname: myapp-nginx
    container_name: nginx
    restart: always
    ports:
      - 80:80
    volumes:
      - ../src:/var/www
      - ./nginx:/etc/nginx/conf.d
  myapp-processor:
    build:
      context: ./myapp_encryption_flask
      dockerfile: Dockerfile
    hostname: myapp-processor
    container_name: myapp-processor
    restart: always
    ports:
      - "5000:5000"
I have a fairly simple docker-compose file, as follows:
version: "3.8"
services:
synchronisation:
build: .
command: 'python3 manage.py runserver 0.0.0.0:8000'
networks:
- syncnet
ports:
- "8000:8000"
- "8008:8008"
volumes:
- .:/var/lib/django
- packages:/usr/local/lib/python3.6/site-packages
neo4j:
image: neo4j:3.5.3
networks:
- syncnet
ports:
- "7687:7687"
- "7474:7474"
- "7473:7473"
environment:
NEO4J_AUTH: none
volumes:
packages:
driver: local
driver_opts:
type: none
device: <somefolder>
o: bind
networks:
syncnet:
name: sync_network
Now, from the synchronisation app I need to perform some requests to the neo4j app and to the 'outside world'. The connection to neo4j is successful; however, connections to the outside world seem to fail. In particular, if I get into the synchronisation container and run
ping google.com
I get the result
PING google.com (172.217.17.46) 56(84) bytes of data.
^C
--- google.com ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6149ms
I have already set
sudo sysctl -w net.inet.ip.forwarding=1
How can I get access to the 'outside world' working? It may be relevant that I am working on a Mac, 10.15.7.
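As a generic diagnostic (not specific to this compose file), it can help to separate the custom network from Docker's networking in general by pinging a raw IP from a throwaway container on the default bridge:
docker run --rm busybox ping -c 3 8.8.8.8
If that also shows 100% packet loss, outbound traffic from Docker is broken as a whole rather than just for sync_network.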
I have a dockerized Laravel application and use docker-compose to run the application. When I run the application using docker and make a simple ping API call, it takes less than 200ms to respond. But when I run it using docker-compose, it takes more than 3 seconds to respond.
I used the docker run -it --rm -p 8080:8080 senik_laravel:latest command to run the container, and here is the response time:
curl 127.0.0.1:8080/ping -w %{time_total}
The response is:
PONG
0.180260
You see that it takes 0.180260 seconds to respond.
When I run the application using the docker-compose file, it takes more than 3 seconds to respond.
curl 127.0.0.1:8080/ping -w %{time_total}
The response is:
PONG
3.834007
You see that it takes 3.834007 seconds to respond.
Here is the full docker-compose file:
version: '3.7'
networks:
  app_net:
    driver: bridge
services:
  laravel:
    build:
      context: ./laravel
      dockerfile: Dockerfile
    container_name: senik_laravel
    volumes:
      - ./laravel:/var/www/html
    working_dir: /var/www/html
    ports:
      - '80:8080'
    networks:
      - app_net
  mysql-master:
    image: 'bitnami/mysql:8.0.19'
    container_name: senik_mysql_master
    restart: always
    ports:
      - '3306:3306'
    volumes:
      - ./mysql_master_data:/bitnami/mysql
      - ./docker-configs/mysql/init:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_DATABASE=appdb
      - MYSQL_ROOT_PASSWORD=pass
      - MYSQL_AUTHENTICATION_PLUGIN=mysql_native_password
    networks:
      - app_net
  phpmyadmin:
    image: 'bitnami/phpmyadmin:latest'
    container_name: senik_phpmyadmin
    ports:
      - '8080:80'
    environment:
      DATABASE_HOST: mysql-master
      PHPMYADMIN_PASSWORD: pass
    restart: always
    volumes:
      - 'phpmyadmin_data:/bitnami'
    depends_on:
      - mysql-master
    networks:
      - app_net
volumes:
  phpmyadmin_data:
    driver: local
This ping API does not make any database calls; it just returns pong.
I've tested an API with a database call, and it takes about 19 seconds to respond.
What's wrong? Is it due to the network configuration?
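One detail worth flagging when comparing the two runs (an observation from the compose file above, not a confirmed cause): under docker run the app was published as 8080:8080, but in the compose file laravel publishes '80:8080' and phpmyadmin publishes '8080:80', so 127.0.0.1:8080 maps to a different container in each case. Timing both published ports separates the two services:
curl 127.0.0.1:80/ping -w %{time_total}
curl 127.0.0.1:8080/ping -w %{time_total}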
Here is the yml I have for docker-compose:
version: '2'
services:
  zookeeper:
    container_name: zookeeper
    image: confluentinc/cp-zookeeper:3.1.1
    ports:
      - "2080:2080"
    environment:
      - ZOOKEEPER_CLIENT_PORT=2080
      - ZOOKEEPER_TICK_TIME=2000
  kafka:
    container_name: kafka
    image: confluentinc/cp-kafka:3.1.1
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CREATE_TOPICS=Topic1:1
      - KAFKA_ZOOKEEPER_CONNECT=192.168.99.100:2080
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.99.100:9092
    depends_on:
      - zookeeper
  schema-registry:
    container_name: schema-registry
    image: confluentinc/cp-schema-registry:3.1.1
    ports:
      - "8081:8081"
    environment:
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=192.168.99.100:2080
      - SCHEMA_REGISTRY_HOST_NAME=localhost
    depends_on:
      - zookeeper
      - kafka
When I bring this stack up, the console output ends with:
schema-registry | Error while running kafka-ready.
schema-registry | org.apache.kafka.common.errors.TimeoutException: Timed out waiting for Kafka to create /brokers/ids in Zookeeper. timeout (ms) = 40000
schema-registry exited with code 1
It seems like Kafka never connects to Zookeeper, or something like that. Does anyone know why this is happening?
Does changing
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=192.168.99.100:2080
into
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2080
help?
Additionally, KAFKA_ZOOKEEPER_CONNECT=192.168.99.100:2080 should mention zookeeper as well, instead of an IP address. Or, how can you be sure of that IP address?
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.99.100:9092 mentions an IP address you might not be able to guarantee, either. Here, that IP address could be changed to kafka.
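Put together, the environment sections would then look something like this (a sketch with only the hostnames swapped; everything else as in the original file):
kafka:
  environment:
    - KAFKA_CREATE_TOPICS=Topic1:1
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2080
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
schema-registry:
  environment:
    - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2080
    - SCHEMA_REGISTRY_HOST_NAME=localhost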
I also had challenges in getting Kafka and Zookeeper to work in Docker (via Docker Compose). In the end, https://github.com/confluentinc/cp-docker-images/blob/5.0.0-post/examples/kafka-single-node/docker-compose.yml worked for me. You could use that as a source of inspiration.