docker-compose docker containers connection refused

I have a container running a service on port 5001, and I am trying to access that service from another container. Whenever I try this I get a connection refused error. I have my docker-compose.yaml file configured like below.
version: "3"
services:
server:
build:
context: ./server
ports:
- 5001:5001
networks:
- local
client:
build:
context: ./client/wanderingreader
ports:
- 3000:3000
depends_on:
- server
networks:
- local
networks:
local:
driver: bridge
I am able to ping the server from the client using ping server, but nmap shows all ports are closed.
/wanderingreader # nmap server
Starting Nmap 7.92 ( https://nmap.org ) at 2022-06-26 17:54 UTC
Nmap scan report for server (172.22.0.2)
Host is up (0.0000060s latency).
rDNS record for 172.22.0.2: wanderingreader-server-1.wanderingreader_local
All 1000 scanned ports on server (172.22.0.2) are in ignored states.
Not shown: 1000 closed tcp ports (reset)
MAC Address: 02:42:AC:16:00:02 (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 0.32 seconds
This is the output of a docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b0c46283abb wanderingreader_client "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:3000->3000/tcp, 5001/tcp wanderingreader-client-1
8d96da5e2629 wanderingreader_server "/bin/sh -c 'python …" 5 minutes ago Up 5 minutes 0.0.0.0:5001->5001/tcp wanderingreader-server-1
I have tried other solutions, but I can't seem to get it to work properly. Any help is greatly appreciated.

Related

Why does my docker-compose port config work?

I'm in the process of creating a docker-compose config which runs:
a node.js server, and
a separate postgres server.
Tutorials emphasise that postgres port 5432 must be exposed or forwarded so that the node container can access it, as set up in the docker-compose.yml below.
version: "3.7"
services:
db:
container_name: db
image: postgres:alpine
ports:
- "5010:5432"
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: verysecretpass
POSTGRES_DB: pg-dev
server:
container_name: dashboard-api
build: .
volumes:
- .:/server
ports:
- "5000:5000"
This produces the below docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0d790cd4929e server_server "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 0.0.0.0:5000->5000/tcp dashboard-api
818296c1fc02 postgres:alpine "docker-entrypoint.s…" 7 minutes ago Up 4 minutes 0.0.0.0:5010->5432/tcp pg
In the above state, node gets ECONNREFUSED when attempting to connect with this URL: postgres://postgres:verysecretpass@db:5010/pg-dev
Yet, the same connection string can connect when using 5432 instead of 5010.
In fact, using 5432, the connection succeeds even when the pg container has no port configuration whatsoever. The docker ps output below reflects the no-port-config state in which the node container can happily connect:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
76da96c15c05 server_server "docker-entrypoint.s…" 7 seconds ago Up 7 seconds 0.0.0.0:5000->5000/tcp dashboard-api
51c221ac2c54 postgres:alpine "docker-entrypoint.s…" 8 seconds ago Up 7 seconds 5432/tcp db
Why does this work? What am I missing here?
Using:
Docker version 20.10.0, build 7287ab3
docker-compose version 1.27.4, build 40524192
Unless otherwise configured, the services in a docker-compose file are automatically attached to a shared default network, and within that network there is no need to publish any ports. Container-to-container connections always use the container's own port (5432 here); the host-side port of a mapping (5010) exists only on the host.
If you want to expose ports on a container to the outside world, you need to explicitly map them as you did. This, however, does not change anything for communication between services on the same network. If you have no reason to access the database from outside the network (e.g. to inspect data with a DB tool on your own machine), you don't have to map or expose any ports of the db container.
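Concretely, with the credentials from the compose file above, the two connection strings would look like this (the first is what the node container should use; the second is only for tools running on the host):
# from the node container (same compose network): service name + container port
postgres://postgres:verysecretpass@db:5432/pg-dev
# from the host machine (e.g. a local psql or GUI client): localhost + published port
postgres://postgres:verysecretpass@localhost:5010/pg-dev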

Unable to create Kafka topics with Kafka and Zookeeper running on Docker

I have Kafka and Zookeeper running on two separate Docker containers:
<private-domain>/wurstmeister-kafka:0.10.1.0-2
<private-domain>/wurstmeister-zookeeper:3.4.9
Both containers seem to be up, but when I try to create Kafka topics by getting into the first container:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
I get this error:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2020-06-07 03:10:55,293] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
Please note that I did read other related questions and tried adding arguments to the command, such as -e ZK_HOSTS="localhost:2181". I know of other people working in the same environment as mine who were able to run the commands successfully, so I suspect this might be a configuration issue on my side. Can you please advise?
EDIT: Here are the Docker Compose files:
version: '2'
services:
  kafka:
    image: <private-domain>/wurstmeister-kafka:0.10.1.0-2
    container_name: kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: 127.0.0.1:2181
    restart: "unless-stopped"
and
version: '2'
services:
  zk:
    image: <private-domain>/wurstmeister-zookeeper:3.4.9
    container_name: zk
    ports:
      - "2181:2181"
    restart: "unless-stopped"
and the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bf67a49da57a wurstmeister-kafka:0.10.1.0-2 "start-kafka.sh" 5 months ago Up 29 minutes 0.0.0.0:9092->9092/tcp kafka
ef3e908d82b3 wurstmeister-zookeeper:3.4.9 "/bin/sh -c '/usr/sbin/sshd && bash /usr/bin/start-zk.sh'" 5 months ago Up 29 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp zk
You have two Compose files, so your containers end up on separate networks and cannot reach each other.
You must put both services in one file, under one services: block, and run a single docker-compose up command.
You can find working compose files all over the internet, or you could use minikube / oc with Kafka Helm charts or operators, which is how large companies test Kafka in containers.
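A minimal merged sketch, reusing the images and ports from the two files above; the only substantive change is KAFKA_ZOOKEEPER_CONNECT pointing at the zk service by name, which is an assumption about the intended setup:
version: '2'
services:
  zk:
    image: <private-domain>/wurstmeister-zookeeper:3.4.9
    container_name: zk
    ports:
      - "2181:2181"
    restart: "unless-stopped"
  kafka:
    image: <private-domain>/wurstmeister-kafka:0.10.1.0-2
    container_name: kafka
    depends_on:
      - zk
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ADVERTISED_PORT: 9092
      # point Kafka at the Zookeeper container by its compose service name
      KAFKA_ZOOKEEPER_CONNECT: zk:2181
    restart: "unless-stopped"
From inside the kafka container, the topic command from the question can then resolve the service name as well: bin/kafka-topics.sh --create --zookeeper zk:2181 --replication-factor 1 --partitions 1 --topic test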

iOS postgres in docker-compose with port mapping

I have the following docker compose:
version: '3'
services:
  postgres:
    container_name: test-psql
    image: postgres:10.4-alpine
    restart: always
    ports:
      - 5432:5432
    volumes:
      - ~/docker_data/postgres:/data/postgres
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: test
I start everything with docker-compose up and then docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4cae94afaa6 postgres:10.4-alpine "docker-entrypoint.s…" 48 seconds ago Up 47 seconds 0.0.0.0:5432->5432/tcp test-psql
But when trying to connect to it with psql
psql -h localhost -U test
I get the error
psql: could not connect to server: Connection refused
Is the server running on host “localhost” (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host “localhost” (127.0.0.1) and accepting
TCP/IP connections on port 5432?
What you're trying is correct - I tested it on my end just in case and it's working as intended. I'd suggest checking your firewall configuration.
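For what it's worth, a few host-side checks (standard Docker and shell tools; the container name comes from the compose file above) can help narrow down where the refusal happens:
docker logs test-psql     # did postgres finish initializing, or did it exit early?
docker port test-psql     # is 5432 actually published as 0.0.0.0:5432?
nc -vz localhost 5432     # is anything on the host answering on that port?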

Docker Compose: Expose not working

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83b1503d2e7c app_nginx "nginx -g 'daemon ..." 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp app_nginx_1
c9dd2231e554 app_web "/home/start.sh" 2 hours ago Up 2 hours 8000/tcp app_web_1
baad0fb1fabf app_gremlin "/start.sh" 2 hours ago Up 2 hours 8182/tcp app_gremlin_1
b663a5f026bc postgres:9.5.1 "docker-entrypoint..." 25 hours ago Up 2 hours 5432/tcp app_db_1
They all work fine:
app_nginx connects well with app_web
app_web connects well with postgres
Not working:
app_web is not able to connect with app_gremlin
docker-compose.yaml
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh
Errors:
Basically I am not able to connect to the gremlin container from my app_web container.
All commands below were executed inside the app_web container.
curl:
root@49a8f08a7b82:/# curl 0.0.0.0:8182
curl: (7) Failed to connect to 0.0.0.0 port 8182: Connection refused
netstat
root@49a8f08a7b82:/# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.11:42681 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
udp 0 0 127.0.0.11:54232 0.0.0.0:*
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
nmap
root@49a8f08a7b82:/# nmap -p 8182 0.0.0.0
Starting Nmap 7.60 ( https://nmap.org ) at 2018-06-22 09:28 UTC
Nmap scan report for 0.0.0.0
Host is up.
PORT STATE SERVICE
8182/tcp filtered vmware-fdm
Nmap done: 1 IP address (1 host up) scanned in 2.19 seconds
nslookup
root@88626de0c056:/# nslookup app_gremlin_1
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: app_gremlin_1
Address: 172.19.0.3
Experimenting:
For the Gremlin container I added:
ports:
  - "8182:8182"
Then from the host I can connect to the gremlin container, BUT there is still no connection between the web and gremlin containers.
I am working on a minimal sample Docker setup to recreate the issue; meanwhile, does anyone have an idea what the issue might be?
curl 0.0.0.0:8182
The 0.0.0.0 address is a wildcard that tells an app to listen on all network interfaces; you do not connect to this address as a client. For container-to-container communication, you need:
containers on the same user generated network (compose does this for you by default)
connect to the name of the service (or container name)
connect to the port inside the other container, not the published port.
In your case, the command should be:
curl http://gremlin:8182
Networking is namespaced for apps running inside containers, so each container gets its own loopback interface and its own IP address on a bridge network. Moving an app into containers therefore means you need to listen on 0.0.0.0 and connect to the other container's bridge IP, which Docker's embedded DNS resolves from the service name.
You should also remove links and depends_on from your compose file; they don't apply in version 3. Links have long since been deprecated in favor of shared networks, and depends_on doesn't work in swarm mode and probably doesn't do what you want anyway, since it only waits for the target container to be started, not for the application inside it to be ready.
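If you do want Compose to wait until the other service is actually ready, the usual alternative is a healthcheck plus the long depends_on syntax. A minimal sketch, assuming a Compose version that supports condition: service_healthy (Compose file format 2.1, or a recent docker compose implementing the Compose Specification); the readiness probe itself is hypothetical and should be replaced with whatever fits the gremlin image:
services:
  gremlin:
    build: ./gremlin
    command: /start.sh
    healthcheck:
      # illustrative probe: succeed once something is listening on 8182
      test: ["CMD-SHELL", "nc -z localhost 8182 || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    build: .
    command: /home/start.sh
    depends_on:
      gremlin:
        condition: service_healthy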
One last note: expose doesn't affect the ability to communicate between containers on common networks, nor does it publish ports on the host. Expose simply sets metadata on the image that serves as documentation between the person creating the image and the person running it. Applications are not required to use that value, but it's a good habit to make your app default to it for the benefit of downstream users. Because of this role, unless you have another app that checks the exposed port list, like a self-updating reverse proxy, there's no need to expose the port in the compose file unless you're handing the compose file to someone else and they need the documentation.
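As a quick check from the host, you can run the same curl inside the web container via Compose (assuming the service is named web as in the file above, and that curl is installed in that image, which the output earlier suggests):
docker-compose exec web curl -v http://gremlin:8182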
There is no link configured in the docker-compose.yaml between web and gremlin. Try to use the following:
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    links:
      - gremlin
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh

docker compose up nginx reverse proxy not adding containers to docker0 bridge

After I run docker-compose up, it starts the containers.
When I do docker ps I get the output below, which tells me that the containers are running. However, when I do docker network inspect bridge the result shows that no containers are part of the docker0 bridge.
When I then run docker run meanchat_myserver, it actually does show up on docker0 and I also see that the server is running on port 3000, which I don't get when using docker-compose.
What am I doing wrong here?
I have read that with docker0 I can only refer to other containers by IP, not by name. Can I assume the containers' IPs don't change, and that this works without issue when deploying the app in production?
02cf08b1c3da d57f06ba9c68 "npm start" 33 minutes ago Up 33 minutes 4200/tcp meanchat_client_1
e257063c9e21 meanchat_myserver "npm start" 33 minutes ago Up 33 minutes 3000/tcp meanchat_myserver_1
02441c2e43f5 e114a298eabd "npm start" About an hour ago Up 33 minutes 0.0.0.0:80->80/tcp meanchat_nginx_1
88d9841d2553 mongo "docker-entrypoint..." 3 hours ago Up 3 hours 27017/tcp meanchat_mongo_1
compose
version: '3'
services:
  # Build the container using the client Dockerfile
  client:
    build: ./
    # This line maps the contents of the client folder into the container.
    volumes:
      - ./:/usr/src/app
  myserver:
    build: ./express-server
    volumes:
      - ./:/usr/src/app
    depends_on:
      - mongo
  nginx:
    build: ./nginx
    # Map Nginx port 80 to the local machine's port 80
    ports:
      - "80:80"
    # Link the client container so that Nginx will have access to it
  mongo:
    environment:
      - AUTH=yes
      - MONGO_INITDB_ROOT_USERNAME=superAdmin
      - MONGO_INITDB_ROOT_PASSWORD=admin123
      - MONGO_INITDB_DATABASE=d0c4ae452a5c
    image: mongo
    volumes:
      - /var/mongodata/data:/data/db
By default, Compose sets up a single network for your app.
For more detail, see the Compose networking documentation.
This means containers started with Compose are not placed on the default bridge network (docker0).
You can check which network a Compose-managed container is attached to with:
docker inspect $container_name -f "{{.NetworkSettings.Networks}}"
However, if you want containers to be on the default bridge network, you can use network_mode:
services:
  service_name:
    # other options....
    network_mode: bridge
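For example, judging by the container names above the Compose project appears to be called meanchat, so the generated network would typically be named meanchat_default (an assumption; docker network ls shows the real name):
docker network ls                         # look for the <project>_default network
docker network inspect meanchat_default   # hypothetical name; lists the attached containers and their IPs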
