Redis connection refused between Vagrant and Docker

I have a docker-compose.yml like this:
version: '3.5'
services:
  RedisServerA:
    container_name: RedisServerA
    image: redis:3.2.11
    command: "redis-server --port 26379"
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:26379
    expose:
      - 26379
  RedisServerB:
    container_name: RedisServerB
    image: redis:3.2.11
    command: "redis-server --port 6379"
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
    expose:
      - 6379
Now I do a vagrant ssh and run:
ping RedisServerA
ping RedisServerB
They both work.
Now I try to connect to the redis server:
redis-cli -h RedisServerB
Works fine
Then I try to connect to the other
redis-cli -h RedisServerA -p 26739
It says:
Could not connect to Redis at RedisServerA:26739: Connection refused
Could not connect to Redis at RedisServerA:26739: Connection refused
Twice.
What am I missing here?

Typically in this setup you'd let each container run on its "natural" port. For connections from outside Docker you need the ports: mapping, and you'd access a container via its published port on the host's IP address. For connections between Docker containers (assuming they're on the same network; with bare docker run you'd have created that network manually), you use the container name and the container's internal port number.
We can clean up the docker-compose.yml file by removing some unnecessary lines (container_name: and expose: have no practical effect here) and letting the image run its default command: on the default port, remapping only with ports:. We'd get:
version: '3.5'
services:
  RedisServerA:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:6379
  RedisServerB:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
Between containers, you'd use the default port:
redis-cli -h RedisServerA
redis-cli -h RedisServerB
From outside Docker you'd use the server's host name and the published ports:
redis-cli -h server.example.com -p 26379
redis-cli -h server.example.com
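For comparison, the bare docker run equivalent of this cleaned-up setup would be roughly the following (a sketch; the network name redis-net is illustrative):
docker network create redis-net
docker run -d --name RedisServerA --network redis-net -v "$PWD/../docker/redis/RedisServerA:/data" -p 26379:6379 redis:3.2.11
docker run -d --name RedisServerB --network redis-net -v "$PWD/../docker/redis/RedisServerB:/data" -p 6379:6379 redis:3.2.11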

Related

Cannot connect to docker container (redis) in host mode

This is probably just related to WSL in general, but Redis is my use case.
The setup works fine and I can connect like this:
docker exec -it redis-1 redis-cli -c -p 7001 -a Password123
But I cannot make any connections from my local Windows PC to the container. I get:
Could not connect: Error 10061 connecting to host.docker.internal:7001. No connection could be made because the target machine actively refused it.
This is the same error I get when the container isn't running, so I'm not sure if it's a Docker issue or a WSL one.
version: '3.9'
services:
  redis-cluster:
    image: redis:latest
    container_name: redis-cluster
    command: redis-cli -a Password123 -p 7001 --cluster create 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006 --cluster-replicas 1 --cluster-yes
    depends_on:
      - redis-1
      - redis-2
      - redis-3
      - redis-4
      - redis-5
      - redis-6
    network_mode: host
  redis-1:
    image: "redis:latest"
    container_name: redis-1
    network_mode: host
    entrypoint: >
      redis-server
      --port 7001
      --appendonly yes
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --masterauth Password123
      --requirepass Password123
      --bind 0.0.0.0
      --protected-mode no
  # Five more the same as the above
According to the provided docker-compose.yml file, the container ports are not published, so they are unreachable from the outside (your Windows/WSL host).
As an example, for the redis-1 service you should add the following to the definition:
...
  redis-1:
    ports:
      - 7001:7001
...
The docker exec ... command works because that port is reachable from inside the container.
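Once the port is published, a quick check from the Windows side might look like this (assuming redis-cli is installed there; the host name is the one from the question):
redis-cli -h host.docker.internal -p 7001 -a Password123 ping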

Private "host" for docker compose network

Given a docker-compose file something like this:
version: "3.8"
services:
  service-one:
    ports:
      - "8881:8080"
    image: service-one:latest
  service-two:
    ports:
      - "8882:8080"
    image: service-two:latest
what happens is that service-one is exposed on the host network at port 8881 and service-two at port 8882.
What I'd like to arrange is that, within the network created for the docker-compose project, there is a "private host" on which service-one is exposed at port 8881 and service-two at port 8882, such that any container in the docker-compose network can connect to the "private host" and reach the services on their configured HOST_PORT, but not on the actual Docker host. That is, whatever network configuration usually bridges from the CONTAINER_PORT to the HOST_PORT should happen privately within the docker-compose network, without the opportunity for port conflicts on the actual host network.
I tweaked this to fit your case. The idea is to run socat in a gateway container so that neither the containers nor the images change (only the service names do). So, from service-X-backend you are able to connect to:
service-one on port 8881, and
service-two on port 8882
Tested with nginx containers.
If you wish to make some ports public, you need to publish them from the gateway itself.
version: "3.8"
services:
service-one-backend:
image: service-one:latest
networks:
- gw
service-two-backend:
image: service-two:latest
networks:
- gw
gateway:
image: debian
networks:
gw:
aliases:
- service-one
- service-two
depends_on:
- service-one-backend
- service-two-backend
command: sh -c "apt-get update
&& apt-get install -y socat
&& nohup bash -c \"socat TCP-LISTEN: 8881,fork TCP:service-one-backend:8080 2>&1 &\"
&& socat TCP-LISTEN: 8882,fork TCP:service-two-backend:8080"
networks:
gw:
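Once it is up, a quick check from one of the backends might look like this (a sketch; assumes curl is available in the backend image):
docker compose exec service-one-backend curl -s http://service-two:8882/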

Is it possible to curl across docker network via docker-compose between 2 docker-compose.yaml?

I have 2 applications running on different networks, each using a separate docker-compose.yaml. I am trying to make a request from app A to app B, but it doesn't work.
docker exec -it app_a_running curl http://localhost:8012/user/1
So I get an error:
cURL error 7: Failed to connect to localhost port 8012
docker-compose-app-a.yaml
version: "3"
services:
app:
build: go/
restart: always
ports:
- 8011:8011
volumes:
- ../src/app:/go/src/app
working_dir: /go/src/app
container_name: app-a
command: sleep 72000
networks:
- app-a-network
networks:
app-a-network:
docker-compose-app-b.yaml
version: "3"
services:
app:
build: go/
restart: always
ports:
- 8012:8012
volumes:
- ../src/app:/go/src/app
working_dir: /go/src/app
container_name: app-b
command: sleep 72000
networks:
- app-b-network
networks:
app-b-network:
Questions:
Is it possible to do this?
If it is possible, please suggest how :)
You can use curl on docker containers. The reason why your curl command didn't work is probably that you did not publish your docker container's port. For example, try:
docker run -d -p 8080:8080 tomcat
instead of
docker run -d tomcat
This will forward port 8080 of your machine to port 8080 of your container.
If you have a shell in your container, you can use the service name or the container's name to curl a container on your Docker network, provided your target is on the same network.
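Since the two apps here live in separate compose projects, one way to make the service-name approach work is a pre-created external network (a sketch; the network name shared-net is illustrative):
docker network create shared-net
Then, in each docker-compose file, declare the network as external:
networks:
  app-b-network:
  shared-net:
    external: true
and attach the service to it:
    networks:
      - app-b-network
      - shared-net
After that, running curl http://app-b:8012/user/1 from inside app A should reach app B by container name.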

How to access docker container using localhost address

I am trying to access a docker container from another container using localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
On my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside the django_container, the same command gives a Connection refused error.
I tried adding them to the same network; the result still didn't change.
If I execute it with the internal IP of that container, like curl 'http://172.27.0.2:8123/', I get the response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With this line, - "8124:8123", you're mapping the clickhouse container's port 8123 to localhost port 8124, which allows you to access clickhouse from localhost at port 8124.
If you want to hit the clickhouse container from within the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes as above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
As in @Billy Ferguson's answer, you can visit it using localhost from the host machine just because you define a port mapping that routes localhost:8124 to clickhouse:8123.
But from another container (django) you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, but then the django container will share all of the host's network.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    network_mode: "host"
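With host networking, the django container shares the host's network stack, so (assuming the clickhouse port mapping above is unchanged) the original command should now work from inside it:
curl http://localhost:8124/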
It depends on the config.xml settings. If config.xml has <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.

Access endpoint in one docker container from another

I have a docker-compose file with two services: app and httpd
app
app:
  image: primus852/machinelearning:latest
  ports:
    - 5001:5000
  expose:
    - "5001"
  restart: always
  networks:
    - default
  volumes:
    - ./api:/app
  environment:
    - FLASK_APP=app/source/__init__.py
    - FLASK_ENV=development
httpd
httpd:
  image: primus852/mitswiki:latest
  ports:
    - 80:80
  restart: always
  networks:
    - default
  volumes:
    - ./project:/var/www/html
Flask app
The app container has an endpoint like this:
@app.route('/predict', methods=['GET'])
def predict():
    ...DO STH....
I can open http://localhost:5001/predict in my browser, works...
I can curl from my cmd: curl localhost:5001/predict, works...
But when I am inside my httpd container this does not work from the console: curl localhost:5001/predict
curl: (7) Failed to connect to localhost port 5001: Connection refused
So I thought I'd address the app container the same way I address my MySQL from inside the httpd container: curl app:5001/predict, but it gives the same result.
Can anyone see what I am doing wrong?
According to your yaml:
ports:
  - 5001:5000
Inside the Docker network you have to use port 5000, since that is the container's internal port.
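For instance, from inside the httpd container the request would be (service name taken from the compose file):
curl http://app:5000/predict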
Inside the httpd container, localhost refers to just that httpd container. It cannot access other containers by default.
Another thing which might be occurring is that your app is not open for 'remote' access. A connection from one container to another counts as a remote connection.
Within your docker-compose files you can link containers to each other.
While the containers are linked, you can then curl the /predict page with curl app:5000/predict.
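On the 'remote' access point: the Flask development server binds to 127.0.0.1 by default, so it would refuse connections from other containers even when the networking is right. A sketch of making it listen on all interfaces (the run command is illustrative, not from the question):
flask run --host=0.0.0.0 --port=5000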
