Cannot connect to docker container (redis) in host mode - docker

This is probably related to WSL in general, but Redis is my use case.
This works fine and I can connect like:
docker exec -it redis-1 redis-cli -c -p 7001 -a Password123
But I cannot make any connections from my local Windows PC to the container. I get:
Could not connect: Error 10061 connecting to host.docker.internal:7001. No connection could be made because the target machine actively refused it.
This is the same error I get when the container isn't running, so I'm not sure whether it's a Docker issue or a WSL one.
version: '3.9'
services:
  redis-cluster:
    image: redis:latest
    container_name: redis-cluster
    command: redis-cli -a Password123 -p 7001 --cluster create 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006 --cluster-replicas 1 --cluster-yes
    depends_on:
      - redis-1
      - redis-2
      - redis-3
      - redis-4
      - redis-5
      - redis-6
    network_mode: host
  redis-1:
    image: "redis:latest"
    container_name: redis-1
    network_mode: host
    entrypoint: >
      redis-server
      --port 7001
      --appendonly yes
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --masterauth Password123
      --requirepass Password123
      --bind 0.0.0.0
      --protected-mode no
  # Five more services the same as redis-1 above

According to the provided docker-compose.yml file, no container ports are published, so they are unreachable from the outside (your Windows/WSL host). See the official Compose reference on ports and the Docker networking documentation for more on Docker and ports.
As an example, for the redis-1 service you would add the following to its definition:
...
  redis-1:
    ports:
      - 7001:7001
...
The docker exec ... command works because, from inside the container, the port is reachable.
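One way to apply this, as a sketch rather than a drop-in fix (Docker does not honor ports: together with network_mode: host), is to drop host networking for redis-1 and publish the port on the default bridge network instead; the values below simply mirror the original definition:
  redis-1:
    image: "redis:latest"
    container_name: redis-1
    ports:
      - 7001:7001
    entrypoint: >
      redis-server
      --port 7001
      --appendonly yes
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --masterauth Password123
      --requirepass Password123
      --bind 0.0.0.0
      --protected-mode no
After recreating the container, a connection from the Windows side should then succeed, e.g. redis-cli -h localhost -p 7001 -a Password123 ping (assuming redis-cli is available there).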

Related

How to use redis-cli with Redis on Docker?

Run Redis in Docker
File docker-compose.yml
version: "3.8"
services:
redis:
image: redis
volumes:
- ./data:/data
ports:
- 6379:6379
Pull the image, start the stack (in the foreground or detached), check that the container is running, and confirm the port answers:
docker pull redis
docker-compose up
docker-compose up -d
docker container ls
telnet localhost 6379
How to connect to the Redis server with redis-cli?
You can attach to the Docker container and use redis-cli directly.
This command will attach you to the container's shell:
docker exec -it CONTAINER_ID /bin/sh
Then you can use redis-cli just as if it were installed directly on the host system.
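Because the compose file above publishes port 6379, you can also connect without entering the container at all; a quick sketch, assuming redis-cli is installed on the host (the second form is for a Linux host, where --network host applies):
# directly from the host
redis-cli -h 127.0.0.1 -p 6379 ping
# or via a throwaway client container
docker run --rm --network host redis redis-cli -h 127.0.0.1 -p 6379 ping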

My docker cannot connect to the internet - not even with proper dns

I am running a Docker container on CentOS Linux release 8.2.2004. The CentOS server itself has a stable internet connection. Ultimately I am trying to start a container with the following docker-compose.yaml:
version: '3'
services:
  postgres:
    restart: always
    image: postgres:12.1
    environment:
      POSTGRES_PASSWORD: myusername
      POSTGRES_USER: mypass
    networks:
      - myname
    volumes:
      - /home/user/data/myname:/var/lib/postgresql/data
    ports:
      - 5432:5432
  web:
    restart: always
    build: .
    environment:
      JPDA_ADDRESS: 8001
      JPDA_TRANSPORT: dt_socket
    networks:
      - myname
    depends_on:
      - postgres
    ports:
      - 80:8080
      - 8001:8001
    volumes:
      - /home/user/data/images:/data/images
networks:
  myname:
    driver: bridge
But upon docker-compose build, the Maven repositories cannot be reached (due to a missing internet connection from within the container). Adding dns rules to the YAML doesn't change anything, and neither does setting network_mode: "host".
When I try to execute
docker run --dns 8.8.8.8 busybox nslookup google.com
;; connection timed out; no servers could be reached
Upon trying a normal ping, the connection fails too
docker run -it busybox ping -c 1 8.8.8.8
1 packets transmitted, 0 packets received, 100% packet loss
However
docker run --rm -it busybox ping 172.17.0.1
seems to work just fine.
docker run --net=host -it busybox ping -c 1 8.8.8.8
Works too
How can I get Docker to connect to the internet?
Looks like it is a known issue with busybox. Check this thread: nslookup can not get service ip on latest busybox
In short, you must use a busybox version before 1.28.4.
I just ran the following command on a CentOS 7 with Docker 19 and it worked fine:
# docker run --dns 8.8.8.8 busybox:1.28.0 nslookup google.com
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
Name: google.com
Address 1: 2a00:1450:4016:807::200e muc11s04-in-x0e.1e100.net
Address 2: 216.58.207.174 muc11s04-in-f14.1e100.net
I had to rewrite the build section in the docker-compose.yaml to the following in order to fix the problem
build:
  context: .
  network: host
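For reference, the same workaround outside Compose would be the --network flag of a plain docker build; a minimal sketch (the tag myapp is only a placeholder):
docker build --network host -t myapp .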

How can I verify that Cassandra is working?

I have installed 3 Docker containers with the docker-compose.yml below
version: '3'
services:
  nginx:
    image: nginx:alpine
    volumes:
      - ./app:/app
      - ./nginx-config/:/etc/nginx/conf.d/
    ports:
      - 80:80
    depends_on:
      - php
  php:
    image: php:7.1-fpm-alpine
    volumes:
      - ./app:/app
  cassandra:
    image: 'docker.io/bitnami/cassandra:3-debian-10'
    ports:
      - '7000:7000'
      - '9042:9042'
    volumes:
      - ./app:/app
    environment:
      - CASSANDRA_SEEDS=cassandra
      - CASSANDRA_PASSWORD_SEEDER=yes
      - CASSANDRA_PASSWORD=cassandra
My question is: when I enter localhost:7000 or even localhost:9042 in the browser, nothing works.
All containers are running fine when I run docker ps.
Neither of the ports you have tried in the browser is an HTTP port.
- '7000:7000'
- '9042:9042'
By default, Cassandra uses 7000 for cluster communication (7001 if SSL is enabled), 9042 for native protocol clients, and 7199 for JMX. The internode communication and native protocol ports are configurable in the Cassandra Configuration File. The JMX port is configurable in cassandra-env.sh (through JVM options). All ports are TCP.
Cassandra Ports
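For reference, those ports correspond to the following settings, shown here with their default values; this is only an illustrative sketch of the relevant keys:
# cassandra.yaml (defaults)
storage_port: 7000
ssl_storage_port: 7001
native_transport_port: 9042
# cassandra-env.sh (default)
JMX_PORT="7199"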
You can verify Cassandra's status and connectivity from inside the container, or install a client on the host to check connectivity from the outside.
Run docker ps and copy the Cassandra container name, then run the command below.
docker exec -it container_name bash -c "cqlsh -u cassandra -p cassandra"
You can expect output like
[cqlsh 5.0.1 | Cassandra 3.11.6 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
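As a further check, nodetool can report cluster status from inside the container, and because port 9042 is published, a cqlsh installed on the host should be able to reach it as well; a sketch, assuming the image keeps nodetool on its PATH and the cassandra/cassandra credentials from the compose file:
docker exec -it container_name nodetool status
cqlsh 127.0.0.1 9042 -u cassandra -p cassandra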

Is it possible to curl across Docker networks between two docker-compose.yaml files?

I have 2 applications running on different networks, each using its own docker-compose.yaml. I am trying to make a request from app A to app B, but it does not work.
docker exec -it app_a_running curl http://localhost:8012/user/1
So I got this error:
cURL error 7: Failed to connect to localhost port 8012
docker-compose-app-a.yaml
version: "3"
services:
  app:
    build: go/
    restart: always
    ports:
      - 8011:8011
    volumes:
      - ../src/app:/go/src/app
    working_dir: /go/src/app
    container_name: app-a
    command: sleep 72000
    networks:
      - app-a-network
networks:
  app-a-network:
docker-compose-app-b.yaml
version: "3"
services:
  app:
    build: go/
    restart: always
    ports:
      - 8012:8012
    volumes:
      - ../src/app:/go/src/app
    working_dir: /go/src/app
    container_name: app-b
    command: sleep 72000
    networks:
      - app-b-network
networks:
  app-b-network:
Questions:
Is it possible to do this?
If so, please suggest how. :)
You can use curl with Docker containers. The reason your curl command didn't work is probably that you did not publish the container's port. For example, try:
docker run -d -p 8080:8080 tomcat
instead of
docker run -d tomcat
This will forward the port 8080 of your machine to the port 8080 of your container.
If you have a shell in your container, you can use the service name or the container name to curl another container on your Docker network, provided the target is on the same network.
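In this particular setup the two compose files create separate networks (app-a-network and app-b-network), so the container names will not resolve across them. One possible approach, a sketch rather than the only option, is to create a shared external network and attach both app services to it:
docker network create shared-net

# in each docker-compose file, declare the external network
# and list it under the app service's networks:
services:
  app:
    networks:
      - shared-net
networks:
  shared-net:
    external: true

Then, from app A, curl app B by its container name and internal port:
docker exec -it app-a curl http://app-b:8012/user/1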

Redis connection refused between Vagrant and Docker

I have a docker-compose file like this:
version: '3.5'
services:
  RedisServerA:
    container_name: RedisServerA
    image: redis:3.2.11
    command: "redis-server --port 26379"
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:26379
    expose:
      - 26379
  RedisServerB:
    container_name: RedisServerB
    image: redis:3.2.11
    command: "redis-server --port 6379"
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
    expose:
      - 6379
Now I do a vagrant ssh and do
ping RedisServerA
ping RedisServerB
They both work.
Now I try to connect to the redis server:
redis-cli -h RedisServerB
Works fine
Then I try to connect to the other:
redis-cli -h RedisServerA -p 26739
It says:
Could not connect to Redis at RedisServerA:26739: Connection refused
Could not connect to Redis at RedisServerA:26739: Connection refused
Twice.
What am I missing here?
Typically in this setup you'd let each container run on its "natural" port. For connections from outside Docker you need the ports: mapping, and you'd access a container via its published port on the host's IP address. For connections between Docker containers (assuming they're on the same network, and if you used bare docker run, you manually created that network), you use the container name and the container's internal port number.
We can clean up the docker-compose.yml file by removing some unnecessary lines (container_name: and expose: don't really have a practical effect) and letting the image run its default command: on the default port, and only remapping with ports:. We'd get:
version: '3.5'
services:
  RedisServerA:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:6379
  RedisServerB:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
Between containers, you'd use the default port:
redis-cli -h RedisServerA
redis-cli -h RedisServerB
From outside Docker you'd use the server's host name and the published ports:
redis-cli -h server.example.com -p 26379
redis-cli -h server.example.com
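And from inside the Vagrant VM, where both ports are published on localhost, a quick sanity check might look like this (assuming redis-cli is installed in the VM):
redis-cli -h localhost -p 26379 ping
redis-cli -h localhost -p 6379 ping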
