I've got a small docker swarm with three nodes.
$ sudo docker node ls
ID                          HOSTNAME                                STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
jmsidw84mom3k9m4yoqc7rkj0   ip-172-31-a-x.region.compute.internal   Ready   Active                        19.03.1
qg1njgopzgiainsbl2u9bmux4 * ip-172-31-b-y.region.compute.internal   Ready   Active        Leader          19.03.1
yn9sj3sp5b3sr9a36zxpdt3uw   ip-172-31-c-z.region.compute.internal   Ready   Active                        19.03.1
And I'm running three redis containers.
$ sudo docker service ls
ID            NAME   MODE        REPLICAS  IMAGE                  PORTS
6j9mmnpgk5j4  redis  replicated  3/3       172.31.m.n:5000/redis
But I can't get redis sentinel working between them - reading the logs it looks as though there are connection failures.
Standing them up as three separate redis instances, I've been testing connectivity: from a shell on any host I can telnet to the host IP of another node and reach the service running in the container. But if I do the same from a shell inside a container, it can't connect out.
i.e.
[centos@172.31.a.x ~]$ telnet 172.31.b.y 6379
Trying 172.31.b.y...
Connected to 172.31.b.y.
Escape character is '^]'.
^CConnection closed by foreign host.
[centos@172.31.a.x ~]$ sudo docker exec -it 4d5abad441b8 sh
/ # telnet 172.31.b.y 6379
And then it hangs. Similarly, I can't telnet to google.com on 443 from within a container, but I can from the host. Curiously though, ping does get out of the container.
Any suggestions?
Ugh.
The redis side is a red herring; I can debug that now. What threw me was that telnet isn't on the container (alpine linux) by default, so installing it proved there was some connectivity - yet I couldn't telnet to the webserver port it claimed to be downloading from during the install.
Turns out there's something up with the telnet client that alpine linux installs - nmap and curl behave as expected.
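Since the alpine telnet client turned out to be the unreliable part, a plain socket connect is a more trustworthy reachability check. A minimal sketch (the commented host/port are placeholders, not addresses from this setup):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a redis port the way the telnet test above does:
# print(port_open("172.31.b.y", 6379))
```

From the shell, nc -vz HOST PORT or nmap -p PORT HOST do the same job when Python isn't on the image.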
Related
I am unable to connect (timeout) to a dockerized redis server [1], which was started with the command:
docker run -it --rm -p 6379:6379 redis:alpine
I've tried to start the server with configs:
set bind to 0.0.0.0
protected-mode no
I am however able to connect to my redis server with another docker container with the following command:
docker run -it --rm redis:alpine redis-cli -h host.docker.internal -p 6379
and I also tried setting the same parameters through the CLI.
When I try to connect, the connection times out. I tried both the internal IP
172.17.0.x
and the internal domain name
host.docker.internal
to no avail. Note that I was able to connect to the redis server when it was installed with
brew install redis
on the host.
What am I missing? How can I connect to a redis-server that is inside a docker container from the container's host?
Environment details
OS: MacOS Monterey Version: 12.6 (21G115)
Docker version 20.10.17, build 100c701
[1] More specifically, I've tried with both rdcli -h host.docker.internal in the mac terminal and on the application side with StackExchange.Redis.
More specifically I've tried with both rdcli -h host.docker.internal in the mac terminal
host.docker.internal is a DNS name for reaching the docker host from inside a container. It's bad practice to use it for one container to talk to another container. Instead, create a network, create both containers in that network, and use the container name to connect. When that's done, the port doesn't even need to be published.
From the host, that's when you connect to the published port. Since the container is deployed with -p 6379:6379, you should be able to access that port from the hostname of your host, or localhost on that host:
rdcli -h localhost
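Beyond just connecting, you can confirm from the host that redis itself is answering on the published port by speaking a minimal bit of RESP by hand. A sketch, assuming the container is published with -p 6379:6379 on localhost as in the question:

```python
import socket

def redis_ping(host="localhost", port=6379, timeout=2.0):
    """Send a RESP-encoded PING and return the raw reply line (b'+PONG' on success)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"*1\r\n$4\r\nPING\r\n")   # RESP array: one bulk string, "PING"
        return s.recv(64).split(b"\r\n")[0]
```

If this returns b'+PONG', the published port, the bind address, and the server itself are all fine, and the problem is in the client configuration.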
What could be the reason for Docker containers not being able to connect via ports to the host system?
Specifically, I'm trying to connect to a MySQL server that is running on the Docker host machine (172.17.0.1 on the Docker bridge). However, for some reason port 3306 is always closed.
The steps to reproduce are pretty simple:
Configure MySQL (or any service) to listen on 0.0.0.0 (bind-address=0.0.0.0 in ~/.my.cnf)
run
$ docker run -it alpine sh
# apk add --update nmap
# nmap -p 3306 172.17.0.1
That's it. No matter what I do it will always show
PORT STATE SERVICE
3306/tcp closed mysql
I've tried the same with an ubuntu image, a Windows host machine, and other ports as well.
I'd like to avoid --net=host if possible, simply to make proper use of containerization.
It turns out the IPs weren't correct. There was nothing blocking the ports and the services were running fine too. ping and nmap showed the IP as online but for some reason it wasn't the host system.
Lesson learned: don't rely on route in the container to return the correct host address. Instead check ifconfig or ipconfig on the Linux or Windows host respectively and pass this IP via environment variables.
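One way to get the host's outbound IP without trusting the container's route table is the UDP-getsockname trick: a UDP connect() sends no packets, it only asks the OS which interface it would route through. A sketch to run on the host; the 8.8.8.8 default is just an arbitrary routable address, and the docker run line in the comment is a hypothetical usage:

```python
import socket

def host_ip(probe_addr=("8.8.8.8", 80)):
    """Return the IP of the interface the OS would use to reach probe_addr.

    connect() on a UDP socket transmits nothing; it only selects a route,
    so getsockname() reveals the local address for that route.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(probe_addr)
        return s.getsockname()[0]

# Hypothetical usage on the host:
#   docker run -e DB_HOST="$(python3 -c 'import hostip; print(hostip.host_ip())')" my_image
```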
Right now I'm transitioning to using docker-compose and have put all required services into containers, so the host system doesn't need to get involved and I can simply rely on Docker's DNS. This is much more satisfying.
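For illustration, a minimal docker-compose sketch of that approach (service and image names here are hypothetical, not from the question); each service reaches the others by service name through Docker's embedded DNS, with no host IPs involved:

```yaml
# hypothetical docker-compose.yml: the app reaches MySQL as mysql:3306
version: "2"
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  app:
    image: alpine
    command: sh -c "ping -c1 mysql"   # resolves via Docker's DNS
    depends_on:
      - mysql
```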
I'm using Docker version 1.9.1 build a34a1d5 on an Ubuntu 14.04 server host, and I have 4 containers: redis (based on alpine linux 3.2), mongodb (based on alpine linux 3.2), postgres (based on ubuntu 14.04), and one that will run the application connecting to the other three (based on alpine linux 3.2). All of the db containers expose their corresponding ports in the Dockerfile.
I did the modifications on the database containers so their services don't bind to the localhost IP but to all addresses. This way I would be able to connect to all of them from the app container.
For the sake of testing, I first ran the database containers and then the app one with a command like the following:
docker run --rm --name app_container --link mongodb_container --link redis_container --link postgres_container -t localhost:5000/app_image
I enter the terminal of the app container and verify that its /etc/hosts file contains the IPs and names of the other containers. I can then ping all the db containers, but I cannot connect to the ports of any of them.
A simple telnet mongodb_container 27017 just sits and waits forever, and the same happens with the other db containers. If I run the application, it also complains that it cannot connect to the specified db services.
Important note: I am able to telnet the corresponding ports of all the db containers from the host.
What might be happening?
EDIT: I'll include the run commands for the db containers:
docker run --rm --name mongodb_container -t localhost:5000/mongodb_image
docker run --rm --name redis_container -t localhost:5000/redis_image
docker run --rm --name postgres_container -t localhost:5000/postgres_image
Well, the problem with telnet seems to be related to the telnet client on alpine linux since the following two commands showed me that the ports on the containers were open:
nmap -p27017 172.17.0.3
nc -vz 172.17.0.3 27017
Focused on the telnet command I had issued, I assumed the problem was the ports being closed, and I overlooked the configuration file the app was using to connect to the services (it pointed at the wrong filename). My bad.
All works fine now.
I'm trying to use Docker on a DigitalOcean droplet, running the following container from Docker Hub: np1/docker-tor-clientonly.
Per the author's instructions, I was able to run the container:
mbp:~ alexus$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bdcdabe8ab1d nagev/tor "/bin/sh -c '/usr/loc" 40 minutes ago Up 40 minutes 127.0.1.1:9150->9150/tcp tor_instance
mbp:~ alexus$
What IP address should I use to set proxy inside of my browser?
If you installed docker with Docker Toolbox, here is the fix:
# run the container with all IPs, do not limit to 127.0.1.1
$ docker run -d --name tor_instance -p 9150:9150 nagev/tor
# find out the docker IP
$ docker-machine ip default
192.168.99.100
# test the IP and port is available.
telnet 192.168.99.100 9150
Trying 192.168.99.100...
Connected to 192.168.99.100.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Then you can point your browser's SOCKS proxy at that IP and port.
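Before configuring the browser, you can confirm the container is actually speaking SOCKS5 on that port (not just accepting TCP) by sending the three-byte SOCKS5 greeting by hand. A sketch; the commented IP and port are the Toolbox values from above:

```python
import socket

def socks5_ok(host, port, timeout=3.0):
    """Send a SOCKS5 greeting offering no-auth and check the server accepts it."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"\x05\x01\x00")      # version 5, 1 method offered: 0x00 (no auth)
        reply = s.recv(2)
        return reply == b"\x05\x00"     # version 5, chosen method 0x00

# socks5_ok("192.168.99.100", 9150)
```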
I am learning cassandra 2.1.1:
I have a docker container running a cassandra node. I am able to cqlsh into the cassandra node from within the container itself; it says "127.0.0.1:4096". I am aware that is localhost and has something to do with the "listen_address" setting in cassandra.yaml.
When using the boot2docker ip address it does not work. I have started the container with -p for ports 4096 and 9160, but with no luck. I have tried changing "listen_address" to the boot2docker ip address, but I get the same error from cqlsh.
Info:
1. the cqlsh client and the cassandra node is running cassandra 2.1.1
2. I have started cassandra on the node by running ./cassandra
Any suggestions?
Thanks
This happened to me: I was able to cqlsh within the container but unable to connect from outside the docker host, until I realized that v2.1 does NOT use 9160:
From the cqlsh documentation:

Requirements
In Cassandra 2.1, the cqlsh utility uses the native protocol. In Cassandra 2.1, which uses the DataStax Python driver, the default cqlsh listen port is 9042.
In Cassandra 2.0, the cqlsh utility uses the Thrift transport. In Cassandra 2.0.x, the default cqlsh listen port is 9160.

So you should point your cqlsh client at the IP of the host containing the docker container, on port 9042 and not 9160. Use docker ps, netstat -atun | grep LIST, and telnet to confirm the right port is in LISTEN status.

docker run -d -p 9042:9042 cassandra:2.2
docker run -d -p 9042:9042 poklet/cassandra

Related: Cassandra cqlsh - connection refused
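As a quick sanity check from outside the container, a sketch that reports which of the two candidate cassandra ports (9042 native vs 9160 Thrift) actually accepts a connection; the host is whatever IP your docker host uses:

```python
import socket

def reachable_ports(host, ports=(9042, 9160), timeout=2.0):
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports
```

Whichever port comes back is the one to hand to cqlsh.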
You should set the listen_address to the IP address assigned to the container. You can find this by running docker inspect against the container from within the boot2docker VM.
Normally the container's IP address will be on the private network established by boot2docker, and so will not be accessible outside the boot2docker VM unless you route traffic through the VM something like this: https://gist.github.com/bhyde/be920f1a390db5f4148e