cqlsh to Cassandra single node running in docker - docker

I am learning Cassandra 2.1.1:
I have a Docker container running a Cassandra node. I am able to cqlsh into the node from within the container itself; it says "127.0.0.1:4096". I am aware that this is localhost and has something to do with the "listen_address" setting in cassandra.yaml.
When using the boot2docker IP address it does not work. I have started the container with -p for ports 4096 and 9160, but with no luck. I have also tried changing "listen_address" to the boot2docker IP address, but I get the same error from cqlsh.
Info:
1. the cqlsh client and the Cassandra node are both running Cassandra 2.1.1
2. I have started cassandra on the node by running ./cassandra
Any suggestions?
Thanks

This happened to me too: I was able to cqlsh within the container, but unable to connect from outside the Docker host, until I realized that v2.1 DOES NOT use port 9160.
From the cqlsh documentation:
So, you should point the cqlsh client at the IP of the host containing the Docker container, on port 9042 and not 9160. Use docker ps and netstat -atun | grep LIST and telnet to confirm the right port is in LISTEN status.
docker run -d -p 9042:9042 cassandra:2.2
docker run -d -p 9042:9042 poklet/cassandra
Requirements
In Cassandra 2.1, the cqlsh utility uses the native protocol (via the DataStax Python driver); the default cqlsh listen port is 9042.
In Cassandra 2.0.x, the cqlsh utility uses the Thrift transport; the default cqlsh listen port is 9160.
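The port check suggested above can be sketched like this (the container name and the use of boot2docker ip are illustrative assumptions):

```shell
# Run Cassandra with the native-protocol port published (image tag is an example)
docker run -d --name cass -p 9042:9042 cassandra:2.1

# Confirm the mapping and that the port is in LISTEN status on the Docker host
docker ps --filter name=cass
netstat -atun | grep LIST | grep 9042

# Then connect with cqlsh from outside, using the boot2docker VM's IP
cqlsh "$(boot2docker ip)" 9042
```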
Cassandra cqlsh - connection refused

You should set listen_address to the IP address assigned to the container. You can find this by running docker inspect against the container from within the boot2docker VM.
Normally the container's IP address will be on the private network established by boot2docker, and so it will not be accessible outside the boot2docker VM unless you route traffic through the VM, something like this: https://gist.github.com/bhyde/be920f1a390db5f4148e
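An alternative to changing routes is forwarding the port through the boot2docker VM over SSH; this is a sketch under the assumption that the container's native-protocol port 9042 is published inside the VM:

```shell
# Forward local port 9042 into the boot2docker VM, where the
# container's published port is reachable, and keep the tunnel open
boot2docker ssh -L 9042:localhost:9042 -N &

# cqlsh on the workstation can now target localhost
cqlsh 127.0.0.1 9042
```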

Related

How to connect to dockerized Redis Server?

I am unable to connect (the connection times out) to a dockerized Redis server1 which was started with the command:
docker run -it --rm -p 6379:6379 redis:alpine
I've tried to start the server with configs:
set bind to 0.0.0.0
protected-mode no
I am however able to connect to my redis server with another docker container with the following command:
docker run -it --rm redis:alpine redis-cli -h host.docker.internal -p 6379
and I have also tried configuring the same parameters through the CLI.
When I try to connect the connection times out, I tried with both internal ip
172.17.0.x
and with the internal domain name:
host.docker.internal
to no avail. Further note that I was able to connect to redis server when installed with
brew install redis
on the host.
What am I missing? How can I resolve the issue so that I can connect from the host to the redis-server running inside a Docker container?
Environment details
OS: MacOS Monterey Version: 12.6 (21G115)
Docker version 20.10.17, build 100c701
1 More specifically I've tried with both
rdcli -h host.docker.internal in the mac terminal and also on application side with StackExchange.Redis.
More specifically I've tried with both rdcli -h host.docker.internal in the mac terminal
The host.docker.internal is a DNS name to access the docker host from inside a container. It's a bad practice to use this inside one container to talk to another container. Instead you'd create a network, create both containers in that network, and use the container name to connect. When that's done, the port doesn't even need to be published.
From the host, that's when you connect to the published port. Since the container is deployed with -p 6379:6379, you should be able to access that port from the hostname of your host, or localhost on that host:
rdcli -h localhost
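The user-defined network approach described above can be sketched like this (the network name is arbitrary):

```shell
# Create a user-defined bridge network
docker network create redis-net

# Start Redis attached to it; no -p needed for container-to-container traffic
docker run -d --rm --network redis-net --name redis redis:alpine

# A second container resolves the first by its container name
docker run -it --rm --network redis-net redis:alpine redis-cli -h redis ping
```

The last command should print PONG if name resolution and connectivity are working.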

Docker swarm containers can't connect out

I've got a small docker swarm with three nodes.
$ sudo docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
jmsidw84mom3k9m4yoqc7rkj0 ip-172-31-a-x.region.compute.internal Ready Active 19.03.1
qg1njgopzgiainsbl2u9bmux4 * ip-172-31-b-y.region.compute.internal Ready Active Leader 19.03.1
yn9sj3sp5b3sr9a36zxpdt3uw ip-172-31-c-z.region.compute.internal Ready Active 19.03.1
And I'm running three redis containers.
$ sudo docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
6j9mmnpgk5j4 redis replicated 3/3 172.31.m.n:5000/redis
But I can't get redis sentinel working between them - reading the logs it looks as though there are connection failures.
Just standing them up as three separate Redis instances, I have been testing connectivity: I can telnet from a shell on any host to the host IP of another node, and it connects to the service running in the container. If I do the same from a shell inside the container, it can't connect out.
i.e.
[centos@172.31.a.x ~]$ telnet 172.31.b.y 6379
Trying 172.31.b.y...
Connected to 172.31.b.y.
Escape character is '^]'.
^CConnection closed by foreign host.
[centos@172.31.a.x ~]$ sudo docker exec -it 4d5abad441b8 sh
/ # telnet 172.31.14.12 6379
And then it hangs. Similarly I can't telnet to google.com on 443 from within a container but I can on the host. Curiously though, ping does get out of the container.
Any suggestions?
Ugh.
The Redis side is a red herring; I can debug that now. I was mulling over the fact that telnet isn't on the container (Alpine Linux) by default, so there must be some connectivity, yet I couldn't telnet to the webserver port it claimed it was downloading from as it installed.
It turns out there's something up with the version of the telnet client Alpine Linux installs: nmap and curl behave as expected.
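Since BusyBox telnet gave misleading results, a way to re-test from inside the container is to install curl (available on Alpine via apk) and probe the port with curl's telnet scheme; the container ID and target address below are the placeholders from the question:

```shell
# Open a shell in the container and use curl instead of busybox telnet
docker exec -it 4d5abad441b8 sh -c '
  apk add --no-cache curl &&
  curl -v telnet://172.31.b.y:6379
'
```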

Docker - connecting to an open port in a container

I'm new to Docker and maybe this is something I don't fully understand yet, but what I'm trying to do is connect to an open port in a running Docker container. I've pulled and run the rabbitmq container from Docker Hub (https://hub.docker.com/_/rabbitmq/). The rabbitmq container uses port 5672 for clients to connect to.
After running the container (as instructed in the hub page):
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Now what I want to do is telnet into the open port (this is possible on a regular rabbitmq installation and should be on a container as well).
I've (at least I think I did) gotten the container IP address using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
And the result I got was 172.17.0.2. When I try to access using telnet 172.17.0.2 5672 it's unsuccessful.
The address 172.17.0.2 seems strange to me because if I run ipconfig on my machine I don't see any interface using 172.17.0.x address. I do see Ethernet adapter vEthernet (DockerNAT) using the following ip: 10.0.75.1. Is this how it is supposed to be?
If I do port binding (adding -p 5672:5672) then I can telnet into this port using telnet localhost 5672 and connect immediately.
What am I missing here?
As you pointed out, you need port binding in order to achieve the result you want, because you are running the application over the default bridge network (on Windows, I guess).
From the official Docker docs:
Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. [...]
If you run the same application stack on the default bridge network, you need to open both the web port and the database port, using the -p or --publish flag for each. This means the Docker host needs to block access to the database port by other means.
Later on the rabbitmq hub page there is a reference to a Management Plugin, which is run by executing the command:
docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
This publishes host port 8080 to the container's management port 15672, which I think is what you may need.
You should also notice that they talk about clusters and nodes there; maybe they meant the container to be run as a service in a swarm (hence using the overlay network and not the bridge one).
Hope I could help somehow :)
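To get both the client port from the question and the management UI, a sketch combining the two publishings (ports follow the rabbitmq image documentation) looks like:

```shell
# Publish the AMQP client port and the management UI port together
docker run -d --hostname my-rabbit --name some-rabbit \
  -p 5672:5672 -p 15672:15672 rabbitmq:3-management

# The client port is now reachable from the host
telnet localhost 5672
```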

How to connect to Neo4j inside Docker with Spring?

I have Neo4j running in Docker container:
docker run --publish=7474:7474 --publish=7687:7687 --name=neo4j -e NEO4J_AUTH=neo4j/psswd neo4j:latest
I can access Neo4j with this URL: http://localhost:7474/browser/.
And also I can connect to Neo4j with Spring outside Docker with this URI: bolt://localhost:7687/mydb.
But when I try to connect to Neo4j with Spring inside Docker with another URI bolt://neo4j:7687/mydb:
docker run -p 8080:8080 -t myapp --link neo4j:neo4j
I get the exception:
java.net.UnknownHostException: neo4j
And when I try the same with localhost or 127.0.0.1, I get the exception:
java.net.ConnectException: Connection refused
What URI should I use? And what am I doing wrong?
Neo4j logs look like this:
======== Neo4j 3.3.4 ========
Starting...
Bolt enabled on 0.0.0.0:7687.
Started.
Remote interface available at http://localhost:7474/
You can use a user-defined bridge network, so that you have DNS resolution between containers.
From the docs:
User-defined bridges provide automatic DNS resolution between containers.
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
And when I try the same with localhost or 127.0.0.1, I get the exception...
When you are inside a container and try to access "localhost" or "127.0.0.1" you are referring to the container itself, not the host.
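A minimal sketch of that setup, assuming your app image is called myapp as in the question (the network name is arbitrary):

```shell
# User-defined bridge network with automatic DNS between containers
docker network create neo4j-net

# Neo4j, reachable from other containers on this network by the name "neo4j"
docker run -d --network neo4j-net --name neo4j \
  -p 7474:7474 -p 7687:7687 -e NEO4J_AUTH=neo4j/psswd neo4j:latest

# The Spring app can now use bolt://neo4j:7687 (no --link needed)
docker run -d --network neo4j-net -p 8080:8080 -t myapp
```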

Docker port exposed to outside world

I've installed Docker in a VM which is publicly available on the internet, and I've installed MongoDB in a Docker container in the VM. MongoDB is listening on port 27017.
I installed it with the following command:
docker run -p 27017:27017 --name da-mongo -v ~/mongo-data:/data/db -d mongo
The port from the container is redirected to the host using the -p flag, but that means port 27017 is exposed on the internet, which I don't want.
Is there any way to fix it?
Well, if you want it available only to certain hosts, then you need a firewall. But if all you need is for it to work on localhost (your VM), then you don't need to expose/bind the port to the host. I suggest you run the container without the -p option, then run the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' your_container_id_or_name
That will display an IP: the IP of the container you've just run (yes, Docker uses an internal virtual network connecting your containers and your host machine).
After that, you can connect to it using the IP and port combination, something like:
172.17.0.2:27017
When you publish the port, you can select which host interface to publish on:
docker run -p 127.0.0.1:27017:27017 --name da-mongo \
-v ~/mongo-data:/data/db -d mongo
That will publish the container port 27017 on host interface 127.0.0.1, port 27017. You can only specify the interface on the host side of the mapping; the container itself must still bind to 0.0.0.0.
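You can check which host interface the mapping landed on with docker port (container name as in the question):

```shell
# Show the host address:port each container port is published on;
# with the 127.0.0.1 binding above, the output should look like
# 27017/tcp -> 127.0.0.1:27017
docker port da-mongo
```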
