Why can I not connect to DSE nodes running in Docker?

I'm trying to set up a Cassandra ring with five nodes in Docker using dse-server and dse-studio. The Docker containers are up and running and I can access the Cassandra database and do CRUD operations, but I cannot connect to all the nodes. I believe I have not created the Docker Compose networks correctly, or it may be another issue. Here is the code for the project:
https://github.com/juanpujazon/DockerCassandraNodes
If I use the connector to connect to 192.168.3.19:9042 I can do the CRUD operations on the tables, but only the connection to the first node is successful. The CRUD completes successfully, but all the host IPs other than the first one get the error "Connection[/172.30.0.4:9042-1, inFlight=0, closed=false] Error connecting to /172.30.0.4:9042 (connection timed out: /172.30.0.4:9042)"
I tried to create a connector adding all the IPs of the different nodes as contact points, but it is not working as intended:
Exception in thread "main" java.lang.IllegalArgumentException: Failed to add contact point: "127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6"
at com.datastax.driver.core.Cluster$Builder.addContactPoint(Cluster.java:943)
at cassandra.java.client.CassandraConnector.connectNodes(CassandraConnector.java:30)
at cassandra.java.client.Main.main(Main.java:13)
Caused by: java.net.UnknownHostException: Host desconocido ("unknown host") ("127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6")
at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:933)
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529)
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:852)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1377)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1305)
at com.datastax.driver.core.Cluster$Builder.addContactPoint(Cluster.java:939)
Any idea what I should change?

If you can only connect to the cluster on 192.168.3.19, it indicates to me that the containers are not accessible from the host. You will need to configure your Docker environment so that the containers' ports are published to the host.
For this error:
Connection[/172.30.0.4:9042-1, inFlight=0, closed=false] \
Error connecting to /172.30.0.4:9042 (connection timed out: /172.30.0.4:9042)
you are connecting to the container using the default CQL port 9042 but you've exposed it on a different port in your docker-compose.yml:
ports:
- 9044:9042
I recommend you re-map all the container ports to just 9042 to make it easier for yourself to connect to them. Otherwise, you'll need to specify the port together with the IP addresses when you configure the contact points like:
"ip1:port1", "ip2:port2", "ip3:port3"
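For illustration, here is a sketch of distinct host-port mappings in docker-compose.yml (the service names are hypothetical, not taken from the repo; inside each container the CQL port stays 9042):

```yaml
services:
  dse-node1:
    ports:
      - "9042:9042"   # host port 9042 -> container port 9042
  dse-node2:
    ports:
      - "9043:9042"   # host port 9043 -> container port 9042
```

With mappings like these, the contact points would all use the host's IP, e.g. 192.168.3.19:9042 and 192.168.3.19:9043.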
I've also noted that you've included localhost in the contact points:
Failed to add contact point: "127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6"
If you have a node that is only listening for client connections on localhost then it is configured incorrectly and you need to fix its configuration.
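On the driver side (3.x, going by the stack trace above), contact points must be added one by one rather than as a single semicolon-joined string. A minimal sketch, assuming hypothetical host:port pairs, that parses them into InetSocketAddress values of the kind Cluster.builder().addContactPointsWithPorts(...) accepts:

```java
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;

public class ContactPoints {
    // Parse comma-separated "host:port" entries (IPv4 or hostname) into
    // socket addresses. In the DataStax Java driver 3.x these can be passed
    // to Cluster.builder().addContactPointsWithPorts(...), instead of one
    // semicolon-joined string, which fails host resolution as seen above.
    static List<InetSocketAddress> parse(String contactPoints) {
        List<InetSocketAddress> result = new ArrayList<>();
        for (String entry : contactPoints.split(",")) {
            String[] parts = entry.trim().split(":");
            // Default to the standard CQL port when no port is given.
            int port = parts.length > 1 ? Integer.parseInt(parts[1]) : 9042;
            result.add(InetSocketAddress.createUnresolved(parts[0], port));
        }
        return result;
    }

    public static void main(String[] args) {
        // Placeholder addresses: substitute your Docker host's IP and mapped ports.
        for (InetSocketAddress a : parse("192.168.3.19:9042,192.168.3.19:9043")) {
            System.out.println(a.getHostString() + " -> " + a.getPort());
        }
    }
}
```

Note that all entries point at the host's IP with the per-node mapped ports, not at the 172.30.0.x container addresses, which are generally not routable from outside the Docker network.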
Finally, if your goal is to build a cluster for app development, you might want to consider using Astra DB so you don't have to worry about configuring/maintaining your own Cassandra installation. With Astra DB, you can launch a cluster on the free tier with literally just 5 clicks, in about a minute, with no credit card required. Cheers!

Related

Big IP remove tcp/ip route and block communication to Docker Container

Short version:
Why does BIG-IP delete some routes when establishing a VPN connection?
This impacts Docker Desktop for Windows by blocking all communication with Docker containers, because the TCP/IP route used to reach the containers is deleted by BIG-IP.
Long version:
Context: Docker is used to run an application (Microsoft SQL Server) in a container. Communication with the container goes through the NAT interface created by Docker.
Issue description: I am unable to connect to my Docker container when BIG-IP is running.
Overview: When I start a new Docker container running SQL Server, I can connect to it and execute SQL queries… but if I start BIG-IP to connect to ICN, no connection to the Docker container running SQL Server is possible, even though the container (and SQL Server) is still running.
Root cause: the TCP/IP route to my Docker container is deleted by BIG-IP.
Step by step to reproduce
Step 1: Start my Docker container
docker run -e "ACCEPT_EULA=Y" --name MyLocalServer -p 1433:1433 -e "SA_PASSWORD=XXXXX" -d microsoft/mssql-server-windows-developer
Step 2: I am able to connect to SQL Server in the Docker container
Step 3: Docker network details
This returns technical information about the network subnet for my Docker container.
Step 4: Route table before the VPN connection
We can see the route for my container
Step 5: When my VPN connects, BIG-IP removes the route for my Docker container
BIG-IP log:
Step 6: The route table looks like this after the VPN connection is established
Note: the route for 172.29.48.0/20 has disappeared
Step 7: Now I am unable to connect to the SQL container
I get the following error: "A network-related or instance-specific error has occurred while establishing a connection to SQL Server."
Step 8: When I disconnect my VPN, the deleted routes are restored by BIG-IP
Step 9: And now access to my SQL Server is possible again
Conclusion
BIG-IP removes the routes that allow communication with the Docker container.
I have tried:
#1: Adding the route manually after the BIG-IP connection with the following command:
route add 172.29.48.0 mask 255.255.240.0 0.0.0.0 METRIC 10 IF 34
…but BIG-IP automatically removes the new routing-table entry, as previously seen when BIG-IP connects.
#2: Changing the IP range used by Docker for container access to 192.168.1.x (previously 172.29.48.0).
But as before, BIG-IP removes the route for this range too.
This is a question for your network administrator, who is probably just following the security policy of the company giving you the VPN access.
Based on K49720803: BIG-IP Edge Client operations guide | Chapter 3: Common approaches to configuring VPN, you could ask them to disable the Prohibit routing table changes option, or maybe try adding a second network card dedicated to your Docker traffic in the hope that it would not be managed by the VPN client at all - but I haven't tried that.

contact points for a local cassandra instance

I have created 2 Cassandra instances by deploying them on Docker: one on port 9042, the other on 9043.
I have 2 applications; one is to be connected to 9042, the other to 9043.
The 1st application is connected to 9042 and is running successfully.
The properties I've given for the DB are:
contactpoints = localhost,
port = 9042
The 2nd application, which is to be brought up against the second DB instance, i.e. 9043, is throwing this error:
com.datastax.driver.core.Cluster - You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found in the control host's system.peers at startup
The properties I am giving for the DB are:
contactpoints = localhost,
port = 9043
How can I connect to the Cassandra instance on 9043 while the first application is running?
You're specifying localhost, but inside Docker, localhost refers to the running container itself, not the host machine. Since you have the ports bound to the host network, you need to specify the IP address of your machine instead of localhost.
P.S. Also, why are you packaging the application together with Cassandra? That's not how Docker works - every process should run in a separate container...
Every Cassandra node should bind to a separate IP address, even on physical servers or Docker hosts on which 2 instances/nodes are running.
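One way to check whether a given address/port combination is reachable at all from where the application runs is a plain TCP probe, independent of the Cassandra driver. A minimal sketch (the address and port in main are placeholders for your host IP and mapped port):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // Useful to verify whether a mapped Cassandra port (e.g. 9042 vs 9043)
    // is actually reachable before blaming the driver configuration.
    static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder target: substitute your Docker host's IP and mapped port.
        System.out.println(canConnect("127.0.0.1", 9043, 500));
    }
}
```

If the probe fails for the host IP and mapped port, the problem is in the Docker port publishing or network setup, not in the contact-point configuration.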

Hyperledger Fabric V1.2 network setup with Zookeeper and Kafka for multiple Orderers

I've got 4 hosts, each with an Orderer. I understand that for a multiple-Orderer setup we must utilise Kafka and Zookeeper.
How would I go about deploying the network with Kafka and Zookeeper? I've tried using
docker-compose ... up
to deploy the Docker services as usual, but this doesn't work, even though I've included the extra_hosts key in the services, mapping the service names to the IPs of the machines they're on so that the services can find each other.
The error I'm getting is
Cannot open channel to 2 at election address zookeeper2/x.x.x.x:3888
java.net.ConnectException: Connection refused (Connection refused)
I've also checked that on all the machines, the ports that the Docker services use are open and accessible.

Can't access Heapster's InfluxDB port 8083

I follow this guide to deploy my Kubernetes cluster and this guide to launch Heapster.
However, when I open Grafana's web UI, it always says "Dashboard init failed: Template variables could not be initialized: undefined". Moreover, I can't access InfluxDB via port 8083. Is there anything I missed?
I've tried several versions of Kubernetes. I can't deploy DNS with some of them, so for now I'm using 1.1.4, but I need to create the "kube-system" namespace manually. The Docker version is 1.7.1.
Edit:
I can curl ports 8083 and 8086 in the influxdb pod. However, I get connection refused if I do that on the node running the container. This is my services status:

Is it possible to have only one consul server to serve key-pairs for rails app?

I would like to know if it is possible to serve key-value pairs with only one Consul server.
I am trying to set up a Consul server solely for storing key-value pairs for my Rails app. I am setting up only one Consul server, which will also act as the agent. However, I am having problems setting up the Consul web UI.
I tried to run one physical instance as a Consul server to serve the Consul web UI
consul agent -server -data-dir /tmp/consul -ui-dir /home/ubuntu/dist/
Then, to access the Consul web UI on the public IP, I ran the following command
consul members -rpc-addr=X.X.X.X:8400
I got the following error
Error connecting to Consul agent: dial tcp X.X.X.X:8400: connection refused
where X.X.X.X is the private IP of the instance.
By default, the agent binds its client interfaces to localhost, per the documentation:
Client Addr: This is the address used for client interfaces to the agent. This includes the ports for the HTTP, DNS, and RPC interfaces. The RPC address is used by other consul commands (such as consul members, consul join, etc) which query and control a running agent. By default, this binds only to localhost. If you change this address or port, you'll have to specify a -rpc-addr whenever you run commands such as consul members to indicate how to reach the agent. Other applications can also use the RPC address and port to control Consul.
So you need to set the -client=X.X.X.X flag to set the IP address for remote access to the client. Try starting your server with this command:
consul agent -server -bootstrap -data-dir /tmp/consul -ui-dir /home/ubuntu/dist/ -client=X.X.X.X
where X.X.X.X is your IP address. To check that the option was accepted, look at the server output; it should contain a line like:
Client Addr: X.X.X.X(HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
To access the web UI, open this link in your web browser: http://X.X.X.X:8500/ui
As for the consul members command, it just prints the list of members known to this agent, so there is no need to run it in order to use the web UI.
Yes, it is possible, but it is not a very good idea. Consul is optimized to operate in a distributed architecture where tolerance to network partitions is a primary concern. With just one node you would be much better served by something like Redis. It will be much faster and scale to a larger set of clients. Read up on the CAP theorem.
Redis is optimized for Consistency and Availability.
Consul is optimized for Availability and Partition Tolerance (if run as a Cluster).
With a single node there is no reason to use Consul for K/Vs, although if you also want its service discovery, DNS, events, and locking features, then there is a reason.