I am using a microservice to access Hadoop and HBase to fetch data, but they are not accessible from the pod.
It shows only:
INFO ipc.Client: Retrying connect to server: hdpcluster.internal/10.160.0.2:8020. Already tried 3 time(s); maxRetries=45
The IP 10.160.0.2 is accessible from all nodes, and they are all on GCP.
You probably need to open a firewall rule that allows port 8020 on your HBase nodes so that your Kubernetes nodes can connect to them; something like the rule sketched below in the firewall rules for your HBase nodes.
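A minimal gcloud sketch, assuming the HBase/HDFS hosts carry a hypothetical target tag hbase-node and that your Kubernetes nodes fall inside the source range shown (adjust the network, tag, and range to your project):

# Allow the HDFS NameNode port (8020) from the Kubernetes nodes to the HBase hosts
gcloud compute firewall-rules create allow-hdfs-8020 \
  --network=default \
  --direction=INGRESS \
  --allow=tcp:8020 \
  --source-ranges=10.160.0.0/16 \
  --target-tags=hbase-node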
I'm trying to set up a Cassandra ring with five nodes in Docker using dse-server and dse-studio. The Docker containers are up and running and I can access the Cassandra database and do CRUD operations, but it does not connect to all the nodes. I believe I have not created the Docker Compose networks correctly, or it may be another issue. Here is the code for the project:
https://github.com/juanpujazon/DockerCassandraNodes
If I use the connector connecting to 192.168.3.19:9042 I can do the CRUD for the tables, but only the connection to the first node is successful. The CRUD completes successfully, but all the host IPs other than the first one get the error "Connection[/172.30.0.4:9042-1, inFlight=0, closed=false] Error connecting to /172.30.0.4:9042 (connection timed out: /172.30.0.4:9042)"
I tried to create a connector adding all the IPs from the different nodes as contact points, but it is not working as intended:
Exception in thread "main" java.lang.IllegalArgumentException: Failed to add contact point: "127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6"
at com.datastax.driver.core.Cluster$Builder.addContactPoint(Cluster.java:943)
at cassandra.java.client.CassandraConnector.connectNodes(CassandraConnector.java:30)
at cassandra.java.client.Main.main(Main.java:13)
Caused by: java.net.UnknownHostException: Host desconocido ("127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6")
at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:933)
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529)
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:852)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1377)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1305)
at com.datastax.driver.core.Cluster$Builder.addContactPoint(Cluster.java:939)
Any idea about what I should change?
If you can only connect to the cluster on 192.168.3.19, it indicates to me that the containers are not accessible on the host. You will need to configure your Docker environment so that the containers are exposed to public access.
For this error:
Connection[/172.30.0.4:9042-1, inFlight=0, closed=false] \
Error connecting to /172.30.0.4:9042 (connection timed out: /172.30.0.4:9042)
you are connecting to the container using the default CQL port 9042 but you've exposed it on a different port in your docker-compose.yml:
ports:
- 9044:9042
I recommend you re-map all the container ports to just 9042 to make it easier for yourself to connect to them. Otherwise, you'll need to specify the port together with the IP addresses when you configure the contact points like:
"ip1:port1", "ip2:port2", "ip3:port3"
I've also noted that you've included localhost in the contact points:
Failed to add contact point: "127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6"
If you have a node that is only listening for client connections on localhost then it is configured incorrectly and you need to fix its configuration.
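If you want to check what a node is actually listening on, you can inspect its cassandra.yaml inside the container. A sketch below, where the container name is a placeholder and the config path is the usual DSE location (it may differ in your image):

# Show the bind addresses configured for one node
docker exec -it <container-name> \
  grep -E '^(listen_address|rpc_address|broadcast_rpc_address)' \
  /opt/dse/resources/cassandra/conf/cassandra.yaml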
Finally, if your goal is to build a cluster for app development, you might want to consider using Astra DB so you don't have to worry about configuring/maintaining your own Cassandra installation. With Astra DB, you can launch a cluster on the free tier with literally just 5 clicks in just over a minute with no credit card required. Cheers!
My scenario has 7 nodes: 4 running in AWS (each one in a different account), 1 running in Linode, 1 running in Google Cloud, and 1 running in Oracle Cloud. Every node uses its external IP; I checked the firewall ports at each provider and made sure the firewall is disabled on the VMs themselves. I also edited the hosts file on each node to make sure they are reachable, and they all ping each other fine.
All machines running in AWS and Linode can join the swarm either as a worker or as a manager, but the machines running in Google Cloud and Oracle can only join as workers.
Using one AWS node as the leader, I got the following error messages (screenshots): docker node ls on the leader; trying to join the node from Oracle; trying to join the node from Google Cloud.
Finally, I tried to make the Google Cloud node the leader of a new swarm and to join the Linode and Oracle nodes to it, and got the following error message (screenshot): trying to join a new swarm.
In this last attempt, the node that I tried to add says that it is in a swarm, but when I run docker node ls on the leader, no new nodes are listed...
Can anyone who has already used Google Cloud or Oracle to run Docker and Swarm help me figure out what else I need to configure, or which additional ports or protocols I need to allow? I have already tried permitting all traffic from the nodes' IPs... in theory, everything should be allowed...
My best regards,
Leonardo Lima
Google Cloud Platform has implied firewall rules and also adds default ingress rules when a new VPC is created. If you don't explicitly allow ingress traffic to the required ports on the nodes within the VPC, the connection will time out. Therefore, you need to allow traffic to the node on the swarm manager port (2377) from 0.0.0.0/0 (any source), for example with a rule like the one sketched below. These are the networking configurations to review in order to understand why you can't join your node as a manager.
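A hedged gcloud sketch, assuming the default network and an open source range; 7946 (TCP/UDP) and 4789 (UDP) are the other ports Swarm uses for node discovery and the overlay network, so they are included as well:

# Allow Docker Swarm traffic into the GCP VM (rule name is arbitrary)
gcloud compute firewall-rules create allow-docker-swarm \
  --network=default \
  --direction=INGRESS \
  --allow=tcp:2377,tcp:7946,udp:7946,udp:4789 \
  --source-ranges=0.0.0.0/0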
Which TCP/UDP ports need to be open between the nodes and the master of Azure Kubernetes Service when the nodes are in a subnet that uses advanced networking?
For security reasons we have to use a Network Security Group on every subnet that is connected to the on-premises network via VPN in Azure. This NSG has to deny all implicit traffic between machines, even within the same subnet, to hinder attackers from traversing between systems.
The same applies to the Azure Kubernetes Service with advanced networking, which uses a subnet that is connected via VNet peering.
We couldn't find an answer as to whether having an NSG on the subnet of the AKS advanced network is a supported scenario, and which ports are needed to make it work.
We tried our default NSG, which denies traffic between hosts, but this prevents us from connecting to the services and keeps the nodes from coming up without errors.
AKS is a managed cluster. The managed cluster master means that you don't need to configure components like a highly available etcd store, but it also means that you can't access the cluster master directly.
When you create an AKS cluster, a cluster master is automatically created and configured, and the Azure platform sets up the secure communication between the cluster master and the nodes. Interaction with the cluster master occurs through Kubernetes APIs, such as kubectl or the Kubernetes dashboard.
For more details, see Kubernetes core concepts for Azure Kubernetes Service (AKS). If you need to configure the cluster master and other things all by yourself, you can deploy your own Kubernetes cluster using aks-engine.
To improve the security of your pods, you can use network policies, although the feature is still in preview.
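A minimal Azure CLI sketch of creating a cluster with advanced (Azure CNI) networking and network policy enabled; the resource group and cluster names are placeholders, and since the feature was in preview you should check the current documentation:

# Create an AKS cluster with Azure CNI networking and Calico network policy
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-policy calico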
Also, it's not recommended to expose remote connectivity to the AKS cluster nodes. If you need to connect to the AKS nodes, the suggestion is to create a bastion host, or jump box, in a management virtual network and use it to securely route traffic into your AKS cluster for remote management tasks. For more details, see Securely connect to nodes through a bastion host.
If you have more questions, please let me know. I'm glad to provide more help.
I have been building a distributed load testing application using Kubernetes and Locust (similar to this).
I currently have a multi-node cluster running on bare-metal (running on an Ubuntu 18.04 server, set up using Kubeadm, and with Flannel as my pod networking addon).
The architecture of my cluster is as follows:
I have a 'master instance' of the Locust application running on my master node.
I have 'slave instances' of the Locust application running on all of my other nodes. These slave instances must be able to connect to a port (5558 by default) on the master instance.
As of now, I don't believe that that is happening. My cluster shows that all of my deployments are healthy and running, however I am unable to access the logs of any of my slave instances which are running on nodes other than my master node. This leads me to believe that my pods are unable to communicate with each other across different nodes.
Is this an issue with my current networking or deployment setup (I followed the linked guides pretty much verbatim)? Where should I start debugging this issue?
How do the slave instances try to join the master instance? You have to create a master Service (with labels) so that the slaves can reach the master pod. Also, make sure your SDN is up and the master is reachable from the slave instances. You can test this with telnet (or nc) to the master pod IP from the slave instances, as in the sketch below.
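A hedged sketch, assuming the Locust master runs as a Deployment named locust-master and the slaves connect on port 5558 (as described above); pod names are placeholders:

# Expose the master pod behind a ClusterIP Service so slaves can reach it by name
kubectl expose deployment locust-master --port=5558 --name=locust-master

# From a slave pod, check that the master is reachable over the pod network
# (assumes nc is available in the image and cluster DNS is working)
kubectl exec -it <slave-pod-name> -- sh -c 'nc -zv locust-master 5558'
# or, using the master pod IP directly:
kubectl exec -it <slave-pod-name> -- sh -c 'nc -zv <master-pod-ip> 5558'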
Based on your description of the problem I can guess that you have a connection problem caused by firewall or network misconfiguration.
From the network perspective, there are requirements mentioned in Kubernetes documentation:
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
From the firewall perspective, you need to ensure the cluster traffic can pass the firewall on the nodes.
Here is the list of ports you should have open on the nodes, as provided by the CoreOS website (a sketch of matching firewall commands follows the list):
Master node inbound:
TCP: 443 from worker nodes, API requests, and end users
UDP: 8285, 8472 from master & worker nodes

Worker node inbound:
TCP: 10250 from master nodes
TCP: 10255 from Heapster
TCP: 30000-32767 from external application consumers
TCP: 1-32767 from master & worker nodes
TCP: 179 from worker nodes
UDP: 8472 from master & worker nodes
UDP: 179 from worker nodes

Etcd node inbound:
TCP: 2379-2380 from master & worker nodes
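A hedged sketch of matching ufw rules for an Ubuntu 18.04 kubeadm cluster with the flannel VXLAN backend; the API server port is shown as 6443 (the kubeadm default) rather than 443 from the list above, and you should adjust sources and ranges to your own subnets:

# On the master node:
sudo ufw allow 6443/tcp        # Kubernetes API server (kubeadm default)
sudo ufw allow 2379:2380/tcp   # etcd (when it runs on the master)
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 8472/udp        # flannel VXLAN overlay

# On the worker nodes:
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 30000:32767/tcp # NodePort services
sudo ufw allow 8472/udp        # flannel VXLAN overlay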
Also check that IP forwarding is enabled on all the nodes:
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
If it is not, enable it like this and test again:
echo 1 > /proc/sys/net/ipv4/ip_forward
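Note that writing to /proc only changes the setting until the next reboot; to make it persistent you can record it in the sysctl configuration, for example (the file name here is arbitrary):

# Persist IP forwarding across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl --system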
I would like to host a Kubernetes master node in AWS (or another cloud provider) and then add nodes from home to that cluster. However, I do not have a static IP from my internet provider, so the question is: will this work, and what happens when my IP address changes?
Here you can get some info about master-node communication in Kubernetes.
For communication from node to master, the node makes requests to the kube-apiserver. So normally it should work: when your node's IP changes, the node info stored in etcd is updated, and you can check your nodes' status with the command kubectl get nodes -o wide.
But some specific Kubernetes features may be affected, such as NodePort Services.
Hope this helps!