Java : Connecting to Redis cluster running in minikube - docker

I have a Redis cluster with 3 master and 3 slaves running in minikube.
PS D:\redis\main\kubernetes-redis-cluster> kubectl exec -ti redis-1-2723908297-prjq5 -- /bin/bash
root@redis-1:/data# redis-cli -p 7000 -c
127.0.0.1:7000> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:2
cluster_stats_messages_ping_sent:9131
cluster_stats_messages_pong_sent:9204
cluster_stats_messages_meet_sent:3
cluster_stats_messages_sent:18338
cluster_stats_messages_ping_received:9202
cluster_stats_messages_pong_received:9134
cluster_stats_messages_meet_received:2
cluster_stats_messages_received:18338
127.0.0.1:7000> cluster nodes
de9a4780d93cb7eab8b77abdaaa96a081adcace3 172.17.0.7:7000@17000 slave ee4deab0525d054202e612b317924156ff587021 0 1509960302577 4 connected
b3a3c05225e0a7fe8ae683dd4316e724e7a7daa6 172.17.0.5:7000@17000 myself,master - 0 1509960301000 2 connected 5461-10922
8bebd48850ec77db322ac51501d59314582865a3 172.17.0.6:7000@17000 master - 0 1509960302000 3 connected 10923-16383
ee4deab0525d054202e612b317924156ff587021 172.17.0.4:7000@17000 master - 0 1509960303479 1 connected 0-5460
28a1c75e9976bc375a13e55160f2aae48defb242 172.17.0.8:7000@17000 slave b3a3c05225e0a7fe8ae683dd4316e724e7a7daa6 0 1509960302477 5 connected
32e9de12324b8571a6256285682fa066d79161ab 172.17.0.9:7000@17000 slave 8bebd48850ec77db322ac51501d59314582865a3 0 1509960302000 6 connected
127.0.0.1:7000>
I am able to set/fetch key/values via redis-cli without any issue.
Now I am trying to connect to the Redis cluster from a simple Java program run from Eclipse.
I understand that I need to forward the port, so I executed the command below.
kubectl port-forward redis-0-334270214-fd4k0 7000:7000
Now I execute the program below.
import java.util.HashSet;
import java.util.Set;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class Main {
    public static void main(String[] args) {
        GenericObjectPoolConfig config = new GenericObjectPoolConfig();
        config.setMaxTotal(500);
        config.setMaxIdle(500);
        config.setMaxWaitMillis(20000);
        config.setTestOnBorrow(true);

        Set<HostAndPort> jedisClusterNode = new HashSet<HostAndPort>();
        jedisClusterNode.add(new HostAndPort("192.168.99.100", 31695));
        JedisCluster jc = new JedisCluster(jedisClusterNode, config);

        jc.set("prime", "1 is primeee");
        String keyVal = jc.get("prime");
        System.out.println(keyVal);
    }
}
Then I get the exception below.
Exception in thread "main" redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.util.Pool.getResource(Pool.java:53)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:226)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:66)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:116)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:141)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:141)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:141)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:141)
at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:31)
at redis.clients.jedis.JedisCluster.set(JedisCluster.java:103)
at com.redis.main.Main.main(Main.java:25)
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: connect timed out
at redis.clients.jedis.Connection.connect(Connection.java:207)
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:93)
at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1767)
at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:106)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:819)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:429)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:360)
at redis.clients.util.Pool.getResource(Pool.java:49)
... 10 more
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at redis.clients.jedis.Connection.connect(Connection.java:184)
... 17 more
The Redis services are created and running as well:
PS C:\Users\rootmn> kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 11h
redis-0 10.0.0.105 <pending> 7000:31695/TCP,17000:31596/TCP 10h
redis-1 10.0.0.7 <pending> 7000:30759/TCP,17000:30646/TCP 10h
redis-2 10.0.0.167 <pending> 7000:32591/TCP,17000:30253/TCP 10h
redis-3 10.0.0.206 <pending> 7000:31644/TCP,17000:31798/TCP 10h
redis-4 10.0.0.244 <pending> 7000:30186/TCP,17000:32701/TCP 10h
redis-5 10.0.0.35 <pending> 7000:30628/TCP,17000:32396/TCP 10h
Telnet to the Redis IP and port works fine.
Am I doing something wrong here? What would cause this issue?
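For context on why forwarding a single pod's port is not enough: a cluster-aware client such as JedisCluster hashes every key to one of 16384 slots and connects directly to the master that owns that slot, so every node advertised in cluster nodes (here the 172.17.0.x pod IPs) must be reachable from the client. A self-contained sketch of the slot computation (CRC16-XMODEM mod 16384, as specified for Redis Cluster; no Jedis dependency):

```java
import java.nio.charset.StandardCharsets;

public final class HashSlot {
    // CRC16-CCITT (XMODEM), polynomial 0x1021 -- the variant Redis Cluster specifies
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Key -> slot; real clients also honor {hash tags}, omitted here for brevity
    public static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        // Reference vector from the Redis Cluster spec: CRC16("123456789") = 0x31C3
        System.out.println(slot("123456789")); // prints 12739
        System.out.println(slot("prime"));     // the key used in the program above
    }
}
```

Whichever master owns slot("prime") is the one the client dials directly, using the pod-internal address it learned from cluster nodes, which is not reachable from the host through a single port-forward.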

You need to define a service of type NodePort. In the YAML below, update the selector section to match your Redis pod's labels.
apiVersion: v1
kind: Service
metadata:
  name: redis-master-nodeport
  labels:
    app: redis
spec:
  ports:
  - port: 7000
  selector:
    app: redis
    role: master
    tier: backend
  type: NodePort
Create the service: kubectl.exe create -f redis-master-service.yaml
Then check the service; it will show you the assigned node port, e.g.:
kubectl.exe get svc redis-master-nodep
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master-nodep 10.0.0.62 <nodes> 6379:30277/TCP 16s
Now you can connect to Redis using minikube-ip:30277.
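Before pointing JedisCluster at that address, it can help to verify reachability with a raw RESP PING using only the JDK (the IP and port below are placeholders taken from the example output; substitute your own minikube IP and node port):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RedisPing {
    // Encode a command in RESP, the Redis wire protocol:
    // array header, then one length-prefixed bulk string per argument
    static String resp(String... parts) {
        StringBuilder sb = new StringBuilder("*").append(parts.length).append("\r\n");
        for (String p : parts) {
            sb.append('$').append(p.getBytes(StandardCharsets.UTF_8).length)
              .append("\r\n").append(p).append("\r\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Placeholder minikube IP and NodePort -- substitute your own
        try (Socket s = new Socket("192.168.99.100", 30277)) {
            s.getOutputStream().write(resp("PING").getBytes(StandardCharsets.UTF_8));
            BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
            System.out.println(in.readLine()); // a healthy node answers +PONG
        }
    }
}
```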
Hope this helps

Related

Kubernetes - how to solve secret exchange problems during pod creation

This question belongs to the problem
Deployment of Ingress-controller with Helm failed
but I also want to understand more about the background.
The basic situation is: pod creation fails with this error:
{"err":"Get "https://10.96.0.1:443/api/v1/namespaces/ingress-nginx/secrets/ingress-nginx-admission": dial tcp 10.96.0.1:443: i/o timeout","level":"fatal","msg":"error getting secret","source":"k8s/k8s.go:232","time":"2022-02-22T10:47:49Z"}
I can see that the pod tries to get something from my Kubernetes cluster IP, which listens on 443:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 121d
default nextcloud-service ClusterIP 10.98.154.93 <none> 82/TCP 13d
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 120d
My questions are:
Can I somehow check with a command whether the URL path really exists?
When will this secret be created, and how can I observe this?
Can I configure the cluster to use another port for this, like 8080 (non-secure)?
When I check my secrets with kubectl get secrets -A I see the following results:
NAMESPACE NAME TYPE DATA AGE
default default-token-95b8q kubernetes.io/service-account-token 3 122d
ingress-nginx default-token-fbvmd kubernetes.io/service-account-token 3 21h
ingress-nginx ingress-nginx-admission-token-cdfbf kubernetes.io/service-account-token 3 11m
Can I somehow tell the deployment script (in values.yaml) the exact name of this secret?

Kubernetes: why does my NodePort not get an external IP?

Environment information:
Computer detail: one master node and four slave nodes, all running CentOS Linux release 7.8.2003 (Core).
Kubernetes version: v1.18.0.
Zero to JupyterHub version: 0.9.0.
Helm version: v2.11.0
I am trying to deploy "Zero to JupyterHub" on Kubernetes. My JupyterHub config file is below:
config.yaml
proxy:
  secretToken: "2fdeb3679d666277bdb1c93102a08f5b894774ba796e60af7957cb5677f40706"
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
    capacity: 10Gi
Note: I set the service type to NodePort because I do not have any cloud provider (this is deployed on my lab's server cluster), and I also tried nginx-ingress and failed; that is why I am not using LoadBalancer.
But when I use this config file to install JupyterHub via Helm, I cannot access JupyterHub from the browser, even though all pods are running. The pod details are below:
kubectl get pod --namespace jhub
NAME READY STATUS RESTARTS AGE
continuous-image-puller-8gxxk 1/1 Running 0 27m
continuous-image-puller-8tmdh 1/1 Running 0 27m
continuous-image-puller-lwdcx 1/1 Running 0 27m
continuous-image-puller-pszsr 1/1 Running 0 27m
hub-7b9cbbcf59-fbppq 1/1 Running 0 27m
proxy-6b699b54c8-2pxmb 1/1 Running 0 27m
user-scheduler-65f4cbb9b7-9vmfr 1/1 Running 0 27m
user-scheduler-65f4cbb9b7-lqfrh 1/1 Running 0 27m
and its services like this:
kubectl get service --namespace jhub
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 10.10.55.78 <none> 8081/TCP 28m
proxy-api ClusterIP 10.10.27.133 <none> 8001/TCP 28m
proxy-public NodePort 10.10.97.11 <none> 443:30443/TCP,80:30080/TCP 28m
It seems to work well, right? (I guessed.) But in fact I cannot use the IP 10.10.97.11 to access the JupyterHub main page, and I did not get an external IP either.
So, my problems are:
Is there anything wrong with my config?
How do I get an external IP?
Finally, thank you for saving my day!
For a NodePort service you will not get an EXTERNAL-IP. You cannot use the CLUSTER-IP to access it from outside the Kubernetes cluster, because the CLUSTER-IP is only for access from inside the cluster, typically from another pod. For access from outside the cluster you need to use NodeIP:NodePort, where NodeIP is the IP address of one of your Kubernetes nodes.
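To make that concrete with the output above: in the PORT(S) column of proxy-public, 443:30443/TCP,80:30080/TCP reads as service port -> node port, so the hub should be reachable at NodeIP:30080 over plain HTTP. A throwaway parser for that column (a hypothetical helper, just to spell the mapping out):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PortsColumn {
    // Parse a kubectl PORT(S) cell like "443:30443/TCP,80:30080/TCP"
    // into a servicePort -> nodePort map
    static Map<Integer, Integer> parse(String cell) {
        Map<Integer, Integer> out = new LinkedHashMap<>();
        for (String entry : cell.split(",")) {
            String ports = entry.split("/")[0];      // e.g. "443:30443"
            String[] pair = ports.split(":");
            if (pair.length == 2) {                  // only NodePort entries have a colon
                out.put(Integer.parseInt(pair[0]), Integer.parseInt(pair[1]));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(parse("443:30443/TCP,80:30080/TCP"));
        // so: HTTPS at NodeIP:30443, HTTP at NodeIP:30080
    }
}
```

A plain ClusterIP cell like "8081/TCP" yields an empty map, which is exactly the point: those services expose no node port at all.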

Having problems accessing a deployed application in a multi-cluster Kubernetes environment in VirtualBox

I have created a multi-cluster Kubernetes environment, and my node details are:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
16-node-121 Ready <none> 32m v1.14.1 192.168.0.121 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://18.9.2
master-16-120 Ready master 47m v1.14.1 192.168.0.120 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://18.9.2
I created and exposed a service using the following command:
$ kubectl expose deployment hello-world --port=80 --target-port=8080
service/hello-world exposed
The service is created and exposed. My service details are:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world ClusterIP 10.105.7.156 <none> 80/TCP 33m
Unfortunately, when I try to access my service using curl, I get a timeout error.
My service details are the following:
master-16-120@master-16-120:~$ kubectl describe service hello-world
Name: hello-world
Namespace: default
Labels: run=hello-world
Annotations: <none>
Selector: run=hello-world
Type: ClusterIP
IP: 10.105.7.156
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 192.168.1.2:8080
Session Affinity: None
Events: <none>
curl http://10.105.7.156:80
curl: (7) Failed to connect to 10.105.7.156 port 80: Connection timed out
Here I am using Calico for my cluster network, installed from:
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
My Pod networking specification is:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
At last I got the solution. Thanks to Daniel's comment, which helped me reach it.
I changed my Kubernetes pod network CIDR and the Calico config accordingly:
--pod-network-cidr=10.10.0.0/16
I also configured /etc/hosts on the master, master-16-120:
192.168.0.120 master-16-120
192.168.0.121 16-node-121
and /etc/hosts on the node, 16-node-121:
192.168.0.120 master-16-120
192.168.0.121 16-node-121
Now my Kubernetes cluster is ready to go.
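The root cause here is that the original pod CIDR 192.168.0.0/16 contained the hosts' own addresses (192.168.0.120/121), so pod routes shadowed real host traffic. A quick IPv4 CIDR overlap check (illustrative only, no input validation):

```java
public class CidrOverlap {
    // Pack a dotted-quad IPv4 address into a 32-bit value
    static long ip(String dotted) {
        String[] o = dotted.split("\\.");
        return (Long.parseLong(o[0]) << 24) | (Long.parseLong(o[1]) << 16)
             | (Long.parseLong(o[2]) << 8) | Long.parseLong(o[3]);
    }

    // True if two IPv4 CIDR blocks share any address: compare network
    // prefixes under the shorter (less specific) of the two masks
    static boolean overlaps(String cidrA, String cidrB) {
        String[] a = cidrA.split("/"), b = cidrB.split("/");
        int bits = Math.min(Integer.parseInt(a[1]), Integer.parseInt(b[1]));
        long mask = bits == 0 ? 0 : (~0L << (32 - bits)) & 0xFFFFFFFFL;
        return (ip(a[0]) & mask) == (ip(b[0]) & mask);
    }

    public static void main(String[] args) {
        // The original pod CIDR collided with the host LAN:
        System.out.println(overlaps("192.168.0.0/16", "192.168.0.120/32")); // true
        // The replacement CIDR does not:
        System.out.println(overlaps("10.10.0.0/16", "192.168.0.120/32"));   // false
    }
}
```

Running this check before kubeadm init would have flagged the clash immediately.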

Routing between Kubernetes cluster and Docker container in the VM

I have set up a Kubernetes cluster in a VM (Ubuntu 18.04.1 LTS) on Azure using preconfigured scripts.
A MongoDB Docker container is running alongside the K8s cluster. My aim is to connect the CMS container, which runs inside K8s, to MongoDB.
Docker containers:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3883f7b397cf mongo "docker-entrypoint.s…" 5 hours ago Up 5 hours 0.0.0.0:27017->27017/tcp mongodb
299239d90cbb mirantis/kubeadm-dind-cluster:v1.12 "/sbin/dind_init sys…" 27 hours ago Up 27 hours 8080/tcp kube-node-2
34c8bd5fad2e mirantis/kubeadm-dind-cluster:v1.12 "/sbin/dind_init sys…" 27 hours ago Up 27 hours 8080/tcp kube-node-1
15a2d6521e6e mirantis/kubeadm-dind-cluster:v1.12 "/sbin/dind_init sys…" 27 hours ago Up 27 hours 127.0.0.1:32768->8080/tcp kube-master
Kubernetes services:
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h <none>
mycms LoadBalancer 10.97.53.114 <pending> 80:31664/TCP 18s app=mycms,tier=frontend
Kubernetes pods:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
mycms-dc4978ffc-khvj2 1/1 Running 0 4m8s 10.244.2.13 kube-node-1 <none>
The MongoDB container's IP address is 172.17.0.2.
The Kubernetes master container's IP address is 10.192.0.2.
Kubernetes node 1's container IP address is 10.192.0.3.
Kubernetes node 2's container IP address is 10.192.0.4.
The CMS pod is running on 10.244.2.13, inside the K8s node container.
For testing, I installed mongo-client on the host and tested the connection, which works well.
But the CMS pod cannot reach the MongoDB container (I am passing the Mongo connection string to the pod in an environment variable).
CMS pod's log
MongoError: failed to connect to server [172.17.0.2:27017] on first connect [MongoError: connect EHOSTUNREACH 172.17.0.2:27017]
How do I route between the MongoDB container and the CMS container? Is anything wrong or missing in my approach?
Please let me know if you need further information. Thanks!
You need to use the IP address of the host where Docker is installed, not the MongoDB container's internal IP address, to connect to MongoDB from the Kubernetes cluster or from any other host. According to your docker ps -a output, port 27017 is published for the MongoDB container, so <hostIP>:27017 should be used, not 172.17.0.2:27017.
By default, Kubernetes places no restrictions on connections leaving the cluster.
Also, you may have firewall rules in Azure that forbid connections between hosts.
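Applying that to the connection string the CMS pod receives in its environment variable: only the host portion needs to change. A tiny helper sketch (the method name withHost and the address 10.0.0.4 are hypothetical; use your VM's actual IP):

```java
import java.net.URI;

public class MongoHostFix {
    // Swap the host in a mongodb:// connection string for the Docker host's IP,
    // keeping the original port and database path
    static String withHost(String mongoUri, String hostIp) {
        URI u = URI.create(mongoUri);
        return "mongodb://" + hostIp + ":" + u.getPort()
             + (u.getPath() == null ? "" : u.getPath());
    }

    public static void main(String[] args) {
        // 10.0.0.4 stands in for the VM host's address
        System.out.println(withHost("mongodb://172.17.0.2:27017/cms", "10.0.0.4"));
        // mongodb://10.0.0.4:27017/cms
    }
}
```

(This sketch ignores credentials and query options; a real connection string with those parts would need fuller parsing.)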

OpenShift internal Docker registry repo address is not the same as the docker-registry service cluster IP

My steps, on one of my cluster's master servers:
Create router:
# oadm router ose-router --replicas=1 --credentials='/etc/origin/master/openshift-router.kubeconfig' --images='openshift-register.com.cn:5000/openshift3/ose-${component}:v3.4' --service-account=router
Create docker registry:
# oadm registry --config=/etc/origin/master/admin.kubeconfig --service-account=registry --images='openshift-register.com.cn:5000/openshift3/ose-${component}:v3.4'
The router and registry are created successfully, and we can see the docker-registry cluster IP address is 172.30.182.170:
# oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.182.170 <none> 5000/TCP 22s
kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP 6d
ose-router 172.30.80.196 <none> 80/TCP,443/TCP,1936/TCP 1m
Querying the image streams in the openshift namespace, the Docker repo IP is 172.30.137.159:
# oc get is -n openshift
NAME DOCKER REPO TAGS
jenkins 172.30.137.159:5000/openshift/jenkins 2,1
mariadb 172.30.137.159:5000/openshift/mariadb 10.1
mongodb 172.30.137.159:5000/openshift/mongodb 3.2,2.6,2.4
mysql 172.30.137.159:5000/openshift/mysql 5.7,5.6,5.5
nodejs 172.30.137.159:5000/openshift/nodejs 4,0.10
perl 172.30.137.159:5000/openshift/perl 5.24,5.20,5.16
php 172.30.137.159:5000/openshift/php 5.5,7.0,5.6
postgresql 172.30.137.159:5000/openshift/postgresql 9.5,9.4,9.2
python 172.30.137.159:5000/openshift/python 3.4,3.3,2.7 + more...
redis 172.30.137.159:5000/openshift/redis 3.2
ruby 172.30.137.159:5000/openshift/ruby 2.3,2.2,2.0
My concern and question: the Docker repo IP address should use my docker-registry service's cluster IP by default, so why did it generate a different IP address, and how/where can I find where this Docker repo IP comes from? I'm really new to OpenShift, so any suggestion would be appreciated.
[root@ocp-master01 ~]# oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.182.170 <none> 5000/TCP 49m
kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP 7d
ose-router 172.30.80.196 <none> 80/TCP,443/TCP,1936/TCP 50m
[root@ocp-master01 ~]# oc get svc -o yaml | grep IP
clusterIP: 172.30.182.170
sessionAffinity: ClientIP
type: ClusterIP
clusterIP: 172.30.0.1
sessionAffinity: ClientIP
type: ClusterIP
clusterIP: 172.30.80.196
