How do I find the master IP on a Docker Desktop Kubernetes cluster?

I am using Docker Desktop on Windows and have created a local Kubernetes cluster. I've been following this quick start guide and am running into issues identifying my external IP. When creating a service, I'm supposed to list the "master server's IP address".
I've identified the master node with kubectl get node:
NAME             STATUS   ROLES    AGE   VERSION
docker-desktop   Ready    master   11m   v1.14.7
Then I used kubectl describe node docker-desktop, but an external IP is not listed anywhere.
Where can I find this value?

Use either of the following commands to see more information about the nodes:
kubectl get nodes -o wide
or
kubectl get nodes -o json
You'll be able to see the INTERNAL-IP and EXTERNAL-IP columns.
PS: In my cluster, the internal IP works as the external IP, even though the external IP is listed as <none>.
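If you only want the addresses, a jsonpath query also works; this is a minimal sketch, not part of the original answer:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses}{"\n"}{end}'
On Docker Desktop this typically prints a single InternalIP entry for the docker-desktop node, which is the address to use.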

Related

Minikube M1 - minikube service not working

I'm trying to follow the instructions in the link to deploy a hello world app in Minikube. So far I have created the deployment and exposed it.
Deployment:
NAMESPACE   NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
default     deployment.apps/web   1/1     1            1           10m
Service:
NAMESPACE   NAME          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
default     service/web   NodePort   10.107.89.59   <none>        8080:30841/TCP   10m
When I run minikube service web --url as below, I only get:
minikube service web --url
🏃 Starting tunnel for service web.
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
If I run it without the --url option, my browser says that the server is not responding.
Did anyone have similar problems? Why is the hello world app not loading in the browser as in the guide?
Thank you in advance
The tunnel is created between your Mac and the cluster IP instead of the node IP.
Here is a workaround for your case:
Get the service cluster IP:
kubectl get svc -o wide
Get the tunnel PORT on your Mac:
ps -ef | grep ssh
Access it from localhost:
curl http://127.0.0.1:PORT
If you are using Ingress, please refer to #12089.
Remember: the bridge network on macOS is different from the one on Linux; see #7332 (comment).
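As an alternative that avoids the tunnel entirely, kubectl port-forward also works with the Docker driver; a minimal sketch, assuming the web service and port 8080 from the question:
kubectl port-forward svc/web 8080:8080
curl http://127.0.0.1:8080
Like the tunnel, the forward keeps running in the terminal, but it always binds to 127.0.0.1, so no node or cluster IP is involved.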

Kubernetes Slave Error - The connection to the server localhost:8080 was refused

I have been trying to set up a Kubernetes cluster.
I have two Ubuntu droplets on DigitalOcean that I am using to do this.
I have set up the master and joined the slave.
I am now trying to create a secret for my Docker credentials so that I can pull private images on the node. However, when I run that command (or any other kubectl command, e.g. kubectl get nodes) I get this error: The connection to the server localhost:8080 was refused - did you specify the right host or port?
kubectl itself is set up, though, as running kubectl on its own shows the help output.
Does anyone know why I might be getting this issue and how I can solve it?
Sorry, I have just started with Kubernetes, but I am trying to learn.
I understand that you have to set up the cluster as a non-root user on the master (which I have); is it OK to use root on the slaves?
Thanks
kubectl is used to connect to and run commands against the Kubernetes API server. There is no need to have it configured on worker (slave) nodes.
However, if you really need to make kubectl work from a worker node, you would need to do the following:
Create a .kube directory on the worker node:
mkdir -p $HOME/.kube
Copy the configuration file /etc/kubernetes/admin.conf from the master node to $HOME/.kube/config on the worker node.
Then run the following command on the worker node:
sudo chown $(id -u):$(id -g) $HOME/.kube/config
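Scripted end to end, the steps might look like this; it assumes SSH access from the worker to the master at a placeholder address MASTER_IP:
mkdir -p $HOME/.kube
scp root@MASTER_IP:/etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # should now reach the API server instead of localhost:8080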
Update:
To address your question in the comment:
That is not how Kubernetes nodes work.
From the Kubernetes documentation about nodes:
The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you’ll rarely interact with nodes directly.
This means that images pulled from a private repository are "handled" through the master node's configuration, which is synchronized between all nodes. There is no need to configure anything on the worker (slave) nodes.
See the Kubernetes documentation for additional information about the Kubernetes Control Plane.
Hope this helps.
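Since the original goal was a pull secret for private images, here is a minimal sketch of creating one from the master; the secret name regcred and all credentials are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=YOUR_USER \
  --docker-password=YOUR_PASSWORD \
  --docker-email=YOU@EXAMPLE.COM
The secret can then be referenced from a pod spec via imagePullSecrets, and the nodes will use it when pulling the image.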

Accessing a k8s service with cluster IP in default namespace from a docker container

I have a server that is orchestrated using k8s; its service looks like below:
➜ installations ✗ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
oxd-server   ClusterIP   10.96.124.25   <none>        8444/TCP,8443/TCP   3h32m
and its pod:
➜ helm git:(helm-rc1) ✗ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
sam-test-oxd-server-6b8f456cb6-5gwwd   1/1     Running   0          3h2m
Now, I have a docker image with an env variable that requires the URL of this server.
I have 2 questions from here.
How can the docker image get the URL or access the URL?
How can I access the same URL in my terminal so I can make some curl requests through it?
I hope I am clear on the explanation.
If your docker container is outside the Kubernetes cluster, then it's not possible to access your ClusterIP service.
As you could guess by its name, ClusterIP type services are only accessible from within the cluster.
By within the cluster I mean any resource managed by Kubernetes.
A standalone docker container running inside a VM which is part of your K8S cluster is not a resource managed by K8S.
So, in order to achieve what you want, you have the following possibilities (option 2 is sketched after the list):
Set a hostPort inside your pod. This is not recommended and is listed as a bad practice in the docs. Keep this usage for very specific cases.
Switch your service to NodePort instead of ClusterIP. This way, you'll be able to access it using a node IP + the node port.
Use a LoadBalancer type of service, but this solution needs some configuration and is not straightforward.
Use an Ingress along with an IngressController, but just like the load balancer, this solution needs some configuration and is not that straightforward.
Depending on what you do and whether this is critical or not, you'll have to choose one of these solutions:
1 & 2 for debug/dev
3 & 4 for prod, but you'll have to work with your k8s admin
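For option 2, the existing service can be switched to NodePort in place; a minimal sketch, assuming the oxd-server service from the question:
kubectl patch svc oxd-server -p '{"spec": {"type": "NodePort"}}'
kubectl get svc oxd-server   # note the newly allocated node ports, e.g. 8443:3xxxx/TCP
The service is then reachable at any node's IP on the allocated port.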
You can use the name of the service, oxd-server, from any other pod in the same namespace to access it; i.e., if the service is backed by pods that are serving HTTPS, you can access the service at https://oxd-server:8443/.
If the client pod that wants to access this service is in a different namespace, then you can use oxd-server.<namespace> name. In your case that would be oxd-server.default since your service is in default namespace.
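A quick way to test that DNS name from inside the cluster is a throwaway curl pod; the curlimages/curl image and the -k flag are illustrative choices, not from the original answer:
kubectl run tmp --rm -it --restart=Never --image=curlimages/curl --command -- curl -k https://oxd-server.default:8443/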
To access this service from outside the cluster (from your terminal) for local debugging, you can use port forwarding:
kubectl port-forward svc/oxd-server 8443:8443
Then you can use the URL localhost:8443 to make any requests, and they will be forwarded to the service.
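With the port-forward running, a request from the terminal might look like this; the -k flag and the root path are assumptions for illustration:
curl -k https://127.0.0.1:8443/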
If you want to access this service from outside the cluster for production use, you can make the service of type: NodePort or type: LoadBalancer. See service types here.

Connection refused when trying to connect to services in Kubernetes

I'm trying to create a Kubernetes cluster for learning purposes. So, I created 3 virtual machines with Vagrant, where the master has the IP address 172.17.8.101 and the other two are 172.17.8.102 and 172.17.8.103.
It's clear that we need Flannel so that our containers on different machines can connect to each other without port mapping. And for Flannel to work, we need etcd, because Flannel uses this datastore to put and get its data.
I installed etcd on the master node and put the Flannel network address in it with the command etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'
To enable IP masquerading and also use the private network interface in the virtual machine, I added --ip-masq --iface=enp0s8 to FLANNEL_OPTIONS in the /etc/sysconfig/flannel file.
In order to make Docker use the Flannel network, I added --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} to the OPTIONS variable in the /etc/sysconfig/docker file. Note that the values for the FLANNEL_SUBNET and FLANNEL_MTU variables are the ones set by Flannel in the /run/flannel/subnet.env file.
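For reference, /run/flannel/subnet.env typically contains something like the following; the exact values depend on the subnet lease Flannel acquires:
FLANNEL_NETWORK=10.33.0.0/16
FLANNEL_SUBNET=10.33.72.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true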
After all these settings, I installed kubernetes-master and kubernetes-client on the master node and kubernetes-node on all the nodes. For the final configuration, I changed the KUBE_SERVICE_ADDRESSES value in the /etc/kubernetes/apiserver file to --service-cluster-ip-range=10.33.0.0/16
and the KUBELET_API_SERVER value in the /etc/kubernetes/kubelet file to --api-servers=http://172.17.8.101:8080.
The k8s-tutorial project repository contains the complete files.
After all these efforts, all the services start successfully and work fine. The command kubectl get nodes clearly shows 3 nodes running. I can successfully create an nginx pod with the command kubectl run nginx-pod --image=nginx --port=80 --labels="app=nginx" and create a service with the command kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-pod".
The command kubectl describe service service-pod outputs the following results:
Name:              service-pod
Namespace:         default
Labels:            app=nginx
Selector:          app=nginx
Type:              ClusterIP
IP:                10.33.39.222
Port:              <unset>  8000/TCP
Endpoints:         10.33.72.2:80
Session Affinity:  None
No events.
The challenge is that when I try to connect to the created service with curl 10.33.39.222:8000 I get curl: (7) Failed connect to 10.33.39.222:8000; Connection refused, but if I try curl 10.33.72.2:80 I get the default nginx page. Also, I can't ping 10.33.39.222; all the packets get lost.
Some suggested stopping and disabling firewalld, but it wasn't running on the nodes at all. Since Docker changed the FORWARD chain policy to DROP in iptables after version 1.13, I changed it back to ACCEPT, but that didn't help either. I eventually tried changing the CIDR and using different IPs/subnets, but no luck.
Does anybody know where I am going wrong, or how to figure out why I can't connect to the created service?
The only conflict I can see is between the pod CIDR and the CIDR you are using for the services.
The Flannel network is '{"Network": "10.33.0.0/16"}', and on the kube-apiserver you have --service-cluster-ip-range=10.33.0.0/16. That's the same range, and it should be different: you have kube-proxy setting up services in 10.33.0.0/16 while your overlay thinks it needs to route to pods running on 10.33.0.0/16. I would start by choosing completely non-overlapping CIDRs for your pods and services.
For example, on my cluster (I'm using Calico) I have a pod CIDR of 192.168.0.0/16 and a service CIDR of 10.96.0.0/12.
Note: you wouldn't be able to ping 10.33.39.222 anyway, since ICMP to service IPs is not allowed in this case.
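A minimal sketch of the fix, keeping 10.33.0.0/16 for pods and moving services to 10.96.0.0/12 (the exact values are illustrative; any non-overlapping pair works):
etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'
# in /etc/kubernetes/apiserver:
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"
After restarting the services, recreate the pods and services so they pick up addresses from the new, non-overlapping ranges.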
Your service is of type ClusterIP, which means it can only be accessed by other Kubernetes pods. To achieve what you are trying to do, consider switching to a service of type NodePort. You can then connect to it using the command curl <node-IP-address>:<allocated-node-port>
See https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ for an example of using NodePort.
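Recreating the service as NodePort would look something like this; the name service-pod-nodeport is a placeholder:
kubectl expose pod nginx-pod --port=8000 --target-port=80 --type=NodePort --name=service-pod-nodeport
kubectl get svc service-pod-nodeport   # note the 8000:3xxxx/TCP mapping
curl 172.17.8.102:<allocated-node-port>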

Is it possible to host a Kubernetes node from a network with a dynamic IP?

I would like to host a Kubernetes master node in AWS (or another cloud provider) and then add nodes from home to that cluster. However, I do not have a static IP from my internet provider, so the question is: will this work, and what happens when my IP address changes?
The Kubernetes documentation has some info about master-node communication.
For communication from node to master, the node makes requests to the kube-apiserver. So normally this should work: when your node's IP changes, the node info stored for it in etcd will be updated, and you can check your nodes' status with the command kubectl get nodes -o wide
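To observe a node re-registering after an IP change, you can watch the node list; this is just the same command with --watch added:
kubectl get nodes -o wide --watch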
But some specific Kubernetes features may be affected, such as NodePort for a Service.
Hope this helps!
