Minikube M1 - minikube service not working - docker

I am trying to follow the instructions in the link to deploy a hello world app in Minikube. So far I have created the deployment and exposed it.
Deployment:
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/web 1/1 1 1 10m
Service:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/web NodePort 10.107.89.59 <none> 8080:30841/TCP 10m
When I run minikube service web --url as shown below, I only get:
minikube service web --url
🏃 Starting tunnel for service web.
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
If I run it without the "--url" option, my browser says that the server is not responding.
Has anyone had similar problems? Why isn't the hello world app loading in the browser as in the guide?
Thank you in advance

The tunnel is created between your Mac and the cluster IP instead of the node IP.
Here is a workaround for your case:
Get the service cluster IP
kubectl get svc -o wide
Get the tunnel PORT on your Mac
ps -ef | grep ssh
Access it from localhost
curl http://127.0.0.1:PORT
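A minimal end-to-end sketch of that workaround, assuming the web service from the question (the local port 55123 below is hypothetical; yours will differ):

# keep "minikube service web --url" running in a separate terminal, then:
kubectl get svc web -o wide     # note the ClusterIP, e.g. 10.107.89.59, and port 8080
ps -ef | grep ssh               # the tunnel shows up as an ssh process with a "-L <local-port>:<cluster-ip>:8080" forward
curl http://127.0.0.1:55123     # replace 55123 with the local port taken from that -L flag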
If you are using Ingress, please refer to #12089.
Remember that the bridge network on macOS is different from the one on Linux; see #7332 (comment).

Related

Could not access Minikube(v1.18.1) Ingress on Docker-Driver Windows 10

My issue is exactly the same as this, but I am replicating the question again for your reference:
I am facing a problem where I cannot access the Minikube Ingress in the browser using its IP. I have installed Minikube on Windows 10 Home and start it with the Docker driver (minikube start --driver=docker).
System info:
Windows 10 Home
Minikube(v1.18.1)
Docker (driver for minikube) - Docker engine version 20.10.5
I am following this official document - https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
First, I created the deployment by running the below command on Minikube.
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
The deployment gets created, as can be seen in the below image:
Next, I exposed the deployment that I created above. For this I ran the below command.
kubectl expose deployment web --type=NodePort --port=8080
This created a service which can be seen by running the below command:
kubectl get service web
The screenshot of the service is shown below:
I am now able to visit the service in the browser by running the below command:
minikube service web
In the below screenshot you can see I am able to view it on the browser.
Next, I created an Ingress by running the below command:
kubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml
The ingress gets created and I can verify it by running the below command:
kubectl get ingress
The screenshot for this is given below:
The ingress IP is listed as 192.168.49.2, so if I open it in the browser it should load, but unfortunately it does not; it shows "site can't be reached". See the below screenshot.
What is the problem? Please provide me with a solution.
I also added the mapping to the etc\hosts file:
192.168.49.2 hello-world.info
Then I also tried opening hello-world.info in the browser, but no luck.
In the below picture I have pinged hello-world.info, which resolves to the IP address 192.168.49.2. This shows the etc\hosts mapping is correct:
I also ran curl against the minikube IP and against hello-world.info, and both time out. See the below image:
The kubectl describe services web provides the following details:
Name: web
Namespace: default
Labels: app=web
Annotations: <none>
Selector: app=web
Type: NodePort
IP: 10.100.184.92
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31880/TCP
Endpoints: 172.17.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The kubectl describe ingress example-ingress gives the following output:
Name: example-ingress
Namespace: default
Address: 192.168.49.2
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
hello-world.info
/ web:8080 (172.17.0.4:8080)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1
Events: <none>
The issue seems to have been resolved there by following the below instructions (as posted in the comments):
Once you set up the ingress with the necessary changes, I guess you are in the Windows PowerShell with minikube running, right? Make sure you 'enable addons ingress' and have a separate console running 'minikube tunnel' as well. Also, add the hostname and IP address to Windows' hosts table. Then type 'minikube ssh' in PowerShell; it gives you a command line. Then you can 'curl myapp.com' and you should get the response as expected.
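Condensed into commands, that advice amounts to roughly the following (a sketch; the hosts-file path is the standard Windows location, and the final curl assumes the ingress controller answers on the node itself, as it does with the minikube ingress addon):

minikube addons enable ingress              # make sure the ingress addon is enabled
minikube tunnel                             # keep this running in a separate console
# add to C:\Windows\System32\drivers\etc\hosts:
#   192.168.49.2  hello-world.info
minikube ssh                                # open a shell inside the minikube node
curl -H "Host: hello-world.info" 127.0.0.1  # ask the ingress controller for the hello-world.info host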
Nevertheless, in my case, minikube tunnel is not responding upon giving the minikube tunnel command:
I am not able to curl hello-world.info even through minikube ssh. Kindly help!
On Windows
After a decent amount of time, I came to the conclusion that Ingress has some conflicts with Docker on Windows 10 Home. Things work fine if we want to expose a service of NodePort type, but Ingress is troublesome.
Further, I tried to set up WSL 2 with Ubuntu on Windows 10, but no luck.
Finally, the following worked for Minikube Ingress on Windows 10 Home:
Install VirtualBox
Uncheck the Virtual Machine Platform and Windows Hypervisor Platform options from Control Panel -> Programs -> Turn Windows Features on and off (under Programs and Features) and then click ok. Restart your computer if prompted to.
Now, execute the following commands in a new cmd
minikube delete
minikube start --driver=virtualbox
If minikube start --driver=virtualbox doesn't work, then use minikube start --driver=virtualbox --no-vtx-check.
This process solved my problem and ingress is working fine on my Windows 10 Home Minikube.
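Note that after recreating the cluster with the VirtualBox driver, the ingress setup from the question has to be repeated; roughly (a sketch reusing the question's manifests, and the minikube IP will no longer be 192.168.49.2):

minikube addons enable ingress
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment web --type=NodePort --port=8080
kubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml
minikube ip     # map this IP to hello-world.info in the hosts file instead of 192.168.49.2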
On Ubuntu
Finally, Docker on Ubuntu supports Minikube Ingress seamlessly, without any glitch.
Same issue here with the docker driver, but everything is OK with the hyperv driver.
Enable Hyper-V on your Windows machine and start with minikube start --driver=hyperv, or run
minikube config set driver hyperv and then minikube start.

How to access services exposed via ClusterIP when using Docker For Windows:

I installed Docker for Windows and the built-in k8s single-node cluster for dev purposes on a local workstation (Windows 10 Pro).
I'd like to know how to access services hosted on this cluster. It is not documented very well.
I don't have a load balancer installed and don't need a K8s Ingress. How can I access a service hosted at 10.105.245.65:80? localhost and 127.0.0.1 don't work, and 10.105.245.65 has no meaning on the host Windows machine.
I could use NodePort (that works), but I'd like to understand how to access it via ClusterIP.
C:\Users\balamuvi>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 52m
webserver ClusterIP 10.105.245.65 <none> 80/TCP 48m ===> how do I access this service
C:\Users\balamuvi>kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443 =======> resolves to localhost
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
A ClusterIP is not accessible from outside the cluster. You will have to exec into another pod and use curl to access it.
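For example, a throwaway pod can curl the ClusterIP from inside the cluster (a sketch; curlimages/curl is just one convenient image, and 10.105.245.65 is the service IP from the question):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://10.105.245.65:80
# or exec into an existing pod that has curl installed:
kubectl exec -it <some-pod> -- curl -s http://webserver.default.svc.cluster.local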
A ClusterIP cannot be accessed from outside the cluster.
Another method, besides exec'ing into a pod (as mentioned by Arghya), would be to use the kubectl port-forward command.
Just run:
kubectl port-forward pods/<pod-name> <local-port>:<pod-port>
and then you can access the pod at localhost:<local-port>.
Refer to the Kubernetes documentation for more information about port forwarding.
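For the webserver service from the question, that could look like this (a sketch; the pod name is hypothetical, and kubectl can also forward via the service):

kubectl get pods                                    # find the pod backing the webserver service
kubectl port-forward pods/webserver-abc123 8080:80  # hypothetical pod name: local 8080 -> pod port 80
kubectl port-forward svc/webserver 8080:80          # alternative: forward via the service itself
curl http://localhost:8080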

How do I find the master IP on Docker Desktop Kubernetes Cluster?

I am using Docker Desktop on Windows and have created a local Kubernetes cluster. I've been following this (quick start guide) and am running into issues identifying my external IP. When creating a service I'm supposed to list the "master server's IP address".
I've identified the master node with kubectl get node:
NAME STATUS ROLES AGE VERSION
docker-desktop Ready master 11m v1.14.7
Then I used kubectl describe node docker-desktop... but an external IP is not listed anywhere.
Where can I find this value?
Use the following command so you can see more information about the nodes.
kubectl get nodes -o wide
or
kubectl get nodes -o json
You'll be able to see the internal-ip and external-ip.
PS: In my cluster, the internal-ip works as the external-ip, even though the external-ip is listed as none.
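If you only want the address itself, jsonpath against the standard node status fields works too (a sketch for the docker-desktop node):

kubectl get nodes -o wide       # shows the INTERNAL-IP and EXTERNAL-IP columns
kubectl get node docker-desktop -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'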

Other pc can't visit k8s dashboard

My Mac can visit the k8s dashboard, but other PCs can't. What's the reason?
#kubernetes/UI #kubernetes/dashboard
I have tried with the latest version of my channel (Stable or Edge)
macOS Version: 10.14
Docker for Mac: version: 19.03.1
k8s version : 1.14.3
Enable k8s in the Docker for Mac settings
Apply the k8s dashboard.yaml
My Mac's IP is 192.168.0.200
kubectl get service --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP      PORT(S)
default       kubernetes             ClusterIP   10.96.0.1       443/TCP
kube-system   kubernetes-dashboard   NodePort    10.104.38.247   443:31317/TCP
kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
I can visit the dashboard with the below URLs on my Mac:
kubernetes.docker.internal:31317 localhost:31317
127.0.0.1:31317
192.168.0.200:31317
And I have stopped the firewall.
On the same LAN, other PCs can't visit 192.168.0.200:31317,
and I don't know why.
Please help me, thanks.
Do you need any other info?
Actually, I asked the same question on GitHub, and they suggested I ask it on Stack Overflow.
This is my first time asking a question on Stack Overflow, so if I have done something wrong, please tell me.
I expected that other PCs, both Windows and Mac, on the same LAN could visit my Mac's k8s dashboard.
You need to run kubectl proxy locally to access the dashboard from outside the Kubernetes cluster. You need to scp the admin.conf file (located on your Kubernetes master at /etc/kubernetes/admin.conf) to the machine from which you want to access the dashboard and pass it to the kubectl command. Please refer to the following posts:
How to access/expose kubernetes-dashboard service outside of a cluster?
Kubernetes dashboard
To access the Dashboard, navigate your browser to https://<server_IP>:31317
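A sketch of that proxy flow (the user and host in the scp command are placeholders; --address and --accept-hosts make the proxy listen for other machines on the LAN):

scp <user>@<master-host>:/etc/kubernetes/admin.conf ./admin.conf
kubectl --kubeconfig ./admin.conf proxy --address=0.0.0.0 --accept-hosts='.*'
# the dashboard is then reachable from another machine at:
# http://<your-machine-ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/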

Connection refused when trying to connect to services in Kubernetes

I'm trying to create a Kubernetes cluster for learning purposes. So, I created 3 virtual machines with Vagrant where the master has IP address of 172.17.8.101 and the other two are 172.17.8.102 and 172.17.8.103.
It's clear that we need Flannel so that our containers on different machines can connect to each other without port mapping. And for Flannel to work, we need Etcd, because Flannel uses this datastore to put and get its data.
I installed Etcd on the master node and set the Flannel network address in it with the command etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'
To enable ip masquerading and also using the private network interface in the virtual machine, I added --ip-masq --iface=enp0s8 to FLANNEL_OPTIONS in /etc/sysconfig/flannel file.
In order to make Docker use the Flannel network, I added '--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}' to the OPTIONS variable in the /etc/sysconfig/docker file. Note that the values for the FLANNEL_SUBNET and FLANNEL_MTU variables are the ones set by Flannel in the /run/flannel/subnet.env file.
After all these settings, I installed kubernetes-master and kubernetes-client on the master node and kubernetes-node on all the nodes. For the final configurations, I changed KUBE_SERVICE_ADDRESSES value in /etc/kubernetes/apiserver file to --service-cluster-ip-range=10.33.0.0/16
and KUBELET_API_SERVER value in /etc/kubernetes/kubelet file to --api-servers=http://172.17.8.101:8080.
This is the link to k8s-tutorial project repository with the complete files.
After all these efforts, all the services start successfully and work fine. It's clear that there are 3 nodes running when I use the command kubectl get nodes. I can successfully create a nginx pod with command kubectl run nginx-pod --image=nginx --port=80 --labels="app=nginx" and create a service with kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-pod" command.
The command kubectl describe service service-pod outputs the following results:
Name: service-pod
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.33.39.222
Port: <unset> 8000/TCP
Endpoints: 10.33.72.2:80
Session Affinity: None
No events.
The challenge is that when I try to connect to the created service with curl 10.33.39.222:8000 I get curl: (7) Failed connect to 10.33.39.222:8000; Connection refused, but if I try curl 10.33.72.2:80 I get the default nginx page. Also, I can't ping 10.33.39.222; all the packets get lost.
Some suggested stopping and disabling Firewalld, but it wasn't running on the nodes at all. As Docker changed the FORWARD chain policy to DROP in iptables after version 1.13, I changed it back to ACCEPT, but it didn't help either. I eventually tried to change the CIDR and use different IPs/subnets, but no luck.
Does anybody know where I am going wrong, or how to figure out why I can't connect to the created service?
The only thing I can see that conflicts is the PodCidr with the Cidr that you are using for the services.
The Flannel network is '{"Network": "10.33.0.0/16"}', and on the kube-apiserver you have --service-cluster-ip-range=10.33.0.0/16. That's the same range, and it should be different: you have kube-proxy setting up services for 10.33.0.0/16, and then you have your overlay thinking it needs to route to the pods running on 10.33.0.0/16. I would start by choosing completely non-overlapping Cidrs for both your pods and services.
For example, on my cluster (I'm using Calico) I have a podCidr of 192.168.0.0/16 and a service Cidr of 10.96.0.0/12.
Note: you wouldn't be able to ping 10.33.39.222, since ICMP is not allowed in this case.
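Applied to the setup in the question, picking non-overlapping ranges could look like this (illustrative values only; any two private CIDRs that don't overlap will do):

# pod/Flannel network, set in etcd on the master before restarting flannel and docker:
etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16"}'
# service network, set in /etc/kubernetes/apiserver:
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"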
Your service is of type ClusterIP, which means it can only be accessed by other Kubernetes pods. To achieve what you are trying to do, consider switching to a service of type NodePort. You can then connect to it using the command curl <Kubernetes-IP-address>:<exposedServicePort>
See https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ for an example of using NodePort.
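For the pod in the question, switching to NodePort could look like this (a sketch; the node port is assigned by Kubernetes, so check it before curling):

kubectl delete service service-pod
kubectl expose pod nginx-pod --port=8000 --target-port=80 --name=service-pod --type=NodePort
kubectl get svc service-pod              # note the assigned node port, e.g. 8000:3xxxx/TCP
curl http://172.17.8.102:<node-port>     # any node IP in the cluster should work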

Resources