I am setting up an Airflow k8s cluster using a kind deployment on a WSL2 setup. When I execute the standard helm install $RELEASE_NAME apache-airflow/airflow --namespace $NS, it fails. Further investigation shows that the cluster worker node cannot connect to registry-1.docker.io.
Error log for one of the image pulls:
Failed to pull image "redis:6-buster": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/redis:6-buster": failed to resolve reference "docker.io/library/redis:6-buster": failed to do request: Head "https://registry-1.docker.io/v2/library/redis/manifests/6-buster": dial tcp: lookup registry-1.docker.io on 172.19.0.1:53: no such host
I can access all other websites from this node, e.g. google.com, yahoo.com, merriam-webster.com, etc.; even docker.com works. This issue is very specific to registry-1.docker.io.
All the search results and links seem to be about general internet connectivity issues.
Current solution:
If I manually change /etc/resolv.conf on the kind worker node to point to the nameserver IP from /etc/resolv.conf of the WSL2 Debian host, then it works.
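Concretely, the manual step looks roughly like this (a sketch; kind-worker is the default worker node container name in my setup, adjust as needed):
HOST_DNS=$(grep -m1 '^nameserver' /etc/resolv.conf | awk '{print $2}')            # nameserver of the WSL2 Debian host
docker exec kind-worker sh -c "echo 'nameserver $HOST_DNS' > /etc/resolv.conf"    # overwrite DNS inside the kind worker node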
But this is a dynamic cluster and node, and I cannot do this every time. I am currently searching for a way to make this part of the cluster configuration, so that it works just by running kind create cluster and one can use kubectl or helm by default.
However, I am more interested in figuring out why this network setup fails specifically for registry-1.docker.io. Is there some configuration that can be done to avoid changing the DNS to the host IP or Google DNS? The current network configuration seems to work for pretty much the rest of the internet.
I have documented all the steps and investigation details, including some of the network configuration details, in a GitHub repository. If you need any further information to help solve the issue, please let me know. I will keep updating the GitHub documentation as I make progress.
Setup:
Windows 11 with WSL2 without any Docker desktop
WSL2 image : Debian bullseye (11) with docker engine on linux
Docker version : 20.10.2
Kind version : 0.11.1
Kind image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9
I am not sure if this is an answer or not. After spending 2 days trying to find a solution, I thought of changing the node image version. The kind release page lists 1.21 as the latest image for kind version 0.11.1. I had problems even starting the cluster with 1.21, and 1.20 hit this strange DNS issue, so I went with 1.23. It all worked fine with this image.
However, to my surprise, when I changed the cluster configuration back to 1.20, the DNS issue was gone. So I do not know what changed due to the switch of images, but I cannot reproduce the issue again! Maybe it will help someone else.
I think I have found the correct workaround for this bug: switching iptables to legacy mode fixed it for me.
https://github.com/docker/for-linux/issues/1406#issuecomment-1183487816
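On Debian/Ubuntu the switch roughly looks like this (a sketch; the legacy binaries ship with the iptables package, so check the paths exist on your distro before running):
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo systemctl restart docker                # or: sudo service docker restart on WSL2 without systemd
kind delete cluster && kind create cluster   # recreate the cluster so the nodes pick up the change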
I have an issue. I typed the minikube start command and it got stuck. What should I do? Is deleting Minikube the only solution?
Restarting existing docker container for "minikube"
You have provided too little information to conclusively solve your problem, but one way is to actually delete Minikube and restart. You can see this similar question. Make sure that you have the proper privileges to run docker containers.
Generally, this problem occurs quite often on Ubuntu. You can find a very extensive thread on GitHub.
In addition to the Restarting existing docker container for "minikube" message, you should also gather some other information (like the specific error). If that is insufficient, you can always open an issue on GitHub.
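A rough way to gather that extra information (standard minikube commands; a sketch only):
minikube status                          # check which components are up
minikube logs --file=minikube.log        # dump logs from the failing start
minikube start --alsologtostderr -v=7    # re-run the start with verbose output to see the actual error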
In the thread above you can find a couple of potential solutions. Here is one of them:
When I run minikube start --driver=docker --alsologtostderr, I get the same error message with "no such file or directory".
Edit: I was able to fix this by changing to .deb docker instead of snap docker.
Per https://kubernetes.io/docs/tasks/tools/install-minikube/:
"If you're using the none driver in Debian or a derivative, use the .deb packages for Docker rather than the snap package, which does not work with Minikube. You can download .deb packages from Docker."
I did $ snap remove docker, then followed these instructions:
https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
Maybe the error messages could be amended to tell this to the user?
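For reference, that switch roughly follows Docker's own Ubuntu instructions; a condensed sketch (verify against the linked page before running):
sudo snap remove docker
sudo apt-get update && sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io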
You can try sudo minikube delete to delete the container first,
then minikube start and see if the issue is fixed or not.
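In other words, roughly (a sketch, assuming the docker driver; --all and --purge are optional extras that also clear cached state):
minikube delete --all --purge                        # remove the existing container and cached state
minikube start --driver=docker --alsologtostderr     # recreate with verbose logs to spot the real error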
I have installed Docker Desktop (version: 2.3.0.4) and enabled Kubernetes.
I deployed a couple of pods and everything was working fine. Since yesterday I have been facing a weird issue, described below:
Unable to connect to the server: dial tcp 127.0.0.1:6443: connectex: No connection could be made because the target machine actively refused it.
No changes were made to my system. I am using Linux containers on a Windows 10 machine.
These are the steps I have tried:
Restarted the Docker Desktop
Tried the same with minikube and Docker Desktop both
Tried to disable the firewall, but due to some permissions I am not able to turn it off.
I have reset the Kubernetes cluster as well.
I tried numerous different changes to fix docker desktop kubernetes failing to start. What finally worked for me is...
Clicked the troubleshooting icon (it's a bug icon) and then chose Clean/Purge Data.
Finally, I found the solution for this.
The VPN was causing the issue. I am using my office laptop, and after a restart the VPN was enabled and logged in, and because of this Kubernetes was not working.
After disabling the VPN, the Kubernetes cluster works fine.
Hope that helps others as well.
For me, just "Clean and Purge" wasn't enough. Here is what I did.
Log off VPN
Go to bug and "Clean and Purge Data"
Also choose "Reset to Factory Defaults"
Restart Docker Desktop
Choose "Enable Kubernetes"
At this point, the "Starting" took a while for Kubernetes to be enabled. Now it's all good.
$ kubectl get namespace
NAME STATUS AGE
default Active 80s
kube-node-lease Active 82s
kube-public Active 82s
kube-system Active 82s
I tried Clean/Purge Data and resetting to factory defaults, but that didn't work.
I had to reset the Kubernetes cluster from here.
In my case, the corporate proxy server caused the Kubernetes startup to fail. Adding *.docker.internal to the no_proxy hosts solved the issue.
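For anyone behind a similar proxy, the entries look roughly like this (the proxy address is a hypothetical placeholder; set the values wherever your proxy configuration lives, e.g. Docker Desktop's proxy settings or the environment):
HTTP_PROXY=http://proxy.example.com:8080
HTTPS_PROXY=http://proxy.example.com:8080
NO_PROXY=localhost,127.0.0.1,*.docker.internal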
I had a similar problem.
Install Minikube
I installed Minikube and ran it as follows on Windows 10.
(screenshot: starting of kubectl)
Then I gave permission for Docker.
Check cluster-info
When I checked cluster-info, the result was as follows:
(screenshot: cluster info results)
Try to get pods
When I tried to get pods, I did not get any errors.
As @N-ate mentioned above, after clicking Clean/Purge Data, which removes all downloaded images from my computer, Docker and Kubernetes are now running properly.
As you can see in the image below, I only have Kubernetes images running on Docker, and they take up most of the allocated memory. I guess the failure to start Kubernetes was related to this memory issue.
In my case, Kubernetes (Docker Desktop on Mac) was not running properly even though I could manage Pods, Services, etc. When I opened Docker Desktop, it said:
Kubernetes failed to start (red background)
I managed to fix the issue by resetting Docker Desktop and pruning/cleaning the storage.
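The storage cleanup is roughly equivalent to the following (a sketch; note that -a and --volumes delete all unused images and volumes, not just dangling ones):
docker system prune -a --volumes    # removes unused containers, images, networks and volumes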
I had a similar problem after updating to Docker Desktop (version 4.11.1). After I downgraded the version, it worked fine.
Troubleshooting steps:
Check whether there are any errors by running the following command:
kubectl get events | grep node
and make sure all pods are in a Running state:
kubectl get pods --namespace kube-system
I don't know about others, but for some reason the options suggested above didn't work for me when fixing K8s on Docker Desktop on Windows. I tried cleaning the cluster, resetting to defaults, restarting the PC, installing previous versions of Docker Desktop, enabling my PC's hypervisor and giving it more resource priority, and more, but K8s still failed to start even though Docker itself started.
I came across Minikube as an alternative tool (without a UI) to create my cluster and interacted with it using kubectl.
And K8s worked for me locally.
I followed this guide - https://minikube.sigs.k8s.io/docs/start/
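The quick start from that guide boils down to something like this (a sketch; using the docker driver is my assumption on Windows):
minikube start --driver=docker
minikube kubectl -- get pods -A    # use minikube's bundled kubectl to confirm the system pods come up
kubectl get pods -A                # or your own kubectl, once minikube has set the context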
My Docker Desktop is running behind the company proxy server.
I deleted the following proxy env variables from my Windows OS:
HTTPS_PROXY: serveraddress
HTTP_PROXY: serveraddress
and set up a manual proxy in Docker Desktop instead.
My steps:
Restart Docker - it didn't help.
Reset Kubernetes - it didn't help.
Add the missing '.wslconfig' file to C:\Users\[MY USER] - it didn't help.
Restart the computer between any of the steps - it didn't help.
Stop using WSL, then re-enable WSL - it didn't help.
Uninstall Docker and install it again, then enable Kubernetes - it didn't help.
Remove the '.kube' folder from C:\Users\[MY USER] and reset Kubernetes - this caused Kubernetes to try stopping, and after it failed, I restarted Docker - which succeeded.
I have downloaded the latest kubectl version from the official Kubernetes site and referenced it in the PATH ahead of the Docker entry, but it still shows the version installed with Docker Desktop.
I understand that Docker comes with Kubernetes installed out of the box, but the bundled kubectl version '1.15.5' doesn't work correctly with my Minikube version 'v1.9.2', which is causing me problems.
Any suggestions on how to fix this issue? Should I remove the Kubernetes binaries from C:\Program Files\Docker\Docker\resources\bin? I don't think that would be a good idea.
Can someone help me tackle this issue, along with some explanation of how the versions work with each other? Thanks
This is happening because Windows always gives you the first command found in the PATH; both kubectl versions (Docker's and yours) are in the PATH, but the Docker entry is referenced before your kubectl entry.
How to solve this really depends on what you need. If you are not using the Docker Kubernetes, you have two alternatives:
1 - Fix your PATH and make sure that your kubectl entry is referenced before the Docker entry (see the commands after this list to check which binary currently wins).
2 - Replace Docker's kubectl with yours.
In either case, make sure you restart your PC after making these changes; kubectl will automatically update the configuration to point to the newer version the next time you run minikube start with the correct --kubernetes-version.
If you are using both from time to time, I would suggest you create a script that changes your PATH according to your needs.
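To check which binary currently wins and which versions are involved, something like this helps (where.exe works from both cmd and PowerShell; the kubectl flags are standard):
where.exe kubectl          # lists every kubectl on the PATH; the first entry is the one that actually runs
kubectl version --client   # the client version of that first kubectl
kubectl version            # client and server versions together, useful for the skew rule mentioned below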
According to the documentation, you must use a kubectl version that is within one minor version of your cluster. For example, a v1.2 client should work with a v1.1, v1.2, or v1.3 master. Using the latest version of kubectl helps avoid unforeseen issues.
Excuse the extreme newbiness... I have done Docker and Kubernetes courses on Linux Academy. I have a Kubernetes cluster with one master and 3 minions running on CentOS 7, installed from the repo http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/ (kube version 1.5.2). It is working, but as I went to set up the example guestbook application, I found I have no DNS. I have found documents about how to test that DNS works, but I can't seem to find how to fix it if it isn't there.
DNS is an essential add-on and is usually applied by kubeadm on its own towards the end of the kubeadm init workflow, but that is probably not the case in this older version. You can manually apply DNS with this command: kubeadm alpha phase addon kube-dns [Options] (ref).
In case kubeadm doesn't work, you can use this YAML and modify it accordingly: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/deployments/kube-dns.yaml
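A rough way to apply and then verify the add-on (a sketch; the file name and label are the conventional kube-dns ones, adjust to your manifest):
kubectl apply -f kube-dns.yaml                         # after editing the manifest for your cluster
kubectl get pods -n kube-system -l k8s-app=kube-dns    # the kube-dns pods should reach Running
kubectl get svc kube-dns -n kube-system                # the cluster DNS service should have a ClusterIP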