kube-cluster running, but no DNS - kube-dns

Excuse the extreme newbiness... I have done Docker and Kubernetes courses on Linux Academy. I have a Kubernetes cluster (one master and 3 minions) running on CentOS 7 from the repo http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/, Kubernetes version 1.5.2. The cluster works, but as I went to set up an example guestbook application, I found I have no DNS. I have found documents about how to test whether DNS works, but can't seem to find how to fix it if it isn't there.

DNS is an essential add-on and is usually applied by kubeadm on its own towards the end of a kubeadm init workflow, but that is probably not the case in this older version. You can apply DNS manually with this command: kubeadm alpha phase addon kube-dns [Options] (see the kubeadm reference).
In case that doesn't work, you can use this YAML and modify it accordingly: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/deployments/kube-dns.yaml
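As a rough sketch, assuming the manifest above matches your cluster's service CIDR and cluster domain, applying it and then verifying DNS from inside the cluster could look something like this with a recent kubectl (the busybox test pod is only for illustration):
kubectl apply -f kube-dns.yaml
kubectl get pods -n kube-system -l k8s-app=kube-dns
# quick check that service names resolve from inside the cluster
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default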

Related

invalid capacity 0 on image filesystem, Lens IDE, Kubernetes

I am creating a k8s cluster on DigitalOcean, but every time I get the same warning after I create the cluster and open it in the Lens IDE.
Here is the screenshot of the warning:
I tried every solution I found but still can't remove the error.
Check first if k3s-io/k3s issue 1857 could help:
I was getting the same error when I installed kubernetes cluster via kubeadm.
After reading all the comments on the subject, I thought that the problem might be caused by containerd and the following two commands solved my problem, maybe it can help:
systemctl restart containerd
systemctl restart kubelet
And:
This will need to be fixed upstream. I suspect it will be fixed when we upgrade to containerd v1.6 with the cri-api v1 changes
So checking the containerd version can be a clue.
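If you want to confirm which containerd release you are on before and after restarting, something like this should do:
containerd --version
kubectl get nodes -o wide   # the CONTAINER-RUNTIME column shows the runtime and its version per node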

Unable to reach registry-1.docker.io from Kind cluster node on WSL2

I am setting up an Airflow k8s cluster using a kind deployment on a WSL2 setup. When I execute the standard helm install $RELEASE_NAME apache-airflow/airflow --namespace $NS it fails. Further investigation shows that the cluster worker node cannot connect to registry-1.docker.io.
Error log for one of the image pulls:
Failed to pull image "redis:6-buster": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/redis:6-buster": failed to resolve reference "docker.io/library/redis:6-buster": failed to do request: Head "https://registry-1.docker.io/v2/library/redis/manifests/6-buster": dial tcp: lookup registry-1.docker.io on 172.19.0.1:53: no such host
I can access all other websites from this node, e.g. google.com, yahoo.com, merriam-webster.com, etc.; even docker.com works. This issue is very specific to registry-1.docker.io.
All the search results and links seem to be about general internet connection issues.
Current solution:
If I manually change /etc/resolv.conf on the kind worker node to point to the IP address from /etc/resolv.conf of the WSL2 Debian host, then it works.
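For reference, since kind nodes are just Docker containers, that manual workaround can be scripted roughly like this (the container name and nameserver IP are placeholders):
# overwrite the node's resolver with the WSL2 host resolver
docker exec <kind-worker-container> sh -c 'echo "nameserver <WSL2-host-IP>" > /etc/resolv.conf'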
But this is a dynamic cluster and node, and I cannot do this every time. I am currently searching for a way to make it part of the cluster configuration, so that it works just by running kind create cluster and one can use kubectl or helm by default.
However, I am more interested in figuring out why this network setup fails specifically for registry-1.docker.io. Is there some configuration that can be done to avoid changing DNS to the host IP or Google DNS? The current network configuration seems to work for pretty much the rest of the internet.
I have documented all the steps and investigation details, including some of the network configuration, in a GitHub repository. If you need any further information to help solve the issue, please let me know. I will keep updating the GitHub documentation as I make progress.
Setup:
Windows 11 with WSL2 without any Docker desktop
WSL2 image : Debian bullseye (11) with docker engine on linux
Docker version : 20.10.2
Kind version : 0.11.1
Kind image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9
I am not sure if this is an answer or not. After spending 2 days trying to find a solution, I thought to change the node image version. The Kind release page lists 1.21 as the latest image for kind version 0.11.1. I had problems even starting the cluster with 1.21, and 1.20 faced this strange DNS issue, so I went with 1.23. It all worked fine with this image.
However, to my surprise, when I changed the cluster configuration back to 1.20, the DNS issue was gone. So I do not know what changed due to switching the image, but I cannot reproduce the issue again! Maybe it will help someone else.
I think I have found the correct workaround for this bug: switching iptables to legacy mode fixed it for me.
https://github.com/docker/for-linux/issues/1406#issuecomment-1183487816
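On a Debian/Ubuntu-based host, that switch is typically done with update-alternatives (a sketch; binary paths may differ on your distribution):
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
# restart Docker so its rules are recreated with the legacy backend
sudo systemctl restart docker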

What is the best alternative for docker-machine with digitalocean droplets

I've been using docker-machine for creating and connecting to DigitalOcean droplets. As of August 2021 it is deprecated, and with the latest macOS Monterey I started to get more errors. I can find some ways to work around them, but I don't want to spend more time on a deprecated library.
docker-machine was really helpful at creating DigitalOcean droplets. A single line would create the droplet, install Docker on the remote droplet, and configure local certs and machine folders.
docker-machine create --driver digitalocean --digitalocean-image ubuntu-20-04-x64 --digitalocean-size "s-2vcpu-2gb" --digitalocean-region=lon1 --digitalocean-access-token <access-token> droplet-name
A couple of months ago I started adding this flag; otherwise it was not able to create the droplet.
--engine-install-url "https://releases.rancher.com/install-docker/19.03.14.sh"
After creating the droplet, connecting to it is as easy as eval $(docker-machine env droplet-name)
I would really like to know what alternatives people use. According to DigitalOcean it requires a bunch of steps:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
There should be some other ways that I can create and connect to remote droplets. Please share any methods you are using for Docker operations on remote hosts.
I found that using docker context is the easiest solution for those who were using docker-machine. I couldn't manage to run docker-machine after upgrading my operating system to macOS Monterey.
The closest docker-machine experience is creating a context with
docker context create remote --docker "host=ssh://user@remotemachine"
The remote host is named 'remote' here.
You can switch between hosts by using docker context use remote
The best part of using docker-machine or docker context is that I don't need to copy my docker-compose files to the remote host. It provides a local terminal experience, which is very helpful when there are multiple hosts to deploy to.
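Putting it together, a typical session against a remote host could look like this (user and host name are placeholders):
docker context create remote --docker "host=ssh://user@remotemachine"
docker context use remote
docker compose up -d         # runs against the remote engine, using your local compose file
docker context use default   # switch back to the local engine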
For more information, you can see the official documentation below
https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/

kubectl version showing the wrong version number

I have downloaded the latest Kubernetes version from the official Kubernetes site and referenced it in the PATH above the reference for Docker, but it is still showing the version installed with Docker Desktop.
I understand that Docker comes with Kubernetes installed out of the box, but the Docker-bundled version '1.15.5' doesn't work correctly with my Minikube version, which is 'v1.9.2', and that is causing me problems.
Any suggestions on how to fix this issue? Should I remove the Kubernetes binary from C:\Program Files\Docker\Docker\resources\bin? I don't think that would be a good idea.
Can someone help me tackle this issue, along with some explanation of how the versions work with each other? Thanks.
This is happening because Windows always gives you the first command found in the PATH; both kubectl versions (Docker's and yours) are in the PATH, but the Docker path is referenced before your kubectl path.
How to solve this really depends on what you need. If you are not using the Docker-bundled Kubernetes you have two alternatives:
1 - Fix your PATH and make sure that your kubectl path is referenced before the Docker path.
2 - Replace Docker's kubectl with yours.
In either case, make sure you restart your PC after making these changes; the kubectl configuration will automatically be updated to point to the newer version the next time you run minikube start with a correct --kubernetes-version.
If you are using both from time to time, I would suggest that you create a script that changes your PATH according to your needs.
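To see which binary actually wins on the PATH and which version it reports, a quick check on Windows looks like this:
where kubectl
kubectl version --client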
According to the documentation you must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. Using the latest version of kubectl helps avoid unforeseen issues.

How to create a local development environment for Kubernetes?

Kubernetes seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such).
During development you want to be as close as possible to production environment with some important changes:
Deployed locally (or at least somewhere where you and only you can access)
Use the latest source code on page refresh (supposing it's a website; ideally page auto-refresh on local file save, which can be done if you mount the source code and use some stuff like Yeoman).
Similarly one may want a non-public environment to do continuous integration.
Does Kubernetes support such kind of development environment or is it something one has to build, hoping that during production it'll still work?
Update (2016-07-15)
With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development.
You can run Kubernetes locally via Docker. Once you have a node running you can launch a pod that has a simple web server and mounts a volume from your host machine. When you hit the web server it will read from the volume and if you've changed the file on your local disk it can serve the latest version.
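A minimal sketch of that idea (pod name, image and paths are placeholders), using a hostPath volume so the container serves files straight from a directory on the host:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dev-web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: src
      mountPath: /usr/share/nginx/html
  volumes:
  - name: src
    hostPath:
      path: /home/me/myapp/public
EOF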
We've been working on a tool to do this. The basic idea is that you have a remote Kubernetes cluster, effectively a staging environment, and then you run code locally and it gets proxied to the remote cluster. You get transparent network access, environment variables copied over, access to volumes... as close as feasible to the remote environment, but with your code running locally and under your full control.
So you can do live development, say. Docs at http://telepresence.io
The sort of "hot reload" is something we have plans to add, but is not as easy as it could be today. However, if you're feeling adventurous you can use rsync with docker exec, kubectl exec, or osc exec (all do the same thing roughly) to sync a local directory into a container whenever it changes. You can use rsync with kubectl or osc exec like so:
# rsync using osc as netcat
$ rsync -av -e 'osc exec -ip test -- /bin/bash' mylocalfolder/ /tmp/remote/folder
I've just started with Skaffold.
It's really useful for applying changes in the code automatically to a local cluster.
To deploy a local cluster, the best way is Minikube or just Docker for Mac and Windows; both include a Kubernetes interface.
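A minimal Skaffold setup is a single skaffold.yaml plus one command; a sketch under the assumption that your manifests live in k8s/ and the image is called my-app (check the schema version against your Skaffold install):
cat <<EOF > skaffold.yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
  - image: my-app
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
EOF
skaffold dev   # rebuilds the image and redeploys on every source change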
EDIT 2022: By now, there are obviously dozens of way to provision k8s, unlike 2015 when we started using it. kubeadm, microk8s, k3s, kube-spray, etc.
My advice: if your cluster can't fit on your workstation/laptop, rent a Hetzner server for 40 euro a month, and run WSL2 if on Windows.
Set up a k8s cluster on the remote machine (with any of the above; I prefer microk8s these days). Set up Docker and Telepresence in your local Linux/Mac/WSL2 environment. Install kubectl and connect it to the remote cluster.
Telepresence will let you replace a remote pod with a local Docker pod, with access to local files (hopefully the same git repo that's used to build the pod you're developing/replacing), and possibly nodemon (or another language-specific auto-source-code-reload system).
Write bash functions. I cannot stress this enough: this will save you hundreds of hours. If replacing the pod and starting to develop isn't one line / two words, you're not doing it well enough.
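For example, a sketch only (the service name, port and dev command are assumptions, and the intercept syntax is from Telepresence 2):
# one word to swap the remote pod for my local process
devusers() {
  telepresence intercept users-service --port 8080 -- npm run dev
}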
2016 answer below:
Another great starting point is this Vagrant setup, especially if your host OS is Windows. The obvious advantages are:
quick and painless setup
easy to destroy / recreate the machine
implicit limit on resources
ability to test horizontal scaling by creating multiple nodes
The disadvantages - you need lot of RAM, and VirtualBox is VirtualBox... for better or worse.
A mixed advantage/disadvantage is mapping files through NFS. In our setup, we created two sets of RC definitions - one that just downloads a docker image of our application servers; the other with 7 extra lines that set up file mapping from HostOS -> Vagrant -> VirtualBox -> CoreOS -> Kubernetes pod, overwriting the source code from the Docker image.
The downside of this is NFS file cache - with it, it's problematic, without it, it's problematically slow. Even setting mount_options: 'nolock,vers=3,udp,noac' doesn't get rid of caching problems completely, but it works most of the time. Some Gulp tasks ran in a container can take 5 minutes when they take 8 seconds on host OS. A good compromise seems to be mount_options: 'nolock,vers=3,udp,ac,hard,noatime,nodiratime,acregmin=2,acdirmin=5,acregmax=15,acdirmax=15'.
As for automatic code reload, that's language specific, but we're happy with Django's devserver for Python, and Nodemon for Node.js. For frontend projects, you can of course do a lot with something like gulp+browserSync+watch, but for many developers it's not difficult to serve from Apache and just do traditional hard refresh.
We keep 4 sets of YAML files for Kubernetes: dev, "devstable", stage, and prod. The differences between them are:
env variables explicitly setting the environment (dev/stage/prod)
number of replicas
devstable, stage, and prod use docker images
dev uses docker images and maps an NFS folder with source code over them.
It's very useful to create a lot of bash aliases and autocompletion - I can just type rec users and it will do kubectl delete -f ... ; kubectl create -f .... If I want the whole setup started, I type recfo, and it recreates a dozen services, pulling the latest docker images, importing the latest db dump from the staging env and cleaning up old Docker files to save space.
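A hedged sketch of what such a helper could look like (the rec name comes from the text above; the manifest path and naming convention are assumptions):
# recreate the dev resources for one service, e.g. `rec users`
rec() {
  kubectl delete -f "k8s/dev/$1.yaml"
  kubectl create -f "k8s/dev/$1.yaml"
}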
See https://github.com/kubernetes/kubernetes/issues/12278 for how to mount a volume from the host machine, the equivalent of:
docker run -v hostPath:ContainerPath
Having a nice local development feedback loop is a topic of rapid development in the Kubernetes ecosystem.
Breaking this question down, there are a few tools that I believe support this goal well.
Docker for Mac Kubernetes
Docker for Mac Kubernetes (Docker Desktop is the generic cross platform name) provides an excellent option for local development. For virtualization, it uses HyperKit which is built on the native Hypervisor framework in macOS instead of VirtualBox.
The Kubernetes feature was first released as beta on the edge channel in January 2018 and has come a long way since, becoming a certified Kubernetes in April 2018, and graduating to the stable channel in July 2018.
In my experience, it's much easier to work with than Minikube, particularly on macOS, and especially when it comes to issues like RBAC, Helm, hypervisor, private registry, etc.
Helm
As far as distributing your code and pulling updates locally, Helm is one of the most popular options. You can publish your applications via CI/CD as Helm charts (and also the underlying Docker images which they reference). Then you can pull these charts from your Helm chart registry locally and upgrade on your local cluster.
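The local update loop then boils down to a couple of Helm commands (repository and chart names are placeholders):
helm repo add my-charts https://charts.example.com
helm repo update
helm upgrade --install my-app my-charts/my-app --namespace dev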
Azure Draft
You can also use a tool like Azure Draft to do simple local deploys and generate basic Helm charts from common language templates, sort of like buildpacks, to automate that piece of the puzzle.
Skaffold
Skaffold is like Azure Draft but more mature, much broader in scope, and made by Google. It has a very pluggable architecture. I think in the future more people will use it for local app development for Kubernetes.
If you have used React, I think of Skaffold as "Create React App for Kubernetes".
Kompose or Compose on Kubernetes
Docker Compose, while unrelated to Kubernetes, is one alternative that some companies use to provide a simple, easy, and portable local development environment analogous to the Kubernetes environment that they run in production. However, going this route means diverging your production and local development setups.
Kompose is a Docker Compose to Kubernetes converter. This could be a useful path for someone already running their applications as collections of containers locally.
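Conversion is a single command, assuming a docker-compose.yml in the current directory; the names of the generated manifests depend on your Compose services:
kompose convert -f docker-compose.yml
# then apply whatever it generated, e.g.:
kubectl apply -f web-deployment.yaml -f web-service.yaml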
Compose on Kubernetes is a recently open sourced (December 2018) offering from Docker which allows deploying Docker Compose files directly to a Kubernetes cluster via a custom controller.
Kubespray is helpful for setting up local clusters. Mostly, I have used a Vagrant-based cluster on my local machine.
Kubespray configuration
You could tweak these variables to have the desired kubernetes version.
The disadvantage of using Minikube is that it spawns another virtual machine on top of your machine. Also, the latest Minikube version requires a minimum of 2 CPUs and 2 GB of RAM from your system, which makes it pretty heavy if your system does not have enough resources.
This is the reason I switched to microk8s for development on Kubernetes, and I love it. microk8s supports DNS, local-storage, dashboard, istio, ingress and many more - everything you need to test your microservices.
It is designed to be a fast and lightweight upstream Kubernetes installation isolated from your local environment. This isolation is achieved by packaging all the binaries for Kubernetes, Docker.io, iptables, and CNI in a single snap package.
A single node kubernetes cluster can be installed within a minute with a single command:
snap install microk8s --classic
Make sure your system doesn't have any docker or kubelet service running. Microk8s will install all the required services automatically.
Please have a look at the following link to enable other add-ons in microk8s.
https://github.com/ubuntu/microk8s
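For instance, the DNS and dashboard add-ons mentioned above can typically be enabled with:
microk8s.enable dns dashboard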
You can check the status using:
velotio@velotio-ThinkPad-E470:~/PycharmProjects/k8sClient$ microk8s.status
microk8s is running
addons:
ingress: disabled
dns: disabled
metrics-server: disabled
istio: disabled
gpu: disabled
storage: disabled
dashboard: disabled
registry: disabled
Have a look at https://github.com/okteto/okteto and Okteto Cloud.
The value proposition is to have the classical development experience of working locally, prior to Docker, where you can have hot reloads, incremental builds, debuggers... but with all your local changes immediately synchronized to a remote container. Remote containers give you access to the speed of the cloud, allow a new level of collaboration, and integrate development into a production-like environment. It also eliminates the burden of local installations.
As specified before by Robert, minikube is the way to go.
Here is a quick guide to get started with minikube. The general steps are:
Install minikube
Create a minikube cluster (in a virtual machine, which can be VirtualBox or Docker for Mac, or Hyper-V in the case of Windows)
Create a Docker image of your application (by using a Dockerfile)
Run the image by creating a Deployment
Create a service which exposes your application so that you can access it.
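End to end, those steps roughly translate to the following commands (image and app names are placeholders):
minikube start
docker build -t my-app:1.0 .
minikube image load my-app:1.0               # make the image visible to the cluster
kubectl create deployment my-app --image=my-app:1.0
kubectl expose deployment my-app --type=NodePort --port=8080
minikube service my-app                      # opens the exposed service in your browser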
Here is the way I did a local setup for Kubernetes on Windows 10:
Use Docker Desktop
Enable Kubernetes in the settings option of Docker Desktop
In Docker Desktop the default memory allocation is 2 GB, so increase the memory in order to use Kubernetes with Docker Desktop
Install kubectl as a client to talk to Kubernetes cluster
Run kubectl config get-contexts to list the available clusters
Run kubectl config use-context docker-desktop to use the Docker Desktop cluster
Build a docker image of your application
Write a YAML file (the declarative way to create your Deployment in Kubernetes) pointing to the image created in the step above
Expose a Service of type NodePort for each of your Deployments to make them available to the outside world
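A minimal sketch of the YAML from the last two steps (names, image and port are placeholders):
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080
EOF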
