kubeadm docker flannel integration - docker

Before kubeadm, I used these steps to pass the flannel IP and MTU values to Docker.
Step 1: stop Docker and Flannel.
Step 2: start Flannel and check its status.
Step 3: update the Docker startup script like this:
source /run/flannel/subnet.env
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
Step 4: start Docker and check its status.
How are these steps done with kubeadm? I see the Docker daemon process start first, then Flannel starts as a container; I am trying to understand the integration process.
Thanks
SR

Here are the steps I took to set up flannel in Kubernetes v1.7.3.
Install flannel
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
You will see the flannel pod created, but it falls into a "CrashLoopBackOff" state and restarts forever.
After flannel is installed by kubeadm, the subnet info is recorded in the file /run/flannel/subnet.env.
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Set up these environment variables for Docker:
mkdir -p /usr/lib/systemd/system/docker.service.d
sudo cat << EOF > /usr/lib/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker
EOF
sudo cat << EOF > /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.244.0.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.244.0.1/24 --ip-masq=false --mtu=1450"
EOF
Note: do set ip-masq to false for Docker, otherwise kube-dns will not work properly.
Reload the service configuration, then the changes will take effect.
sudo systemctl daemon-reload
Voila, everything works after that.
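If you prefer not to hardcode the subnet and MTU, here is a minimal sketch (not part of the original answer) that regenerates /run/flannel/docker from whatever flannel actually allocated in /run/flannel/subnet.env:
# read the subnet/MTU that flannel wrote, then render the Docker options file
source /run/flannel/subnet.env
cat << EOF | sudo tee /run/flannel/docker
DOCKER_OPT_BIP="--bip=${FLANNEL_SUBNET}"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=${FLANNEL_MTU}"
DOCKER_NETWORK_OPTIONS=" --bip=${FLANNEL_SUBNET} --ip-masq=false --mtu=${FLANNEL_MTU}"
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker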

How to install kubernetes

How do I install Kubernetes, and how do I add worker nodes to the Kubernetes master?
Let's consider 1 Kubernetes master and 2 worker nodes.
While launching AWS EC2 instances, make sure that you open all the ports in the related security groups.
All EC2 instances need to be on the same VPC and in the same availability zone.
For the master node, the minimum requirement is t2.large.
For the worker nodes, the minimum requirement is t2.micro.
Log in via PuTTY/terminal to connect to your EC2 instances.
Run the following commands on both the master and the worker nodes.
Become the root user. Install Docker and start the Docker service. Also, enable the Docker service so that it starts again when the system restarts.
sudo su
yum install docker -y
systemctl enable docker && systemctl start docker
Create proper yum repo files so that we can use yum commands to install the components of Kubernetes.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
The sysctl command is used to modify kernel parameters at runtime. Kubernetes needs bridged traffic to pass through the kernel's iptables/ip6tables, so we need a few more modifications. This also includes disabling SELinux enforcement.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
setenforce 0
Install kubelet, kubeadm and kubectl; start kubelet daemon
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
Configure Docker to use the systemd cgroup driver (the same driver the kubelet expects):
vi /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Reload the systemd configuration, then restart Docker and the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
Only on the Master Node:
On the master node initialize the cluster.
kubeadm init --ignore-preflight-errors all
To make the kubeconfig setting permanent, add export KUBECONFIG=/etc/kubernetes/admin.conf after the export PATH line in .bash_profile:
ls -al
vi .bash_profile
export KUBECONFIG=/etc/kubernetes/admin.conf
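For illustration only, the end of .bash_profile might then look something like this (the PATH export shown is assumed to already exist):
# ~/.bash_profile (tail)
export PATH
# point kubectl at the admin kubeconfig generated by kubeadm init
export KUBECONFIG=/etc/kubernetes/admin.conf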
On all worker nodes:
Copy kubeadm join command from the output of kubeadm init on the master node.
<kubeadm join command copied from master node>
# kubeadm join 172.31.37.128:6443 --token sttk5b.vco0jw5bkkf1toa4 \
--discovery-token-ca-cert-hash sha256:d77b5f865c1e30b73ea4dd7ea458f79f56da94f9de9c8d7a26b226d94fd0c49e
On the master node:
Create weave-net.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get nodes
That's it :)
The easiest way to start Kubernetes on Amazon Linux AMI (or any other Linux AMI) is to use Microk8s (a lightweight distribution of Kubernetes).
The following steps will help you get started with Kubernetes on EC2 instance:
Install Microk8s on EC2 instance
sudo snap install microk8s --classic
Check the status while Kubernetes starts
microk8s status --wait-ready
Turn on the services you want
microk8s enable dashboard dns registry istio
Start using Kubernetes
microk8s kubectl get all --all-namespaces
Access the Kubernetes dashboard
microk8s dashboard-proxy
Start and stop Kubernetes to save battery
microk8s start and microk8s stop
This way you can install a local version of Kubernetes with Microk8s. You can also follow this tutorial for detailed instructions on the above steps.

Connection refused error on worker node in kubernetes

I'm setting up a 2 node cluster in kubernetes. 1 master node and 1 slave node.
After setting up the master node, I did the installation steps for docker, kubeadm, kubelet and kubectl on the worker node and then ran the join command. On the master node, I see 2 nodes in Ready state (master and worker), but when I try to run any kubectl command on the worker node, I get the connection refused error shown below. I do not see any admin.conf, and nothing is set in .kube/config. Are these files also needed on the worker node? If so, how do I get them? How do I resolve the error below? Appreciate your help.
root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes# kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes#
kubectl is configured and working by default on the master. It requires the kube-apiserver pod and ~/.kube/config.
Worker nodes don't run the kube-apiserver; what we want is to reuse the master's configuration so that kubectl on the worker talks to the master's API server.
To achieve this, copy the ~/.kube/config file from the master to ~/.kube/config on the worker. Here ~ is the home directory of the user running kubectl on each machine (which may of course differ between worker and master).
Once that is done, you can use the kubectl command from the worker node exactly as you do from the master node.
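For example, a minimal sketch of that copy (the hostname k8s-master is hypothetical, and it assumes the same user exists on both machines):
# run on the worker node: pull the kubeconfig from the master over SSH
mkdir -p ~/.kube
scp k8s-master:~/.kube/config ~/.kube/config
kubectl get nodes   # should now reach the master's API server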
Yes, these files are needed. Copy them to the respective .kube/config location on the worker nodes.
This is expected behavior even when using kubectl on the master node as a non-root account; by default this config file is stored for the root account in /etc/kubernetes/admin.conf.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively on the master, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Optionally, to control your cluster from machines other than the control-plane node:
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Note:
The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited. The KUBECONFIG environment variable is not required. If the KUBECONFIG environment variable doesn't exist, kubectl uses the default kubeconfig file, $HOME/.kube/config.
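For example (the file paths here are hypothetical), a colon-delimited list on Linux/macOS looks like this:
# kubectl merges every file listed in KUBECONFIG
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/admin.conf
kubectl config view   # shows the merged configuration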
I tried many of the solutions which just copy the /etc/kubernetes/admin.conf to ~/.kube/config. But none worked for me.
My OS is Ubuntu, and the issue was resolved by removing, purging and re-installing the following:
sudo dpkg -r kubeadm kubectl
sudo dpkg -P kubeadm kubectl
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" (downloading kubectl again, this actually worked)
kubectl get nodes
NAME STATUS ROLES AGE VERSION
mymaster Ready control-plane,master 31h v1.20.4
myworker Ready 31h v1.20.4
Davidxxx's solution worked for me.
In my case, I found out that there is a file that exists in the worker nodes at the following path:
/etc/kubernetes/kubelet.conf
You can copy this to ~/.kube/config and it works as well. I tested it myself.
If I try the following commands,
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
I'm getting this error:
[vikram#node2 ~]$ kubectl version
Error in configuration:
* unable to read client-cert /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
* unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
Then this works, which is really a workaround and not a fix.
sudo kubectl --kubeconfig /etc/kubernetes/kubelet.conf version
I was able to fix it by copying kubelet-client-current.pem from /var/lib/kubelet/pki/ to a location inside $HOME and modifying $HOME/.kube/config to reflect the new path of the certs. Is this normal?
I came across the same issue. I found that my 3 VMs were sharing the same IP (since I was using a NAT network on VirtualBox), so I switched to a bridged network, got 3 different IPs for the 3 VMs, and then followed the installation guide to set up the k8s cluster successfully.
Amit

How to modify the `nodePort` range when using Docker Desktop?

I tried to open nodePort 80/443, but it failed because it was outside the default nodePort range.
The solution is to add the --service-node-port-range option to the static pod kube-apiserver-docker-desktop. But how can I modify the static pod using Docker Desktop on Windows? I tried to edit this pod directly but failed.
kubectl edit pod kube-apiserver-docker-desktop -n kube-system
You need to run a privileged docker container:
$ docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
Then edit the Kubernetes configuration here:
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the command line param in the list with the ports you need:
- --service-node-port-range=80-36000
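For orientation, the relevant part of the manifest would then look roughly like this (the other flags vary per cluster and are abbreviated here):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=...               # existing flags, left untouched
    - --service-node-port-range=80-36000    # the newly added flag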
This StackOverflow question explains how to proceed:
Location of Kubernetes config directory with Docker Desktop on Windows

Kubernetes pods are running but docker ps does not give any output

I have been trying to run a Tomcat container on port 5000 on a cluster using Kubernetes. But when I use kubectl create -f tmocat_pod.yaml, it creates the pod, yet docker ps does not give any output. Why is that?
Ideally, when a pod is running, it means a container is running inside that pod, and that container is defined in the YAML file.
Why does docker ps not show any containers running?
I am following the below URLs:
http://containertutorials.com/get_started_kubernetes/k8s_example.html
https://blog.jetstack.io/blog/k8s-getting-started-part2/
How can I get it running and see Tomcat in the browser on port 5000?
The docker containers should be running on the virtual machine. Since I only installed minikube on my local machine, I confirmed that the following gives what you want:
minikube ssh
...
docker ps
Just try the kubernetes equivalent of minikube ssh.
In Kubernetes, Docker containers run inside Pods, Pods run on Nodes, and Nodes run on your machine (minikube/GKE).
When you run kubectl create -f tmocat_pod.yaml you basically create a pod and it runs the docker container on that pod.
The node that holds this pod, is basically a virtual instance, if you could 'SSH' into that node, docker ps would work.
What you need is:
kubectl get pods <-- It is like docker ps, it shows you all the pods (think of it as docker containers) running
kubectl get nodes <-- view the host machines for your pods.
kubectl describe pods <pod-name> <-- view system logs for your pods.
kubectl logs <pod-name> <-- Will give you logs for the specific pod.
You can connect your terminal to the Docker daemon that is running inside your node/VM.
With this command in your terminal: eval $(minikube docker-env)
This only configures your current terminal window.
Maybe you are not using Docker as the container runtime.
I faced the same issue, and I had forgotten that I switched to gVisor with runsc as the handler.
cat /etc/default/kubelet
KUBELET_EXTRA_ARGS="--container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock"
If so, you need to use the runsc command instead of docker.
I'm not sure where you are running the docker ps command, but if you are trying to do that from your host machine and the k8s cluster is located elsewhere, i.e. your machine is not a node in the cluster, docker ps will not return anything since the containers are not tied to your docker host.
Assuming your pod is running, kubectl get pods will display all of your running pods. To check further details, you can use kubectl describe pod <yourpodname> to check the status of each container (in great detail). To get the pod names, you should be able to use tab-complete with the kubernetes cli. Also, if your pod contains multiple containers, you will need to give the container name as well, which you can use tab-complete for after you've selected your pod.
The output will look similar to:
kubectl describe pod comparison-api-dply-reborn-6ffb88b46b-s2mtx
Name: comparison-api-dply-reborn-6ffb88b46b-s2mtx
Namespace: default
Node: aks-nodepool1-99697518-0/10.240.0.5
Start Time: Fri, 20 Apr 2018 14:08:21 -0400
Labels: app=comparison-pod-reborn
pod-template-hash=2996446026
...
Status: Running
IP: *.*.*.*
Controlled By: ReplicaSet/comparison-api-dply-reborn-6ffb88b46b
Containers:
rabbit-mq:
...
Port: 5672/TCP
State: Running
...
If your containers and pods are already running, then you shouldn't need to troubleshoot them too much. To make them accessible from the Public Internet, take a look at Services (https://kubernetes.io/docs/concepts/services-networking/service/) to make your API's IP address fixed and easily reachable.
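As an illustration only (the labels, ports, and names below are assumptions, not taken from the question), a NodePort Service for a Tomcat pod labelled app: tomcat and listening on port 5000 could look like this:
# tomcat-service.yaml -- hypothetical names and ports
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  type: NodePort            # exposes the port on every node's IP
  selector:
    app: tomcat             # must match the pod's labels
  ports:
  - port: 5000              # cluster-internal port
    targetPort: 5000        # container port inside the pod
    nodePort: 30500         # must be within the default 30000-32767 range
Apply it with kubectl apply -f tomcat-service.yaml and then reach Tomcat on <node-ip>:30500.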
Have you tried "docker ps -a" to see if the container is dead? If it is there, you can see its logs with "docker logs" and maybe this gives you a hint.
If your pod is running successfully, and you are looking for the container on the node where the pod is scheduled, the issue could be that Kubernetes is using a different container runtime.
Example
root@renjith-laptop:/home/renjith/raspbery-k8s# kubectl exec -it nginx-8586cf59-h92ct bash
root@nginx-8586cf59-h92ct:/# exit
exit
root@renjith-laptop:/home/renjith/raspbery-k8s# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-8586cf59-h92ct 1/1 Running 0 47s 10.20.0.3 renjith-laptop
root@renjith-laptop:/home/renjith/raspbery-k8s# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@renjith-laptop:/home/renjith/raspbery-k8s#
Here I am able to exec into the pod, and I am on the same node where the pod is scheduled, but docker ps doesn't show the container. In my case kubelet is using a different container runtime; one of the arguments to the kubelet service is --container-runtime-endpoint=unix:///var/run/cri-containerd.sock
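In that situation you can usually still list the containers with crictl, pointed at the same CRI socket the kubelet uses (the socket path below is the one from this answer; adjust it for your node):
# crictl talks to the CRI runtime directly, so it works where docker ps does not
sudo crictl --runtime-endpoint unix:///var/run/cri-containerd.sock ps     # containers
sudo crictl --runtime-endpoint unix:///var/run/cri-containerd.sock pods   # pod sandboxes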
From the Kubernetes documentation, to get the container images running on your system:
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
Then you get back something like:
2 registry.k8s.io/coredns/coredns:v1.9.3
1 registry.k8s.io/etcd:3.5.4-0
1 registry.k8s.io/kube-apiserver:v1.25.1
1 registry.k8s.io/kube-controller-manager:v1.25.1
3 registry.k8s.io/kube-proxy:v1.25.1
1 registry.k8s.io/kube-scheduler:v1.25.1

"Running Kubernetes Locally via Docker" Guide is not working at all for MacOS, ssh command just hanging

I am following the guide at http://kubernetes.io/docs/getting-started-guides/docker/ to start using Kubernetes on macOS. Is this guide valid?
When I am doing this step:
docker-machine ssh `docker-machine active` -N -L 8080:localhost:8080
The command hangs, with no response at all.
Looking at docker ps -l, I have
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b1e95e26f46d gcr.io/google_containers/hyperkube-amd64:v1.2.3 "/hyperkube apiserver" About an hour ago Up About an hour k8s_apiserver.c08c1df_k8s-master-127.0.0.1_default_d95a6048198f747c5fcb74ee23f1f25c_d0c6d2fc
So it means Kubernetes is running.
I run this command:
docker-machine ssh `docker-machine active` -L 8080:localhost:8080
I can log in to the docker machine, then exit, and run kubectl get nodes again: it hangs, no response.
Anything wrong here?
If this step can not pass, how can I use Kubernetes?
"docker-machine ssh docker-machine active -N -L 8080:localhost:8080" sets up a ssh tunnel. Similar to ssh tunnel, you can run that command in the background by passing the -f option. More useful tips here
I'd recommend running the ssh command in a separate terminal, so that it will be easy to bring down the tunnel.
As long as the above command is running, kubectl should work.
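A minimal sketch of the backgrounded variant (same port mapping as the guide):
# -f backgrounds the tunnel, -N skips running a remote command,
# -L forwards local port 8080 to the API server inside the machine
docker-machine ssh "$(docker-machine active)" -f -N -L 8080:localhost:8080
# with the tunnel up, point kubectl at the forwarded port
kubectl -s http://localhost:8080 get nodes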
