Use this guide to install Kubernetes on a Vagrant cluster:
https://kubernetes.io/docs/getting-started-guides/kubeadm/
At (2/4) Initializing your master, I hit some errors:
[root@localhost ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
I checked the content of /proc/sys/net/bridge/bridge-nf-call-iptables; there is only a 0 in it.
At (3/4) Installing a pod network, I downloaded the kube-flannel file:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
and ran kubectl apply -f kube-flannel.yml, which gave this error:
[root@localhost ~]# kubectl apply -f kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
At this point, I don't know how to go on.
My Vagrantfile:
# Master Server
config.vm.define "master", primary: true do |master|
  master.vm.network :private_network, ip: "192.168.33.200"
  master.vm.network :forwarded_port, guest: 22, host: 1234, id: 'ssh'
end
You can set /proc/sys/net/bridge/bridge-nf-call-iptables by editing /etc/sysctl.conf. There you can add [1]
net.bridge.bridge-nf-call-iptables = 1
Then execute
sudo sysctl -p
And the changes will be applied. With this the pre-flight check should pass.
[1] http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf
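To confirm the change was applied, you can read the value back (a quick sanity check, not part of the original guide; both should print 1):
sysctl net.bridge.bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-iptables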
Update #2019/09/02
Sometimes modprobe br_netfilter is unreliable (you may need to redo it after a relogin), so use the following instead on a systemd system:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
systemctl restart systemd-modules-load.service
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
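To verify that the module actually loaded and that the service picked up the config (my own quick check, not from the original update):
lsmod | grep br_netfilter
systemctl status systemd-modules-load.service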
Yes, the accepted answer is right, but I ran into
cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
So I did
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
sudo sysctl -p
That solved it.
On Ubuntu 16.04 I just had to:
modprobe br_netfilter
The default value in /proc/sys/net/bridge/bridge-nf-call-iptables was then already 1.
Then I added br_netfilter to /etc/modules to load the module automatically on next boot.
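For reference, appending the module name can be done in one line (a sketch; /etc/modules takes one module name per line):
echo br_netfilter | sudo tee -a /etc/modules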
As mentioned in K8s docs - Installing kubeadm under the Letting iptables see bridged traffic section:
Make sure that the br_netfilter module is loaded. This can be done
by running lsmod | grep br_netfilter. To load it explicitly call
sudo modprobe br_netfilter.
As a requirement for your Linux Node's iptables to correctly see
bridged traffic, you should ensure
net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl
config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Regarding the preflight errors - you can see in Kubeadm Implementation details under the preflight-checks section:
Kubeadm executes a set of preflight checks before starting the init,
with the aim to verify preconditions and avoid common cluster startup
problems.
The following missing configurations will produce errors:
...
- if /proc/sys/net/bridge/bridge-nf-call-iptables file does not exist/does not contain 1
- if advertise address is ipv6 and /proc/sys/net/bridge/bridge-nf-call-ip6tables does not exist/does not contain 1
- if swap is on
...
The one-liner way:
sysctl net.bridge.bridge-nf-call-iptables=1
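Note that this only sets the value in the running kernel and will not survive a reboot. To persist it, you can write it to a sysctl.d drop-in as in the other answers (the file name k8s.conf is just a convention):
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system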
I tried to run this command to start minikube:
minikube start \
--extra-config=apiserver.Authorization.Mode=RBAC \
--extra-config=kubelet.cgroup-driver=systemd \
--driver=docker
And I also tried:
minikube start \
--extra-config=apiserver.Authorization.Mode=RBAC \
--driver=docker
But it's giving me errors.
How can I solve this problem?
Errors like:
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-84-generic\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Try starting minikube with --bootstrapper=kubeadm; this will enable RBAC for you:
# Start minikube with kubeadm (it's the default, so this flag is not mandatory)
minikube start --bootstrapper=kubeadm
# Create the default role binding [kube-system:default]
kubectl create clusterrolebinding \
add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
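You can then verify that the binding exists (a quick check, not part of the original answer):
kubectl get clusterrolebinding add-on-cluster-admin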
I'm getting an error when minikube start fails; I was trying to have Docker set iptables to false.
Below are my logs:
minikube v1.20.0 on Centos 7.6.1810 (amd64)
* Using the none driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...
* OS release is CentOS Linux 7 (Core)
* Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
stderr:
[WARNING Firewalld]: firewalld is active, please ensure ports [8443 10250] are open or your cluster may not function correctly
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-socat]: socat not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
The error you included states that you are missing bridge-nf-call-iptables.
bridge-nf-call-iptables is exported by br_netfilter.
What you need to do is issue the command
sudo modprobe br_netfilter
and then ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config (write it to a sysctl.d drop-in rather than overwriting /etc/sysctl.conf, which would discard your existing settings):
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
This should fix your problem.
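Since the fatal error said the file does not exist, you can confirm the proc entries were created before retrying minikube (my own quick check):
ls /proc/sys/net/bridge/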
During the installation of Kubernetes, an error is reported when I initialize the master node. I am using an ARM platform server and the operating system is CentOS 7.6 aarch64. Does Kubernetes support deploying master nodes on the ARM platform?
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Jun 30 22:53:04 master kubelet[54238]: W0630 22:53:04.188966 54238 pod_container_deletor.go:75] Container "51615bc1d926dcc56606bca9f452c178398bc08c78a2418a346209df28b95854" not found in pod's containers
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.189353 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.218672 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.236484 54238 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.112:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.238898 54238 certificate_manager.go:400] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://192.168.1.112:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.260520 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.289516 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.389666 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.436810 54238 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.1.112:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.489847 54238 kubelet.go:2248] node "master" not found
To start a Kubernetes cluster, make sure you have the minimum requirements for the Kubernetes platform.
If you want a Kubernetes cluster with low compute, we can discuss that separately.
You need:
Docker
A compute node with at least 4GB memory and 2 CPUs.
I will write the answer based on your node.
Docker
On each of your machines, install Docker. Version 19.03.11 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes.
Use the following commands to install Docker on your system:
Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE
yum update -y && yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.11 \
docker-ce-cli-19.03.11
Create /etc/docker
mkdir /etc/docker
Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
Restart Docker
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
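After the restart, you can confirm Docker is using the systemd cgroup driver, which avoids the IsDockerSystemdCheck warning seen earlier (a quick check, not part of the original steps):
docker info --format '{{.CgroupDriver}}'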
Kubernetes
As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
systemctl daemon-reload
systemctl restart kubelet
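At this point you can sanity-check the installed versions (optional; not in the original steps):
kubeadm version -o short
kubelet --version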
Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).
Master
Init kubernetes cluster (Running this on master node)
kubeadm init --pod-network-cidr 192.168.0.0/16
Note: I will use Calico here, so the CIDR is 192.168.0.0/16.
Move kube config to user directory (assume root)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
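To confirm kubectl can now reach the control plane (this should no longer produce the localhost:8080 connection error from the question):
kubectl cluster-info
kubectl get pods -n kube-system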
Worker Node
Join other nodes (Running below command from your worker node)
kubeadm join <IP_PUBLIC>:6443 --token <TOKEN> \
--discovery-token-ca-cert-hash sha256:<HASH>
Note: you will get this command in the output when you successfully init the master.
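If you lost that output, you can regenerate a join command on the master (standard kubeadm functionality, not shown in the original answer):
kubeadm token create --print-join-command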
Master Node
Applying calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify cluster
kubectl get nodes
Reference : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
KubeletNotReady
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady
message:docker: network plugin is not ready: cni config uninitialized
I don't know how to make the network plugin ready
When you run kubectl describe node <node_name>,
in the Conditions table the Ready type will contain this message if you did not initialize CNI. Proper initialization can be achieved by installing a network add-on. I will point you to the 2 most used: Weave and Flannel.
1) Weave
$ export kubever=$(kubectl version | base64 | tr -d '\n')
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
After executing those two commands you should see the node in status "Ready":
$ kubectl get nodes
You could also check the status:
$ kubectl get cs
2) Flannel
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
3) The Kubernetes documentation explains how to install other network add-ons. In this article each CNI provider has a short description.
In my case, updating systemd from 30.el7_3.9 to 67.el7_7.4 solved this.
I'm using Windows Subsystem for Linux (Debian stretch). Following the instructions on the Docker website, I installed docker-ce, but it cannot start. Here is the info:
$ sudo service docker start
grep: /etc/fstab: No such file or directory
[ ok ] Starting Docker: docker.
$ sudo service docker status
[FAIL] Docker is not running ... failed!
What should I do about /etc/fstab not being found?
To fix the fstab error:
touch /etc/fstab
If you run dockerd, it will give you this failure message:
INFO[2022-01-27T17:55:14.100489400+07:00] Loading containers: start.
WARN[2022-01-27T17:55:14.191666800+07:00] Running iptables --wait -t nat -L -n failed with message: `iptables v1.8.2 (nf_tables): CHAIN_ADD failed (No such file or directory): chain PREROUTING`, error: exit status 4
INFO[2022-01-27T17:55:14.493716300+07:00] stopping event stream following graceful shutdown error="<nil>" module=libcontainerd namespace=moby
INFO[2022-01-27T17:55:14.494906600+07:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2022-01-27T17:55:14.495048400+07:00] stopping healthcheck following graceful shutdown module=libcontainerd
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables --wait -t nat -N DOCKER: iptables v1.8.2 (nf_tables): CHAIN_ADD failed (No such file or directory): chain PREROUTING
(exit status 4)
That is a Debian iptables/nftables backend issue; fix it by switching to the legacy backend:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
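You can confirm the switch took effect; iptables should now report the legacy backend (a quick check, not part of the original answer):
iptables --version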
Now you can start the service again.
You can follow this to make it start on startup: https://askubuntu.com/a/1356147/138352
Edited:
If the iptables issue still persists, try setting the WSL version to 2; run this command from a Windows shell:
wsl --set-version <distribution name> 2
The distribution list can be found with the command wsl -l.
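To see which WSL version each distribution currently uses (standard wsl.exe flags):
wsl -l -v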
I was getting the same error. Apparently on my install of WSL with Debian, I didn't have an /etc/fstab file. Surprisingly, just creating the file via touch worked:
sudo touch /etc/fstab
Perhaps a good signal: https://learn.microsoft.com/en-us/windows/wsl/release-notes#build-17093
WSL now processes the /etc/fstab file during instance start [GH 2636].
For anybody stumbling across this years later like me: Docker doesn't work inside WSL 1.
But you can use Docker Desktop for Windows and WSL 2 to run native containers inside your Linux distro, and the install and config are quite painless: https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers