MicroK8s error when trying to create a local cluster - Kubeflow

I'm following the installation documentation below step by step to get MicroK8s installed and configured with Kubeflow, but I'm hitting the error shown further down:
https://charmed-kubeflow.io/docs/quickstart
To be sure I start from a clean state:
juju unregister -y microk8s-localhost
sudo snap remove microk8s --purge
sudo snap remove juju --purge
rm -rf ~/.local/share/juju
Ubuntu version:
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
(If I do the same with Ubuntu 20.04, it works fine.)
Then, I try to create a local cluster.
sudo snap install microk8s --classic --channel=1.22/stable
snap install juju --classic --channel=2.9/stable
microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49
microk8s status --wait-ready
juju bootstrap microk8s
The creation never ends:
Creating Juju controller "microk8s-localhost" on microk8s/localhost
Bootstrap to Kubernetes cluster identified as microk8s/localhost
Fetching Juju Dashboard 0.8.1
Creating k8s resources for controller "controller-microk8s-localhost"
.........<waiting>.....
I tried with a freshly installed Ubuntu VM and hit the same problem.
The --debug output:
...
17:17:44 INFO cmd bootstrap.go:395 Creating k8s resources for controller "controller-microk8s-localhost"
17:17:44 DEBUG juju.kubernetes.provider bootstrap.go:628 creating controller service:
&Service{ObjectMeta:{controller-service controller-microk8s-localhost 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app.kubernetes.io/managed-by:juju app.kubernetes.io/name:controller] map[controller.juju.is/id:d99735d9-21f7-435e-8a76-65d05ac60830] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:api-server,Protocol:,Port:17070,TargetPort:{0 17070 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/name: controller,},ClusterIP:,Type:ClusterIP,ExternalIPs:[],SessionAffinity:,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:nil,ClusterIPs:[],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
17:17:44 DEBUG juju.caas.kubernetes.provider.proxy setup.go:177 polling caas credential rbac secret, in 1 attempt, token for secret "controller-proxy" not found
17:17:46 DEBUG juju.kubernetes.provider configmap.go:84 updating configmap "controller-configmap"
17:17:47 DEBUG juju.kubernetes.provider configmap.go:84 updating configmap "controller-configmap"
17:17:48 DEBUG juju.kubernetes.provider bootstrap.go:1209 mongodb container args:
printf 'args="--dbpath=/var/lib/juju/db --sslPEMKeyFile=/var/lib/juju/server.pem --sslPEMKeyPassword=ignored --sslMode=requireSSL --port=37017 --journal --replSet=juju --quiet --oplogSize=1024 --auth --keyFile=/var/lib/juju/shared-secret --storageEngine=wiredTiger --bind_ip_all"\nipv6Disabled=$(sysctl net.ipv6.conf.all.disable_ipv6 -n)\nif [ $ipv6Disabled -eq 0 ]; then\n args="${args} --ipv6"\nfi\nexec mongod ${args}\n'>/root/mongo.sh && chmod a+x /root/mongo.sh && /root/mongo.sh
17:17:48 DEBUG juju.kubernetes.provider k8s.go:2008 selecting units "app.kubernetes.io/name=controller" to watch
17:17:48 DEBUG juju.kubernetes.provider.watcher k8swatcher.go:114 fire notify watcher for controller-0
17:17:48 DEBUG juju.kubernetes.provider.watcher k8swatcher.go:114 fire notify watcher for controller
17:17:49 DEBUG juju.kubernetes.provider.watcher k8swatcher.go:114 fire notify watcher for controller-0
17:17:50 DEBUG juju.kubernetes.provider.watcher k8swatcher.go:114 fire notify watcher for controller-0
.........<waiting>.....
and it never finishes.
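While it is waiting, I also try to see what the controller pod is doing (a diagnostic sketch on my side, not from the quickstart; the namespace and pod name are taken from the debug log above):
microk8s kubectl get pods -n controller-microk8s-localhost
microk8s kubectl describe pod controller-0 -n controller-microk8s-localhost
microk8s kubectl logs controller-0 -n controller-microk8s-localhost --all-containers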
Thanks

Related

Kubernetes Installation process guidance [closed]

During the installation of Kubernetes, an error is reported when I initialize the master node. I am using an ARM platform server and the operating system is CentOS 7.6 aarch64. Does Kubernetes support deploying master nodes on the ARM platform?
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Jun 30 22:53:04 master kubelet[54238]: W0630 22:53:04.188966 54238 pod_container_deletor.go:75] Container "51615bc1d926dcc56606bca9f452c178398bc08c78a2418a346209df28b95854" not found in pod's containers
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.189353 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.218672 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.236484 54238 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.112:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.238898 54238 certificate_manager.go:400] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://192.168.1.112:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.260520 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.289516 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.389666 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.436810 54238 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.1.112:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.489847 54238 kubelet.go:2248] node "master" not found
To start a Kubernetes cluster, make sure you have the minimum requirements for the Kubernetes platform. (If you want a Kubernetes cluster with low compute resources, we can discuss that separately.)
You need:
Docker
A compute node with at least 4 GB of memory and 2 CPUs.
The answer below assumes a node like that.
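As a quick sanity check for those numbers (a minimal sketch, not part of the original steps):
nproc      # should report at least 2 CPUs
free -h    # should report at least 4G of total memory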
Docker
On each of your machines, install Docker. Version 19.03.11 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes.
Use the following commands to install Docker on your system:
Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE
yum update -y && yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.11 \
docker-ce-cli-19.03.11
Create /etc/docker
mkdir /etc/docker
Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
Restart Docker
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
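Optionally (not part of the original steps), verify that Docker is running and picked up the systemd cgroup driver from daemon.json:
docker info | grep -i "cgroup driver"    # should print: Cgroup Driver: systemd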
Kubernetes
As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.
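For example, both checks combined into one line:
lsmod | grep br_netfilter || sudo modprobe br_netfilter    # load the module only if it is not already loaded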
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
systemctl daemon-reload
systemctl restart kubelet
Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).
Master
Init the Kubernetes cluster (run this on the master node)
kubeadm init --pod-network-cidr 192.168.0.0/16
Note: I will use Calico here, so the CIDR is 192.168.0.0/16.
Move kube config to user directory (assume root)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Worker Node
Join other nodes (run the command below from your worker node)
kubeadm join <IP_PUBLIC>:6443 --token <TOKEN> \
--discovery-token-ca-cert-hash sha256:<HASH>
Note: you will get this command in the output when you successfully initialize the master.
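If you lose that output, you can usually regenerate the join command on the master (the exact output depends on your kubeadm version):
kubeadm token create --print-join-command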
Master Node
Applying calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify cluster
kubectl get nodes
Reference : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Delete kubernetes cluster on docker-for-desktop OSX?

What is the equivalent of minikube delete in docker-for-desktop on OS X?
As I understand it, minikube creates a VM to host its Kubernetes cluster, but I do not understand how docker-for-desktop manages this on OS X.
Tearing down Kubernetes in Docker for OS X is quite an easy task.
Go to Preferences, open the Reset tab, and click Reset Kubernetes cluster.
All objects that have been created with kubectl before that will be deleted.
You can also reset the Docker VM image (Reset disk image) and all settings (Reset to factory defaults), or even uninstall Docker.
In recent Docker Edge versions for Mac (2.1.7) the Preferences design has changed. Now you can reset the Kubernetes cluster and other Docker aspects by switching to the bug pane in the top right of the Preferences window:
Note: You are able to reset the Kubernetes cluster only if it's enabled. If you uncheck the "Enable Kubernetes" checkbox, the "Reset Kubernetes cluster" button becomes inactive.
For convenience, "Reset Kubernetes cluster" is also present on the Kubernetes tab in the main Preferences pane:
To reset the Docker Desktop Kubernetes cluster from the command line, put the following content into a file (dd-reset.sh) and mark it executable (chmod a+x dd-reset.sh):
#!/bin/bash
dr='docker run -it --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i'
${dr} sh -c 'export PATH=$PATH:/containers/services/docker/rootfs/usr/bin:/containers/services/docker/rootfs/usr/local/bin:/var/lib/kube-binary-cache/ && \
if [ ! -e /var/run/docker.sock ] ; then ln -s /containers/services/docker/rootfs/var/run/docker.sock /var/run/docker.sock ; fi && \
kube-reset.sh'
sleep 3
echo "cluster resetted. restarting docker-desktop..."
osascript -e 'quit app "Docker"'
open --background -a Docker
echo "docker-desktop started. Wait 3-5 mins for kubernetes to start."
Explanation:
This method uses internal scripts from the Docker Desktop VM. To make it work, some preparation of the user environment is required.
I wasn't able to start the Kubernetes cluster using the kube-start.sh script from inside the VM, so I used macOS commands to restart the Docker application instead.
This method works even if your Kubernetes cluster is not enabled in Docker preferences at the moment, but it's required to enable Kubernetes at least once to use the script.
It was tested on Docker Edge for MacOS v2.2.2.0 (43066)
There is no guarantee that it will be compatible with earlier or later versions.
This version of Docker uses kubeadm to initialize the Kubernetes cluster. The scripts are located in the folder /containers/services/docker/rootfs/usr/bin:
kube-pull.sh (brings kubernetes binaries to VM)
kube-reset.sh (runs kube-stop.sh and does kubeadm reset, plus some rm cleanup)
kube-restart.sh (runs kube-stop.sh and kube-start.sh)
kube-start.sh (runs kube-pull.sh and kubelet.sh)
kube-stop.sh (kills kubelet and kube-apiserver processes, and all k8s containers)
kubeadm-init.sh (initializes Kubernetes cluster)
kubelet.sh (runs kubeadm-init.sh and starts kubelet binary)
Cluster configuration is located in the file /containers/services/docker/lower/etc/kubeadm/kubeadm.yaml
Resources used:
Restart Docker from command line
Use nsenter in a privileged container
It's really under the hood in the code. Docker for Mac uses these components: HyperKit, VPNKit and DataKit.
Kubernetes runs in the same HyperKit VM created for Docker, and the kube-apiserver is exposed.
You can connect to the VM with this:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
Then you can see all the Kubernetes processes in the VM:
linuxkit-025000000001:~# ps -Af | grep kube
1251 root 0:00 /usr/bin/logwrite -n kubelet /usr/bin/kubelet.sh
1288 root 0:51 kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cgroups-per-qos=false --enforce-node-allocatable= --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cadvisor-port=0 --kube-reserved-cgroup=podruntime --system-reserved-cgroup=systemreserved --cgroup-root=kubepods --hostname-override=docker-for-desktop --fail-swap-on=false
3564 root 0:26 kube-scheduler --address=127.0.0.1 --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.conf
3616 root 1:45 kube-controller-manager --cluster-signing-key-file=/run/config/pki/ca.key --address=127.0.0.1 --root-ca-file=/run/config/pki/ca.crt --service-account-private-key-file=/run/config/pki/sa.key --kubeconfig=/etc/kubernetes/controller-manager.conf --cluster-signing-cert-file=/run/config/pki/ca.crt --leader-elect=true --use-service-account-credentials=true --controllers=*,bootstrapsigner,tokencleaner
3644 root 1:59 kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group --service-account-key-file=/run/config/pki/sa.pub --secure-port=6443 --insecure-port=8080 --insecure-bind-address=0.0.0.0 --requestheader-client-ca-file=/run/config/pki/front-proxy-ca.crt --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-address=192.168.65.3 --service-cluster-ip-range=10.96.0.0/12 --tls-private-key-file=/run/config/pki/apiserver.key --enable-bootstrap-token-auth=true --requestheader-allowed-names=front-proxy-client --tls-cert-file=/run/config/pki/apiserver.crt --proxy-client-key-file=/run/config/pki/front-proxy-client.key --proxy-client-cert-file=/run/config/pki/front-proxy-client.crt --allow-privileged=true --client-ca-file=/run/config/pki/ca.crt --kubelet-client-certificate=/run/config/pki/apiserver-kubelet-client.crt --kubelet-client-key=/run/config/pki/apiserver-kubelet-client.key --authorization-mode=Node,RBAC --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/run/config/pki/etcd/ca.crt --etcd-certfile=/run/config/pki/apiserver-etcd-client.crt --etcd-keyfile=/run/config/pki/apiserver-etcd-client.key
3966 root 0:01 /kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2
4190 root 0:05 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
4216 65534 0:03 /sidecar --v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
4606 root 0:00 /compose-controller --kubeconfig --reconciliation-interval 30s
4905 root 0:01 /api-server --kubeconfig --authentication-kubeconfig --authorization-kubeconfig --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/etc/docker-compose/etcd/ca.crt --etcd-certfile=/etc/docker-compose/etcd/client.crt --etcd-keyfile=/etc/docker-compose/etcd/client.key --secure-port=9443 --tls-ca-file=/etc/docker-compose/tls/ca.crt --tls-cert-file=/etc/docker-compose/tls/server.crt --tls-private-key-file=/etc/docker-compose/tls/server.key
So if you uncheck the "Enable Kubernetes" box (it is unclear from the docs what command it runs under the hood):
You can see that the processes are removed:
linuxkit-025000000001:~# [ 6616.856404] cni0: port 2(veth5f6c8b28) entered disabled state
[ 6616.860520] device veth5f6c8b28 left promiscuous mode
[ 6616.861125] cni0: port 2(veth5f6c8b28) entered disabled state
linuxkit-025000000001:~#
linuxkit-025000000001:~# [ 6626.816763] cni0: port 1(veth87e77142) entered disabled state
[ 6626.822748] device veth87e77142 left promiscuous mode
[ 6626.823329] cni0: port 1(veth87e77142) entered disabled state
linuxkit-025000000001:~# ps -Af | grep kube
linuxkit-025000000001:~#
On Docker Desktop version 3.5.2 (engine version 20.10.7), the reset button has been moved into the Docker preferences.
You can get there by following the steps below:
Click on the Docker icon in the menu bar and choose 'Preferences'.
Go to the Kubernetes tab.
Click on the Reset Kubernetes Cluster button. This is the red button.
This will delete all pods and reset Kubernetes. You can run the docker ps command in a terminal to verify that there are no containers running.
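For example (a small check; the k8s_ prefix is how the Docker-backed kubelet names Kubernetes containers):
docker ps                          # no containers should be running
docker ps -a --filter name=k8s_    # no leftover Kubernetes containers either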
Just delete the VM that holds the Kubernetes resources.
$ minikube delete

Is it possible to delay Jenkins from connecting to a slave until the cloud-init script has run?

I have a cloud-init script:
#cloud-config
package_upgrade: true
packages:
  - openjdk-8-jdk
  - apt-transport-https
  - git
  - jq
groups:
  - docker
users:
  - default
  - name: jenkins
    groups: docker
    homedir: /var/lib/jenkins
    lock_passwd: true
    ssh_authorized_keys:
      - ssh-rsa xyz
This is given to the Jenkins EC2 plugin when starting an Ubuntu 18.04 AMI.
When Jenkins tries to connect to the instance, the logs show:
INFO: Verifying: java -fullversion
sh: 1: java: not found
Nov 01, 2018 8:22:10 PM null
INFO: Installing: sudo yum install -y java-1.8.0-openjdk.x86_64
sudo: no tty present and no askpass program specified
Nov 01, 2018 8:22:10 PM null
WARNING: Failed to install: sudo yum install -y java-1.8.0-openjdk.x86_64
sh: 1: java: not found
ERROR: Unable to launch the agent for Ubuntu 18.04 (i-xxx)
java.io.EOFException: unexpected stream termination
If I try to connect to the agent manually again after some time has elapsed (2-3 minutes), all is fine:
Agent successfully connected and online
Should the cloud-init script have run before the SSH connection?
I have never had this trouble when using Amazon Linux AMIs, where I install Java 8 in the same way (via a cloud-init script). Is this something specific to the way Amazon Linux runs cloud-init scripts vs. Ubuntu?
In the end I decided it was easier to install Java and create a new AMI, to avoid this issue entirely.
I think that perhaps my expectation that cloud-init would run fully before connecting might be incorrect, mainly because of this comment in the documentation:
Allow enough time for the instance to launch and execute the directives in your user data, and then check to see that your directives have completed the tasks you intended.
Perhaps one approach to help solve this might be to stop sshd while things install and start it again when everything is done; Jenkins would then hopefully connect only once everything is ready.
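A rough, untested sketch of that idea as cloud-config (bootcmd runs early in boot, before packages are installed, while runcmd runs in the final stage after them; on Ubuntu the service is called ssh, and depending on unit ordering you may need to mask the service in bootcmd rather than just stop it):
#cloud-config
bootcmd:
  - [ systemctl, stop, ssh ]     # keep SSH down while packages install
packages:
  - openjdk-8-jdk
runcmd:
  - [ systemctl, start, ssh ]    # let Jenkins connect once Java is available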

Can't install Kubernetes on Vagrant

I used this guide to install Kubernetes on a Vagrant cluster:
https://kubernetes.io/docs/getting-started-guides/kubeadm/
At step (2/4) Initializing your master, I got some errors:
[root@localhost ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
I checked the content of /proc/sys/net/bridge/bridge-nf-call-iptables; there is only a 0 in it.
At step (3/4) Installing a pod network, I downloaded the kube-flannel file:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
And ran kubectl apply -f kube-flannel.yml, which gave an error:
[root@localhost ~]# kubectl apply -f kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
At this point, I don't know how to go on.
My Vagrantfile:
# Master Server
config.vm.define "master", primary: true do |master|
master.vm.network :private_network, ip: "192.168.33.200"
master.vm.network :forwarded_port, guest: 22, host: 1234, id: 'ssh'
end
You can set /proc/sys/net/bridge/bridge-nf-call-iptables by editing /etc/sysctl.conf. There you can add [1]:
net.bridge.bridge-nf-call-iptables = 1
Then execute
sudo sysctl -p
And the changes will be applied. With this the pre-flight check should pass.
[1] http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf
Update 2019/09/02
Sometimes modprobe br_netfilter is unreliable and you may need to redo it after a re-login, so use the following instead on a systemd system:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
systemctl restart systemd-modules-load.service
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
YES, the accepted answer is right, but I was faced with:
cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
So I did
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
sudo sysctl -p
That solved it.
On Ubuntu 16.04 I just had to:
modprobe br_netfilter
The default value in /proc/sys/net/bridge/bridge-nf-call-iptables is already 1.
Then I added br_netfilter to /etc/modules to load the module automatically on the next boot.
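For example, to append it in one line:
echo br_netfilter | sudo tee -a /etc/modules    # persist the module across reboots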
As mentioned in K8s docs - Installing kubeadm under the Letting iptables see bridged traffic section:
Make sure that the br_netfilter module is loaded. This can be done
by running lsmod | grep br_netfilter. To load it explicitly call
sudo modprobe br_netfilter.
As a requirement for your Linux Node's iptables to correctly see
bridged traffic, you should ensure
net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl
config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Regarding the preflight errors, you can see in Kubeadm Implementation details under the preflight-checks section:
Kubeadm executes a set of preflight checks before starting the init,
with the aim to verify preconditions and avoid common cluster startup
problems..
The following missing configurations will produce errors:
...
if /proc/sys/net/bridge/bridge-nf-call-iptables file does not exist/does not contain 1
if advertise address is ipv6 and /proc/sys/net/bridge/bridge-nf-call-ip6tables does not exist/does not contain 1
if swap is on
...
The one-liner way:
sysctl net.bridge.bridge-nf-call-iptables=1

Cannot install docker on OS X Version 10.9.5

I first tried installing VirtualBox by downloading "VirtualBox 5.0 for OS X hosts (amd64)" from the VirtualBox download page, and then installed boot2docker and docker via brew.
The first apparent issue appeared when creating the boot2docker-vm image:
$ boot2docker init
2015/07/27 21:38:13 Creating VM boot2docker-vm...
2015/07/27 21:38:13 Apply interim patch to VM boot2docker-vm (https://www.virtualbox.org/ticket/12748)
2015/07/27 21:38:13 Failed to modify VM "boot2docker-vm": exit status 1
Launching the VirtualBox Manager application, I can see the boot2docker-vm machine "Running", but in the log I see something like this, which appears to indicate that the boot2docker-vm machine failed to boot:
00:00:04.169546 Guest Log: BIOS: Boot : bseqnr=1, bootseq=4231
00:00:04.169711 Guest Log: BIOS: Boot from Floppy 0 failed
00:00:04.170101 Guest Log: BIOS: Boot : bseqnr=2, bootseq=0423
00:00:04.170490 Guest Log: BIOS: CDROM boot failure code : 0002
00:00:04.170800 Guest Log: BIOS: Boot from CD-ROM failed
00:00:04.171190 Guest Log: BIOS: Boot : bseqnr=3, bootseq=0042
00:00:04.171795 Guest Log: int13_harddisk: function 02, unmapped device for ELDL=80
00:00:04.172304 Guest Log: BIOS: Boot from Hard Disk 0 failed
00:00:04.172706 Guest Log: BIOS: Boot : bseqnr=4, bootseq=0004
00:00:04.172991 Guest Log: BIOS: Booting from LAN...
00:00:04.191271 Display::handleDisplayResize(): uScreenId = 0, pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0, flags=0x1
00:00:06.446949 Guest Log: BIOS: Boot from LAN failed
00:00:06.448852 Guest Log: Could not read from the boot medium! System halted.
I uninstalled everything and then tried downloading and installing from the boot2docker download page, which installs VirtualBox, boot2docker, and docker all in one go. But I still see the same problem as above (the boot2docker-vm machine fails to boot).
I'm reluctant to make big changes to the OS X version on my laptop, since my hardware is old. But I'll try the installation sequence on a more modern machine and see if it works there.
Has anyone managed to make docker work on OS X Version 10.9.5?
EDIT (adding additional information which comments suggest might be relevant):
My machine has:
2.26GHz Intel Core 2 Duo
4 GB of RAM (1067 MHz DDR3)
NVIDIA GeForce 9400M 256 MB
OS X 10.9.5
I installed everything as the primary User (not root) on my system.
And the versions of everything which I installed are:
VirtualBox 4.3.30 r101610
boot2docker version 1.7.1
docker version 1.7.1
This issue on GitHub might be of help (the latest version of VirtualBox 4.3.x works fine in the issue described). Though I would suggest using docker-machine. Below are the steps (installation):
$ docker-machine create --driver virtualbox dev
$ eval "$(docker-machine env dev)"
And then you can use docker commands as usual.
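For example, to confirm everything is wired up (a quick check, not from the original answer):
docker-machine ls              # the dev machine should show as Running and active
docker run --rm hello-world    # runs a test container against the new machine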
Some of the comments in the GitHub issue suggested by nash_ag and this Stack Overflow question pointed me in the right direction.
This is the sequence of steps I used to get VirtualBox, boot2docker, docker, and docker-machine working in my environment (where $USERNAME is my primary OS X User, who installed VirtualBox), with several wrong turns elided, and most output omitted:
$ rm -rf /Users/$USERNAME/VirtualBox\ VMs/
$ boot2docker delete
(ran VirtualBox Uninstall script from my desktop)
...
$ brew tap caskroom/cask
...
$ brew update
...
$ brew install brew-cask
...
$ brew cask install virtualbox
...
$ VBoxManage -v
5.0.0r101573
$ boot2docker -v
Boot2Docker-cli version: v1.7.1
Git commit: 8fdc6f5
$ VBoxManage list vms
(nothing)
$ boot2docker init -v
...
$ boot2docker up
...
$ eval "$(boot2docker shellinit)"
(writes .pem files)
$ brew install docker-machine
...
$ docker-machine -v
docker-machine version 0.3.1 (HEAD)
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
$ docker-machine create --driver virtualbox dev
...
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
dev virtualbox Running tcp://192.168.99.100:2376
$ VBoxManage list vms
"boot2docker-vm" {99d5c5c1-e7cc-49bf-93c7-b0cbf626d62c}
"dev" {341fd11e-86f9-46ca-89e6-39ee78458a4b}
$ eval "$(docker-machine env dev)"
$ docker run -d -p 8000:80 nginx
...
$ curl $(docker-machine ip dev):8000
<!DOCTYPE html>
...
At this point things appear to be working well enough for me to use the "standard" docs/instructions for docker and docker-machine, so my original problem is solved.
