How to install Kubernetes and how to add worker nodes to the Kubernetes master
Let's consider one Kubernetes master (K8-Cluster) and two worker nodes.
While launching the AWS EC2 instances, make sure that you open all the ports in the related security groups.
All EC2 instances need to be in the same VPC and the same availability zone.
For the K8-Cluster (master) the minimum requirement is t2.large.
For the worker nodes the minimum requirement is t2.micro.
Log in via PuTTY or a terminal to connect to your EC2 instances.
Run the following commands on both the K8-Cluster (master) and the worker nodes.
Become the root user. Install Docker and start the Docker service. Also enable the Docker service so that it starts again after a system restart.
sudo su
yum install docker -y
systemctl enable docker && systemctl start docker
Create proper yum repo files so that we can use yum commands to install the components of Kubernetes.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
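Optionally, before installing anything, you can confirm that yum now sees the new repo (a quick sanity check):
yum repolist enabled | grep -i kubernetes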
The sysctl command is used to modify kernel parameters at runtime. Kubernetes needs bridged traffic to be visible to the kernel's iptables/ip6tables, so we need a few more modifications. This also includes putting SELinux into permissive mode.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
setenforce 0
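Note that setenforce 0 only lasts until the next reboot. If you also want SELinux to stay permissive across restarts (an optional, commonly paired step, sketched here), update /etc/selinux/config:
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config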
Install kubelet, kubeadm and kubectl; start kubelet daemon
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
Configure Docker to use the systemd cgroup driver (so it matches the kubelet's cgroup driver) by editing /etc/docker/daemon.json:
vi /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Reload the systemd daemon, then restart Docker and the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
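You can quickly check that Docker picked up the systemd cgroup driver after the restart:
docker info | grep -i cgroup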
Only on the master node:
Initialize the cluster.
kubeadm init --ignore-preflight-errors all
To make the kubeconfig setting permanent, add export KUBECONFIG=/etc/kubernetes/admin.conf after the export PATH line in .bash_profile:
ls -al
vi .bash_profile
export KUBECONFIG=/etc/kubernetes/admin.conf
On all worker nodes:
Copy kubeadm join command from the output of kubeadm init on the master node.
<kubeadm join command copied from master node>
# kubeadm join 172.31.37.128:6443 --token sttk5b.vco0jw5bkkf1toa4 \
--discovery-token-ca-cert-hash sha256:d77b5f865c1e30b73ea4dd7ea458f79f56da94f9de9c8d7a26b226d94fd0c49e
On the master node:
Deploy the Weave Net pod network:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get nodes
That's it :)
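Optionally, as a quick smoke test on the new cluster (the nginx image here is just an illustration):
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide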
The easiest way to start Kubernetes on Amazon Linux AMI (or any other Linux AMI) is to use Microk8s (a lightweight distribution of Kubernetes).
The following steps will help you get started with Kubernetes on EC2 instance:
Install Microk8s on EC2 instance
sudo snap install microk8s --classic
Check the status while Kubernetes starts
microk8s status --wait-ready
Turn on the services you want
microk8s enable dashboard dns registry istio
Start using Kubernetes
microk8s kubectl get all --all-namespaces
Access the Kubernetes dashboard
microk8s dashboard-proxy
Start and stop Kubernetes to save battery
microk8s start and microk8s stop
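As a quick smoke test you could also deploy a sample workload (the nginx image is only an illustration):
microk8s kubectl create deployment nginx --image=nginx
microk8s kubectl get pods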
This way you can install a local version of Kubernetes with Microk8s. You can also follow this tutorial for detailed instructions on the above steps.
Related
I'm setting up Docker Engine on my local machine using Minikube. There are two tutorials I've considered, with slight differences between them. I'd love to understand the difference. Can anyone clarify whether these commands would have any different outcome?
From this blog post, which I found first:
# Install hyperkit and minikube
brew install hyperkit
brew install minikube
# Install Docker CLI
brew install docker
brew install docker-compose
# Start minikube
minikube start
# Tell Docker CLI to talk to minikube's VM
eval $(minikube docker-env)
# Save IP to a hostname
echo "`minikube ip` docker.local" | sudo tee -a /etc/hosts > /dev/null
# Test
docker run hello-world
Or from this tutorial (on the minikube website, which I'm inclined to believe is authoritative):
# Install the Docker CLI
brew install docker
# Start minikube with a VM driver and `docker` container runtime if not already running.
minikube start --container-runtime=docker --vm=true
# Use the `minikube docker-env` command to point your terminal's Docker CLI to the Docker instance inside minikube.
eval $(minikube -p <profile> docker-env)
Context: I'm on macOS Ventura 13.0 (22A380)
Note: This is a more general question related to the specific one here.
As elaborated by Bijendra, both tutorials are the same with very minimal differences. The echo "`minikube ip` docker.local" | sudo tee -a /etc/hosts > /dev/null command fetches the minikube IP address and adds an entry to your /etc/hosts file; by doing this you can reach your machine by hostname instead of using the IP address every time.
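For example, after that command /etc/hosts would contain an extra line like the one below (the IP is illustrative; yours comes from minikube ip), and you could then use the hostname directly:
192.168.64.2 docker.local
ping docker.local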
Since you say you are a beginner, I hope the links below might help you. Happy learning.
[1]https://www.kubernetes.io/docs/tutorials/hello-minikube
[2]https://www.devopscube.com/kubernetes-minikube-tutorial
[3]https://www.youtube.com/watch?v=E2pP1MOfo3g
I am trying to set up minikube in a VM with Ubuntu Desktop 20.04 LTS installed, using the docker driver.
I've followed the steps here, and also taken into consideration the limitations for the docker driver (reported here), that have to do with runtime security options. And when I try to start minikube the error I get is : Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key.
This is what I have done to have my brand new VM with minikube installed.
Install docker
Add my user to the docker group, otherwise minikube start would fail because dockerd runs as root (aka rootless mode in Docker terminology).
Install kubectl (that is not necessary, but I opted for this instead of the embedded kubectl in minikube)
Install minikube
When I start minikube, this is what I get:
ubuntuDesktop:~$ minikube start
minikube v1.16.0 on Ubuntu 20.04
Using the docker driver based on user configuration
Starting control plane node minikube in cluster minikube
Creating docker container (CPUs=2, Memory=4500MB) ...
Stopping node "minikube" ...
Powering off "minikube" via SSH ...
Deleting "minikube" in docker ...
StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset051825440 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset051825440: no such file or directory
: exit status 1
Creating docker container (CPUs=2, Memory=4500MB) ...
Failed to start docker container. Running "minikube delete" may fix it: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
If the above advice does not help, please let us know:
https://github.com/kubernetes/minikube/issues/new/choose
I suspect that the error has to do with the security-settings issues of the docker driver, but this seems like a dog chasing its tail: if I don't use rootless mode in Docker and attempt to start minikube with sudo (so that Docker can also start with a privileged user), then I get this:
ubuntuDesktop:~$ sudo minikube start
[sudo] password for alberto:
minikube v1.16.0 on Ubuntu 20.04
Automatically selected the docker driver. Other choices: virtualbox, none
The "docker" driver should not be used with root privileges.
If you are running minikube within a VM, consider using --driver=none:
https://minikube.sigs.k8s.io/docs/reference/drivers/none/
Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
So either I am missing something, or minikube doesn't work at all with the docker driver, which I doubt.
Here is my environment info:
ubuntuDesktop:~$ docker version
Client:
Version: 19.03.11
API version: 1.40
Go version: go1.13.12
Git commit: dd360c7
Built: Mon Jun 8 20:23:26 2020
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 19.03.11
API version: 1.40 (minimum version 1.12)
Go version: go1.13.12
Git commit: 77e06fd
Built: Mon Jun 8 20:24:59 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit:
docker-init:
Version: 0.18.0
GitCommit: fec3683
ubuntuDesktop:~$ minikube version
minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1-dirty
If someone has minikube working on Ubuntu 20.04 and could share their versions and driver, I would appreciate it. With the info on the minikube and Docker sites I don't know what else to check to make this work.
Solution:
As I mentioned in my comment you may just need to run:
docker system prune
then:
minikube delete
and finally:
minikube start --driver=docker
This should help.
Explanation:
Although, as I already mentioned in my comment, it's difficult to say what the issue was in your specific case, such a situation may happen as a consequence of a previous unsuccessful attempt to run your Minikube instance.
It also happens sometimes when a different driver is used and Minikube runs as a VM; basically deleting such a VM may help. Usually running minikube delete && minikube start is enough.
In this case, when --driver=docker is used, your Minikube instance is configured as a container in your Docker runtime, but apart from the container itself, other things like networking and storage are also configured.
The docker system prune command removes all unused containers, networks, images (both dangling and unreferenced), and optionally volumes. So what we can say for sure is that it was one of the above.
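For example, you can list the pieces minikube creates in Docker (the names assume the default minikube profile):
docker ps -a --filter name=minikube
docker network ls --filter name=minikube
docker volume ls --filter name=minikube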
Judging by the exact error message:
Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
I guess it could simply be that clearing some cached data helped in your case, removing broken references to non-existing files. The above message explains quite clearly what couldn't be done, namely docker couldn't copy a public ssh key to the destination minikube:/home/docker/.ssh/authorized_keys because the source file /tmp/tmpf-memory-asset544814591, which it attempted to copy from, simply didn't exist. So it's actually very simple to say what happened, but to be able to tell why it happened might require diving a bit deeper into both Docker and Minikube internals and analyzing step by step how a Minikube instance is provisioned when using --driver=docker.
It's a good point that you may try to analyze your docker logs, but I seriously doubt that you will find there the exact reason why the non-existing temporary file /tmp/tmpf-memory-asset544814591 was referenced or why it didn't exist.
minikube start --force --driver=docker fixed it for me
The issue is that the docker driver should not be used with root privileges, and by default the Docker daemon always runs as the root user. To use Docker as a non-root user, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
Run the following commands to fix this issue
Create the docker group if it does not exist
sudo groupadd docker
Add your user to the docker group
sudo usermod -aG docker [user]
Activate the changes to the group
newgrp docker
Start the minikube cluster
minikube start
This worked for me
minikube start --driver=docker --container-runtime=containerd
If you use a Linux desktop OS with Docker and minikube already installed, just run
sudo usermod -aG docker $USER
and restart your computer.
It worked for me.
I was running into the same issue when I attempted to install Minikube on an Ubuntu 20.04 system.
The "docker system prune" didn't help in my case, but I figured out the cause for my issue was that /var was mounted with the nosuid option and I had to remove that and remount /var. The minikube cluster initialization then worked.
I might be too ignorant but I didn't find that info stated as a requirement.
Restarting my Mac helped me.
I was getting the error below earlier:
Exiting due to DRV_DOCKER_NOT_RUNNING: Found docker, but the docker service isn't running. Try restarting the docker service.
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo systemctl enable docker
systemctl status docker
sudo systemctl start/stop docker
sudo groupadd docker
sudo usermod -aG docker user_name --- to add the user to docker group.
newgrp docker -- to activate the grp
minikube start or minikube start --driver=docker --- to start minikube
On my Raspberry Pi this problem was resolved with:
sudo usermod -aG docker $USER && newgrp docker
Try the following:
minikube delete
Then try to delete all Docker images with names like k8s... and minikube:
docker rmi <image id> <image id2> <image id3>
Finally:
minikube start
On my end just a docker system prune did the job (Ubuntu).
I had a few configurations I did not want to lose on my minikube profile, and it recreated the container accordingly and booted fine.
So it is something to try first, before deleting the minikube profile.
It's worth checking whether it's running in Docker Desktop on a Mac. If it is, then run the kubectl command; if that returns the list of commands, you're good to go.
I'm setting up a 2 node cluster in kubernetes. 1 master node and 1 slave node.
After setting up the master node I did the installation steps for docker, kubeadm, kubelet and kubectl on the worker node and then ran the join command. On the master node, I see 2 nodes in Ready state (master and worker), but when I try to run any kubectl command on the worker node, I get the connection refused error below. I do not see any admin.conf and nothing is set in .kube/config. Are these files also needed on the worker node? If so, how do I get them? How do I resolve the error below? Appreciate your help.
root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes# kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes#
root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
kubectl is by default configured and working on the master. It requires a running kube-apiserver and a ~/.kube/config file.
The worker nodes don't run the kube-apiserver; what we want is to use the master's configuration to reach it.
To achieve that, copy the ~/.kube/config file from the master to ~/.kube/config on the worker. ~ is the home of the user executing kubectl on the worker and on the master (which may of course be different).
Once that is done, you can use the kubectl command from the worker node exactly as you do from the master node.
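A minimal sketch of that copy, assuming SSH access from the worker to the master (the user and host names are placeholders):
# run on the worker node
mkdir -p ~/.kube
scp <master-user>@<master-ip>:~/.kube/config ~/.kube/config
kubectl get nodes   # should now work from the worker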
Yes, these files are needed. Copy them into the respective ~/.kube/config on the worker nodes.
This is expected behavior even when using kubectl on the master node as a non-root account; by default this config file is stored for the root account in /etc/kubernetes/admin.conf:
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively on the master, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Optionally, to control your cluster from machines other than the control-plane node:
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Note:
The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited. The KUBECONFIG environment variable is not required. If the KUBECONFIG environment variable doesn't exist, kubectl uses the default kubeconfig file, $HOME/.kube/config.
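For example, on Linux you could point kubectl at two kubeconfig files at once (the paths are illustrative):
export KUBECONFIG=$HOME/.kube/config:$HOME/admin.conf
kubectl config get-contexts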
I tried many of the solutions that just copy /etc/kubernetes/admin.conf to ~/.kube/config, but none worked for me.
My OS is Ubuntu, and the issue was resolved by removing, purging and re-installing the following:
sudo dpkg -r kubeadm kubectl
sudo dpkg -P kubeadm kubectl
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" (downloading kubectl again, this actually worked)
kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
mymaster   Ready    control-plane,master   31h   v1.20.4
myworker   Ready    <none>                 31h   v1.20.4
Davidxxx's solution worked for me.
In my case, I found out that there is a file that exists in the worker nodes at the following path:
/etc/kubernetes/kubelet.conf
You can copy this to ~/.kube/config and it works as well. I tested it myself.
If I try the following commands,
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
getting this error
[vikram#node2 ~]$ kubectl version
Error in configuration:
* unable to read client-cert /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
* unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
Then this works, which is really a workaround and not a fix:
sudo kubectl --kubeconfig /etc/kubernetes/kubelet.conf version
I was able to fix it by copying kubelet-client-current.pem from /var/lib/kubelet/pki/ to a location inside $HOME and modifying $HOME/.kube/config to reflect the new path of the certs. Is this normal?
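For reference, a rough sketch of that workaround (all paths are assumed; adjust to your setup):
mkdir -p $HOME/.kube/pki
sudo cp /var/lib/kubelet/pki/kubelet-client-current.pem $HOME/.kube/pki/
sudo chown $(id -u):$(id -g) $HOME/.kube/pki/kubelet-client-current.pem
# then edit $HOME/.kube/config so client-certificate and client-key
# point to $HOME/.kube/pki/kubelet-client-current.pem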
I came across the same issue - I found that my 3 VMs were sharing the same IP (since I was using a NAT network on VirtualBox), so I switched to a bridged network to have 3 different IPs for the 3 VMs and then followed the installation guide for a successful installation of the k8s cluster.
Amit
I have recently installed WSL2 and installed Ubuntu from the Microsoft Store. When I run Docker using
sudo service docker start, I get the message below:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
I followed the steps as per this answer and did the below:
sudo groupadd docker
sudo usermod -aG docker $(whoami)
But I still can't start Docker. When checking the Docker logs, I could see the following:
CONNECTING" module=grpc Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.6.1: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
(exit status 3)
I have tried a lot of steps based on the error below:
can't initialize iptables table `nat': Table does not exist (do you need to insmod?) Perhaps iptables or your kernel needs to be upgraded. (exit status 3)
But starting the terminal as administrator worked. Even though you run
sudo service docker start
the terminal should be launched as Admin.
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
from https://forums.docker.com/t/failing-to-start-dockerd-failed-to-create-nat-chain-docker/78269
I might be late, but I faced a similar problem and the solution was completely different.
I am posting it here in case someone gets a similar issue.
Problem I faced
I set up WSL and Docker on my new machine.
I could not run dockerd in WSL. As TheGameiswar suggested, I could start dockerd if I ran the terminal as Admin, but I still could not run any image.
The root cause
By default the WSL distro is version 1 (WSL 1) and Docker requires WSL 2.
Solution
Set the default wsl version to 2
wsl --set-default-version 2
Set the installed distro to WSL 2
wsl --set-version Ubuntu-20.04 2
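You can confirm the distro is now on WSL 2 afterwards (run from PowerShell or CMD):
wsl -l -v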
There is a great guide here which gives some up to date instructions and some prerequisites on using WSL 2 and the new docker desktop.
Before kubeadm, I used these steps to pass the flannel IP and MTU values to Docker.
Step 1: stop Docker and Flannel
Step 2: start Flannel and check its status;
Step 3: update the Docker startup script like this:
source /run/flannel/subnet.env
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
Step 4: start Docker and check its status.
How are these steps done with kubeadm? I see the Docker daemon process start first and then flannel starts as a container; I am trying to understand the integration process.
Thanks
SR
Here are the steps I took to set up flannel in Kubernetes v1.7.3.
Install flannel
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
You will see the flannel pod created, but it falls into a "CrashLoopBackOff" state and restarts forever.
After flannel is installed by Kubeadm, the subnet info will be recorded in file /run/flannel/subnet.env.
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Setup these environment variables for docker
mkdir -p /usr/lib/systemd/system/docker.service.d
sudo cat << EOF > /usr/lib/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker
EOF
sudo cat << EOF > /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.244.0.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.244.0.1/24 --ip-masq=false --mtu=1450"
EOF
Note: do set ip-masq as false for docker, otherwise kube-dns would not work well.
Reload the service configuration and restart Docker so the changes take effect.
sudo systemctl daemon-reload
sudo systemctl restart docker
Voila, everything works after that.
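If you want to double-check that Docker actually picked up the flannel values after the restart, one way (assuming the default docker0 bridge) is:
ip addr show docker0
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'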