I am trying to create a Kubernetes cluster using the kubeadm tool. For this I installed the supported Docker version as specified here.
I could also install kubeadm successfully, and I initialized the cluster with the command below:
sudo kubeadm init --pod-network-cidr=10.244.0.0/14 --apiserver-advertise-address=172.16.0.11
and I got the message to use kubeadm join to join the cluster as shown below
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.0.11:6443 --token ptdibx.r2uu0s772n6fqubc \
--discovery-token-ca-cert-hash sha256:f3e36a8e82fb8166e0faf407235f12e256daf87d0a6d0193394f4ce31b50255c
I used Flannel for networking:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
After that, when I run kubectl to check the pod/node status, it fails:
$ sudo kubectl get nodes
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
$ sudo kubectl get pods --all-namespaces
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
$ echo $KUBECONFIG
/etc/kubernetes/admin.conf:/home/ltestzaman/.kube/config
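Note that this is the value in my regular shell only; sudo starts with a mostly clean environment, so (assuming sudo's default env_reset behaviour) kubectl run under sudo probably does not see it at all:
$ sudo sh -c 'echo "KUBECONFIG=$KUBECONFIG"'   # typically prints just "KUBECONFIG=", because sudo drops the variable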
The Docker and Kubernetes versions are as follows:
$ sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2",
GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean",
BuildDate:"2020-04-16T11:54:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
$ sudo docker version
Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 08:10:07 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.0-ce
How can I make the cluster work?
Output of admin.conf is as follows:
$ sudo cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpN
QjRYRFRJd01EUXhPREV6TlRnek1sb1hEVE13TURReE5qRXpOVGd6TWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjdvCnhERGVP
UWd4TGsyWE05K0pxc0hYRkRPWHB2YXYxdXNyeXNYMldOZzlOOVFaeDVucFA2UTRpYkQrZDFjWFcyL0oKY0ZXUjhBSDliaG5hZlhLRUhiUVZWT0R2UTcwbFRmNXVtVlQ4Qk5ZUjRQTmlCWjFxVDNMNnduWlYrekl6a0ZKMwp0
eVVNK0prUy80K2dMazI3b01HMFZ4Rnpjd1ozREMxWEFqRXVxb3FrYVF5UGUzMk9XZmZ2N082TjhxeWNCNkdnClNxbWxNWldBMk1DL0J1cFpZWXRYNkUyYUtNUloxZjkzRlpCaFdYNG9DYjVQSGdSUEdoNTFiNERKZExoYlk4
aWMKdVRSa0EyTi95UDVrTlRIMW5pSTU3bTlUY2phcDZpV0p3dFhsdlpOTUpCYmhQS1VjTEFhZG1tTHFtWTNMTmhiaApGZ2orK0s4T3hXVk5KYWVuQnI4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0Ex
VWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHS2VwQURtZTEva0orZWpob3p4RXVIdXFwQTYKT3dkK3VNQlNPMWYzTTBmSzkxQmhWYkxWakZZeEUwSjVqc1BLNzNJM3cxRU5rb2p2UGdnc0pV
NHBjNnoyeGdsVgpCQ0tESWhWSEVPOVlzRVNpdERnd2g4QUNyQitpeEc4YjBlbnFXTzhBVjZ6dGNESGtJUXlLdDAwNmgxNUV1bi9YCmg0ZUdBMDQrRmNTZVltZndSWHpMVmFFS3F2UHZZWVdkTHBJTktWRFNHZ3J3U3cvbnU5
K2g1U09Ddms1YncwbEYKODhZNnlTaHk3U1B6amRNUHdRcks5cmhWY1ZXK1VvS3d6SE80aUZHdWpmNDR0WHRydTY4L1NGVm5uVnRHWkYyKwo2WmJYeE81Z3I2c1NBQU9NK0E1RmtkQlNmcXZDdmwvUzZFQk04V2czaGNjOUZL
cEFCV0tadHNoRlMxOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://172.16.0.11:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJS2RLRGs4MUpNKzh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp
1WlhSbGN6QWVGdzB5TURBME1UZ3hNelU0TXpKYUZ3MHlNVEEwTVRneE16VTRNelphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUl
CSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW44elI0RlVVM2F1QjluVGkKeWxJV1ZpZ3JkV1dHSXY0bmhsRnZuQU1mWVJyVklrMGN6eTZPTmZBQzNrb01tZ3ZPQnA5MmpsWmlvYXpJUGg1aAovaUR
xalE3dzN4cFhUN1QxT1kySy9mVyt1S1NRUVI3VUx1bjM4MTBoY1ZRSm5NZmV4UGJsczY2R3RPeE9WL2RQCm1tcEEyUFlzL0lwWWtLUEhqNnNvb0NXU1JEMUZIeG1SdWFhYXhpL0hYQXdJODZEN01uWS90KzZJQVIyKzZRM0s
KY2pPRFdEWlRpbHYyMXBCWFBadW9FTndoZ0s2bWhKUU5VRmc5VmVFNEN4NEpEK2FYbmFSUW0zMndid29oYXk1OAo3L0FnUjRoMzNXTjNIQ1hPVGUrK2Z4elhDRnhwT1NUYm13Nkw1c1RucFkzS2JFRXE5ZXJyNnorT2x5ZVl
GMGMyCkZCV3J4UUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCVDVKckdCS1NxS1VkclNYNUEyQVZHNmFZSHl
1TkNkeWp1cQpIVzQrMU5HdWZSczJWZW1kaGZZN2VORDJrSnJXNldHUnYyeWlBbDRQUGYzYURVbGpFYm9jVmx0RjZHTXhRRVU2CkJtaEJUWGMrVndaSXRzVnRPNTUzMHBDSmZwamJrRFZDZWFFUlBYK2RDd3hiTnh0ZWpacGZ
XT28zNGxTSGQ3TFYKRDc5UHAzYW1qQXpyamFqZE50QkdJeHFDc3lZWE9Rd1BVL2Nva1RJVHRHVWxLTVV5aUdOQk5rQ3NPc3RiZHI2RApnQVRuREg5YWdNck9CR2xVaUlJN0Qvck9rU3IzU2QvWnprSGdMM1c4a3V5dXFUWWp
wazNyNEFtU3NXS1M4UUdNCjZ6bHUwRk4rYnVJbGRvMnBLNmhBZlBnSTBjcDZWQWk1WW5lQVVGQ2EyL2pJeXI3cXRiND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBbjh6UjRGVVUzYXVCOW5UaXlsSVdWaWdyZFdXR0l2NG5obEZ2bkFNZllSclZJazBjCnp5Nk9OZkFDM2tvTW1
ndk9CcDkyamxaaW9heklQaDVoL2lEcWpRN3czeHBYVDdUMU9ZMksvZlcrdUtTUVFSN1UKTHVuMzgxMGhjVlFKbk1mZXhQYmxzNjZHdE94T1YvZFBtbXBBMlBZcy9JcFlrS1BIajZzb29DV1NSRDFGSHhtUgp1YWFheGkvSFh
Bd0k4NkQ3TW5ZL3QrNklBUjIrNlEzS2NqT0RXRFpUaWx2MjFwQlhQWnVvRU53aGdLNm1oSlFOClVGZzlWZUU0Q3g0SkQrYVhuYVJRbTMyd2J3b2hheTU4Ny9BZ1I0aDMzV04zSENYT1RlKytmeHpYQ0Z4cE9TVGIKbXc2TDV
zVG5wWTNLYkVFcTllcnI2eitPbHllWUYwYzJGQldyeFFJREFRQUJBb0lCQVFDQXhjRHJFaVQ2Yk5jUwpFQ2NoK3Z4YytZbnIxS0EvV3FmbktZRFRMQUVCYzJvRmRqYWREbHN6US9KTHgwaFlhdUxmbTJraVVxS3d2bGV2CkZ
6VElZU1loL2NSRlJTak81bmdtcE5VNHlldWpSNW1ub0h4RVFlNjVnbmNNcURnR3kxbk5SMWpiYnV6R3B4YUsKOUpTRlR0SnJCQlpFZkFmYXB1Q04rZE9IR2ovQUZJbWt2ZXhSckwyTXdIem0zelJkMG5UdkRyOUh1dy9IMjE
1RAprNXBHZjluV1ZsNnZxSGZFYVF0S0NNemY2WE5MdEFjcEJMcmNwSExwWEFObVNMWTAvcFFnV0s5eVpkbVA5b0xCCjhvU1J0eFRsZlU0V1RLdERpNlozK0tTSytidnF4MDRGZTJYb2RlVUM3eDN1d3lwamszOXZjSG55UkM
2Tmhlem4KTExJcnVEbVJBb0dCQU1VbG8zRkpVTUJsczYrdHZoOEpuUjJqN2V6bU9SNllhRmhqUHVuUFhkeTZURFVxOFM2aQprSTZDcG9FZEFkeUE4ejhrdU01ZlVFOENyOStFZ05DT3lHdGVEOFBaV2FCYzUxMit6OXpuMXF
3SVg3QjY1M01lCk5hS2Y1Z3FYbllnMmdna2plek1lbkhQTHFRLzZDVjZSYm93Q3lFSHlrV0FXS3I4cndwYXNNRXQ3QW9HQkFNK0IKRGZZRU50Vmk5T3VJdFNDK0pHMHY1TXpkUU9JT1VaZWZRZTBOK0lmdWwrYnRuOEhNNGJ
aZmRUNmtvcFl0WmMzMQptakhPNDZ5NHJzcmEwb1VwalFscEc5VGtVWDRkOW9zRHoydlZlWjBQRlB2em53R1JOUGxzaTF1cUZHRkdyY0dTClJibzZiTjhKMmZqV0hGb2ppekhVb3Rkb1BNbW1qL0duM0RmVEw2Ry9Bb0dBQk8
rNVZQZlovc2ROSllQN003bkEKNW1JWmJnb2h1Z05rOFhtaXRLWU5tcDVMbERVOERzZmhTTUE2dlJibDJnaWNqcU16d1c4ZmlxcnRqbkk1NjM3Mwp3OEI2TXBRNXEwdElPOCt3VXI2M1lGMWhVQUR6MUswWCtMZDZRaCtqd1N
wa1BTaFhTR05tMVh0dkEwaG1mYWkwCmxPcm82c1hSSUEvT0NEVm5UUENJMFFzQ2dZQWZ3M0dQcHpWOWxKaEpOYlFFUHhiMFg5QjJTNmdTOG40cTU0WC8KODVPSHUwNGxXMXFKSUFPdEZ3K3JkeWdzTk9iUWtEZjZSK0V5SDF
NaVdqeS9oWXpCVkFXZW9SU1lhWjNEeWVHRwpjRGNkZzZHQ3I5ZzNOVE1XdXpiWjRUOGRaT1JVTFQvZk1mSlljZm1iemFxcFlhZDlDVCtrR2FDMGZYcXJVemF5CmxQRkZvUUtCZ0E4ck5IeG1DaGhyT1ExZ1hET2FLcFJUUWN
xdzd2Nk1PVEcvQ0lOSkRiWmdEU3g3dWVnVHFJVmsKd3lOQ0o2Nk9kQ3VOaGpHWlJld2l2anJwNDhsL2plM1lsak03S2k1RGF6NnFMVllqZkFDSm41L0FqdENSZFRxTApYQ3B1REFuTU5HMlFib0JuSU1UaXNBaXJlTXduS1Z
BZjZUMzM4Tjg5ZEo2Wi93QUlOdWNYCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
I am not sure why most of the entries are null, as shown below:
$ sudo kubectl config view
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Most probably the kubeconfig file is not set up properly. Run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
This should also work:
sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
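To double-check which configuration kubectl actually ends up using, you can also inspect the merged view and the active context (a generic sanity check; the context name below is the one kubeadm generates by default):
$ kubectl config view
$ kubectl config current-context   # should print something like kubernetes-admin@kubernetes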
I changed the KUBECONFIG environment variable to /home/ltestzaman/.kube/config and it works:
$ echo $KUBECONFIG
/home/ltestzaman/.kube/config
$ kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
kubemaster-001 Ready master 117m v1.18.2
Or you need to pass --kubeconfig, as identified by @Arghya Sadhu:
$ sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
NAME STATUS ROLES AGE VERSION
kubemaster-001 Ready master 120m v1.18.2
You have to start the Kubernetes cluster, either with minikube start or, if you are connecting to a cloud service, by making sure Kubernetes has been started there.
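For example, with minikube (a minimal sketch, not specific to any provider):
minikube start
kubectl cluster-info   # should report a running control plane once the cluster is up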
Related
I created an Ubuntu instance on gcloud and installed minikube and all the required dependencies in it.
Now I can curl from the node's terminal ("curl http://127.0.0.1:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/") and I get the HTML response back.
But I want to access this URL from my laptop's browser.
I tried opening these ports in the instance node's firewall: tcp:8080,8085,443,80,8005,8006,8007,8009,8009,8010,7990,7992,7993,7946,4789,2376,2377
But I am still unable to access the above-mentioned URL after replacing 127.0.0.1 with my external IP (39.103.89.09), i.e.
http://39.103.89.09:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
I believe I need to make some networking-related changes, but I don't know what.
I am very new to cloud computing and networking, so please help me.
I suspect that minikube binds to the VM's localhost interface, making it inaccessible from a remote machine.
There may be a way to run minikube such that it binds to 0.0.0.0, and then you may be able to use it remotely.
Alternatively, you can keep the firewall limited to e.g. port 22 and use SSH to port-forward the VM's port 8080 to your localhost. gcloud includes a helper for this too:
Ensure minikube is running on the VM
gcloud compute ssh ${INSTANCE} --project=${PROJECT} --zone=${ZONE} --ssh-flag="-L 8080:localhost:8080"
Try accessing Kubernetes endpoints from your local machine using localhost:8080/api/v1/...
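For example, while the SSH port-forward above is running and the proxy that serves 127.0.0.1:8080 on the VM is still up, something like this should work from your laptop (the path is the one from the question):
curl http://localhost:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/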
Update
OK, I created a Debian VM (n1-instance-2), installed docker and minikube.
SSH'd into the instance:
gcloud compute ssh ${INSTANCE} \
--zone=${ZONE} \
--project=${PROJECT}
Then minikube start
Then:
minikube kubectl -- get namespaces
NAME STATUS AGE
default Active 14s
kube-node-lease Active 16s
kube-public Active 16s
kube-system Active 16s
minikube appears (I'm unfamiliar with it) to run as a Docker container called minikube, and it exposes 4 ports to the VM's (!) localhost: 22, 2376, 5000, 8443. The last one is the key.
To determine the port mapping, either eyeball it:
docker container ls \
--filter=name=minikube \
--format="{{.Ports}}" \
| tr , \\n
Returns something like:
127.0.0.1:32771->22/tcp
127.0.0.1:32770->2376/tcp
127.0.0.1:32769->5000/tcp
127.0.0.1:32768->8443/tcp
In this case, the port we're interested in is 32768
Or:
docker container inspect minikube \
--format="{{ (index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort }}"
32768
Then, exit the shell and return using --ssh-flag:
gcloud compute ssh ${INSTANCE} \
--zone=${ZONE} \
--project=${PROJECT} \
--ssh-flag="-L 8443:localhost:32768"
NOTE 8443 will be the port on the localhost; 32768 is the remote minikube port
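A quick, hedged way to confirm the tunnel works (the /version endpoint is usually served without client certificates, but that depends on the API server's anonymous-auth settings):
curl -k https://localhost:8443/version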
Then, from another shell on your local machine (and while the port-forwarding ssh continues in the other shell), pull the ca.crt, client.key and client.crt:
gcloud compute scp \
$(whoami)@${INSTANCE}:./.minikube/profiles/minikube/client.* \
${PWD} \
--zone=${ZONE} \
--project=${PROJECT}
gcloud compute scp \
$(whoami)@${INSTANCE}:./.minikube/ca.crt \
${PWD} \
--zone=${ZONE} \
--project=${PROJECT}
Now, create a config file, call it kubeconfig:
apiVersion: v1
clusters:
- cluster:
certificate-authority: ./ca.crt
server: https://localhost:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: ./client.crt
client-key: ./client.key
And, lastly:
KUBECONFIG=./kubeconfig kubectl get namespaces
Should yield:
NAME STATUS AGE
default Active 23m
kube-node-lease Active 23m
kube-public Active 23m
kube-system Active 23m
During the installation of Kubernetes, an error is reported when I initialize the master node. I am using an ARM platform server and the operating system is CentOS 7.6 aarch64. Does Kubernetes support deploying master nodes on the ARM platform?
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Jun 30 22:53:04 master kubelet[54238]: W0630 22:53:04.188966 54238 pod_container_deletor.go:75] Container "51615bc1d926dcc56606bca9f452c178398bc08c78a2418a346209df28b95854" not found in pod's containers
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.189353 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.218672 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.236484 54238 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.112:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.238898 54238 certificate_manager.go:400] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://192.168.1.112:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.260520 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.289516 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.389666 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.436810 54238 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.1.112:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.489847 54238 kubelet.go:2248] node "master" not found
To start a Kubernetes cluster, make sure you have the minimum requirements for the Kubernetes platform.
If you want a Kubernetes cluster with low compute, we can discuss that separately.
You need:
Docker
A compute node with at least 4 GB of memory and 2 CPUs.
I will write the answer based on your node.
Docker
On each of your machines, install Docker. Version 19.03.11 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes.
Use the following commands to install Docker on your system:
Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE
yum update -y && yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.11 \
docker-ce-cli-19.03.11
Create /etc/docker
mkdir /etc/docker
Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
Restart Docker
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
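Optionally, confirm that Docker picked up the systemd cgroup driver (assuming your Docker version supports the --format field below):
docker info --format '{{.CgroupDriver}}'
# expected output: systemd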
Kubernetes
As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.
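In command form (the modules-load.d file is an optional extra so the module also loads after a reboot):
lsmod | grep br_netfilter
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf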
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
systemctl daemon-reload
systemctl restart kubelet
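A quick sanity check that the binaries are installed before initializing (version numbers will differ on your system):
kubeadm version
kubelet --version
kubectl version --client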
Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).
Master
Init the Kubernetes cluster (run this on the master node)
kubeadm init --pod-network-cidr 192.168.0.0/16
Note: I will use Calico here, so the CIDR is 192.168.0.0/16.
Move the kubeconfig to the user's home directory (assuming root)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Worker Node
Join other nodes (run the command below from your worker nodes)
kubeadm join <IP_PUBLIC>:6443 --token <TOKEN> \
--discovery-token-ca-cert-hash sha256:<HASH>
Note: you will get this command in the output when you successfully init the master.
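If you lose that output, you can regenerate the full join command on the master at any time:
kubeadm token create --print-join-command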
Master Node
Applying calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify cluster
kubectl get nodes
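It can also help to confirm that the Calico and CoreDNS pods reach the Running state (exact pod names vary per cluster):
kubectl get pods -n kube-system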
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Hey, I am trying to install Jenkins on a GKE cluster with this command:
helm install stable/jenkins -f test_values.yaml --name myjenkins
My helm and kubectl versions, in case it matters:
helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:51:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
The values were downloaded with the command helm inspect values stable/jenkins > test_values.yaml and then modified:
cat test_values.yaml
Master:
adminPassword: 34LbGfq5LWEUgw // local testing
resources:
limits:
cpu: '500m'
memory: '1024'
podLabels:
nodePort: 32323
serviceType: ClusterIp
Persistence:
storageClass: 'managed-nfs-storage'
size: 5Gi
rbac:
create: true
And there is some weird new error after updating:
$ helm install stable/jekins --name myjenkins -f test_values.yaml
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install stable/jekins --name myjenkins -f test_values.yaml
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
As I can see, you're trying to install stable/jekins, which isn't in the Helm repo, instead of stable/jenkins. Please update your question if it's just a misspelling and I'll update my answer, but I've tried your command:
$helm install stable/jekins --name myjenkins -f test_values.yaml
and got the same error:
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
EDIT: To solve subsequent errors like:
Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:258:11: executing "jenkins/templates/deprecation.yaml" at <fail "Master.* values have been renamed, please check the documentation">: error calling fail: Master.* values have been renamed, please check the documentation
and
Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:354:10: executing "jenkins/templates/deprecation.yaml" at <fail "Persistence.* values have been renamed, please check the documentation">: error calling fail: Persistence.* values have been renamed, please check the documentation
and so on, you also need to edit test_values.yaml:
master:
adminPassword: 34LbGfq5LWEUgw
resources:
limits:
cpu: 500m
memory: 1Gi
podLabels:
nodePort: 32323
serviceType: ClusterIP
persistence:
storageClass: 'managed-nfs-storage'
size: 5Gi
rbac:
create: true
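Before re-installing, a dry run can catch renamed values early (Helm v2 syntax, matching the client version shown in the question):
helm install stable/jenkins --name myjenkins -f test_values.yaml --dry-run --debug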
And after that it's deployed successfully:
$helm install stable/jenkins --name myjenkins -f test_values.yaml
NAME: myjenkins
LAST DEPLOYED: Wed Jan 8 15:14:51 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME AGE
myjenkins 1s
myjenkins-tests 1s
==> v1/Deployment
NAME AGE
myjenkins 0s
==> v1/PersistentVolumeClaim
NAME AGE
myjenkins 1s
==> v1/Pod(related)
NAME AGE
myjenkins-6c68c46b57-pm5gq 0s
==> v1/Role
NAME AGE
myjenkins-schedule-agents 1s
==> v1/RoleBinding
NAME AGE
myjenkins-schedule-agents 0s
==> v1/Secret
NAME AGE
myjenkins 1s
==> v1/Service
NAME AGE
myjenkins 0s
myjenkins-agent 0s
==> v1/ServiceAccount
NAME AGE
myjenkins 1s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default myjenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=myjenkins" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8080
kubectl --namespace default port-forward $POD_NAME 8080:8080
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
The stable repo is going to be deprecated very soon and is not being updated. I suggest using the Jenkins chart from the Helm Hub.
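For example (the repository URL below is the Jenkins project's own chart repo; value names may differ from the stable chart, so re-check your values file):
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins/jenkins --name myjenkins -f test_values.yaml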
This is a bit tricky. I have a K8s cluster up and running, I am able to execute a Docker image inside that cluster, and I can see the output of the command "kubectl get pods -o wide". Now I have GitLab set up with this K8s cluster.
I have set up the variables $KUBE_URL, $KUBE_USER and $KUBE_PASSWORD in GitLab for the above K8s cluster.
The GitLab runner console displays all of this information as shown in the console log below; at the end it fails with:
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1
Here is full console log:
Running with gitlab-runner 11.4.2 (cf91d5e1)
on WotC-Docker-ip-10-102-0-70 d457d50a
Using Docker executor with image docker:latest …
Pulling docker image docker:latest …
Using docker image sha256:062267097b77e3ecf374b437e93fefe2bbb2897da989f930e4750752ddfc822a for docker:latest …
Running on runner-d457d50a-project-185-concurrent-0 via ip-10-102-0-70…
Fetching changes…
Removing cluster1-config
HEAD is now at 25846c4 Initial commit
From https://git.com/core-systems/gatling
25846c4…bcaa89b master -> origin/master
Checking out bcaa89bf as master…
Skipping Git submodules setup
$ uname -a
Linux runner-d457d50a-project-185-concurrent-0 4.14.67-66.56.amzn1.x86_64 #1 SMP Tue Sep 4 22:03:21 UTC 2018 x86_64 Linux
$ apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/4) Installing nghttp2-libs (1.32.0-r0)
(2/4) Installing libssh2 (1.8.0-r3)
(3/4) Installing libcurl (7.61.1-r1)
(4/4) Installing curl (7.61.1-r1)
Executing busybox-1.28.4-r1.trigger
OK: 6 MiB in 18 packages
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
95 37.3M 95 35.8M 0 0 37.8M 0 --:--:-- --:--:-- --:--:-- 37.7M
100 37.3M 100 37.3M 0 0 38.3M 0 --:--:-- --:--:-- --:--:-- 38.3M
$ chmod +x ./kubectl
$ mv ./kubectl /usr/local/bin/kubectl
$ kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true
Cluster "nosebit" set.
$ kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
User "admin" set.
$ kubectl config set-context default --cluster=nosebit --user=admin
Context "default" created.
$ kubectl config use-context default
Switched to context "default".
$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://18.216.8.240:443
name: nosebit
contexts:
- context:
cluster: nosebit
user: admin
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
user:
password: |-
MIIDOzCCAiOgAwIBAgIJALOrUrxmhgpHMA0GCSqGSIb3DQEBCwUAMBgxFjAUBgNV
BAMMDTEzLjU4LjE3OC4yNDEwHhcNMTgxMTI1MjIwNzE1WhcNMjgxMTIyMjIwNzE1
WjAYMRYwFAYDVQQDDA0xMy41OC4xNzguMjQxMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEA4jmyesjEiy6T2meCdnzzLfSE1VtbY//0MprL9Iwsksa4xssf
PXrwq97I/aNNE2hWZhZkpPd0We/hNKh2rxwNjgozQTNcXqjC01ZVjfvpvwHzYDqj
4cz6y469rbuKqmXHKsy/1docA0IdyRKS1JKWz9Iy9Wi2knjZor6/kgvzGKdH96sl
ltwG7hNnIOrfNQ6Bzg1H6LEmFP+HyZoylWRsscAIxD8I/cmSz7YGM1L1HWqvUkRw
GE23TXSG4uNYDkFaqX46r4nwLlQp8p7heHeCV/mGPLd0QCUaCewqSR+gFkQz4nYX
l6BA3M0Bo4GHMIGEMB0GA1UdDgQW
BBQqsD7FUt9vBW2LcX4xbqhcO1khuTBIBgNVHSMEQTA/gBQqsD7FUt9vBW2LcX4x
bqhcO1khuaEcpBowGDEWMBQGA1UEAwwNMTMuNTguMTc4LjI0MYIJALOrUrxmhgpH
MAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQAY
6mxGeQ90mXYdbLtoVxOUSvqk9+Ded1IzuoQMr0joxkDz/95HCddyTgW0gMaYsv2J
IZVH7JQ6NkveTyd42QI29fFEkGfPaPuLZKn5Chr9QgXJ73aYrdFgluSgkqukg4rj
rrb+V++hE9uOBtDzcssd2g+j9oNA5j3VRKa97vi3o0eq6vs++ok0l1VD4wyx7m+l
seFx50RGXoDjIGh73Gh9Rs7/Pvc1Pj8uAGvj8B7ZpAMPEWYmkkc4F5Y/14YbtfGc
2VlUJcs5p7CbzsqI5Tqm+S9LzZXtD1dVnsbbbGqWo32CIm36Cxz/O/FCf8tbITpr
u2O7VjBs5Xfm3tiW811k
username: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tdzZqdDYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjFiMjc2YzIxLWYxMDAtMTFlOC04YjM3LTAyZDhiMzdkOTVhMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNQifQ.RCQQWjDCSkH8YckBeck-EIdvOnTKBmUACXVixPfUp9gAmUnit5qIPvvFnav-C-orfYt552NQ5GTLOA3yR5-jmxoYJwCJBfvPRb1GqqgiiJE2pBsu5Arm30MOi2wbt5uCNfKMAqcWiyJQF98M2PFc__jH6C1QWPXgJokyk7i8O6s3TD69KrrXNj_W4reDXourLl7HwHWoWwNKF0dgldanug-_zjvE06b6VZBI-YWpm9bpe_ArIOrMEjl0JRGerWahcQFVJsmhc4vgw-9-jUsfKPUYEfDItJdQKyV9dgdwShgzMINuuHlU7w7WBxmJT6cqMIvHRnDHuno3qMKTJTuh-g
$ kubectl config view --minify > cluster1-config
$ export KUBECONFIG=$HOME/.kube/config
$ kubectl --kubeconfig=cluster1-config config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
default nosebit admin
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1
==================================================================================================
Here is my .gitlab-ci.yml content. Could you suggest why kubectl get pods is not displaying the pods of the remote cluster, even though the KUBECONFIG setup completes successfully?
image: docker:latest
variables:
  CONTAINER_DEV_IMAGE: https://hub.docker.com/r/tarunkumard/gatling/:$CI_COMMIT_SHA
stages:
  - deploy
deploy:
  stage: deploy
  tags:
    - docker
  script:
    - 'uname -a'
    - 'apk add --no-cache curl'
    - 'curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl'
    - 'chmod +x ./kubectl'
    - 'mv ./kubectl /usr/local/bin/kubectl'
    - 'kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true'
    - 'kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"'
    - 'kubectl config set-context default --cluster=nosebit --user=admin'
    - 'kubectl config use-context default'
    - 'cat $HOME/.kube/config'
    - 'kubectl config view --minify > cluster1-config'
    - 'export KUBECONFIG=$HOME/.kube/config'
    - 'kubectl --kubeconfig=cluster1-config config get-contexts'
    - 'kubeconfig=cluster1-config kubectl get pods -o wide'
Why is the GitLab runner failing to get pods from the Kubernetes cluster? (Note: this cluster is up and running, and I am able to see pods using the kubectl get pods command.)
Basically,
kubectl config view --minify > cluster1-config
won't do it, because the output will be something like this, with no actual credentials/certs:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://<kube-apiserver>:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
namespace: default
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
You need:
kubectl config view --raw > cluster1-config
If that's not the issue, it could be that your credentials don't have the right RBAC permissions. I would try to find the ClusterRoleBinding or RoleBinding that is bound to that admin user. Something like:
$ kubectl get clusterrolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
$ kubectl get rolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
Once you find the role, you can see if it has the right permissions to view pods. For example:
$ kubectl get clusterrole cluster-admin -o=yaml
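If the user turns out to have no binding at all, you can test and then grant permissions explicitly (a deliberately broad sketch; adjust the binding name and the --user value to however your API server actually identifies that account):
$ kubectl auth can-i list pods          # run with the same credentials the CI job uses
$ kubectl create clusterrolebinding ci-admin --clusterrole=cluster-admin --user=admin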
I followed the official guide to install a Kubernetes cluster with kubeadm on Vagrant:
https://kubernetes.io/docs/getting-started-guides/kubeadm/
master
node1
node2
Master
# kubeadm init --apiserver-advertise-address=192.168.33.200
# sudo cp /etc/kubernetes/admin.conf $HOME/
# sudo chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yaml
Node1 and Node2
# kubeadm join --token <token> 192.168.33.200:6443
...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
Until now, everything succeeded.
But when I check kubectl get nodes on the master host, it returns only one node:
# kubectl get nodes
NAME STATUS AGE VERSION
localhost.localdomain Ready 25m v1.6.4
Sometimes, it returns:
# kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
Edit
I added a hostname to each of the hosts.
Then I checked kubectl get nodes again from the master:
[root@master ~]# kubectl get nodes
NAME STATUS AGE VERSION
localhost.localdomain Ready 4h v1.6.4
master Ready 12m v1.6.4
It just added the master's current hostname as a new node.
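Presumably each VM needs a unique hostname before kubeadm join, so the nodes stop registering as localhost.localdomain; e.g. on node1 (hostnamectl is standard on CentOS 7, and the node name is just an example):
# hostnamectl set-hostname node1
# kubeadm reset
# kubeadm join --token <token> 192.168.33.200:6443
(kubeadm reset is only needed if the node already joined under the wrong name.)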