I have set up a Kubernetes cluster and it is working fine.
As of now, the cluster is deploying containers on the master node as well. I don't want this to happen. Can anybody let me know how to prevent containers from being deployed on the master?
If you want no Pods to be scheduled on the master you will need to taint the master node:
kubectl taint nodes nameofmaster dedicated=master:NoSchedule
Read up on taints and tolerations to understand the consequences and how to schedule specific Pods on the now-tainted master node. To undo it, use kubectl taint nodes nameofmaster dedicated-
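For example, a Pod that should still be allowed on the tainted master would need a matching toleration. A minimal sketch, assuming the dedicated=master:NoSchedule taint from the command above (the Pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: master-only-pod       # placeholder name
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "master"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx              # placeholder image

Note that a toleration only permits scheduling on the master; if you want to force the Pod there, combine it with a nodeSelector or node affinity.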
I would like to setup a Kubernetes cluster as follows:
Kubernetes will be installed on top of VMs depicted in pink.
I am going to use StatefulSets or ReplicaSets to deploy the Jenkins master and Jenkins executors. I would like the workspace folder on the Jenkins master to always be in sync across all replicas, in the eventuality of losing any worker VM or server.
Can this be achieved using the internal mechanisms of ReplicaSets or StatefulSets, or is there any other way of keeping the workspace in sync?
Thank you,
Albert
You can't just assume that a StatefulSet will do the job for you. You can configure an NFS server, point a PV to it, bind your PVC to this PV, and have your STS reference that PVC. So, basically:
STS -> PVC -> PV -> NFS server
That way, even if one worker node goes down, it won't impact the others.
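A minimal sketch of that chain, assuming an NFS server at 10.0.0.10 exporting /exports/jenkins (both placeholders for your own setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/jenkins   # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""       # bind to the pre-created PV rather than a dynamic one
  resources:
    requests:
      storage: 10Gi

The StatefulSet's Pod template would then mount a volume whose persistentVolumeClaim.claimName is jenkins-pvc.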
I have only one machine, a CentOS 7 minimal installation. Can I install a Kubernetes cluster with the master and node on the same machine (without minikube or MicroK8s)? Is there any possible way?
Thanks.
You can use kind. Kind is a tool for running local Kubernetes clusters using Docker containers, so on the same machine a Docker container is created for each Kubernetes node.
Alternatively, you can install a single-node Kubernetes cluster using kubeadm and mark the master node as schedulable so that Pods can be scheduled on it.
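Roughly, either approach looks like this. A sketch; on kubeadm the exact taint key depends on your Kubernetes version (newer releases use node-role.kubernetes.io/control-plane instead):

# Option 1: kind - one Docker container per Kubernetes node
kind create cluster

# Option 2: single-node cluster with kubeadm, then allow Pods on the master
sudo kubeadm init
# after setting up the kubeconfig as kubeadm's output instructs:
kubectl taint nodes --all node-role.kubernetes.io/master-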
I have been trying to set up a kubernetes cluster.
I have two ubuntu droplets on digital ocean I am using to do this.
I have set up the master and joined the slave.
I am now trying to create a secret for my Docker credentials so that I can pull private images on the node. However, when I run this command (or any other kubectl command, e.g. kubectl get nodes) I get this error: The connection to the server localhost:8080 was refused - did you specify the right host or port?
kubectl itself is installed, though, since running kubectl on its own shows the help output.
Does anyone know why I might be getting this issue and how I can solve it?
Sorry, I have just started with Kubernetes, but I am trying to learn.
I understand that you have to set up the cluster as a non-root user on the master (which I have); is it OK to use root on the slaves?
Thanks
kubectl is used to connect to the Kubernetes API server and run commands against it. There is no need to have it configured on worker (slave) nodes.
However, if you really need to make kubectl work from a worker node, you would need to do the following:
Create a .kube directory on the worker node:
mkdir -p $HOME/.kube
Copy the configuration file /etc/kubernetes/admin.conf from the master node to $HOME/.kube/config on the worker node.
Then run the following command on the worker node:
sudo chown $(id -u):$(id -g) $HOME/.kube/config
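Put together, on the worker node the steps might look like this. A sketch assuming you have SSH access to the master; master-node is a placeholder hostname:

mkdir -p $HOME/.kube
# copy the admin kubeconfig from the master (placeholder hostname)
scp root@master-node:/etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # should now reach the API server instead of localhost:8080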
Update:
To address your question in the comment: that is not how Kubernetes nodes work.
From the Kubernetes documentation about Kubernetes Nodes:
The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you’ll rarely interact with nodes directly.
This means that an image pull secret created against the API server is stored in the cluster and handed to the kubelet of whichever node the Pod is scheduled on. There is no need to configure anything on the worker (slave) nodes.
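For the original goal of pulling private images, it is typically enough to create a docker-registry secret once and reference it from the Pod spec. A sketch with placeholder registry and credentials:

kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword
# then reference it in the Pod spec:
#   imagePullSecrets:
#   - name: regcred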
Additional information about Kubernetes Control Plane.
Hope this helps.
My Kubernetes setup:
v1.16.2 on bare metal
1 master node: used for Jenkins Master + Docker registry
5 slave nodes: used for Jenkins JNLP slaves
I use the kubernetes-plugin to run slave Docker agents. All slave k8s nodes are labeled "jenkins=slave". When I use a nodeSelector ("jenkins=slave") for the podTemplate, Kubernetes always schedules the new Pod on the same node, regardless of the number of started Jenkins jobs.
Please give me advice on how I can configure Kubernetes or the kubernetes-plugin to schedule each next build round-robin (across all labeled nodes in the Kubernetes cluster).
Thank you.
This is generally handled by the inter-pod anti-affinity configuration https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity. You would set this in the pod template for your builder deployment. That said, it's more common to use the Kubernetes plugin for Jenkins, which runs each build as a temporary pod, rather than having long-lived JNLP builders.
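A hedged sketch of such a rule in the agent podTemplate; the jenkins: slave node label comes from the question, while the app: jenkins-agent Pod label is a placeholder you would set on the agent Pods:

spec:
  nodeSelector:
    jenkins: slave
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: jenkins-agent   # placeholder label on the agent Pods
          topologyKey: kubernetes.io/hostname

With preferred (soft) anti-affinity the scheduler spreads agent Pods across the labeled nodes when it can, instead of piling them onto one node.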
The master node for an AKS cluster is a public endpoint. How does the master node communicate with the worker nodes? What are the ports required for communication?
For Azure Kubernetes Service, the master is managed by Azure; you do not need to manage it yourself, and in fact cannot, unless you use aks-engine. What you can manage are the nodes. The nodes communicate with the master via the API server over port 443.
You can see the master's endpoint with the command kubectl cluster-info and get more details with kubectl cluster-info dump, which produces a lot of output. I suggest you do not need to care about it, because you cannot manage it. If you really want to, try aks-engine.
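For reference, the two commands mentioned above, run against your AKS cluster's kubeconfig:

kubectl cluster-info        # shows the public API server (master) endpoint
kubectl cluster-info dump   # very verbose diagnostic output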