I have a Kubernetes cluster composed of two machines. I want one machine to run the scheduler and both to act as workers.
I installed the Dask Helm chart as follows, per the official instructions:
helm install --name my-release dask/dask
However, I noticed on the Kubernetes dashboard that the scheduler and workers are only deployed on one node instead of both. How do I create workers on both nodes with the Dask Helm chart?
Even when I choose the dask-cuda-worker option, it doesn't select the machine with the NVIDIA GPU.
Can someone help me get workers on both nodes?
You can probably do this with a nodeSelector. For your case you may want something like:
Label the GPU node:
kubectl label nodes <node-name> accelerator=nvidia-tesla-p100
helm install --name my-release dask/dask --set nodeSelector.accelerator=nvidia-tesla-p100
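If you prefer a values file over --set flags, a minimal sketch of the same idea, assuming the chart exposes a nodeSelector field under its worker section (check the chart's values.yaml for the exact key):

# my-values.yaml -- hypothetical override for the dask/dask chart
worker:
  nodeSelector:
    accelerator: nvidia-tesla-p100   # schedule workers only on the labelled GPU node

helm install --name my-release dask/dask -f my-values.yaml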
I have only one instance machine, which is a CentOS 7 minimal installation. Can I install a Kubernetes cluster with the master and node on the same machine (without minikube and microk8s)? Is there any possible way?
Thanks.
You can use kind. Kind is a tool for running local Kubernetes clusters using Docker containers, so on the same machine a Docker container is created for each Kubernetes node.
Alternatively, you can install a single-node Kubernetes cluster using kubeadm and mark the master node as schedulable so that pods can be scheduled on the master node.
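A minimal sketch of both options (the pod network CIDR depends on the CNI you choose, and on newer clusters the taint key is node-role.kubernetes.io/control-plane rather than .../master):

# Option 1: kind -- one Docker container per Kubernetes node
kind create cluster --name dev

# Option 2: single-node kubeadm cluster, then allow pods on the master
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl taint nodes --all node-role.kubernetes.io/master-   # remove the master taint so workloads can run here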
I have been trying to set up a Kubernetes cluster.
I have two Ubuntu droplets on DigitalOcean that I am using to do this.
I have set up the master and joined the slave.
I am now trying to create a secret for my Docker credentials so that I can pull private images on the node. However, when I run this command (or any other kubectl command, e.g. kubectl get nodes) I get this error: The connection to the server localhost:8080 was refused - did you specify the right host or port?
kubectl itself is installed, since running kubectl on its own shows the help output.
Does anyone know why I might be getting this issue and how I can solve it?
Sorry, I have just started with Kubernetes, but I am trying to learn.
I understand that you have to set up the cluster under a user that is not root on the master (which I have done). Is it OK to use root on the slaves?
Thanks.
kubectl is used to connect to and run commands against the Kubernetes API server. There is no need to have it configured on worker (slave) nodes.
However, if you really need to make kubectl work from a worker node, you would need to do the following:
Create a .kube directory on the worker node:
mkdir -p $HOME/.kube
Copy the configuration file /etc/kubernetes/admin.conf from the master node to $HOME/.kube/config on the worker node.
Then run the following command on the worker node:
sudo chown $(id -u):$(id -g) $HOME/.kube/config
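Putting the steps together (the hostname master-node is a placeholder for your master's address, and scp assumes you have SSH access to it):

# run on the worker node
mkdir -p $HOME/.kube
scp root@master-node:/etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # should now reach the API server instead of localhost:8080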
Update:
To address your question in the comment:
That is not how Kubernetes nodes work.
From the Kubernetes documentation about Kubernetes Nodes:
The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you’ll rarely interact with nodes directly.
This means that the image pull configuration lives in the cluster (as a Secret referenced from the pod spec) and is available to whichever node runs the pod, so the kubelet there can pull from the private repository. There is no need to configure anything manually on the worker (slave) nodes.
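For the original goal of pulling private images, a minimal sketch of the usual flow; the secret name regcred, the registry address, and the pod/image names are placeholders:

kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>

# pod spec referencing the secret
apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  containers:
  - name: app
    image: registry.example.com/private-app:latest
  imagePullSecrets:
  - name: regcred   # matches the secret created above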
Additional information about Kubernetes Control Plane.
Hope this helps.
So I installed RancherOS, then Rancher, and got my Kubernetes cluster.
I added a single node with all roles,
and now I am thinking about how to add more nodes on the same physical machine. Any advice on which Docker image to use to run the Rancher agent, so I can spin up another node for the k8s cluster?
I just want to run multiple nodes on a single physical machine.
You can try k3s (a product from Rancher Labs).
You can run the Kubernetes cluster on one machine via docker-compose.
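A minimal sketch, loosely based on the example compose file in the k3s repository (the token is a placeholder, and details such as tmpfs mounts and image version may need adjusting):

# docker-compose.yml -- one k3s server and one agent on the same machine
version: "3"
services:
  server:
    image: rancher/k3s:latest
    command: server
    privileged: true
    environment:
      - K3S_TOKEN=mysecrettoken            # placeholder shared secret
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - ./kubeconfig:/output
    ports:
      - "6443:6443"                        # Kubernetes API server
  agent:
    image: rancher/k3s:latest
    command: agent
    privileged: true
    environment:
      - K3S_URL=https://server:6443        # join the server container above
      - K3S_TOKEN=mysecrettoken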
I am a little new to the concept of Docker and containerization. What I want is to deploy multiple databases like MongoDB, Elassandra, ... in one cloud service.
As mentioned in this link, we can create multiple VMs using the following command with the Docker CLI:
docker-machine create --driver virtualbox <virtualbox-name>
So, my question is: which of the following approaches performs better in one cloud implementation:
Having only one node
Having multiple nodes in a cluster
You should definitely have multiple nodes in the cluster.
Having a single node results in a single point of failure: if the node is down, your database is not accessible, which affects all the users connected to it.
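As a rough illustration in Docker Swarm terms (the image is just an example, and a real MongoDB cluster would also need replica-set configuration and persistent storage), running a replicated service across several machines means one node going down does not take the database with it:

docker swarm init                                       # on the first machine
docker swarm join --token <token> <manager-ip>:2377     # on each additional machine
docker service create --name mongo --replicas 3 mongo:4.2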
I'm just getting my feet wet with Docker Swarm because we're looking at ways to configure our compute cluster to make it more containerized.
Basically we have a small farm of 16 computers and I'd like to be able to have each node pull down the same image, run the same container, and accept jobs from an OpenMPI program running on a master node.
Nothing is really OpenMPI specific about this; the containers just have to be able to open SSH ports, and the master must be able to log into them. I've got this working with a single Docker container.
Now I'm learning Docker Machine and Docker Swarm as a way to manage the 16 nodes. From what I can tell, once I set up a swarm, I can then set it as the DOCKER_HOST (or use -H) to send a "docker run", and the swarm manager will decide which node runs the requested container. I got this basically working using a simple node list instead of messing with discovery services, and so far so good.
But I actually want to run the same container on all nodes in one command. Is this possible?
Docker 1.12 introduced global services: by passing --mode global to docker service create, Docker will schedule one task of the service on every node.
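A minimal sketch, where the image name is a placeholder for your SSH-enabled OpenMPI image:

# schedules exactly one task of the service on every node in the swarm
docker service create --name openmpi-worker --mode global my-registry/openmpi-ssh:latest
# verify: one task per node
docker service ps openmpi-worker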
Using Docker Swarm, you can use labels and a negative affinity filter to achieve the same result:
openmpi:
  environment:
    - "affinity:container!=*openmpi*"
  labels:
    - "com.myself.name=openmpi"