I have a set of Docker images running in a Kubernetes cluster on GKE. I have a Jenkins server running on a VM in GKE.
I have Docker builds and GKE deploys running on the Jenkins server, but I would like to start up a 'local' cluster on the Jenkins server after successful builds, run my containers in that cluster, run my tests against that cluster, and then shut the local cluster down before deploying the Docker images to GKE.
I know about minikube, but they state that you are not able to run nested VMs, and I wonder if this blocks my dream of testing my cluster before deploying it?
Do I have to run my local cluster on a physical server to be able to run my tests, or is there a solution to my problem?
Have you considered using kubeadm?
You can run a Kubernetes cluster within your Jenkins VM. Setup is a bit different from minikube, and it's still in beta, but it will let you test your cluster before the final deployment.
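A rough sketch of how that could look on the Jenkins VM, assuming kubeadm, kubelet and kubectl are already installed (the pod CIDR and CNI choice are placeholders, and the control-plane taint name differs between Kubernetes releases):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # bring up a single-node control plane
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f <your-cni-manifest.yml>             # e.g. Flannel or Calico
kubectl taint nodes --all node-role.kubernetes.io/control-plane-   # 'master' on older releases; lets pods schedule on the single node
# ... run your tests against the cluster ...
sudo kubeadm reset -f                                # tear the local cluster down again before deploying to GKE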
I have an Intel Atom dual-core with 4 GB RAM left over and want to use it to run Docker images.
What possible solutions are there for such a local installation? I have already found MicroK8s, which looks promising, but I am wondering which other alternatives there are. Is there maybe a complete distribution focused only on running Docker containers?
If I installed MicroK8s, I would still have to manage the Ubuntu installation hosting it. It would be nice to have a distribution that focuses only on running Docker containers and updates the operating system and the Docker components together, so I know they always work well together.
If you can run Docker, you can run Docker Desktop's built-in Kubernetes cluster.
You can also run minikube (on top of Docker, a hypervisor, or VirtualBox).
There is also kind, which runs a Kubernetes cluster inside Docker containers.
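For example, a throwaway kind cluster can be created and destroyed in a couple of commands (a rough sketch; the cluster name and image are placeholders, and this assumes the kind and kubectl binaries are already installed):
kind create cluster --name ci-test              # spins up a single-node cluster as a Docker container
kubectl cluster-info --context kind-ci-test     # kind configures a kubectl context for you
kind load docker-image my-app:latest --name ci-test   # placeholder image; makes a locally built image available to the cluster
kind delete cluster --name ci-test              # tear it down again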
This is a lab environment for playing with Docker containers on Kubernetes without installing anything: https://labs.play-with-k8s.com/
Minikube: https://github.com/kubernetes/minikube
Docker Swarm: https://docs.docker.com/engine/swarm/ is an alternative to Kubernetes with fewer features but an easier setup (a minimal setup sketch follows this list). Comparison: https://medium.com/faun/kubernetes-vs-docker-swarm-whos-the-bigger-and-better-53bbe76b9d11
Make your own cluster using VirtualBox: https://medium.com/@KevinHoffman/building-a-kubernetes-cluster-in-virtualbox-with-ubuntu-22cd338846dd
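If you go the Docker Swarm route mentioned above, bootstrapping a cluster takes only a couple of commands (a sketch; the IP address and token are placeholders):
docker swarm init --advertise-addr 192.168.1.10            # on the manager node; prints a join token
docker swarm join --token <worker-token> 192.168.1.10:2377 # on each worker node
docker node ls                                             # on the manager, to verify the cluster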
Local-machine Solutions
Community Supported Tools
Minikube is a method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn’t require a cloud provider account.
Kubeadm-dind is a multi-node Kubernetes cluster (while minikube is single-node) that requires only a Docker daemon. It uses the docker-in-docker technique to spawn the Kubernetes cluster.
Kubernetes IN Docker is a tool for running local Kubernetes clusters using Docker container “nodes”. It is primarily designed for testing Kubernetes 1.11+. You can use it to create multi-node or multi-control-plane Kubernetes clusters.
Ecosystem Tools
Docker Desktop is an easy-to-install application for your Mac or Windows environment that enables you to start coding and deploying in containers in minutes on a single-node Kubernetes cluster.
Minishift installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. It offers an all-in-one VM (minishift start) for Windows, macOS, and Linux. The container start is based on oc cluster up (Linux only). You can also install the included add-ons.
MicroK8s provides a single-command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick (~30 seconds) and supports many plugins, including Istio, with a single command (a short install sketch follows this list).
IBM Cloud Private-CE (Community Edition) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for development and test scenarios. Scales to full multi-node cluster.
IBM Cloud Private-CE (Community Edition) on Linux Containers provides Terraform/Packer/Bash-based Infrastructure as Code (IaC) scripts to create a seven-node (1 Boot, 1 Master, 1 Management, 1 Proxy, and 3 Workers) LXD cluster on a Linux host.
Ubuntu on LXD supports a nine-instance deployment on localhost.
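To make the MicroK8s entry above concrete, installation on Ubuntu is roughly the following (a sketch only; add-on names and snap channels vary between releases):
sudo snap install microk8s --classic    # installs the whole Kubernetes stack as a snap
sudo microk8s status --wait-ready       # wait until the node reports Ready
sudo microk8s enable dns                # optional add-ons, e.g. dns
sudo microk8s kubectl get nodes         # built-in kubectl, no separate install needed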
My very opinionated answer: you should use k3s by Rancher Labs: https://k3s.io/
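For reference, the quick install documented on that site is a one-liner (a sketch; review the script before piping it into a shell if that matters in your environment):
curl -sfL https://get.k3s.io | sh -     # installs and starts k3s as a systemd service
sudo k3s kubectl get nodes              # k3s ships its own kubectl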
I have a Windows 10 PC and want to run ASP.NET Windows containers on top of a single-node Kubernetes cluster for testing before deploying to the cloud. Is this feasible?
Yes, Windows 10 natively supports Docker, so you should be able to run Windows containers.
Use minikube to run a single-node Kubernetes cluster and then deploy your Windows container.
Referring to the above answer and your question, I will try to dispel your doubts.
Minikube is a tool that helps you run Kubernetes locally. It runs a single-node Kubernetes cluster inside a virtual machine (VM) on your laptop. You can read about minikube here: minikube.
Windows 10 supports Docker. You can run ASP.NET Windows containers on top of a Kubernetes cluster.
So, assuming you already have a running Kubernetes cluster, you don't have to care about minikube. Just deploy your Windows container on top of it. You can read about it here: windows-setup
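As a rough sketch, assuming your cluster already has a Windows worker node (the names and image tag below are only examples, and older clusters use the beta.kubernetes.io/os label instead), a deployment pinned to Windows nodes could look like this:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnet-demo                     # example name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet-demo
  template:
    metadata:
      labels:
        app: aspnet-demo
    spec:
      nodeSelector:
        kubernetes.io/os: windows       # schedule only onto Windows nodes
      containers:
      - name: aspnet
        image: mcr.microsoft.com/dotnet/framework/aspnet:4.8   # example ASP.NET image
        ports:
        - containerPort: 80
EOF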
I hope it helps.
I have been working with Docker to run my scripts on the chrome-node and firefox-node images and to debug with the selenium-hub image, where everything runs smoothly, but when I use the same setup with Kubernetes the whole system slows down. Why is this happening? Any ideas? I am using minikube for Kubernetes, and Docker Toolbox and Docker Compose for Docker.
Thanks,
There would definitely be an additional overhead when you start Kubernetes using minikube locally, compared to just starting a Docker container on the host.
In order to have a Kubernetes cluster, minikube creates a VM on the machine, and the Kubernetes components run inside it in addition to your Docker containers.
Anyway, minikube is not a production way of running Kubernetes; it is mostly meant for local development and testing. Therefore, you shouldn't evaluate Kubernetes performance based on a minikube installation.
I would like my Jenkins master (not containerized) to create slaves within containers. So I have installed the Docker plugin in Jenkins, created a Docker server, configured it, and Jenkins does indeed spin up a slave container fine after the job is created.
However, after I created another Docker server and built a swarm out of the two of them, running Jenkins jobs again has continued to deploy containers only on the original server (which is now also a manager). I'd expect the swarm to balance the load and distribute the newly created containers evenly across the swarm. What am I missing?
Do I have to use a service perhaps?
Containers by themselves are not load balanced, even when deployed in a swarm. What you're looking for would indeed be a service definition. Just be careful with port allocation: if you deploy your Jenkins slaves to listen on port 80, for example, all swarm hosts will listen on port 80 and mesh-route to the containers.
That basically means you couldn't deploy anything else on port 80 on those hosts. Once that's done, however, any requests to the hosts would be load balanced across the containers.
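A hedged sketch of what such a service could look like (the image name and ports are placeholders, not a complete Jenkins agent setup):
docker service create \
  --name JenkinsService \
  --replicas 2 \
  --publish published=8080,target=8080 \
  my-jenkins-agent:latest               # placeholder image; swarm spreads the replicas across the nodes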
The other nice thing is that you can dynamically change the number of replicas with service update:
docker service update JenkinsService --replicas 42
While 42 may be extreme, you could obviously change it :)
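If I remember correctly, docker service scale is an equivalent shorthand:
docker service scale JenkinsService=42   # same effect as the update command above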
At that point there was nothing I could find in Swarm that would help me manipulate container distribution across the swarm nodes.
I ended up using the more flexible Kubernetes for that purpose. I think Marathon is capable of that as well.
I have a 3-node Mesos cluster with the Marathon framework. On the slaves I have Docker, and I want to deploy a few WildFly instances on one node.
How can I deploy a few instances of WildFly Docker containers on one Mesos slave node with Marathon?
Deploying a Docker container using Marathon is usually straightforward.
Do I understand correctly that you want to deploy several containers onto a single slave? In that case you should look at Marathon's constraints.
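For example, a hostname constraint pins all instances of an app to one slave, and the instances field controls how many copies run there (a sketch only; the Marathon address, hostname and app definition below are placeholders):
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
    "id": "/wildfly",
    "instances": 3,
    "cpus": 0.5,
    "mem": 512,
    "container": {
      "type": "DOCKER",
      "docker": { "image": "jboss/wildfly", "network": "BRIDGE" }
    },
    "constraints": [["hostname", "CLUSTER", "slave-1.example.com"]]
  }'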