How do I scale up my cluster on Google Container Engine / Kubernetes? - docker

I have two instances of an app container (happens to be a Node.JS app, but that shouldn't matter) running in a Kubernetes cluster on Google Container Engine. I'd like to scale it up to three instances.
My cluster has a master and two minion nodes, with a replication controller and a load balancer service. The replication controller keeps my app container running happily on the two nodes.
I can see that there is a handy gcloud alpha container kubectl resize command which lets me change the number of replicas, but I don't see how or if I can increase the size of the cluster itself, so that it can spin up another minion node. I only see gcloud commands to create, delete, list and describe clusters; nothing to resize them.
If I can't resize my cluster, then to scale up I'd need to create a whole new cluster and kill the old one. Am I missing something?
Also, are there plans to support auto-scaling?

Update (June 2015): Kubernetes on GCE now uses managed instance groups which you can manually resize to add new nodes to your cluster.
There isn't currently a way to add nodes to your existing Google Container Engine cluster. We are currently adding support to Kubernetes to allow clusters to have nodes dynamically added but the work isn't quite finished yet. Once the feature is available in Kubernetes you can expect that it will show up in Google Container Engine shortly after the next Kubernetes release.
In the meantime, it should be possible to run more than two replicas of your node.js application on the existing two VMs.
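For illustration, resizing the cluster and then bumping the replica count might look like the following; the instance group name, zone, and controller name are placeholders for your own setup, and kubectl scale is the newer name of the resize command mentioned above.

# Add a third node by resizing the managed instance group that backs the cluster
# (placeholder names; find yours with `gcloud compute instance-groups managed list`):
gcloud compute instance-groups managed resize gke-mycluster-node-group \
    --size 3 --zone us-central1-a

# Then scale the replication controller to three replicas:
kubectl scale rc my-app --replicas=3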

This presentation, http://fr.slideshare.net/craigbox/autoscaling-kubernetes, is about horizontal scaling of Kubernetes on Google Cloud. I'm putting the link here to save some googling for the next person who ends up on this thread.

Related

Unsure on how to Orchestrate docker containers

I'm new to Docker and want to accomplish something, but I'm unsure how to orchestrate my Docker containers to do it.
What I want to do:
I have an API that, in simple terms, does a calculation on a requested file. It loads the file (around 80 MB) from disk into memory, then keeps it in memory for 2 hours (caching).
I want an architecture where, for example, when the container gets overwhelmed with requests a new one fires up, and when the original container frees its memory and the requests slow down, the container shuts down.
Is Memory and CPU Container Orchestration possible?
Thank You,
/Jeremy
Docker itself is not dedicated to orchestrating multiple containers. You need to use a container orchestration environment. The most popular are Kubernetes, Docker Swarm, and Apache Mesos, or, if you want to run in the cloud, something vendor-specific like AWS ECS.
Here's a good list of container clustering toolkits.
In all these environments it's possible to configure what you described. If you're completely new to the topic, then I recommend installing Docker Desktop, which comes with a built-in Kubernetes cluster, and playing with that locally.
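As a rough sketch of how you would describe the memory and CPU needs of that API to Kubernetes (the image name and all values below are made-up placeholders), resource requests and limits are what the orchestrator uses to place containers and, later, to make scaling decisions:

# A minimal sketch, not a production manifest; image and values are placeholders.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: file-calc-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: file-calc-api
  template:
    metadata:
      labels:
        app: file-calc-api
    spec:
      containers:
      - name: api
        image: myregistry/file-calc-api:latest   # placeholder image
        resources:
          requests:
            memory: "256Mi"   # headroom for the ~80 MB cached file
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
EOF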
A container orchestration system is indeed what you want in order to manage your Docker containers efficiently.
You can find a current, fairly complete list of solutions for production environments in this spreadsheet.
Tools like Kubernetes will give you a rich set of benefits, e.g.:
Provisioning and deployment of containers
Redundancy and availability of containers
Scaling up or removing containers to spread application load evenly across host infrastructure
Allocation of resources between containers
Load balancing of service discovery between containers
Health monitoring of containers and hosts
In Kubernetes there is a Horizontal Pod Autoscaler, which automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
To begin with, I would recommend starting with minikube.
More advanced options are setting up a cluster manually using kubeadm, or looking into the cloud providers' managed offerings.
Please be aware that with a managed offering you will not have the option to modify the cloud-based control plane. More info in my related answer.
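To make the Horizontal Pod Autoscaler mentioned above concrete, here is a hedged sketch using the kubectl shorthand; the deployment name and thresholds are placeholders:

# Keep between 1 and 5 replicas, targeting 80% average CPU utilization
# (the deployment must have CPU requests set for this to work):
kubectl autoscale deployment my-app --min=1 --max=5 --cpu-percent=80

# Watch what the autoscaler is doing:
kubectl get hpa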

Why use docker service?

This question illustrates the theoretical differences between docker run and docker service.
What I don't understand is when would one need to use the exact same container replicated multiple times (as per the Docker documentation example)?
There, they run the same web app replicated 5 times.
Is deployment on Kubernetes (for example) a potential use case, where the developer does not want to centralize the app on one host, in order to make it more resilient, hence why 5 replicas are created?
To help me understand, can someone please give an example use case where docker service is useful?
Swarm is an orchestrator, just like Kubernetes. docker service deploys services to Swarm, just as you deploy your services to Kubernetes using kubectl.
Swarm is essentially a primitive orchestrator built into Docker. One possible case for replicas is running a proxy that directs requests to the proper containers. You could expose multiple machines and have one take the place of another in case it fails. Or any other high-availability case you can think of.
Your question could be rephrased as "What's the difference between running a single container and running containers in a cluster?", which would be another question altogether, but that rephrasing might help illustrate what docker service does.
If you want to scale your application, you can run multiple instances of it (horizontal scaling) or you beef up the machine(s) that it runs on (vertical scaling). For the first, you would have to put a load balancer in front of your application so that the traffic is evenly distributed between the different instances. The idea is that those instances run on different hosts, so if one goes down, your application is still up. Some controlling instance (a Kubernetes service, for example) will notice that one of your instances has gone south and won't direct any more traffic to it. Nowadays, with all the cloud stuff going on, this is typically the way to go.
You don't need Kubernetes for such a setup, but you're right, this would be a typical use case for it. At least if you run your application in a Docker container.
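A minimal sketch of such a setup on Kubernetes (the image and names are placeholders, and this assumes a reasonably recent kubectl): several replicas behind a Service that only routes traffic to healthy instances.

# Run three replicas of the app (placeholder image):
kubectl create deployment webapp --image=myregistry/webapp:latest --replicas=3

# Put a load-balancing Service in front of them:
kubectl expose deployment webapp --type=LoadBalancer --port=80 --target-port=8080

# The Service's endpoint list shrinks automatically when an instance goes down:
kubectl get endpoints webapp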
One use case is running on a Docker Swarm cluster consisting of n nodes. You can run replicas of your application on the swarm cluster with a load balancer/reverse proxy in front to balance the load across your setup. If any one of the nodes goes down, the application can still run.
But the core use case for running multiple instances is scalability. Suppose you know that one instance of your app can serve 10,000 users at a time (assume bank authentication).
If you want your application to serve 50K users, just run 5 replicas (using docker service create).
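A hedged sketch of that (service name, image, and ports are placeholders):

# Initialize a swarm on the manager node, then run 5 replicas behind Swarm's
# built-in routing mesh; all values here are illustrative.
docker swarm init
docker service create --name auth-app --replicas 5 -p 80:8080 myregistry/auth-app:latest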

Benefit to placing Database and Application in same Kubernetes pod

I know just the bare minimum of Kubernetes. However, I wanted to know if there would be any benefit in running 2 containers in a single pod:
1 Container running the application (e.g. a NodeJS app)
1 Container running the corresponding local database (e.g. a PouchDB database)
Would this increase performance, or would the downsides of coupling the two containers outweigh any benefits?
Pods are designed to put together containers that share the same lifecycle. Containers inside the same pod share some namespaces (like networking) and volumes.
Because of this, coupling an app with its database could look like a good idea, since the app could just connect to the database through localhost, etc. But it is not! As Diego Velez pointed out, one of the first limitations you could face is scaling your app. If you couple your app with your database, you are forced to scale your database whenever you scale your app, which is not optimal at all and takes away one of the main benefits of using a container orchestrator like Kubernetes.
Some good use cases are:
Container with the app + container with an agent for app metrics, CI agents, etc.
CI/CD container (like Jenkins agents) + container(s) with tools for CI/CD.
Container with the app + container with a proxy (like Istio making use of the sidecar pattern).
Let's say you need to scale your app (the pod): what would happen is that the DB would also be scaled, and that would cause an error, because the DB is not set up to be a cluster, just a single node.
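For reference, a hedged sketch of a two-container pod that follows the recommended pattern from the list above, pairing the app with a sidecar rather than its database (all names and images are made up):

# Placeholder names and images; the point is only the pod structure.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: myregistry/node-app:latest        # placeholder app image
    ports:
    - containerPort: 3000
  - name: metrics-agent
    image: myregistry/metrics-agent:latest   # placeholder sidecar image
    # Both containers share the pod's network namespace, so the agent can
    # reach the app on localhost:3000.
EOF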

Manually deleting unused Images on kubernetes (GKE)

I am running a managed kubernetes cluster on Google Cloud Platform with a single node for development.
However when I update Pod images too frequently, the ImagePull step fails due to insufficient disk space in the boot disk.
I noticed that images should be auto GC-ed according to the documentation, but I have no idea what the setting is on GKE or how to change it.
https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#image-collection
Can I manually trigger an unused-image cleanup, using kubectl or a Google Cloud console command?
How do I check/change the GC setting above so that I won't encounter this issue in the future?
Since the garbage collector is an automated service, there are no kubectl commands, or any other commands within GCP, to trigger garbage collection manually.
In regards to your second inquiry, garbage collection is controlled by kubelet settings on the nodes, which are managed by GKE and not accessible to users. As such, users cannot configure garbage collection within GKE.
The only workaround I can offer is to create a custom cluster from scratch within Google Compute Engine. This would provide you access to the Master node of your cluster so you can have the flexibility of configuring the cluster to your liking.
Edit: If you need to delete old images, I would suggest removing them using docker commands. I have attached a GitHub article here that provides several different commands you can run at the node level to remove old images.
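A rough sketch of doing that by hand, assuming the node still uses Docker as its container runtime and that SSH access is enabled; the instance name and zone are placeholders:

# SSH into the node (use your node's actual instance name and zone):
gcloud compute ssh gke-mycluster-default-pool-node-1 --zone us-central1-a

# On the node: remove dangling images only...
docker image prune -f
# ...or remove every image not used by a running container (more aggressive):
docker image prune -a -f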

How to keep a certain number of Docker containers running the same application and add/remove them as needed?

I've been working with Docker containers. What I've done is launch 5 containers running the same application; I use HAProxy to redirect requests to them, added a volume to preserve data, and set the restart policy to Always.
It works (so far this is my load-balancing approach). But sometimes I need another container to join the pool because there are more requests, or maybe at first I don't need 5 containers at all.
This is provided by the Swarm Mode addition in Docker 1.12. It includes orchestration that lets you not only scale your service up or down, but recover from an outage by automatically rescheduling the jobs to run on other nodes.
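A hedged sketch of what that looks like (service name, image, and counts are placeholders): Swarm keeps the requested number of replicas running, reschedules them if a node dies, and lets you change the count on the fly.

docker swarm init
docker service create --name web --replicas 5 -p 80:80 myregistry/web:latest

# Add or remove capacity as request volume changes:
docker service scale web=7
docker service scale web=3

# See where each replica (task) is currently running:
docker service ps web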
If you don't want to use Docker 1.12 (yet!), you can also use a service discovery tool like Consul, register your containers in it, and use a tool like Consul Template to regenerate your load balancer configuration accordingly.
I gave a talk about this 6 months ago. You can find the code and the configuration I used during my demo here: https://github.com/bargenson/dockerdemo
