How do docker containers scale in Google App Engine?

Google App Engine flexible allows you to deploy docker containers... how does scaling manifest itself?
Will a new VM be spun up each time the application needs to scale or can it spin up new container instances on an existing VM?
Can individual containers scale independently of each other? E.g. the product container is under load but the customer container is not, so only a new product container is spun up?
I realize GKE would be a better option for scaling containers, but I need to understand how this works on GAE for a multitude of reasons.

App Engine Flex will only run one container of your app per VM instance. If it needs to scale up, it'll always create a new VM to run the new container.
As per your example, if you want to scale the "product" and "customer" containers separately, you'll need to define them as separate App Engine services. Each service will have its own scaling setup and will act independently.
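For illustration, a rough sketch of what the two services' app.yaml files could look like on App Engine Flex (the service names come from the question; all scaling values are made up):

    # product/app.yaml -- the "customer" service would get an analogous file
    runtime: custom        # use the service's own Dockerfile
    env: flex
    service: product
    automatic_scaling:
      min_num_instances: 2
      max_num_instances: 10
      cpu_utilization:
        target_utilization: 0.6

Each service is then deployed on its own (e.g. gcloud app deploy product/app.yaml) and scales according to its own settings.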

If you have containers, you can have a look at Cloud Run, which scales to 0 and can scale up very quickly (there is no new VM to provision, which can take several seconds on App Engine Flex).
However, long-running requests aren't supported (limited to 15 minutes). It all depends on your requirements in terms of features, portability and scalability.
Provide more details if you want more advice.
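If it helps, a Cloud Run service can also be described declaratively with Knative-style YAML. Here is a hedged sketch (the service name and image are hypothetical; the 15-minute cap mentioned above corresponds to the request timeout):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: product
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/maxScale: "10"   # upper bound on instances
        spec:
          timeoutSeconds: 900          # request timeout (the 15-minute limit)
          containers:
          - image: gcr.io/my-project/product   # hypothetical image

With no traffic the service scales down to zero instances by default.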

Google App Engine is a fully managed serverless platform, where you basically submit your code and GAE will manage the underlying infrastructure and the runtime environment (for example, the version of the Python interpreter). You can also customize the runtime environment with Dockerfiles.
In contrast, GKE provides more fine-grained control over your cluster infrastructure. You can configure your compute resources, network, security, how services are exposed, custom scaling policies, etc. GKE can be considered a managed container orchestration platform.
An alternative to GKE that can provide even more control is creating the resources you need in GCE and configuring Kubernetes by yourself.
Both GKE and GAE are based on, and priced by, Compute Engine instances. Google Cloud Functions, however, is a more recent event-driven serverless service. GCF is great if you want to execute code on an event-driven basis (for example, sending a confirmation email after a user registers).
In terms of complexity and control over your code's environment I would order the different Google services as:
GCE(Compute Engine) > GKE(Kubernetes Engine) > GAE(App Engine) > GCF(Cloud Functions)
One point to consider is that the lower-level you go, the easier it is to migrate your service to another platform.
Given that you seem to be deploying only containerized applications, I would recommend giving GKE a try, especially if you want to have a cluster of multiple services that interact with each other.
In terms of scaling, GAE will only scale VM instances, and you have only one app container per VM instance.
In GKE you have two types of scaling: container scaling and VM instance scaling. You can have multiple containers in one instance, and those containers can be different apps. Based on limits you define (such as the CPU used by an app), GKE will try to efficiently allocate the containers across the instances of your cluster.
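As a sketch of what "limits you define" looks like in practice on GKE (names, image and numbers are hypothetical), each container declares resource requests/limits and the scheduler packs pods onto nodes accordingly:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: product
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: product
      template:
        metadata:
          labels:
            app: product
        spec:
          containers:
          - name: product
            image: gcr.io/my-project/product:latest   # hypothetical image
            resources:
              requests:            # what the scheduler uses for placement
                cpu: 250m
                memory: 256Mi
              limits:              # hard caps for the container
                cpu: 500m
                memory: 512Mi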

Related

Does it make sense to cluster NodeJs (in order to take advantage of multiple CPUs) if it will be deployed with an orchestration tool like Kubernetes?

Right now I am struggling with debugging a NodeJs application which is clustered and is running on Docker. I found this link and this information in it:
"Remember, Node.js is still single-threaded in most cases, so even on a single server you'll likely want to spin up multiple container replicas to take advantage of multiple CPUs."
So does that mean that clustering a NodeJs app is pointless when it is meant to be deployed on Kubernetes?
EDIT: I should also say that by clustering I mean forking workers with cluster.fork(), and the goal of the application is to build a simple REST API with a high traffic load.
Short answer is yes.
Containers are just mini VMs, and Kubernetes is the orchestration tool that manages all the running 'containers', checking for health, resource allocation, load, etc.
So, if you are running your Node application in a container with an orchestration tool like Kubernetes, then clustering is moot, as each 'container' will be using 1 CPU or a partial CPU depending on how you have it configured. Multiple containers essentially just place a new VM in rotation, and Kubernetes will direct traffic to each.
Now, when we talk about clustering Node, that really comes into play when using tools like PM2. Let's say you have a beefy server with 8 CPUs; Node can only use 1 per instance, so tools like PM2 set up a cluster and route traffic across each of the running instances.
One thing to keep in mind, though, is that your application needs to be cluster- or container-ready. Nothing should be stored on the ephemeral disk, because with each container restart that data is lost, and in a cluster situation there is no guarantee the same folders will be available to each running instance; if you cluster across multiple servers you are asking for trouble :D (this is where an object store like S3 would come into play).
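To make the "replicas instead of cluster.fork()" idea concrete, here is a hedged sketch of a Deployment (all names and numbers are hypothetical) where each replica is capped at one CPU and Kubernetes spreads the replicas for you:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rest-api
    spec:
      replicas: 4                  # roughly one replica per CPU you want to use
      selector:
        matchLabels:
          app: rest-api
      template:
        metadata:
          labels:
            app: rest-api
        spec:
          containers:
          - name: rest-api
            image: registry.example.com/rest-api:latest   # hypothetical image
            resources:
              limits:
                cpu: "1"           # one CPU per container, so no in-process clustering needed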

Which architecture should I use to preinitialize N dockers in one server?

I want to create, on just one server, an application that runs inside a Docker container. I want the server to always have N active containers running, and if an (N+1)th user enters, then a new container is launched.
I think that I need this architecture:
Nginx for load balancing
Kubernetes, to orchestrate the dockers
Docker, with the app inside the container.
Is this correct? I am not sure if Kubernetes is what I need when I'm only going to use just one server.
You probably want to use the Horizontal Pod Autoscaler functionality of Kubernetes. Docs here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
Your approach of using the number of users as a metric will probably work, but maybe you are better off using generic metrics, such as CPU consumption or memory availability on your node. For that, you define the maximum consumption that your containers are allowed to use.
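A minimal HorizontalPodAutoscaler along those lines could look like this (names and thresholds are made up; it assumes an existing Deployment called app):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: app-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: app                  # hypothetical Deployment to scale
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # scale out above ~70% average CPU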

Kubernetes scaling pods using custom algorithm

Our cloud application consists of 3 tightly coupled Docker containers: Nginx, Web and Mongo. Currently we run these containers on a single machine. However, as our users are increasing we are looking for a solution to scale. Using Kubernetes we would form a multi-container pod. If we are to replicate, we need to replicate all 3 containers as a unit. Our cloud application is consumed by mobile app users. Our app can only handle approx 30,000 users per worker node and we intend to place a single pod on a single worker node. Once a mobile device is connected to a worker node it must continue to only use that machine (unique IP address).
We plan on using Kubernetes to manage the containers. Load balancing doesn't work for our use case as a mobile device needs to be tied to a single machine once assigned and each Pod works independently with its own persistent volume. However we need a way of spinning up new Pods on worker nodes if the number of users goes over 30000 and so on.
The idea is we have some sort of custom scheduler which assigns a mobile device to a worker node (domain/IP address) depending on the number of users on that node.
Is Kubernetes a good fit for this design, and how could we implement a custom pod scaling algorithm?
Thanks
Piggy-Backing on the answer of Jonah Benton:
While this is technically possible - your problem is not with Kubernetes, it's with your application! Let me point out the problem:
Our cloud application consists of 3 tightly coupled Docker containers, Nginx, Web, and Mongo.
Here is your first problem: if you can only deploy these three containers together and not independently - you cannot scale one without the other!
While MongoDB can be scaled to insane loads - if it's bundled with your web server and web application it won't be able to...
So the first step for you is to break up these three components so they can be managed independently of each other. Next:
Currently we run these containers on a single machine.
While not strictly a problem - I have serious doubts about what it would mean to scale your application and about the challenges that come with scalability!
Once a mobile device is connected to worker node it must continue to only use that machine ( unique IP address )
Now, this IS a problem. You're looking to run an application on Kubernetes, but I do not think you understand the consequences of doing that: Kubernetes orchestrates your resources. This means it will move pods (by killing and recreating them) between nodes (and, if necessary, back to the same node). It does this fully autonomously (which is awesome and gives you a good night's sleep). If you're relying on clients sticking to a single node's IP, you're going to get up in the middle of the night because Kubernetes tried to correct for a node failure and moved your pod, which is now gone, and your users can't connect anymore. You need to leverage the load-balancing features (Services) in Kubernetes. Only they are able to handle the dynamic changes that happen in Kubernetes clusters.
Using Kubernetes we would form a multi container pod.
And we have another winner - No! You're trying to treat Kubernetes as if it were your on-premise infrastructure! If you keep doing so you're going to fail and curse Kubernetes in the process!
Now that I've told you some of the things you're thinking about wrong - what kind of person would I be if I did not offer some advice on how to make this work:
In Kubernetes your three applications should not run in one pod! They should run in separate pods:
Your web server's work should be done by an Ingress, and since you're already familiar with nginx, the nginx Ingress controller is probably what you are looking for!
Your web application should be a simple Deployment and be exposed to the Ingress through a Service.
Your database should be a separate deployment, which you can either manage manually through a StatefulSet or (more advanced) through an operator, and it should also be exposed to the web application through a Service (a minimal sketch follows below).
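A minimal sketch of that split (all names, images and ports are hypothetical, and the database StatefulSet is omitted for brevity):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: gcr.io/my-project/web:latest   # hypothetical image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80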
Feel free to ask if you have any more questions!
Building a custom scheduler and running multiple schedulers at the same time is supported:
https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
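For example, a pod can opt in to a second scheduler simply by naming it in its spec (the scheduler and image names here are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: user-pinned-pod
    spec:
      schedulerName: my-custom-scheduler   # the custom scheduler from the docs above
      containers:
      - name: app
        image: gcr.io/my-project/app:latest   # hypothetical image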
That said, to the question of whether Kubernetes is a good fit for this design - my answer is: not really.
K8s can be difficult to operate, with the payoff being the level of automation and resiliency that it provides out of the box for whole classes of workloads.
This workload is not one of those. In order to gain any benefit you would have to write a scheduler to handle the edge failure and error cases this application has (what happens when you lose a node for a short period of time...) in a way that makes sense for k8s. And you would have to come up to speed with normal k8s operations.
With the information provided, I'm hard pressed to see why one would use k8s for this workload over just running Docker on some VMs and scripting some of the automation.

What's the difference between launching a docker container in App Engine vs Compute Engine?

Given that you can deploy Docker images to both App Engine and Compute Engine, what's the real difference for a person who always packages their apps in Docker?
According to a similar question the difference boils down to PaaS vs IaaS, but with a Dockerfile you always specify the OS and runtime environment. So the only difference I see is that you might end up over-specifying on App Engine by supplying a container.
Fundamentally, if you just want your app to scale seamlessly and quickly without much input, use App Engine Flex. If you want more control that you can configure in different ways using other Google products, consider an unmanaged instance group with Compute Engine.
Some history
It's worth noting that the ability to add Docker images to Compute Engine instances was an extremely recent development. And before that, App Engine Flexible was the new kid in town, because we used to only have App Engine Standard, which definitely didn't allow you to use Docker as a base.
Key Differences
Here are the key differences in my experience:
App Engine is designed as a PaaS product, so you can customize scaling parameters in your app.yaml and App Engine reads those and takes over from you. It's technically true that you can do this using Compute Engine, but it involves more configuration - you need to set up an instance group, a backend and a frontend. With App Engine, all of that is taken care of for you.
You can't set up any load balancers or any peripheral services or products on top of App Engine. App Engine Flexible went quite a way further to give the user more control and more (sorry) flexibility. However, it doesn't allow unfettered integration with other services. Whereas with Compute Engine, you can set up an HTTPS load balancer, add your machines to different networks and subnets, set custom tags, etc...
On an additional note, a more detailed explanation on the difference between App Engine and Compute Engine can be found here.
App Engine is a PaaS and managed service from Google. It offers dynamic instances and resident instances to achieve scaling. It has a predefined runtime setup for the supported languages; we just need to execute the code. It supports running multiple versions of the application simultaneously, so we can release code changes to separate groups of users. It inherently supports a container environment, so basic details like the number of pods or nodes do not need to be provided. For App Engine Standard, if there is no load the instance count is reduced to zero, which means no cost. App Engine Flexible requires at least one instance to be up. Deployment can be done with a single command, i.e. gcloud app deploy app.yaml.
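For reference, a minimal Standard environment app.yaml that scales to zero when idle might look like this (the runtime and limits are just an example, not from the question):

    runtime: python312
    automatic_scaling:
      min_instances: 0    # no instances (and no cost) when there is no load
      max_instances: 5

It is deployed with the same single command, gcloud app deploy app.yaml.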
Compute Engine is IaaS, so the developer needs to create a machine, set up the desired software, and set up the Docker container environment. There is no built-in scaling, version management, traffic control, security, firewall, health monitoring and repair, etc. So with Compute Engine it's really tough to achieve the capabilities provided by App Engine. The better alternative is Kubernetes Engine.

Container technologies: docker, rkt, orchestration, kubernetes, GKE and AWS Container Service

I'm trying to get a good understanding of container technologies but am somewhat confused. It seems like certain technologies overlap different portions of the stack and different pieces of different technologies can be used as the DevOps team sees fit (e.g., can use Docker containers but don't have to use the Docker engine, could use engine from cloud provider instead). My confusion lies in understanding what each layer of the "Container Stack" provides and who the key providers are of each solution.
Here's my layman's understanding; would appreciate any corrections and feedback on holes in my understanding
Containers: self-contained package including application, runtime environment, system libraries, etc.; like a mini-OS with an application
It seems like Docker is the de-facto standard. Any others that are notable and widely used?
Container Clusters: groups of containers that share resources
Container Engine: groups containers into clusters, manages resources
Orchestrator: is this any different from a container engine? How?
Where do Docker Engine, rkt, Kubernetes, Google Container Engine, AWS Container Service, etc. fall between #s 2-4?
This may be a bit long and present some oversimplification but should be sufficient to get the idea across.
Physical machines
Some time ago, the best way to deploy simple applications was to simply buy a new webserver, install your favorite operating system on it, and run your applications there.
The cons of this model are:
The processes may interfere with each other (because they share CPU and file system resources), and one may affect the other's performance.
Scaling this system up/down is difficult as well, taking a lot of effort and time in setting up a new physical machine.
There may be differences in the hardware specifications, OS/kernel versions and software package versions of the physical machines, which make it difficult to manage these application instances in a hardware-agnostic manner.
Applications, being directly affected by the physical machine specifications, may need specific tweaking, recompilation, etc, which means that the cluster administrator needs to think of them as instances at an individual machine level. Hence, this approach does not scale. These properties make it undesirable for deploying modern production applications.
Virtual Machines
Virtual machines solve some of the problems of the above:
They provide isolation even while running on the same machine.
They provide a standard execution environment (the guest OS) irrespective of the underlying hardware.
They can be brought up on a different machine (replicated) quite quickly when scaling (order of minutes).
Applications typically do not need to be rearchitected for moving from physical hardware to virtual machines.
But they introduce some problems of their own:
They consume large amounts of resources in running an entire instance of an operating system.
They may not start/go down as fast as we want them to (order of seconds).
Even with hardware assisted virtualization, application instances may see significant performance degradation over an application running directly on the host.
(This may be an issue only for certain kinds of applications)
Packaging and distributing VM images is not as simple as it could be.
(This is not as much a drawback of the approach, as it is of the existing tooling for virtualization.)
Containers
Then, somewhere along the line, cgroups (control groups) were added to the Linux kernel. This feature lets us isolate processes in groups, decide what other processes and file systems they can see, and perform resource accounting at the group level.
Various container runtimes and engines came along which make the process of creating a "container", an environment within the OS, like a namespace which has limited visibility, resources, etc, very easy. Common examples of these include docker, rkt, runC, LXC, etc.
Docker, for example, includes a daemon which provides interactions like creating an "image", a reusable entity that can be launched into a container instantly. It also lets one manage individual containers in an intuitive way.
The advantages of containers:
They are light-weight and run with very little overhead, as they do not have their own instance of the kernel/OS and are running on top of a single host OS.
They offer some degree of isolation between the various containers and the ability to impose limits on various resources consumed by them (using the cgroup mechanism).
The tooling around them has evolved rapidly to allow easy building of reusable units (images), repositories for storing image revisions (container registries) and so on, largely due to docker.
It is encouraged that a single container run a single application process, in order to maintain and distribute it independently. The light-weight nature of a container makes this preferable, and leads to faster development due to decoupling.
There are some cons as well:
The level of isolation provided is less than in the case of VMs.
They are easiest to use with stateless 12-factor applications being built afresh, and a slight struggle if one tries to deploy legacy applications, clustered distributed databases and so on.
They need orchestration and higher level primitives to be used effectively and at scale.
Container Orchestration
When running applications in production, as the complexity grows, it tends to have many different components, some of which scale up/down as necessary, or may need to be scaled. The containers themselves do not solve all our problems. We need a system that solves problems associated with real large-scale applications such as:
Networking between containers
Load balancing
Managing storage attached to these containers
Updating containers, scaling them, spreading them across nodes in a multi-node cluster and so on.
When we want to manage a cluster of containers, we use a container orchestration engine. Examples of these are Kubernetes, Mesos, Docker Swarm etc. They provide a host of functionality in addition to those listed above and the goal is to reduce the effort involved in dev-ops.
GKE (Google Container Engine) is hosted Kubernetes on Google Cloud Platform. It lets a user simply specify that they need an n-node kubernetes cluster and exposes the cluster itself as a managed instance. Kubernetes is open source and if one wanted to, one could also set it up on Google Compute Engine, a different cloud provider, or their own machines in their own data-center.
ECS is a proprietary container management/orchestration system built and operated by Amazon and available as part of the AWS suite.
To answer your questions specifically:
Docker engine: A tool to manage the lifecycle of a docker container and docker images. Create, restart, delete docker containers. Create, rename, delete docker images.
rkt: Analogous to docker engine, but different implementation
Kubernetes: A collection of tools to manage the lifecycle of a distributed application that uses containers. Contains tooling to manage containers, groups of containers, configuration for containers, orchestrating containers, scheduling them on actual instances, tooling to help developers write and maintain other services/tools to deal with containers.
Google Container Engine: Instead of getting VMs, installing "docker-engine" on them, installing Kubernetes on them, and getting it all to work with things like the right permissions to your infrastructure, etc., imagine if it all came together so that you can choose the types of machines and the size of your cluster and have all of this just working. Things like pulling images from your project-specific Docker repository (Google Container Registry), claiming persistent volumes, or provisioning load balancers just work without worrying about service accounts, permissions, and what not.
ECS: Analogous to GKE (4) but without Kubernetes.
To address the points in your understanding: you are loosely right about things (except container engine, I think). It's important to understand that the only thing that really matters is what a container is. The rest of it is just marketing/product names. It's also important to understand that today's understanding of containers is very warped by what Docker containers are and by a lot of the opinions enforced by Docker and the tooling around Docker. Containers have been around for a long time.
So once you understand what a (docker) container is, a container engine is just a tool to manage them, a container cluster is just a group of containers, and an orchestrator is just a tool to manage where containers run based on some parameters. IMHO, you really don't need to worry too much about what the rest of the tooling is once you understand and build a solid mental model around containers. The rest will just fit in automatically.
The best way to understand all of this? Build & deploy a decently complex application with Docker (persist data/use a database in your app) and everything will make sense.
