How does OpenWhisk decide how many runtime containers to create? - docker

I am working on a project that uses OpenWhisk. I have created a Kubernetes cluster on Google Cloud with 5 nodes and installed OpenWhisk on it. My serverless function is written in Java and does some processing based on the arguments I pass to it. The processing can last up to 30 seconds, and I invoke the function multiple times during those 30 seconds, which means I want a greater number of runtime containers (pods) created without having to wait for the previous invocation to finish. Ideally, there should be a container for each invocation until the cluster's resources are exhausted.
Now, what happens is that when I start invoking the function, the first container is created, and then after a few seconds another one to serve the first two invocations. From that point on, I continue invoking the function (no more than 5 simultaneous invocations), but no new containers are started. Then, after some time, a third container is created and sometimes, though rarely, a fourth one, but only after a long time. What is even weirder is that the containers are all started on a single cluster node, or sometimes on two nodes (always the same two). The other nodes are not used. I have set up the cluster carefully: each node is labeled as an invoker. I have tried experimenting with the memory assigned to each container and the max number of containers, and I have increased the max number of invocations allowed per minute, but despite all this I haven't been able to increase the number of containers created. I have also tried different machine types for the cluster (different numbers of cores and memory), but in vain.
Since OpenWhisk is still a relatively young project, the official documentation unfortunately doesn't give me enough information. Can someone explain how OpenWhisk decides when to start a new container? What parameters can I change in values.yaml to get a greater number of containers?
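For reference, these are the kinds of knobs I was tweaking, sketched against the apache/openwhisk-deploy-kube Helm chart. Key names differ between chart versions, so check your own values.yaml; the numbers are only illustrative, not recommendations:

whisk:
  limits:
    actionsInvokesPerminute: 600    # per-namespace rate limit
    actionsInvokesConcurrent: 100   # per-namespace concurrency limit
  containerPool:
    userMemory: "4096m"             # memory budget per invoker; the pool holds
                                    # roughly userMemory / per-action memory limit
invoker:
  containerFactory:
    impl: "kubernetes"              # or "docker", depending on the deployment

With the default 256 MB per-action memory limit, a 4096m pool would give each invoker room for roughly 16 concurrent containers.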

The reason so few containers were created is that the worker nodes did not have the Java runtime Docker image, which has to be downloaded onto each node the first time that runtime is requested. The image weighs a few hundred MB, so downloading it takes time (a couple of seconds on a Google cluster). I don't know why the OpenWhisk controller decided to wait for already created pods to become available instead of downloading the image onto the other nodes. In any case, once I had pulled the image manually on each node, running the same application at the same request rate, a new pod was created for each request that could not be served by an existing pod.
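If you want to avoid the manual step, a DaemonSet is a common way to pre-pull an image onto every node. A sketch, assuming the Java runtime image is openwhisk/java8action (check your deployment for the exact image and tag):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull-java-runtime
spec:
  selector:
    matchLabels:
      name: prepull-java-runtime
  template:
    metadata:
      labels:
        name: prepull-java-runtime
    spec:
      initContainers:
      - name: pull
        image: openwhisk/java8action   # assumption: your runtime image/tag
        command: ["/bin/true"]         # exit immediately; the pull is the point
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2    # tiny placeholder to keep the pod alive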

The OpenWhisk scheduler implements several heuristics to map an invocation to a container. This post by Markus Thömmes https://medium.com/openwhisk/squeezing-the-milliseconds-how-to-make-serverless-platforms-blazing-fast-aea0e9951bd0 explains how container reuse and caching work and may explain what you are seeing.
When you inspect the activation records for the invokes in your experiment, check the annotations to determine whether each request was "warm" or "cold". Warm means an existing container was reused for the new invoke; cold means a container was freshly allocated to service it.
See this document https://github.com/apache/openwhisk/blob/master/docs/annotations.md#annotations-specific-to-activations which explains the meaning of waitTime and initTime. When the latter appears among the annotations, the activation was "cold", meaning a fresh container was allocated.
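For example, you can inspect a single activation with the wsk CLI (the activation ID here is illustrative) and look for an initTime entry in the annotations array; the output contains roughly:

wsk activation get 31dbf3b859bd4522bf3b859bd4022fa3

"annotations": [
  { "key": "waitTime", "value": 72 },
  { "key": "initTime", "value": 534 }
]

An activation without an initTime annotation was served by a warm container.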
It's possible your activation rate is not fast enough to trigger new container allocations; that is, the scheduler allocated your request to an invoker where the previous invoke had finished and could accept the new request. Without more details about the arrival rate or think time, it is not possible to answer your question more precisely.
Lastly, OpenWhisk is a mature serverless function platform. It has been in production since 2016 as IBM Cloud Functions and now powers multiple public serverless offerings, including Adobe I/O Runtime and the Naver Lambda service, among others.

Related

RabbitMQ in ECS Cluster with Autoscaling

I have the following situation:
Twice a day, for about an hour, we receive a huge inflow of messages, which currently run through RabbitMQ. The current 3-node Rabbit cluster can't handle the spikes but otherwise runs smoothly. It is set up on plain EC2 instances. The instance type is currently t3.medium, which is very low for the spikes but sufficient for the other 22 hours per day, when we receive ~5 msg/s. The cluster is also currently set up with ha-mode=all.
After a rather lengthy and revealing read of the RabbitMQ docs, I decided to just try setting up an ECS EC2 cluster and scaling out when CPU load rises: create a service on it and add that service to service discovery, for example as discovery.rabbitmq. If there are three instances, they all register under the same name, which resolves to all three IPs. Joining the cluster would work based on this:
That would be the rabbitmq.conf part:
cluster_formation.peer_discovery_backend = dns
# the backend can also be specified using its module name
# cluster_formation.peer_discovery_backend = rabbit_peer_discovery_dns
cluster_formation.dns.hostname = discovery.rabbitmq
I use a policy with ha-mode=exactly and 2 replicas (ha-params=2).
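For reference, that policy is set with something like the following (the policy name and queue pattern are illustrative):

rabbitmqctl set_policy ha-two "^" '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'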
Our exchanges and queues are created manually upfront for reasons I cannot discuss any further, but that's a given. They can't be removed and they won't be re-created on the fly. We have 3 exchanges, each with 4 queues.
So, the idea: during times of high load, add more instances; during times of no load, run with three instances (or even fewer).
The setup with scale-out/in works fine, until I started using a benchmarking tool and discovered that queues are always created on one single node, which becomes the queue master. That is fine considering the benchmarking tool is connected to a single node. The problem is that after scale-in/out, our manually created queues are not moved to other nodes either. This is in line with what I read on the RabbitMQ 3.8 release page:
One of the pain points of performing a rolling upgrade to the servers of a RabbitMQ cluster was that queue masters would end up concentrated on one or two servers. The new rebalance command will automatically rebalance masters across the cluster.
Here are the problems I ran into; I'm seeking some advice:
If I interpret the docs correctly, scaling out wouldn't help at all, because the new nodes would sit there idle until someone manually ran rabbitmq-queues rebalance all.
What would be the preferred way of scaling out?
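One idea: since the rebalance is a single command, it could be triggered from the scale-out automation itself (e.g. an ECS lifecycle hook or the new instance's startup script) once the node has joined the cluster. A sketch, requiring RabbitMQ 3.8+:

# run on any node of the cluster after a scale-out event
rabbitmq-queues rebalance all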

Docker as "Function" (Create a Docker per request)

Is there a simple way to create an instance of a Docker container for each request?
I have a Docker container that takes a very long time to compute a mathematical algorithm. While it is running, no other requests can be processed in parallel. Lambda functions would be the best solution, but the container needs to download more than 1 GB of data and needs at least 10 cores and 5 GB of RAM to execute, so Lambda would be too expensive.
We have a big cluster (1000 cores, 0.5 TB RAM), and I was considering using an NGINX load balancer or bare-metal Kubernetes.
Is it possible to configure this in a way that creates an instance per request (similar to a Lambda function)?
There are tools like Airflow or Argo that are designed for this kind of thing. Basically, you can create a DAG that runs very much like a function as a service, but on whatever custom Docker container you want.
You probably need to decouple the HTTP service from the backend processing. If the job takes minutes or longer to run, most browsers and other HTTP clients will time out before it finishes, so the HTTP end needs to start the job in some way and immediately return some sort of success message.
Once you’ve done that, you might find a job queue like RabbitMQ a useful piece of infrastructure technology. Again, this decouples the queue of jobs from the mechanism to actually run them. In a Docker/Kubernetes space you’d launch some number of persistent workers that all listened to the queue and did work as it appeared there. You wouldn’t necessarily launch one worker per job; or possibly you would have just one worker that launched other Docker containers or Kubernetes Jobs; but if the work backlog got too long you could launch additional workers.
In a pure-Docker space it’s theoretically possible to use the Docker API to launch additional containers. However, doing this gives your process unlimited root-level access to the host; if you are running this in the context of an HTTP server you need to be extremely careful about security considerations. Kubernetes also has an API and from a security point of view this is probably better: you can set up a service account that has permissions only to launch Jobs, and launch a Job per inbound job that arrives. (Security is still important but it’s much harder for a malicious input to root the host.)
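To illustrate that last point, here is a minimal sketch of such a service account (names are placeholders; in practice you would also scope it to a namespace and likely add permission to read pod logs):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: job-launcher
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-launcher
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-launcher
subjects:
- kind: ServiceAccount
  name: job-launcher
roleRef:
  kind: Role
  name: job-launcher
  apiGroup: rbac.authorization.k8s.io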

Can a Docker Task in a Swarm know which instance number or how many total instances there are?

Is it possible for a Docker Task to know which task number and how many total tasks there are running of a particular Service?
E.g. I'd like to have a Service that works on different ranges in a job queue. The range of jobs that any one Service instance (i.e. Task) works on is dependent on the total number of Tasks and which Task the current one is. So if it's the 5th task out of 20, then it will work on one range of jobs, but if it's the 12th task out of 20, it will work on a different range.
I'm hoping there is a DOCKER_SERVICE_TASK_NUMBER environment variable or something like that.
Thanks!
I've seen this requested a few times, so you're not alone, but it's not a current feature of Docker's swarm mode. Implementing it would be non-trivial because of the need to support scaling a service along with other updates to the swarm service. If you scale a service down and Docker cleans up task 2 of 5 because it's on the busiest node, you're left in an odd situation where the starting count is less than the current count and there's a hole in the numbering scheme.
If you have this requirement, an external service discovery tool like consul or etcd may be useful. You can also try implementing your own solution by taking advantage of the tasks.$service_name DNS entry that's available inside the container. That gives you the IPs of all the other containers providing the service, just like the round-robin DNS you had before swarm mode's VIP abstracted that away.
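A minimal sketch of that DNS approach in Python, assuming a service named worker on a single overlay network (the derived index is only stable while the task set is unchanged, which is exactly the weakness described above):

import socket

# tasks.<service> resolves to the IPs of every running task of the service;
# "worker" is a hypothetical service name.
_, _, task_ips = socket.gethostbyname_ex("tasks.worker")

# This container's own IP (assumes a single network; with several networks
# you would need to pick the right interface instead).
my_ip = socket.gethostbyname(socket.gethostname())

# Derive a task index from the sorted IP list.
task_ips.sort()
index = task_ips.index(my_ip)
print(f"task {index} of {len(task_ips)}")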

Instance only when needed - GCP

I have a video editing task that needs to be completed occasionally. The task is relatively intensive and therefore needs a powerful machine to do it. It can take up to about 10 minutes to complete. I might get 10-20 such requests per day, though that will increase in the future.
I have created a Docker container that currently acts as a consumer pulling jobs from Pub/Sub. I was thinking of running an instance of this container on Google Container Engine. However, as I understand it, I would need at least one instance of this (large/powerful/expensive) container running at all times, even though it would sit idle most of the time. My cost for running this service would therefore be disproportionately high until my usage increased.
Is there an alternative way of running my container (on GCP or otherwise) where I push a job to some service, which then starts an instance on a powerful machine, processes the job, and shuts down, so that I only pay for the CPU hours I actually use?
Have a look at the cluster autoscaler: https://cloud.google.com/container-engine/docs/cluster-autoscaler
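In particular, you can put the expensive machines in a dedicated node pool that autoscales down to zero, so you only pay while jobs are running. A sketch with illustrative names and sizes (check the current gcloud flags for your version):

gcloud container node-pools create heavy-pool \
  --cluster=my-cluster \
  --machine-type=n1-highmem-8 \
  --enable-autoscaling --min-nodes=0 --max-nodes=3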

What's the main advantage of using replicas in Docker Swarm Mode?

I'm struggling to understand the idea of replica instances in Docker Swarm Mode. I've read that it's a feature that helps with high availability.
However, Docker automatically starts a new task on a different node if a node goes down, even with only 1 replica defined for the service, which also provides high availability.
So what's the advantage of having 3 replica instances rather than 1 for an arbitrary service? My assumption was that with more replicas, Docker spends less time creating a new instance on another node in the event of a failure, which aids performance. Is this correct?
What Makes a System Highly Available?
One of the goals of high availability is to eliminate single points of failure in your infrastructure. A single point of failure is a component of your technology stack that would cause a service interruption if it became unavailable.
Let's take your example of a service with a single replica. Now suppose there is a failure. Docker Swarm will notice that the service failed and restart it. The service restarts, but a restart isn't instant; say it takes 5 seconds. For those 5 seconds your service is unavailable. That is a single point of failure.
What if you instead had 3 replicas? When one of them fails (no service is perfect), Docker Swarm will notice that one instance is unavailable and create a new one. During that time you still have 2 healthy instances serving requests. To a user of your service, it appears as if there was no downtime. The service is no longer a single point of failure.
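Replicas also help absorb load, not just failures. For example (the service name and image are illustrative):

docker service create --name web --replicas 3 nginx
docker service scale web=5   # grow the replica count as traffic grows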
ROMANARMY's answer is very good, and I just wanted to mention that the replicas can be placed on different nodes, so if one of your servers goes down (becomes unavailable), the container (replica) on another server can keep running without problems.
