AWS Fargate startup time - docker

Currently I'm researching how our dockerised microservices could be orchestrated on AWS.
The Fargate option of ECS looks promising, eliminating the need to manage EC2 instances.
However, starting a "task" in Fargate takes a surprisingly long time, even for a simple one-container setup. 60 to 90 seconds is typical for our Docker app images, and I've heard it can take even longer, on the order of minutes.
So the question is: given that Docker containers typically start in, say, seconds, what exactly causes such an overhead in the Fargate case?
P.S. Searching related questions turns up these explanations:
Docker image load/extract time
Load Balancer influence - registering, health check grace period, etc.
But even in the simplest possible config, with no Load Balancer deployed and assuming the Docker image is not cached in ECS, starting a task with a single Docker image in Fargate (~60 sec) is still at least ~2 times slower than launching the same Docker image on a bare EC2 instance (~25 sec).

Yes, it takes a little longer, but we can't generalize the startup time for Fargate. You can reduce it by tweaking some settings.
vCPU directly impacts the startup time, so keep in mind that on a bare EC2 instance you have the complete vCPU at your disposal, while with Fargate you may be assigning only a portion of it.
Since AWS manages the servers for you, it has to do a few things under the hood: attaching the VM to your VPC, downloading/extracting the Docker image, assigning IPs, and running the container, all of which can add up to this much time.
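For illustration, a minimal sketch (the family, image and role names are hypothetical) of giving a Fargate task a larger CPU/memory allocation via the AWS CLI, since that is one of the settings mentioned above:

#!/bin/bash
# Sketch: register a Fargate task definition with 1 vCPU / 2 GB;
# larger allocations generally shorten image pull/extract and app start time.
aws ecs register-task-definition \
  --family my-service \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 1024 \
  --memory 2048 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions '[{"name": "app", "essential": true,
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"}]'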
The following article is a nice read, and at the end of it you can find good practices:
Analyzing AWS Fargate

Related

Is there any better way of deploying a Docker Compose stack/services other than Swarm, in a scenario where autoscaling/load balancing isn't required?

Let me briefly describe my project: I'm building containerized security training environments (future: randomized security environments), aimed at helping local students and organizations with their information security training needs.
Current setup: I have an instance group which auto-scales according to load, running a script to add and remove nodes from the swarm. I use pub/sub topics to handle deployment requests, which are deployed via the docker stack deploy command. It was tested by 4-5 people and was thought to be working perfectly until we started trials with my own college students.
We hit issues such as port numbers not being assigned to new deployments after 20-25 people had deployed onto the swarm. I don't understand why: resource usage is fine, but Swarm isn't assigning ports. After restarting the whole instance, Swarm assigned the ports again.
I knew there was a Swarm option, task-history-limit, which defaults to 5; maybe that was the issue and the reason it wasn't able to deploy concurrently. Later the same thing happened (after 40 deployments) even after setting it to a higher number (upgraded infra, low utilization in the logs). Even now I'm losing sleep over not knowing the real reason this is happening.
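For reference, a sketch of the commands involved (the limit value of 50 and the stack name are made up):

# Raise Swarm's task history retention limit (default 5) on a manager node
docker swarm update --task-history-limit 50

# Deployments are then pushed per user as before
docker stack deploy -c docker-compose.yml lab-user01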
Sample deployment stack: https://gist.github.com/Mre11i0t/d16ed39e543094b50019d58d7e4bff99. The aim is to deploy this environment on demand, with each environment isolated to its respective user.

Slow install / upgrade through Helm (for Kubernetes)

Our application consists of circa 20 modules. Each module contains a (Helm) chart with several deployments, services and jobs. Some of those jobs are defined as Helm pre-install and pre-upgrade hooks. Altogether there are probably about 120 yaml files, which eventually result in about 50 running pods.
During development we are running Docker for Windows version 2.0.0.0-beta-1-win75 with Docker 18.09.0-ce-beta1 and Kubernetes 1.10.3. To simplify management of our Kubernetes yaml files we use Helm 2.11.0. Docker for Windows is configured to use 2 CPU cores (of 4) and 8GB RAM (of 24GB).
When creating the application environment for the first time, it takes more than 20 minutes to become available. This seems far too slow; we are probably making an important mistake somewhere. We have tried to improve the (re)start time, but to no avail. Any help or insights to improve the situation would be greatly appreciated.
A simplified version of our startup script:
#!/bin/bash
# Start some infrastructure
helm upgrade --force --install infrastructure modules/infrastructure/chart
# Start ~20 modules in parallel
helm upgrade --force --install module01 modules/module01/chart &
[...]
helm upgrade --force --install module20 modules/module20/chart &
# Wait for all background helm invocations to finish
wait
Executing the same startup script again later to 'restart' the application still takes about 5 minutes. As far as I know, unchanged objects are not modified at all by Kubernetes. Only the circa 40 hooks are run by Helm.
Running a single hook manually with docker run is fast (~3 seconds). Running that same hook through Helm and Kubernetes regularly takes 15 seconds or more.
Some things we have discovered and tried are listed below.
Linux staging environment
Our staging environment consists of Ubuntu with native Docker. Kubernetes is installed through minikube with --vm-driver none.
Contrary to our local development environment, the staging environment retrieves the application code through a (deprecated) gitRepo volume for almost every deployment and job. Understandably, this only seems to worsen the problem. Starting the environment for the first time takes over 25 minutes, restarting it takes about 20 minutes.
We tried replacing the gitRepo volume with a sidecar container that retrieves the application code as a TAR. Although we have not modified the whole application, initial tests indicate this is not particularly faster than the gitRepo volume.
This situation can probably be improved with an alternative type of volume that enables sharing of code between deployments and jobs. We would rather not introduce more complexity, though, so we have not explored this avenue any further.
Docker run time
Executing a single empty alpine container through docker run alpine echo "test" takes roughly 2 seconds. This seems to be overhead of the setup on Windows. That same command takes less than 0.5 seconds on our Linux staging environment.
Docker volume sharing
Most of the containers - including the hooks - share code with the host through a hostPath. The command docker run -v <host path>:<container path> alpine echo "test" takes 3 seconds to run. Using volumes seems to increase runtime by approximately 1 second.
Parallel or sequential
Sequential execution of the commands in the startup script does not improve startup time, nor does it drastically worsen it.
IO bound?
Windows Task Manager indicates that IO is at 100% when executing the startup script. Our hooks and application code are not IO intensive at all, so the IO load seems to originate from Docker, Kubernetes or Helm. We have tried to find the bottleneck, but were unable to pinpoint the cause.
Reducing IO through ramdisk
To test the premise of being IO bound further, we exchanged /var/lib/docker with a ramdisk in our Linux staging environment. Starting the application with this configuration was not significantly faster.
To compare Kubernetes with Docker, you need to consider that Kubernetes runs more or less the same Docker command as a final step. Before that happens, many other things take place:
the authentication and authorization processes, creating objects in etcd, locating the correct nodes for the pods, scheduling them, provisioning storage, and much more.
Helm itself also adds overhead to the process, depending on the size of the chart.
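A rough way to see that overhead for yourself (a sketch; the pod name is arbitrary) is to time the same one-shot command through plain Docker and through the Kubernetes API:

# Plain Docker: only the container runtime is involved
time docker run --rm alpine echo "test"

# Via Kubernetes: API authentication, etcd writes, scheduling and
# kubelet sync all happen before the same container starts
time kubectl run overhead-test --rm -i --restart=Never --image=alpine -- echo "test"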
I recommend reading One year using Kubernetes in production: Lessons learned. The author explains what they achieved by switching to Kubernetes, as well as the differences in overhead:
Cost calculation
Looking at costs, there are two sides to the story. To run Kubernetes, an etcd cluster is required, as well as a master node. While these are not necessarily expensive components to run, this overhead can be relatively expensive when it comes to very small deployments. For these types of deployments, it’s probably best to use a hosted solution such as Google's Container Service.
For larger deployments, it’s easy to save a lot on server costs. The overhead of running etcd and a master node aren’t significant in these deployments. Kubernetes makes it very easy to run many containers on the same hosts, making maximum use of the available resources. This reduces the number of required servers, which directly saves you money. When running Kubernetes sounds great, but the ops side of running such a cluster seems less attractive, there are a number of hosted services to look at, including Cloud RTI, which is what my team is working on.

spring-cloud-netflix zero downtime deployments on AWS ECS

We're running spring-cloud microservices using eureka on AWS ECS. We're also doing continuous deployment, and we've run into an issue where rolling production deployments cause a short window of service unavailability. I'm focusing here on @LoadBalanced RestTemplate clients using ribbon. I think I've gotten retry working adequately in my local testing environment, but I'm concerned about new service instance eureka registration lag time and the way ECS rolling deployments work.
When we merge a new commit to master, if the build passes (compiles and tests pass), our Jenkins pipeline builds and pushes a new Docker image to ECR, then creates a new ECS task definition revision pointing to the updated Docker image, and updates the ECS service. As an example, we have an ECS service definition with the desired task count set to 2, minimum healthy percent set to 100%, and maximum percent set to 200%. The ECS service scheduler starts 2 new Docker containers using the new image, leaving the existing 2 Docker containers running on the old image. We use container health checks that pass once the actuator health endpoint returns 200, and as soon as that happens, the ECS service scheduler stops the 2 old containers running on the old Docker image.
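Roughly, the relevant pipeline step looks like this (a sketch; the cluster, service and revision names are hypothetical):

# Sketch: push the new image, then let ECS roll the service while keeping
# 100% of the desired count healthy and allowing up to 200% during the deploy
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:${GIT_COMMIT}

aws ecs update-service \
  --cluster production \
  --service my-service \
  --task-definition my-service:42 \
  --deployment-configuration maximumPercent=200,minimumHealthyPercent=100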
My understanding here could be incorrect, so please correct me if I'm wrong about any of this. Eureka clients fetch the registry every 30 seconds, so there's up to 30 seconds where all the client has in the server list is the old service instances, so retry won't help there.
I asked AWS support about how to delay ECS task termination during rolling deploys. When ECS services are associated with an ALB target group, there's a deregistration delay setting that ECS respects, but no such option exists when a load balancer is not involved. The AWS response was to run the java application via an entrypoint bash script like this:
#!/bin/bash
# Entrypoint wrapper suggested by AWS support: delay shutdown so that
# Eureka clients have time to notice the instance going away.
cleanup() {
    date
    echo "Received SIGTERM, sleeping for 45 seconds"
    sleep 45
    date
    echo "Killing child process"
    # Kill the whole process group (the Java app started below)
    kill -- -$$
}
trap 'cleanup' SIGTERM
# Run the actual command passed to the container in the background
"${@}" &
wait $!
When ECS terminates the old instances, it sends SIGTERM to the docker container; this script traps it, sleeps for 45 seconds, then continues with the shutdown. I'll also have to change an ECS config parameter in /etc/ecs that controls the grace period before ECS sends a SIGKILL after the SIGTERM, which defaults to 30 seconds and is not quite long enough.
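Assuming the parameter meant here is the ECS agent's ECS_CONTAINER_STOP_TIMEOUT (default 30s), the change on each container instance would look roughly like this:

# Sketch: extend the SIGTERM -> SIGKILL grace period to 90 seconds,
# then restart the ECS agent so it picks the value up
echo "ECS_CONTAINER_STOP_TIMEOUT=90s" | sudo tee -a /etc/ecs/ecs.config
sudo systemctl restart ecs    # older ECS-optimized AMIs: sudo stop ecs && sudo start ecs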
This feels dirty to me. I'm not sure that script isn't going to cause some other unforeseen issue; does it forward all signals appropriately? It feels like an unwanted complication.
Am I missing something? Can anyone spot anything wrong with AWS support's suggested entrypoint script approach? Is there a better way to handle this and achieve the desired result, which is zero downtime rolling deployments on services registered in eureka on ECS?

Docker Swarm CPU overload on deploy with Spring Boot containers

I have created a number of Spring Boot applications, which all work like magic in isolation or when started up one after the other manually.
My challenge is that I want to deploy a stack with all the services in a Docker Swarm.
Initially I didn't understand what was going on, as it seemed like all my containers were hanging.
It turns out that starting a single Spring Boot application spikes my CPU utilization to its maximum for a good couple of seconds (20s+ to start up).
Now the issue is that Docker Swarm launches 10 of these containers simultaneously, my load average goes above 80, and the system grinds to a halt. The container HEALTHCHECKs start timing out and eventually Docker restarts them. This is an endless cycle that may or may not stabilize, and if it does stabilize it takes a minimum of 30 minutes. So much for microservices vs big fat Java EE applications :(
Is there any way to convince Docker to rollout the containers one by one? I'm sure this will help a lot.
There is a rolling update parameter - https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/ - but it does not seem applicable to the initial startup deployment.
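For reference, those parameters look like this when applied to an existing service (a sketch with a hypothetical service name), but as far as I can tell they only govern updates, not the first deployment:

# Sketch: roll out updates one container at a time with a delay in between;
# this applies to `docker service update`, not the initial stack deploy
docker service update \
  --update-parallelism 1 \
  --update-delay 30s \
  mystack_myservice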
Your help will be greatly appreciated.
I've also tried systemd (which isn't ideal for distributed microservices). It worked slightly better than Docker, but had the same issue when deploying all the applications at once.
Initially I wanted to try Kubernetes, but I've got enough on my plate and if I can get away with Docker Swarm, that would be awesome.
Thanks!

Understanding Docker in Production

I've been learning how to use Docker to set up dev environments; however, I'm curious how these ideas translate to a production stack. As an example, I have a Laravel (PHP) app which uses MySQL, Redis, and Nginx.
So in production, let's say I would normally have 2 application EC2 instances behind a load balancer on AWS. When setting up a similar production situation using Docker...
1) Because I'd be using RDS and ElastiCache, there would be no need for containers for those. So basically, I'd only need containers for PHP-FPM and Nginx?
2) To have high availability, I would still have 2 (or at least more than 1) EC2 instances behind the ELB. So I suppose each instance would run the above containers (PHP and Nginx). But that sounds no different from my previous VM setup, where each server runs what it needs to serve the application. Is that accurate?
3) With VMs, I would traditionally bake the code into an AMI, add those AMIs to a Launch Configuration and an Auto Scaling group, and that group would spin up instances as needed. So for deployment, I would tear down the old EC2 instances and spin up new ones. With Docker, since these containers would be running on EC2 instances, wouldn't I still have to spin up / tear down the VMs, or would I just replace the containers and keep the VMs running?
It's reasonable to keep RDS, ElastiCache and other fully managed services outside of the Docker environment. Yes, for high availability you need multiple EC2 instances with the Docker daemon running.
The real advantage does not come from having two EC2 instances each running two web server Docker containers. The real advantages come when you break your application down into microservices, where multiple containers in combination make up your web application and provide the benefits of microservices.
Apart from that, the DevOps flow is different from traditional web application deployment to EC2 with autoscaling and load balancing, and it has many benefits. For example, your source code will contain the container definition as well, which guarantees the environment behaves uniformly in staging and production. You will also have images corresponding to branches/tags in your source control, which lets you pull only the new updates (delta downloads) for new releases.
If you are going to set up Docker on AWS, it's recommended to go with AWS ECS to reduce management overhead.
You're right, you will only need to run your code in a container, and it will simply access the remote services. The only thing you'll have to consider is ensuring connectivity to them.
You're right again: you'll need to have everything you previously had in your VMs inside the Docker container so that your code works as before. That said, with Docker containers it is possible to run multiple instances of your app on the same EC2 instance. Of course, each instance of your app will try to use the same port, so some extra networking layer for managing ports is necessary, but it's possible. All the EC2 instances need to have installed is Docker.
Instead of creating AMIs and tearing down and spinning up EC2 instances, you'll only have to pull the new Docker image and restart the container with the new image. This takes just a few seconds, compared to minutes in the EC2 flow. It means you have a really quick way of reverting buggy deploys, and it opens the door to a setup in which zero downtime can be reached.
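As a sketch of that flow (the repository, tag, and container name are made up):

# Sketch: replace a running container with a newer image on an existing host
docker pull registry.example.com/laravel-app:v2.0.1
docker stop app && docker rm app
docker run -d --name app -p 80:80 registry.example.com/laravel-app:v2.0.1
# Rolling back is the same sequence with the previous tag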
