Does it make sense to run multiple similar processes in a container? - docker

A brief background to give context on the question.
Currently my team and I are in the midst of migrating our microservices to k8s to lessen the effort of having to maintain multiple deployment tools & pipelines.
One of the microservices that we are planning to migrate is an ETL worker that listens for messages on SQS and performs multi-stage processing.
It is built with PHP Laravel, and we use supervisord to control how many processes run on each worker instance on AWS EC2. Each process basically executes a Laravel command to poll different queues for new messages. We also periodically adjust the number of processes to maximize utilization of each instance's compute power.
So the questions are:
Is this method of deployment still feasible when moving to k8s? Is there still a need to "maximize" compute usage? Are we better off just running one process in each container, the "container way" (not sure what the tool is called; runit?)
I read from multiple sources (e.g. https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container) that it is ideal for a container to run only one process. There's also the question of recovering crashed processes, and how running supervisord might interfere with how the container performs recovery. But I am not sure whether that applies to our use case.

You should absolutely restructure this to run one process per container and one container per pod. You do not typically need an init system or a process manager like supervisord or runit (though there is an argument for a dedicated init like tini that can handle the special PID 1 duties).
You mention two concerns here: restarting failed processes and process placement in the cluster. Kubernetes handles both of these automatically for you.
If the main process in a Pod fails, Kubernetes will restart it. You don't need to do anything for this. If it fails repeatedly, Kubernetes will start delaying the restarts. This only works if the main process itself fails: if your container's main process is a supervisor, you will never get a pod restart, and you may not directly notice if a worker process can't start up at all.
Typically you'll run containers via Deployments that manage some number of identical replica Pods. Kubernetes itself takes responsibility for deciding which node will run each pod; you don't need to specify this manually. The smaller the pods are, the easier they are to place. Since you're controlling the number of replicas of a pod, you also want to separate concerns like web servers vs. queue workers so you can scale them independently.
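To make that concrete, here is a minimal sketch of what one worker Deployment could look like. All names, the image, and the queue are placeholders, and the artisan command stands in for whatever your workers already run today:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: etl-worker-orders            # placeholder name; one Deployment per queue/concern
spec:
  replicas: 4                        # scale this worker independently of anything else
  selector:
    matchLabels:
      app: etl-worker-orders
  template:
    metadata:
      labels:
        app: etl-worker-orders
    spec:
      containers:
        - name: worker
          image: registry.example.com/etl-worker:latest     # placeholder image
          # One process per container: the queue command itself is PID 1
          # (or runs under a small init like tini if you add one).
          command: ["php", "artisan", "queue:work", "sqs", "--queue=orders"]
          resources:
            requests:
              cpu: 250m
              memory: 256Mi

The number of supervisord-managed processes you tune today maps onto the replicas field here, which you can change (or automate) without touching the image.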
Kubernetes has some ability to auto-scale, though the typical direction is to size the cluster based on the workload: in a cloud-oriented setup, if you add a new pod that requests more CPU than your cluster currently has available, it will provision a new node. The HorizontalPodAutoscaler is something of an advanced setup, but you can configure it so that the number of workers is a function of your queue length. Again, this works better if the only thing it's scaling is the worker pods, and not a collection of unrelated things packaged together.
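If you do want to drive worker count from queue length, the sketch below shows the shape of an autoscaling/v2 HorizontalPodAutoscaler. It assumes you have installed some metrics adapter that exposes the SQS queue depth as an external metric; the metric name is illustrative, not something Kubernetes provides out of the box:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: etl-worker-orders
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: etl-worker-orders          # the worker Deployment sketched above
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: sqs_messages_visible # hypothetical metric exposed by your adapter
        target:
          type: AverageValue
          averageValue: "30"         # aim for roughly 30 queued messages per worker pod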

Related

Architectural question about user-controlled Docker instances

I have a website in Laravel where you can click a button that sends a message to a Python daemon isolated in Docker. This works for an easy MVP to prove the concept, but it's not viable in production, because a user would most likely also want to pause, resume and stop that process; the service is otherwise designed to never stop, since it's a scanner running in a loop.
I have thought about a couple of solutions for this, such as fixing it in the software layer, but that would add complexity to the program. I have googled around and found that it is actually possible to do what I want with Docker itself, using the commands pause, unpause, run and kill.
It would be optimal if I had a service that interacted with the Docker instances according to the criteria above and could take commands over HTTP. Is Docker Swarm the right solution for this problem, or is there an easier way?
There are significant security and complexity concerns to using Docker this way, and I would not recommend it.
The core rule of Docker security has always been: if you can run any docker command, then you can easily take over the entire host. (You cannot prevent someone from using docker run to start a container, as container root, bind-mounting any part of the host filesystem; as one example, they can then reset host root's password in the /etc/shadow file to something they know, allow remote root ssh access, and reboot the host.) I'd be extremely careful about connecting this ability to my web tier. Strongly coupling your application to Docker will also make it more difficult to develop and test.
Instead of launching a process per crawling job, a better approach might be to set up some sort of job queue (perhaps RabbitMQ), and have a multi-user worker that pulls jobs from the queue to do work. You could have a queue per user, and a separate control queue that receives the stop/start/cancel messages.
If you do this:
You can run your whole application without needing Docker: you need the front-end, the message queue system, and a worker, but these can all run on your local development system
If you need more crawlers, you can launch more workers (works well with Kubernetes deployments)
If you're generating too many crawl requests, you can launch fewer workers
If a worker dies unexpectedly, you can just restart it, and its jobs will still be in the queue
Nothing needs to keep track of which process or container belongs to a specific end user
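As a rough illustration of that layout (service names, images, and build paths are all placeholders), a Compose file for local development might look something like this:

services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"                  # AMQP, used by the web app and the workers
      - "15672:15672"                # management UI
  web:
    build: ./web                     # placeholder path to the Laravel front-end
    environment:
      QUEUE_HOST: rabbitmq
    ports:
      - "8080:80"
  worker:
    build: ./scanner-worker          # placeholder path to the crawler worker
    environment:
      QUEUE_HOST: rabbitmq
    depends_on:
      - rabbitmq

The same three pieces translate directly to one Kubernetes Deployment per service, with the worker's replica count being the knob you turn to run more or fewer crawlers.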

Is there a reason for running CI builds on a Kubernetes cluster?

I don't know much about Kubernetes, but as far as I know, it is a system that enables you to control and manage containerized applications. So, generally speaking, the essence of the benefit we get from Kubernetes is the ability to "tell" Kubernetes what containers we want running, how many of them, on which machines, among other details, and Kubernetes will take care of doing that for us. Is that correct?
If so, I just can't see the benefit of running a CI pipeline on a Kubernetes pod, as I understand some people do. Say you have your build tools in Docker containers instead of having them installed on a specific machine; that's great, you can just use those containers in the build process. But why Kubernetes? Is there any performance gain or something like that?
Appreciate some insights.
It is highly recommended to get a good understanding of what Kubernetes is and what it can and cannot do.
Generally, containers combined with an orchestration tool can provide better management of your machines and services. They can significantly improve the reliability of your application and reduce the time and resources spent on DevOps.
Some of the features worth noting are:
Horizontal infrastructure scaling: New servers can be added or removed easily.
Auto-scaling: Automatically change the number of running containers, based on CPU utilization or other application-provided metrics.
Manual scaling: Manually scale the number of running containers through a command or the interface.
Replication controller: The replication controller makes sure your cluster is running the desired number of pods. If there are too many pods, it terminates the extra ones; if there are too few, it starts more.
Health checks and self-healing: Kubernetes can check the health of nodes and containers, helping your application recover from failures. Kubernetes also offers self-healing and auto-replacement, so you don't need to intervene when a container or pod fails (see the sketch after this list).
Traffic routing and load balancing: Traffic routing sends requests to the appropriate containers. Kubernetes also comes with built-in load balancers so you can balance resources in order to respond to outages or periods of high traffic.
Automated rollouts and rollbacks: Kubernetes handles rollouts for new versions or updates without downtime while monitoring the containers’ health. In case the rollout doesn’t go well, it automatically rolls back.
Canary Deployments: Canary deployments enable you to test the new deployment in production in parallel with the previous version.
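As a sketch of how the health-check and rollout features above are actually declared (the app name, image, and endpoint are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0              # keep full capacity during a rollout
      maxSurge: 1                    # add one new pod at a time
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
        - name: web
          image: registry.example.com/example-web:1.2.3    # placeholder image
          readinessProbe:            # traffic is only routed to pods that pass this
            httpGet:
              path: /healthz
              port: 8080
          livenessProbe:             # the container is restarted if this starts failing
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10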
However you should also know what Kubernetes is not:
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, and load balancing, and lets users integrate their logging, monitoring, and alerting solutions. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.
Especially in your use case note that Kubernetes:
Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements.
The decision is yours, but keeping the main concepts above in mind will help you make it.
An important detail is that you do not tell Kubernetes which nodes a given pod should run on; it picks them itself, and if the cluster is low on resources, in many cases it can allocate more nodes on its own (via the cluster autoscaler).
So if your CI system is fairly busy, and uses containers for everything, it could make more sense to run each individual build as a Kubernetes Job. If you have 100 builds that all start at the same time, it's possible for the cluster to give itself more hardware, and the build queue will clear out faster. Particularly if you're already using Kubernetes for other tasks, this can save you some administrative effort over maintaining a dedicated pool of CI workers that need to be separately updated and will sit mostly idle until that big set of builds arrives.
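A single build run as a Kubernetes Job might be sketched like this; the image, build command, and resource numbers are placeholders for whatever your CI actually needs:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: ci-build-            # each submitted build gets a unique name
spec:
  backoffLimit: 1                    # retry a failed build at most once
  ttlSecondsAfterFinished: 3600      # clean up finished Jobs after an hour
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build
          image: registry.example.com/build-tools:latest   # placeholder build image
          command: ["make", "ci"]                          # placeholder build command
          resources:
            requests:
              cpu: "2"
              memory: 4Gi            # sizable requests are what let the cluster autoscaler add nodes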
Kubernetes's security settings are also substantially better than Docker's. Say your CI system needs to launch containers as part of a build. In Kubernetes, it can run under a service account and be given permission to create and delete Deployments in a specific namespace, and nothing else. In Docker, the standard approach is to give your CI system access to the host's Docker socket, but this can easily be exploited to take over the host.
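For example, the CI runner could use a dedicated service account that is only allowed to manage Deployments in one namespace (names here are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-runner
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager
  namespace: ci
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-runner-deployments
  namespace: ci
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-manager
subjects:
  - kind: ServiceAccount
    name: ci-runner
    namespace: ci

Anything outside that namespace, including the nodes themselves, stays out of reach even if the CI system is compromised.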

schedule docker-compose startup order

For a large data processing pipeline I have, I built a bunch of Docker containers grouped into a swarm with a docker-compose.yaml file. They're not HTTP servers or that kind of microservice, just plain (sometimes replicated) executables and clients performing batch workloads.
As it's a pipeline, I sometimes need one workload to terminate before other ones start. The Docker documentation advocates tools like wait-for-it.sh and dockerize, which I find are aimed at servers and services rather than clients (they don't expose a port or anything I can listen to).
My question is: what's the best way to signal another fleet of services that their start condition has been met? There must be some way to bind to the termination of another service. I don't want to use more complex tools like RabbitMQ when all I need is to know which service stopped.
So eventually I didn't exactly control the startup order (although dockerize definitely can do that), because I realized that in such a distributed setup I need something more structured to schedule tasks.
I ended up using RabbitMQ, but I guess Airflow etc. could also do the trick.
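For completeness: if the pipeline is run with plain docker compose (not deployed as a swarm stack, which ignores depends_on), recent Compose versions can express "start B only after A exited successfully" directly. A minimal sketch, with placeholder service names and images:

services:
  extract:
    image: example/extract:latest
  transform:
    image: example/transform:latest
    depends_on:
      extract:
        condition: service_completed_successfully   # wait for extract to exit with code 0
  load:
    image: example/load:latest
    depends_on:
      transform:
        condition: service_completed_successfully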

IIS Process Vs Container Process

I am trying to understand the difference between a Docker container process and an IIS process. From a container perspective, it is not advisable to have more than one process in a single container, and in IIS you cannot do that either: each application is executed in its own process. So if IIS gives me the same process isolation, why should I use containers?
Although IIS gives you process isolation per app, Docker provides another layer of isolation, where memory use and kernel access are also isolated. Bear in mind that containers contain everything needed to run something, including the OS userland; only the host's physical memory and kernel are shared, much as the physical hardware is shared with VMs, for example. So in a way containers give you even higher isolation than just having a separate process per application.
But that is not the main selling point of containers. The main selling point is that they are scalable solutions that are basically infrastructure as code and thus easier to manage and deploy in any environment. That also means they will work the same wherever you deploy them, since you include everything they need. And if your apps have lots of traffic behind load balancing, you can deploy multiple copies of the same container in a cluster and avoid those bottlenecks.
A second point is that during development there is a history of what was added to and removed from the container image to arrive at a stable environment. That, plus the ability to deploy dev instances alongside prod instances and just switch over, keeps the downtime from an unforeseen error minimal, since you can redirect back to the old prod container until the fix is out.
A bit of a rant there and yet there is more.

What does a typical ElasticSearch/Logstash/Kibana deployment model look like?

Being a novice in the Docker/Elasticsearch world, I am trying to build a deployment model for using Elasticsearch via containers in one of my projects.
I have a few application servers, each of which has some logs. I would like to have all these logs in one place. Below is what I have in mind.
All application servers run Filebeat to push data to a Logstash server (in a Docker image). This Logstash server forwards the logs to an Elasticsearch Docker image that also has Kibana.
Does this make sense? Is it OK to have Logstash in one image and Elasticsearch/Kibana in a different one? Are there any pros/cons to this approach? What could be alternative approaches to architecting this?
The Docker philosophy is that one container does one thing, and does it well. So I would go for one Docker image for Elasticsearch, one for Kibana, and one for Logstash, and tie them together with Docker Compose.
https://docs.docker.com/v17.09/engine/userguide/eng-image/dockerfile_best-practices/#use-multi-stage-builds
Each container should have only one concern
Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.
You may have heard that there should be “one process per container”. While this mantra has good intentions, it is not necessarily true that there should be only one operating system process per container. In addition to the fact that containers can now be spawned with an init process, some programs might spawn additional processes of their own accord. For instance, Celery can spawn multiple worker processes, or Apache might create a process per request. While “one process per container” is frequently a good rule of thumb, it is not a hard and fast rule. Use your best judgment to keep containers as clean and modular as possible.
If containers depend on each other, you can use Docker container networks to ensure that these containers can communicate.
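As a rough illustration of that three-container split with Docker Compose (image versions and settings are placeholders to tune for your own setup):

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node   # fine for a small demo, not for production
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0
    ports:
      - "5044:5044"                  # Beats input, where the Filebeat agents ship logs
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch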
