Does Dask LocalCluster shut down when the kernel restarts?

If I restart my Jupyter kernel, will any existing LocalCluster shut down, or will the Dask worker processes keep running?
I know that when I used a SLURM cluster, the processes kept running if I restarted my kernel without calling cluster.close(), and I had to use squeue to see them and scancel to cancel them.
For local processes, however, how can I tell that all the worker processes are gone after I have restarted my kernel? If they do not disappear automatically, how can I manually shut them down now that I no longer have access to the cluster object (the kernel restarted)?
I try to remember to call cluster.close(), but I often forget. Using a context manager doesn't work for my Jupyter workflow.

During normal termination of your kernel's Python process, all objects will be finalised. For the cluster object, this includes calling close() automatically, and you don't normally need to worry about it.
It is perhaps possible that close() does not get a chance to run, in the case that the kernel is killed forcibly rather than terminated normally. Since all LocalCluster processes are children of the kernel that started them, this will still result in the cluster stopping, but perhaps with some warnings about connections that didn't have time to clean themselves up. You should be able to ignore such warnings.
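If you want to double-check after a restart that nothing survived, you can scan the process table for stray worker processes. A minimal sketch, assuming psutil is installed; the command lines of Dask worker and scheduler processes vary by version and launch method, so the matching here is only a heuristic, and the terminate() call is left commented out:

import psutil

def find_dask_processes():
    # Look for processes whose command line mentions Dask worker/scheduler components.
    found = []
    for proc in psutil.process_iter(["pid", "cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or [])
        if "dask" in cmdline and any(word in cmdline for word in ("worker", "scheduler", "nanny")):
            found.append((proc, cmdline))
    return found

for proc, cmdline in find_dask_processes():
    print(proc.info["pid"], cmdline)
    # proc.terminate()  # uncomment to shut a stray worker down manually

In practice, because the workers are child processes of the kernel, you should normally find nothing here after a clean restart.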

Related

Will a memory leak in a Docker container disappear after the container is killed?

I am writing and testing a C++ program in a Docker container, and I did not set a maximum memory size for the container.
docker run -it xxx:latest /bin/bash
The C++ program will sometimes leak memory, for example by not freeing allocated heap memory.
So I am curious: will the memory leak disappear on the host Linux system when I kill the container?
Or does the memory leaked by the program in the container still exist on the host?
A Docker container is a wrapper around a single process. Killing the container also kills that process; conversely, if the process exits on its own, that causes the container to exit too.
Ending a process will release all of the memory that process used. So, if you have a C++ program, and it calls new without a corresponding delete, it will leak memory, but ending the process will reclaim all of the process's memory, even space the application has lost track of. This same rule still applies if the process is running in a container.
This also applies to other leak-like behavior and in other languages; for example, appending a very large number of values to a list and then ignoring them, so they're still "in use" unnecessarily. Some other OS resources like file descriptors are cleaned up this way, but some are not. In particular, if you fork(2) a subprocess, you must reap it yourself with wait(2); a sketch of that pattern follows below. Similarly, if you have access to the Docker API, you must clean up any related containers you spawn yourself.
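A minimal sketch of that fork-and-reap pattern, using Python's os module (which wraps the same system calls); the sleep is just a stand-in for real work:

import os
import sys
import time

pid = os.fork()
if pid == 0:
    # Child: pretend to do some work, then exit.
    time.sleep(1)
    sys.exit(0)

# Parent: reap the child so it does not linger as a zombie until we exit.
finished_pid, status = os.waitpid(pid, 0)
print("reaped child", finished_pid, "exit status", os.WEXITSTATUS(status))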

What happens to multiprocess applications such as Postgres running in Docker?

From my understanding Docker encourages a single process in a container.
How does this work for, and how does it affect, applications such as Postgres, which can use multiple processes when querying?
Does Docker restrict Postgres to using only one process, or does it allow it to run multiple processes, and if so, how?
At a technical level, when Docker creates a container, it launches a single process in that container. In the container's process namespace, the single process that Docker launches has the process ID 1, with the rights and responsibilities that entails. When that process exits, the container exits too.
There aren't any particular limitations on that process launching subprocesses. If you have something like PostgreSQL, Python multiprocessing, or Apache that launches multiple child-process workers, these work fine. These don't break the design rule that a container shouldn't do more than one thing.
The one thing to watch out for is if those subprocesses themselves launch subprocesses. Say A starts B, which starts C, but then B exits. The standard Unix rule is that C (the "grandchild" process) will have its parent process ID reset to 1 (the init process); in a Docker context this is the main container process. If you're not prepared for this then you can have zombie processes inside your container or unexpected SIGCHLD notifications. A common solution to this is to run a lightweight dedicated init process (tini for example) as process 1, and have it launch the main process as its only child.
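A rough sketch of that PID 1 responsibility, written in Python only for illustration (a real init such as tini also forwards signals and reaps continuously); the sleep command is a placeholder for the container's main process:

import os
import subprocess
import sys

# Launch the "real" main process as our only direct child, as a dedicated init would.
main = subprocess.Popen(["sleep", "5"])
exit_code = main.wait()

# Before exiting, reap any orphaned grandchildren that were re-parented to us,
# so they do not linger as zombies.
while True:
    try:
        pid, _status = os.waitpid(-1, os.WNOHANG)
    except ChildProcessError:
        break  # no children left to reap
    if pid == 0:
        break  # remaining children are still running

sys.exit(exit_code)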
Conversely, at a technical level you could run a multi-process manager like supervisord or, with some dedication, a heavy-weight kitchen-sink init system like systemd as the main container process. This does break the "do only one thing" design rule. These init processes take responsibility for monitoring their child processes, capturing log output, and other things that Docker would ordinarily do, and it means that if you need to delete and recreate the container (a pretty routine maintenance task), you take down every process in the container with it.

Does it make sense to run multiple similar processes in a container?

A brief background to give context on the question:
Currently my team and I are in the midst of migrating our microservices to Kubernetes to reduce the effort of maintaining multiple deployment tools and pipelines.
One of the microservices that we are planning to migrate is an ETL worker that listens to messages on SQS and performs multi-stage processing.
It is built using PHP Laravel, and we use supervisord to control how many processes run on each worker instance on AWS EC2. Each process basically executes a Laravel command to poll different queues for new messages. We also periodically adjust the number of processes to maximize utilization of each instance's compute power.
So the questions are:
Is this method of deployment still feasible when moving to Kubernetes? Is there still a need to "maximize" compute usage? Are we better off just running one process in each container, the "container way" (I'm not sure what the tool is called; runit?)
I read from multiple sources (e.g. https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container) that it is ideal for a container to run only one process. There's also the question of recovering crashed processes and how running supervisord might interfere with how the container performs recovery, but I am not sure whether that applies to our use case.
You should absolutely restructure this to run one process per container and one container per pod. You do not typically need an init system or a process manager like supervisord or runit (there is an argument to have a dedicated init like tini that can do the special pid-1 things).
You mention two concerns here: restarting failed processes and process placement in the cluster. Kubernetes handles both of these automatically for you.
If the main process in a Pod fails, Kubernetes will restart it; you don't need to do anything for this. If it fails repeatedly, Kubernetes will start delaying the restarts. This only works if the main process itself fails: if your container's main process is a supervisor process, you will never get a pod restart, and you may not directly notice if a worker process can't start up at all.
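To make that concrete, here is an illustrative single-process worker loop. The original service is PHP/Laravel, so Python and boto3 are used here purely as a sketch, and the queue URL environment variable and handle_message() are hypothetical; the important part is that there is no in-container supervisor, so an unhandled error crashes the process, the container exits, and Kubernetes restarts it:

import os
import boto3

QUEUE_URL = os.environ["QUEUE_URL"]  # assumed to be injected via the pod spec
sqs = boto3.client("sqs")

def handle_message(body: str) -> None:
    ...  # the actual multi-stage ETL processing would go here

def main() -> None:
    while True:
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,  # long polling
        )
        for message in response.get("Messages", []):
            handle_message(message["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])

if __name__ == "__main__":
    main()

Scaling then comes from running more replicas of this pod rather than from supervisord spawning extra processes inside one container.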
Typically you'll run containers via Deployments that have some number of identical replica Pods. Kubernetes itself takes responsibility for deciding which node will run each pod; you don't need to manually specify this. The smaller the pods are, the easier it is to place them. Since you're controlling the number of replicas of a pod, you also want to separate concerns like Web servers vs. queue workers so you can scale these independently.
Kubernetes has some ability to auto-scale, though the typical direction is to size the cluster based on the workload: in a cloud-oriented setup, if you add a new pod that requests more CPUs than your cluster currently has available, it will provision a new node. The HorizontalPodAutoscaler is something of an advanced setup, but you can configure it so that the number of workers is a function of your queue length. Again, this works better if the only thing it's scaling is the worker pods, and not a collection of unrelated things packaged together.

Does a Docker container, when paused, have similar properties to a VM that has been snapshotted?

More specifically, is the memory of a Docker container preserved the same way a snapshotted VM's memory is?
It's not the same. The typical definition of a VM snapshot is the filesystem at a point in time.
The container pause command freezes the container's processes, but those processes still exist in memory. This is not capturing a point in time to revert to, but rather a way to control an application running inside a container. Originally this was just a SIGSTOP sent to each process, but it has since been changed to use a cgroup freezer setting that cannot be detected or trapped by the container. See the docs on the pause command here:
https://docs.docker.com/engine/reference/commandline/pause/
If you're looking for more of a live-migration type of functionality, there have been some projects to do this, but that does not exist directly in Docker at this time.
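As a toy illustration of what freezing means, here is the SIGSTOP mechanism mentioned above in Python (modern Docker uses the cgroup freezer instead): the stopped process keeps all of its memory and state but is no longer scheduled, and nothing is written out that could be reverted to later.

import os
import signal
import subprocess
import time

child = subprocess.Popen(["sleep", "60"])  # placeholder workload

os.kill(child.pid, signal.SIGSTOP)   # "pause": process keeps its memory but stops running
time.sleep(2)
os.kill(child.pid, signal.SIGCONT)   # "unpause": process resumes where it left off

child.terminate()
child.wait()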

Sensor won't be re-scheduled on worker failure

I'm in the process of learning the ins and outs of Airflow to end all our cron woes. When trying to mimic the failure of (CeleryExecutor) workers, I've got stuck on Sensors. I'm using ExternalTaskSensors to wire up top-level DAGs together, as described here.
My current understanding is that since Sensor is just a type of Operator, it must inherit basic traits from BaseOperator. If I kill a worker (the docker container), all ordinary (non-Sensor) tasks running on it get rescheduled on other workers.
However, upon killing a worker, an ExternalTaskSensor does not get rescheduled on a different worker; rather, it gets stuck.
Then either of the following things happens:
I just keep waiting for several minutes, and then sometimes the ExternalTaskSensor is marked as failed but the workflow resumes (it has happened a few times but I don't have a screenshot).
I stop all Docker containers (including those running the scheduler / Celery etc.) and then restart them all; then the stuck ExternalTaskSensor gets rescheduled and the workflow resumes. Sometimes it takes several stop-start cycles of the Docker containers to get the stuck ExternalTaskSensor to resume again.
(Screenshot: Sensor still stuck after a single Docker container stop-start cycle)
(Screenshot: Sensor resumes after several Docker container stop-start cycles)
My questions are:
Does docker have a role in this weird behaviour?
Is there a difference between Sensors (particularly ExternalTaskSensor) and other operators in terms of scheduling / retry behaviour?
How can I ensure that a Sensor is also rescheduled when the worker it is running on gets killed?
I'm using puckel/docker-airflow with
Airflow 1.9.0-4
Python 3.6-slim
CeleryExecutor with redis:3.2.7
This is the link to my code.
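For reference, the wiring described above roughly looks like the sketch below. This is not the asker's actual code; the DAG ids, task ids, schedule, and timings are made up for illustration, and the imports follow the Airflow 1.9 module layout:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.sensors import ExternalTaskSensor

dag = DAG(
    dag_id="child_dag",
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
)

# Wait for a task in the "parent" top-level DAG before doing this DAG's work.
wait_for_parent = ExternalTaskSensor(
    task_id="wait_for_parent",
    external_dag_id="parent_dag",
    external_task_id="final_task",
    poke_interval=60,        # seconds between checks
    timeout=60 * 60,         # give up after an hour
    dag=dag,
)

do_work = DummyOperator(task_id="do_work", dag=dag)

wait_for_parent >> do_work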
