We have DSE 5.0.7 on a 3-node cluster. For the last 2 days, once compaction on the rollups60 table starts, the DSE service stops. We don't see any errors in the log files.
The job takes several hours and the DSE service goes down a couple of times.
Is there a way to control the compaction job so that we can schedule it during off-peak time?
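For example, is something along these lines the right way to do it? This is only a sketch of the kind of control we have in mind; we are assuming rollups60 lives in the dse_perf keyspace, which is where the DSE performance service normally keeps its rollup tables.

    # Throttle compaction I/O cluster-wide during peak hours (value in MB/s)
    nodetool setcompactionthroughput 8

    # See what compaction is currently doing
    nodetool compactionstats

    # Or disable automatic compaction for just this table during peak hours...
    nodetool disableautocompaction dse_perf rollups60

    # ...and re-enable it from a cron job during the off-peak window
    nodetool enableautocompaction dse_perf rollups60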
I'm using Airflow with Composer (GCP) to extract data from Cloud SQL to GCS, and then from GCS to BigQuery. I have some tables between 100 MB and 10 GB, and my DAG has two tasks to do what I described above. With the smaller tables the DAG runs smoothly, but with slightly larger tables the Cloud SQL extraction task fails after a few seconds without producing any logs except "Negsignal.SIGKILL". I have already tried increasing the Composer capacity, among other things, but nothing has worked so far.
I'm using the mysql_to_gcs and gcs_to_bigquery operators
The first thing you should check when you get Negsignal.SIGKILL is your Kubernetes resources. This is almost certainly a problem with resource limits.
I think you should monitor your Kubernetes Cluster Nodes. Inside GCP, go to Kubernetes Engine > Clusters. You should have a cluster containing the environment that Cloud Composer uses.
Now, head to the nodes of your cluster. Each node shows you metrics about CPU, memory, and disk usage, along with the resource limits for that node and the pods it is running.
If you are not very familiar with K8s, let me explain this briefly. Airflow uses Pods inside nodes to run your Airflow tasks. These pods are called airflow-worker-[id]. That way you can identify your worker pods inside the Node.
Check your pod list. If you have evicted airflow-worker pods, then Kubernetes is stopping your workers for some reason. Since Composer uses the CeleryExecutor, an evicted airflow-worker points to a problem. (This would not be the case with the KubernetesExecutor, but that is not available in Composer yet.)
If you click on an evicted pod, you will see the reason for the eviction. That should give you the answer.
If you don't see a problem with pod evictions, don't panic, you still have some options. From that point on, your best friend will be logs. Be sure to check your pod logs, node logs, and cluster logs, in that order.
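For example, a quick way to spot evicted workers and read the eviction reason from the command line is sketched below; the namespace here is only an assumption, so list your namespaces first to find where the airflow-worker pods actually live.

    # Sketch only: adjust NS after checking kubectl get namespaces
    NS=default

    # List worker pods and look for Evicted / OOMKilled ones
    kubectl get pods -n "$NS" | grep airflow-worker

    # Show the eviction reason for a specific pod (placeholder pod name)
    kubectl describe pod airflow-worker-xxxxx -n "$NS"

    # Check its logs, if the container is still around
    kubectl logs airflow-worker-xxxxx -n "$NS"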
I'm in the process of learning the ins and outs of Airflow to end all our cron woes. While trying to mimic the failure of (CeleryExecutor) workers, I've got stuck on Sensors. I'm using ExternalTaskSensors to wire top-level DAGs together, as described here.
My current understanding is that since Sensor is just a type of Operator, it must inherit basic traits from BaseOperator. If I kill a worker (the docker container), all ordinary (non-Sensor) tasks running on it get rescheduled on other workers.
However, upon killing a worker, the ExternalTaskSensor does not get rescheduled on a different worker; rather, it gets stuck.
Then either of the following things happens:
I just keep waiting for several minutes, and then sometimes the ExternalTaskSensor is marked as failed but the workflow resumes (this has happened a few times but I don't have a screenshot)
I stop all docker containers (including those running the scheduler / celery etc.) and then restart them all; the stuck ExternalTaskSensor then gets rescheduled and the workflow resumes. Sometimes it takes several stop-start cycles of the docker containers to get the stuck ExternalTaskSensor to resume again
Sensor still stuck after single docker container stop-start cycle
Sensor resumes after several docker container stop-start cycles
My questions are:
Does docker have a role in this weird behaviour?
Is there a difference between Sensors (particularly ExternalTaskSensor) and other operators in terms of scheduling / retry behaviour?
How can I ensure that a Sensor is also rescheduled when the worker it is running on gets killed?
I'm using puckel/docker-airflow with
Airflow 1.9.0-4
Python 3.6-slim
CeleryExecutor with redis:3.2.7
This is the link to my code.
I have a cluster of 5 containers running the same app.
This app needs to perform a scheduled recurring task (every 10 minutes).
But this task needs to execute in only one container of the cluster, not in all 5 of them.
How can I achieve this?
What I'm trying to do is query a database table every 10 minutes looking for new records, publish them to a RabbitMQ server, and then delete those records from the database. The messages are then distributed for processing across all 5 containers running the same app.
I can't execute the same scheduled task at the same time in all containers, because all 5 containers would query the same records from the database and duplicate the messages in RabbitMQ.
Also, I don't want to use a database lock to allow only one container at a time to query the database, because that does not scale well, among other problems.
My first solution would be to start the different containers with unique container names.
Assuming your scheduler service triggers the additional task from outside your service containers, it can then address one container by its unique name and start the desired function with a 'docker exec' command, as in the sketch below.
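As a rough sketch (the container name app_1, the script path, and the schedule below are placeholders, not something taken from your setup), the host's crontab could trigger the job in exactly one named container:

    # m   h   dom mon dow   command
    */10  *   *   *   *     docker exec app_1 python /app/publish_new_records.py >> /var/log/publish_job.log 2>&1

That keeps the scheduling outside the 5 app containers, so only the container addressed by name ever runs the query-and-publish step; the other 4 just consume from RabbitMQ as usual.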
Let's say we have swarm1 (1 manager and 2 workers). I am going to back up this swarm on a daily basis, so that if there is a problem some day, I can restore the whole swarm to a new one (swarm2, also with 1 manager and 2 workers).
I followed what is described here, but it seems that during the restore the new manager gets the same token as the old manager. As a result, the 2 workers get disconnected and I end up with a new swarm2 with 1 manager and 0 workers.
Any ideas / solution?
I don't recommend restoring workers. Assuming you've only lost your single manager, just run docker swarm leave on the workers, then join again. On the manager you can always clean up the old worker entries later (this does not affect uptime) with docker node rm.
Note that if you lose the manager quorum, this doesn't mean the apps you're running go down, so you'll want to keep your workers up and serving your apps to your users until you fix your manager.
If your last manager fails or you lose quorum, then focus on restoring the raft DB so the swarm manager has quorum again. Then rejoin workers, or create new workers in parallel and only shut down the old workers once the new ones are running your app. Here's a great talk by Laura Frank at DockerCon that goes into it.
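The rejoin step looks roughly like this (the token and the manager address below are placeholders):

    # On the restored manager: print a fresh worker join command
    docker swarm join-token worker

    # On each disconnected worker: leave the old swarm and join the new one
    # using the token/address printed above
    docker swarm leave
    docker swarm join --token SWMTKN-1-xxxx 10.0.0.1:2377

    # Back on the manager: remove the stale entries for the old workers
    docker node ls
    docker node rm <old-node-id>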
I followed Alex Ellis' excellent tutorial that uses kubeadm to spin up a K8s cluster on Raspberry Pis. It's unclear to me what the best practice is when I wish to power-cycle the Pis.
I suspect sudo systemctl reboot is going to result in problems. I'd prefer not to delete and recreate the cluster each time, starting with kubeadm reset.
Is there a way that I can shutdown and restart the machines without deleting the cluster?
Thanks!
This question is quite old but I imagine others may eventually stumble upon it so I thought I would provide a quick answer because there is, in fact, a best practice around this operation.
The first thing you're going to want to ensure is that you have a highly available cluster. This consists of at least 3 masters and 3 worker nodes. Why 3? So that if any one node goes down, the remaining two can still form a quorum (a majority) and keep the control plane consistent.
Now that you have an HA Kubernetes cluster, you're going to have to go through every single one of your application manifests and ensure that you have specified resource requests and limits. This is so that a pod will never be scheduled on a node that lacks the required resources. Furthermore, in the event that a pod has a bug that causes it to consume an abnormally large amount of resources, the limit will prevent it from taking down your cluster.
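If editing every manifest by hand is impractical, requests and limits can also be patched onto an existing workload from the command line; this is just a sketch, and the deployment name and values are placeholders you would tune per workload.

    kubectl set resources deployment my-app \
      --requests=cpu=100m,memory=128Mi \
      --limits=cpu=500m,memory=512Mi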
Now that that is out of the way, you can begin the process of rebooting the cluster. The first thing you're going to do is reboot your masters. Run kubectl drain $MASTER against one of your (at least) three masters. The API server will mark the node as unschedulable, reject any new scheduling attempts on it, and immediately start evicting any scheduled pods so their workloads move to your other nodes.
Use kubectl describe node $MASTER to monitor the node until all pods have been removed. Now you can safely connect to it and reboot it. Once it has come back up, run kubectl uncordon $MASTER and the API server will once again begin scheduling pods to it. Then use kubectl describe node $NODE again until you have confirmed that all pods are READY.
Repeat this process for all of the masters. After the masters have been rebooted, you can safely repeat the same process for all three (or more) worker nodes. If you perform this operation properly, your applications will maintain 100% availability, provided they use multiple pods per service and have a proper deployment strategy configured.
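Pulled together, the per-node procedure looks roughly like this. The node name is a placeholder, and the extra drain flags are an assumption that you run DaemonSets and can tolerate losing emptyDir data on the node being rebooted.

    NODE=master-1   # placeholder node name

    # Cordon the node and evict its pods
    kubectl drain "$NODE" --ignore-daemonsets --delete-local-data

    # Watch until no non-DaemonSet pods remain on the node
    kubectl describe node "$NODE"

    # Reboot the machine (run this on the node itself)
    sudo systemctl reboot

    # Once it is back, allow scheduling again and confirm pods become READY
    kubectl uncordon "$NODE"
    kubectl get pods --all-namespaces -o wide | grep "$NODE"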