Flink Monitoring: Capture statistics for more than 5 min

I am new to the Flink community and I am trying to do an experimental study to capture the performance of Flink for streaming data.
For this, I am trying to collect statistics of running jobs over a few hours. However, using Flink's UI I can only see the statistics for the last 5 minutes.
I tried hitting the REST API, but it does not expose statistics other than bytes read/written.
The metrics provided in the UI under Task Metrics are very helpful but only cover the last 5 minutes. Is there a way to capture the entire history of these metrics?

You can configure a metrics reporter to record all the metrics that you are interested in over any period of time.
Flink 1.3 comes with reporters for JMX, Ganglia, Graphite, StatsD, and Datadog. The documentation describes how to configure a reporter.
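For example, a Graphite reporter can be configured in flink-conf.yaml roughly as follows (the reporter name grph, host, and port are placeholders; check the documentation of your Flink version for the exact keys):

metrics.reporters: grph
metrics.reporter.grph.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.grph.host: graphite.example.com
metrics.reporter.grph.port: 2003
metrics.reporter.grph.protocol: TCP

The corresponding reporter jar (here flink-metrics-graphite, shipped in Flink's opt folder) has to be copied into the lib folder; the reported metrics are then kept for as long as your metrics backend retains them, so you can graph them over hours or days.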

Related

Monitoring short-lived SCDF tasks

I have an issue where I am trying to monitor some short-lived SCDF tasks with Prometheus. We are using Micrometer and sending the metrics to the RSocket proxy that is scraped by Prometheus, as shown in the SCDF documentation.
The metrics do show up in Prometheus, but because of the way Prometheus works and the fact that the RSocket proxy removes the metrics once they are scraped, we can't really graph or use them, since this task only runs at 12 or 24 hour intervals.
Does anyone have experience with this and a way to properly monitor short-lived tasks that are triggered that far apart, without resorting to the Pushgateway?

GCP Dataflow - Throughput gradually slows down, Workers underutilized

I have a Beam script running in GCP Dataflow. This pipeline performs the steps below:
Read a number of PGP-encrypted files (total size more than 100 GB; individual files are about 2 GB each)
Decrypt the files to form a PCollection
Do a wait() on PCollection
Do some processing on each record in the PCollection before writing into an output file
Behavior seen with GCP Dataflow:
When reading and decrypting the input files, it starts with one worker and then scales up to 30 workers. However, only one worker is actually utilized; utilization of all the other workers stays below 10%.
Initially, throughput during decryption was 150K records per second, so 90% of the decryption completes in about 1 hour, which is good. But then the throughput slows down gradually, even to just 100 records per second, so it takes another 1-2 hours to complete the remaining 10% of the workload.
Any idea why the workers are underutilized? If they are not being utilized, why are they not scaled down? I am paying unnecessarily for a large number of VMs :-(. Second, why does the throughput drop towards the end, significantly increasing the time to completion?
There is a known issue related to the throughput and input behavior of Cloud Dataflow. I suggest you track the improvements being made to the autoscaling and utilization behavior of workers here.
The default architecture for Dataflow worker processing and autoscaling is not as responsive in some cases as it is when the Dataflow Streaming Engine feature is enabled. I would recommend trying to run the relevant Dataflow pipeline with Streaming Engine enabled, since it provides more responsive autoscaling based on the CPU utilization of your pipeline.
I hope you find the above pieces of information useful.
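If you want to experiment with that, Streaming Engine is switched on via a pipeline option when launching the job, roughly like this (flag names as documented for Dataflow; double-check them against your SDK version):

--enableStreamingEngine          (Java SDK)
--enable_streaming_engine        (Python SDK)

On older SDK versions the equivalent is --experiments=enable_streaming_engine.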
Can you try to implement your solution without wait()?
For example:
FileIO.match().filepattern() -> ParDo(DoFn to decrypt files) -> FileIO.readMatches() -> ParDo(DoFn to read files)
See the example here.
This should allow your pipeline to better parallelize.
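Below is a rough Java sketch of that pattern. It folds the decryption and record reading into a single per-file DoFn; the DecryptAndReadFn and decryptToRecords names, the file patterns, and the final processing step are all placeholders for your own logic, so treat this as an illustration rather than a drop-in solution.

import java.util.List;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;

public class DecryptAndProcess {

  // Placeholder DoFn: reads each matched file fully, decrypts it, and emits individual records.
  static class DecryptAndReadFn extends DoFn<FileIO.ReadableFile, String> {
    @ProcessElement
    public void processElement(@Element FileIO.ReadableFile file, OutputReceiver<String> out) throws Exception {
      byte[] encryptedBytes = file.readFullyAsBytes();
      for (String record : decryptToRecords(encryptedBytes)) {
        out.output(record);
      }
    }
  }

  // Placeholder for your PGP decryption routine.
  static List<String> decryptToRecords(byte[] encryptedBytes) {
    throw new UnsupportedOperationException("PGP decryption goes here");
  }

  // Placeholder for the per-record processing step.
  static class ProcessRecordFn extends DoFn<String, String> {
    @ProcessElement
    public void processElement(@Element String record, OutputReceiver<String> out) {
      out.output(record);
    }
  }

  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
    p.apply(FileIO.match().filepattern("gs://my-bucket/input/*.pgp")) // example pattern
     .apply(FileIO.readMatches())                                     // one ReadableFile per matched file
     .apply(ParDo.of(new DecryptAndReadFn()))                         // decrypt and read each file in parallel
     .apply(ParDo.of(new ProcessRecordFn()))                          // per-record processing
     .apply(TextIO.write().to("gs://my-bucket/output/part"));
    p.run();
  }
}

Because each file becomes its own element after FileIO.readMatches(), Dataflow can distribute the decryption work across workers instead of funneling everything through one.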

Report a metric to Prometheus every hour, or every day

I'm using Prometheus to report metrics about my system. I wanted to ask what the best way is to report a counter that is the output of an hourly/daily job.
For example I have an hourly job with a numeric output and I would like to monitor the number and raise an alert if it is under a specific threshold.
I think what you are looking for is the node_exporter's textfile collector; if you read the docs you will see the textfile collector option.
If you use a cron job, I suggest you write the resulting value to a file and use this collector to pick up the data.
You will find a bit more detail about how to do it here: https://www.robustperception.io/using-the-textfile-collector-from-a-shell-script
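For example, the cron job could drop something like the following into the directory node_exporter watches via --collector.textfile.directory (the metric name and path are just illustrations):

# contents of e.g. /var/lib/node_exporter/textfile_collector/hourly_job.prom
# HELP hourly_job_result Numeric output of the hourly job.
# TYPE hourly_job_result gauge
hourly_job_result 42

Write to a temporary file and rename it into place so node_exporter never reads a half-written file; you can then alert in Prometheus when hourly_job_result drops below your threshold.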
You can use the Pushgateway and push the metrics to Prometheus at the end of your hourly/daily job (if it is not running as a service). If it is running as a service, I hope you are aware of the scrape interval configuration.
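As a minimal sketch with the Prometheus Java client (simpleclient_pushgateway), assuming a Pushgateway at localhost:9091 and an illustrative metric and job name, the end of the batch job could look like this:

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.PushGateway;

public class HourlyJob {

  public static void main(String[] args) throws Exception {
    double result = runJob(); // placeholder for the actual hourly/daily work

    CollectorRegistry registry = new CollectorRegistry();
    Gauge output = Gauge.build()
        .name("hourly_job_result")
        .help("Numeric output of the hourly job.")
        .register(registry);
    output.set(result);

    // Push the metric once the job is done; Prometheus then scrapes it
    // from the Pushgateway on its normal schedule.
    PushGateway pushGateway = new PushGateway("localhost:9091");
    pushGateway.pushAdd(registry, "hourly_job");
  }

  static double runJob() {
    return 42.0; // stand-in value
  }
}

You can then alert on the pushed metric just like on any scraped one, optionally combined with the push_time_seconds metric that the Pushgateway adds, to detect a job that stopped reporting.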

Dataflow: system lag keeps going up

We are testing Cloud Dataflow, which pulls messages from a Pub/Sub subscription, converts the data to BigQuery TableRows, and loads them into BigQuery as a load job every 1 min 30 sec.
We can see that the pipeline works well and can process 500,000 elements per second with 40 workers. But when we try autoscaling, the number of workers unexpectedly goes up to 40 and stays there even if we send only 50,000 messages to Pub/Sub. In this situation, there are no unacknowledged messages and the workers' CPU utilization is below 60%. One thing we noticed is that the Dataflow system lag slowly goes up.
Does system lag affect autoscaling?
If so, are there any solutions or ways to debug this problem?
Does system lag affect autoscaling?
Google does not really expose the specifics of its autoscaling algorithm. Generally, though, it is based on CPU utilization, throughput and backlog. Since you're using Pub/Sub, backlog by itself should be based on the number of unacknowledged messages. Still, the rate at which these are being consumed (i.e. the throughput at the Pub/Sub read stage) is also taken into account. Throughput as a whole relates to the rate at which each stage processes input bytes. As for CPU utilization, if the aforementioned don't "run smoothly", 60% usage is already too high. So, system lag at some stage could be interpreted as the throughput of that stage and therefore should affect autoscaling. Then again, these two should not always be conflated. If, for example, a worker gets stuck due to a hot key, system lag is high but there's no autoscaling, since the work is not parallelizable. So, all in all, it depends.
If so, are there any solutions or ways to debug this problem?
The most important tools you have at hand are the execution graph, Stackdriver Logging, and Stackdriver Monitoring. From Monitoring, you should consider JVM, compute, and Dataflow metrics. gcloud dataflow jobs describe can also be useful, mostly to see how steps are fused and, by extension, which steps run in the same worker, like so:
gcloud dataflow jobs describe --full $JOB_ID --format json | jq '.pipelineDescription.executionPipelineStage[] | {"stage_id": .id, "stage_name": .name, "fused_steps": .componentTransform }'
Stackdriver monitoring exposes all three of the main autoscaling components.
Now, how you take advantage of the above obviously depends on the problem. In your case, at first glance I'd say that, if you can work without autoscaling and 40 workers, you should normally expect to be able to do the same with autoscaling when you've set maxNumWorkers to 40. Then again, the number of messages alone does not tell the full story; their size/content also matters. I think you should start by analyzing your graph, check which step has the highest lag, see what the input/output ratio is, and check for messages with severity>=WARNING in your logs. If you shared any of those here, maybe we could spot something more specific.
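If it helps, the autoscaling-related options when launching such a job with the Java SDK look roughly like this (flag names as documented for Dataflow; verify against your SDK version):

--runner=DataflowRunner --autoscalingAlgorithm=THROUGHPUT_BASED --maxNumWorkers=40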

Is this a good use-case for Dataflow?

We currently use Google Task Queues to batch up requests to store analytics data in Keen and Stathat (more performant with batch puts). To consume from the task queues, we have a set of process brokers and workers. Seeing as Dataflow lets us just write the logic for pushing to our analytics solutions and specify a batch size to pull when processing, I was curious whether the overhead of Dataflow (it seems more tailored to much larger applications) makes it a good fit.
Your use case seems like a good one for Dataflow. Rather than publishing to a task queue, you could publish to Pub/Sub as a way to stream your data to your Dataflow job. Your Dataflow job could use Dataflow windows and triggers to batch your data based on size and/or time. You could then write each batch to your datastore.
Dataflow should work well on small datasets. The overhead would likely be in the cost of unused CPU cycles of Dataflow workers. Dataflow allows you to control the number of workers so you can allocate a number of workers suitable for your data size.
Utilization will depend on how evenly your load is spread out in time. If your peak and average loads are quite different then you can make a tradeoff between latency and utilization. If you want to maintain low latency then you can pick the number of workers so that you keep up during peak times. On the other hand if you want to maximize utilization, you can provision the number of workers based on average load. During peak times you would start to accumulate a backlog of messages in pubsub. The system would get rid of that backlog during non-peak times when there was spare capacity.
Right now Dataflow doesn't support writing custom sinks for unbounded data. One way to work around this is to do the writes from a DoFn rather than a sink. This should work just fine provided you can do your writes in an idempotent way so that writing a record multiple times won't cause problems.
Windowing and triggers are a way of dividing your data into finite batches to which aggregations (e.g. grouping, summing, etc...) can be applied. This blog post explains it better than I could (look at the section "windowing").
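As a hedged sketch of that idea with the Beam Java SDK (the window length, early-firing count, and element types below are placeholders), batching for the write step could look roughly like this:

import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.transforms.windowing.AfterFirst;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

public class BatchingSketch {

  // Window the unbounded input into 1-minute batches, firing early once 500 elements
  // have arrived, so each emitted pane can be written to the datastore as one batch.
  static PCollection<KV<Integer, Iterable<String>>> toBatches(PCollection<String> events) {
    return events
        .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
            .triggering(Repeatedly.forever(AfterFirst.of(
                AfterPane.elementCountAtLeast(500),
                AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardMinutes(1)))))
            .withAllowedLateness(Duration.standardMinutes(5))
            .discardingFiredPanes())
        .apply(WithKeys.of(0))       // constant key so one pane becomes one group
        .apply(GroupByKey.create()); // each Iterable<String> is one batch to write
  }
}

A downstream DoFn can then write each Iterable<String> to Keen or Stathat idempotently, as mentioned above. In practice you would shard the key rather than use a single constant value so the grouping itself stays parallel.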
