Dart isolate group implementation

I'm trying to understand how memory and mechanisms such as garbage collection are implemented for isolate groups in Dart.
The article Introduction to Dart VM includes, at the time of this writing, a diagram of the VM's internals.
However, that diagram was made before isolate groups, which reached the stable channel in Dart 2.15. My question is: what has changed internally, relative to that diagram, after the addition of isolate groups?
Related research:
https://dart.dev/guides/language/concurrency
https://github.com/dart-lang/sdk/issues/36097
https://github.com/dart-lang/sdk/issues/47164
https://github.com/dart-lang/sdk/issues/46754
https://www.youtube.com/watch?v=SXT7nir1B48
https://www.youtube.com/watch?v=yUMjt0AxVHU

Related

Replaying data into Apache Beam pipeline over Google Cloud Pub/Sub without overloading other subscribers

What I'm doing: I'm building a system in which one Cloud Pub/Sub topic will be read by dozens of Apache Beam pipelines in streaming mode. Each time I deploy a new pipeline, it should first process several years of historic data (stored in BigQuery).
The problem: If I replay historic data into the topic whenever I deploy a new pipeline (as suggested here), it will also be delivered to every other pipeline currently reading the topic, which would be wasteful and very costly. I can't use Cloud Pub/Sub Seek (as suggested here) as it stores a maximum of 7 days history (more details here).
The question: What is the recommended pattern to replay historic data into new Apache Beam streaming pipelines with minimal overhead (and without causing event time/watermark issues)?
Current ideas: I can think of three approaches to solving the problem; however, none of them seem very elegant and I have not seen any of them mentioned in the documentation, common patterns (part 1 or part 2) or elsewhere. They are:
1. Ideally, I could use Flatten to merge the real-time ReadFromPubSub with a one-off BigQuerySource; however, I see three potential issues: a) I can't account for data that has already been published to Pub/Sub but hasn't yet made it into BigQuery, b) I am not sure whether the BigQuerySource might inadvertently be rerun if the pipeline is restarted, and c) I am unsure whether BigQuerySource works in streaming mode at all (per the table here).
2. I create a separate replay topic for each pipeline and then use Flatten to merge the ReadFromPubSubs for the main topic and the pipeline-specific replay topic. After deploying the pipeline, I replay historic data to its replay topic.
3. I create dedicated topics for each pipeline and deploy a separate pipeline that reads the main topic and broadcasts its messages to the pipeline-specific topics. Whenever a replay is needed, I replay data into the relevant pipeline-specific topic.
Out of your three ideas:
The first one will not work because currently the Python SDK does not support unbounded reads from bounded sources (meaning that you can't add a ReadFromBigQuery to a streaming pipeline).
The third one sounds overly complicated, and maybe costly.
I believe your best bet at the moment is, as you rightly pointed out, to replay your table into an extra Pub/Sub topic that you Flatten with your main topic.
I will check if there's a better solution, but for now, option #2 should do the trick.
Also, I'd refer you to an interesting talk from Lyft on doing this for their architecture (in Flink).
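For reference, here is a minimal sketch of that shape of pipeline. It uses the Beam Java SDK rather than the Python SDK from the question (the pattern is identical), and the project, topic names and the event_ts timestamp attribute are placeholders of mine, not values from the question:
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;

public class ReplayWithFlatten {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Live events, shared by every pipeline reading the main topic.
    PCollection<String> live = p.apply("ReadMainTopic",
        PubsubIO.readStrings()
            .fromTopic("projects/my-project/topics/main-events")
            .withTimestampAttribute("event_ts"));

    // Pipeline-specific replay topic; historic rows are published here once, after deployment.
    PCollection<String> replay = p.apply("ReadReplayTopic",
        PubsubIO.readStrings()
            .fromTopic("projects/my-project/topics/replay-for-this-pipeline")
            .withTimestampAttribute("event_ts"));

    // Downstream transforms see a single merged, unbounded collection.
    PCollection<String> events =
        PCollectionList.of(live).and(replay).apply(Flatten.pCollections());

    // ... windowing, aggregations and sinks go here ...

    p.run();
  }
}
Publishing the replayed rows with their original event timestamps (here via the hypothetical event_ts attribute) is what keeps the merged stream from producing watermark surprises.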

How do I make sure my Dataflow pipeline scales?

We've often seen people write Dataflow pipelines that don't scale well. This is frustrating since Dataflow is meant to scale transparently, but there still are some antipatterns in Dataflow pipelines that make it difficult to scale. What are some common antipatterns and tips for avoiding them?
Scaling Your Dataflow Pipeline
Hi, Reuven Lax here. I’m a member of the Dataflow engineering team, where I lead the design and implementation of our streaming runner. Prior to Dataflow I led the team that built MillWheel for a number of years. MillWheel was described in this VLDB 2013 paper, and is the basis for the streaming technology underlying Dataflow.
Dataflow usually removes the need for you to think too much about how to make a pipeline scale. A lot of work has gone into sophisticated algorithms that can automatically parallelize and tune your pipeline across many machines. However as with any such system, there are some anti-patterns that can bottleneck your pipeline at scale. In this post we will go over three of these anti-patterns, and discuss how to address them. It’s assumed that you are already familiar with the Dataflow programming model. If not, I recommend beginning with our Getting Started guide and Tyler Akidau’s Streaming 101 and Streaming 102 blog posts. You may also read the Dataflow model paper published in VLDB 2015.
Today we’re going to talk about scaling your pipeline - or more specifically, why your pipeline might not scale. When we say scalability, we mean the ability of the pipeline to operate efficiently as input size increases and key distribution changes. The scenario: you’ve written a cool new Dataflow pipeline, which the high-level operations we provide made easy to write. You’ve tested this pipeline locally on your machine using DirectPipelineRunner and everything looks fine. You’ve even tried deploying it on a small number of Compute VMs, and things still look rosy. Then you try and scale up to a larger data volume, and the picture becomes decidedly worse. For a batch pipeline, it takes far longer than expected for the pipeline to complete. For a streaming pipeline, the lag reported in the Dataflow UI keeps increasing as the pipeline falls further and further behind. We’re going to explain some reasons this might happen, and how to address them.
Expensive Per-Record Operations
One common problem we see is pipelines that perform needlessly expensive or slow operations for each record processed. Technically this isn’t a hard scaling bottleneck - given enough resources, Dataflow can still distribute this pipeline on enough machines to make it perform well. However when running over many millions or billions of records, the cost of these per-record operations adds up to an unexpectedly-large number. Usually these problems aren’t noticeable at all at lower scale.
Here’s an example of one such operation, taken from a real Dataflow pipeline.
import javax.json.Json;
...
PCollection<OutType> output = input.apply(ParDo.of(new DoFn<InType, OutType>() {
  public void processElement(ProcessContext c) {
    JsonReader reader = Json.createReader();
    // Perform some processing on entry.
    ...
  }
}));
At first glance it’s not obvious that anything is wrong with this code, yet when run at scale this pipeline ran extremely slowly.
Since the actual business logic of our code shouldn't have caused a slowdown, we suspected that something was adding per-record overhead to our pipeline. To get more information on this, we had to ssh to the VMs to get actual thread profiles from workers. After a bit of digging, we found threads were often stuck in the following stack trace:
java.util.zip.ZipFile.getEntry(ZipFile.java:308)
java.util.jar.JarFile.getEntry(JarFile.java:240)
java.util.jar.JarFile.getJarEntry(JarFile.java:223)
sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:1005)
sun.misc.URLClassPath$JarLoader.findResource(URLClassPath.java:983)
sun.misc.URLClassPath$1.next(URLClassPath.java:240)
sun.misc.URLClassPath$1.hasMoreElements(URLClassPath.java:250)
java.net.URLClassLoader$3$1.run(URLClassLoader.java:601)
java.net.URLClassLoader$3$1.run(URLClassLoader.java:599)
java.security.AccessController.doPrivileged(Native Method)
java.net.URLClassLoader$3.next(URLClassLoader.java:598)
java.net.URLClassLoader$3.hasMoreElements(URLClassLoader.java:623)
sun.misc.CompoundEnumeration.next(CompoundEnumeration.java:45)
sun.misc.CompoundEnumeration.hasMoreElements(CompoundEnumeration.java:54)
java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:354)
java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
javax.json.spi.JsonProvider.provider(JsonProvider.java:89)
javax.json.Json.createReader(Json.java:208)
<.....>.processElement(<filename>.java:174)
Each call to Json.createReader was searching the classpath trying to find a registered JsonProvider. As you can see from the stack trace, this involves loading and unzipping JAR files. Doing this per record on a high-scale pipeline is not likely to perform very well!
The solution here was for the user to create a static JsonReaderFactory and use that to instantiate the individual reader objects. You might be tempted to create a JsonReaderFactory per bundle of records instead, inside Dataflow’s startBundle method. However, while this will work well for a batch pipeline, in streaming mode the bundles may be very small - sometimes just a few records. As a result, we don’t recommend doing expensive work per bundle either. Even if you believe your pipeline will only be used in batch mode, you may in the future want to run it as a streaming pipeline. So future-proof your pipelines, by making sure they’ll work well in either mode!
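As a hedged illustration of that fix (the DoFn name and the assumption that input elements are JSON strings are mine, not from the original pipeline):
import java.io.StringReader;
import java.util.Collections;
import javax.json.Json;
import javax.json.JsonReader;
import javax.json.JsonReaderFactory;

class ParseJsonFn extends DoFn<String, OutType> {
  // Created once when the class is loaded. The provider lookup (the classpath scan
  // visible in the stack trace above) happens here, not on every record.
  private static final JsonReaderFactory JSON_FACTORY =
      Json.createReaderFactory(Collections.<String, Object>emptyMap());

  @Override
  public void processElement(ProcessContext c) {
    // Per-record cost is now just allocating a reader from the cached factory.
    JsonReader reader = JSON_FACTORY.createReader(new StringReader(c.element()));
    // Perform some processing on entry and emit output.
  }
}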
Hot Keys
A fundamental primitive in Dataflow is GroupByKey. GroupByKey allows one to group a PCollection of key-value pairs so that all values for a specific key are grouped together to be processed as a unit. Most of Dataflow's built-in aggregating transforms - Count, Top, Combine, etc. - use GroupByKey under the covers. You might have a hot key problem if a single worker is extremely busy (e.g. high CPU use determined by looking at the set of GCE workers for the job) while other workers are idle, yet the pipeline falls farther and farther behind.
The DoFn that processes the result of a GroupByKey is given an input type of KV<KeyType, Iterable<ValueType>>. This means that the entire set of all values for that key (within the current window if using windowing) is modeled as a single Iterable element. In particular, this means that all values for that key must be processed on the same machine, in fact on the same thread. Performance problems can occur in the presence of hot keys - when one or more keys receive data faster than can be processed on a single cpu. For example, consider the following code snippet
p.apply(Read.from(new UserWebEventSource()))
 .apply(new ExtractBrowserString())
 .apply(Window.<KV<String, Event>>into(FixedWindows.of(Duration.standardSeconds(1))))
 .apply(GroupByKey.<String, Event>create())
 .apply(ParDo.of(new ProcessEventsByBrowser()));
This code keys all user events by the user's web browser, and then processes all events for each browser as a unit. However, there is a small number of very popular browsers (such as Chrome, IE, Firefox and Safari), and those keys will be very hot - possibly too hot to process on one CPU. In addition to performance, this is also a scalability bottleneck. Adding more workers to the pipeline will not help if there are four hot keys, since those keys can be processed on at most four workers. You've structured your pipeline so that Dataflow can't scale it up without violating the API contract.
One way to alleviate this is to structure the ProcessEventsByBrowser DoFn as a combiner. A combiner is a special type of user function that allows piecewise processing of the iterable. For example, if the goal was to count the number of events per browser per second, Count.perKey() can be used instead of a ParDo. Dataflow is able to lift part of the combining operation above the GroupByKey, which allows for more parallelism (for those of you coming from the Database world, this is similar to pushing a predicate down); some of the work can be done in a previous stage which hopefully is better distributed.
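For example, the browser snippet above could be rewritten along these lines (a sketch, reusing the placeholder names from that snippet):
p.apply(Read.from(new UserWebEventSource()))
 .apply(new ExtractBrowserString())
 .apply(Window.<KV<String, Event>>into(FixedWindows.of(Duration.standardSeconds(1))))
 // Count.perKey() replaces the GroupByKey + ParDo pair. Because it is a combiner,
 // Dataflow can pre-aggregate partial counts on many workers before the shuffle,
 // so a hot browser key no longer has to be funneled through a single thread.
 .apply(Count.<String, Event>perKey());  // yields KV<String, Long>: events per browser per window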
Unfortunately, while using a combiner often helps, it may not be enough - especially if the hot keys are very hot; this is particularly true for streaming pipelines. You might also see this when using the global variants of combine (Combine.globally(), Count.globally(), Top.largest(), among others). Under the covers these operations are performing a per-key combine on a single static key, and may not perform well if the volume to this key is too high. To address this we allow you to provide extra parallelism hints using Combine.PerKey.withHotKeyFanout or Combine.Globally.withFanout. These operations will create an extra step in your pipeline to pre-aggregate the data on many machines before performing the final aggregation on the target machines. There's no magic number for these operations, but the general strategy would be to split any hot key into enough sub-shards so that any single shard is well under the per-worker throughput that your pipeline can sustain.
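As a hedged sketch of those hints, where keyedEvents, allEvents and myCombineFn stand in for whatever collections and CombineFn your pipeline already has, and the fanout of 16 is an arbitrary illustration:
// Per-key combine with a hot-key fanout hint: hot keys are first split across
// intermediate sub-shards that pre-aggregate in parallel, and those partial
// results are then combined into the final per-key value.
keyedEvents.apply(Combine.<String, Event, Long>perKey(myCombineFn).withHotKeyFanout(16));

// The analogous hint for the global variants.
allEvents.apply(Combine.globally(myCombineFn).withFanout(16));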
Large Windows
Dataflow provides a sophisticated windowing facility for bucketing data according to time. This is most useful in streaming pipelines when processing unbounded data; however, it is fully supported for batch, bounded pipelines as well. When a windowing strategy has been attached to a PCollection, any subsequent grouping operation (most notably GroupByKey) performs a separate grouping per window. Unlike other systems that provide only globally-synchronized windows, Dataflow windows the data for each key separately. This is what allows us to provide flexible per-key windows such as sessions. For more information, I recommend that you read the windowing guide in the Dataflow documentation.
As a consequence of the fact that windows are per key, Dataflow buffers elements on the receiver side while waiting for each window to close. If using very-long windows - e.g. a 24-hour fixed window - this means that a lot of data has to be buffered, which can be a performance bottleneck for the pipeline. This can manifest as slowness (like for hot keys), or even as out of memory errors on the workers (visible in the logs). We again recommend using combiners to reduce the data size. The difference between writing this:
pcollection.apply(Window.<KV<KeyType, ValueType>>into(FixedWindows.of(Duration.standardDays(1))))
    .apply(GroupByKey.<KeyType, ValueType>create())
    .apply(ParDo.of(new DoFn<KV<KeyType, Iterable<ValueType>>, Long>() {
      public void processElement(ProcessContext c) {
        // Iterables is Guava's com.google.common.collect.Iterables.
        c.output((long) Iterables.size(c.element().getValue()));
      }
    }));
… and this ...
pcollection.apply(Window.<KV<KeyType, ValueType>>into(FixedWindows.of(Duration.standardDays(1))))
    .apply(Count.<KeyType, ValueType>perKey());
… isn’t just brevity. In the latter snippet Dataflow knows that a count combiner is being applied, and so only needs to store the count so far for each key, no matter how long the window is. In contrast, Dataflow understands less about the first snippet of code and is forced to buffer an entire day’s worth of data on receivers, even though the two snippets are logically equivalent!
If it’s impossible to express your operation as a combiner, then we recommend looking at the triggers API. This will allow you to optimistically process portions of the window before the window closes, and so reduce the size of buffered data.
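For instance, a sketch of early firings on the one-day window from above; the ten-minute interval and the ProcessPartialResultsFn DoFn are illustrative placeholders:
pcollection
    .apply(Window.<KV<KeyType, ValueType>>into(FixedWindows.of(Duration.standardDays(1)))
        // Emit an early pane of results periodically while the 1-day window is still open...
        .triggering(AfterWatermark.pastEndOfWindow()
            .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardMinutes(10))))
        .withAllowedLateness(Duration.ZERO)
        // ...and drop elements once their pane has fired, so receivers never buffer a full day of values.
        .discardingFiredPanes())
    .apply(GroupByKey.<KeyType, ValueType>create())
    .apply(ParDo.of(new ProcessPartialResultsFn()));  // hypothetical DoFn that handles incremental, per-pane results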
Note that many of these limitations do not apply to the batch runner. However as mentioned above, you're always better off future proofing your pipeline and making sure it runs well in both modes.
We've talked about hot keys, large windows, and expensive per-record operations. Other guidance can be found in our documentation. Although this post has focused on challenges you may encounter with scaling your pipeline, there are many benefits to Dataflow that are largely transparent -- things like dynamic work rebalancing to minimize straggler effects, throughput-based autoscaling, and job resource management adapt to many different pipeline and data shapes without user intervention. We're always trying to make our system more adaptive, and plan to automatically incorporate some of the above strategies into the core execution engine over time. Thanks for reading, and happy Dataflowing!

Centreon/Icinga: command by services

I was wondering whether it is possible to find out which services a given command is associated with in Centreon. For example, which services use the 'check_uptime' command?
Maybe it's possible with some SQL magic queries, but I've never done that.
Your question reminded me of this week's Icinga 2 API development, where dependency tracking for objects has been implemented. This is important in case someone deletes a CheckCommand at runtime while many hosts/services still depend on it: by default the API will deny the removal, whereas a cascading delete would cause the entire dependency tree to be deleted.
A side-effect from that development is the output of such object dependencies inside the status query for these objects.
Check the screenshot over here: https://twitter.com/dnsmichi/status/637586226711764992
It may not help you right now, but 2.4 is due to be released in November.

What makes erlang scalable?

I am working on an article describing the fundamentals of technologies used by scalable systems. I have worked with Erlang before in a self-learning exercise. I have gone through several articles but have not been able to answer the following questions:
What is in the implementation of Erlang that makes it scalable? What makes it able to run concurrent processes more efficiently than technologies like Java?
What is the relation between functional programming and parallelization? With the declarative syntax of Erlang, do we achieve run-time efficiency?
Does process state not make it heavy? If we have thousands of concurrent users and spawn an equal number of processes as gen_server (or any other equivalent pattern), each process would maintain its own state. With so many processes, will that not be a drain on the RAM?
If a process has to make DB operations and we spawn multiple instances of that process, eventually the DB will become a bottleneck. This happens even if we use traditional models like Apache-PHP. Almost every business application needs DB access. What then do we gain from using Erlang?
How does process restart help? A process crashes when something is wrong in its logic or in the data. OTP allows you to restart a process. If the logic or data does not change, why would the process not crash again and keep crashing always?
Most articles sing praises about Erlang citing its use in Facebook and Whatsapp. I salute Erlang for being scalable, but also want to technically justify its scalability.
Even if I find answers to these queries on an existing link, that will help.
Regards,
Yash
Briefly:
It's immutable. You have no mutable variables, only terms, tuples and atoms. Program execution can be interrupted at any point; the model is fully transactional.
Processes are even more lightweight than .NET threads, and they are isolated.
It's made for communication. Millions of connections? Fully asynchronous? Maximum thread safety? A big cross-platform environment built for one purpose, to scale and communicate? That's the Ericsson language, the first in this sphere.
You can choose imitators like F#, Scala/Akka or Haskell; they try to copy features from Erlang, but only Erlang was born from, and for, one purpose: telecom.
Answers to your other questions can be found on erlang.com, and I suggest you read the handbook. Erlang was built for specific aims, so it's not for every task, and if you are asking about awful things like PHP, Erlang will not be your language.
I'm no Erlang developer (yet), but from what I have read, one of the features that makes it very scalable is that Erlang has its own lightweight processes that use message passing to communicate with each other. Because of this there is no shared state and no locking, which is what you get in, for example, a multi-threaded Java application.
Another difference compared to Java is that the Erlang VM garbage-collects each small process individually, which takes hardly any time per collection, whereas the JVM performs garbage collection over the whole VM at once.
If you get problems with bottlenecks from database connections, you could start by using a database pooling app running against, say, a replicated PostgreSQL cluster, or, if you still have bottlenecks, use a replicated NoSQL setup with Mnesia, Riak or CouchDB.
I think process restarts can be very useful when you are experiencing rare bugs that only appear randomly and only when specific criteria are fulfilled. Bugs that cause the application to crash as soon as you restart the app should ideally be fixed, or contained with a circuit breaker so that the failure does not spread further.
Here is one way process restart helps: by not having to deal with every possible error case. Say you have a program that divides numbers, and some user enters a zero to divide by. Instead of checking for that possible error (and tons more), just code the "happy case" and let the process crash when he enters 3/0. It simply restarts, and he can figure out what he did wrong.
You can extend this to an infinite number of situations (attempting to read from a non-existent file because the user misspelled it, etc.).
The big reason for process restart being valuable is that not every error happens every time, and checking that it worked is verbose.
Error handling is typically verbose, so writing it interspersed with the logic of the task at hand can make the code harder to understand. Moving that logic outside of the task allows you to more clearly distinguish between "doing things" code and "it broke" code. You just let the thing that had a problem fail, and handle it as needed by a supervising party.
Since most errors don't mean that the entire program must stop, only that that particular thing isn't working right, by just restarting the part that broke, you can keep operating in a state of degraded functionality, instead of being down, while you repair the problem.
It should also be noted that the failure recovery is bounded. You have to lay out the limits for how much failure in a certain period of time is too much. If you exceed that limit, the failure propagates to another level of supervision. Each restart includes doing any needed process initialization, which is sometimes enough to fix the problem. For example, in dev, I've accidentally deleted a database file associated with a process. The crashes cascaded up to the level where the file was first created, at which point the problem rectified itself, and everything carried on.

Cloud Dataflow Running really slow when reading/writing from Cloud Storage (GCS)

Since using the release of the latest build of Cloud Dataflow (0.4.150414) our jobs are running really slow when reading from cloud storage (GCS). After running for 20 minutes with 10 VMs we were only able to read in about 20 records when previously we could read in millions without issue.
It seems to be hanging, although no errors are being reported back to the console.
We received an email informing us that the latest build would be slower and that it could be countered by using more VMs but we got similar results with 50 VMs.
Here is the job id for reference: 2015-04-22_22_20_21-5463648738106751600
Instance: n1-standard-2
Region: us-central1-a
Your job seems to be using side inputs to a DoFn. Since there has been a recent change in how Cloud Dataflow SDK for Java handles side inputs, it is likely that your performance issue is related to that. I'm reposting my answer from a related question.
The evidence seems to indicate that there is an issue with how your pipeline handles side inputs. Specifically, it's quite likely that side inputs may be getting re-read from BigQuery again and again, for every element of the main input. This is completely orthogonal to the changes to the type of virtual machines used by Dataflow workers, described below.
This is closely related to the changes made in the Dataflow SDK for Java, version 0.3.150326. In that release, we changed the side input API to apply per window. Calls to sideInput() now return values only in the specific window corresponding to the window of the main input element, and not the whole side input PCollectionView. Consequently, sideInput() can no longer be called from startBundle and finishBundle of a DoFn because the window is not yet known.
For example, the following code snippet has an issue that would cause re-reading side input for every input element.
@Override
public void processElement(ProcessContext c) throws Exception {
  Iterable<String> uniqueIds = c.sideInput(iterableView);
  for (String item : uniqueIds) {
    [...]
  }
  c.output([...]);
}
This code can be improved by caching the side input in a List member variable of the transform (assuming it fits into memory) during the first call to processElement, and using that cached List instead of the side input in subsequent calls.
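Here is a hedged sketch of that workaround; the class, type and field names are illustrative, and it assumes the side input is in the global window so that every main-input element sees the same value:
import java.util.ArrayList;
import java.util.List;

class ProcessWithCachedSideInputFn extends DoFn<InputType, OutputType> {
  private final PCollectionView<Iterable<String>> iterableView;
  // Cached copy of the side input; transient so it is rebuilt on each worker after deserialization.
  private transient List<String> cachedUniqueIds;

  ProcessWithCachedSideInputFn(PCollectionView<Iterable<String>> iterableView) {
    this.iterableView = iterableView;
  }

  @Override
  public void processElement(ProcessContext c) throws Exception {
    if (cachedUniqueIds == null) {
      // First element on this instance: materialize the side input once instead of re-reading it per element.
      cachedUniqueIds = new ArrayList<>();
      for (String id : c.sideInput(iterableView)) {
        cachedUniqueIds.add(id);
      }
    }
    for (String item : cachedUniqueIds) {
      // [...] same per-element logic as in the snippet above
    }
    // c.output([...]);
  }
}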
This workaround should restore the performance you were seeing before, when side inputs could have been called from startBundle. Long-term, we will work on better caching for side inputs. (If this doesn't help fully resolve the issue, please reach out to us via email and share the relevant code snippets.)
Separately, there was, indeed, an update to the Cloud Dataflow Service around 4/9/15 that changed the default type of virtual machines used by Dataflow workers. Specifically, we reduced the default number of cores per worker because our benchmarks showed it as cost effective for typical jobs. This is not a slowdown in the Dataflow Service of any kind -- it just runs with less resources per worker, by default. Users are still given the options to override both the number of workers as well as the type of the virtual machine used by workers.
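For completeness, a minimal sketch of overriding those defaults when constructing the pipeline (the values shown are just examples):
// Equivalent to passing --numWorkers=10 --workerMachineType=n1-standard-4 on the command line.
DataflowPipelineOptions options =
    PipelineOptionsFactory.fromArgs(args).withValidation().as(DataflowPipelineOptions.class);
options.setNumWorkers(10);                     // how many workers to start with
options.setWorkerMachineType("n1-standard-4"); // GCE machine type used by each worker
Pipeline p = Pipeline.create(options);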
We had a similar issue. It occurred when the side input was reading from a BigQuery table that had had its data streamed in, rather than bulk loaded. When we copied the table(s) and read from the copies instead, everything worked fine.
If your tables are streamed, try copying them and reading the copies instead. This is a workaround.
See: Dataflow performance issues

Resources