Setting a Timer to the minimum timestamp seen - google-cloud-dataflow

I would like to set a Timer in Event time that fires based on the smallest timestamp seen in the elements within my DoFn.

For performance reasons the Timer API does not support a read() operation, which for the vast majority of use cases is not a required feature. In the small set of use cases where it is needed, for example when you need to set a Timer in EventTime based on the smallest timestamp seen in the elements within a DoFn, we can make use of a State object to keep track of the value.
Java (SDK 2.10.0)
// In this pattern, a Timer is set to fire based on the lowest timestamp seen in the DoFn.
public class SetEventTimeTimerBasedOnEarliestElementTime {

  private static final Logger LOG =
      LoggerFactory.getLogger(SetEventTimeTimerBasedOnEarliestElementTime.class);

  public static void main(String[] args) {
    // Create pipeline
    PipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(PipelineOptions.class);

    // We will start our timer at a fixed point
    Instant now = Instant.parse("2000-01-01T00:00:00Z");

    // ----- Create some dummy data
    // Create 3 elements, incrementing by 1 minute
    TimestampedValue<KV<String, Integer>> time_1 =
        TimestampedValue.of(KV.of("Key_A", 1), now);
    TimestampedValue<KV<String, Integer>> time_2 =
        TimestampedValue.of(KV.of("Key_A", 2), now.plus(Duration.standardMinutes(1)));
    TimestampedValue<KV<String, Integer>> time_3 =
        TimestampedValue.of(KV.of("Key_A", 3), now.plus(Duration.standardMinutes(2)));

    Pipeline p = Pipeline.create(options);

    // Apply a fixed window of duration 10 min and sum the results
    p.apply(Create.timestamped(time_3, time_2, time_1))
        .apply(Window.<KV<String, Integer>>into(FixedWindows.of(Duration.standardMinutes(10))))
        .apply(ParDo.of(new StatefulDoFnThatSetTimerBasedOnSmallestTimeStamp()));

    p.run();
  }

  /**
   * Set the timer to the lowest timestamp that we see in the stateful DoFn.
   */
  public static class StatefulDoFnThatSetTimerBasedOnSmallestTimeStamp
      extends DoFn<KV<String, Integer>, KV<String, Integer>> {

    // Due to performance considerations there is no read on a timer object.
    // We make use of this Long value to keep track of the currently set timer.
    @StateId("currentTimerValue")
    private final StateSpec<ValueState<Long>> currentTimerValue =
        StateSpecs.value(BigEndianLongCoder.of());

    @TimerId("timer")
    private final TimerSpec timer = TimerSpecs.timer(TimeDomain.EVENT_TIME);

    @ProcessElement
    public void process(
        ProcessContext c,
        @StateId("currentTimerValue") ValueState<Long> currentTimerValue,
        @TimerId("timer") Timer timer) {

      Instant timeStampWeWantToSet = c.timestamp();

      // *********** Set Timer
      // If the timer has never been set, set it.
      // If the timer has been set to a later time than the current element, reset it.
      if (currentTimerValue.read() == null
          || timeStampWeWantToSet.getMillis() < currentTimerValue.read()) {
        timer.set(timeStampWeWantToSet);
        currentTimerValue.write(timeStampWeWantToSet.getMillis());
      }
    }

    @OnTimer("timer")
    public void onMinTimer(
        OnTimerContext otc,
        @StateId("currentTimerValue") ValueState<Long> currentTimerValue,
        @TimerId("timer") Timer timer) {

      // Reset the currentTimerValue
      currentTimerValue.clear();
      LOG.info("Timer @ {} fired", otc.timestamp());
    }
  }
}

Related

Measuring rate of events with Micrometer

In Dropwizard there is something like meter:
https://metrics.dropwizard.io/3.1.0/getting-started/#meters
It lets me measure rate of events just by invoking mark() method on the metric.
How can I do that in Micrometer?
I can use timers, but I don't want to pass a Timer.Sample object around to every place where I need to call the stop() method.
The other thing missing in Micrometer compared to Dropwizard is a metric that can carry a text message, like a gauge in Dropwizard.
Micrometer leverages the strengths of modern metrics backends. So the specific answer to your question depends on which you are using. Take Prometheus for example. The backend can calculate the rate for you.
If you are measuring the rate of how often something is happening, you can determine that using a Counter. Take the logback_events_total counter as an example: it merely counts the number of log messages written.
When alerting or graphing you can then write a query like rate(logback_events_total[1m]) and see the rate at which logs have been written over a 1m window. You can change the window from 1m to 5m or 1h without changing the code.
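For instance, a minimal Counter sketch (the meter name and tag here are illustrative, not from the original answer):
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class LoginMetrics {
  private final Counter logins;

  public LoginMetrics(MeterRegistry registry) {
    // Each call to increment() plays the role of Dropwizard's mark();
    // the backend derives the rate from the monotonically increasing count.
    this.logins = Counter.builder("user.logins").tag("outcome", "success").register(registry);
  }

  public void onLogin() {
    logins.increment();
  }

  public static void main(String[] args) {
    LoginMetrics metrics = new LoginMetrics(new SimpleMeterRegistry());
    metrics.onLogin();
  }
}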
Regarding text based metrics, those aren't useful for alerting (but can be useful when using a join clause). The typical solution in that case is to create a gauge with a value of 1 or 0 and make your text value a tag. For example:
registry.gauge("app.info", Tags.of("version", "1.0.beta3"), this, obj -> 1.0);
We had the same problem. In DropWizard we were able to use meters to get the rate of events per minute, but in Micrometer we could not find a built-in way that worked for us.
We needed rates for counters and percentiles for timers. The PrometheusMeterRegistry gave us percentiles, but no rates.
So we built our own Gauge that tracks a Counter. Every time getValue() is called, it fetches the value from the counter and adds it to the right bucket with the current timestamp. Then from all available measurements it can compute the rate over the last minute.
It looks like this:
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;

import java.util.LinkedList;
import java.util.function.Supplier;

public class OneMinuteRateGauge {

    private static final int WINDOW_SECONDS = 60;

    private final Supplier<Double> valueSupplier;
    private final LinkedList<Bucket> buckets;
    private final Clock clock;

    public OneMinuteRateGauge(String name, Supplier<Double> valueSupplier, MeterRegistry meterRegistry) {
        this(name, valueSupplier, meterRegistry, Clock.SYSTEM);
    }

    public OneMinuteRateGauge(String name, Supplier<Double> valueSupplier, MeterRegistry meterRegistry, Clock clock) {
        this.valueSupplier = valueSupplier;
        this.buckets = new LinkedList<>();
        Gauge.builder(name, this::getValue).register(meterRegistry);
        this.clock = clock;

        // Collect one measurement so we have a faster start
        getValue();
    }

    public synchronized double getValue() {
        // Update the last bucket or create a new one
        long now_millis = clock.monotonicTime() / 1_000_000;
        long now_seconds = now_millis / 1_000;
        short millis = (short) (now_millis - (now_seconds * 1000));
        double value = valueSupplier.get();

        if (buckets.size() != 0 && buckets.getLast().getSeconds() == now_seconds) {
            buckets.getLast().updateValue(millis, value);
        } else {
            buckets.addLast(new Bucket(now_seconds, millis, value));
        }

        // Delete all buckets outside the window except one
        while (2 < buckets.size() && buckets.get(1).getSeconds() + WINDOW_SECONDS < now_seconds) {
            buckets.pollFirst();
        }

        if (buckets.size() == 1) {
            // Not enough data
            return 0;
        } else if (now_seconds <= buckets.getFirst().getSeconds() + WINDOW_SECONDS) {
            // First bucket is inside the window
            return buckets.getLast().getValue() - buckets.getFirst().getValue();
        } else {
            // Find the weighted average between the first two points
            Bucket p0 = buckets.get(0);
            Bucket p1 = buckets.get(1);
            double px = now_millis - (WINDOW_SECONDS * 1000);
            double m = (p1.getValue() - p0.getValue()) / (p1.getTimestampInMillis() - p0.getTimestampInMillis());
            double py = m * (px - p0.getTimestampInMillis()) + p0.getValue();
            return value - py;
        }
    }
}

public class Bucket {

    private long seconds; // Seconds timestamp derived from the Micrometer Clock, used as bucket ID
    private short millis; // 0-999, used for a more exact calculation
    private double value;

    public Bucket(long seconds, short millis, double value) {
        this.seconds = seconds;
        this.millis = millis;
        this.value = value;
    }

    public long getSeconds() {
        return seconds;
    }

    public double getValue() {
        return value;
    }

    public long getTimestampInMillis() {
        return seconds * 1000 + millis;
    }

    public void updateValue(short millis, double value) {
        this.millis = millis;
        this.value = value;
    }
}
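Usage is roughly like this (a sketch with illustrative names; the gauge polls the counter's running total each time it is scraped):
MeterRegistry registry = new SimpleMeterRegistry();
Counter requests = registry.counter("http.requests.total");
// Publish the one-minute rate of the counter as its own gauge.
new OneMinuteRateGauge("http.requests.rate.1m", () -> requests.count(), registry);
requests.increment();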
An alternative way could have been to use CompositeMeterRegistry on the top level and then add both a PrometheusMeterRegistry and a StepMeterRegistry. Prometheus reports percentiles and Step reports rates. Our monitoring system would then have to query two endpoints.
This was a temporary solution until we modified our monitoring system to read the prometheus endpoint and calculate its own rates.
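For reference, a rough sketch of that composite setup (assuming micrometer-registry-prometheus is on the classpath; the step registry would be one of the backend-specific StepMeterRegistry implementations):
CompositeMeterRegistry composite = new CompositeMeterRegistry();
composite.add(new PrometheusMeterRegistry(PrometheusConfig.DEFAULT));
// composite.add(yourStepMeterRegistry); // e.g. a StatsD- or Graphite-style StepMeterRegistry
// Meters registered against the composite are published to every registry added to it.
Counter events = composite.counter("events.processed");
events.increment();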

Can I use setWorkerCacheMb in Apache Beam 2.0+?

My Dataflow job (using Java SDK 2.1.0) is quite slow and is going to take more than a day to process just 50GB. I just pull a whole table from BigQuery (50GB) and join it with one CSV file from GCS (100+MB).
https://cloud.google.com/dataflow/model/group-by-key
I use sideInputs to perform the join (the latter approach in the documentation above), though I think using CoGroupByKey would be more efficient. However, I'm not sure that is the only reason my job is so slow.
From what I found, the side-input cache defaults to 100MB, and I assume mine is slightly over that limit, so each worker continuously re-reads the side input. To improve this, I thought I could use the setWorkerCacheMb method to increase the cache size.
However, it looks like DataflowPipelineOptions does not have this method, and DataflowWorkerHarnessOptions is hidden. Just passing --workerCacheMb=200 in -Dexec.args results in:
An exception occured while executing the Java class.
null: InvocationTargetException:
Class interface com.xxx.yyy.zzz$MyOptions missing a property
named 'workerCacheMb'. -> [Help 1]
How can I use this option? Thanks.
My pipeline:
MyOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(MyOptions.class);
Pipeline p = Pipeline.create(options);

PCollection<TableRow> rows = p.apply("Read from BigQuery",
    BigQueryIO.read().from("project:MYDATA.events"));

// Read account file
PCollection<String> accounts = p.apply("Read from account file",
    TextIO.read().from("gs://my-bucket/accounts.csv")
        .withCompressionType(CompressionType.GZIP));

PCollection<TableRow> accountRows = accounts.apply("Convert to TableRow",
    ParDo.of(new DoFn<String, TableRow>() {
      private static final long serialVersionUID = 1L;

      @ProcessElement
      public void processElement(ProcessContext c) throws Exception {
        String line = c.element();
        CSVParser csvParser = new CSVParser();
        String[] fields = csvParser.parseLine(line);
        TableRow row = new TableRow();
        row = row.set("account_id", fields[0]).set("account_uid", fields[1]);
        c.output(row);
      }
    }));

PCollection<KV<String, TableRow>> kvAccounts = accountRows.apply("Populate account_uid:accounts KV",
    ParDo.of(new DoFn<TableRow, KV<String, TableRow>>() {
      private static final long serialVersionUID = 1L;

      @ProcessElement
      public void processElement(ProcessContext c) throws Exception {
        TableRow row = c.element();
        String uid = (String) row.get("account_uid");
        c.output(KV.of(uid, row));
      }
    }));

final PCollectionView<Map<String, TableRow>> uidAccountView = kvAccounts.apply(View.<String, TableRow>asMap());

// Add account_id from account_uid to event data
PCollection<TableRow> rowsWithAccountID = rows.apply("Join account_id",
    ParDo.of(new DoFn<TableRow, TableRow>() {
      private static final long serialVersionUID = 1L;

      @ProcessElement
      public void processElement(ProcessContext c) throws Exception {
        TableRow row = c.element();
        if (row.containsKey("account_uid") && row.get("account_uid") != null) {
          String uid = (String) row.get("account_uid");
          TableRow accRow = (TableRow) c.sideInput(uidAccountView).get(uid);
          if (accRow == null) {
            LOG.warn("accRow null, {}", row.toPrettyString());
          } else {
            row = row.set("account_id", accRow.get("account_id"));
          }
        }
        c.output(row);
      }
    }).withSideInputs(uidAccountView));

// Insert into BigQuery
WriteResult result = rowsWithAccountID.apply(BigQueryIO.writeTableRows()
    .to(new TableRefPartition(StaticValueProvider.of("MYDATA"), StaticValueProvider.of("dev"),
        StaticValueProvider.of("deadletter_bucket")))
    .withFormatFunction(new SerializableFunction<TableRow, TableRow>() {
      private static final long serialVersionUID = 1L;

      @Override
      public TableRow apply(TableRow row) {
        return row;
      }
    })
    .withCreateDisposition(CreateDisposition.CREATE_NEVER)
    .withWriteDisposition(WriteDisposition.WRITE_APPEND));

p.run();
Historically my system has had two user identifiers: a new one (account_id) and an old one (account_uid). Now I need to add the new account_id to our event data stored in BigQuery retroactively, because the old data only has the old account_uid. The accounts table (which holds the relation between account_uid and account_id) has already been converted to CSV and stored in GCS.
The last function, TableRefPartition, just stores the data into BigQuery's corresponding partition depending on each event's timestamp. The job is still running (2017-10-30_22_45_59-18169851018279768913) and the bottleneck looks like the Join account_id part.
The throughput of that part (xxx elements/s) goes up and down, and according to the graph the estimated size of the side input is 106MB.
If switching to CoGroupByKey improves performance dramatically, I will do so. I was just being lazy and thought using sideInputs made it easier to also handle event data that doesn't have account info.
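(For context, a rough sketch of what the CoGroupByKey variant could look like, reusing rows and kvAccounts from the pipeline above; the tag and variable names are illustrative, not from the question.)
// Rough sketch of the CoGroupByKey alternative.
final TupleTag<TableRow> eventTag = new TupleTag<TableRow>() {};
final TupleTag<TableRow> accountTag = new TupleTag<TableRow>() {};

PCollection<KV<String, TableRow>> kvEvents = rows.apply("Key events by account_uid",
    ParDo.of(new DoFn<TableRow, KV<String, TableRow>>() {
      @ProcessElement
      public void processElement(ProcessContext c) throws Exception {
        TableRow row = c.element();
        if (row.get("account_uid") != null) {
          c.output(KV.of((String) row.get("account_uid"), row));
        }
      }
    }));

PCollection<KV<String, CoGbkResult>> joined = KeyedPCollectionTuple
    .of(eventTag, kvEvents)
    .and(accountTag, kvAccounts)
    .apply("Join events with accounts", CoGroupByKey.<String>create());
// A downstream ParDo can then read result.getAll(eventTag) and result.getAll(accountTag) per key.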
Try one of:
1) setting the option using some code:
options.as(DataflowWorkerHarnessOptions.class).setWorkerCacheMb(500);
2) having your application register DataflowWorkerHarnessOptions with the PipelineOptionsFactory
3) Having your own options class extend DataflowWorkerHarnessOptions (see the sketch below)
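A rough sketch of options 2) and 3); the interface name MyOptions simply mirrors the one in the question:
// Option 3: make your options interface extend DataflowWorkerHarnessOptions,
// so that --workerCacheMb is a recognized property.
public interface MyOptions extends DataflowWorkerHarnessOptions {
  // your own pipeline options go here
}

// Option 2: alternatively, register DataflowWorkerHarnessOptions before parsing the args.
PipelineOptionsFactory.register(DataflowWorkerHarnessOptions.class);
MyOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(MyOptions.class);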
There are a few things you can do to improve the performance of your code:
Your side input is a Map<String, TableRow>, but you're using only a single field in the TableRow - accRow.get("account_id"). How about making it a Map<String, String> instead, having the value be the account_id itself? That'll likely be quite a bit more efficient than the bulky TableRow object.
You could extract the value of the side input into a lazily initialized member variable in your DoFn, to avoid repeated invocations of .sideInput().
That said, this performance is unexpected and we are investigating whether there's something else going on.
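For example, the first suggestion could look roughly like this (a sketch building on accountRows above; uidToAccountIdView is an illustrative name):
// Keep only the account_id in the side input instead of the whole TableRow.
PCollection<KV<String, String>> uidToAccountId = accountRows.apply("account_uid -> account_id",
    ParDo.of(new DoFn<TableRow, KV<String, String>>() {
      @ProcessElement
      public void processElement(ProcessContext c) throws Exception {
        TableRow row = c.element();
        c.output(KV.of((String) row.get("account_uid"), (String) row.get("account_id")));
      }
    }));

final PCollectionView<Map<String, String>> uidToAccountIdView =
    uidToAccountId.apply(View.<String, String>asMap());

// In the join DoFn, look up the id directly:
// String accountId = c.sideInput(uidToAccountIdView).get(uid);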

Stream window processing with Flink and Kinesis streams is not working

I am reading a Kinesis stream using Flink. It aggregates certain events based on the time window and the key. The code does not do anything after the reduce. No data is mapped or written to the output CSV. I have waited for many minutes (even though the time window is just two minutes).
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    env.enableCheckpointing(CommonTimeConstants.TWO_MINUTES.toMilliseconds());
    env.getCheckpointConfig().enableExternalizedCheckpoints(
            CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
    env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, Time.of(1, TimeUnit.MINUTES)));

    Properties consumerConfig = new Properties();
    consumerConfig.put(ConsumerConfigConstants.AWS_REGION, PropertyFileUtils.get("aws.region", ""));
    consumerConfig.put(ConsumerConfigConstants.AWS_ACCESS_KEY_ID, PropertyFileUtils.get("aws.accessKeyId", ""));
    consumerConfig.put(ConsumerConfigConstants.AWS_SECRET_ACCESS_KEY, PropertyFileUtils.get("aws.secretAccessKey", ""));
    consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "TRIM_HORIZON");

    DataStream<APIActionLog> apiLogRecords = env.addSource(new FlinkKinesisConsumer<>(
            ProjectProperties.SOURCE_ENV_PREFIX, // stream name
            new StreamedApiLogRecordDeserializationSchema(),
            consumerConfig));

    apiLogRecords.assignTimestampsAndWatermarks(API_LOG_RECORD_BOUNDED_OUT_OF_ORDERNESS_TIMESTAMP_EXTRACTOR);

    DataStream<Tuple7<String, String, String, String, Timestamp, String, Integer>> skuPlatformTsCount =
            apiLogRecords.flatMap(collecting events...)
                    .keyBy(Key based on some parameters of the event...)
                    .timeWindow(TWO_MINUTES)
                    .reduce(adding up event parameter..., window function...)
                    .map(Map to get a different tuple format...);

    skuPlatformTsCount.writeAsCsv("/Users/uday/Desktop/out.csv", FileSystem.WriteMode.OVERWRITE);

    env.execute("Processing ATC Log Stream");
}

private static final BoundedOutOfOrdernessTimestampExtractor<APIActionLog> API_LOG_RECORD_BOUNDED_OUT_OF_ORDERNESS_TIMESTAMP_EXTRACTOR =
        new BoundedOutOfOrdernessTimestampExtractor<APIActionLog>(TEN_SECONDS) {
            private static final long serialVersionUID = 1L;

            @Override
            public long extractTimestamp(APIActionLog apiActionLog) {
                return apiActionLog.getTs().getTime();
            }
        };
It was a silly mistake. The
apiLogRecords.assignTimestampsAndWatermarks(API_LOG_RECORD_BOUNDED_OUT_OF_ORDERNESS_TIMESTAMP_EXTRACTOR);
call returns a new stream with the timestamps and watermarks assigned. That returned stream should be used in the subsequent operations.
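In other words, something along these lines (the variable name withTimestamps is illustrative):
// Keep the stream returned by assignTimestampsAndWatermarks and use it downstream.
DataStream<APIActionLog> withTimestamps = apiLogRecords
        .assignTimestampsAndWatermarks(API_LOG_RECORD_BOUNDED_OUT_OF_ORDERNESS_TIMESTAMP_EXTRACTOR);
// ...then apply flatMap / keyBy / timeWindow / reduce on withTimestamps instead of apiLogRecords.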

Apache Beam Combine Function not doing anything

I'm trying to use a simple Combine function for the first time, applying fixed windows of 10 seconds. Currently I'm just printing some log output inside the transforms to see whether anything is actually happening, but it seems the transforms after ExtractStreamingMeasures() never get called. I'm running the DirectRunner.
Am I missing something?
PipelineOptions options = PipelineOptionsFactory.create();
PubsubOptions dataflowOptions = options.as(PubsubOptions.class);
dataflowOptions.setStreaming(true);
Pipeline p = Pipeline.create(options);

p
    .apply(Window.<Txn>into(FixedWindows.of(Duration.standardSeconds(10))))
    .apply(ParDo.of(new ExtractStreamingMeasures()))
    .apply(Count.<String>perElement())
    .apply(ParDo.of(new DoSomething()));
Transforms:
static class ExtractStreamingMeasures extends DoFn<Txn, String> {
  @ProcessElement
  public void processElement(ProcessContext c) {
    System.out.println(c.element().getLocationId()); // <= this prints
    c.output(c.element().getLocationId());
  }
}

static class DoSomething extends DoFn<KV<String, Long>, KV<String, Long>> {
  @ProcessElement
  public void processElement(ProcessContext c) {
    System.out.println(c.element()); // <= this doesn't print
    c.output(c.element());
  }
}
Had to provide a different trigger in order for the window to fire properly. The following code will trigger an output every 10 seconds for a window size of 10 minutes.
p.apply("AssignToWindow", Window.<Txn>into(FixedWindows.of(Duration.standardMinutes(10)))
    .triggering(Repeatedly.forever(
        AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(10))))
    .accumulatingFiredPanes()
    .withAllowedLateness(Duration.standardDays(1)))

Stateful ParDo not working on Dataflow Runner

Based on the Javadocs and the blog post at https://beam.apache.org/blog/2017/02/13/stateful-processing.html, I tried a simple de-duplication example using the 2.0.0-beta-2 SDK, which reads a file from GCS (containing a list of JSONs, each with a user_id field) and then runs it through the pipeline explained below.
The input data contains about 146K events, of which only 50 are unique. The entire input is about 50MB, which should be processable in considerably less time than the 2-minute fixed window. I just placed a window there to make sure the per-key-per-window semantics hold without using a GlobalWindow. I run the windowed data through 3 parallel stages to compare the results, each of which is explained below.
[1] Just copies the contents into a new file on GCS - this ensures all the events are being processed as expected, and I verified the contents are exactly the same as the input.
[2] Combine.PerKey on the user_id and pick only the first element from the Iterable - this essentially should deduplicate the data, and it works as expected. The resulting file has the exact number of unique items from the original list of events - 50 elements.
[3] Stateful ParDo which checks whether the key has been seen already and emits an output only when it's not. Ideally, the result from this should match the deduplicated data from [2], but all I am seeing is only 3 unique events. These 3 unique events always point to the same 3 user_ids in the few runs I did.
Interestingly, when I just switch from the DataflowRunner to the DirectRunner and run this whole process locally, the output from [3] matches [2], having only 50 unique elements as expected. So I suspect there is an issue with the DataflowRunner for the stateful ParDo.
public class StatefulParDoSample {

  private static Logger logger = LoggerFactory.getLogger(StatefulParDoSample.class.getName());

  static class StatefulDoFn extends DoFn<KV<String, String>, String> {

    final Aggregator<Long, Long> processedElements = createAggregator("processed", Sum.ofLongs());
    final Aggregator<Long, Long> skippedElements = createAggregator("skipped", Sum.ofLongs());

    @StateId("keyTracker")
    private final StateSpec<Object, ValueState<Integer>> keyTrackerSpec =
        StateSpecs.value(VarIntCoder.of());

    @ProcessElement
    public void processElement(
        ProcessContext context,
        @StateId("keyTracker") ValueState<Integer> keyTracker) {
      processedElements.addValue(1L);

      final String userId = context.element().getKey();
      int wasSeen = firstNonNull(keyTracker.read(), 0);

      if (wasSeen == 0) {
        keyTracker.write(1);
        context.output(context.element().getValue());
      } else {
        keyTracker.write(wasSeen + 1);
        skippedElements.addValue(1L);
      }
    }
  }

  public static void main(String[] args) {
    DataflowPipelineOptions pipelineOptions = PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
    pipelineOptions.setRunner(DataflowRunner.class);
    pipelineOptions.setProject("project-name");
    pipelineOptions.setStagingLocation(GCS_STAGING_LOCATION);
    pipelineOptions.setStreaming(false);
    pipelineOptions.setAppName("deduper");

    Pipeline p = Pipeline.create(pipelineOptions);

    final ObjectMapper mapper = new ObjectMapper();

    PCollection<KV<String, String>> keyedEvents =
        p
            .apply(TextIO.Read.from(GCS_SAMPLE_INPUT_FILE_PATH))
            .apply(WithKeys.of(new SerializableFunction<String, String>() {
              @Override
              public String apply(String input) {
                try {
                  Map<String, Object> eventJson = mapper.readValue(input, Map.class);
                  return (String) eventJson.get("user_id");
                } catch (Exception e) {
                }
                return "";
              }
            }))
            .apply(
                Window.into(
                    FixedWindows.of(Duration.standardMinutes(2))
                )
            );

    keyedEvents
        .apply(ParDo.of(new StatefulDoFn()))
        .apply(TextIO.Write.to(GCS_SAMPLE_OUTPUT_FILE_PATH).withNumShards(1));

    keyedEvents
        .apply(Values.create())
        .apply(TextIO.Write.to(GCS_SAMPLE_COPY_FILE_PATH).withNumShards(1));

    keyedEvents
        .apply(Combine.perKey(new SerializableFunction<Iterable<String>, String>() {
          @Override
          public String apply(Iterable<String> input) {
            return !input.iterator().hasNext() ? "empty" : input.iterator().next();
          }
        }))
        .apply(Values.create())
        .apply(TextIO.Write.to(GCS_SAMPLE_COMBINE_FILE_PATH).withNumShards(1));

    PipelineResult result = p.run();
    result.waitUntilFinish();
  }
}
This was a bug in the Dataflow service in batch mode, fixed in the upcoming 0.6.0 Beam release (or HEAD if you track the bleeding edge).
Thank you for bringing it to my attention! For reference, or if anything else comes up, this was tracked by BEAM-1611.

Resources