So far I've been dealing with a scenario in Apache Beam where, depending on the HTTP status code a request returns, I may need to preserve the elements so they can be retried in the next iteration.
I've been implementing this with my own inner code, using only a time-based trigger.
.apply(
    "Sample Window",
    Window.into(FixedWindows.of(Duration.standardMinutes(1)))
        .triggering(AfterProcessingTime
            .pastFirstElementInPane()
            .plusDelayOf(Duration.standardSeconds(1)))
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes()
)
I was hardcoding my logic to batch the request at, let's say, 200 events, and also storing those events in memory in case the request failed.
However, checking the docs I saw composite triggers:
Repeatedly.forever(AfterFirst.of(
    AfterPane.elementCountAtLeast(100),
    AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardMinutes(1))))
So I did the same in my case:
.apply(
    "Sample Window",
    Window.<KV<String, String>>into(FixedWindows.of(Duration.standardMinutes(1)))
        .triggering(
            AfterFirst.of(
                AfterPane.elementCountAtLeast(200),
                AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(1))
            )
        )
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes()
)
So I've been wondering now...
If the trigger fires on the element count within the 1-minute window, what happens to those events? Are they reprocessed? Should I manually remove them from the window?
I'm also asking about the case where the request for those 200 elements fails. How can I make them remain in the window?
In your trigger you are setting .discardingFiredPanes(), which "discards elements in a pane after they are triggered."
Any subsequent panes will not contain elements that have already been output.
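If you need already-fired elements to stay available, the knob to look at is the accumulation mode. A minimal sketch under that assumption (same KV<String, String> input as above; whether retrying via accumulated panes fits your retry semantics is for you to judge):

.apply(
    "Sample Window",
    Window.<KV<String, String>>into(FixedWindows.of(Duration.standardMinutes(1)))
        .triggering(
            // Repeatedly.forever(...) keeps the composite trigger firing for the
            // lifetime of the window instead of only on the first pane.
            Repeatedly.forever(
                AfterFirst.of(
                    AfterPane.elementCountAtLeast(200),
                    AfterProcessingTime.pastFirstElementInPane()
                        .plusDelayOf(Duration.standardSeconds(1)))))
        .withAllowedLateness(Duration.ZERO)
        // accumulatingFiredPanes() keeps fired elements in the window, so each
        // new pane contains everything seen so far in that window.
        .accumulatingFiredPanes()
)

Note that accumulating panes re-emit previously fired elements, so your downstream request logic has to tolerate seeing the same element in more than one pane.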
I am trying to optimise LCP for this page. I read an article on LCP optimisation where I also found a script that helps determine which LCP sub-part most of the time is spent on. Script:
const LCP_SUB_PARTS = [
  'Time to first byte',
  'Resource load delay',
  'Resource load time',
  'Element render delay',
];

new PerformanceObserver((list) => {
  const lcpEntry = list.getEntries().at(-1);
  const navEntry = performance.getEntriesByType('navigation')[0];
  const lcpResEntry = performance
    .getEntriesByType('resource')
    .filter((e) => e.name === lcpEntry.url)[0];

  // Ignore LCP entries that aren't images to reduce DevTools noise.
  // Comment this line out if you want to include text entries.
  if (!lcpEntry.url) return;

  // Compute the start and end times of each LCP sub-part.
  // WARNING! If your LCP resource is loaded cross-origin, make sure to add
  // the `Timing-Allow-Origin` (TAO) header to get the most accurate results.
  const ttfb = navEntry.responseStart;
  const lcpRequestStart = Math.max(
    ttfb,
    // Prefer `requestStart` (if TAO is set), otherwise use `startTime`.
    lcpResEntry ? lcpResEntry.requestStart || lcpResEntry.startTime : 0
  );
  const lcpResponseEnd = Math.max(
    lcpRequestStart,
    lcpResEntry ? lcpResEntry.responseEnd : 0
  );
  const lcpRenderTime = Math.max(
    lcpResponseEnd,
    // Prefer `renderTime` (if TAO is set), otherwise use `loadTime`.
    lcpEntry ? lcpEntry.renderTime || lcpEntry.loadTime : 0
  );

  // Clear previous measures before making new ones.
  // Note: due to a bug this does not work in Chrome DevTools.
  // LCP_SUB_PARTS.forEach(performance.clearMeasures);

  // Create measures for each LCP sub-part for easier
  // visualization in the Chrome DevTools Performance panel.
  const lcpSubPartMeasures = [
    performance.measure(LCP_SUB_PARTS[0], {
      start: 0,
      end: ttfb,
    }),
    performance.measure(LCP_SUB_PARTS[1], {
      start: ttfb,
      end: lcpRequestStart,
    }),
    performance.measure(LCP_SUB_PARTS[2], {
      start: lcpRequestStart,
      end: lcpResponseEnd,
    }),
    performance.measure(LCP_SUB_PARTS[3], {
      start: lcpResponseEnd,
      end: lcpRenderTime,
    }),
  ];

  // Log helpful debug information to the console.
  console.log('LCP value: ', lcpRenderTime);
  console.log('LCP element: ', lcpEntry.element);
  console.table(
    lcpSubPartMeasures.map((measure) => ({
      'LCP sub-part': measure.name,
      'Time (ms)': measure.duration,
      '% of LCP': `${
        Math.round((1000 * measure.duration) / lcpRenderTime) / 10
      }%`,
    }))
  );
}).observe({type: 'largest-contentful-paint', buffered: true});
For me, this was the result at the start, with 4x CPU slowdown and a Fast 3G connection:
After that, since render delay was the area where I should focus on, I moved some of the scripts to the footer and also made the "deferred" scripts "async". This is the result:
We can see there is a clear improvement in LCP after the change, but when I test with Lighthouse the result is different.
Before:
After:
I am in a dilemma now about what step to take next. Please suggest!
I ran a trace of the URL you linked in your question, and the first thing I noticed is that your LCP resource finishes loading quite early in the page load, but it isn't able to render until a file called mirage2.min.js finishes loading.
This explains why the "Element render delay" portion of your LCP is so long, and moving your scripts to the bottom of the page or setting defer on them is not going to solve that problem. The solution is to make it so your LCP image can render without having to wait for that JavaScript file to finish loading.
Another thing I noticed is that this mirage2.min.js file is loaded from ajax.cloudflare.com, which makes me think it's a "feature" offered by Cloudflare and not something you set up yourself.
Based on what I see here, I'm assuming that's true:
https://support.cloudflare.com/hc/en-us/articles/219178057
So my recommendation for you is to turn off this feature, because it's clearly not helping your LCP, as you can see in this trace:
There's one more thing you said that I think is worth clarifying:
After that, since render delay was the area where I should focus on, I moved some of the scripts to the footer and also made the "deferred" scripts "async". This is the result:
When I look at your "result" screenshot, I see that the "element render delay" portion is still > 50%, so while you were correct when you said that "render delay was the area where I should focus on", the fact that it was still high after you made your changes (e.g. moving the scripts and using defer/async) was an indication that those changes didn't fix the problem.
In this case, I believe that if you turn off the "Mirage" feature in your Cloudflare dashboard, you should see a big improvement.
Oh, one more thing: I noticed that you're using importance="high" on your image. This is old syntax that no longer works; you should replace it with fetchpriority="high" instead. See this post for details: https://web.dev/priority-hints/
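For example (the image filename here is just a placeholder):

<!-- Old syntax: `importance` is no longer supported -->
<img src="hero.jpg" importance="high" alt="…">

<!-- Current syntax: `fetchpriority` is the Priority Hints attribute -->
<img src="hero.jpg" fetchpriority="high" alt="…">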
OK, the docs are messy at best. I have huge issues fading in and out preloaded assets if I do not pass 'false' to the LoadQueue instance of PreloadJS. But when I add it, I completely lose the progress event... What is it that's so deeply hidden in the docs that I cannot find anything about this?
And has anyone got a complete example of how to actually and properly load an array (actually an object) of images without losing the progress event, while still getting an asset that behaves as expected when adding it to the DOM and fading it in?
This was also posted in a question on GitHub.
The short answer is that loading with tags (setting the first param useXHR to false) doesn't support granular progress events because downloading images with tags doesn't give progress events in the browser.
You can still get progress events from the LoadQueue any time an image loads, but each image will just provide a single "complete" event.
@Lanny True for that part, but in my case I was also missing the 'true' in .getResult(), and the createObjectURL() for the image data:
…
var preloader = new createjs.LoadQueue();
…
…
function handleFileLoad ( e ) {
  var item     = e.item,
      // `true` returns the raw result (a Blob when loaded via XHR)
      result   = preloader.getResult( item.id, true ),
      // an object URL that can be used directly as an <img> src
      blob_url = URL.createObjectURL( result );
  …
So that I was actually able to handle the image data as a blob… I couldn't find anything close to 'createObjectURL' in the docs. I guess that renders the docs 'not complete' at best…
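For anyone after the "complete example" asked for above, here is a minimal sketch of the whole flow under the same assumptions (XHR loading kept on for granular progress, raw Blob results, object URLs; the manifest ids and image paths are placeholders):

var queue = new createjs.LoadQueue(true); // true = prefer XHR, keeps granular progress

queue.on('progress', function (e) {
  // e.progress is 0..1 across the whole queue
  console.log('loaded: ' + Math.round(e.progress * 100) + '%');
});

queue.on('fileload', function (e) {
  // second param `true` returns the raw result (a Blob when loaded via XHR)
  var blob_url = URL.createObjectURL(queue.getResult(e.item.id, true));
  var img = document.createElement('img');
  img.style.opacity = '0';
  img.style.transition = 'opacity 0.5s';
  img.onload = function () { img.style.opacity = '1'; }; // fade in once decoded
  img.src = blob_url;
  document.body.appendChild(img);
});

// placeholder manifest: swap in your own ids and paths
queue.loadManifest([
  { id: 'img1', src: 'images/one.jpg' },
  { id: 'img2', src: 'images/two.jpg' },
]);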
I use different kinds of stop losses and would like to be notified (SendNotification()) about which kind of stop loss was hit upon trade exit.
Let's say I entered a trade by...
request.action  = TRADE_ACTION_DEAL;
request.symbol  = pSymbol;
request.type    = pType;
request.sl      = pStop;
request.tp      = pProfit;
request.comment = pComment;
request.volume  = pVolume;
// ask price for a buy, bid price for a sell
request.price   = ( pType == ORDER_TYPE_BUY ) ? SymbolInfoDouble( pSymbol, SYMBOL_ASK )
                                              : SymbolInfoDouble( pSymbol, SYMBOL_BID );
OrderSend( request, result );
I would now like to have the request.comment changed to reflect the last kind of stop loss, like so:
request.action  = TRADE_ACTION_SLTP;
request.symbol  = pSymbol;
request.sl      = pStop;
request.tp      = pProfit;
request.comment = "Fixed SL";
PositionSelect( _Symbol );
request.order   = PositionGetInteger( POSITION_IDENTIFIER );
OrderSend( request, result );
Unfortunately, the second block of code does not change the original request.comment = pComment; (instead, the new comment is [sl 1.19724]).
Is it possible to change the comment via TRADE_ACTION_SLTP? What am I doing wrong?
Thank you!
I would now like to have the request.comment changed
There has never been a way to do this in the MQL4/5 trading platforms.
Sad, but true.
The core functionality was always focused on engineering a fast, reliable, soft-real-time system (still providing just best-effort scheduling alongside the stream of externally injected FX-market events), so bear with the product as-is.
Plus, there was always one more degree of uncertainty: the broker-side automation was almost free to modify the .comment part of a trade position. So even if your OrderSend() was explicit about what ought to be stored there, the result was never certain, and the broker side could (whether immediately or at any later stage) change this field outside of any control left on your side. The only semi-unique keys could be placed into .magic, and your local-side application code always had to do all the work via some key:value storage extension to the otherwise uncertain broker-side content.
Even the trade number (ID, ticket) is not always a persistent key and may change under some trade-management operations, so be very careful before deciding on your way.
like to be notified ( SendNotification() ) about which kind of stop loss was hit upon trade exit.
Doable, yet one will need to build all the middleware logic on one's own.
The wish is clear and achievable: once a proper layer of middleware logic is built, one can enjoy any such automation (see the sketch below).
Having built things like augmented visual trading, remote AI/ML quant predictors, and real-time, fully adaptive, non-blocking GUI quant-tool augmentations (your trader gets live graphical aids inside the GUI, automatically overlaid on other EA and Indicator tools on the GUI surface, fully click-and-modify interactive for fast, visually augmented discretionary modifications of the traded asset management), I can say that only one's imagination and available resources are the limit here.
Yet one has to respect the published platform limits: just as OrderModify() provides no means for the wish above, any add-on, customer-specific reporting on position terminations has to be assembled on one's own initiative, as the platform does not provide (for the obvious reasons noted above) any tools for such non-core activity.
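A minimal sketch of such middleware logic in MQL5: watch deals as they get added to history and test why each one closed. The stop-loss-kind lookup is hypothetical; it stands for whatever local key:value store you maintain when you place or modify each stop.

void OnTradeTransaction( const MqlTradeTransaction &trans,
                         const MqlTradeRequest     &request,
                         const MqlTradeResult      &result )
{
   if ( trans.type != TRADE_TRANSACTION_DEAL_ADD ) return;  // only react to new deals
   if ( !HistoryDealSelect( trans.deal ) )         return;  // load the deal from history

   // DEAL_REASON tells why the deal happened; DEAL_REASON_SL means a stop loss fired
   if ( HistoryDealGetInteger( trans.deal, DEAL_REASON ) == DEAL_REASON_SL )
   {
      long posId = HistoryDealGetInteger( trans.deal, DEAL_POSITION_ID );
      // LookupStopLossKind() is hypothetical: your own local store, keyed by
      // position identifier and filled whenever you set or trail a stop.
      string kind = LookupStopLossKind( posId );
      SendNotification( "Position #" + (string)posId
                      + " closed by stop loss of kind: " + kind );
   }
}

The notification text and the store behind LookupStopLossKind() are yours to define; the platform only guarantees the DEAL_REASON flag.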
I'm playing around with BigQueryIO write using load jobs. My load trigger is set to 18 hours. I'm ingesting data from Kafka with a fixed daily window.
Based on https://github.com/apache/beam/blob/v2.2.0/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BatchLoads.java#L213-L231 it seems that the intended behavior is to offload rows to the filesystem when at least 500k records are in a pane.
I managed to produce ~600K records and waited for around 2 hours to see if the rows were uploaded to GCS; however, nothing was there. I noticed that the "GroupByDestination" step in "BatchLoads" shows 0 under "Output collections" size.
When I use a smaller load trigger all seems fine. Shouldn't the AfterPane.elementCountAtLeast(FILE_TRIGGERING_RECORD_COUNT) trigger have fired?
Here is the code for writing to BigQuery:
BigQueryIO
  .writeTableRows()
  .to(new SerializableFunction[ValueInSingleWindow[TableRow], TableDestination]() {
    override def apply(input: ValueInSingleWindow[TableRow]): TableDestination = {
      val startWindow = input.getWindow.asInstanceOf[IntervalWindow].start()
      val dayPartition = DateTimeFormat.forPattern("yyyyMMdd").withZone(DateTimeZone.UTC).print(startWindow)
      new TableDestination("myproject_id:mydataset_id.table$" + dayPartition, null)
    }
  })
  .withMethod(Method.FILE_LOADS)
  .withCreateDisposition(CreateDisposition.CREATE_NEVER)
  .withWriteDisposition(WriteDisposition.WRITE_APPEND)
  .withSchema(BigQueryUtils.schemaOf[MySchema])
  .withTriggeringFrequency(Duration.standardHours(18))
  .withNumFileShards(10)
The job id is 2018-02-16_14_34_54-7547662103968451637. Thanks in advance.
Panes are per key per window, and BigQueryIO.write() with dynamic destinations uses the destination as the key under the hood, so the "500k elements in a pane" threshold applies per destination per window.
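For reference, the grouping in the linked BatchLoads source is set up along these lines (paraphrasing the Beam 2.2.0 code, where FILE_TRIGGERING_RECORD_COUNT is the 500k constant):

Window.<KV<DestinationT, TableRow>>into(new GlobalWindows())
    .triggering(Repeatedly.forever(
        AfterFirst.of(
            AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(triggeringFrequency),                          // your 18 hours
            AfterPane.elementCountAtLeast(FILE_TRIGGERING_RECORD_COUNT)))) // 500,000
    .discardingFiredPanes()

Since the elements are keyed by destination before this grouping, the element count is evaluated per destination, so ~600K records split across several day partitions may never put 500K elements into any single pane.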
I have an iPhone app, where I want to show how many people are currently viewing an item as such:
I'm doing that by running this transaction when people enter a view (RubyMotion code below, but it functions exactly like the Firebase iOS SDK):
listing_reference[:listings][self.id][:viewing_amount].transaction do |data|
  data.value = data.value.to_i + 1
  FTransactionResult.successWithValue(data)
end
And when they exit the view:
listing_reference[:listings][self.id][:viewing_amount].transaction do |data|
  data.value = data.value.to_i - 1
  FTransactionResult.successWithValue(data)
end
It works fine most of the time, but sometimes things go wrong: the app crashes, people lose connectivity, or similar things happen.
I've been looking at onDisconnect to solve this - https://firebase.google.com/docs/reference/ios/firebasedatabase/interface_f_i_r_database_reference#method-detail - but from what I can see, there's no "onDisconnectRunTransaction".
How can I make sure that the viewing amount on the listing gets decremented no matter what?
A Firebase Database transaction runs as a compare-and-set operation: given the current value of a node, your code specifies the new value. This requires at least one round trip between the client and server, which means that it is inherently unsuitable for onDisconnect() operations.
The onDisconnect() handler is instead a simple set() operation: you specify, when you attach the handler, what write operation you want to happen when the server detects that the client has disconnected (either cleanly, or involuntarily as in your problem case).
The solution is (as is often the case with NoSQL databases) to use a data model that deals with the situation gracefully. In your case it seems most natural to not store the count of viewers, but instead the uid of each viewer:
itemViewers
  $itemId
    uid_1: true
    uid_2: true
    uid_3: true
Now you can get the number of viewers with a simple value listener:
ref.child('itemViewers').child(itemId).on('value', function(snapshot) {
  console.log(snapshot.numChildren());
});
And attach the following onDisconnect() handler to clean up:
ref.child('itemViewers').child(itemId).child(authData.uid).onDisconnect().remove();
Both code snippets are in JavaScript syntax, because I only noticed you're using Swift after typing them.
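Putting it together, the enter-view path would set the flag and register the cleanup in one go (again in JavaScript; authData.uid is assumed to be the signed-in user's uid):

var viewerRef = ref.child('itemViewers').child(itemId).child(authData.uid);

// Mark this user as currently viewing the item...
viewerRef.set(true);

// ...and tell the server up front to remove the flag when this client
// disconnects, whether cleanly or involuntarily (crash, lost connectivity).
viewerRef.onDisconnect().remove();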