Component Lifecycle in Svelte - When to read DOM properties - svelte-3

In Svelte, how do I batch reads of properties that trigger relayout? In React you would do the measurements in useLayoutEffect. Things like IntersectionObserver and ResizeObserver are pretty noisy, so I do not want to read when they fire. I want these reads batched to run once, before any DOM mutations take place, on each render. I want to make sure reads and writes do not get interleaved (that causes DOM thrashing).
// bad: DOM reads and writes are interleaved, causing multiple re-layouts
el.className = "one"
a = el.clientWidth // triggers relayout
el2.className = "two"
b = el.scrollWidth // triggers relayout

// good: DOM reads and writes are batched, causing a single re-layout
el.className = "one"
el2.className = "two"
a = el.clientWidth // triggers relayout
b = el.scrollWidth
I don't really understand what order things happen in during Svelte's render.
If I have a page with many components are all beforeUpdate functions run before any DOM mutations take place?
Is beforeUpdate the best time to measure the DOM?
Is the render synchronous (i.e. all pending changes flushed to the DOM at once)?
(I understand I can bind to clientWidth; other properties like scrollWidth are not available for binding.)
This is my attempt; is there a better way?
https://svelte.dev/repl/fb561a1aca45460489ecf2eb69a1948e?version=3.22.3
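The batching described above can be made systematic with a tiny read/write scheduler (the FastDOM pattern). This is a minimal framework-agnostic sketch; the names `scheduleRead`, `scheduleWrite`, and `flush` are made up for illustration:

```javascript
// Queue reads and writes separately, then flush all writes before all
// reads, so layout is invalidated at most once per flush.
const readQueue = [];
const writeQueue = [];

function scheduleWrite(fn) { writeQueue.push(fn); }
function scheduleRead(fn) { readQueue.push(fn); }

// In a browser you would call flush() from requestAnimationFrame;
// it is a plain function here so the sketch runs anywhere.
function flush() {
  while (writeQueue.length) writeQueue.shift()(); // all mutations first
  while (readQueue.length) readQueue.shift()();   // then all measurements
}
```

Even if callers interleave their `scheduleRead`/`scheduleWrite` calls, the flush still performs every write before any read, which is exactly the "good" ordering shown earlier.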

Related

How to optimize performance of Results change listeners in Realm (Swift) with a deep hierarchy?

We've been using Realm (Swift binding, currently at version 3.12.0) in our project since its earliest days. In some early versions before 1.0, Realm provided change listeners for Results without actually providing changeSets.
We used this a lot in order to find out whether a specific Results list had changed.
Later the Realm team replaced this API with changeSet-providing methods. We had to switch, and we are now misusing this API just to find out whether anything in a specific list changed (inserts, deletions, modifications).
Together with RxSwift we wrote our own implementation of Results change listening, which looks like this:
public var observable: Observable<Base> {
    return Observable.create { observer in
        let token = self.base.observe { changes in
            if case .update = changes {
                observer.onNext(self.base)
            }
        }
        observer.onNext(self.base)
        return Disposables.create(with: {
            observer.onCompleted()
            token.invalidate()
        })
    }
}
When we now want to have consecutive updates on a list we subscribe like so:
someRealm.objects(SomeObject.self).filter(<some filter>).rx.observable
    .subscribe(<subscription code that gets called on every update>)
// dispose code missing
We wrote the extension on RealmCollection so that we can subscribe to List type as well.
The concept is equal to RxRealm's approach.
So now in our App we have a lot of filtered lists/results that we are subscribing to.
When data gets more and more we notice significant performance losses when it comes to seeing a change visually after writing something into the DB.
For example:
Let's say we have a Car Realm Object class with some properties and some 1-to-n and some 1-to-1 relationships. One of the properties is a Bool, namely isDriving.
Now we have a lot of cars stored in the DB, and a bunch of change listeners with different filters listening to changes of the cars collection (collection observers listening for changeSets in order to find out if the list was changed).
If I take one car from some list and set its isDriving property from false to true (important: we do writes in the background), ideally the change listener fires fast and I get a nearly immediate, correct response to my write on the main thread.
Added with edit on 2019-06-19:
Let's make the scenario still a little more real:
Let's change something further down the hierarchy, say the tire manufacturer's name. Say a Car has a List<Tire>, a Tire has a Manufacturer, and a Manufacturer has a name.
Now we're still listening to Results collection changes with some more or less complex filters applied.
Then we change the name of a Manufacturer which is connected to one of the tires, which is connected to one of the cars in that filtered list.
Can this still be fast?
Obviously when the length of results/lists where change listeners are attached to gets longer Realm's internal change listener takes longer to calculate the differences and fires later.
So after a write we see the changes - in worst case - much later.
In our case this is not acceptable. So we are thinking through different scenarios.
One scenario would be to stop using .observe on lists/results and switch to Realm.observe, which fires every time anything in the realm changed. That is not ideal, but it is fast because the change calculation process is skipped.
My question is: What can I do to solve this whole dilemma and make our app fast again?
The crucial thing is the threading stuff. We're always writing in the background due to our design. So the writes itself should be very fast, but then that stuff needs to synchronize to the other threads where Realms are open.
In my understanding that happens after the change detection for all Results has run through, is that right?
So when I read on another thread, the data is only fresh after the thread sync, which happens after all notifications were sent out. But I am currently not sure whether the sync happens before that, which would be better; I have not tested it yet.

Operations on a stream produce a result, but do not modify its underlying data source

Unable to understand how "Operations on a stream produce a result, but do not modify its underlying data source" with reference to java 8 streams.
shapes.stream()
      .filter(s -> s.getColor() == BLUE)
      .forEach(s -> s.setColor(RED));
As per my understanding, forEach is setting the color of objects from shapes, so how does the statement above hold true?
The stream element s isn't being altered in this example; however, no deep copy is taken, and there is nothing to stop you altering the object it references.
You are able to alter an object via a reference in any context in Java, and there isn't anything to prevent it. You can only prevent shallow values from being altered.
NOTE: Just because you are able to do this doesn't mean it's a good idea. Altering an object inside a lambda is likely to be dangerous, as functional programming models assume you are not altering the data being processed (always creating new objects instead).
If you are going to alter an object, I suggest you use a loop (non-functional style) to minimise confusion.
An example of where using a lambda to alter an object has dire consequences is the following.
map.computeIfAbsent(key, k -> {
    map.computeIfAbsent(key, k -> 1);
    return 2;
});
The behaviour is not deterministic; it can result in both key/value mappings being added, and for a ConcurrentHashMap this call will never return.
As mentioned Here
Most importantly, a stream isn’t a data structure.
You can often create a stream from a collection to apply a number of functions on a data structure, but a stream itself is not a data structure. That's so important, I mentioned it twice! A stream can be composed of multiple functions that create a pipeline that data flows through. This data cannot be mutated; that is to say, the original data structure doesn't change. However, the data can be transformed and later stored in another data structure, or perhaps consumed by another operation.
AND as per Java docs
This is possible only if we can prevent interference with the data
source during the execution of a stream pipeline.
And the reason is :
Modifying a stream's data source during execution of a stream pipeline
can cause exceptions, incorrect answers, or nonconformant behavior.
That's all theory, live examples are always good.
So here we go :
Assume we have a List<String> (say: names) and a stream of it, names.stream(). We can apply .filter(), .reduce(), .map(), etc., but we can never change the source. Meaning: if you try to modify the source (names), you will get a java.util.ConcurrentModificationException.
public static void main(String[] args) {
    List<String> names = new ArrayList<>();
    names.add("Joe");
    names.add("Phoebe");
    names.add("Rose");
    names.stream().map((obj) -> {
        names.add("Monika"); // modifying the stream's source: throws ConcurrentModificationException
        /**
         * If we comment out the above line, we are transforming the data
         * (converting to upper case). However, the original list still holds
         * the lower-case names (the source of the stream never changes).
         */
        return obj.toUpperCase();
    }).forEach(System.out::println);
}
I hope that would help!
I understood the part "do not modify its underlying data source" as: it will not add or remove elements from the source. I think you are safe, since you alter an element, you do not remove it.
You can read the comments from Tagir and Brian Goetz here, where they agree that this is sort of fine.
The more idiomatic way to do what you want would be replaceAll, for example:
shapes.replaceAll(x -> {
    if (x.getColor() == BLUE) {
        x.setColor(RED);
    }
    return x;
});
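The same distinction can be shown in runnable form. This sketch uses JavaScript arrays as a stand-in for the Java stream (string constants for BLUE/RED are illustrative): the pipeline never adds or removes elements of the source, but the objects its elements reference can still be mutated through those references.

```javascript
// The source array keeps the same three elements throughout; only the
// state of the objects those elements point to is altered.
const BLUE = 'BLUE';
const RED = 'RED';
const shapes = [{ color: BLUE }, { color: RED }, { color: BLUE }];

shapes
  .filter(s => s.color === BLUE)     // produces a result (a new array)...
  .forEach(s => { s.color = RED; }); // ...but mutates the referenced objects

// shapes still has three elements; no element was added or removed.
```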

Processing Total Ordering of Events By Key using Apache Beam

Problem Context
I am trying to generate a total (linear) order of event items per key from a real-time stream where the order is event time (derived from the event payload).
Approach
I had attempted to implement this using streaming as follows:
1) Set up non-overlapping, sequential windows, e.g. of 5 minutes' duration
2) Establish an allowed lateness; it is fine to discard late events
3) Set the accumulation mode to retain all fired panes
4) Use the AfterWatermark trigger
5) When handling a triggered pane, only consider the pane if it is the final one
6) Group by key to ensure all events in this window for this key will be processed as a unit on a single resource
While this approach ensures linear order for each key within a given window, it does not make that guarantee across windows: a later window of events for the key could be processed at the same time as an earlier window. This could easily happen if the first window failed and had to be retried.
I'm considering adapting this approach where the realtime stream can first be processed so that it partitions the events by key and writes them to files named by their window range.
Due to the parallel nature of beam processing, these files will also be generated out of order.
A single process coordinator could then submit these files sequentially to a batch pipeline - only submitting the next one when it has received the previous file and that downstream processing of it has completed successfully.
The problem is that Apache Beam will only fire a pane if there was at least one element in that time window. Thus, if there are gaps in the events, there could be gaps in the files that are generated, i.e. missing files. The problem with missing files is that the coordinating batch processor cannot distinguish between the time window having passed with no data and there having been a failure, in which case it cannot proceed until the file finally arrives.
One way to force the event windows to trigger might be to somehow add dummy events to the stream for each partition and time window. However, this is tricky to do...if there are large gaps in the time sequence then if these dummy events occur surrounded by events much later then they will be discarded as being late.
Are there other approaches to ensuring there is a trigger for every possible event window, even if that results in outputting empty files?
Is generating a total ordering by key from a realtime stream a tractable problem with Apache Beam? Is there another approach I should be considering?
Depending on your definition of tractable, it is certainly possible to totally order a stream per key by event timestamp in Apache Beam.
Here are the considerations behind the design:
Apache Beam does not guarantee in-order transport, so in-order delivery is of no use within a pipeline. So I will assume you are doing this so you can write to an external system that can only handle things if they come in order.
If an event has timestamp t, you can never be certain no earlier event will arrive unless you wait until t is droppable.
So here's how we'll do it:
We'll write a ParDo that uses state and timers (blog post still under review) in the global window. This makes it a per-key workflow.
We'll buffer elements in state when they arrive. So your allowed lateness affects how efficient of a data structure you need. What you need is a heap to peek and pop the minimum timestamp and element; there's no built-in heap state so I'll just write it as a ValueState.
We'll set an event-time timer to receive a callback when an element's timestamp can no longer be contradicted.
I'm going to assume a custom EventHeap data structure for brevity. In practice, you'd want to break this up into multiple state cells to minimize the data transferred. A heap might be a reasonable addition to the primitive types of state.
I will also assume that all the coders we need are already registered and focus on the state and timers logic.
new DoFn<KV<K, Event>, Void>() {

  @StateId("heap")
  private final StateSpec<ValueState<EventHeap>> heapSpec = StateSpecs.value();

  @TimerId("next")
  private final TimerSpec nextTimerSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      ProcessContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = firstNonNull(
        heapState.read(),
        EventHeap.createForKey(ctx.element().getKey()));
    heap.add(ctx.element().getValue());
    heapState.write(heap); // persist the updated heap back to state
    // When the watermark reaches this time, no more elements
    // can show up that have earlier timestamps
    nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
  }

  @OnTimer("next")
  public void onNextTimestamp(
      OnTimerContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = heapState.read();
    // If the timer at time t was delivered, the watermark must
    // be strictly greater than t
    while (!heap.nextTimestamp().isAfter(ctx.timestamp())) {
      writeToExternalSystem(heap.pop());
    }
    heapState.write(heap); // persist the drained heap back to state
    nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
  }
}
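To see the buffering behaviour in isolation, here is a small runnable sketch of the same idea outside Beam, written in plain JavaScript (`makeBuffer` and `onWatermark` are invented names, and a sorted array stands in for the heap): events are buffered per key, and once the watermark passes an event's timestamp plus the allowed lateness, everything that can no longer be contradicted is emitted in timestamp order.

```javascript
// Buffer out-of-order events and release them in timestamp order once the
// watermark guarantees no earlier event can still arrive.
function makeBuffer(allowedLateness) {
  const heap = []; // sorted array standing in for a real min-heap
  return {
    add(event) {
      heap.push(event);
      heap.sort((a, b) => a.timestamp - b.timestamp);
    },
    // Advance the watermark; returns the events now safe to emit, in order.
    onWatermark(watermark) {
      const out = [];
      while (heap.length && heap[0].timestamp + allowedLateness <= watermark) {
        out.push(heap.shift());
      }
      return out;
    },
  };
}
```

Events added out of order come back in timestamp order, but only once the watermark has moved far enough past them, mirroring the event-time timer in the DoFn.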
This should hopefully get you started on the way towards whatever your underlying use case is.

Dynamic Tag Management - Storing

We're in the process of moving to a DTM implementation. We have several variables that are being defined on page. I understand I can make these variables available in DTM through data elements. Can I simply set up a data element for each one?
So, set data elements:
%prop1% = s.prop1
%prop2% = s.prop2
etc.
And then under global rules set:
s.prop1 = %prop1%
s.prop2 = %prop2%
etc.
for every single eVar, sProp, event, and product, so they populate whenever they are set on a particular page. Good idea or terrible idea? It seems like a pretty bulky approach, which raises some alarm bells. Another option would be to write something that pushes everything to the data layer, but that seems like essentially the same approach with a redundant step, when the variables can be grabbed directly.
Basically I want DTM to access any and all variables that are currently being set with on-page code, and my understanding is that in order to do that they must be stored in a data element first. Does anyone have any insight into this?
I use this spec for setting up data layers: Data Layer Standard
We create data elements for each key that we use from the standard data layer. For example, page name is stored here
digitalData.page.pageInfo.pageName
We create a data element and standardize the names to this format "page.pageInfo.pageName"
Within each variable field, you access it with the %page.pageInfo.pageName% notation. Also, within javascript of rule tags, you can use this:
_satellite.getVar('page.pageInfo.pageName')
It's a bit unwieldy at times but it allows you to separate the development of the data layer and tag manager tags completely.
One thing to note, make sure your data layer is complete and loaded before you call the satellite library.
If you are moving from a legacy s_code implementation to DTM, it is a best practice to remove all existing "on page" code (including the reference to the s_code file) and create a "data layer" that contains the data from the eVars and props on the page. Then DTM can reference the object on the page, and you can create data elements that map to variables.
Here's an example of a data layer:
<script type="text/javascript">
  DDO = {}; // Data Layer Object Created
  DDO.specVersion = "1.0";
  DDO.pageData = {
    "pageName": "My Page Name",
    "pageSiteSection": "Home",
    "pageType": "Section Front",
    "pageHier": "DTM Test|Home|Section Front"
  };
  DDO.siteData = {
    "siteCountry": "us",
    "siteRegion": "unknown",
    "siteLanguage": "en",
    "siteFormat": "Desktop"
  };
</script>
The next step would be to create data elements that directly reference the values in the object. For example, if I wanted to create a data element that mapped to the page name element in my data layer I would do the following in DTM:
Create a new data element called "pageName"
Select the type as "JS Object"
In the path field I will reference the path to the page name in my data layer example above - DDO.pageData.pageName
Save the data element
Now this data element can be referenced in any variable field within any rule by simply typing a '%'. DTM will find any existing data elements and you can select them.
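Under the hood, a "JS Object" data element essentially resolves a dot-delimited path against the page's data layer. Here is a hypothetical sketch of that lookup; the `resolvePath` helper and the trimmed-down `DDO` are illustrative, not DTM's actual implementation:

```javascript
// A trimmed-down data layer, mirroring the DDO example above.
const DDO = {
  pageData: { pageName: "My Page Name" }
};

// Walk a dot-delimited path, returning undefined if any segment is missing.
function resolvePath(obj, path) {
  return path.split('.').reduce(
    (o, key) => (o == null ? undefined : o[key]),
    obj
  );
}

// resolvePath(DDO, "pageData.pageName") yields "My Page Name"
```

Returning undefined for missing segments (rather than throwing) matters here, because rules can fire before the data layer is fully populated.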
I also wrote about a simple script you can add to your implementation to help with your data layer validation: Validate your DTM Data Layer with this simple script.
Hope this helps.

Is there an event that fires after the DOM is updated as the result of an observed property?

Is there an event that gets fired after the DOM is updated as the result of an observed property being changed?
Specifically, I need to alter the scrollTop property of a container to show new content as it's added. I have to use a timeout now to wait until the DOM is updated before setting the scrollTop to the new scrollHeight.
Thanks for the question. From my understanding, the changes propagate through the models and DOM asynchronously. It's not clear there is one "ok, literally everything all the way through is done updating" event (I could be wrong).
However, I've used Mutation Observers to know when something in my DOM changes. If you're able to watch a specific place in the DOM for a change (or changes), try Mutation Observers: https://github.com/sethladd/dart-polymer-dart-examples/tree/master/web/mutation_observers
Here is an example:
MutationObserver observer = new MutationObserver(_onMutation);
observer.observe(getShadowRoot('my-element').query('#timestamps'),
    childList: true, subtree: true);

// Bindings, like repeat, happen asynchronously. To be notified
// when the shadow root's tree is modified, use a MutationObserver.
_onMutation(List<MutationRecord> mutations, MutationObserver observer) {
  print('${mutations.length} mutations occurred, the first to ${mutations[0].target}');
}
