It is explained, or at least implied, somewhere in the DTM documentation that the Delay Link Activation option exists so that the Analytics call can be made before the current page unloads.
Is this really necessary? It seems to assume that the Analytics call must be made asynchronously, in a race against the clock. Why isn't the Analytics call made synchronously within the click event handler? That would generally be much faster than 250 ms, and it would eliminate the race condition.
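A minimal TypeScript sketch of what such a delay-then-navigate pattern looks like in a plain click handler; this is only an illustration, not DTM's actual implementation, and sendAnalyticsBeacon is a hypothetical stand-in for the tracking call:
// Illustration only; NOT DTM's actual implementation.
function sendAnalyticsBeacon(): void {
  // Hypothetical stand-in for the asynchronous Analytics call.
}

document.querySelectorAll<HTMLAnchorElement>('a').forEach((link) => {
  link.addEventListener('click', (event) => {
    event.preventDefault();             // hold the navigation
    sendAnalyticsBeacon();              // fire the tracking call
    setTimeout(() => {
      window.location.href = link.href; // navigate ~250 ms later,
    }, 250);                            // hoping the call got out in time
  });
});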
How to test a fixed delay (specifically: a debounce) on a webpage with Playwright?
I have a simple scenario: after entering input, the user needs to wait a fixed time for the response, e.g. 1000 ms.
How can I test that exact wait with Playwright?
I looked at https://github.com/microsoft/playwright/issues/4405, but I wonder if there is a more elegant way to do this?
Well, usually there is a better way than hardcoding a waiting time, for example (a sketch follows this list):
- Wait for certain API calls
- Wait for network idle (= wait for started calls to finish)
- Wait for an event
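For instance, a Playwright sketch of those signal-based waits (the /api/search endpoint and the results test id are hypothetical):
import { test, expect } from '@playwright/test';

test('wait on signals instead of a hardcoded delay', async ({ page }) => {
  await page.goto('https://example.com');

  // 1. Wait for a certain API call to finish.
  const response = page.waitForResponse('**/api/search');
  await page.getByRole('textbox').fill('query');
  await response;

  // 2. Wait for network idle (started calls have finished).
  await page.waitForLoadState('networkidle');

  // 3. Wait for an event, e.g. a console message:
  // const msg = await page.waitForEvent('console');

  // Playwright also auto-waits when asserting on visibility.
  await expect(page.getByTestId('results')).toBeVisible();
});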
However, there are times when hardcoding is inevitable. What I've used is page.waitForTimeout()
Wait for 2 seconds:
await page.waitForTimeout(2000);
Official docs: https://playwright.dev/docs/api/class-page#page-wait-for-timeout
"Waits for the given timeout in milliseconds.
Note that page.waitForTimeout() should only be used for debugging.
Tests using the timer in production are going to be flaky. Use signals
such as network events, selectors becoming visible and others
instead."
I have a general efficiency question about Dart streams.
I have a project that makes some use of them, but it has been proposed that we convert nearly everything (functions and data) to Dart streams, in order to achieve a fully reactive architecture.
I don't know how streams really work under the hood, so I don't know whether this kind of design comes with any memory or computational overhead.
Thanks for your attention to this question.
There is an overhead. It's not necessarily big, but it's there.
Streams have a well-defined asynchronous behavior, and it's documented how they react to listeners being added, paused or cancelled, even if that happens while an event is being delivered (because, most often, that is when it happens).
Streams are asynchronous, which means there is a delay between adding an event to the stream (through a StreamController), and that event being received by the listener. That delay makes it necessary to store (buffer) the event, schedule a microtask, and then unbuffer the event and deliver it in that later microtask. Scheduling a microtask costs. There might be zones involved, which can cost extra.
On top of that, the stream needs to be able to react to pause and cancel requests in a timely manner, which means that each event delivery is also flanked by extra checks of whether the event handler has paused or cancelled. It's not a lot of overhead, but it's there.
For single-subscription streams, that's about it.
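To make that delivery model concrete, here is a deliberately tiny TypeScript sketch of the buffer-then-microtask pattern described above. It is illustrative only; Dart's real StreamController handles many more states:
type Listener<T> = (event: T) => void;

class TinyStreamController<T> {
  private listener?: Listener<T>;
  private buffer: T[] = [];
  private scheduled = false;

  listen(listener: Listener<T>): void { this.listener = listener; }

  add(event: T): void {
    // Delivery is asynchronous: buffer the event and schedule a microtask.
    this.buffer.push(event);
    if (!this.scheduled) {
      this.scheduled = true;
      queueMicrotask(() => this.flush());
    }
  }

  private flush(): void {
    this.scheduled = false;
    while (this.buffer.length > 0) {
      const event = this.buffer.shift()!;
      // A real implementation also checks for pause/cancel around each
      // delivery, which is the small per-event overhead mentioned above.
      this.listener?.(event);
    }
  }
}

const controller = new TinyStreamController<string>();
controller.listen((e) => console.log('received:', e));
controller.add('hello');
console.log('added'); // prints before 'received: hello'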
For broadcast streams, which can have multiple listeners, there can be a little extra overhead to handle new listeners being added while delivering the event. Again, not a lot, but it's there. The state-space for a stream is actually quite complicated.
(You can create a "synchronous StreamController" which delivers events "immediately", but most of the time, you shouldn't. Those are not for avoiding asynchrony, they are for avoiding adding extra asynchronous delays when propagating already asynchronous events, and should be used very carefully to avoid breaking code that assumes it won't get events in the middle of something else. A properly implemented reactive framework will use such controllers in its implementation, but that will not get rid of the inherent delay of delivering the original asynchronous event.)
Now, performance is not absolute. Using streams everywhere might make your life easier, and if the performance is good enough for your application (it's not dominating the actual computations), then the increased development speed and maintainability might pay for itself. You should measure (and have repeatable benchmarks to measure) before making a decision about an implementation strategy based on performance alone.
As we know, Dart is a single-threaded language. So, according to the documentation, we can use a Future/Stream to implement an asynchronous operation. It sends the time-consuming operation to the Event Queue.
What confuses me is where the Event Queue does its work. Is it working on the Dart thread? If yes, it will block the app.
Another question: is the Event Queue a FIFO queue? Say I have two operations: one is a network request that needs 1 minute, the other is a click event. Both operations are sent to the Event Queue.
Will the click event be blocked by the network request, because the queue is FIFO?
So where does the Event Queue do its work?
Thank you very much!
One thing to note is that asynchronous and multithreading are two different things. Dart uses Futures and async/await to achieve asynchronicity, but Dart is still inherently a single-threaded language.
The way it works is when a Future is created (either manually or via calling an async method), that process is added to an event queue, as you read. Then, in the middle of all the synchronous execution, whenever there is a lull, the event queue can take priority. It can then go through the processes and figure out if any of the Futures have been completed. If so, the result is passed along to any other asynchronous processes that are waiting on that resource, if any.
This also means that, yes, if your program hangs in the middle of an asynchronous operation (with the easy example of an endless loop via while (true) {}), it will freeze the entire program, including the synchronous code and other asynchronous processes still waiting to resolve (even if the conditions allowing them to resolve have already occurred).
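JavaScript shares Dart's single-threaded event-loop model, so the effect is easy to demonstrate with a TypeScript sketch:
function demo(): void {
  // Scheduled for ~0 ms, but it can only run once the thread is free.
  setTimeout(() => console.log('timer fired'), 0);

  // Busy-wait synchronously for two seconds, hogging the single thread.
  const end = Date.now() + 2000;
  while (Date.now() < end) { /* spin */ }

  console.log('loop done');
}
demo();
// Output: 'loop done' first, then 'timer fired'; the already-expired
// timer had to wait for the synchronous code to finish.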
However, in your case, this won't be an issue. If you fire an asynchronous process in the form of a network request followed by another in the form of a "click event" (not sure what you're referring to, but I'll assume it's asynchronous as well), they will both be added to the event queue in that order. But if the click event resolves before the network request, the event queue will merely recognize that the network request Future has not yet resolved and will move on to the click event that has.
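A small sketch of that ordering, again in TypeScript since the queueing model is the same:
// Both operations are queued in this order, yet the faster one
// completes first; the slow "network request" blocks nothing.
const network = new Promise<string>((resolve) =>
  setTimeout(() => resolve('network request done'), 1000));
const click = new Promise<string>((resolve) =>
  setTimeout(() => resolve('click handled'), 10));
network.then(console.log);
click.then(console.log);
// Prints 'click handled' first, then 'network request done'.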
As a side note, it's worth noting that Dart does have a multi-threading capability, albeit in a fairly roundabout way. Dart has something called an Isolate, which isn't a thread but a completely separate child program. This means that the Isolate cannot access any of the same data in memory as the root program itself. However, data can be passed between the two using SendPorts and ReceivePorts. This makes using Isolates slightly more complicated than threads, but it also means that, if no memory is shared, it virtually eliminates race conditions based on which thread accesses the memory first.
WebGL is nice and asynchronous in that you can send off a long list of rendering commands without waiting for them to complete. However, if for some reason you do need to wait for the rendering to complete, you have to do it synchronously with gl.finish(). Surely it would be better if gl.finish accepted a callback and returned immediately?
Question: Is there any way to emulate this reliably?
Usage case: I am rendering a large number of vertices to a large off-screen canvas and then using drawImage to copy sections of this large canvas to small canvases on the page. I don't actually use gl.finish() but drawImage() seems to have the same effect. In my application, re-rendering is only triggered when the user performs an action (e.g. clicking a button), and it may take several hundred milliseconds. It would be nice if during rendering the browser was still responsive allowing scrolling etc. I am looking in particular for a Chrome solution, though something that also works in Firefox and Safari would be good.
Possible (bad) answer: You could try to estimate how long rendering is going to take, then set a timeout for that duration and only call gl.finish() once it fires. However, reliably making that estimate for every vertex-buffer size and every user is going to be pretty tricky and inaccurate.
Possible (non-)answer: requestAnimationFrame does what I'm looking for...it doesn't though, does it?
Possible answer in 2018: Perhaps the ImageBitmap API solves this problem - see MDN docs.
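As a sketch of that 2018 idea: createImageBitmap() returns a Promise, so the copy can at least be awaited without blocking; whether the browser actually overlaps the GPU work here is implementation-dependent, so treat this as an experiment rather than a guaranteed fix:
async function copyViaImageBitmap(
  offscreen: HTMLCanvasElement,
  target: CanvasRenderingContext2D,
): Promise<void> {
  const bitmap = await createImageBitmap(offscreen); // non-blocking await
  target.drawImage(bitmap, 0, 0);
  bitmap.close(); // release the bitmap's resources
}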
You've already partially hit on your answer: drawImage() does indeed have finish-like behavior in that it forces all outstanding drawing commands to complete before it reads back the image data. The problem is that even if gl.finish() did what you wanted it to, wait for rendering to complete, you would still have the same behavior using it as you do now. The main thread would be blocked while the rendering finishes, interrupting the user's ability to interact with the page.
Ideally what you would want in this scenario is some sort of callback that indicates when a set of draw commands has been completed, without actually blocking to wait for them. Unfortunately, no such callback exists (and it would be surprisingly difficult to provide one, given the way the browser's internals work!).
A decent middle-ground in your case may be to do some intelligent estimations of when you feel the image may be ready. For example, once you have dispatched your draw call spin through 3 or 4 requestAnimationFrames before you call drawImage. If you consistently observe it taking longer (10 frames?) then spin for longer. This would allow users to continue interacting with the page normally and either produce no delay when doing the draw image, because the contents have finished rendering, or much less delay because you do the synchronous step mid-way through the render. Depending on the intended usage of your site non-realtime rendering could probably even stand to spin for a full second or so before presenting.
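A rough TypeScript sketch of that approach; the frame budget is a guess to be tuned per application (the "3 or 4, maybe 10" above):
function copyWhenProbablyReady(
  offscreen: HTMLCanvasElement,
  target: CanvasRenderingContext2D,
  frameBudget: number = 4,
): void {
  let framesLeft = frameBudget;
  const tick = () => {
    framesLeft -= 1;
    if (framesLeft > 0) {
      requestAnimationFrame(tick); // page stays responsive meanwhile
    } else {
      // Hopefully rendering has finished; if not, this synchronous
      // readback blocks only for the remainder.
      target.drawImage(offscreen, 0, 0);
    }
  };
  requestAnimationFrame(tick);
}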
This certainly isn't a perfect solution, and I wish I had a better answer for you. Perhaps WebGL will gain the ability to query this type of status in the future, because it would be valuable in cases like yours, but for now this is likely the best you can do.
I'd like to delay the handling for some captured events in ActionScript until a certain time. Right now, I stick them in an Array when captured and go through it when needed, but this seems inefficient. Is there a better way to do this?
Well, to me this seems a clean and efficient way of doing that.
What do you mean by delaying? Do you mean simply processing them later, or processing them after a given time?
You can always set a timeout for the actual processing function in your event handler (using flash.utils.setTimeout) to process the event at a precise moment in time. But that can become inefficient, since you may have many timeouts dangling about that need to be handled by the runtime.
Maybe you could specify your needs a little more.
edit:
Ok, basically, Flash Player is single-threaded, that is, bytecode execution is single-threaded. Any event that is dispatched is processed immediately, i.e. dispatchEvent(someEvent) will directly call all registered handlers (and thus AS bytecode).
Now there are events which are actually generated in the background. These come either from I/O (network, user input) or from timers (TimerEvents). It may happen that some of these events occur while bytecode is executing. This usually happens in a background thread, which passes the event (in the abstract sense of the term) to the main thread through a queue.
If the main thread is busy executing bytecode, it will ignore these messages until it is done (note that nearly all bytecode execution is the implicit consequence of an event, be it enter frame, input, a timer, a load operation or whatever). When it is idle, it looks in all the queues until it finds an available message, wraps the information into an ActionScript Event object, and dispatches it as previously described.
Thus this queueing is a very low level mechanism, that comes from thread-to-thread communication (and appears in many multi-threading scenarios), and is inaccessible to you.
But as I said before, your approach both is valid and makes sense.
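For illustration, the buffer-and-drain pattern looks roughly like this in TypeScript (in ActionScript it would be an Array or Vector):
interface CapturedEvent { type: string; timestamp: number; }

const pending: CapturedEvent[] = [];

// Capturing is cheap: just push the event.
function capture(event: CapturedEvent): void {
  pending.push(event);
}

// Drain the buffer whenever the program decides it is time.
function processPending(): void {
  while (pending.length > 0) {
    const event = pending.shift()!;
    console.log('handling', event.type, 'captured at', event.timestamp);
  }
}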
Store them in a Vector instead of an Array :p
I think it's all about how you structure your program. Maybe you can assign each captured event to the related instance, so that it's natural to process the captured event with that instance instead of querying a global vector.