Does all the code at the top level of a Manifest V3 service worker run again every time it wakes up? - service-worker

Tested: to avoid repeated execution of some code (for example, calling chrome.contextMenus.create again produces
Unchecked runtime.lastError: Cannot create item with duplicate id
), it has to be moved into chrome.runtime.onInstalled.addListener.
But other code (like chrome.action.onClicked.addListener) won't run on the next wakeup if it is moved into chrome.runtime.onInstalled.addListener.
If chrome.action.onClicked.addListener is placed at the top level of the service worker:
Will the listener be added again every time the service worker wakes up?
Will there be multiple duplicate listeners?
Will both the newly added listener and the previously added listener be executed?
https://developer.chrome.com/docs/extensions/mv3/service_workers/ says:
A background service worker is loaded when it is needed, and unloaded when it goes idle. Some examples include:
The extension is first installed or updated to a new version.
The background page was listening for an event, and the event is dispatched.
A content script or other extension sends a message.
Another view in the extension, such as a popup, calls runtime.getBackgroundPage.
The docs say 'unloaded when it goes idle'. Will the previously added listener be unloaded too? If so, how is the service worker woken up again?
Or are only the functions inside the previously added listener unloaded, with an empty listener shell kept just to wake the service worker?

Yes, it reruns anew.
No, there'll be no duplicate listeners.
No multiple threads, no sleeping/suspending/resuming.
The confusion is caused by a rather inept description in the old version of the documentation; it has since been rewritten. What actually happens is that after a certain timeout the service worker is simply terminated. It doesn't "unload" or "resume". It "terminates completely" and "starts fresh".
When it terminates, the JavaScript environment disappears (JS listeners, variables, everything).
When it's started by the browser in reaction to an event to which you subscribed via addListener in the previous run of the SW, your SW script runs in its entirety. Each addListener for a chrome event registers this listener internally. Then the event that woke the worker will be dispatched to the listeners. This is why it is important to register the listeners synchronously in the first task of the event loop when the script starts (the old documentation used a rather arcane term "top-level" from the makers of V8 and oversimplified it to the need to declare the listeners in the global scope of the script, which is not mandatory because you can certainly do it inside a function call as long as it's synchronous).
The contextMenus API is different: the data is saved inside Chrome's internal preferences so there's no need to recreate it on each run, doing it inside chrome.runtime.onInstalled is sufficient. Firefox doesn't save them yet, but I guess they will do it once they implement MV3.
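To illustrate, here is a minimal sketch of a service worker script following the approach described above (the file name, menu id and title are hypothetical): the action listener is registered synchronously at the top level and so is re-registered on every fresh start, while the context menu is created only inside onInstalled.
// background.js (MV3 service worker) - minimal sketch
chrome.runtime.onInstalled.addListener(() => {
  // Stored in Chrome's internal preferences, so creating it once is enough.
  chrome.contextMenus.create({ id: 'example-item', title: 'Example item' });
});
// Registered synchronously at the top level; each run of the worker is a
// fresh JS environment, so this does not accumulate duplicate listeners.
chrome.action.onClicked.addListener((tab) => {
  console.log('Action clicked on tab', tab.id);
});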
P.S.
The lifetime duration is 30 seconds after the last incoming external event. Using a runtime port adds another 5 minutes to the timeout. Using native host messaging keeps the service worker alive indefinitely, but it is also possible to emulate a persistent service worker to a degree even without native messaging: more info.
Another view in the extension, such as a popup, calls runtime.getBackgroundPage.
This is not true anymore in MV3.
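As a rough illustration of the port-based lifetime extension mentioned in the P.S. (only a sketch, not the full persistence technique from the linked article; the port name is hypothetical), a page or content script can hold a runtime port open to the service worker and reconnect when it drops:
// content-script.js - minimal sketch
function connect() {
  const port = chrome.runtime.connect({ name: 'keep-alive' });
  // When the worker is terminated the port disconnects;
  // reconnecting makes the browser start the worker again.
  port.onDisconnect.addListener(connect);
}
connect();
// background.js - the worker only needs a matching onConnect listener
chrome.runtime.onConnect.addListener((port) => {
  console.log('Port opened:', port.name);
});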

Related

Correctly killing newly spawned isolates

I am aware of the fact that when both the microtask and event queues of an isolate are empty, the isolate is killed. However, I'm not able to find a reference in the documentation on how a worker isolate can be killed under certain circumstances.
Context
Let's make this example:
import 'dart:isolate';

Future<void> main() async {
  final receivePort = ReceivePort();
  // Spawn a worker isolate and hand it a SendPort back to this isolate.
  final worker = await Isolate.spawn<SendPort>((_) {}, receivePort.sendPort);
  await runMyProgram(receivePort, worker);
}
Here the main isolate is creating a new one (worker) and then the program starts doing stuff.
Question
How do I manually kill the newly spawned isolate when it's not needed anymore? I wasn't able to explicitly find this information in the documentation, so I am kind of guessing. Do I have to do this?
receivePort.close();
worker.kill();
Or is it enough to just close the port, like this?
receivePort.close();
Note
I thought about this. If the worker isolate has both queues (microtask and event) empty and I close the receive port, it should be killed automatically. If this is the case, calling receivePort.close() should be enough!
If you want to make sure the child isolate shuts down, even if it's in the middle of doing something, you'll want to call Isolate.kill().
However, as you've already pointed out, an isolate will exit on its own if it has no more events to process and it holds no open ports (e.g., timers, open sockets, isolate ports, etc). For most cases, this is the ideal way to dispose of an isolate when it's no longer used since it eliminates the risk of killing the isolate while it's in the middle of doing something important.
Assuming your child isolate is good about cleaning up its own open ports when it's done doing what it needs to do, receivePort.close() should be enough for you to let it shut down.
You can kill an isolate from the outside, using the Isolate.kill method on an Isolate object representing that isolate.
(That's why you should be careful about giving away such isolate objects, and why you can create an isolate object without the "kill" capability, that you can more safely pass around.)
You can immediately kill an isolate from the inside using the static Isolate.exit.
Or using Isolate.current.kill. It's like Process.exit, but only for a single isolate.
Or you can make sure you have closed every open receive port in the isolate, and stopped doing anything.
That's the usual approach, but it can fail if you run code provided by others in your isolate. They might open receive ports or start periodic timers which run forever, and that you know nothing about.
(You can try to contain that code in a Zone where you control timers, but that won't stop them from creating receive ports, and they can always access Zone.root directly to leave the zone you put them in.)
Or someone might have paused your isolate with Isolate.pause, so the worker code won't run.
If I wanted to be absolutely certain that an isolate is killed,
I'd start out by communicating with my own code running in that isolate (the port receiving worker instructions) and tell it to shut down nicely, as a part of the protocol I am already using to communicate.
The worker code can choose to use Isolate.exit when it's done, or just close all its own resources and hope it's enough. I'd probably tend to use Isolate.exit, but only after waiting for existing worker tasks to finish.
Such a worker task might be hanging (waiting for a future which will never complete). Or it might be live-locking everything by being stuck in a while (true){..can't stop, won't stop!..}. In that case, the waiting should have a timeout.
Because of that, I'd also listen for the isolate to shut down using Isolate.addOnExitHandler and start a timer for some reasonable duration. If I haven't received an "on exit" notification before the timer runs out, or some feedback on the worker shutdown request telling me that things are fine, I'd escalate to isolate.kill(priority: Isolate.immediate), which can kill even a while (true) ... loop.

Spring amqp: detect shutdown and reconnect to another queue

We have this setup where we call a webservice to create a queue, and receive the queue name from the response.
Then we set up a SimpleMessageListenerContainer and set the queue name there, and then start it.
However, from time to time the queue is deleted, resulting in a "404 could not declare queue XXXXXXXXX" error. In these cases I need to call the webservice again, add the new queue name to the SimpleMessageListenerContainer, and then remove the old one.
The only way I figured out to trigger any code to handle this was to create a custom CachingConnectionFactory and override the shutdownCompleted method.
However, shutdownCompleted also seems to trigger when the SimpleMessageListenerContainer switches over, so it gets stuck in a loop. The ShutdownSignalException passed into shutdownCompleted does not seem to look any different whether the trigger comes from the server or from the client handling the new queue, so I can't figure out how to skip the handling on the "second" go.
So what is the usual way to detect and run custom handling when the server kills the queue?
The container publishes a ListenerContainerConsumerFailedEvent when the listener fails.
Add an ApplicationListener<ListenerContainerConsumerFailedEvent>, stop the container, change the queues and restart.
You will likely get multiple events because, by default, the container will try to reconnect 3 times before giving up and stopping itself.

How to wait for termination of itself

My currently running application (A1) needs to be terminated, but it also needs to run another application (A2). And I need A2 to run only after A1 has fully terminated. Now I have something like this:
begin
Application.Terminate;
wait(2000); <<<<<<<
ShellExecute(A2)...
end;
To be more exact - I need to call an installer (A2) and want to be sure A1 is not running, because A2 is the installer of A1. Please imagine that termination could take more time or could show some modal dialog...
Is there any easy way to do it (wait for it)? Of course without communicating with or changing A2! A2 could be anything else in the future.
Vladimír
I need to call installation (A2) and want to be sure A1 is not running.
This is impossible. You cannot execute code in a process that has terminated. Once the process has terminated there is nothing that can execute code.
You'll need a new process. Start the new process with the sole task of waiting on its parent to terminate, and then do whatever is needed once the parent has terminated.
If you want to make a proper installer/updater program, don't worry about when and how you execute it, but instead about how it will detect whether your application is running or not.
Now if your main application already has a mechanism to prevent starting multiple instances of your application, you already have half the work done. How?
Such mechanisms publish the information that an instance of your application is already running so that it is available to other programs.
The most common approach to do so is by registering a named Mutex. When a second instance of your application starts, it finds out that it can't create a new Mutex with the same name because one already exists. So in most cases the second instance sends a custom message to the first instance to bring that instance to the front (restore the application) and then closes itself.
If you want to read more about different mechanisms for controlling how many instances of your application can run at the same time, I suggest you check the following article:
http://delphi.about.com/od/windowsshellapi/l/aa100703a.htm
So how do you use such a mechanism for your installer/updater?
Just as the second instance of your application would check whether another instance is already running, you do that check in your installer/updater instead. You don't even need to do this checking at the start of the installer/updater; you can do it later (downloading the update files first).
If there is an instance of your application running, you broadcast a custom message. But this message is different from the one that one instance would send to another.
This tells your application that it is about to be updated, so it should begin its closing procedure.
If you form this custom message so that it also contains your installer/updater's Application.Handle, you give your main application the ability to send back a response notifying the installer/updater which state it is in. For instance:
asClosing (the main application is just about to close)
asWaitingUserInput (the main application is waiting for the user to confirm a save, for instance)
asProcessing (the main application is doing some lengthy processing, so it can't shut down at this time)
And if there is no response within a certain amount of time, your installer could assume that the main application might be hung, so it notifies the user that automatic closure of the main application has failed and that the user should close it manually and then retry the update process.
Using such an approach would allow you to start your installer/updater at any time during the execution of your main application.
And not only that: you can start your installer/updater by double-clicking its executable, by a shortcut, from some other application, or even from the Windows Task Scheduler.

Sending a message to an application running on a secondary logged in user account

I'm trying to send a message to an application running under a different user account (a user that is also logged in to the computer with a different account, using fast user switching on XP and later, and has started the application).
The background is that my application can update itself, but in order to do that all running instances must be closed first.
The instances need to be shut down (instead of just killing the process), so the updater does that by sending a custom message to them (with SendMessage). In order to send a message I need a handle to the main window of the process.
This works fine using EnumWindows - as long as the instances are running under the same user account, because EnumWindows does not list windows belonging to a different user.
So I tried a different approach. I used CreateToolhelp32Snapshot to first list all running processes on the system, and then iterated through their threads by calling CreateToolhelp32Snapshot again. With those thread ids I could then list their windows using EnumThreadWindows.
Once again this works fine, but... once again only for the currently logged-in user. The problem here is that even though CreateToolhelp32Snapshot lists process ids belonging to a different user, it does not list the thread ids belonging to them. The code for this is a little lengthy, but if it's required I can edit it in - please leave a comment for that.
So, how could I get the main window handle of my application running on a different logged in user account?
Use something that's known to work across sessions; this kind of thing is often used for desktop-service communication, so look for that if you want to google. Here's my suggestion:
Create an event that will only be used to trigger the "need to shut down" state. Use the CreateEvent function and make sure the event name starts with Global\ so it's valid across sessions.
On application startup, create a thread that opens the named event (using the same CreateEvent function; pay close attention to the ERROR_ALREADY_EXISTS non-error). That thread should simply wait for the event. When the event is triggered, send the required message to your main window. The thread can easily and safely do that because it's running inside your process. The thread will mostly be idle, waiting for the event to be triggered, so don't worry about a CPU penalty.
Your application updater should simply trigger the named event.
This is just one idea, I'm sure there are others.
Pipes are overkill. A global manual-reset event (e.g. "Global\MyApplicationShutdownEvent") which causes application instances to kill themselves should be enough.
At the risk of being scoffed at: have you looked at ZeroMQ? This is a perfect use for it, and it is very reliable and stable.
There is a Delphi wrapper.

Waiting for applications to finish loading [duplicate]

I have an application which needs to run several other applications in a chain. I am running them via ShellExecuteEx. The order in which the apps run is very important because they depend on each other. For example:
Start(App1);
If App1.IsRunning then
Start(App2);
If App2.IsRunning then
Start(App3);
.........................
If App(N-1).IsRunning then
Start(App(N));
Everything works fine but there is a one possible problem:
ShellExecuteEx starts the application and returns almost immediately. The problem might arise when, for example, App1 has started properly but has not finished some internal tasks and is not yet ready to use. But ShellExecuteEx is already starting App2, which depends on App1, and App2 won't start properly because it needs a fully initialized App1.
Please note that I don't want to wait for App(N-1) to finish and then start AppN.
I don't know if this is possible to solve with ShellExecuteEx, I've tried to use
SEInfo.fMask := SEE_MASK_NOCLOSEPROCESS or SEE_MASK_NOASYNC;
but without any effect.
After starting the AppN application I have a handle to the process. If I assume that the application is initialized after its main window is created (all of the apps have a window), can I somehow put a hook on its message queue and wait until WM_CREATE appears, or maybe WM_ACTIVATE? In the presence of such a message my application would know that it can move on.
It's just an idea. However, I don't know how to set up such a hook. So if you could help me with this, or if you have a better idea, that would be great :)
Also, the solution must work on Windows XP and above.
Thanks for your time.
Edited
@Cosmic Prund: I don't understand why you deleted your answer. I might try your idea...
You can probably achieve what you need by calling WaitForInputIdle() on each process handle returned by ShellExecute().
Waits until the specified process has finished processing its initial input and is waiting for user input with no input pending, or until the time-out interval has elapsed.
If your application has some custom initialization logic that doesn't run in the UI thread, then WaitForInputIdle might not help. In that case you need a mechanism to signal the previous app that you're done initializing.
For signaling you can use named pipes, sockets, some RPC mechanism or a simple file based lock.
You can always use IPC and interprocess synchronization to make your applications communicate with (and wait for, if needed) each other, as long as you code both applications.
