I have a Lambda function deployed with the Serverless Framework, so a rule is always created on my custom event bus automatically. After I send a test event to the bus, the rule is triggered, but the invocation always fails. I tried to find the reason, but there is nothing to see besides the invocation metrics.
Strangely, if I manually create a rule for the same target, the manually created rule always fires and successfully invokes the Lambda function.
Let me answer it myself. I solved it.
AWS EventBridge has no delivery logging. However, by configuring a dead-letter queue on the target and reducing the number of retries and the maximum age of unprocessed events, events that cannot be successfully delivered to the target service land in the dead-letter queue as soon as possible.
Then we can see the error code and error message in the attributes of the message.
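For reference, a minimal sketch of that configuration with the AWS CLI (the bus, rule, function, and queue names here are placeholders); the same settings can be expressed in serverless.yml:

```
# Attach a dead-letter queue and a tight retry policy to the rule's target
aws events put-targets \
  --event-bus-name my-custom-bus \
  --rule my-rule \
  --targets '[{
    "Id": "lambda-target",
    "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-fn",
    "DeadLetterConfig": { "Arn": "arn:aws:sqs:us-east-1:123456789012:my-dlq" },
    "RetryPolicy": { "MaximumRetryAttempts": 0, "MaximumEventAgeInSeconds": 60 }
  }]'
```

Note that the SQS queue's resource policy must allow events.amazonaws.com to send messages to it; failed deliveries then arrive with attributes such as ERROR_CODE and ERROR_MESSAGE.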
Tested: to avoid repeated execution of some code (e.g. running chrome.contextMenus.create repeatedly raises
Unchecked runtime.lastError: Cannot create item with duplicate id
), it needs to be moved into chrome.runtime.onInstalled.addListener.
But some code (like chrome.action.onClicked.addListener) won't run on the next wakeup if it is moved into chrome.runtime.onInstalled.addListener.
If chrome.action.onClicked.addListener is placed at the top level of the service worker, will the listener be added again every time the service worker wakes up? Will there be multiple duplicate listeners? Will the functions in both the newly added listener and the previously added listener be executed?
https://developer.chrome.com/docs/extensions/mv3/service_workers/ says:
A background service worker is loaded when it is needed, and unloaded when it goes idle. Some examples include:
The extension is first installed or updated to a new version.
The background page was listening for an event, and the event is dispatched.
A content script or other extension sends a message.
Another view in the extension, such as a popup, calls runtime.getBackgroundPage.
It says 'unloaded when it goes idle'. Will the previously added listener be unloaded too? If so, how is the service worker awakened again?
Or are only the functions inside the previously added listener unloaded, with an empty shell of the listener kept just to awaken the service worker?
Yes, it reruns anew.
No, there'll be no duplicate listeners.
No multiple threads, no sleeping/suspending/resuming.
The confusion is caused by a rather inept description in the old version of the documentation; it has since been rewritten. What actually happens is that after a certain timeout the service worker is simply terminated. It doesn't "unload" or "resume". It terminates completely and starts fresh.
When it terminates, the JavaScript environment disappears (JS listeners, variables, everything).
When it's started by the browser in reaction to an event to which you subscribed via addListener in the previous run of the SW, your SW script runs in its entirety. Each addListener for a chrome event registers this listener internally. Then the event that woke the worker will be dispatched to the listeners. This is why it is important to register the listeners synchronously in the first task of the event loop when the script starts (the old documentation used a rather arcane term "top-level" from the makers of V8 and oversimplified it to the need to declare the listeners in the global scope of the script, which is not mandatory because you can certainly do it inside a function call as long as it's synchronous).
The contextMenus API is different: the data is saved inside Chrome's internal preferences so there's no need to recreate it on each run, doing it inside chrome.runtime.onInstalled is sufficient. Firefox doesn't save them yet, but I guess they will do it once they implement MV3.
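A minimal sketch of that split (the menu id and log messages are just illustrative):

```
// service_worker.js (MV3)

// One-time setup: contextMenus data is persisted by Chrome,
// so creating the menu on install/update is enough.
chrome.runtime.onInstalled.addListener(() => {
  chrome.contextMenus.create({ id: 'my-menu', title: 'My menu' });
});

// Event listeners are registered synchronously on every run of the
// script, so the browser can dispatch the waking event to them.
// No duplicates accumulate, because each run starts from a fresh
// JavaScript environment.
chrome.action.onClicked.addListener((tab) => {
  console.log('action clicked in tab', tab.id);
});
chrome.contextMenus.onClicked.addListener((info) => {
  console.log('menu item clicked:', info.menuItemId);
});
```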
P.S.
The lifetime duration is 30 seconds after the last incoming external event. Using a runtime port adds another 5 minutes to the timeout. Using native host messaging keeps the service worker alive indefinitely, but it is also possible to emulate a persistent service worker to a degree even without native messaging: more info.
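A minimal sketch of the runtime-port idea, as a content script that keeps reconnecting (the port name is arbitrary):

```
// content_script.js: an open port extends the service worker's
// idle timeout; reconnect whenever the worker is terminated anyway.
function keepAlive() {
  const port = chrome.runtime.connect({ name: 'keep-alive' });
  port.onDisconnect.addListener(keepAlive);
}
keepAlive();
```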
"Another view in the extension, such as a popup, calls runtime.getBackgroundPage."
This is not true anymore in MV3.
I'm using Neo4j. For large data imports from external CSVs, Parquet files, etc., there is a very handy "fire and forget" procedure, apoc.periodic.submit. There is also apoc.periodic.list, which lists the background jobs.
While a background job is executing it appears in the output of apoc.periodic.list, but after it finishes, whether with an error or successfully, it disappears from the list without any feedback about its completion status.
Is there a general way to check a background job's completion status? Is there a more suitable API for my purposes?
If there is a way to directly check the error status of the fire-and-forget routines, I don't see it in the documentation (they are fire-and-forget, so perhaps that comes with the territory?)
Ideas
Don't background the query itself; background a process/task that waits for a blocking Cypher execution to finish and captures the error code...
Check for success instead of failure (if it didn't succeed, you know it failed, right?). This may be evident from what the Cypher does, or you could add a graph content update for this purpose, e.g. update a last_updated property on a node. Do that last, so that if the Cypher fails the property is not updated; see the sketch after this list.
You could enable the query log and then check there to see what happened; most likely the query has a unique signature, and its last execution can easily be found in the log (with status/error code).
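A minimal Cypher sketch of the second idea (the :JobStatus marker node, job name, and CSV path are illustrative):

```
CALL apoc.periodic.submit('nightly-import', '
  LOAD CSV WITH HEADERS FROM "file:///data.csv" AS row
  CREATE (:Record {id: row.id})
  WITH count(*) AS imported
  // only reached if the import above succeeded
  MERGE (s:JobStatus {name: "nightly-import"})
  SET s.last_succeeded = datetime(), s.rows = imported
')
```

If the import throws, the statement rolls back and last_succeeded is never updated, so a stale timestamp signals failure:

```
MATCH (s:JobStatus {name: 'nightly-import'})
RETURN s.last_succeeded, s.rows
```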
We have a setup where we call a web service to create a queue and receive the queue name from the response.
Then we set up a SimpleMessageListenerContainer, set the queue name there, and start it.
However, from time to time the queue is deleted, resulting in a "404 could not declare queue XXXXXXXXX" error. In these cases I need to call the web service again, add the new queue name to the SimpleMessageListenerContainer, and then remove the old one.
The only way I figured out to trigger any code to handle this was to create a custom CachedConnectionFactory and override the shutdownCompleted method.
However, shutdownCompleted also seems to trigger when the SimpleMessageListenerContainer switches over, so it gets stuck in a loop. The ShutdownSignalException passed into shutdownCompleted does not seem to look any different whether the trigger comes externally from the server or from the client handling the new queue, so I can't figure out how to skip the handling on the "second" go.
So what is the usual way to detect and run custom handling when the server kills the queue?
The container publishes a ListenerContainerConsumerFailedEvent when the listener fails.
Add an ApplicationListener<ListenerContainerConsumerFailedEvent>, stop the container, change the queues and restart.
You will likely get multiple events because, by default, the container will try to reconnect 3 times before giving up and stopping itself.
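A minimal sketch of that approach, assuming a hypothetical QueueService wrapper around the web service:

```
import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerFailedEvent;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

@Component
public class QueueRecoveryListener
        implements ApplicationListener<ListenerContainerConsumerFailedEvent> {

    private final SimpleMessageListenerContainer container;
    private final QueueService queueService; // hypothetical web-service client

    public QueueRecoveryListener(SimpleMessageListenerContainer container,
                                 QueueService queueService) {
        this.container = container;
        this.queueService = queueService;
    }

    @Override
    public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) {
        // React only to the final, fatal event for our container,
        // after it has exhausted its reconnect attempts.
        if (event.isFatal() && event.getSource() == container) {
            container.stop();
            String newQueue = queueService.createQueue(); // call the web service again
            container.setQueueNames(newQueue);            // replaces the old queue
            container.start();
        }
    }
}
```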
In my application I use a WinEvent hook to get focus changes system-wide. Because there are no timing problems, I use an out-of-context hook, even though I know it is slow. If multiple events fire quickly one after another, the system queues them and passes them to the hook callback function in the right order.
Now I would like to process only the newest focus change. So if there are already other messages in the queue, I want the callback function to stop and restart with the parameters of the newest message. Is there a way to do that?
When you receive a focus change, create an asynchronous notification to yourself, and cancel any previous notification(s) that may still be pending.
You can use PostMessage() and PeekMessage(PM_REMOVE) for that. Post a custom message to yourself, removing any previous custom message(s) that are still in the queue.
Or, you can use TTimer/SetTimer() to (re)start a timer on each focus change, and then process the last change when the timer elapses.
Either way, only the last notification will be processed once the messages slow down.
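A minimal Win32 sketch of the PostMessage()/PeekMessage() variant in C (the window handle and the custom message id are illustrative):

```
#include <windows.h>

#define WM_APP_FOCUSCHANGED (WM_APP + 1)

static HWND g_hwnd; /* the application's own message window */

/* Out-of-context WinEvent hook callback */
void CALLBACK WinEventProc(HWINEVENTHOOK hHook, DWORD event, HWND hwnd,
                           LONG idObject, LONG idChild,
                           DWORD idEventThread, DWORD dwmsEventTime)
{
    MSG msg;
    /* Drop any focus notifications still waiting in the queue... */
    while (PeekMessage(&msg, g_hwnd, WM_APP_FOCUSCHANGED,
                       WM_APP_FOCUSCHANGED, PM_REMOVE)) {
        /* discard the stale notification */
    }
    /* ...and post a fresh one carrying the newest focus change. */
    PostMessage(g_hwnd, WM_APP_FOCUSCHANGED, (WPARAM)hwnd, (LPARAM)idObject);
}

/* In the window procedure, only the newest notification is handled:
   case WM_APP_FOCUSCHANGED: HandleFocusChange((HWND)wParam); break; */
```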
I am trying to use LuaSocket to connect to an IRC channel and send and receive messages from within my game (Wolfenstein: Enemy Territory, if that helps).
Right now I am able to do all of that, with one problem: once I set it to listen for a message, it basically locks up. I have a fallback command; if I type stoplisten in IRC it just stops the script, and I can see it got all the messages, but the game itself is locked up while waiting for them.
Any ideas on how I would do this without freezing the game? I have only recently learned a little about coroutines, so I do not know if I am using them correctly.
I should also note that I have access to a run-frame function which runs every millisecond, if that helps (though normally it is throttled like: if math.mod(currentTime, 50) ~= 0 then return end).
Here is the part in my code: http://pastebin.com/j1gCqm4R
(I wasn't going to edit all my code with an indent just to post it here, so I just put it on pastebin.)
Your problem is that all sockets are blocking by default, which means they will halt ("block") the current thread of execution (in this case, your game) until they either get the desired result or time out.
The solution is non-blocking sockets: invoke :settimeout(0) on your client socket object, and all future :send(...) and :receive(...) calls will return immediately, having either succeeded or timed out.
The LuaSocket reference contains the full details, but you will have to modify your code either to handle the "timeout" failure state or to add calls to socket.select() to make sure that you only use sockets that are "ready" to be used.
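A minimal sketch of polling a non-blocking socket from the run-frame function (the server address and handler name are illustrative):

```
local socket = require("socket")

local irc = socket.tcp()
irc:connect("irc.example.net", 6667)  -- placeholder server
irc:settimeout(0)                     -- make every operation non-blocking

-- Call this from the game's run-frame hook; it never blocks.
function PollIrc()
  while true do
    local line, err = irc:receive("*l")
    if line then
      HandleIrcLine(line)   -- hypothetical message handler
    elseif err == "timeout" then
      break                 -- nothing waiting; try again next frame
    else
      print("IRC connection lost: " .. tostring(err))  -- e.g. "closed"
      break
    end
  end
end
```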