Why is my GLib timeout sometimes not destroyed? - glib

I use a timeout to perform an action periodically. Sometimes the timeout interval needs to be modified, so I set a new timeout and then destroy the old one by returning False from the callback. However, I have received bug reports that seem to show pretty clearly that the initial timer is sometimes not destroyed, because the action runs at both the old and the new interval. Can you think of any reason this could happen? It is an infrequent occurrence and I cannot reproduce it.
Here's my Python code for the callback. Essentially nothing happens between creating the new timer (which succeeds) and returning False (which seems, rarely, to fail to destroy the original timer).
Since this code was written I have modified it to store the source ID returned by the timeout and to use GLib.SOURCE_CONTINUE and GLib.SOURCE_REMOVE instead of True/False, but that version has not been deployed yet. Nevertheless, I don't think those changes should be relevant here.
def on_interval_timeout(self, user_data):
    # perform action here
    # update timeout if required
    if self.update:
        interval = (self.model.props["interval-min"] * 60 +
                    self.model.props["interval-sec"])
        GLib.timeout_add_seconds(interval, self.on_interval_timeout, None)
        self.update = False
        return False
    return True
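For reference, a minimal sketch of the revised approach described above: store the source ID returned by timeout_add_seconds and return the named constants. The self.timeout_id attribute and the start_interval_timeout helper are illustrative names, not from the original code; both methods are assumed to live on the same class as the callback.

from gi.repository import GLib

def start_interval_timeout(self, interval):
    # Keep the source ID so the active timeout can also be cancelled
    # explicitly with GLib.source_remove(self.timeout_id) if needed.
    self.timeout_id = GLib.timeout_add_seconds(
        interval, self.on_interval_timeout, None)

def on_interval_timeout(self, user_data):
    # perform action here
    # update timeout if required
    if self.update:
        interval = (self.model.props["interval-min"] * 60 +
                    self.model.props["interval-sec"])
        self.update = False
        # schedule the replacement timer, then remove this one
        self.start_interval_timeout(interval)
        return GLib.SOURCE_REMOVE    # destroys the current (old) source
    return GLib.SOURCE_CONTINUE      # keeps the current source running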

Related

Rails application taking more than 30 seconds to respond

I'm making a small Rails application that fetches data for some different languages from the GitHub API.
The problem is that when I click the button that fetches the information, it takes a long time to redirect to the correct page. What I see in the network tab is that the TTFB is actually 30s (!) and the response comes back with status 302.
The controller action that does the logic:
Language.delete_all
search_urls = Introduction.all.map { |introduction| "https://api.github.com/search/repositories?q=#{introduction.name}&per_page=1" }
search_urls.each do |search_url|
  # open(url) on an HTTP URL relies on open-uri being required
  json_file = JSON.parse(open(search_url).read)
  pl = Language.new
  pl.hash_response = json_file['items'].first
  pl.name = pl.hash_response['language']
  pl.save
end
main_languages = %w[ruby javascript python elixir java]
deletable_languages = Introduction.all.reject do |introduction|
  main_languages.include?(introduction.name)
end
deletable_languages.each do |language|
  language.delete
end
redirect_to languages_path
end
I believe the bottleneck is the HTTP requests, which you are doing one by one. You could filter down to the languages you want before generating the URLs, and fetch only those.
However, if the number of URLs after filtering is still large, say 20-50, then assuming each request takes 200ms this is still at least 4 to 10 seconds just for the HTTP requests. That's already too long for the user to wait, so in that case you should move the work into a background job.
If you insist on doing this synchronously, you may consider firing those HTTP requests from multiple threads and joining all the results once the threads have completed (see the sketch below). You get some concurrency here, since the GIL does not block threads that are waiting on IO. But this is quite error-prone, because you need to manage the threads yourself.
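As a rough, language-neutral sketch of that fan-out/join pattern (written in Python only to keep the illustration short; in Ruby the equivalent building blocks are Thread.new and Thread#value), using nothing beyond the standard library:

import json
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(url):
    # one blocking HTTP request; runs on a worker thread
    with urlopen(url) as resp:
        return json.load(resp)

def fetch_all(urls, max_workers=10):
    # fan the requests out across threads, then join all results
    # before continuing; results come back in the same order as urls
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))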

Can Sidekiq run a loop with wait and see a change to the db?

I have a sidekiq worker that waits for a change to happen to a record made by a remote client. Something like the following:
# myworker: async process to wait for the client to confirm status
def perform(myRecordID)
  sendClient(myRecordID)
  didClientAcknowledge = false
  while !didClientAcknowledge
    didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
    if didClientAcknowledge
      break
    end
    # wait for client to perform an update on the record to confirm status
    sleep 5.seconds
  end
  Rails.logger.info("client got the message")
end
My problem is that although I can see that the client has in fact performed the acknowledgement and updated the record with the correct status (ACK_OK), my Sidekiq thread continues to see the old status for myRecord.
I'm sure my logic is flawed here, but it seems like the Sidekiq process does not "see" changes to the DB... yet if I use my Rails console I can see that the client has in fact updated the DB as expected...
Thanks!
Edit 1
OK, so here's a thought: instead of the loop, I'll schedule another call to the worker within 5 seconds. Here's the updated code:
def perform(myRecordID, retry_count)
  retry_count -= 1
  if retry_count < 1
    return
  end
  sendClient(myRecordID)
  didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
  if didClientAcknowledge
    Rails.logger.info("client got the message")
    return
  end
  # wait for the client to update the record, then check again in 5 seconds,
  # passing the decremented retry count along to the next run
  myWorker.perform_in(5.seconds, myRecordID, retry_count)
end
This seems to work, but I will test it a bit more. One challenge is the retry count, which means I need to maintain some sort of state between calls to the worker...
Edit 2: possibly this can be done by passing the time into the first call and then checking whether a timeout has been exceeded before invoking the next instance... assuming time does not stand still inside the async call as well...
Edit 3: Adding the retry_count argument allows us to control how many times this worker will be spawned...

ng-block-ui not working with Angular 7 concatMap

I'm using NgBlockUI and BlockUIHttpModule with blockAllRequestsInProgress set to true in an app I'm working on. In general it's working fine, but on one page I'm using concatMap to perform an action and then update the data. The first request, the update, triggers BlockUI fine, but the second one doesn't. Otherwise everything executes properly; it's just a little jarring for the user, since the results seem to update without warning. Here's the code for the function:
onUpdate(event: items[]) {
  this.updateService.update(event).concatMap(
    _ => this.seachService.search(this.cachedSearch)
  ).subscribe(
    resp => this.handleResponse(resp),
    err => this.handleError(err)
  );
}
I tried calling BlockUI directly, but still no luck. As a last resort, I'm going to make the whole thing one request, but I'd like to at least understand why this isn't working.
This happened to me as well. The issue occurs for sequential HTTP calls (usually with await), where the second request is not blocked by ng-block-ui.
As a fix, I set blockAllRequestsInProgress to false. The behavior is just the same, but setting it to false yields more predictable results:
BlockUIHttpModule.forRoot({
  blockAllRequestsInProgress: false,
  requestFilters: [urlFilter]
}),
I've also updated ng-block-ui to the latest version as of this writing:
"ng-block-ui": "^2.1.8",

WoW WeakAuras Custom Trigger

I am trying to get a trigger that will show when the Sunfire debuff has less time remaining than my Nature's Grace buff. The Lua calls seem to be pulling the correct numbers, but it is constantly returning true?
function ()
    _,_,_,_,_,_,sundur = UnitDebuff("target","Sunfire","player");
    _,_,_,_,_,_,NGDur = UnitAura("player","Nature's Grace");
    if sundur and NGDur then
        if sundur < NGDur + 2 then
            return true
        else
            return false
        end
    end
end
The issue I found was that the addon was allowing the declared variables to be saved globally, which was causing them not to update properly even as I changed them. I also had to change one part of the code, removing the quotes around player in the UnitDebuff "caster" filter:
local _,_,_,_,_,_,sundur = UnitDebuff("target","Sunfire",player);

How is a Documentum method timeout handled?

I have a Documentum dm_method:
create dm_method object
set object_name = 'xxxxxxxxxxx',
set method_verb = 'xxx.yyy.Foo',
set method_type = 'java',
set launch_async = false,
set use_method_server = true,
set run_as_server = true,
set timeout_min = 60,
set timeout_max = 600,
set timeout_default = 500
It is invoked via a dm_job with a period of 600 seconds.
But my method can run for more than 600 seconds (depending on the size of the input data produced by users).
What happens when timeout_max is exceeded for a dm_method implemented in Java?
Does the DFC job manager send Thread.interrupt()?
Does DFC wait for the job to finish and only log a warning?
I didn't find a detailed description in the Documentum documentation.
See the discussion at https://forums.opentext.com/forums/discussion/153860/how-documentum-method-timeout-performed
Actually, it's possible that the Java method will continue running in the JMS after the timeout. However, the Content Server will already have closed the OutputStream where the method can write its response, so you will most likely see errors in the log, and also in the job object if the method was called by a job. Depending on what the method does, it might actually be able to complete whatever it needs to do. However, you should try to set the default timeout to a value that will give your job enough time to complete cleanly.
