Sidekiq: can a job be delayed by milliseconds or another unit of time?

I found examples like the following in the official docs (https://github.com/mperham/sidekiq/wiki/Scheduled-Jobs):
MyWorker.perform_in(3.hours, 'mike', 1)
MyWorker.perform_at(3.hours.from_now, 'mike', 1)
Where and how can I change the unit of time used?
Is it possible to schedule a job in milliseconds or microseconds? If so, how? What is the smallest unit of time I can use?

The time argument is a number of seconds and can be a float, so you can say perform_in(3.5.seconds, ...). That won't do you much good, though, because Sidekiq's scheduler is not meant to be precise:
https://github.com/mperham/sidekiq/wiki/Scheduled-Jobs#checking-for-new-jobs
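For illustration, a minimal sketch (the worker, arguments, and 5 ms value are hypothetical; average_scheduled_poll_interval is the option described on the wiki page above):
# perform_in takes seconds, so a Float expresses sub-second delays; the job
# still won't be enqueued before the scheduler's next poll.
MyWorker.perform_in(0.005, 'mike', 1) # nominally 5 milliseconds

# The poll interval can be lowered, but the scheduler still only wakes up
# every few seconds on average, so millisecond precision is not achievable.
Sidekiq.configure_server do |config|
  config.average_scheduled_poll_interval = 5
end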

Related

How does Locust provide stats over time for load testing?

I was trying to move from Gatling to Locust (Python is a nicer language) for load tests. In Gatling I can get data for charts like 'Requests per second over time' and 'Response time percentiles over time' (https://gatling.io/docs/2.3/general/reports/), including the really useful 'Responses per second over time'.
In Locust I can see the two reports (requests, distribution), where (if I understand correctly) 'Distribution' is the one that covers 'over time'? But I can't see where things started failing, or the early history of the test.
Is Locust able to provide 'over time' data in CSV format (or something else easily graphable)? If so, how?
I looked through the logs; I can output the individual commands, but it would be a pain to assemble them (which would push the balance toward 'just use Gatling').
I looked over https://buildmedia.readthedocs.org/media/pdf/locust/latest/locust.pdf but did not spot it.
I can (and have) created a loop that triggers the locust call at incremental intervals:
from os import system

increment_user_count = [1, 10, 100, 1000]
# for total_users in range(user_min, user_max, increment_count):
for users in increment_user_count:
    # [...] assemble the locust command for this user count
    system(assembled_command)
And that works... but it loses the whole advantage of setting a spawn rate, and it would be painful for gradually incrementing up to a large number (and then having to assemble all the output files back together).
Currently I'm executing with something like:
locust -f locust_base_testing.py --no-web -c 1000 -r 2 --run-time 8m30s --only-summary --csv=output_stats_20190405-130352_1000
(need to use this in automation so Web UI is not a viable use-case)
I would expect a flag, in the call or in some form of setup, that outputs the summary at regular ticks. Basically, with --no-web, I'd expect to get the data I could use to replicate the graphs the web version seems to know about.
Actual: just one final summary of the overall test (and logs per individual call).
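One workaround sketch, assuming a pre-1.0 Locust where locust.stats.global_stats exposes the aggregated counters (attribute names vary by version; the path and interval are placeholders): sample the cumulative totals on a timer from the locustfile, then difference successive rows to derive rates over time.

import time
import gevent
from locust.stats import global_stats  # assumption: pre-1.0 Locust API

def sample_stats(interval=5, path='stats_over_time.csv'):
    # Append one row of cumulative counters every `interval` seconds;
    # request/failure rates follow by differencing successive rows.
    with open(path, 'w') as f:
        f.write('epoch,total_requests,total_failures\n')
        while True:
            f.write('%d,%d,%d\n' % (
                time.time(),
                global_stats.num_requests,
                global_stats.num_failures,
            ))
            f.flush()
            gevent.sleep(interval)

# Module level in the locustfile: the greenlet samples alongside the test.
gevent.spawn(sample_stats)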

Measure the performance of a Redis Lua script

Is there any way to measure the performance of a Redis Lua script?
I have a Lua script, and I ended up with a slightly different implementation; I am wondering if there is any way to measure which of the two implementations is faster.
You can call Redis' TIME command to perform in-script "benchmarking". Something like the following should work:
local start = redis.call('TIME')
-- your logic here
local finish = redis.call('TIME')
return finish[1]-start[1]
I read in the comments that someone mentioned finish[2]-start[2], which is not a good idea, because [2] holds "the amount of microseconds already elapsed in the current second" and not the entire timestamp (so if we finish in a different second, this calculation fails).
Based on: https://redis.io/commands/TIME
To get the elapsed time in microseconds, I would do:
local start = redis.call('TIME')
-- your logic here
local finish = redis.call('TIME')
return (finish[1] - start[1]) * 1000000 + (finish[2] - start[2])
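To compare your two implementations, it helps to loop the logic inside the script so the measured interval spans many iterations rather than single-call jitter. A sketch (the GET call and iteration count are placeholders; note that on Redis before 5.0, write commands are rejected after a non-deterministic command such as TIME):
local iterations = 10000
local start = redis.call('TIME')
for i = 1, iterations do
  redis.call('GET', KEYS[1]) -- the logic under test
end
local finish = redis.call('TIME')
-- total elapsed microseconds across all iterations
return (finish[1] - start[1]) * 1000000 + (finish[2] - start[2])
Run it with, e.g., redis-cli --eval bench.lua mykey (bench.lua being a hypothetical filename) and divide the result by the iteration count for a per-call figure.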

How to calculate CPU time in elixir when multiple actors/processes are involved?

Let's say I have a function which does some work by spawning multiple processes. I want to compare CPU time vs real time taken by this function.
def test do
  prev_real = System.monotonic_time(:millisecond)
  # Code to complete some task
  # Spawn different processes & give each process some task
  # Receive result
  # Finish task
  current_real = System.monotonic_time(:millisecond)
  diff_real = current_real - prev_real
  IO.puts "Real time " <> to_string(diff_real)
  IO.puts "CPU time ?????"
end
How do I calculate the CPU time required by the given function? I am interested in the CPU time / real time ratio.
If you are just trying to profile your code rather than implement your own profiling framework, I would recommend using existing tools like:
fprof, which will give you information about the time spent in functions (real and own); see the sketch below
percept, which will show you which processes in your system were working at any given time, and on what
xprof, which is designed to help you find which calls to your function cause it to take more time (i.e., trigger an inefficient branch of code)
These take advantage of erlang:trace, to figure out which function is being executed and for how long, and of erlang:system_profile with runnable_procs, to determine which processes are currently running. You might start a function, hit a receive, or be preemptively rescheduled and wait without doing any actual work. Combining those two can be complicated, so I would recommend trying the existing tools before gluing together your own.
You could also look into tools like erlgrind and eflame if you are looking for more visual representations of your calls.
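For instance, a minimal sketch of driving fprof from Elixir (MyModule is a placeholder for the module holding test/0; the output filename is arbitrary):
# Profile the call; fprof can also follow processes spawned during it,
# so time spent in the worker processes shows up in the analysis.
:fprof.apply(&MyModule.test/0, [])
:fprof.profile()
:fprof.analyse(dest: 'fprof.analysis')
The resulting file lists, per function, accumulated time (ACC) and own time (OWN), which you can relate to the wall-clock time you already measure.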

Quartz.net job to start daily at given time with millisecond interval

I'm trying out Quartz.net, which might solve an issue of mine. However, I cannot seem to find a way to start a job at a given time of day (07:30) that runs a number of times (1000) with an interval in milliseconds (1 ms).
I've tried a CronSchedule, but intervals cannot be set. With SimpleSchedule, a start time cannot be set, and with DailyTimeIntervalSchedule I cannot set the interval in milliseconds. I've also tried combining the various options with the fluent API, to no avail.
Is what I'm trying to achieve actually not possible in Quartz.net?
This might be what you are after:
IJobDetail theJobToRun = JobBuilder.Create<NoOpJob>().Build();

var trigger = TriggerBuilder.Create()
    .StartAt(DateBuilder.DateOf(7, 30, 0))
    .WithSimpleSchedule(x => x
        .WithInterval(TimeSpan.FromMilliseconds(1))
        .WithRepeatCount(999))
    .ForJob(theJobToRun)
    .Build();
Just as sgmoore noted, you might not get millisecond precision, as your thread pool will be saturated with jobs, and it all depends on how much work they do. The Quartz.NET infrastructure will also take its own time checking fire times and instantiating jobs.
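To actually run it, the job and trigger still have to be handed to a scheduler, e.g. (a sketch using the Quartz.NET 2.x synchronous API; in 3.x these calls return Tasks and are awaited):
IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
scheduler.Start();
scheduler.ScheduleJob(theJobToRun, trigger);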

Construct Quartz.Net Cron Expression for a requirement

I have a requirement to send email reminders to customers. I'm trying to trigger a Quartz job based on a datetime that is X weeks after an event. In the Quartz job, I'm supposed to check whether some condition happened or not. If the condition is false (e.g., no action from the customer), I have to send another reminder Y weeks later. I then check the same condition again; if it is still false, I send a last reminder at a specific datetime that is known to me right at the start of this whole process.
Any idea how to construct the cron expression? Thanks
I suppose you're using C#, right?
You can use this cron:
var CronReminderExpression = string.Format("0 0 9 1/{0} * ? *", (PeriodicityLength*7).ToString());
where PeriodicityLength is the number of weeks. I am multiplying by 7 because there is no proper cron expression for weeks, or at least I haven't been able to find one.
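For example, with PeriodicityLength = 2 the format call yields "0 0 9 1/14 * ? *", which fires at 09:00 on days 1, 15, and 29 of each month. Note that the day-of-month step restarts at every month boundary, so this only approximates "every X weeks". A sketch of plugging the expression into a trigger (the values are hypothetical):
int periodicityLength = 2; // weeks
string cron = string.Format("0 0 9 1/{0} * ? *", periodicityLength * 7);
// cron == "0 0 9 1/14 * ? *"
var reminderTrigger = TriggerBuilder.Create()
    .WithCronSchedule(cron)
    .Build();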
You can find cron expression builders online to help with the syntax.
Quartz.net 2.0 supports a new trigger called CalendarIntervalTrigger, which fires at fixed intervals measured in calendar units such as weeks; see the Quartz.net documentation for details.
You can also chain jobs, so that each reminder job schedules the next check.
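A CalendarIntervalTrigger sidesteps the day-of-month restart entirely. A sketch, assuming the Quartz.net 2.x fluent API (start time and interval are placeholders):
var trigger = TriggerBuilder.Create()
    .StartAt(DateBuilder.DateOf(9, 0, 0)) // 09:00, hypothetical start
    .WithCalendarIntervalSchedule(x => x
        .WithIntervalInWeeks(2))          // X = 2 weeks
    .Build();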
