How to calculate CPU time in Elixir when multiple actors/processes are involved?

Let's say I have a function which does some work by spawning multiple processes. I want to compare CPU time vs real time taken by this function.
def test do
  prev_real = System.monotonic_time(:millisecond)
  # Code to complete some task
  # Spawn different processes & give each process some task
  # Receive result
  # Finish task
  current_real = System.monotonic_time(:millisecond)
  diff_real = current_real - prev_real
  IO.puts "Real time " <> to_string(diff_real)
  IO.puts "CPU time ?????"
end
How do I calculate the CPU time required by the given function? I am interested in calculating the CPU time/real time ratio.

If you are just trying to profile your code rather than implement your own profiling framework, I would recommend using already existing tools like:
fprof, which will give you information about time spent in functions (real and own)
percept, which will provide information about which processes in your system were working at any given time and on what
xprof, which is designed to help you find which calls to your function cause it to take more time (trigger an inefficient branch of code).
They take advantage of both erlang:trace, to figure out which function is being executed and for how long, and erlang:system_profile with runnable_procs, to determine which processes are currently running. You might start a function, hit a receive, or be preemptively rescheduled and wait without doing any actual work. Combining those two can be complicated, so I would recommend using already existing tools before trying to glue together your own.
You could also look into tools like erlgrind and eflame if you are looking for more visual representations of your calls.
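That said, if all you want is the overall CPU time/real time ratio rather than a per-function profile, :erlang.statistics/1 (discussed in more detail in the uptime answer below) gives a crude measurement. A minimal sketch, with the caveat that :runtime counts CPU time for the whole VM, not just your own processes:
def test do
  # Reset both counters; statistics/1 returns {total, time_since_last_call}.
  :erlang.statistics(:runtime)
  :erlang.statistics(:wall_clock)

  # Spawn processes, hand out work, receive results...

  {_, cpu_ms} = :erlang.statistics(:runtime)     # scheduler CPU time, summed over all threads
  {_, real_ms} = :erlang.statistics(:wall_clock) # elapsed wall-clock time
  IO.puts "Real time #{real_ms}"
  IO.puts "CPU time #{cpu_ms}"
  IO.puts "CPU/Real ratio #{cpu_ms / max(real_ms, 1)}"
end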

Related

Wrapper for slow indicator for backtesting

If a technical indicator works very slowly, and I wish to include it in an EA (using iCustom()), is there some "wrapper" that could cache the indicator results to a file based on the particular indicator inputs?
This way I could get a better speed next time when I backtest it using the same set of parameters, since the "wrapper" could read the result from file rather than recalculate the result from the indicator.
I heard that some developers did that for their needs in order to speed up backtesting, but as far as I know, there's no publicly available solution.
If I had to solve this problem, I would create a class with two fields (datetime and indicator value, or N buffers of the indicator), and a collection class similar to CArrayObj.mqh but with an option to apply binary search, or to start looking for an element from a specific index rather than from the very beginning of the array.
Recent MT4 Builds added VERY restrictive conditions for Indicators
In the early years of MT4, this was not as cruel as it is these days.
FACT#1: fileIO is 10,000x ~ 100,000x slower than memIO:
This means there is no benefit from "pre-caching" values to disk.
FACT#2: Processing Performance has HARD CEILING:
All, yes ALL, Custom Indicators, that are being used in MetaTrader4 Terminal ( be it directly in GUI, or indirectly, via Template(s) or called via iCustom() calls & in Strategy Tester via .tpl + iCustom() ) ALL THESE SHARE A SINGLE THREAD ...
FACT#3: Strategy Tester has the most demanding needs for speed:
Thus - eliminate all, indeed ALL, non-core indicators from the tester.tpl template and save it as "blank", to avoid any part of such non-core processing.
Next, re-design the Custom Indicator, where possible, so as to avoid any CPU-ops & MEM-allocation(s) that are not necessary.
I remember a Custom Indicator design with deep convolutions, which could have been re-engineered so as to keep just a triangular sparse-matrix with the necessary updates; that increased the speed of Indicator processing by more than 10,000x, so code-revision is the way.
So, rather run a separate MetaTrader4 Terminal, just for BackTesting, than have to wait for many hours just due to the incompressible nature of numerical processing stuck in a traffic-jam in the shared use of the CustomIndicator-solo-Thread, which no scheduling could improve.
FACT#4: O/S can increase a process priority:
Having got to the Devil's zone, it is common practice to spin up the PRIO for the Strategy Tester MT4 process, up to the "RealTime PRIO", in the O/S tools.
One may even additionally "lock" this MT4 process onto certain CPU-core(s) and set up all other processes with adjacent CPU-core-AFFINITY, so that these two distinct groups of processes do not jump onto the other group's CPU-core(s). Hard, but if squeezing performance to the bleeding edge, this is a must.
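For illustration, on Windows both steps can be done when launching the Strategy Tester terminal with the start command (the install path below is hypothetical; the 0x1 affinity mask pins the process to CPU core 0):
rem Hypothetical install path; /HIGH raises priority, /AFFINITY 0x1 pins to core 0
start "MT4-Tester" /HIGH /AFFINITY 0x1 "C:\MT4-Tester\terminal.exe"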

How do you check the uptime of a Phoenix/Elixir/Erlang application?

How do you check the uptime of a Phoenix/Elixir/Erlang application? If you do :observer.start() and look at the System tab, you can see the uptime in the Statistics area. But I want to be able to pull that information programmatically and include it in a report. I've figured out where to get most of that data from, but I don't see where it pulls uptime from.
You can use either statistics(runtime):
Returns information about runtime, in milliseconds.
This is the sum of the runtime for all threads in the Erlang runtime
system and can therefore be greater than the wall clock time.
Or statistics(wall_clock):
Returns information about wall clock. wall_clock can be used in the
same manner as runtime, except that real time is measured as opposed
to runtime or CPU time.
In both cases, you need to call them at the beginning of your program in order to reset their timers. When you want to print the elapsed time, just do:
{_, Time1} = statistics(runtime).
Or
{_, Time2} = statistics(wall_clock).
You will then have the time in Time1 or Time2, respectively. For more information, take a look at erlang:statistics/1.
Note: If you want the total time elapsed since the Erlang VM started you can take the first element from the tuple: {Total_Time, Time_Since_Last_Call} = statistics(wall_clock).
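In Elixir/Phoenix terms, a minimal sketch of pulling that number (the same uptime that :observer's Statistics tab shows) would be:
# Total wall-clock time since the VM started, in milliseconds
{uptime_ms, _since_last_call} = :erlang.statistics(:wall_clock)
IO.puts "Uptime: #{div(uptime_ms, 1000)} s"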

Questions about the nextTuple method in the Spout of Storm stream processing

I am developing some data analysis algorithms on top of Storm and have some questions about the internal design of Storm. I want to simulate sensor data generation and processing in Storm, so I use a Spout to push sensor data into the downstream bolts at a constant time interval by sleeping in the Spout's nextTuple method. But from the experiment results, it appeared that the spout didn't push data at the specified rate. In the experiment, there was no bottleneck bolt in the system.
Then I checked some material about the ack and nextTuple methods of Storm. Now my question is: is the nextTuple method called only when the previous tuples are fully processed and acked in the ack method?
If this is true, does it mean that I cannot set a fixed time interval to emit data?
Thx a lot!
My experience has been that you should not expect Storm to make any real-time guarantees, including in your case the rate of tuple processing. You can certainly write a spout that only emits tuples on some time schedule, but Storm can't really guarantee that it will always call on the spout as often as you would like.
Note that nextTuple should be called whenever there is room available for more pending tuples in the topology. If the topology has free capacity, I would expect Storm to try to fill it up if it can with whatever it can get.
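For example, if you just want to cap the spout's emit rate regardless of how often Storm calls nextTuple, one common pattern is to track the last emit time inside the spout and skip emitting until the interval has elapsed. A rough sketch (class and field names are made up; imports assume a Storm 1.x-style org.apache.storm API):
import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class ThrottledSensorSpout extends BaseRichSpout {
    private static final long INTERVAL_MS = 1000L; // desired emit interval
    private SpoutOutputCollector collector;
    private long lastEmit = 0L;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        long now = System.currentTimeMillis();
        if (now - lastEmit < INTERVAL_MS) {
            Utils.sleep(10); // back off briefly instead of blocking the spout thread
            return;
        }
        lastEmit = now;
        collector.emit(new Values(Math.random())); // placeholder for a real sensor reading
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("reading"));
    }
}
Even with this, Storm only promises to call nextTuple when there is capacity, so treat the interval as a lower bound rather than a guarantee.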
I had a similar use-case, and the way I accomplished it was by using a tick tuple (TICK_TUPLE):
Config tickConfig = new Config();
tickConfig.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 15);
...
...
builder.setBolt("storage_bolt", new S3Bolt(), 4).fieldsGrouping("shuffle_bolt", new Fields("hash")).addConfigurations(tickConfig);
Then in my storage_bolt (note it's written in Python, but you will get the idea) I check whether the message is a tick tuple; if it is, I execute my code:
def process(self, tup):
    if tup.stream == '__tick':
        # Your logic that needs to be executed every 15 seconds,
        # or whatever you specified in tickConfig.
        # NOTE: the maximum time is 600 s.
        storm.ack(tup)
        return

Can I profile Lua scripts running in Redis?

I have a cluster app that uses a distributed Redis back-end, with dynamically generated Lua scripts dispatched to the redis instances. The Lua component scripts can get fairly complex and have a significant runtime, and I'd like to be able to profile them to find the hot spots.
SLOWLOG is useful for telling me that my scripts are slow, and exactly how slow they are, but that's not my problem. I know how slow they are, I'd like to figure out which parts of them are slow.
The Redis EVAL docs are clear that Redis does not export any timekeeping functions to Lua, which makes it seem like this might be a lost cause.
So, short of a custom fork of Redis, is there any way to tell which parts of my Lua script are slower than others?
EDIT
I took Doug's suggestion and used debug.sethook - here's the hook routine I inserted at the top of my script:
redis.call('del', 'line_sample_count')
local function profile()
  local line = debug.getinfo(2)['currentline']
  redis.call('zincrby', 'line_sample_count', 1, line)
end
debug.sethook(profile, '', 100)
Then, to see the hottest 10 lines of my script:
ZREVRANGE line_sample_count 0 9 WITHSCORES
If your scripts are processing bound (not I/O bound), then you may be able to use the debug.sethook function with a count hook:
The count hook: is called after the interpreter executes every
count instructions. (This event only happens while Lua is executing a
Lua function.)
You'll have to build a profiler based on the counts you receive in your callback.
The PepperfishProfiler would be a good place to start. It uses os.clock which you don't have, but you could just use hook counts for a very crude approximation.
This is also covered in PiL 23.3 – Profiles
In standard Lua you can't do this with the built-in functions - os.time only returns whole seconds. So there are two options available: you either write your own Lua extension DLL to return the time in msec, or:
You can do a basic benchmark using a millisecond-resolution time. You can access the current millisecond time with LuaSocket. Though this adds a dependency to your project, it's an effective way to do trivial benchmarking.
require "socket"
t = socket.gettime();
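For example, to time a block of code with it (a sketch; socket.gettime returns seconds as a floating-point number):
require "socket"
local t0 = socket.gettime()
-- code under test goes here
print(string.format("elapsed: %.3f ms", (socket.gettime() - t0) * 1000))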

Rails high memory usage

I am planning on using Delayed Job to run some background analytics. In my initial test I saw a tremendous amount of memory usage, so I basically created a very simple task that runs every 2 minutes just to observe how much memory is being used.
The task is very simple and the analytics_eligible? method always returns false, given where the data is now, so basically none of the heavy-hitting code is being called. I have around 200 Posts in my sample data in development. Post has_one analytics_facet.
Regardless of the internal logic/business here, the only thing this task is doing is calling the analytics_eligible? method 200 times every 2 minutes. In a matter of 4 hours my physical memory usage is at 110MB and virtual memory at 200MB. Just for doing something this simple! I can't even begin to imagine how much memory this will eat if it's doing real analytics on 10,000 Posts with real production data!! Granted it may not run every 2 minutes, more like every 30, but still I don't think it will fly.
This is running ruby 1.9.7, rails 2.3.5 on Ubuntu 10.x 64 bit. My laptop has 4GB memory, dual core CPU.
Is rails really this bad or am I doing something wrong?
Delayed::Worker.logger.info('RAM USAGE Job Start: ' + `pmap #{Process.pid} | tail -1`[10,40].strip)
Post.not_expired.each do |p|
  if p.analytics_eligible?
    # this method is never called
    Post.find_for_analytics_update(p.id).update_analytics
  end
end
Delayed::Worker.logger.info('RAM USAGE Job End: ' + `pmap #{Process.pid} | tail -1`[10,40].strip)
Delayed::Job.enqueue PeriodicAnalyticsJob.new(), 0, 2.minutes.from_now
Post Model
def analytics_eligible?
  vf = self.analytics_facet
  if self.total_ratings > 0 && vf.nil?
    return true
  elsif !vf.nil? && vf.last_update_tv > 0
    ratio = self.total_ratings / vf.last_update_tv
    if (ratio - 1) >= Constants::FACET_UPDATE_ELIGIBILITY_DELTA
      return true
    end
  end
  return false
end
ActiveRecord is fairly memory-hungry - be very careful when doing selects, and be mindful that Ruby automatically returns the last statement in a block as the return value, potentially meaning that you're passing back an array of records that get saved as a result somewhere and thus aren't eligible for GC.
Additionally, when you call "Post.not_expired.each", you're loading all your not_expired posts into RAM. A better solution is find_in_batches, which specifically only loads X records into RAM at a time.
Fixing it could be something as simple as:
def do_analytics
  Post.not_expired.find_in_batches(:batch_size => 100) do |batch|
    batch.each do |post|
      if post.analytics_eligible?
        # this method is never called
        Post.find_for_analytics_update(post.id).update_analytics
      end
    end
  end
  GC.start
end

do_analytics
A few things are happening here. First, the whole thing is scoped in a function to prevent variable collisions from holding onto references from the block iterators. Next, find_in_batches retrieves batch_size objects from the DB at a time, and as long as you aren't building references to them, they become eligible for garbage collection after each iteration runs, which will keep total memory usage down. Finally, we call GC.start at the end of the method; this forces the GC to start a sweep (which you wouldn't want to do in a realtime app, but since this is a background job, it's okay if it takes an extra 300ms to run). It also has the very distinct benefit of returning nil, which means that the result of the method is nil, which means we can't accidentally hang on to AR instances returned from the finder.
Using something like this should ensure that you don't end up with leaked AR objects, and should vastly improve both performance and memory usage. You'll want to make sure you aren't leaking elsewhere in your app (class variables, globals, and class references are the worst offenders), but I suspect that this'll solve your problem.
All that said, this is a cron problem (periodic recurring work), rather than a DJ problem, in my opinion. You can have a one-shot analytics parser that runs your analytics every X minutes with script/runner, invoked by cron, which very neatly cleans up any potential memory leaks or misuses per-run (since the whole process terminates at the end).
Loading data in batches and using the garbage collector aggressively as Chris Heald has suggested is going to give you some really big gains, but another area people often overlook is what frameworks they're loading in.
Loading a default Rails stack will give you ActionController, ActionMailer, ActiveRecord and ActiveResource all together. If you're building a web application you may not be using all of these, but you're probably using most.
When you're building a background job, you can avoid loading things you don't need by creating a custom environment for that:
# config/environments/production_bg.rb
config.frameworks -= [ :action_controller, :active_resource, :action_mailer ]
# (Also include config directives from production.rb that apply)
Each of these frameworks will just be sitting around waiting for an email that will never be sent, or a controller that will never be called. There's simply no point in loading them. Adjust your database.yml file, set your background job to run in the production_bg environment, and you'll have a much cleaner slate to start with.
Another thing you can do is use ActiveRecord directly without loading Rails at all. This might be all that you need for this particular operation. I've also found that using a lightweight ORM like Sequel keeps your background job very light if you're mostly doing SQL calls to reorganize records or delete old data. If you need access to your models and their methods, though, you will need to use ActiveRecord. Sometimes it's worth re-implementing simple logic in pure SQL for reasons of performance and efficiency.
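As a rough illustration of that last point (the connection string, table, and column names below are all hypothetical), a Sequel-only maintenance job can stay very small:
require "sequel"

# Hypothetical connection string and schema.
DB = Sequel.connect(ENV["DATABASE_URL"])

# Purge stale analytics rows and flag expired posts, without loading any AR models.
DB[:analytics_facets].where { last_update_tv < 1 }.delete
DB[:posts].where(:expired => true).update(:analytics_pending => false)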
When measuring memory usage, the only number to be concerned with is "real" memory. The virtual amount contains shared libraries and the cost of these is spread amongst every process using them even though it is counted in full for each one.
In the end, if running something important takes 100MB of memory but you can get it down to 10MB with three weeks of work, I don't see why you'd bother. 90MB of memory costs at most about $60/year on a managed provider which is usually far less expensive than your time.
Ruby on Rails embraces the philosophy of being more concerned with your productivity and your time than about memory usage. If you want to trim it back, put it on a diet, you can do it but it will take a bit of effort.
If you are experiencing memory issues, one solution is to use another background processing technology, like Resque. It is the BG processing library used by GitHub.
Thanks to Resque's parent / child architecture, jobs that use too much memory release that memory upon completion. No unwanted growth.
How?
On certain platforms, when a Resque worker reserves a job it immediately forks a child process. The child processes the job then exits. When the child has exited successfully, the worker reserves another job and repeats the process.
You can find more technical details in the README.
It is a fact that Ruby consumes (and leaks) memory. I don't know if you can do much about it, but at least I recommend that you take a look at Ruby Enterprise Edition.
REE is an open source port which promises "33% less memory" among all the other good things. I have used REE with Passenger in production for almost two years now and I'm very pleased.
