Rails: Troubleshooting a memory leak on Heroku (perhaps Nokogiri) - ruby-on-rails

I am using Rails 3.1.1 and deploying on Heroku. I am using open-uri and Nokogiri.
I am trying to troubleshoot a memory leak (?) that occurs while fetching and parsing an XML file. The XML feed I am fetching and parsing is 32 MB.
I am using the following code for it:
require 'open-uri'
open_uri_fetched = open(feed.fetch_url)
xml_list = Nokogiri::HTML(open_uri_fetched)
where feed.fetch_url is an external xml-file.
It seems that while parsing xml_list with Nokogiri (the last line in my code) the memory usage explodes to 540 MB and keeps increasing. That doesn't seem logical, since the XML file is only 32 MB.
I have looked all over for ways to analyze this better (e.g. ruby/ruby on rails memory leak detection) but I can't understand how to use any of them. MemoryLogic seems simple enough but the installation instructions seem to lack some info...
So, please help me to either determine whether the code above should use that much memory or (super simple) instructions on how to find the memory leak.
Thanks in advance!

Parsing a large XML file and turning it into a document tree will in general create an in-memory representation that is far larger than the XML data itself. Consider for example
<foo attr="b" />
which is only 16 bytes long (assuming a single-byte character encoding). The in-memory representation of this document will include an object to represent the element itself, probably an (empty) collection of children, and a collection of attributes for that element containing at least one entry. The element itself has properties like its name and namespace, pointers to its parent document, and so on. The data structures for each of those things are probably going to be over 16 bytes, even before they're wrapped in Ruby objects by Nokogiri (each of which has a memory footprint that is almost certainly >= 16 bytes).
If you're parsing large XML files you almost certainly want to use an event-driven parser, such as a SAX parser, that responds to elements as they are encountered in the document rather than building a tree representation of the entire document and then working on that.
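For illustration, here is a minimal sketch of that approach with Nokogiri's SAX parser (a sketch only: it assumes the feed parses cleanly as XML and that you care about, say, title elements; the handler class and element name are made up and need adapting to the real feed):
require 'open-uri'
require 'nokogiri'

# Streaming handler: Nokogiri calls these methods as it reads the document,
# so no full tree is ever built in memory.
class FeedHandler < Nokogiri::XML::SAX::Document
  def start_element(name, attrs = [])
    @in_title = (name == 'title')   # 'title' is a placeholder element name
  end

  def characters(text)
    puts text.strip if @in_title
  end

  def end_element(name)
    @in_title = false if name == 'title'
  end
end

parser = Nokogiri::XML::SAX::Parser.new(FeedHandler.new)
parser.parse(open(feed.fetch_url))  # accepts the IO that open-uri returns
Memory stays roughly constant regardless of feed size, because only the current element's state is held at any one time.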

Are you sure you aren't running up against the upper limits of what heroku allows for 'long running tasks'?
I've timed out and had stuff just fail on me all the time due to some of the restrictions heroku puts on the freebie people.
I mean, can you replicate this in your dev? How long does it take on your machine to do what you want?
EDIT 1:
What is this, by the way?
open_uri_fetched = open(feed.fetch_url)
Where is the URL it is fetching? Does it bork there or on the actual Nokogiri call? How long does the fetch take, anyway?

Related

Zlib ruby - how to check if data is deflated/compressed before processing it?

I'm in a situation where some data in a database is not compressed, but I want to enable compression for new data coming in, without having to update all records currently in the database to make them compressed as well.
So I need to be able to say, if deflated, inflate it and process it, else just process it. But I can't see how to gracefully check whether the data is already compressed before trying to process it, unless I do a 'begin ... rescue' block:
begin
  process(Zlib::Inflate.inflate(data))
rescue Zlib::DataError
  process(data)
end
Is there a better way? I've seen references to magic numbers and checking the first couple of bytes of the file, but no good examples of how to achieve these things in ruby. Any help appreciated. Thanks.
What you propose is the best way. It will very quickly determine that the first two bytes are not a zlib header. If by accident the input data does start with what looks like a zlib header (about a 1 in 1024 chance), then for random data the decompression will almost always detect an invalid deflate stream within on the order of 30 bytes.
As you suggested, you could either rescue the specific exception or manually validate the file type by reading the magic numbers.
Ruby IO's readpartial can read the specified number of bytes, which you can compare with the magic number.
I personally would stick to rescuing the exception as a lot of the core libraries perform the same magic number check before raising the exception.
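For completeness, here is a rough sketch of the magic-number variant (an assumption-laden sketch: it presumes the stored data is raw zlib-wrapped rather than gzip, and zlib_compressed? is a made-up helper name):
require 'zlib'

# A zlib stream starts with a two-byte header: the low nibble of the first
# byte (CMF) must be 8 (deflate), and CMF*256 + FLG must be divisible by 31.
def zlib_compressed?(data)
  return false if data.bytesize < 2
  cmf, flg = data.byteslice(0, 2).unpack('C2')
  (cmf & 0x0f) == 8 && ((cmf << 8) | flg) % 31 == 0
end

payload = zlib_compressed?(data) ? Zlib::Inflate.inflate(data) : data
process(payload)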

HHVM statically typing lookup tables and keeping them fully cached in RAM

I'm doing scientific research, processing through millions of combinations of multi-megabyte arrays.
For you to be capable of answering this question you will need to have knowledge/experience of all of the following:
how HHVM is able to cache data structures in RAM between requests
how to tell HHVM data structures will be constant
how to declare array index and value types
I need to process the entire arrays, so it's a lot of data to be loaded and processed (millions of requests within minutes on a LAN). The faster I can complete requests, the quicker I can complete my work. If HHVM has to do the work of loading this data on each request, it accounts for a significant fraction of the time to complete the request (sometimes more than half, depending on the complexity of the analysis I'm doing at the time).
I have found a method that has allowed me to keep these data structures cached in RAM (no loading from files, no interpreting code, no pushing to the array hundreds of thousands of times for no reason, no pointless repetitive unserialize, etc.), and thus I have eliminated this massive, measurable delay.
I have 3 questions regarding how I can make this even faster:
Is the way I'm doing it now creating a global scope penalty?
How can I declare my arrays as constant and tell HHVM what data types to expect?
If I declare my arrays as constant is it even necessary to declare the types for HHVM?
Instead of using nested arrays, would it be faster to use 3 separate data structures (ImmVector, PackedArray), or to define a class?
Keep in mind that anything that prevents HHVM from caching the data structure in RAM between requests should be regarded as unacceptable.
Lookuptable35543.php
<?php
$data = [
["uuid (20 chars)", 5336, 7373],
["uuid (20 chars)", 5336, 7373],
#more lines as above
];
?>
Some of these files are many MB in size, and there are a lot of them.
Main.php
<?php
function main() {
    require '/path/to/Lookuptable35543.php';
    # (Do stuff with $data)
}
?>
This is working quite well: as Main.php gets thousands of requests in a short period of time, HHVM keeps Lookuptable.php's data structure in memory, avoiding pointless processing and IO since the data just sits in RAM, ready for use. (I have more than enough RAM.)
Unfortunately, the only way I know to make HHVM hold the lookup table in RAM is to set $data in the global scope inside my lookup####.php file (and then require the lookup file into a function in the data-processing file, Main.php). This way HHVM doesn't bother reloading the file or re-executing the code to create $data, because it can see that $data can be determined at compile time and will never change during runtime. This works, but I don't know whether there is a penalty from having $data exist in the lookup###.php file's global scope. (Or maybe it's not global at all, because it is required into Main.php's function?)
What if I return $data from a function inside Lookup.php and call that function from Main.php, like this:
Main.php
Would the HHVM JIT the result of getData() in RAM?
Somehow I associate functions with unpredictability... but maybe HHVM is clever enough to know that the function's result can be determined at compile time and never changes?
I can't put the lookup table inside Main.php because I require different lookup tables based on the type of request.
Is there a way I can tell HHVM that my outer array will always have an integer index that never changes, and the values of the outer array will always be an array?
Perhaps I need to use ImmVector?
Then is there a way to tell HHVM that my inner array will always be a fixed length string followed by 2 integers, always, no extra elements, contents never changes?
I'd prefer not to use OO or create a class. How can I declare types, procedural style?
If a class is absolutely necessary can you please give example code suitable for my requirements above?
Will it be faster if I don't nest arrays?
I just realized I could have one array with integer index and values of fixed length string. Then a 2nd array with integer index and integer values, and a 3rd one with integer index and integer values.
If you're not familiar with this HHVM caching technique, please do not waste mutual time suggesting a database, Redis, APC, unserialize, etc. The fastest option is for HHVM to just keep my various $data variables in RAM. Even unserializing $data from a ramdisk file is slow, because the entire data structure must then be parsed as a string and converted into a data structure in memory for every request. APC has the same problem as far as I know. I don't even want to have to copy $data. The lookup tables are immutable, read only; they must just stay fully structured in RAM. My current caching solution (at the top of this question) has already given me huge gains, but as per my 3 questions I think there may be more gains to be had?
In case you're wondering, I have measured the latency of various data loading and caching methods.
Now I basically want to keep the caching situation I have, but give the HHVM JIT maximum confidence about how to type my data, so it can save time not running type or even bound (array size) checks.
Edit
Ok so nobody has been able to give me any code examples yet, so I'm just trying stuff out.
Here's what I've found out so far.
const arrays don't work yet in HHVM: const foo = ['uuid1',43,43]; throws an error about HHVM only supporting constants with scalar values.
Vector with Array values: I don't know how it will perform yet... I expect it will be better than a normal array. This is valid HH code.
This is progress, because HHVM should be able to cache this in the same way, HHVM knows this whole structure is constant, and HHVM knows the indexes are all integers.
What I'm still not entirely happy about with this structure is this:
Consider this code
for ($n=0;$n<count($iv);++$n) if ($x > $iv[$n][1]) dosomething();
Will HHVM perform a type check on $iv[$n][1] on every loop iteration?
In my definition of $iv above, there is nothing that says the 2nd element of the inner array will be an integer.
How can I improve on this?
Can disabling the type checker be of any use? Does this only hide errors from the external type checker, or does it prevent HHVM from constantly doing type checks? (I'm thinking it's the first thing)
Perhaps if I could make my own user-defined type that would solve the problem?
<?hh
#I don't know what mechanisms for UDT's exist, so this code is made-up
CreateUDT foo = <string,int,int>;
$iv = ImmVector<foo> {
['uuid1',425,244],
['uuid2',658,836]
};
print_r($iv);
I found a reference to this at Hack Collections Literal Syntax (Vector<Foo>), but unfortunately it might not be available to use yet.
I'm a software engineer at Facebook working on HHVM.
This entire question reeks of premature optimization to me. Have you done profiling and determined that loading this array is actually a bottleneck for your app? (Not just microbenchmarks, but how it actually affects the performance, latency, RPS, etc of realistic pageloads.) And also isolated from other effects, e.g., if this array is a cache or some sort of precomputed data, you need to isolate the win of precomputing the data from the actual time to load it by caching it in various different ways.
In general, HHVM is very good at dealing with arrays, since they are so hot in nearly every codepath -- and in particular at constant arrays like this one. To your questions about how to inform it of the shape and types of things in the arrays, HHVM can figure that all out for itself, and is very good at doing so on constant arrays composed entirely of constants. (And the ways it thinks about arrays aren't quite the ways you think about arrays, so it can probably do a better job anyway!) Basically, unless profiling says this is actually a hotspot -- which I'm pretty skeptical of -- I wouldn't worry too much about it. A couple general notes to be aware of:
Measure every performance diff. Don't prematurely optimize -- use profiling to guide. The developer productivity lost by premature optimizations getting in the way can be lethal.
Get things out of toplevel ("pseudomains") as much as possible. A function which returns a static or constant array should be just fine, and will in general help HHVM optimize code even better.
Avoid references as much as possible, especially in this array if you care about performance so much.
You probably should look into repo authoritative mode, which can help HHVM optimize lots of things even more -- but in particular for this case, the more aggressive inlining that repo auth mode can do might be a win.
Edit, aside:
because then the entire data structure must be parsed as a string and converted into a data structure in memory for every request. APC has the same problem as far as i know
This is exactly what I mean by premature optimization: you're rejecting APC without even trying it, even if it might be a cleaner way of doing what you want. It turns out that, in most cases, HHVM actually can optimize away the serialization/deserialization of storing arrays in APC, particularly if they are constant arrays that are never modified. As above, HHVM is very good at optimizing lots of common patterns. Just write code that's clean, profile it, and fix the hotspots.
Okay I've solved my first question.
I don't have any global scope issues. My require is being done from inside function main(), so it's as if the code from lookuptable####.php is being inserted into function main().
HHVM docs: "If the include occurs inside a function..."
Basically, if you were to open lookuptable####.php it looks like the code is in global scope, but that's not the file being requested from HHVM; main.php is the one being requested, thus there is no code in global scope.
I think I've answered my 2nd question, it's currently at the bottom of my question. I'm not 100% convinced, but I'm pretty happy to move ahead and test it.

+[NSString stringWithContentsOfFile:] performance

I need to load files in iOS; right now I use +[NSString stringWithContentsOfFile:].
The files are mostly 500 KB to 5 MB.
I load an approximately 4 MB file, and Instruments and a stopwatch tell me it takes 1.5 seconds to load this file. In my opinion that's a bit slow; is there a way to get the string faster?
EDIT:
I tried some things and have now noticed that the creation of the NSString is my problem: it takes 97% of the time, not the actual loading from disk.
If you either know the encoding or can determine it (the API you use now is basic), you can just treat it as a char buffer (in a manner which is encoding aware).
I'd begin by opening it using memory mapped data (mmap, you can also approach this using NSData). madvise can be used to hint how you will access the file.
If memory-mapped I/O consumes too much memory for your use, you should drop down to incremental reads, like the C I/O facilities (fopen, fread, etc.). This will typically require more I/O events than memory-mapped data (and can be much slower, depending on how the data is accessed).
In both cases, you would treat the string as a C string -- don't simply convert the whole file to an NSString upon opening.
Foundation has a lot of tricks, so make sure this actually improves performance for your specific use case.
If those solutions are too low-level for your use, just consider using smaller files instead (dividing the existing files).

Importing and processing data from a CSV file in Delphi

I had a pre-interview task, which I completed, and the solution works; however, I was marked down and did not get an interview due to having used a TADODataset. I basically imported a CSV file which populated the dataset; the data had to be processed in a specific way, so I used filtering and sorting of the dataset to make sure that the data was ordered in the way I wanted it, and then I did the logic processing in a while loop. The feedback I received said that this was bad, as it would be very slow for large files.
My main question here is: if using an in-memory dataset is slow for processing large files, what would have been a better way to access the information from the CSV file? Should I have used string lists or something like that?
It really depends on how "big" the files are and the available resources (in this case RAM) for the task.
"The feedback that was received said that this was bad as it would be very slow for large files."
CSV files are usually used for moving data around (in most cases that I've encountered, files are ~1 MB+ up to ~10 MB, but that's not to say that others would not dump more data in CSV format) without worrying too much (if at all) about import/export, since the format is extremely simple.
Suppose you have an 80 MB CSV file; that's a file you want to process in chunks, otherwise (depending on your processing) you can eat hundreds of MB of RAM. In this case, what I would do is:
while dataToProcess do begin
  // step 1: read <X> lines from the file, where <X> is the max number of lines
  //         you read in one go; if there are fewer lines left to process
  //         (i.e. you're down to 50 lines and X is 100), then you read those
  // step 2: process the information
  // step 3: generate output, database inserts, etc.
end;
In the above case, you're not loading 80 MB of data into RAM, but only a few hundred KB, and the rest you use for processing, i.e. linked lists, dynamic insert queries (batch insert), etc.
"...however I was marked down and did not get an interview due to having used a TADODataset."
I'm not surprised; they were probably looking to see whether you're capable of creating algorithm(s) and providing simple solutions on the spot, but without using "ready-made" solutions.
They were probably expecting to see you use dynamic arrays and create one (or more) sorting algorithm(s).
"Should I have used String Lists or something like that?"
The response might have been the same; again, I think they wanted to see how you "work".
The interviewer was quite right.
The correct, scalable, and fastest solution for any medium-sized file upwards is to use an 'external sort'.
An 'External Sort' is a 2 stage process, the first stage being to split each file into manageable and sorted smaller files. The second stage is to merge these files back into a single sorted file which can then be processed line by line.
It is extremely efficient on any CSV file with more than, say, 200,000 lines. The amount of memory the process runs in can be controlled, and thus the danger of running out of memory can be eliminated.
I have implemented many such sort processes and in Delphi would recommend a combination of TStringList, TList and TQueue classes.
Good Luck

Parsing Very Large XML file with Ruby on Rails (1.4GB) -- Is there a better way than SAXParser?

Currently, I'm using LIBXML::SAXParser::Callbacks to parse a large XML file containing data for 140,000 products. I'm using a rake task to import the data for these products into my Rails app.
My last import took just under 10 hours to complete:
rake asi:import_products --trace 26815.23s user 1393.03s system 80% cpu 9:47:34.09 total
The problem with the current implementation is that the complex dependency structure in the XML means I need to keep track of the entire product node to know how to parse it properly.
Ideally, I'd like a way to process each product node by itself and have the ability to use XPath, but the file size restricts us from using a method that requires loading the entire XML file into memory. I cannot control the format or size of the original XML. I have, at most, 3 GB of memory I can use for the process.
Is there a better way than this?
Current Rake Task code:
Snippet of the XML file:
Can you fetch the whole file first? If so, then I'd suggest splitting the XML file into smaller chunks (say, 512 MB or so) so you could parse several chunks at a time (one per core), since I believe you have a modern CPU. Regarding the resulting invalid or malformed XML: just append or prepend the missing XML with simple string manipulation.
You can also try profiling your callback method. It's a big chunk of code, and I'm pretty sure there should be at least one bottleneck which could save you a few minutes.
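If it helps, a minimal sketch of that profiling step with the ruby-prof gem (run_import below is a hypothetical stand-in for whatever method kicks off your SAX parsing and import):
require 'ruby-prof'

RubyProf.start
run_import                      # hypothetical: your existing parse/import entry point
result = RubyProf.stop

# Flat report, sorted by self time, so the hottest callback methods surface first
RubyProf::FlatPrinter.new(result).print(STDOUT)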
