I need to load files in iOS; currently I use +[NSString stringWithContentsOfFile:].
The files are mostly 500 KB to 5 MB.
I loaded an approximately 4 MB file, and Instruments plus a stopwatch tell me it takes 1.5 seconds to load. In my opinion that's a bit slow; is there a way to get the string faster?
EDIT:
I tried a few things and have now noticed that creating the NSString is my problem: it takes 97% of the time, not the actual loading from disk.
If you know the encoding or can determine it, you can just treat the file as a char buffer (in an encoding-aware manner).
I'd begin by opening it as memory-mapped data (via mmap; you can also approach this through NSData). madvise can be used to hint at how you will access the file.
If memory-mapped I/O consumes too much memory for your use case, drop down to incremental reads using the C I/O facilities (fopen, fread, etc.). This typically requires more I/O events than memory-mapped data and can be much slower, depending on how the data is accessed.
In both cases, you would treat the string as a C string -- don't simply convert the whole file to an NSString upon opening.
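For illustration, here is a minimal sketch of the memory-mapped approach in plain C (the file path and the newline-counting work are placeholders; the idea is that you only build small NSString objects for the substrings you actually need, instead of one huge one up front):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "big.txt";            /* placeholder file name */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

    /* Map the file read-only; pages are faulted in on demand. */
    char *buf = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Hint that access will be sequential so the kernel reads ahead. */
    madvise(buf, (size_t)st.st_size, MADV_SEQUENTIAL);

    /* Work on the bytes directly (encoding-aware) instead of building one
       huge NSString up front -- here, just count lines as an example. */
    size_t lines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (buf[i] == '\n')
            lines++;
    printf("%zu lines\n", lines);

    munmap(buf, (size_t)st.st_size);
    close(fd);
    return 0;
}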
Foundation has a lot of tricks, so make sure this actually improves performance for your specific use case.
If those solutions are too low-level for your use, consider using smaller files instead (dividing the existing files).
Background:
I am working on 2D infinite world generation. It is tile-based, meaning my terrain is made entirely of squares. You can imagine it like 2D Minecraft (looking at the terrain from above).
I implemented a standard chunk system where the terrain is chopped into small 8x8-tile areas that get loaded and deleted as the player moves around the world. Everything mentioned so far works perfectly smoothly, without any hiccups or lag. I am using Lua and the Corona SDK.
The problem:
Since the player will be able to modify the terrain, I need a fast and efficient system for saving chunks to storage as the player moves on to new chunks, and for loading them back if they have been loaded previously.
This is where the problem occurs. The game needs to read from and save to files quite often, which causes noticeable lag. Making chunks bigger is not an option.
Solutions I tried that all still caused lag:
a) The first and obvious solution I implemented was to create a text file for each chunk, with tile names as strings. It looked something like this: x12y10.txt, and inside the file I just dumped all tile names in the order they need to be placed on screen: "Grass Grass Water Sand Sand Sand Grass Grass...". That worked, but loading the strings was slow, so I tried another solution: saving tiles as indexes.
b) Saving tiles as their indexes. I paired every tile with a number. Since numbers are shorter, they take less space and are faster to load. I gave each tile its own index: Grass -> id 1, Water -> id 2, Sand -> id 3, and so on. This way I only needed to save one or two characters instead of a full string per tile. My txt files now looked like this: "1 1 2 3 3 3 1 1...". This worked better, but still caused lag.
c) The next improvement was to how chunks are organized on disk. Instead of dumping all the chunks into a single folder, I made a folder for each x coordinate and put all chunks with that x value inside it.
So instead of this:
Folder with all chunks: x0y0.txt, x0y1.txt, x0y2.txt, x1y0.txt, x1y1.txt, x1y2.txt
Inside the folder with all chunks, I had this:
Folder x0: x0y0.txt, x0y1.txt, x0y2.txt
Folder x1: x1y0.txt, x1y1.txt, x1y2.txt
I am not sure how much this helps for a small number of chunks, but I am fairly sure there is an improvement once there are thousands of them.
Possible solutions?
I have some ideas for improvements, but I would like to hear your opinion on the solutions.
a) Saving terrain in binary files?
b) I have read about the Minecraft region format and really tried to understand how it works, but did not get it, since there is little information about it. So if anyone knows the system and could explain it to me, I would be really grateful.
c) Another faster file format?
d) Is making/accessing many folders slow? Is there a better alternative?
I really feel like this is a CS-101 question, but I cannot google up an answer right away, so here is a quick summary.
All files are just sequences of bytes. If we're talking about reading and writing raw bytes, no format will make 64 bytes appear in memory faster than another.
A text file is a sequence of bytes with slight limitations on their values (and the limitation only matters if you want standard text programs to display it). The string "11" from a text file (the bytes 00110001 00110001) won't be loaded any faster than a pair of unprintable bytes such as 10000010 00000110 from a "binary" file.
Structuring directories at the very least reduces the number of nodes the system has to check when opening the file you've requested. But the mechanisms underlying filesystems are very complex and affected by many factors. The general expectation is that frequent reading of even small files will be slow, and every file carries some bookkeeping overhead (system information to keep it tracked and ordered), so small files have a lower ratio of useful to auxiliary data. I know of at least one 2D project with a mutable map that made hard drives growl and grunt until they moved to bigger files, years ago.
You don't have to make the chunks bigger (that's a different thing), but you can write them into the same file.
Instead of a million files of 64 bytes each, you can have one big file (at one byte per tile, an 8x8 chunk is 64 bytes, so even a million chunks is only tens of megabytes; and a million chunks is a lot for a player to modify or walk around). If you unpack that data into tables, it will take up more space, but you don't have to decode the whole string, only the bytes you currently need. Yes, modifying a megabyte-sized string in Lua creates another megabyte-sized string, which is slow, but you don't have to do it on every change, or you can split the string into smaller ones and modify those. And only write to disk when needed. I/O buffering may even happen without your intervention, but again it is usually more helpful for big files.
Yes, a tile might eventually need more than a byte of information (though 2^8 possible states per tile is already a lot); the system stays the same.
The same thing is done for textures, because loading data in one big scoop is faster than hunting around for a tiny bit here and a tiny bit there. Indexing one long area of memory is also faster than chasing pointers around.
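To make the single-file idea concrete, here is a small sketch in C (C only because the byte arithmetic is language-neutral; the region size, file name and one-byte-per-tile layout are assumptions, not your actual format):

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout: a square region of REGION x REGION chunks, each chunk
   8x8 tiles at one byte per tile (64 bytes), stored in row-major order. */
#define REGION      32
#define CHUNK_BYTES (8 * 8)

/* Byte offset of chunk (cx, cy) inside the region file. */
static long chunk_offset(int cx, int cy)
{
    return (long)(cy * REGION + cx) * CHUNK_BYTES;
}

/* Read one chunk's 64 tile bytes without touching the rest of the file. */
static int read_chunk(FILE *f, int cx, int cy, uint8_t out[CHUNK_BYTES])
{
    if (fseek(f, chunk_offset(cx, cy), SEEK_SET) != 0) return -1;
    return fread(out, 1, CHUNK_BYTES, f) == CHUNK_BYTES ? 0 : -1;
}

/* Overwrite one chunk in place. */
static int write_chunk(FILE *f, int cx, int cy, const uint8_t in[CHUNK_BYTES])
{
    if (fseek(f, chunk_offset(cx, cy), SEEK_SET) != 0) return -1;
    return fwrite(in, 1, CHUNK_BYTES, f) == CHUNK_BYTES ? 0 : -1;
}

int main(void)
{
    /* Hypothetical pre-created region file covering 32x32 chunks. */
    FILE *f = fopen("region_x0_y0.bin", "r+b");
    if (!f) return 1;

    uint8_t tiles[CHUNK_BYTES];
    if (read_chunk(f, 3, 5, tiles) == 0) {
        tiles[0] = 2;                 /* e.g. set the first tile to Water (id 2) */
        write_chunk(f, 3, 5, tiles);
    }
    fclose(f);
    return 0;
}

The point is one seek per chunk instead of one file open per chunk; grouping chunks into bigger region files just changes the offset arithmetic.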
On top of that, you can try to read/write fewer bytes than you hold in memory, for example by compressing the data.
In Minecraft, chunks are not stored unless they have been visited/modified; otherwise they are generated on demand.
That leaves you with a system where only blocks that have been modified by the player need to be stored, with the unmodified areas being regenerated from the same random seed each time.
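A rough sketch of that "regenerate unless modified" idea, in C for concreteness (the seed-mixing function, tile ids and file naming are illustrative assumptions, not Minecraft's actual scheme):

#include <stdint.h>
#include <stdio.h>

#define CHUNK_TILES 64   /* 8x8 tiles */

/* Deterministic per-chunk seed: the same world seed and coordinates always
   reproduce the same chunk. The mixing constants are arbitrary. */
static uint64_t chunk_seed(uint64_t world_seed, int cx, int cy)
{
    uint64_t h = world_seed;
    h ^= (uint64_t)(uint32_t)cx * 0x9E3779B97F4A7C15ull;
    h ^= (uint64_t)(uint32_t)cy * 0xC2B2AE3D27D4EB4Full;
    h ^= h >> 33;
    return h;
}

/* Toy generator: ids 1..3 stand for Grass/Water/Sand as in the question. */
static void generate_chunk(uint64_t seed, uint8_t tiles[CHUNK_TILES])
{
    for (int i = 0; i < CHUNK_TILES; i++) {
        seed = seed * 6364136223846793005ull + 1442695040888963407ull; /* LCG step */
        tiles[i] = (uint8_t)((seed >> 60) % 3 + 1);
    }
}

/* Load a chunk: use the saved copy only if the player has modified it. */
static void load_chunk(uint64_t world_seed, int cx, int cy,
                       uint8_t tiles[CHUNK_TILES])
{
    char name[64];
    snprintf(name, sizeof name, "x%dy%d.bin", cx, cy);
    FILE *f = fopen(name, "rb");
    if (f) {                                      /* modified chunk on disk */
        if (fread(tiles, 1, CHUNK_TILES, f) != CHUNK_TILES)
            generate_chunk(chunk_seed(world_seed, cx, cy), tiles);
        fclose(f);
    } else {                                      /* untouched: regenerate */
        generate_chunk(chunk_seed(world_seed, cx, cy), tiles);
    }
}

int main(void)
{
    uint8_t tiles[CHUNK_TILES];
    load_chunk(12345u, 3, 5, tiles);   /* hypothetical world seed and coordinates */
    printf("tile 0 has id %u\n", (unsigned)tiles[0]);
    return 0;
}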
Create a hierarchy of modifications: a chunk is an 8x8 block of tiles, so create a super-chunk that is 8x8 chunks, and only look for a file if anything in that 8x8 super-chunk has been modified.
Possibly store the whole super-chunk in one file, which would limit the number of files (more files slows the system down and also uses disk space inefficiently).
If you have any spare time/space, perhaps keep a cache of the chunks near the player and pre-load the modified areas that are being approached. This would limit the visible lag.
I have to embed a large text file in the limited internal memory of an MCU. The MCU will use the content of the text file for some purpose later.
The memory limitation does not allow me to embed the file content directly in my code (suppose I use a character array to store it), but if I compress the content (using a lightweight algorithm like zip or gzip), then everything fits.
Suppose that the MCU uses a getBytes(i, len) function to read the content of my array (where i is the index of the first required byte and len is the length of the data to be read).
The problem is that once I compress the content and store it on the device (in my character array), I can't use the getBytes function anymore to fetch the target data. If I could write a wrapper on top of getBytes that maps the compressed content to the requested content, my problem would be solved.
I have no processing limitation on the MCU; only the amount of memory is limited. As far as I know, access to the content of a zip-compressed file is sequential, so I don't know whether it is possible to do this in an acceptable manner using C or C++ in such an environment.
It is definitely possible to do this in a simple and efficient manner.
However, it's better to use piecewise compression (at the expense of compression ratio) instead of compressing/decompressing the entire file at once, otherwise you need to store the entire decompressed file in RAM.
With a small piece, the compression ratio of a strong algorithm will not be much different from a relatively weak one. So I recommend using a simple compression algorithm.
Disk compression algorithms are best suited for such purposes as they are designed to compress/decompress blocks.
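As a sketch of what the wrapper could look like in C (the 1 KB block size, the offset table and decompress_block() are assumptions: the blob and table come from your build step, decompress_block() stands in for whatever block decoder you actually pick, and the wrapper here copies into a caller-supplied buffer rather than returning one):

#include <stdint.h>
#include <string.h>

/* Assumed layout: the text is compressed at build time into independent
   1 KB blocks; block_offsets[] has one entry per block plus an end sentinel,
   so block b occupies compressed_data[block_offsets[b] .. block_offsets[b+1]). */
#define BLOCK_SIZE 1024u

extern const uint8_t  compressed_data[];   /* compressed blob in flash */
extern const uint32_t block_offsets[];     /* per-block start offsets + sentinel */
void decompress_block(const uint8_t *src, uint32_t src_len, uint8_t *dst);

static uint8_t  cache[BLOCK_SIZE];         /* one decompressed block kept in RAM */
static uint32_t cached_block = 0xFFFFFFFFu;

static void load_block(uint32_t b)
{
    if (b == cached_block)
        return;                            /* already decompressed */
    decompress_block(compressed_data + block_offsets[b],
                     block_offsets[b + 1] - block_offsets[b],
                     cache);
    cached_block = b;
}

/* Same contract as the original getBytes(i, len), but reading through the
   compressed image one block at a time. */
void getBytes(uint32_t i, uint32_t len, uint8_t *out)
{
    while (len > 0) {
        uint32_t b   = i / BLOCK_SIZE;
        uint32_t off = i % BLOCK_SIZE;
        uint32_t n   = BLOCK_SIZE - off;   /* bytes available in this block */
        if (n > len)
            n = len;
        load_block(b);
        memcpy(out, cache + off, n);
        out += n; i += n; len -= n;
    }
}

Caching the most recently decompressed block keeps sequential reads cheap; only BLOCK_SIZE bytes of RAM are ever needed for decompressed data.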
I am using Rails 3.1.1 and deploying on Heroku. I am using open-uri and Nokogiri.
I am trying to troubleshoot a memory leak (?) that occurs while I fetch and parse an XML file. The XML feed I am fetching and trying to parse is 32 MB.
I am using the following code for it:
require 'open-uri'
open_uri_fetched = open(feed.fetch_url)
xml_list = Nokogiri::HTML(open_uri_fetched)
where feed.fetch_url is the URL of an external XML file.
It seems that while parsing with Nokogiri (the last line in my code), memory usage explodes to 540 MB and continues to increase. That doesn't seem logical, since the XML file is only 32 MB.
I have looked all over for ways to analyze this better (e.g. ruby/ruby on rails memory leak detection) but I can't understand how to use any of them. MemoryLogic seems simple enough but the installation instructions seem to lack some info...
So please help me either determine whether the code above should really use that much memory, or give me (super simple) instructions on how to find the memory leak.
Thanks in advance!
Parsing a large XML file and turning it into a document tree will in general create an in-memory representation that is far larger than the XML data itself. Consider for example
<foo attr="b" />
which is only 16 bytes long (assuming a single-byte character encoding). The in-memory representation of this document will include an object for the element itself, probably an (empty) collection of children, and a collection of attributes containing at least one entry. The element itself has properties like its name and namespace, pointers to its parent document, and so on. The data structures for each of those things are probably going to be over 16 bytes on their own, even before they're wrapped in Ruby objects by Nokogiri (each of which has a memory footprint that is almost certainly >= 16 bytes).
If you're parsing large XML files, you almost certainly want to use an event-driven (SAX) parser that responds to elements as they are encountered in the document, rather than building a tree representation of the entire document and then working on that.
Are you sure you aren't running up against the upper limits of what Heroku allows for 'long running tasks'?
I've timed out and had things just fail on me all the time due to the restrictions Heroku puts on free-tier users.
I mean, can you replicate this in your dev environment? How long does it take on your machine to do what you want?
EDIT 1:
What is this, by the way?
open_uri_fetched = open(feed.fetch_url)
What URL is it fetching? Does it bork there or on the actual Nokogiri call? How long does the fetch itself take, anyway?
I had a pre-interview task, which I completed, and the solution works; however, I was marked down and did not get an interview because I used a TADODataset. I basically imported a CSV file, which populated the dataset; the data had to be processed in a specific way, so I used filtering and sorting of the dataset to make sure the data was ordered the way I wanted, and then I did the logic processing in a while loop. The feedback said this was bad, as it would be very slow for large files.
My main question is: if an in-memory dataset is slow for processing large files, what would have been a better way to access the information from the CSV file? Should I have used string lists or something like that?
It really depends on how "big" the files are and on the resources (in this case RAM) available for the task.
"The feedback that was received said that this was bad as it would be very slow for large files."
CSV files are usually used for moving data around (in most cases I've encountered, files are ~1 MB up to ~10 MB, but that's not to say others won't dump more data in CSV format) without worrying much, if at all, about import/export, since the format is extremely simple.
Suppose you have an 80 MB CSV file. That's a file you want to process in chunks; otherwise (depending on your processing) you can eat hundreds of MB of RAM. In that case, what I would do is:
var
  Reader: TStreamReader;
  Batch: TStringList;
begin
  Reader := TStreamReader.Create('data.csv');
  Batch := TStringList.Create;
  try
    while not Reader.EndOfStream do
    begin
      Batch.Clear;  // step 1: read up to 100 lines in one go (fewer if the file is nearly done)
      while (Batch.Count < 100) and not Reader.EndOfStream do
        Batch.Add(Reader.ReadLine);
      // step 2: process the information in Batch
      // step 3: generate output, database inserts, etc.
    end;
  finally
    Batch.Free;
    Reader.Free;
  end;
end;
In the above case, you're not loading 80 MB of data into RAM, only a few hundred KB at a time, and the rest of the memory is available for processing, i.e. linked lists, dynamic insert queries (batch inserts), etc.
"...however I was marked down and did not get an interview due to having used a TADODataset."
I'm not surprised; they were probably looking to see whether you're capable of creating algorithms and providing simple solutions on the spot, without reaching for "ready-made" components.
They were probably expecting to see you use dynamic arrays and write one (or more) sorting algorithms.
"Should I have used String Lists or something like that?"
The response might have been the same; again, I think they wanted to see how you "work".
The interviewer was quite right.
The correct, scalable, and fastest solution for anything from a medium-sized file upwards is to use an 'external sort'.
An 'external sort' is a two-stage process: the first stage splits the file into manageable, sorted smaller files; the second stage merges those files back into a single sorted file, which can then be processed line by line.
It is extremely efficient on any CSV file with, say, over 200,000 lines. The amount of memory the process runs in can be controlled, so the danger of running out of memory can be eliminated.
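For concreteness, here is a minimal sketch of the two stages in C (the same structure maps onto the Delphi classes mentioned below); the run size, temporary file names and plain strcmp ordering are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define RUN_LINES 100000   /* lines held in memory per sorted run */
#define MAX_LINE  1024
#define MAX_RUNS  64       /* sketch assumes the input fits in this many runs */

static int cmp(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/* Stage 1: split the input into sorted runs run0.txt, run1.txt, ... */
static int make_runs(const char *input)
{
    FILE *in = fopen(input, "r");
    char **lines = malloc(RUN_LINES * sizeof *lines);
    char buf[MAX_LINE];
    int runs = 0, n = 0, eof = 0;

    if (!in || !lines) exit(1);
    while (!eof) {
        eof = fgets(buf, sizeof buf, in) == NULL;
        if (!eof) lines[n++] = strdup(buf);
        if ((eof && n > 0) || n == RUN_LINES) {   /* flush a full (or final) run */
            char name[32];
            FILE *out;
            qsort(lines, n, sizeof *lines, cmp);
            snprintf(name, sizeof name, "run%d.txt", runs++);
            out = fopen(name, "w");
            for (int i = 0; i < n; i++) { fputs(lines[i], out); free(lines[i]); }
            fclose(out);
            n = 0;
        }
    }
    fclose(in); free(lines);
    return runs;
}

/* Stage 2: merge all runs into one sorted output, line by line. */
static void merge_runs(int runs, const char *output)
{
    FILE *f[MAX_RUNS], *out = fopen(output, "w");
    char head[MAX_RUNS][MAX_LINE];
    int alive[MAX_RUNS];

    for (int r = 0; r < runs; r++) {
        char name[32];
        snprintf(name, sizeof name, "run%d.txt", r);
        f[r] = fopen(name, "r");
        alive[r] = f[r] && fgets(head[r], MAX_LINE, f[r]) != NULL;
    }
    for (;;) {
        int best = -1;
        for (int r = 0; r < runs; r++)    /* pick the smallest pending line */
            if (alive[r] && (best < 0 || strcmp(head[r], head[best]) < 0))
                best = r;
        if (best < 0) break;              /* every run exhausted */
        fputs(head[best], out);
        alive[best] = fgets(head[best], MAX_LINE, f[best]) != NULL;
    }
    for (int r = 0; r < runs; r++) if (f[r]) fclose(f[r]);
    fclose(out);
}

int main(void)
{
    merge_runs(make_runs("input.csv"), "sorted.csv");
    return 0;
}

Only RUN_LINES lines are ever held in memory at once, which is exactly how the memory ceiling is kept under control.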
I have implemented many such sort processes and in Delphi would recommend a combination of TStringList, TList and TQueue classes.
Good Luck
I need to store my data in memory. The data type of my data is string. I want to minimize the memory usage. I guess I have to change the strings into bytes. Am I right? If I convert a string to bytes, does that mean I have to convert the string to a TMemoryStream?
If you really want to convert it, this code will get it done:
var
  BinarySize: Integer;
  InputString: string;
  StringAsBytes: array of Byte;
begin
  // Size of the string's characters plus the terminating #0, in bytes
  BinarySize := (Length(InputString) + 1) * SizeOf(Char);
  SetLength(StringAsBytes, BinarySize);
  // Copy the raw character data (including the terminator) into the byte array
  Move(InputString[1], StringAsBytes[0], BinarySize);
end;
But as already stated, this will not save you memory. The amount used will be practically the same; you will gain nothing from this alone. If you have too many strings, take a different approach, like one of these:
Use a dictionary and store each distinct string only once.
Only hold a portion of the strings in memory (some sort of cache); keep the others on the hard drive and use streams to load them.
If you have very large strings, consider compressing them.
If you are reading from a file and your target is binary data, skip the string in the middle and read the source directly into a byte buffer.
It is hard to give further help without knowing more about the problem.
EDIT:
If you really want a minimal memory footprint and can live with slightly lower speed (but still very fast), you can use a suffix trie, a B-tree, or even a simple binary tree. They can work directly from the hard drive and can be very fast for searching. If you then cache a subset of the data in RAM, you get the optimal trade-off between memory and speed.
Anyway, given the amount of data you claim to have, it seems no memory optimization is needed at all. 22 MB of RAM is hardly an issue and not worth optimizing.
Are you certain this is an optimization that is needed?
2000 lines that are 10 characters long is only 20000 characters.
In most environments, that's tiny. Most machines have considerably more RAM than that. Most disks are considerably larger than that. And, usually, sending and receiving that much information is trivial over the web.
Perhaps your situation is unique. Maybe you have a large number of 20,000-character data sets, or very slow web access over which to transmit this data, etc. But I'd encourage you to consider whether you aren't perhaps trying to optimize something that, even if you implement it very successfully, won't significantly change your application's performance in the real world.
Make your storage type UTF8String. It can simply be assigned from a UnicodeString, and the conversion should be safe.