Zip stream implementation with C in embedded devices

I have to embed a large text file in the limited internal memory of an MCU. The MCU will use the content of the text file for some purposes later.
The memory limitation does not allow me to embed the file content directly in my code (suppose that I use a character array to store the file content), but if I compress the content of the file (using a lightweight algorithm like zip or gzip) then everything would be OK.
Suppose that the MCU uses a getBytes(i, len) function to read the content of my array (where i is the index of the first required byte and len is the length of the data to be read).
The problem is that once I compress the content and store it on the device (in my character array), I can no longer use the getBytes function to fetch the target data. If I could write a wrapper on top of getBytes that maps requests for uncompressed content onto the compressed data, my problem would be solved.
I have no processing limitation on the MCU, but the amount of memory is limited, and as far as I know access to the content of a zip-compressed file is sequential. Is it possible to do this in an acceptable manner using C or C++ in such an environment?

It is definitely possible to do this in a simple and efficient manner.
However, it is better to use piecewise (block) compression, at the expense of some compression ratio, instead of compressing/decompressing the entire file at once; otherwise you would need to keep the entire decompressed file in RAM.
With small pieces, the compression ratio of a strong algorithm will not differ much from that of a relatively weak one, so I recommend using a simple compression algorithm.
Disk compression algorithms are best suited for such purposes as they are designed to compress/decompress blocks.
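A minimal sketch of such a wrapper, assuming the file is split into fixed-size blocks that are each compressed separately with zlib's compress() and stored in flash together with a small offset table. The block size, table names, out-parameter and return codes here are illustrative, not part of the original question:

    /* Sketch: serve getBytes(i, len) from block-compressed data.
     * Assumes each BLOCK_SIZE-byte block of the original file was
     * compressed separately with zlib's compress() and that
     * block_offset[]/block_clen[] describe where each compressed
     * block lives inside compressed_data[]. All names are illustrative. */
    #include <string.h>
    #include <zlib.h>

    #define BLOCK_SIZE 1024u            /* uncompressed bytes per block  */

    extern const unsigned char compressed_data[];   /* stored in flash   */
    extern const unsigned long block_offset[];      /* start of block k  */
    extern const unsigned long block_clen[];        /* compressed length */

    static unsigned char cache[BLOCK_SIZE];         /* one decompressed block */
    static long cached_block = -1;

    static int load_block(unsigned long k)
    {
        if ((long)k == cached_block)
            return 0;                               /* already in RAM */
        uLongf dlen = BLOCK_SIZE;
        if (uncompress(cache, &dlen,
                       compressed_data + block_offset[k],
                       block_clen[k]) != Z_OK)
            return -1;
        cached_block = (long)k;
        return 0;
    }

    /* Same role as the original getBytes, but backed by compressed data. */
    int getBytes(unsigned long i, unsigned long len, unsigned char *out)
    {
        while (len > 0) {
            unsigned long k   = i / BLOCK_SIZE;
            unsigned long off = i % BLOCK_SIZE;
            unsigned long n   = BLOCK_SIZE - off;
            if (n > len)
                n = len;
            if (load_block(k) != 0)
                return -1;
            memcpy(out, cache + off, n);
            out += n; i += n; len -= n;
        }
        return 0;
    }

Only one decompressed block ever lives in RAM, and a request costs at most one block decompression per block it touches; smaller blocks cut RAM use further at the price of compression ratio.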

Related

Are there any good DEFLATE-like compression algorithms that are additive and immutable?

I need to push more items onto a stack-like structure and compress them additively, so that adding new data gives the expected compression gain, while new chunks are still stored as compressed data without altering any of the past data chunks.
So in other words, it must preserve the additive property over successive compressed chunks. If f() is the 'adding another chunk' function, I need the following to hold for all chunks 'x':
Sure. Deflate does this, if you just keep it running and terminate each chunk at a byte boundary, e.g. with Z_SYNC_FLUSH, so that it can be written to file up to and including that chunk.
So long as your chunks are large enough, you will get the same compression gain.
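A minimal sketch of that approach with zlib: the deflate stream is kept open across chunks and each chunk is flushed to a byte boundary, so the output written so far is always a valid prefix of the stream. Buffer sizes and function names are illustrative:

    /* Sketch: appending chunks to one growing deflate stream.
     * Each call compresses one more chunk and flushes it to a byte
     * boundary with Z_SYNC_FLUSH. Call deflate(&strm, Z_FINISH) and
     * deflateEnd() once the last chunk has been added.
     * Error handling is abbreviated. */
    #include <stdio.h>
    #include <zlib.h>

    static z_stream strm;               /* kept alive across chunks */
    static unsigned char out[16384];

    void stream_init(void)
    {
        strm.zalloc = Z_NULL;
        strm.zfree  = Z_NULL;
        strm.opaque = Z_NULL;
        deflateInit(&strm, Z_DEFAULT_COMPRESSION);
    }

    void add_chunk(FILE *f, const unsigned char *buf, unsigned len)
    {
        strm.next_in  = (unsigned char *)buf;
        strm.avail_in = len;
        do {
            strm.next_out  = out;
            strm.avail_out = sizeof out;
            deflate(&strm, Z_SYNC_FLUSH);      /* end chunk on a byte boundary */
            fwrite(out, 1, sizeof out - strm.avail_out, f);
        } while (strm.avail_out == 0);         /* repeat if out[] filled up */
    }

Because the stream is never reset, back-references can reach into earlier chunks, which is where the compression gain over compressing each chunk independently comes from.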

Reading file as stream of strings in Dart: how many events will be emitted?

Standard way to open a file in Dart as a stream is to use file.openRead() which returns a Stream<List<int>>.
The next standard step is to transform this stream with the utf8.decoder StreamTransformer, which returns a Stream<String>.
I noticed that with the files I've tried, the resulting stream only emits a single event with the whole file content represented as one string. But I feel this should not be the general case, since otherwise the API wouldn't need to return a stream of strings; a Future<String> would suffice.
Could you explain how I can observe this stream emitting more than one event? Does this depend on the file size / disk I/O rate / some buffer size?
It depends on the file size and the buffer size, and on how the file operations are implemented.
If you read a large file, you will very likely get multiple events of a limited size. The UTF-8 decoder decodes chunks eagerly, so you should get roughly the same number of chunks after decoding. It might carry a few bytes across chunk boundaries, but the rest of the bytes are decoded as soon as possible.
Checking on my local machine, the buffer size seems to be 65536 bytes. Reading a file larger than that gives me multiple chunks.

Storing and loading 2d infinite procedurally generated tile based world

Background:
I am working on 2d infinite world generation. It is tile-based, meaning my terrain is made entirely of squares. You can imagine it like 2d Minecraft (looking at the terrain from above).
I implemented a standard chunk system where the terrain gets chopped into small 8x8-tile areas that get loaded and deleted as the player moves around the world. Everything mentioned so far works perfectly smoothly, without any hiccups or lag. I am using Lua and Corona SDK.
The problem:
Since the player will be able to modify the terrain, I need a fast and efficient system for saving chunks to files once the player moves on to new ones, and for loading those chunks back from files if they have been visited previously.
This is where the problem takes place. The game needs to read from and save to files quite often, which causes noticeable lag. Making chunks bigger is not an option.
Solutions I tried but all caused lag:
a) The first and obvious solution I implemented was to just create a text file for each chunk with tile names as strings. The files looked something like this: x12y10.txt, and inside each file I just dumped all tile names in the order they need to be placed on screen: "Grass Grass Water Sand Sand Sand Grass Grass...". That worked, but loading strings was slow, so I tried another solution: saving tiles as indexes.
b) Saving tiles as their indexes. I paired every tile with a number. Since numbers are shorter, they take less memory and are faster to load. I gave each tile its own index: Grass -> id 1, Water -> id 2, Sand -> id 3 and so on. This way I only needed to save 1 or 2 chars instead of a full string per tile. My txt files now looked like this: "1 1 2 3 3 3 1 1...". This worked better but still caused lag.
c) Next improvement I did was with how chunks are organized in memory. Instead of dumping all the chunks in a single folder, for each x coordinate I made a folder and put all chunks that have that x value in there.
So instead of this:
Folder with all chunks: x0y0.txt, x0y1.txt, x0y2.txt, x1y0.txt, x1y1.txt, x1y2.txt
Inside folder with all chunks I had this:
Folder x0: x0y0.txt, x0y1.txt, x0y2.txt
Folder x1: x1y0.txt, x1y1.txt, x1y2.txt
I am not sure how much this helps for a small number of chunks, but I am pretty sure that for thousands of chunks the improvement is there.
Possible solutions?
I have some ideas for improvements, but I would like to hear your opinion on the solutions.
a) Saving terrain in binary files?
b) I have read about Minecraft region format, really tried to understand how it works, but did not get it since there is little information about it. So if anyone knows it and could explain their system to me, I would be really grateful.
c) Another faster file format?
d) Is making/accessing many folders slow? Is there a better alternative?
I really feel like this is a CS-101 question, but I cannot google up an answer right away, so here is a quick summary.
All files are just sequences of bytes. If we're talking about reading and writing raw bytes, no format will make 64 bytes appear in memory faster than another.
A text file is a sequence of bytes with slight limitations on their values (well, the limitation only matters if you want standard text programs to display it). The string "11" from a text file (two ASCII bytes, 00110001 00110001) won't load any faster or slower than two arbitrary "unprintable" bytes from a "binary" file.
Structuring directories at the very least reduces the number of nodes the system checks when looking up the file you've asked it to open. But the mechanisms underlying filesystems are very complex and affected by a lot of factors. The general expectation is that frequent reading, even of small files, will be slow. All files also carry some bookkeeping overhead (system info to keep them tracked and ordered), so small files have a lower ratio of useful to auxiliary data. I know of at least one 2d project with a mutable map that was making HDDs growl and grunt before they moved onto bigger files years ago.
You don't have to make chunks bigger, that's a different thing, but you can write them into the same file.
Instead of a huge number of separate 64-byte files you can have one big file (assuming you use a byte per tile). A million chunks is a lot for a player to modify or walk around anyway. If you unpack that data into tables it will take up more space, but you don't have to decode the whole string, only the currently needed bytes. Yes, modifying a megabyte string in Lua will create another megabyte string, which is slow, but you don't have to do it every time, or you can split the string into smaller ones and modify those, and only write when needed. I/O buffering may even happen without your intervention, but again, it is usually most helpful for big files.
Yes, there may be more than a byte of info per tile (though 2^8 possible states per tile is already a lot), but the system stays the same.
The same thing is done for textures, because loading data in one big scoop is faster than searching around for a tiny bit here and there. Indexing a single long area of memory is also faster than chasing pointers around.
On top of that, you can try to read/write fewer bytes than you keep in memory, for example by compressing the data.
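A minimal sketch of that single-file, fixed-record layout, written in C for concreteness even though the question uses Lua/Corona; the world width, non-negative coordinates, file handling and one-byte-per-tile assumption are illustrative:

    /* Sketch: all chunks in one file, one byte per tile, fixed records.
     * Chunk (cx, cy) lives at offset (cy * WORLD_W + cx) * CHUNK_TILES,
     * so loading or saving a chunk is a single seek plus a 64-byte
     * read/write. WORLD_W and the coordinate scheme are made up. */
    #include <stdio.h>

    #define CHUNK_TILES 64              /* 8 x 8 tiles, one byte each */
    #define WORLD_W     1024            /* chunks per row in the file */

    static long chunk_offset(int cx, int cy)
    {
        return (long)(cy * WORLD_W + cx) * CHUNK_TILES;
    }

    int load_chunk(FILE *f, int cx, int cy, unsigned char tiles[CHUNK_TILES])
    {
        if (fseek(f, chunk_offset(cx, cy), SEEK_SET) != 0)
            return -1;
        return fread(tiles, 1, CHUNK_TILES, f) == CHUNK_TILES ? 0 : -1;
    }

    int save_chunk(FILE *f, int cx, int cy, const unsigned char tiles[CHUNK_TILES])
    {
        if (fseek(f, chunk_offset(cx, cy), SEEK_SET) != 0)
            return -1;
        return fwrite(tiles, 1, CHUNK_TILES, f) == CHUNK_TILES ? 0 : -1;
    }

With fixed-size records the chunk's position in the file is computed, not searched for, so loading or saving a chunk is one seek plus one small read/write regardless of how many chunks exist.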
In Minecraft, chunks are not stored unless they have been visited/modified; otherwise they are generated.
That would leave you with a system where only blocks which have been modified by the player need to be stored, with the unmodified areas being regenerated from the same random seed each time.
Create a hierarchy of modifications: a chunk is an 8x8 block of tiles; create a super-chunk which is 8x8 chunks, and only look for a file if any part of that super-chunk has been modified.
Possibly store all of the super-chunk in one file, which would limit the number of files (adding more files does decrease the speed of the system, and also uses space on the system inefficiently).
If you have any spare time/space, perhaps keep a cache of the chunks near the player, and pre-load the modified areas that are being approached. This would limit the visible lag.

+ [NSString stringWithContentsOfFile:] performance

I need to load files on iOS; right now I use +[NSString stringWithContentsOfFile:].
The files are mostly 500 KB to 5 MB.
I load an approx. 4 MB file, and Instruments and a stopwatch tell me it needs 1.5 seconds to load this file. In my opinion that's a bit slow; is there a way to get the string faster?
EDIT:
I tried some things and have now noticed that the creation of the NSString is my problem: it takes 97% of the time, not the actual loading from disk.
If you either know the encoding or can determine it (the API you use now is basic), you can just treat it as a char buffer (in a manner which is encoding aware).
I'd begin by opening it using memory mapped data (mmap, you can also approach this using NSData). madvise can be used to hint how you will access the file.
If memory-mapped I/O consumes too much memory for your use, you should drop down to incremental reads using the C I/O facilities -- fopen, fread, etc. This will typically require more I/O events than memory-mapped data (and can be much slower, depending on how the data is accessed).
In both cases, you would treat the string as a C string -- don't simply convert the whole file to an NSString upon opening.
Foundation has a lot of tricks, so make sure this actually improves performance for your specific use case.
If those solutions are too low-level for your use, consider using smaller files instead (dividing the existing files).
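A minimal sketch of the memory-mapped route using the underlying POSIX calls directly (you can get an equivalent mapping through NSData's mapped reading options); the file name, advice flag and the line-counting loop are illustrative:

    /* Sketch: map a file read-only and scan it as a plain char buffer.
     * madvise() only hints at the access pattern; nothing here converts
     * the bytes to an NSString. Error handling is abbreviated. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("big.txt", O_RDONLY);
        if (fd < 0) return 1;

        struct stat st;
        fstat(fd, &st);

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) return 1;
        madvise(p, st.st_size, MADV_SEQUENTIAL);   /* hint: sequential scan */

        /* ... parse p[0 .. st.st_size - 1] directly as a char buffer ... */
        size_t newlines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (p[i] == '\n') newlines++;
        printf("%zu lines\n", newlines);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }

The point is that nothing forces you to materialize an NSString for the whole file; you scan the mapped bytes and only convert the small pieces you actually need.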

Buffering data for delimiter separated blocks

There is a question I have been wondering about for ages and I was hoping someone could give me an answer to rest my mind.
Let's assume that I have an input stream (like a file/socket/pipe) and want to parse the incoming data. Let's assume that each block of incoming data is split by a newline, like most common internet protocols. This application could just as well be parsing html, xml or any other smart data structure. The point is that the data is split into logical blocks by a delimiter rather than a fixed length. How can I buffer the data to wait for the delimiter to appear?
The answer seems simple enough: just have a large enough byte/char array to fit the entire thing.
But what if the delimiter comes after the buffer is full? This is actually a question about how to fit a dynamic block of data in a fixed size block. I can only really think of a few alternatives:
Increase the buffer size when needed. This may require heavy memory reallocation, and may lead to resource exhaustion from specially crafted streams (or perhaps even denial of service in the case of sockets, where we want to protect ourselves against exhaustion attacks by dropping connections that try to exhaust resources... and an attacker then starts sending fake, oversized packets to trigger that protection).
Start overwriting old data by using a circular buffer. Perhaps not an ideal method since the logical block would become incomplete.
Dump new data when the buffer is full. However, this way the delimiter will never be found, so this choice is obviously not a good option.
Just make the fixed-size buffer damn large and assume all incoming logical data blocks are within its bounds... and if it ever fills, just interpret the full buffer as a logical block...
In either case I feel we must assume that the logical blocks will never exceed a certain size...
Any thoughts on this topic? Obviously there must be a way, since higher-level languages offer some sort of buffering mechanism with their readLine() stream methods.
Is there any "best way" to solve this, or is there always a tradeoff? I really appreciate all thoughts and ideas on this topic, since this question has been haunting me every time I have needed to write a parser of some sort.
There are normally two techniques for this:
1) What I think readline uses - if the buffer fills, return the data without the delimiter on the end.
2) When the buffer fills, remember that it filled, keep reading until you get the delimiter, and report an error (or truncate the record at the buffer size).
Options (2) and (3) are out, as you are losing data in both cases. Option (4) of a huge fixed-size buffer would not solve the problem, as it is just not possible to know what size is large enough. Is it all the physical memory + swap space + the free space available on all disks everywhere in the known universe?
Resizing the buffer looks like the best solution. Say, realloc to twice the size and continue writing. There is always the chance of a specially constructed stream, like a DoS, that tries to bring down the system. My first thought was to set an arbitrarily large size as the max_size for the buffer. However, if we could do that, we could just set that as the size of a large fixed buffer. So resizing the buffer looks like the best option to me.
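A minimal sketch of that growing-buffer approach in C, with a hard cap standing in for the max_size guard; the cap value, the byte-at-a-time read (kept only for brevity) and the error handling are illustrative:

    /* Sketch: read from fd until '\n', growing the buffer by doubling.
     * A hard cap (MAX_LINE) guards against oversized/hostile input.
     * Reading one byte per read() keeps the sketch short; a real
     * parser would buffer its reads. */
    #include <stdlib.h>
    #include <unistd.h>

    #define MAX_LINE (1 << 20)          /* refuse lines longer than 1 MiB */

    /* Returns a malloc'd NUL-terminated line (without '\n'), or NULL. */
    char *read_line(int fd)
    {
        size_t cap = 128, len = 0;
        char *buf = malloc(cap);
        if (!buf) return NULL;

        for (;;) {
            char c;
            ssize_t n = read(fd, &c, 1);
            if (n <= 0 || c == '\n')
                break;
            if (len + 1 >= cap) {
                if (cap >= MAX_LINE) { free(buf); return NULL; }
                cap *= 2;                       /* grow by doubling */
                char *tmp = realloc(buf, cap);
                if (!tmp) { free(buf); return NULL; }
                buf = tmp;
            }
            buf[len++] = c;
        }
        buf[len] = '\0';
        return buf;
    }

Doubling keeps the number of reallocations logarithmic in the final record length, and the cap turns a hostile oversized record into an error instead of memory exhaustion.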
If neither the protocol nor you define an upper bound for the length of each block, then I don't see how you can prevent memory-exhaustion edge cases.
Assuming that there is an upper bound, using a fixed-size block seems like a good approach for reasonably sized limits.
If the limits are high enough that a single fixed buffer would be inefficient, then I would suggest using a data structure that is implemented internally as a linked list of fixed-size buffers.
Why do you have to wait to start processing?
Generally, alternative 4 is sound. It doesn't, however, require an "assumption", but rather a definition. You simply declare that blocks are smaller than 8K and be done with it. It's not difficult to do.
Further, there's alternative 5: Start processing partial buffers. This works unless you have designed a truly pathological protocol that sends critical data at the very end of the block.
HTML, XML, JSON/YAML, etc., can all be parsed incrementally. You don't require a delimiter to do useful processing.
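A minimal sketch of that incremental style in C: complete records are handed off as soon as they arrive, only the unfinished tail is kept, and a record longer than the buffer degrades into alternative 4/5 instead of failing. The buffer size and the handle_line() callback are illustrative:

    /* Sketch: process delimiter-separated records incrementally from a
     * fixed-size buffer. Complete lines are handed to handle_line() as
     * soon as they arrive; the partial tail is kept for the next read. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BUF_SIZE 4096

    static void handle_line(const char *line, size_t len)
    {
        printf("got %zu-byte record\n", len);
    }

    void pump(int fd)
    {
        char buf[BUF_SIZE];
        size_t used = 0;

        for (;;) {
            ssize_t n = read(fd, buf + used, sizeof buf - used);
            if (n <= 0)
                break;
            used += (size_t)n;

            /* hand off every complete line in the buffer */
            char *start = buf, *nl;
            while ((nl = memchr(start, '\n', used - (size_t)(start - buf))) != NULL) {
                handle_line(start, (size_t)(nl - start));
                start = nl + 1;
            }
            /* keep the unfinished tail for the next read */
            used -= (size_t)(start - buf);
            memmove(buf, start, used);

            if (used == sizeof buf) {
                /* record longer than the buffer: either grow, or treat
                   the full buffer as one record (alternatives 4/5) */
                handle_line(buf, used);
                used = 0;
            }
        }
    }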

Resources