I've got an iOS app that compresses a bunch of small chunks of data. I use compression_encode_buffer in LZ4 mode so that it's fast enough for my needs.
Later, I take the file[s] I made and decode them on a non-Apple device. Previously I'd been using Apple's ZLIB compression mode and could successfully decode the output in C# with System.IO.Compression.DeflateStream.
However, I'm having a hell of a time with the LZ4 output. Based on the LZ4 docs, Apple breaks the stream into a series of blocks, each starting with a 4-byte magic number, a 4-byte decompressed size, and a 4-byte compressed size. All that makes sense, and I'm able to parse the file into its constituent raw-LZ4 chunks. Each chunk in the buffer iOS outputs decompresses to about 65,536 bytes, and there are about 10 of them in my case.
But then: I have no idea what to DO with the LZ4 chunks I'm left with. I've tried decoding them with LZ4net's LZ4.LZ4Stream and LZ4net's LZ4.LZ4Codec (the latter manages the first block, but then fails when I feed in the 2nd one). I've also tried several C++ libraries to decode the data. Each of them seems to be looking for a header that the iOS compression functions have encoded in a non-standard way.
Answering my own question: Apple's LZ4 decompressor (with the necessary modifications to handle their raw storage format) is here: https://opensource.apple.com/source/xnu/xnu-3789.21.4/osfmk/vm/lz4.c.auto.html
Edit afterwards: I actually wasn't able to get this working, but I didn't spend much time on it because I found Apple's LZFSE decompressor.
LZFSE Decompressor can be found here: https://github.com/lzfse/lzfse
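For anyone who wants to walk the blocks by hand: the layout described in the question (4-byte magic, 4-byte decompressed size, 4-byte compressed size, then a raw LZ4 block) can be parsed directly and each payload handed to any raw-block LZ4 decoder, as long as you pass it the block sizes instead of expecting it to find a frame header. Here is an untested sketch in Java using lz4-java; the library choice and the "bv41"/"bv4-"/"bv4$" magic values are my assumptions based on reading Apple's lz4.c, so verify them against your own output.

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4SafeDecompressor;

public class AppleLz4Decoder {
    // Magic values as they appear when read as little-endian ints (assumed from xnu's lz4.c).
    private static final int MAGIC_COMPRESSED   = 0x31347662; // "bv41" - compressed block
    private static final int MAGIC_UNCOMPRESSED = 0x2d347662; // "bv4-" - stored block
    private static final int MAGIC_END          = 0x24347662; // "bv4$" - end of stream

    public static byte[] decode(byte[] frame) {
        ByteBuffer in = ByteBuffer.wrap(frame).order(ByteOrder.LITTLE_ENDIAN);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        LZ4SafeDecompressor lz4 = LZ4Factory.fastestInstance().safeDecompressor();

        while (in.remaining() >= 4) {
            int magic = in.getInt();
            if (magic == MAGIC_END) {
                break;
            } else if (magic == MAGIC_COMPRESSED) {
                int decompressedSize = in.getInt();
                int compressedSize = in.getInt();
                byte[] block = new byte[compressedSize];
                in.get(block);
                // Each payload is a raw LZ4 block, so use a raw-block decompressor,
                // not a framed/stream decoder (this assumes blocks are independent).
                byte[] decoded = lz4.decompress(block, 0, compressedSize, decompressedSize);
                out.write(decoded, 0, decoded.length);
            } else if (magic == MAGIC_UNCOMPRESSED) {
                int size = in.getInt();
                byte[] block = new byte[size];
                in.get(block);
                out.write(block, 0, size);
            } else {
                throw new IllegalStateException("Unexpected block magic: " + Integer.toHexString(magic));
            }
        }
        return out.toByteArray();
    }
}

If the blocks turn out not to be independent (i.e. later blocks reference data produced by earlier ones), a plain raw-block decoder won't work and you'd be back to porting the xnu decoder linked above.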
Related
If anyone has used the iOS wrapper for the LZMA SDK available at https://github.com/mdejong/lzmaSDK and has been able to tweak it to report the progress of unarchiving, please help.
I am going to use this SDK in iOS to extract a 16 MB file, which decompresses to a 150 MB file, and this takes around 40 seconds to complete. It would be good to have some kind of callback for showing the progress of decompression.
Help is greatly appreciated.
Thanks
So, I looked at this issue quite a bit recently, and honestly the best you are going to be able to do is look for all the files in the specific tmp dir where decompression is going on, count them, and compare against a known total N. The problem with attempting to do this in the library is that it spans multiple runtimes, and the callback idea makes the code a mess. Also, a callback would not help that much because of the way 7z compression works. To decode, one needs to build up the decompression dictionary before specific files can be decompressed, and building up that dictionary takes a long time before the first file can even be written. So, if you put a "percent done" counter in your app, it would show 0% done for a long time, then jump to 50%, and then to 90 or 100%. Basically, it would not be that useful even if it were implemented.
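If you do settle for the file-counting workaround, the polling itself is trivial. Here is the idea sketched in Java purely for illustration (on iOS the equivalent calls would go through NSFileManager), with the directory path and expected file count being whatever your app already knows:

import java.io.File;

public class ExtractionProgress {
    // Counts the entries that have appeared so far in the temp dir the archive is
    // being extracted into, and reports progress as a fraction of the known total N.
    public static double fractionDone(String tmpDirPath, int expectedTotal) {
        String[] entries = new File(tmpDirPath).list();
        int extracted = (entries == null) ? 0 : entries.length;
        return Math.min(1.0, (double) extracted / expectedTotal);
    }
}

As noted above, this will still sit at 0 for as long as the dictionary is being built, so treat it as a rough indicator rather than a smooth progress bar.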
You could try a C++ port of the latest LZMA SDK (15.06), which doesn't have the limitations of the C version described above. Memory allocation and I/O read/write can be tuned at runtime, and it supports password-encrypted archives, smooth progress reporting, and both LZMA and LZMA2 archive types.
GitHub: https://github.com/OlehKulykov/LzmaSDKObjC
In my application I need to read data from an input stream. I have set the buffer size for reading to 1024 bytes. But I have seen that some Android applications keep the buffer size at 8192 (8 KB). Is there any specific advantage to increasing the buffer size in my application to 8 KB?
Any expert opinion will be much appreciated.
Edit: (I am using BB OS 6 and 7, and I am dealing with a network InputStream.)
I can't say that I've found the universally best buffer size, but it seems to me that something in the range of 1KB to 8KB should be fine in most situations (for BlackBerry Java apps).
Keep in mind that if the amount of data is small (so you'd probably only need one or two buffers at 1KB-8KB), it's probably best just to use the IOUtilities method:
byte[] result = IOUtilities.streamToBytes(inputStream);
with which you don't need to actually pick a buffer size. But, if you know that result would be a large block of data, you're probably right in wanting to read one buffer at a time.
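For the large-data case, the usual pattern is a plain read loop with the buffer size defined in one place. This is just a sketch with an illustrative constant, not code from any particular app:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamReader {
    // Illustrative value - change it between builds and measure, as suggested below.
    private static final int BUFFER_SIZE = 4096;

    public static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[BUFFER_SIZE];
        int bytesRead;
        // read() may return fewer bytes than the buffer holds; loop until end of stream.
        while ((bytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
        }
        return out.toByteArray();
    }
}

Keeping the size in a single constant makes the measure-and-compare approach described next a one-line change per experiment.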
However, I would argue that the answer should almost always be obtained simply by building the app, and measuring performance with a few different values for byte buffer size. It's easy enough to change one constant, build, run and measure again, and then you're not guessing, or taking the advice of someone who doesn't know all the details of your app.
See here for information about BlackBerry Eclipse plugin memory analysis, and
here for BlackBerry Eclipse plugin profiling.
These tools are found in Eclipse by selecting the Window menu, then Show View -> Other... -> BlackBerry -> BlackBerry Memory Statistics View, or BlackBerry Profiler View, while debugging.
This way, you can see how much memory, or processor, the network code is using during the call to retrieve data and populate your buffer.
More
BlackBerry InputStream to String conversion
This question was also asked in the official BlackBerry forum here:
http://supportforums.blackberry.com/t5/Java-Development/What-is-the-best-size-for-a-buffer-in-BlackBerry/td-p/2559417
The OP gave this clarification:
"I am reading from network. Once I establish socket connection with the server, the server will send me notifications one after the other. I need to read the notifications/data from the inputstream available in the socket connection. For this I have a background thread which checks anything is available in the inputstream and if something is available, it will read with the help of a buffer and then passes the read data to a StringBuffer."
Given this information, I have a different take, in that I think the BlackBerry network handling abstracts the Java application from the network buffer processing to the extent that the application buffer size will have little if any impact on the performance.
But be aware, this is only my opinion.
My response on that thread was as follows:
First thing to note is that the method "isAvailable()", in my experience, does not work correctly on OS 5.0 and earlier. It is fixed in OS 6 (at least from my testing).
Because isAvailable() was broken, (and for other application reasons) what I have implemented for a socket connection is that each message is preceded by a length. So in the socket connection, I read the length of the next message, and then the actual data. This is done with no blocking - in other words I read the entire message, regardless of size. I recommend you do the same. The message must exist in full somewhere so it makes no difference if it is in some memory managed by the socket connection, or in some memory managed by you.
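To illustrate the length-prefixed scheme (the 4-byte length field and the names here are my own choices, not necessarily what your protocol uses):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MessageReader {
    private final DataInputStream in;

    public MessageReader(InputStream stream) {
        this.in = new DataInputStream(stream);
    }

    // Reads one complete length-prefixed message, blocking until all of it has arrived.
    public byte[] readMessage() throws IOException {
        int length = in.readInt();          // 4-byte length prefix (assumed format)
        byte[] message = new byte[length];
        in.readFully(message);              // loops internally until 'length' bytes are read
        return message;
    }
}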
Note also, until OS 6.0, when you did the read you would get all the data to fill the buffer you had - in other words it waited till the buffer was full. In OS 6.0 and later, the read can complete without giving you a full buffer.
In your case, you might be working on OS 6.0 and later only, so you could use isAvailable(), create a buffer of that size, and read everything. I can't see that it makes any difference whether you have the bytes in memory managed by the socket, or memory managed by you.
But in fact, I would argue that the best approach is the one that makes your processing simplest. So for example, if you know that the next message is 200 bytes, then read 200 bytes, and then process that message. Then read the next message.
You could spend a lot of time attempting to manage the buffers to match the underlying socket buffers. I don't know exactly how the underlying BlackBerry socket processing code works, but it doesn't put data directly into your buffers. So let it manage its buffer size to optimize the network, you manage your buffer size to optimize your processing. That will work best for everyone.
I am developing an application that needs to read/write some data. My first solution was to store the data as a JSON-encoded string in an SQLite database. Since deserialization of the JSON string was slow (about 5 seconds) and I couldn't pre-buffer any data, I decided to store the data in a binary file on disk. For that I have implemented a reader that reads the binary file. Now I have compared the speed results and found that the times are more or less the same (the file size is better, though).
I am using NSFileHandle for reading the file, and I am reading it line by line. I tested this on an iPhone 3GS with 0.5 MB of data. Is this normal? Should I switch to reading the file using C/C++ functions? Would that be any better? Does anyone have any experience with this? My code is more or less based on this question: How to read data from NSFileHandle line by line?
Thanks!
My application regularly uploads large files. Regardless of their size, all files are compressed before being uploaded to the server.
Part of this project's requirements is to resume cleanly after a crash/power failure, so right now compression is done this way:
large-file.bin is sliced into N slices
Compress each slice & upload it
In case of a crash, I pick up from the last slice.
To optimize upload speed, I'm now looking into sending the whole file (uploads are resumed if they fail) instead of sending slices one by one, which means compressing the whole file instead of compressing each slice.
I'm currently using 7z.dll. I wonder if it's possible, in case of power failure, to tell 7z to resume compression.
I know I could always implement my own compression routine and add such a feature, but before going down that road I wonder if it's possible to do that with 7z (which already has an excellent compression ratio).
As far as I know, no compression algorithm supports that. You will likely have to recompress the source file from the beginning every time, discarding any output bytes until you reach the desired resume position, and then you can send the remaining output bytes from that point on.
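If you do go down that road, the mechanics amount to a counting wrapper around whatever stream the compressor writes to: re-run the compression from the start with identical settings, drop everything before the byte offset you had already uploaded, and forward the rest. A rough sketch of such a wrapper, in Java only to keep it concrete (the names are mine, and it assumes the recompressed output is byte-for-byte identical to the original run, which only holds if neither the input nor the compression settings changed):

import java.io.IOException;
import java.io.OutputStream;

// Wraps the stream the compressor writes to: bytes before resumeOffset are
// discarded (they were already uploaded before the crash), the rest pass through.
public class ResumingOutputStream extends OutputStream {
    private final OutputStream target;
    private final long resumeOffset;
    private long written;

    public ResumingOutputStream(OutputStream target, long resumeOffset) {
        this.target = target;
        this.resumeOffset = resumeOffset;
    }

    @Override
    public void write(int b) throws IOException {
        if (written >= resumeOffset) {
            target.write(b);
        }
        written++;
    }

    @Override
    public void write(byte[] buf, int off, int len) throws IOException {
        long start = Math.max(written, resumeOffset);
        long end = written + len;
        if (end > start) {
            int skip = (int) (start - written);
            target.write(buf, off + skip, len - skip);
        }
        written += len;
    }
}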
I'm just researching at the moment the possibility of writing an app to record an hour's worth of video/audio for a specific use case.
As the video will be an hour long I would want to encode on-the-fly and not after the recording has finished to keep disk usage to a minimum.
Do the video capture APIs write a large uncompressed file to disk that has to be encoded afterwards, or can they encode on the fly, resulting in an optimised file written to disk?
It's important that the video is recorded at a lower resolution than the iPhone's advertised 720/1080p as I need to keep the file sizes down due to length of video (which will need to be uploaded).
Any information you have would be appreciated or even just a pointer in the right direction.
No, they do not record uncompressed video to disk (unless that is what you want). You can specify recording to a MOV/MP4 container and have the video encoded as H.264. Additionally, you can control the average bit rate of the encoding. You can also specify the capture size and the output encoding size, along with scaling options if needed. For demo code, check out AVCamDemo in the WWDC 2010 sample code. This demo code may now be available in the docs.