LZMA SDK on iOS: how to show unarchiving progress

If anyone has used the iOS wrapper for the LZMA SDK available at https://github.com/mdejong/lzmaSDK and has been able to tweak it to report the progress of unarchiving, please help.
I am going to use this SDK on iOS to extract a 16MB archive, which decompresses to a 150MB file and takes around 40 seconds to complete. It would be good to have some kind of callback for showing the progress of decompression.
Help is greatly appreciated.
Thanks

So, I looked at this issue quite a bit recently, and honestly the best you are going to be able to do is look for all the files in the specific tmp dir where decompression is going on, count them, and compare the count to a known total N. The problem with attempting to do this in the library is that it spans multiple runtimes, and the callback idea makes the code a mess. A callback would also not help that much because of the way 7z compression works: to decode, one needs to build up the decompression dictionary before specific files can be decompressed, and building up that dictionary takes a long time before the first file can even be written. So, if you put a "percent done" counter in your app, it would show 0% done for a long time, then jump to 50%, and then to 90% or 100%. Basically, it would not be that useful even if it were implemented.
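For what it's worth, the directory-polling idea above boils down to something like this in plain C (callable from the Objective-C side too). The tmp directory path and the total entry count are assumptions you would supply from your own archive; call this from a repeating timer:

/* Rough sketch of the "count files in the extraction dir" approach.
 * tmpDir is the directory the wrapper extracts into, totalFiles the
 * known number of entries in the archive (both assumptions here). */
#include <dirent.h>
#include <string.h>

static double extractionProgress(const char *tmpDir, int totalFiles)
{
    if (totalFiles <= 0)
        return 0.0;

    DIR *dir = opendir(tmpDir);
    if (dir == NULL)
        return 0.0;

    int count = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        /* Skip "." and ".." so only extracted entries are counted. */
        if (strcmp(entry->d_name, ".") != 0 && strcmp(entry->d_name, "..") != 0)
            count++;
    }
    closedir(dir);

    return (double)count / (double)totalFiles;
}

It will still sit near 0% for a while and then jump, for the dictionary-related reasons described above, and it counts a partially written file as done, so treat the number as approximate.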

You could try the C++ port of the latest LZMA SDK (15.06), which does not have the limitations of the C version described above. Memory allocation and I/O read/write can be tuned at runtime, and it supports password-encrypted archives, smoothed progress reporting, both LZMA and LZMA2 archive types, etc.
GitHub: https://github.com/OlehKulykov/LzmaSDKObjC

Related

How to limit memory usage by libjpeg

Short version: iOS's UIImageJPEGRepresentation() crashes on large images. I'm trying to use & modify libjpeg to respect the max_memory_to_use field, which it's ignoring.
Long version: I'm writing an iOS app which crashes when converting a large image to JPEG after prolonged usage reduces available memory (a trickling leak involving quirks of #autoreleasepool{}, but we're addressing that separately). For images captured by the device camera (normal use, actual size) UIImageJPEGRepresentation() can require up to 200MB (!), crashing if not available. This is a problem with UIImageJPEGRepresentation() which a web search shows goes back for years and seems unsolved; filing a tech support request with Apple elicits "file a bug report" which doesn't solve my immediate customer needs.
To resolve this, I'm bypassing UIImageJPEGRepresentation() by using libjpeg (http://www.ijg.org) and digging into its operation, which shows exactly the same problem (presumably Apple uses it in iOS). libjpeg does provide a means to specify maximum memory usage via the parameter max_memory_to_use a la:
struct jpeg_compress_struct cinfo;
cinfo.mem->max_memory_to_use = 10*1024*1024;
which would be used by the libjpeg function jpeg_mem_available (j_common_ptr cinfo, long min_bytes_needed, long max_bytes_needed, long already_allocated) (in jmemnobs.c) but which, in the standard implementation, is completely ignored (the comment even says "Here we always say, 'we got all you want bud!'"). Blender has altered the function (http://download.blender.org/source/chest/blender_2.03_tree/jpeg/jmemmac.c) to respect the parameter, but it seems I'm missing something to make it work in my app, or it's just being ignored elsewhere anyway.
So: how does one modify jmemnobs.c in libjpeg to actually & seriously respect memory limitations, rather than jokingly ignore them?
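For reference, the usual one-line change is to make jpeg_mem_available() behave like the jmemansi.c backend instead of the jmemnobs.c one. This is a sketch against classic (6b-era) libjpeg sources, not a guarantee for whatever build Apple ships:

/* jmemnobs.c's stock jpeg_mem_available() just returns max_bytes_needed
 * ("we got all you want bud!").  The jmemansi.c backend instead reports
 * only the headroom left under max_memory_to_use: */
#include "jinclude.h"
#include "jpeglib.h"
#include "jmemsys.h"

GLOBAL(long)
jpeg_mem_available (j_common_ptr cinfo, long min_bytes_needed,
                    long max_bytes_needed, long already_allocated)
{
  return cinfo->mem->max_memory_to_use - already_allocated;
}

Note, though, that a memory limit only helps if the backend can spill large virtual arrays to a backing store: jmemnobs.c's jpeg_open_backing_store() simply raises JERR_NO_BACKING_STORE, so with that file the library aborts instead of swapping. Building against jmemansi.c (which spills to tmpfile()) is usually the simpler route, and max_memory_to_use has to be set after jpeg_create_compress(), since that call is what creates cinfo.mem.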

Unmapping memory-mapped images that are created during processing

I have a pretty big issue, although I only have the symptoms, and a theory on the cause.
I have a C++ application under Windows 7 x64 that uses system calls to FFMPEG 0.7.13 to extract frames from videos. When running, the parent application maintains a nice, predictable memory footprint in memory profilers (Task Manager, RAMMap) of about 2MB, and I can see the individual calls to FFMPEG come and go without incident. The trouble is, after about 100 calls to FFMPEG and 70,000+ PNGs created (no one directory has more than 1500 PNGs), the Windows memory page size rises gradually from about 2.5GB to over 7.0GB, and the system is brought to its knees. The sum of the processes for all users is nowhere near the reported memory page amount.
I thought it might be Windows Search indexing related, so I turned off the indexing for the output directories in question using SetFileAttributes() and FILE_ATTRIBUTE_NOT_CONTENT_INDEXED, and while it seems to be working as advertised, it does not seem to combat the issue at hand. My current running theory is that all of these extracted PNGs are either fully or partially memory mapped, by FFMPEG or something else. I can also see the output PNGs under the RAMMap Physical Pages tab as standby mapped files.
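For reference, the indexing tweak described above amounts to roughly this (a minimal Win32 sketch, using the ANSI API variant for brevity):

/* Mark an output directory as "do not content-index", as described above.
 * Existing attributes are preserved; returns FALSE on failure. */
#include <windows.h>

BOOL DisableContentIndexing(const char *path)
{
    DWORD attrs = GetFileAttributesA(path);
    if (attrs == INVALID_FILE_ATTRIBUTES)
        return FALSE;
    return SetFileAttributesA(path, attrs | FILE_ATTRIBUTE_NOT_CONTENT_INDEXED);
}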
Question:
- Is there enough information here to possibly diagnose the exact problem?
- Do I have a way to combat this issue?
Thanks in advance...

iOS: How do they make their app size very small (< 50 MB) that can be downloaded over 3G/4G?

I'm facing the iOS app size issue that an app over 50 MB cannot be downloaded over a 3G/4G connection. My iOS app is heavy because of a large number of images (.png). I have already tried some solutions, but they are not enough:
- Separating resources between iPhone and iPad.
- Compressing the PNGs, but this only reduces my app size a little.
- Using zip archiving, like SSZipArchive.
I saw an app with a download size of less than 50 MB, but after downloading and installing it, its on-device size is more than 100 MB. How can this be done?
I think they may download some resources later in some way, but I do not know how. Can anyone please give me some suggestions? Thank you.
EDIT:
Texture-packer sprite sheets in formats such as .pvr.ccz are also used. These can be handled the same way as .png.
If your app is heavy on PNGs, you could apply something similar to #2, but with better results. I have used the ImageOptim + ImageAlpha combo with great results.
Here is an interesting case study about this method.
Yes, most apps do this. Since you are talking about .png files, which already go through two-stage compression, I don't think further compressing or zipping is going to help you.
Let's say you have several heavy images that bloat your app size. You remove those heavy images from the app package, and when the user downloads the app for the first time, you fetch them as static files over plain HTTP. You could either do this in a separate setup step (showing a progress bar and so on) or download the images at runtime based on what the user does in your app.
Once you fetch them, you can keep these images in your app's Documents folder (or any other folder) and serve them locally from then on (see SDWebCache), or you could keep it simple and make pure HTTP calls every time (no local storage), though this might impact your users' experience.

Is it possible to resume 7zip compression?

My application regularly uploads large files. Regardless of their size, all files are compressed before being uploaded to the server.
Part of this project's requirements is to resume nicely after a crash or power failure, so right now compression is done this way:
- large-file.bin is sliced into N slices
- each slice is compressed and uploaded
- in case of a crash, I pick up from the last slice
To optimize upload speed, I'm now looking into sending the whole file (uploads are resumed if they fail) instead of sending slices one by one, which means compressing the whole file instead of compressing each slice.
I'm currently using 7z.dll. I wonder if it's possible, in case of a power failure, to tell 7z to resume compression.
I know I could always implement my own compression routine with such a feature, but before going down that road I wonder whether this is possible with 7z (which already has an excellent compression ratio).
As far as I know, no compression algorithm supports that. You will likely have to recompress the source file from the beginning every time, discarding any output bytes until you reach the desired resume position, and then you can send the remaining output bytes from that point on.
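To make the "recompress and discard" idea concrete, here is a rough sketch. It uses zlib rather than 7z.dll purely to keep the example self-contained; the principle carries over, but only if compression is fully deterministic (same settings and library version on every run), otherwise the resumed bytes won't match what was already uploaded. The sink callback is a hypothetical stand-in for your upload routine.

/* Sketch of "recompress from the start, discard output until the resume
 * offset, then keep uploading".  zlib stands in for 7z.dll here; `sink`
 * is a hypothetical upload callback. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int compress_from_offset(FILE *in, long resume_offset,
                         void (*sink)(const unsigned char *data, size_t len))
{
    unsigned char inbuf[16384], outbuf[16384];
    long produced = 0;          /* compressed bytes generated so far */
    z_stream zs;
    memset(&zs, 0, sizeof zs);
    if (deflateInit(&zs, Z_BEST_COMPRESSION) != Z_OK)
        return -1;

    int flush;
    do {
        zs.avail_in = (uInt)fread(inbuf, 1, sizeof inbuf, in);
        zs.next_in  = inbuf;
        flush = feof(in) ? Z_FINISH : Z_NO_FLUSH;
        do {
            zs.avail_out = sizeof outbuf;
            zs.next_out  = outbuf;
            deflate(&zs, flush);
            size_t have = sizeof outbuf - zs.avail_out;
            long   end  = produced + (long)have;
            if (end > resume_offset) {
                /* Discard everything already uploaded before the crash. */
                size_t skip = produced < resume_offset
                            ? (size_t)(resume_offset - produced) : 0;
                sink(outbuf + skip, have - skip);
            }
            produced = end;
        } while (zs.avail_out == 0);
    } while (flush != Z_FINISH);

    deflateEnd(&zs);
    return 0;
}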

What is the fastest way of loading and re-sizing an image?

I need to display thumbnails of images in a given directory. I use TFileStream to read the image file before loading the image into an image component. The bitmap is then resized to the thumbnail size, and assigned to a TImage component on a TScrollBox.
It seems to work ok, but slows down quite a lot with larger images.
Is there a faster way of loading (image) files from disk and resizing them?
Thanks, Pieter
Not really. What you can do is resize them in a background thread, and use a "placeholder" image until the resizing is done. I would then save these resized images to some sort of cache file for later processing (Windows does this, and calls the cache thumbs.db in the current directory).
You have several options for the thread architecture itself: a single thread that does all images, or a thread pool where each thread only knows how to process a single image. The AsyncCalls library is yet another way and can keep things fairly simple.
I'll complement skamradt's answer with an attempt to design this to be as fast as possible. For this you should:
optimize I/O
use multiple threads to make use of multiple CPU cores, and to keep even a single CPU core working while you read (or write) files
The use of multiple threads implies that using VCL classes for the resizing isn't going to work, as the VCL isn't thread-safe, and all hacks around that don't scale well. efg's Computer Lab has links for image processing code.
It's important not to issue several concurrent I/O operations when using multiple threads. If you choose to write the thumbnail images back to files, then once you have started reading a file you should read it completely, and once you have started writing a file you should also write it completely. Interleaving both operations will kill your I/O, because you potentially cause a lot of seek operations of the hard disk head.
For best results the reading (and writing) of files should also not happen in the main (GUI) thread of your application. That would suggest the following design (a rough sketch of the thread-safe queue follows the list):
Have one thread read files into TGraphic objects, and put these into a thread-safe list.
Have a thread pool wait on the list of files in original size, and have one thread process one TGraphic object, resize it into another TGraphic object, and add this to another thread-safe list.
Notify the GUI thread for each thumbnail image added to the list, so it can be displayed.
If thumbnails are to be written to file, do this in the reading thread as well (see above for an explanation).
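Since the design above hinges on those thread-safe lists, here is a rough sketch of such a bounded queue. It is written in C with pthreads only because the shape is language-agnostic; in Delphi the same structure can be built with TThread, a TList and a critical section, and image_t is a hypothetical stand-in for a loaded TGraphic.

/* Bounded, blocking, thread-safe queue: the reader thread pushes loaded
 * images, the resizer pool pops them. */
#include <pthread.h>
#include <string.h>

#define QUEUE_CAP 16

typedef struct { char path[260]; void *pixels; } image_t;

typedef struct {
    image_t *items[QUEUE_CAP];
    int head, tail, count, closed;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
} queue_t;

void queue_init(queue_t *q)
{
    memset(q, 0, sizeof *q);
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

/* Producer side: blocks while the queue is full, which throttles the
 * reader so it never races far ahead of the resizers. */
void queue_push(queue_t *q, image_t *img)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_CAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->items[q->tail] = img;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: returns NULL once the queue is closed and drained,
 * which is the signal for a worker thread to exit. */
image_t *queue_pop(queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0 && !q->closed)
        pthread_cond_wait(&q->not_empty, &q->lock);
    image_t *img = NULL;
    if (q->count > 0) {
        img = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
        pthread_cond_signal(&q->not_full);
    }
    pthread_mutex_unlock(&q->lock);
    return img;
}

/* Called by the reader once all files have been read. */
void queue_close(queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    q->closed = 1;
    pthread_cond_broadcast(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

Two of these queues (original-size and resized) plus a per-image notification to the GUI thread give exactly the pipeline described above.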
Edit:
On re-reading your question I notice that you maybe only need to resize one image, in which case a single background thread is of course enough. I'll leave my answer in place anyway, maybe it will be of use to someone else some time. It's what I learned from one of my latest projects, where the final program could have needed a little more speed but was only using about 75% of the quad core machine at peak times. Decoupling I/O from processing would have made the difference.
I often use TJPEGImage with Scale:=jsEighth (in Delphi 7). This is really fast because the JPEG de-compression can skip a lot of the data to fill a bitmap of only an eighth of width and height.
Another option is to use the shell's method to extract a thumbnail, which is pretty fast as well
I'm in the vision business, and I simply upload the images to the GPU using OpenGL (typically 20 frames of 2048x2000x8bpp per second), one bitmap per texture, and let the video card do the scaling (Win32, Mike Lischke's OpenGL headers).
Uploading such an image costs 5-10 ms depending on the exact video card (if it is not integrated and is an NVIDIA 7300 series or newer; very recent integrated GPUs might also be workable). Scaling and displaying costs about 300 µs, which means customers can pan and zoom like crazy with hardly any work done by the app itself. I draw an overlay (which used to be a TMetafile but is now a custom format) on top of it.
My biggest picture is 4096x7000x8bpp, which displays and scales in under 30 ms (GeForce 8600).
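In case it is useful to anyone, the core of the upload-and-let-the-GPU-scale approach looks roughly like this in plain C with legacy fixed-function OpenGL. It assumes a GL context already exists and uses GL_LUMINANCE for the 8 bpp data; the function names are hypothetical:

/* Upload an 8-bit grayscale frame once, then let the GPU do all scaling
 * when the textured quad is drawn. */
#include <windows.h>   /* must precede gl.h on Win32 */
#include <GL/gl.h>

GLuint upload_grayscale_texture(const unsigned char *pixels, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Linear filtering so the card interpolates when zooming. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are not 4-byte padded */
    /* Older cards may require power-of-two dimensions, hence the texture
     * size limits listed below. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

void draw_scaled(GLuint tex)
{
    /* Drawing the quad at any on-screen size scales the image on the GPU. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glColor3f(1.0f, 1.0f, 1.0f);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
}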
A limitation of this approach is the maximum texture size. It can be worked around by splitting the picture across multiple textures, but I haven't bothered yet because I deliver the systems together with the software.
Some typical maximum texture sizes:
- nv6x00 series: 2k*2k, but uploading is just about break-even compared to GDI
- nv7x00 series: 4k*4k; for me these are the baseline cards. GF7300s are around $20-40
- nv8x00 series: 8k*8k
Note that this might not be for everybody. But if you are in the lucky situation of being able to specify hardware limits, it might work. The main problem is laptops like ThinkPads, whose GPUs are older than the average laptop's, which in turn are often a generation behind desktops.
I chose OpenGL over DirectX because it is more stable over time and it is easier to find non-game-related examples.
Take a look at the Graphics32 library: it's very good at drawing things and works great with bitmaps. It is thread-safe, comes with good examples, and it's totally free.
Exploit Windows' capacity to create thumbnails. Remember those hidden Thumbs.db files in folders that contain images?
I have implemented something like this feature, but in VB. My software is able to build thumbnails of 100 files (of mixed sizes) in around 10 seconds.
I am not able to convert it to Delphi, though.
