Swift: Loading a large video file (over 700MB) into memory

I'm trying to load a large video file (over 700MB) into memory from the documents directory using let data: NSData = NSFileManager.defaultManager().contentsAtPath(path)! but it crashes my app. Smaller files load fine. Is there a better way to load larger files? Thanks.

Depending on what you want to do with the file, using NSData.init(contentsOfURL:options:) with the option .DataReadingMappedIfSafe may work for you.
This will map the file into memory (if possible), i.e. the file's contents will only be loaded (page by page) as you access them via the bytes property.
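A minimal sketch of that approach, written in current Swift, where the option is spelled .mappedIfSafe and Data replaces NSData (the file name below is a placeholder):

import Foundation

// Map the video instead of copying it into RAM. Pages are faulted in lazily
// as the bytes are actually touched, so the whole 700 MB file is never
// resident at once. "large-video.mp4" is a hypothetical file name.
let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let url = documents.appendingPathComponent("large-video.mp4")

do {
    let data = try Data(contentsOf: url, options: .mappedIfSafe)
    print("mapped \(data.count) bytes")
} catch {
    print("mapping failed: \(error)")
}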

Related

How can I avoid having NSFileWrapper use lots of memory when writing the file

I have an app that is using NSFileWrapper to create a backup of the user's data. This backup file contains text and media files (compression is not relevant here). Sometimes these backup files get quite large, over 200 MB in size. When I call NSFileWrapper -writeToURL... it appears to load the entire contents into memory as part of the writing process. On older devices, this causes my app to be terminated by the system due to memory constraints.
Is there a simple way to avoid having NSFileWrapper load everything into memory? I've read through every NSFileWrapper question on here that I could find. Any suggestions on how to tackle this?
Here is the current file structure of the backup file:
BackupContents.backupxyz
  user.txt
  - folder1
    - audio files
      asdf.caf
      asdf2.caf
  - folder2
    - audio files
      asdf3.caf
Again, please don't tell me to compress my audio files. That would only be a band-aid to a flawed design.
It seems like I could just move/copy all of the files into a directory using NSFileManager and then make that directory a package. Should I go down that path?
When an NSFileWrapper tree gets written out to disk, it will attempt to perform a hard link of the original file to the new location, but only if you supply a parameter for the originalContentsURL.
It sounds like you're constructing the file wrapper programmatically (for the backup scenario), so your files are probably scattered all over the filesystem. This means that when you writeToURL, you don't have an originalContentsURL, so the hard-link logic gets skipped and each file gets loaded into memory so it can be rewritten.
So, if you want the hard-linking behavior, you need to find a way to provide an originalContentsURL. This is most easily done by supplying an appropriate URL to the initial writeToURL call.
Alternatively, you could try subclassing NSFileWrapper for regular files, and giving them an NSURL that they internally hang on to. You'd need to override writeToURL to pass this new URL up to super, but that URL should be enough to trigger the hard-link code. You'd then use this subclass of NSFileWrapper for the large files you want hard-linked into place.
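A sketch of that subclassing idea in Swift; the class name and setup are hypothetical, and the hard-link attempt is an implementation detail rather than documented API, so treat this as something to verify with Instruments rather than a guaranteed fix:

import Foundation

// A regular-file wrapper that remembers where its file lives on disk and
// hands that URL to super when written, so the hard-link path can be tried
// instead of loading the file's contents into memory.
final class HardLinkingFileWrapper: FileWrapper {
    private let sourceURL: URL

    init(sourceURL: URL) throws {
        self.sourceURL = sourceURL
        try super.init(url: sourceURL, options: [])
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) is not supported in this sketch")
    }

    override func write(to url: URL,
                        options: FileWrapper.WritingOptions,
                        originalContentsURL: URL?) throws {
        // Ignore the (usually nil) originalContentsURL we were given and
        // pass the real on-disk location of this file up to super.
        try super.write(to: url, options: options, originalContentsURL: sourceURL)
    }
}

You would then build the backup's directory wrapper with instances of this class for the large audio files, instead of plain regular-file wrappers.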

"Fastest way to unzip many files on iOS" or "How else can I download many files quickly into my iOS app"

In my app I want the user to be able to download offline map content.
So I moved all my tiles into a zip file (with 0 compression).
The structure is like this: {z/x/y.jpg}
+0
+-0
+--0.jpg
+1
+-1
+--0.jpg
+2
+-2
+--1.jpg
So basically there are going to be very many files for zoom levels 0-15 (about 120,000 tiles for my test region).
I am using https://github.com/mattconnolly/ZipArchive now, but I also tried https://github.com/soffes/ssziparchive before, and both are pretty slow. It takes about 5 (!) minutes on my iPhone 5S for the files to unzip.
Is there any way I can speed things up? What other possibilities are there besides downloading the tiles in one big zip file?
Edit:
How can I download the content of the whole folder to my iPhone quickly, without needing to unzip anything?
Any help is appreciated!
JPGs rarely compress at all with zip - they are by definition already compressed. What you should do is create your own binary file format, and put whatever metadata you need into it along with the images (which you should encode with a really low quality number, to get their size down).
When you download those files, you can open them, quickly read them into memory, and extract data or images as needed.
This will be really fast and have virtually no overhead if your extra data is binary (not text).
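A minimal sketch of that "own binary format" idea (the layout is made up purely for illustration): each entry is a key such as "12/2171/1408" followed by the JPEG bytes, both length-prefixed, so the pack can be written and walked with no decompression step at all.

import Foundation

// Append one tile: 4-byte little-endian key length, key bytes,
// 4-byte little-endian data length, JPEG bytes.
func appendEntry(key: String, jpeg: Data, to pack: inout Data) {
    let keyBytes = Data(key.utf8)
    var keyLength = UInt32(keyBytes.count).littleEndian
    var dataLength = UInt32(jpeg.count).littleEndian
    pack.append(Data(bytes: &keyLength, count: 4))
    pack.append(keyBytes)
    pack.append(Data(bytes: &dataLength, count: 4))
    pack.append(jpeg)
}

// Walk the whole pack and return every entry keyed by its path-style key.
func readEntries(from pack: Data) -> [String: Data] {
    var result: [String: Data] = [:]
    var i = pack.startIndex

    func readLength() -> Int {
        // Little-endian UInt32 rebuilt byte by byte.
        let value = pack[i..<(i + 4)].reversed().reduce(0) { ($0 << 8) | Int($1) }
        i += 4
        return value
    }

    while i < pack.endIndex {
        let keyLength = readLength()
        let key = String(decoding: pack[i..<(i + keyLength)], as: UTF8.self)
        i += keyLength
        let dataLength = readLength()
        result[key] = pack.subdata(in: i..<(i + dataLength))
        i += dataLength
    }
    return result
}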
PS: I just tripped on a PHP Plist class
If anyone is wondering where I ended up:
For my use case (map tiles) I am now using MBTiles instead of zipped images. It's one big database file and super easy to read using FMDB. No unpacking whatsoever needed...
Even when I placed the images all in one binary file without any compression, the "extracting" still took forever!
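A sketch of that MBTiles route, assuming FMDB is available and the file follows the standard MBTiles schema (a tiles table with zoom_level, tile_column, tile_row, tile_data); the file name is hypothetical:

import FMDB

// Open the single .mbtiles SQLite file shipped with (or downloaded by) the app.
let path = Bundle.main.path(forResource: "region", ofType: "mbtiles")!
let db = FMDatabase(path: path)
guard db.open() else { fatalError("could not open \(path)") }

// Pull one tile straight out of the database; no unzipping step anywhere.
func tileData(z: Int, x: Int, y: Int) -> Data? {
    // MBTiles stores rows in TMS order, so flip the XYZ y coordinate.
    let tmsY = (1 << z) - 1 - y
    guard let rs = try? db.executeQuery(
        "SELECT tile_data FROM tiles WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?",
        values: [z, x, tmsY]) else { return nil }
    defer { rs.close() }
    return rs.next() ? rs.data(forColumn: "tile_data") : nil
}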

Why am I sometimes getting files filled with zeros at their end after being downloaded?

I'm developing a download manager using Indy and Delphi XE (the application uses multithreading to attempt several connections to the server). Everything works fine, but sometimes the final downloaded file is broken, and when I check the downloaded temp files I see that 2 or 3 of them are filled with zeros at the end. (Each temp file is the result of one connection's download.)
The larger the file is, the more broken temp files I get as a result.
For example, in one of the temp files, which was 65,536,000 bytes, only the range 0-34,359,426 was valid, and from 34,359,427 to 65,535,999 it was full of zeros. If I delete those zeros, the application automatically downloads the missing segments, and the result, provided the problem doesn't happen again, is a healthy downloaded file.
I want to get rid of those zeros at the end of the temp files without losing download speed.
P.S. I'm using TFileStream and I'm passing it directly to TIdHTTP, downloading the files using the GET method.
Additional info: I handle the OnWork event, which assigns AWorkCount to a public Int64 variable. Each time the file is downloaded, the downloaded file size (that Int64 variable) is logged to a text file, and according to the log the file has been downloaded completely (including those zero bytes).
Make sure the server actually supports downloading byte ranges before you request a range to download. If the server does not support ranges, a requested range will be ignored by the server and the entire file will be sent instead. If you are not already doing so, you should be using TIdHTTP.Head() to test for range support before then calling TIdHTTP.Get() (a sketch of this check follows the list below). You also need to do this anyway to detect whether the remote file has been altered since the last time you downloaded it. Any decent download manager needs to be able to handle things like that.
Also keep in mind that if TIdHTTP knows up front how many bytes are being transferred, it will pre-allocate the size of the destination TStream before downloading data into it. This speeds up the transfer and optimizes disk I/O when using a TFileStream. So you should NOT use TFileStream to access the same file as the destination for multiple simultaneous downloads, even if they are writing to different areas of the file. Multiple TFileStream objects pre-allocating the same file will likely trample over each other as they try to set the file size to different values. If you need to download a file in multiple pieces simultaneously, then either:
1) download each piece to a separate file and copy them into the final file as needed once you have all of the pieces that you need.
2) use a custom TStream class, or Indy's TIdEventStream class, to manage the file I/O yourself so you can ignore TIdHTTP's pre-allocation attempts and ensure that multiple file I/O operations do not overlap each other incorrectly.
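Since this thread's own app is Swift rather than Delphi, here is the same HEAD-then-Range check expressed with URLSession instead of Indy; the URL is a placeholder and the segment size is arbitrary:

import Foundation

let url = URL(string: "https://example.com/big-file.bin")! // hypothetical URL

// 1. HEAD request: learn the size and whether byte ranges are supported.
var head = URLRequest(url: url)
head.httpMethod = "HEAD"
URLSession.shared.dataTask(with: head) { _, response, _ in
    guard let http = response as? HTTPURLResponse,
          http.value(forHTTPHeaderField: "Accept-Ranges") == "bytes",
          let length = Int64(http.value(forHTTPHeaderField: "Content-Length") ?? ""),
          length > 0 else {
        print("no range support; fall back to a single full download")
        return
    }

    // 2. GET one segment; the server should answer 206 Partial Content.
    //    A plain 200 means the range was ignored and the whole file is coming.
    var get = URLRequest(url: url)
    let end = min(length, 8 * 1024 * 1024) - 1
    get.setValue("bytes=0-\(end)", forHTTPHeaderField: "Range")
    URLSession.shared.dataTask(with: get) { data, response, _ in
        let status = (response as? HTTPURLResponse)?.statusCode ?? 0
        print("status \(status), received \(data?.count ?? 0) bytes")
    }.resume()
}.resume()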

Streaming PDF SDK to iOS via HTTP

Are there any good SDKs available on iOS that will not only display a PDF, but will show it as it is downloading from a web source? It is perfectly fine to use a paid-for library as long as it is commercial-friendly.
To clarify, the SDK must be able to show partial files as they are downloading, whether I provide the stream or otherwise. I would like to avoid CGPDFScannerRef due to how low level it is -- I have tried FastPdfKit as well but it will only show the whole PDF after it has been fully downloaded. Any ideas?
PDF is a structured format that consists of different types of data blocks (TOC, text, fonts, colors, annotations), and the information about these blocks is saved at the end of the file. This makes it impossible for CGPDFDocumentRef to open the PDF without all of the data available.
However, you can get around this limitation by linearizing the PDF file so that the metadata is placed at the beginning of the file. I'm not sure, but I think you can then use CGDataProviderCreateSequential in combination with CGPDFDocumentRef to parse a partially downloaded PDF file.
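A small Swift sketch of one piece of that approach: checking whether a downloaded (or partially downloaded) file looks linearized. It relies on the convention that a linearized PDF carries a /Linearized dictionary near the start of the file, so treat it as a heuristic rather than a spec-level check.

import Foundation

// Returns true if the first 1024 bytes contain the /Linearized marker,
// which linearized ("fast web view") PDFs place near the start of the file.
func looksLinearized(_ url: URL) -> Bool {
    guard let handle = try? FileHandle(forReadingFrom: url) else { return false }
    defer { try? handle.close() }
    let head = handle.readData(ofLength: 1024)
    return head.range(of: Data("/Linearized".utf8)) != nil
}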

How to load only a part of a file using FileReference.load in Flash

Is there any way to load only a part of a big file using FileReference.load() in Flash Player?
It's known that the AS3 FileReference API provides the ability to load local files in Flash.
However, if you want to pre-process such a file in Flash, the only way is to use the FileReference.load function. Unfortunately, FileReference.load loads the entire file into memory immediately, and the size of the file that can be loaded is limited to 100 MB.
So, what I want to do is load just a part of the file (under 100 MB), process it, upload it, and then load another part.
Does anyone have any ideas?
Thanks :)
