Download performance of AVAssetDownloadTask - FairPlay

I'm using AVAssetDownloadTask to download some FairPlay-encrypted audio. As per guidelines, the audio is split up into small chunks to allow switching between bitrates during streaming. Our chunks are about 6 seconds each, which means each is less than 100 KB.
The download speed of this process is pretty bad. I've seen speeds between 85 KB/s and 250 KB/s. This is on a connection where downloading a new Xcode beta gives me several megabytes per second.
I'm guessing that the slow speed is due to having to make a separate request for each segment, which adds a lot of overhead. I've tried inspecting the download traffic with Charles, and even though it shows one HTTPS connection per download task, the request body size continually ticks upward over the lifetime of the download. I tried downloading a 100MB test file from the same server where the audio files live, and it came down at a few megabytes per second.
My question: what are best practices for getting good download performance using AVAssetDownloadTask? Should the segments be larger? Should there be a separate file that is one large chunk for downloading? Or is this behavior weird, suggesting I've got something configured wrong?
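For reference, the task is created along these lines; this is a simplified sketch in which the session identifier, asset title, and delegate wiring are placeholders, and FairPlay key loading is handled separately:

    import AVFoundation

    // Simplified sketch of the download setup; identifiers and titles are placeholders.
    func startAudioDownload(playlistURL: URL, delegate: AVAssetDownloadDelegate) -> AVAssetDownloadTask? {
        let configuration = URLSessionConfiguration.background(withIdentifier: "audio-downloads")
        let session = AVAssetDownloadURLSession(configuration: configuration,
                                                assetDownloadDelegate: delegate,
                                                delegateQueue: .main)
        let asset = AVURLAsset(url: playlistURL)
        // FairPlay key requests are handled separately (content key / resource loader delegate).
        let task = session.makeAssetDownloadTask(asset: asset,
                                                 assetTitle: "Audiobook chapter",
                                                 assetArtworkData: nil,
                                                 options: nil)
        task?.resume()
        return task
    }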

Related

Increase video upload speed

I'm using URLSession to upload a recorded video file in .mov format. For a 30-second video it takes so long that my client wants it to be a bit faster. What should I do to increase the upload speed? Thanks!
With a 60MB file it's not surprising that it takes a long time.
If you're on 3G, the maximum upload speed is about 1.7 Mb/sec, so if you get about half that in the real world, a 60MB file would take about 9 minutes to upload. You need to re-encode to make your file smaller.
Aim for around 1MB for a 10-second clip, so 3MB for 30 seconds, and you'll upload in about 25 seconds over 3G. Faster on 4G and Wifi, obviously.
Always assume the worst case, and test your network connectivity using 3G.
To set a size limit, set fileLengthLimit on your AVAssetExportSession. I'd start with a value of 100KB/second, which in your case, for a 30-second clip, would give you
fileLengthLimit = 3000000
From AVAssetExportSession.h
Indicates the file length that the output of the session should not exceed. Depending on the content of the source asset, it is possible for the output to slightly exceed the file length limit. The length of the output file should be tested if you require that a strict limit be observed before making use of the output. See also maxDuration and timeRange.
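A minimal sketch of that configuration, assuming a 30-second source clip and the 100KB/second target above (the preset and output settings are just examples):

    import AVFoundation

    func exportCompressedClip(from asset: AVAsset, to outputURL: URL, completion: @escaping () -> Void) {
        // Preset choice is illustrative; pick whatever matches your quality needs.
        guard let session = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetMediumQuality) else { return }
        session.outputURL = outputURL
        session.outputFileType = .mp4
        // 100 KB/s x 30 s = ~3 MB. The output can slightly exceed this limit,
        // so check the resulting file size if you need a strict cap.
        session.fileLengthLimit = 3_000_000
        session.exportAsynchronously(completionHandler: completion)
    }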

CPU load of streaming vs file downloading when routing data

I'm using a Raspberry Pi 2 to route wifi-to-ethernet connections, so on the ethernet side I have a computer that connects to the internet through the Pi's wifi connection. On the Raspberry Pi I started htop to monitor CPU load, then on the computer I opened Chrome and played a 20-minute 1080p YouTube video. The CPU load didn't seem to go beyond 5%. After that I closed the YouTube tab and started downloading a 5GB binary file from the first row here (https://testdebit.info/). I noticed that the CPU load was much higher, around 10%!
Any explanation of such a difference?
It has to do with compression and how video is encoded. A normal file can be compressed, but nothing like a video stream can.
A video stream can achieve very high compression due to the predictable characteristics of video: the picture usually doesn't change much from one frame to the next. As such, video will send a whole frame (I-frame) and then update it with just the changes (P-frame). It's even possible to do backward prediction (B-frame). Here's a Wikipedia reference.
Yes, I hear your next unspoken question: doesn't more compression mean more CPU time to decompress? That's true for many types of compression, such as the one used by zip files. But since raw video is not very information-dense over time, there are compression techniques that essentially reduce the amount of data you send with very little CPU usage.
I hope this helps.

Flurry / Google Analytics / Localytics bandwidth consumption on iOS

I'm choosing an analytics service for my iOS app. I want to track quite a lot of events, and the app I'm developing is going to be used outdoors, so there will be no wi-fi connection available, and even the cellular connectivity can be of poor quality.
Analytics is the only thing that requires network connectivity in my app. Recently I checked how much traffic it consumes, and it consumes much more than I expected: about 500KB for Google Analytics and about 2MB for Flurry, and that's just for a 2-minute session with a few hundred events. That seems very inefficient to me. (Flurry logs slightly more parameters, but definitely not 4 times more.)
I wonder: has anybody compared other popular analytics solutions for their bandwidth consumption? Which one is the slimmest?
Thank you
If you don't need real-time data (and with an outdoor app you probably don't), you can get the best network compression for Analytics by dispatching more hits at once, to benefit from batching and compression. To do that, set the dispatch interval to 30 minutes.
The maximum size of an uncompressed hit that Analytics will accept is about 8KB, so each hit you send should be smaller than that. With compression, an individual hit comes down to roughly 25% of its original size, assuming mostly ASCII data. To generate 500KB of data you must be sending a few hundred hits individually. With batching and compression the hits shrink much more efficiently: a batch of 20 hits will usually compress to less than 10% of its uncompressed size, or about 800 bytes per hit at most. For further network savings, just send less data per event or fewer events.
By the way, Analytics has a rate limit of 60 tokens, replenished at a rate of 1 hit every 2 seconds. If you are sending a few hundred events in a short period of time, your data is likely getting rate limited.
https://developers.google.com/analytics/devguides/collection/ios/limits-quotas#ios_sdk
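As a minimal sketch of that setting, assuming the Google Analytics iOS SDK is already integrated (the import mechanism depends on how you add the SDK):

    // Assumes the Google Analytics iOS SDK (GAI) is linked; the import name depends on
    // whether you use the framework via CocoaPods or a bridging header.
    import GoogleAnalytics

    func configureAnalyticsForLowBandwidth() {
        // Dispatch batched hits every 30 minutes instead of the default interval,
        // so they compress better and trigger fewer network requests.
        GAI.sharedInstance().dispatchInterval = 1800
    }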

VLC Plugin Memory consumption

I have used the VLC web plugin (version 2.1.3.0) in Firefox to display live streams received from my server in the browser. I need to display 16 channels on one web page, but when I play more than 10 channels at the same time, the processor hits 100% and the video starts to break up. I checked the plugin's memory usage in the task manager and saw that around 45 MB of memory is dedicated to each video (so for 10 channels: 10 * 45 = 450 MB).
Kindly, do you have any way to reduce the VLC plugin's consumption so that 16 channels can be displayed at the same time?
Best regards,
There is no way to do that correctly. You could probably save a few megabytes by disabling audio decoding if some of your 16 streams have audio tracks you don't need. Beyond that, 45MB per stream is quite reasonable for VLC playback, and you won't be able to go much below that unless you reduce the video dimensions.
Additionally, your problem is probably not the use of half a gigabyte of memory (Chrome and Firefox easily use that much by themselves if you open a few tabs), but that VLC exceeds your CPU capacity. Make sure not to use windowless playback, since it is less efficient than the normal windowed mode.
VLC 2.2 will improve the performance of the web plugins on Windows by adding the hardware acceleration known from the standalone application.

ASIHTTPRequest download: Should I request a large file or many small size files?

I am using the ASIHTTPRequest library for my iOS project. My app downloads an ebook (made up of 150+ JPG files). I have two options:
Zip all the images and request a single zipped file (around 200MB).
Request the images one by one (150+ requests).
Which option is best if more than 1000 users request the ebook each day, sometimes simultaneously?
This is not exactly a 100% answer to your question, but speaking from experience, I believe you will find it helpful.
I once did a somewhat similar app where I had to update (redownload) a very large number of XML files (up to a couple of thousand).
The one-by-one method was rather slow, but with good NSOperation and NSOperationQueue management it worked OK, with no UI freezes or crashes on the first iPad. I believe it took at most 15-20 minutes for the maximum number of files (somewhere over 5000 operations, with 5 concurrent downloads), on a wifi connection.
When I tried the zip method, to see if it would be faster or better, it made the iPad 1 crash due to high memory usage. And the zip size was about 100MB, if I remember correctly.
I would suggest you go with the one-by-one option. 1000 requests per day is not such a high number, and this way the user doesn't have to wait for the whole archive to be downloaded, but can read the already-downloaded pages without delay.
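As a rough sketch of that one-by-one queue management, here is what it can look like today; this uses URLSession and OperationQueue for illustration rather than ASIHTTPRequest, and pageURLs / saveDirectory are assumed names:

    import Foundation

    // Illustrative sketch: download pages one by one with at most 5 concurrent downloads.
    // pageURLs and saveDirectory are assumed to be provided by the caller.
    func downloadPages(_ pageURLs: [URL], into saveDirectory: URL) {
        let queue = OperationQueue()
        queue.maxConcurrentOperationCount = 5

        for url in pageURLs {
            queue.addOperation {
                let semaphore = DispatchSemaphore(value: 0)
                let task = URLSession.shared.downloadTask(with: url) { tempURL, _, _ in
                    if let tempURL = tempURL {
                        let destination = saveDirectory.appendingPathComponent(url.lastPathComponent)
                        try? FileManager.default.moveItem(at: tempURL, to: destination)
                    }
                    semaphore.signal()
                }
                task.resume()
                semaphore.wait()  // keep this operation alive until its download finishes
            }
        }
    }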
