iOS Video Creation Size

I am working on an app which is running some processing on the iOS AV Foundation video stream, and then generating a video using the processed output.
I'm noticing that if I make the output frames of the video too large, rendering each frame takes too long and my app gets choppy.
Does anyone have a good suggestion for a method I can use to determine at run-time what the largest video size I can create without affecting (drastically) the framerate of the video? This way, if the app is running on an iPhone 5, it should be able to create higher-resolution videos than if it's running on an iPhone 4.
One thought I had was that before the recording starts, I could render a few frames at different resolutions behind the scenes, time how long each render takes, and use the largest resolution that takes less than X, but if there's a better way, I'd love to hear it.
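For what it's worth, here is a minimal sketch of that benchmark idea, assuming a hypothetical renderTestFrame closure that runs one frame of the app's actual processing at a given size:

```swift
import QuartzCore
import CoreGraphics

// Sketch: render a few throwaway frames at each candidate size before
// recording starts, and keep the largest size whose average frame time
// stays under budget. `renderTestFrame` is a placeholder for whatever
// per-frame processing the app actually does.
func largestAffordableSize(candidates: [CGSize],           // sorted largest first
                           frameBudget: CFTimeInterval = 1.0 / 30.0,
                           renderTestFrame: (CGSize) -> Void) -> CGSize? {
    for size in candidates {
        let trials = 5
        let start = CACurrentMediaTime()
        for _ in 0..<trials { renderTestFrame(size) }
        let average = (CACurrentMediaTime() - start) / CFTimeInterval(trials)
        if average < frameBudget { return size }   // first size fast enough wins
    }
    return nil // even the smallest candidate blew the budget
}
```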
Another option would just be to experiment off-line with what gives me good performance on different devices, and hard-code the video resolution per device type, but I'd rather avoid that.
Thanks in advance!

Related

Fast video stream start

I am building an app that streams video content, something like TikTok: you can swipe through videos in a table, and when a new cell becomes visible its video starts playing. It works great, except when you compare it to TikTok or Instagram etc. My video starts streaming fairly fast, but not always; it is very sensitive to network quality, and sometimes, even when the network is great, it still buffers too long. In the same conditions, TikTok, Instagram and the like don't seem to have that problem. I am using JWPlayer as the video hosting service and AVPlayer as the player. I am also doing async preloading of assets before assigning them to the player item. So my question is: what else can I do to speed up video start? Do I need to do some special video preparation before uploading to the streaming service (I stream .m3u8 files)? Is there some set of presets that enables optimal streaming quality and start speed? Thanks in advance.
So there are a few things you can do.
HLS is Apple's preferred method of streaming to an Apple device, so use it as much as possible for iOS devices.
The best practice for mobile streaming is to offer multiple resolutions. The trick is to start with the lowest resolution available to get the video playing, then switch to a higher resolution once the connection is determined to be capable of it. Generally this happens quickly enough that the user doesn't really notice. YouTube is the best example of this tactic. HLS does this switching automatically, and the .m3u8 files you are streaming are HLS playlists, so it applies to your streams too.
Assuming you are using a UICollectionView or UITableView, try starting low-resolution streams of every video on screen in the background each time scrolling stops. Not only does this allow you to do some cool preview features based on the buffer, but when the user taps a video the stream is already established. If that's too slow, try just the middle video.
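A minimal sketch of that preloading idea, assuming AVPlayer with HLS and a hypothetical videoURL(for:) lookup; capping preferredPeakBitRate makes HLS start on a small rendition:

```swift
import AVFoundation
import UIKit

// Placeholder: however the app maps rows to stream URLs.
func videoURL(for indexPath: IndexPath) -> URL {
    URL(string: "https://example.com/video\(indexPath.row).m3u8")!
}

var preloadedPlayers: [IndexPath: AVPlayer] = [:]

func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
    guard let table = scrollView as? UITableView else { return }
    for indexPath in table.indexPathsForVisibleRows ?? [] {
        guard preloadedPlayers[indexPath] == nil else { continue }
        let item = AVPlayerItem(url: videoURL(for: indexPath))
        item.preferredPeakBitRate = 500_000         // start on a ~0.5 Mbps variant
        item.preferredForwardBufferDuration = 3     // pre-buffer a few seconds
        let player = AVPlayer(playerItem: item)     // attaching kicks off buffering
        player.isMuted = true                       // buffer silently in background
        preloadedPlayers[indexPath] = player
    }
}

// When the user actually plays a cell, lift the cap so HLS can switch up:
// preloadedPlayers[indexPath]?.currentItem?.preferredPeakBitRate = 0
```

How aggressively a paused player fills its buffer varies by OS version, so treat this as a starting point rather than a guarantee.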
Edit the video in the background before upload so it is no larger than the maximum resolution you expect it to be played at. There are no 4K screens on any iOS device, and probably never will be, so cut down the amount of data.
Without more specifics, this is all I've got for now. Hope I understood your question correctly. Good luck!

Create a timelapse from a normal video in iOS

I have two solutions to this problem:
SOLUTION A
Convert the asset to an AVMutableComposition.
For every second, keep only one frame, removing the timing for all the other frames with the removeTimeRange(...) method (sketched below).
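A rough sketch of that step, assuming a 30 fps source (real code should derive the frame duration from the video track's nominalFrameRate):

```swift
import AVFoundation

// Sketch of Solution A: copy the source into a mutable composition, then,
// walking backwards so removals don't shift the ranges still to be deleted,
// remove everything except the first 1/30 s of every second.
func makeTimelapseComposition(from asset: AVAsset) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    try composition.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                    of: asset, at: .zero)

    let oneFrame = CMTime(value: 20, timescale: 600)          // 1/30 s
    var second = CMTimeGetSeconds(asset.duration).rounded(.down)
    while second >= 0 {
        let keepEnd = CMTime(seconds: second, preferredTimescale: 600) + oneFrame
        let removeEnd = CMTimeMinimum(CMTime(seconds: second + 1, preferredTimescale: 600),
                                      composition.duration)
        if keepEnd < removeEnd {
            composition.removeTimeRange(CMTimeRange(start: keepEnd, end: removeEnd))
        }
        second -= 1
    }
    return composition
}
```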
SOLUTION B
Use AVAssetReader to extract all the individual frames as an array of CMSampleBuffers.
Write the [CMSampleBuffer] array back into a movie, skipping frames so that only about one in every 20 remains, as required.
Convert the obtained video file to an AVMutableComposition and use scaleTimeRange(..) to reduce overall timeRange of video for timelapse effect.
PROBLEMS
The first solution is not suitable for full HD videos; the video freezes in multiple places and the seek bar shows inaccurate timing.
e.g. A 12 second timelapse might only be shown to have a duration of 5 seconds, so it keeps playing even when the seek has finished.
I mean the timing of the video gets all messed up for some reason.
The second solution is incredibly slow. For a 10 minute HD video the memory usage grows without bound, since all the processing is done in memory.
I am searching for a technique that can produce a timelapse for a video right away, without a long wait. Solution A kind of does that, but is unsuitable because of the timing problems and stuttering.
Any suggestion would be great. Thanks!
You might want to experiment with the built-in thumbnail generation functions to see if they are fast/efficient enough for your needs.
They have the benefit of being optimised to generate images efficiently from a video stream.
Simply displaying a 'slide show' like view of the thumbnails one after another may give you the effect you are looking for.
There is information on the key class, AVAssetImageGenerator, here, including how to use it to generate multiple images:
https://developer.apple.com/reference/avfoundation/avassetimagegenerator#//apple_ref/occ/instm/AVAssetImageGenerator/generateCGImagesAsynchronouslyForTimes%3acompletionHandler%3a
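A minimal sketch of that route, assuming a local source file at videoURL; the tolerances are tightened so frames land close to the requested times:

```swift
import AVFoundation

let asset = AVAsset(url: videoURL)                  // `videoURL` is your source file
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true     // respect rotation metadata
generator.requestedTimeToleranceBefore = .zero      // exact-ish frame times
generator.requestedTimeToleranceAfter = .zero

// One thumbnail per second of source video.
let times = (0..<Int(CMTimeGetSeconds(asset.duration))).map {
    NSValue(time: CMTime(seconds: Double($0), preferredTimescale: 600))
}

generator.generateCGImagesAsynchronously(forTimes: times) { _, image, _, result, _ in
    if result == .succeeded, let image = image {
        // Collect or display the CGImage; note this handler runs on a
        // background queue, so synchronize access to shared state.
        print("got frame \(image.width)x\(image.height)")
    }
}
```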

Removing low frequency (hiss) noise from video in iOS

I am recording videos and playing them back using AVFoundation. Everything is perfect except the hissing which runs through the whole video. You can hear this hissing in every video captured from any iPad; even videos captured with Apple's built-in camera app have it.
To hear it clearly, record a video in a place as quiet as possible without saying anything. It can be detected very easily through headphones with the volume at maximum.
After researching, I found out that this hissing is produced by the device's preamplifier and cannot be avoided while recording.
The only possible solution is to remove it during post-processing of the audio. Such noise can be reduced with filters and noise gates (a low-pass filter rolls off the high-frequency content where hiss lives). There are applications and software like Adobe Audition which can perform this operation. This video shows how it is achieved using Adobe Audition.
I have searched the Apple docs and found nothing which can achieve this directly. So I want to know if there exists any library, API, or open-source project which can perform this operation. If not, how can I start going in the right direction? It does look like a complex task.
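For what it's worth, AVAudioEngine can apply that kind of filtering on-device. Here is a minimal offline-rendering sketch with a single AVAudioUnitEQ low-pass band; the 8 kHz cutoff is a guess to be tuned by ear against your recordings:

```swift
import AVFoundation

// Sketch: run a recorded audio file through a low-pass EQ band to attenuate
// high-frequency hiss, writing the filtered result to outputURL.
func filterHiss(from inputURL: URL, to outputURL: URL) throws {
    let file = try AVAudioFile(forReading: inputURL)

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 1)

    // A single low-pass band: frequencies above ~8 kHz are rolled off.
    let band = eq.bands[0]
    band.filterType = .lowPass
    band.frequency = 8000        // assumption; tune by ear
    band.bypass = false

    engine.attach(player)
    engine.attach(eq)
    engine.connect(player, to: eq, format: file.processingFormat)
    engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

    // Render offline (faster than real time) instead of playing out loud.
    try engine.enableManualRenderingMode(.offline,
                                         format: file.processingFormat,
                                         maximumFrameCount: 4096)
    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()

    let output = try AVAudioFile(forWriting: outputURL,
                                 settings: file.fileFormat.settings)
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!

    while engine.manualRenderingSampleTime < file.length {
        let framesLeft = file.length - engine.manualRenderingSampleTime
        let frames = min(AVAudioFrameCount(framesLeft), buffer.frameCapacity)
        if try engine.renderOffline(frames, to: buffer) == .success {
            try output.write(from: buffer)
        }
    }
    engine.stop()
}
```

Real hiss removal (spectral noise reduction, as Audition does) is more involved, but an EQ band plus a noise gate gets surprisingly far.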

iOS - how can I programmatically calculate the time limit for recording audio/video with a known file size limit

I have tried to Google a lot, but it seems like no one has done this before in iOS.
My issue is: my server only allows the client to upload video/audio/image files of a limited size (e.g. 30 MB for video, 1 MB for audio). With that limit, I want to figure out how much time the user is allowed to record audio/video. The calculation must take different devices into account; for example, the iPad 3 has a better camera than the iPad 2, so it allows less recording time for the same file size.
I am wondering if we can programmatically calculate the time limit based on the known file size limit.
Thanks,
Luan.
When working with large amounts of data such as video and audio, compression should play a part in your calculation.
Compression results can vary greatly depending on what you are recording and as a result it would be unrealistic to try to forecast a certain maximum duration.
I can think of two options:
Predetermine very restrictive recording times per device (I believe it is possible in iOS to tell an iPad 3 from an iPad 2).
Figure out a way to re-encode the video (or a smaller part of it) until it is within limits; see the sketch below.
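One hedged way to do the second option is AVAssetExportSession's fileLengthLimit, which asks the encoder to keep the output under a byte ceiling (the preset choice here is an assumption):

```swift
import AVFoundation

// Sketch: re-export a recording with a hard byte ceiling and let the
// encoder fit the video inside it. fileLengthLimit is advisory, but it
// works well in practice.
func export(asset: AVAsset, to outputURL: URL,
            maxBytes: Int64,
            completion: @escaping (Bool) -> Void) {
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetMediumQuality) else {
        completion(false); return
    }
    session.outputURL = outputURL
    session.outputFileType = .mp4
    session.fileLengthLimit = maxBytes   // e.g. 30 * 1024 * 1024
    session.exportAsynchronously {
        completion(session.status == .completed)
    }
}
```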
Best of luck!
Cantgetright has described perfectly why this is hard.
What you really care about are the camera's megapixels (its resolution), the worst-case storage size of one second of video, and how many free megabytes are left on the phone.
If you know most of these elements, time can be the constraint by which you determine the last one.
Always overestimate size to guarantee it'll work no matter what. People don't know how big 5 seconds of video is on their iDevices anyway, so you can be stingy with the allotted time.
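As a rough sketch of that arithmetic: if you pin the encoder to a fixed average bit rate (for example via AVAssetWriter's AVVideoAverageBitRateKey), the recording limit falls out directly. All the numbers below are assumptions:

```swift
import AVFoundation

let maxUploadBytes: Int64 = 30 * 1024 * 1024     // the server's 30 MB limit
let videoBitRate = 2_000_000                     // bits/s you ask the encoder for
let audioBitRate = 64_000

// Settings you'd hand to an AVAssetWriterInput to pin the video bit rate:
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 720,
    AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: videoBitRate]
]

// size (bits) = bit rate (bits/s) x duration (s), so:
let maxSeconds = Double(maxUploadBytes * 8) / Double(videoBitRate + audioBitRate)
let recordingLimit = maxSeconds * 0.9            // ~110 s; margin for container overhead
```

Because average-bit-rate encoding is approximate, the safety margin does the overestimating for you, device differences aside.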

How to screen record the iOS simulator at 60 fps?

It turns out that capturing video from the screen is a hard task on the Mac. I have a small game running in the simulator and want to make a screencast of the gameplay for YouTube. Since it's a fast-paced scroller game, the video must be recorded at 60 fps to look good.
I know the actual video on YouTube, for example, is just 24 to 30 fps, but each of those slower frames is blended with the next.
When capturing the simulator at a lower frame rate than 60 fps, the result looks very jagged, since every frame is razor sharp with no blending.
I tried a couple of Mac screen recorders, but none of them were able to capture 60 fps video from the simulator, and the frames in the resulting video looked as if the app had taken plenty of screenshots and stitched them together into a video container.
But since there are great demo videos on youtube showing fast-paced gameplay of iOS apps without just recording the screen with a video camera, I wonder what kind of application they use to get a smooth screen capture.
Hopefully someone who already went through this problem can point out some solutions.
I've had good results screen recording from the simulator using SnapZ Pro X from Ambrosia software:
http://www.ambrosiasw.com/utilities/snapzprox/
One problem that you're likely to have is that the simulator only simulates iOS's OpenGL graphics in software, so unless you have a really powerful Mac, it's likely that the simulator won't be able to run your game at 60fps anyway.
It's possible that the videos you've seen used the HDMI video out on the iPhone to mirror the screen from the device into a video capture card on the computer. That would likely perform much better because the Mac wouldn't have to both generate and record the graphics simultaneously.
I remember watching a video of the Aquaria guys talking about how they recorded their gameplay videos. Essentially the game recorded the input from the controller/keyboard while the game was played normally. Then they could play back the game they had just played but one frame at a time, with each frame being rendered out to a file as it went. Then all those frames are composited together and bam, a full 60fps video with perfectly rendered graphics. Bit overkill but it's a nice solution.
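A hedged sketch of that replay idea, with all the game-side types hypothetical: drive the simulation at a fixed 60 Hz step, render every frame to disk, then assemble the images into a 60 fps movie with AVAssetWriter or a command-line tool:

```swift
import UIKit

// Hypothetical protocol: a game that can be stepped deterministically and
// drawn into an arbitrary graphics context.
protocol ReplayableGame {
    mutating func advance(by dt: TimeInterval, input: RecordedInput?)
    func render(in context: CGContext, size: CGSize)
}
struct RecordedInput { let frame: Int /* plus touches, keys, ... */ }

func dumpFrames(game: inout ReplayableGame,
                inputs: [Int: RecordedInput],     // input log keyed by frame number
                frameCount: Int,
                size: CGSize,
                to directory: URL) throws {
    let renderer = UIGraphicsImageRenderer(size: size)
    for frame in 0..<frameCount {
        game.advance(by: 1.0 / 60.0, input: inputs[frame]) // fixed 60 Hz step
        let image = renderer.image { ctx in
            game.render(in: ctx.cgContext, size: size)
        }
        try image.pngData()?
            .write(to: directory.appendingPathComponent("frame_\(frame).png"))
    }
}
```

Since rendering happens offline, each frame can take as long as it needs; the final movie still plays back at a perfectly smooth 60 fps.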
A program that is able to record at 60 fps is Screenflick.
