How to record video with low audio quality - iOS

I am working on a project where we want to minimize the size of the audio portion of a video. I know we can use AVAudioSession to record pure audio, and can set quality details such as the sampling rate and the number of channels.
But when I want to build a video recorder that captures audio at the same time, I found that with AVCaptureSession I can only set the quality of video and audio together using sessionPreset, which makes the video and audio quality decrease together.
Is there a way to keep the video at high quality while reducing the size of the audio when recording a video?
I appreciate any help.
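One workaround that is often suggested (a sketch under assumptions, not a confirmed answer): instead of relying on sessionPreset, feed the capture outputs into an AVAssetWriter, whose audio input takes its own outputSettings, so the audio bitrate, sample rate, and channel count can be shrunk independently of the video quality.

import AVFoundation

// A minimal sketch, not a complete recorder: an AVAssetWriter audio input
// configured for small audio (mono, 22.05 kHz, 32 kbps AAC) that can sit
// alongside a full-quality video input. The concrete numbers are only
// illustrative assumptions.
let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 22_050,
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRateKey: 32_000
]
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
audioInput.expectsMediaDataInRealTime = true
// Append audio sample buffers from an AVCaptureAudioDataOutput delegate to
// audioInput, and video buffers (at whatever quality you like) to a separate
// AVAssetWriterInput for video.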

Related

Improve image quality from an RTSP stream with OpenCV

I have an RTSP stream from a pretty good camera (my mobile phone).
I am getting the stream using OpenCV:
cv2.VideoCapture(get_camera_stream_url(camera))
However, the image quality I get is way below that of my mobile phone's camera. I understand that the RTSP protocol may lower the resolution, but still, the image quality is not good enough for OCR.
That said, although I have a VIDEO stream, the object I am recording is static. So all frames of the video should be more or less the same, except for noise or lighting issues.
I was wondering if it is possible to take a 10-second video with several frames and combine them into a SINGLE frame with better sharpness, reducing the noise.
Is it viable? How?
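One simple way to exploit the static scene (a sketch; it reuses the question's get_camera_stream_url helper and assumes a fixed frame count standing in for roughly 10 seconds) is temporal averaging, since averaging N aligned frames reduces zero-mean noise by roughly a factor of sqrt(N):

import cv2
import numpy as np

cap = cv2.VideoCapture(get_camera_stream_url(camera))  # as in the question

acc = None
count = 0
for _ in range(100):  # ~10 s at 10 fps; adjust to your stream
    ok, frame = cap.read()
    if not ok:
        break
    acc = frame.astype(np.float64) if acc is None else acc + frame
    count += 1

if count:
    # Temporal average of the accumulated frames, clipped back to 8-bit.
    average = np.clip(acc / count, 0, 255).astype(np.uint8)
    cv2.imwrite("denoised.png", average)

If the camera moves at all between frames, align them first (e.g. with cv2.findTransformECC) before averaging; OpenCV also ships cv2.fastNlMeansDenoisingMulti for multi-frame denoising.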

iOS Swift: buffer 30 FPS video for real-time object detection

I have trained an ObjectDetector for iOS. Now I want to use it on video with a frame rate of 30 FPS.
The ObjectDetector is a bit too slow: it needs 85 ms for one frame, while for 30 FPS it would have to be below 33 ms.
Now I am wondering whether it is possible to buffer the frames and the predictions for a specified time x and then play the video on the screen?
If you have already tried using a smaller/faster model (and have ensured that your model is fully optimized to run in Core ML on the Neural Engine), we had success doing inference on only every nth frame.
The results were suitable for our use case, and you couldn't really tell that we were only running inference at 5 fps, because we were able to keep displaying the camera output at the full frame rate.
If you don't need realtime then yes, certainly you could store the video and do the processing per frame afterwards; this would let you parallelize things into bigger batch sizes as well.
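A minimal sketch of that every-nth-frame idea (my illustration, not the answerer's code; detector.predict is a hypothetical stand-in for the Core ML call, and n = 6 turns a 30 fps feed into roughly 5 fps of predictions):

import AVFoundation

final class FrameSampler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var frameCount = 0
    private let n = 6  // run the model on every 6th frame

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // The preview layer keeps rendering every frame at full rate;
        // we only pay for inference on every nth frame.
        frameCount += 1
        guard frameCount % n == 0 else { return }

        // let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        // detector.predict(pixelBuffer)  // hypothetical model call; keep
        // drawing the latest results over the live preview until the next one.
    }
}

Set this object as the sample buffer delegate of an AVCaptureVideoDataOutput on a background queue, and overlay the most recent detections on the live preview.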

Is there a quality difference between output of AVCaptureMovieFileOutput and AVCaptureVideoDataOutput?

In the process of capturing a light-trail photo, I noticed that for fast-moving objects there is slightly more discontinuity between successive frames if I use the sample buffers from AVCaptureVideoDataOutput than if I record a movie, extract the frames, and run the same algorithm.
Is there a refresh rate/frame rate difference if the two modes are used?
A colleague with experience in professional photography claims that there is a visible lag even in Apple's default camera app when comparing the preview in Photo mode and Video mode, but it is not obvious to me.
Furthermore, I am actually capturing video at a low frame rate (close to the highest exposure).
To conclude these experiments, I need to know whether there is any definitive proof to confirm or disprove this.

How to estimate bandwidth / speed requirements for real-time streaming video?

For a project I'm working on, I'm trying to stream video to an iPhone through its headphone jack. My estimated bitrate is about 200 kbps (if I'm wrong about this, please ignore it).
I'd like to squeeze as much performance out of this bitrate as possible, and sound is not important to me, only video. My understanding is that to stream real-time video I will need to encode it with some codec on the fly and send compressed frames to the iPhone for it to decode and render. Based on my research, H.265 seems to be one of the most space-efficient codecs available, so I'm considering using that.
Assuming my basic understanding of live streaming is correct, how would I estimate the FPS I could achieve for a given resolution using the H.265 codec?
The best solution I can think of is to take a video file, encode it with H.265, and trim it to 1 minute in length to see how large the file is. The issue I see with this approach is that my calculations would include some overhead from the video container format (AVI, MKV, etc.) and from the audio channels that I don't care about.
I'm trying to stream video to an iPhone through its headphone jack.
Good luck with that. The headphone jack is audio only.
My estimated bitrate is about 200 kbps
At what resolution? 320x240?
I'd like to squeeze as much performance out of this bitrate as possible, and sound is not important to me, only video.
Then drop the sound streams altogether. Really though, 200 kbit/s isn't enough for video of any reasonable size or quality.
Assuming my basic understanding of live streaming is correct, how would I estimate the FPS I could achieve for a given resolution using the H.265 codec?
Nobody knows, because you've told us almost nothing about what's in this video. The bandwidth required for the video is a product of many factors, such as:
Resolution
Desired Quality
Color Space
Visual complexity of the scene
Movement and scene changes
Tweaks and encoding parameters (fast start? low latency?)
You're going to have to decide what sort of quality you're willing to accept, and decide subjectively what the balance between that quality and frame rate is. (Remember too that if there isn't much going on, you basically get frames for free since they take very little bandwidth. Experiment.)
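(A rough back-of-the-envelope heuristic, my addition rather than part of this answer: codecs are often compared in bits per pixel, so fps ≈ bitrate / (bits_per_pixel × width × height). If you assume H.265 needs roughly 0.1 bits per pixel for watchable quality, then at 320x240 you would get about 200,000 / (0.1 × 320 × 240) ≈ 26 fps. Treat the 0.1 figure as an assumption to be validated by experiment.)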
The best solution I can think of is to take a video file, encode it with H.265, and trim it to 1 minute in length to see how large the file is.
Take many videos, typical of what you'll be dealing with, and figure it out from there.
The issue I see with this approach is that my calculations would include some overhead from the video container format (AVI, MKV, etc.) and from the audio channels that I don't care about.
Your video stream won't have a container at all? Not even TS? You can use FFmpeg to dump the raw stream data for you.
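For example (an illustrative command with assumed file names), you can drop the audio and write a raw HEVC elementary stream so that neither container nor audio overhead skews the measurement:

ffmpeg -i input.mp4 -t 60 -an -c:v libx265 -b:v 200k -f hevc output.265

Here -an drops the audio, -t 60 keeps one minute, and -f hevc writes the bare bitstream; dividing the output size by 60 seconds gives the effective video-only bitrate.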

Process Slow Motion Video Effect iOS

How would I go about creating a slow-motion effect on a portion of a video recorded or obtained from the camera roll in iOS? I am using the AVFoundation framework to select a video from the camera roll or record a new one. I intend to apply the effect from Time1 to Time2 and then let the video continue at normal speed.
Generally speaking, you create a slow-motion effect by recording at a higher frame rate. So if you record at 60 FPS but play back at 30 FPS, you have created a half-speed slow-motion effect. This is how it is done with film. With prerecorded, fixed-frame-rate footage you can play it back at a fraction of the original frame rate. If this is to be saved back to a container file, you will need to adjust the presentation timestamps accordingly.
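In AVFoundation, one way to do that timestamp adjustment (a sketch with illustrative values; sourceURL, the 2 s..4 s range, and the 2x factor are all assumptions) is AVMutableComposition's scaleTimeRange, which stretches the presentation timestamps of just the chosen range:

import AVFoundation

// A minimal sketch: copy the asset into a composition, then stretch one
// time range to twice its duration (a half-speed effect). The rest of the
// video keeps its normal speed.
func makeSlowMotionComposition(sourceURL: URL) -> AVMutableComposition {
    let asset = AVURLAsset(url: sourceURL)
    let composition = AVMutableComposition()

    try? composition.insertTimeRange(
        CMTimeRange(start: .zero, duration: asset.duration),
        of: asset,
        at: .zero)

    // Slow down Time1..Time2 (here 2 s..4 s) by scaling it to twice its length.
    let effectRange = CMTimeRange(
        start: CMTime(seconds: 2, preferredTimescale: 600),
        duration: CMTime(seconds: 2, preferredTimescale: 600))
    composition.scaleTimeRange(
        effectRange,
        toDuration: CMTime(seconds: 4, preferredTimescale: 600))

    return composition
}

Exporting the composition with AVAssetExportSession then writes the retimed video back to a file. Note that stretching timestamps lowers the effective frame rate during the slowed section, which is why recording at a higher frame rate in the first place gives smoother results.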
