Shoot photo in high quality while capturing video - iOS

Hello,
Is it possible to take an image in high quality while viewing video with AVCapture in low quality? The problem is that I need to analyze the video (with a special algorithm, which takes too much time on high-quality images, so I use a low video quality) and then take a photo in high quality when my algorithm says "just do it" :)
Best regards
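One way this is commonly handled (a minimal sketch, assuming iOS 10+ and the AVCapturePhotoOutput API; the analyzer call is a placeholder for the detection algorithm, and the actual still size is capped by the active format's highResolutionStillImageDimensions, so check that on your device):

    import AVFoundation

    final class CaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate,
                                   AVCapturePhotoCaptureDelegate {
        let session = AVCaptureSession()
        let videoOutput = AVCaptureVideoDataOutput()
        let photoOutput = AVCapturePhotoOutput()

        func configure() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            let input = try AVCaptureDeviceInput(device: camera)

            session.beginConfiguration()
            session.sessionPreset = .vga640x480               // cheap frames for the algorithm
            if session.canAddInput(input) { session.addInput(input) }

            videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "analysis"))
            if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

            if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
            photoOutput.isHighResolutionCaptureEnabled = true // allow stills above the preset

            session.commitConfiguration()
            session.startRunning()
        }

        // Every low-resolution frame lands here; run the analysis on it.
        func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // if analyzer.saysJustDoIt(sampleBuffer) { takeHighResPhoto() }  // placeholder
        }

        func takeHighResPhoto() {
            let settings = AVCapturePhotoSettings()
            settings.isHighResolutionPhotoEnabled = true
            photoOutput.capturePhoto(with: settings, delegate: self)
        }

        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            let jpeg = photo.fileDataRepresentation()         // the high-quality still
            _ = jpeg // save or process it
        }
    }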

Related

Proper method to identify a particular plant in a video when the video is fast?

I am new to machine learning. I want to run a classifier (machine-learning-based or deep-learning-based) on a video to identify one plant in a grass video.
I have some issues with the video:
The video is fast, and sometimes it is even difficult for me to identify the plant in it myself.
The resolution/video quality is very high (taken by a 12-megapixel RGB camera).
What would be the best approach to identify this plant? Which machine learning approach will provide a more accurate result?
Every deep learning architecture takes frames for processing, so the video is converted into frames first; the approach therefore does not depend on the video's speed.
Accuracy depends on the deep learning architecture.
You can check some deep learning architectures such as YOLO and SSD.
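As an illustration of that first point, here is a small sketch (Swift/AVFoundation, assuming an Apple platform; OpenCV's VideoCapture plays the same role elsewhere) of sampling still frames out of a recording at a fixed rate, independent of how fast the footage moves. The file name and sampling rate are just examples.

    import AVFoundation
    import CoreGraphics

    // Pull individual frames out of a recorded video so a detector such as
    // YOLO or SSD can run on still images.
    let asset = AVAsset(url: URL(fileURLWithPath: "plants.mov"))   // placeholder path
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero                 // exact frames, not nearest keyframe
    generator.requestedTimeToleranceAfter = .zero
    generator.maximumSize = CGSize(width: 640, height: 640)        // downscale 12 MP frames for the model

    var frames: [CGImage] = []
    for t in stride(from: 0.0, to: asset.duration.seconds, by: 0.5) {  // sample 2 frames per second
        let time = CMTime(seconds: t, preferredTimescale: 600)
        if let image = try? generator.copyCGImage(at: time, actualTime: nil) {
            frames.append(image)
        }
    }
    // Each CGImage can now be classified on its own, regardless of playback speed.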

AForge video/picture quality

Even though the webcam is 5 MP, its preview and captured pictures are low quality.
Is there any way to fix this and raise the quality to the maximum possible?

Taking Frame from Video vs Taking a Photo

My specific question is: What are the drawbacks to using a snipped frame from a video vs taking a photo?
Details:
I want to use frames from live video streams instead of taking pictures because it is faster. I have already researched and considered the following:
Videos need faster shutter speed, leading to higher possibility of blurring
Faster shutter speed also means less exposure to light, leading to potentially darker images
A snipped frame from a video will probably be lower resolution (although perhaps we can turn up the resolution to compensate for this?)
Video might take up more memory -- I am still exploring the details in another post (What is being stored and where when you use cv2.VideoCapture()?)
Anything else?
I will reword my question to make it (possibly) easier to answer: what changes must I make to a "snip a frame from video" process to make the result equivalent to taking a photo? And are these changes worth it?
The maximum resolution in picamera is 2592x1944 for still photos and 1920x1080 for video recording. Another issue to take into account is that you cannot receive all formats from VideoCapture, so conversion of the YUV frame to JPG will be your responsibility. OpenCV can handle this, but it takes considerable CPU time and memory.
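The same conversion cost shows up on iOS if you grab frames from AVCaptureVideoDataOutput: the buffers arrive as YUV and something has to encode them. A minimal sketch of that step with Core Image (reusing one CIContext, since creating one per frame is what really hurts):

    import AVFoundation
    import CoreImage

    let ciContext = CIContext()   // create once and reuse; per-frame contexts are expensive

    // Encode a YUV (420v/420f) camera frame to JPEG data.
    func jpegData(from sampleBuffer: CMSampleBuffer) -> Data? {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let image = CIImage(cvPixelBuffer: pixelBuffer)   // wraps the YUV buffer without copying
        return ciContext.jpegRepresentation(of: image, colorSpace: CGColorSpaceCreateDeviceRGB())
    }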

How to record video with low audio quality

I am working on a project where we want to minimize the size of the sound part of a video. I know we can use AVAudioSession to record pure audio, and that its quality can be set in detail (sampling rate, number of channels).
But when I want to build a recorder that captures video and audio at the same time, I found that with AVCaptureSession I can only set the quality of video and audio together using sessionPreset, which lowers the quality of both at once.
I am wondering whether there is a way to keep the video in high quality while reducing the size of the audio when taking a video?
I appreciate any help.
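One way around the coupling (a sketch, not drop-in code): skip sessionPreset for quality control and write the file yourself with AVAssetWriter, which lets the audio track carry its own, much smaller encoder settings. All the numbers below are illustrative, and AVVideoCodecType.h264 requires iOS 11 (use AVVideoCodecH264 on earlier systems).

    import AVFoundation

    func makeWriter(outputURL: URL) throws -> (writer: AVAssetWriter,
                                               video: AVAssetWriterInput,
                                               audio: AVAssetWriterInput) {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

        // Keep the video track at high quality.
        let video = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1920,
            AVVideoHeightKey: 1080,
        ])

        // Shrink the audio track: mono, low sample rate, low AAC bitrate.
        let audio = AVAssetWriterInput(mediaType: .audio, outputSettings: [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVNumberOfChannelsKey: 1,
            AVSampleRateKey: 22_050,
            AVEncoderBitRateKey: 32_000,
        ])

        video.expectsMediaDataInRealTime = true
        audio.expectsMediaDataInRealTime = true
        writer.add(video)
        writer.add(audio)
        return (writer, video, audio)
    }
    // Feed the sample buffers from the AVCaptureVideoDataOutput and
    // AVCaptureAudioDataOutput delegates into these inputs with append(_:).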

Take photo during video input

I'm currently trying to take an image at the best quality while capturing video at a lower quality. The problem is that I'm using the video stream to check whether faces are in front of the cam, and this needs lots of resources, so I'm using a lower-quality video stream; if any faces are detected, I want to take a photo in high quality.
Best regards, and thanks for your help!
You cannot have multiple capture sessions, so at some point you will need to swap to a higher resolution. First, you say that face detection takes too many resources when using high-res snapshots. Why not simply down-sample the image and keep using high resolution all the time (send the down-sampled one to the face detection and display the high-res one)?
I would start with Apple's most common graphics context and try to downscale the image there. If that takes too much CPU, you could try to do the same on the GPU (find a library that does it, or write a simple program), or you could even simply drop the odd lines and columns of the raw image data. In any of those cases, note that you probably do not need the face detection on the same thread as the display, and you most likely don't need a high frame rate for the detection either (display the camera at full FPS but update the face recognition at 10 FPS, for instance).
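Something like this is what the down-sampling suggestion amounts to (a sketch with Core Image; the scale factor and detection rate are arbitrary):

    import AVFoundation
    import CoreImage

    // Detector configured once, tuned for speed rather than accuracy.
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyLow])

    // Hand the detector a scaled-down copy of a full-resolution camera frame.
    func detectFaces(in sampleBuffer: CMSampleBuffer, scale: CGFloat = 0.25) -> [CIFeature] {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return [] }
        let small = CIImage(cvPixelBuffer: pixelBuffer)
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))
        return detector?.features(in: small) ?? []
    }
    // Display the original buffer at full resolution; call detectFaces on a
    // background queue, and only for every Nth frame (roughly 10 FPS is plenty).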
Another thing you can do is keep the whole thing in low res; then, when you need to take the image, switch the session to high res, take the shot, and swap back to low res for face detection.
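And the swap-preset suggestion in sketch form; note that in real code you would swap back only after the photo delegate has delivered the still, since capturePhoto is asynchronous:

    import AVFoundation

    func grabHighResStill(session: AVCaptureSession, capture: () -> Void) {
        session.beginConfiguration()
        session.sessionPreset = .photo          // high-quality stills
        session.commitConfiguration()

        capture()                               // e.g. photoOutput.capturePhoto(with:delegate:)

        session.beginConfiguration()
        session.sessionPreset = .vga640x480     // back to cheap frames for face detection
        session.commitConfiguration()
    }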
