iOS frequency of taking photos vs video

I am a beginner iOS developer working on a research project. We want to take several photos within a couple of seconds, and we also want full control over the number of photos taken per second.
What limits does the iPhone place on how many photos can be taken per second via the regular camera (not burst or time-lapse)? Is there a maximum value?
Alternatively, is there a way to control the capture frequency when recording video?

Related

How to detect scrolling speed of a video/How to detect differences in images

I have some screen-recording videos from which I want to extract information. My plan is to use cv2.VideoCapture() to grab screenshots and then run OCR on them. But there is a limit on how many times I can call the OCR service (a paid commercial service), so I want to use only the critical screenshots that don't overlap much in content. For example, I might get 300 screenshots from cv2 but already have all the information I need from 20 of them, since the scrolling is slow and most of the screenshots overlap.
A real example: I want to get all the app names from a screen recording of the App Store.
The question is:
How can I determine the scrolling speed of the video so that I can adjust how often to capture a screenshot? Or, to put it another way: how can I measure how much consecutive screenshots differ, which effectively tells me the scrolling speed?
You can use optical flow to detect scrolling. Since the motion is essentially one-dimensional (along Y), it is easy to get the average scrolling speed by averaging the magnitudes of the detected flow vectors.
You can find a Python example here that should be easy to adapt to your case:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.html

How to capture a photo automatically in iPhone and iPad

How to capture photo automatically in android phone? is about taking a picture automatically, without user interaction. This feature is needed in many applications. For example, when photographing a document, you expect the camera to capture it automatically once the full document (or its four corners) is inside the frame. So my question is: how can this be done on iPhone or iPad?
I am currently working with Cordova; does anyone know of existing plugins for this kind of camera operation? Thanks.
EDIT:
This will be done in an app that has full camera access; the task is how to develop such an app.
Instead of capturing photos, capture video frames. When a captured frame satisfies your requirements, stop capturing video and proceed.

iOS 7+: Is it possible to capture video from the front camera while showing another video on the screen?

I have a task: there is an iOS device, and an app I should create.
The app plays a video file (a local file on the device) while the front camera simultaneously captures the user's face.
I know FaceTime and Skype for iOS can do this. But the former is made by Apple (who can do whatever they like on their own devices), while the latter is owned by Microsoft (big companies with big money are sometimes allowed more than ordinary developers).
Moreover, I have doubts about whether video capture can coexist with video playback at the same time.
So I am not sure this task is fully implementable and publishable.
Is it possible on iOS 7+?
Is it allowed by Apple? (Many things are technically possible on iOS, but only some of them pass App Store review.)
Are there good technical references?
I believe so. A search on the App Store shows a number of video-conferencing apps:
Zoom cloud
Polycom
VidyoMobile
Fuze
Just search for "video conferencing".

iOS Video Creation Size

I am working on an app which is running some processing on the iOS AV Foundation video stream, and then generating a video using the processed output.
I'm noticing that if I make the video's output frames too large, rendering them takes too long and my app becomes choppy.
Does anyone have a good suggestion for a method I can use to determine at run-time what the largest video size I can create without affecting (drastically) the framerate of the video? This way, if the app is running on an iPhone 5, it should be able to create higher-resolution videos than if it's running on an iPhone 4.
One thought I had was that before recording starts, I could render a few frames at different resolutions behind the scenes, time how long each render takes, and use the largest resolution that takes less than X; but if there's a better way, I'd love to hear it.
Another option would just be to experiment off-line with what gives me good performance on different devices, and hard-code the video resolution per device type, but I'd rather avoid that.
Thanks in advance!
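For what it's worth, the trial-render idea in the question can be sketched generically. This is a hypothetical Python sketch (the `render_one_frame` callback stands in for whatever AV Foundation processing the app actually does), not an iOS implementation:

```python
import time

def pick_largest_resolution(candidates, render_one_frame,
                            budget_s=1 / 30, trials=5):
    """Try candidate output sizes from largest to smallest and return the
    first whose average per-frame render time fits the frame budget
    (1/30 s for 30 fps). `render_one_frame(w, h)` is the app's own
    processing pass, here just a callback."""
    for w, h in candidates:
        start = time.perf_counter()
        for _ in range(trials):
            render_one_frame(w, h)
        if (time.perf_counter() - start) / trials <= budget_s:
            return (w, h)
    return candidates[-1]  # nothing fit; fall back to the smallest
```

Averaging over a few trials smooths out one-off hiccups; you would still want to re-check occasionally at runtime, since thermal throttling can change the numbers mid-recording.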

iOS - how can I programmatically calculate the recording time limit for audio/video given a known file-size limit

I have tried to Google a lot, but it seems no one has done this before on iOS.
My issue: my server only allows the client to upload video/audio/image files up to a limited size (e.g. 30 MB for video, 1 MB for audio). Given that limit, I want to work out how much time users are allowed to record audio/video. The calculation must account for device differences; for example, the iPad 3 has a better camera than the iPad 2, so it allows less recording time for the same file size.
I am wondering if we can programmatically calculate the time limit based on the known file size.
Thanks,
Luan.
When working with large amounts of data such as video and audio, compression should play a part in your calculation.
Compression results can vary greatly depending on what you are recording and as a result it would be unrealistic to try to forecast a certain maximum duration.
I can think of two options:
Predetermine very restrictive recording times per device (I believe it is possible on iOS to tell an iPad 3 from an iPad 2)
Figure out a way to re-encode a smaller part of the video until it is within limits.
Best of luck!
Cantgetright has described perfectly why this is hard.
What you really care about are the camera's resolution, the worst-case storage size of one second of video, and how many free megabytes are left on the phone.
If you know most of these elements, time can be the constraint by which you determine the last one.
Always overestimate size so it is guaranteed to work no matter what. People don't know how big five seconds of video is on their iDevices anyway, so you can be stingy with the allotted time.
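The arithmetic both answers point at is simple once you assume an average encoder bitrate per device. The numbers below (10 Mbps video, 64 kbps audio, a 15% safety margin) are illustrative assumptions, not values from any Apple documentation:

```python
def max_recording_seconds(limit_mb, video_kbps, audio_kbps=64, safety=0.85):
    """Conservative recording-time cap for a file-size limit:
    seconds ~= limit_bits / (video + audio bitrate), scaled down by a
    safety factor for container overhead and bitrate spikes."""
    limit_bits = limit_mb * 1024 * 1024 * 8
    total_bps = (video_kbps + audio_kbps) * 1000
    return safety * limit_bits / total_bps
```

For a 30 MB limit and a device whose camera encodes at roughly 10 Mbps, this yields about 21 seconds; a device with a weaker camera (lower bitrate) gets proportionally more time, which matches the iPad 2 vs iPad 3 observation in the question.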
