Detect motion with iPad camera while doing other things

I need to play a video in a loop until I detect some motion (activity) with the front camera of the iPad.
The video does not need to be recorded or played back later, and I do not need to show the live camera feed on the iPad.
Currently users have to tap the screen to stop the video, but the customer wants a 'cool' motion-detection feature. I am not interested in face detection, just motion.
Are there any examples of this?
Thanks.
EDIT
Well, currently the only workaround I've found is detecting luminance... Just grab an image of every frame (or every nth frame) and measure its luminosity, do the same for a later frame, and compare; if the difference is large enough, something has changed :-)
Just find a good threshold for the difference and you're ready to go...
Of course I would prefer a more robust approach...
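For reference, a rough sketch of that luminance-difference idea, operating directly on the camera's sample buffers rather than on UIImages, assuming an AVCaptureVideoDataOutput configured for the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange pixel format. kLumaDeltaThreshold, previousLuma, and motionDetected are hypothetical names to tune and wire up yourself:

// An arbitrary per-frame brightness change that counts as "motion"; tune it.
static const double kLumaDeltaThreshold = 10.0;

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // Average the Y (luma) plane to get one brightness value for the frame.
    uint8_t *yPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t width  = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
    size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
    size_t stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

    uint64_t sum = 0;
    for (size_t row = 0; row < height; row++) {
        uint8_t *line = yPlane + row * stride;
        for (size_t col = 0; col < width; col++) {
            sum += line[col];
        }
    }
    double averageLuma = (double)sum / (double)(width * height);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // self.previousLuma is a hypothetical property holding the last frame's value.
    if (self.previousLuma > 0 &&
        fabs(averageLuma - self.previousLuma) > kLumaDeltaThreshold) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [self motionDetected]; // hypothetical handler: stop the looping video here
        });
    }
    self.previousLuma = averageLuma;
}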

Related

Objective-C iPhone take photos from both cameras simultaneously

I need to take one picture from the front camera and another from the back camera. I read that it wouldn't be possible to do it at the same time, but do you know if it is possible to switch between the cameras in the minimum time and try to take one front picture and one back picture?
EDIT:
As I said before, I want to capture from both cameras at the same time. I know that it is not possible on iPhone devices, but I tried switching cameras very quickly. The iPhone wastes a lot of time switching between cameras. The ideal would be to show the back camera in the preview and record frames from it, and at the same time record frames from the front camera without previewing it, without losing the preview.
Thanks in advance.
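For what it's worth, switching inputs is normally done by reconfiguring the session inside a beginConfiguration / commitConfiguration pair; a rough sketch, assuming self.captureSession is already running (this only shows the mechanism, it does not remove the switching latency you are seeing):

- (void)switchToCameraWithPosition:(AVCaptureDevicePosition)position
{
    // Find the camera at the requested position (front or back).
    AVCaptureDevice *newCamera = nil;
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == position) {
            newCamera = device;
            break;
        }
    }
    if (!newCamera) {
        return;
    }

    NSError *error = nil;
    AVCaptureDeviceInput *newInput =
        [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:&error];
    if (!newInput) {
        return;
    }

    [self.captureSession beginConfiguration];
    // Remove the existing video input(s) before adding the new one.
    for (AVCaptureDeviceInput *input in self.captureSession.inputs) {
        [self.captureSession removeInput:input];
    }
    if ([self.captureSession canAddInput:newInput]) {
        [self.captureSession addInput:newInput];
    }
    [self.captureSession commitConfiguration];
}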

AVCaptureVideoPreviewLayer issues with Video Gravity and Face Detection Accuracy

I want to use AVFoundation to set up my own camera feed and process the live feed to detect smiles.
A lot of what I need has been done here: https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
This code was written quite a while back, so I needed to make some modifications to use it the way I want in terms of appearance.
The changes I made are as follows:
I enabled Auto Layout and size classes because I wanted to support different screen sizes. I also changed the dimensions of the preview layer to be the full screen.
The session preset is set to AVCaptureSessionPresetPhoto for both iPhone and iPad.
Finally, I set the video gravity to AVLayerVideoGravityResizeAspectFill (this seems to be the key point).
Now when I run the application, the faces get detected, but there seems to be an error in the coordinates where the rectangles are drawn.
When I change the video gravity to AVLayerVideoGravityResizeAspect, everything seems to work fine again.
The only problem then is that the camera preview is not the desired size, which is the full screen.
So now I am wondering why this happens. I noticed a function in SquareCam, videoPreviewBoxForGravity, which processes the gravity type and seems to make adjustments:
- (CGRect)videoPreviewBoxForGravity:(NSString *)gravity frameSize:(CGSize)frameSize apertureSize:(CGSize)apertureSize
One thing I noticed here: the frame size stays the same regardless of gravity type.
Finally, I read somewhere else that when setting the gravity to AspectFill, some part of the feed gets cropped, which is understandable, similar to a UIImageView's scaleToFill.
My question is, how can I make the right adjustments so that this works for any video gravity type and any size of preview layer?
I have had a look at some related questions; for example, CIDetector give wrong position on facial features seems to have a similar issue, but it does not help.
Thanks in advance.
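One approach that sidesteps the manual gravity math entirely (offered as a sketch, not the SquareCam code path): use an AVCaptureMetadataOutput for face detection and let the preview layer convert the coordinates itself via transformedMetadataObjectForMetadataObject:, which takes the current videoGravity into account. self.session, self.previewLayer, and drawFaceBox: are assumed or hypothetical names here:

- (void)configureFaceMetadataOutput
{
    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([self.session canAddOutput:metadataOutput]) {
        [self.session addOutput:metadataOutput];
        [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
        // Face type only becomes available after the output has been added to the session.
        metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace];
    }
}

// AVCaptureMetadataOutputObjectsDelegate callback.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *object in metadataObjects) {
        if ([object.type isEqualToString:AVMetadataObjectTypeFace]) {
            // The preview layer maps the face bounds into its own coordinate space,
            // taking the current videoGravity (Aspect or AspectFill) into account.
            AVMetadataObject *face =
                [self.previewLayer transformedMetadataObjectForMetadataObject:object];
            [self drawFaceBox:face.bounds]; // hypothetical drawing helper
        }
    }
}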

How can I leverage the camera to detect certain occurrences?

This is kind of what a barcode scanner does, except I do not wish to detect a barcode (I will write the code for what I want to detect). How do I even set up the camera so it works as a continuous scanner? As in, the user just presses a play button and the camera automatically scans for things. Just as an example, say I wish to run the scanner until the camera runs into the event that the whole screen is pure black, at which point it will display the message "detected all black".
There is an older Apple Technical Q&A that details how to use AVFoundation to continuously generate low-resolution UIImages from a video capture session, which you could then sample and use for your detection:
https://developer.apple.com/library/ios/qa/qa1702/_index.html
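Along those lines, a rough sketch of a continuous "scanner": an AVCaptureSession feeding an AVCaptureVideoDataOutput, with a delegate inspecting every frame. startScanning and the queue name are hypothetical:

- (void)startScanning
{
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetMedium;

    // Default back camera; swap in the front camera if needed.
    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input && [self.session canAddInput:input]) {
        [self.session addInput:input];
    }

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // Ask for the bi-planar YUV format so the Y (luma) plane can be read directly.
    output.videoSettings = @{
        (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey :
            @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
    };
    output.alwaysDiscardsLateVideoFrames = YES;
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("scanner.frames", DISPATCH_QUEUE_SERIAL)];
    if ([self.session canAddOutput:output]) {
        [self.session addOutput:output];
    }

    [self.session startRunning];
}

The didOutputSampleBuffer: delegate can then average each frame exactly as in the motion-detection sketch earlier in this page and report "detected all black" when the average luma stays near zero.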

How to capture video in a specific part rather than full screen in iOS

I am capturing video in my iOS app using AVFoundation. I am able to record the video and play it back.
But my problem is that I am showing the capture preview in a view that is around 200 points high, so I expected the video to be recorded with the same dimensions. But when I play the video back, it shows that the whole camera frame has been recorded.
So I want to know whether there is any way to record only the part of the camera preview that was visible to the user. Any help would be appreciated.
You cannot think of video resolution in terms of UIView dimensions (or even screen size, for that matter). The camera is going to record at a certain resolution depending on how you set up the AVCaptureSession. For instance, you can change the video quality by setting the session preset via:
[self.captureSession setSessionPreset:AVCaptureSessionPreset640x480]
(It is already set by default to the highest setting.)
Now, when you play the video back, you do have a bit of control over how it is presented. For instance, if you want to show it in a smaller view (whose layer is an AVPlayerLayer for playback, or an AVCaptureVideoPreviewLayer for the live preview), you can set the video gravity via:
AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer*)self.previewView.layer;
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
And depending on what you pass for the gravity parameter, you will get different results.
Hopefully this helps. Your question is a little unclear, as it seems like you want the camera to only record a certain amount of its input, but you'd have to put your hand over part of the lens to accomplish that ;)
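To illustrate the playback side, a small sketch of an AVPlayerLayer in a smaller view, with videoGravity controlling how the recorded video fills it; fileURL, self.smallView, and self.player are placeholders for your own objects:

- (void)playRecordingInSmallView:(NSURL *)fileURL
{
    // fileURL points at the recorded movie; self.smallView is the ~200-point-high view.
    AVPlayer *player = [AVPlayer playerWithURL:fileURL];
    AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
    playerLayer.frame = self.smallView.bounds;
    // Aspect letterboxes, AspectFill crops to fill the view, Resize stretches.
    playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.smallView.layer addSublayer:playerLayer];
    self.player = player; // hypothetical property to keep the player alive
    [player play];
}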

Still pin image capture response time

I have a problem with the response time when capturing an image from the still pin.
My system runs a video stream at 320x480 from the capture pin, and I push a button to snap a 720p image directly from the still pin at the same time. But the response time is too long, around 6 seconds, which is not acceptable.
I have tried other video capture software that supports snapping a picture while streaming video, and the response time is similar.
I am wondering whether this is a hardware problem or a software problem, and how still pin capture actually works. Does the image come from interpolation, or does the hardware change resolution?
For example, the camera starts at one resolution setting, keeps sensing, and pushes the data to the buffer over USB. Is it possible for it to immediately change to another resolution setting and snap an image? Is this why the system takes the picture so slowly?
Or is there a way to keep the video streaming at a high frame rate and snap a high-resolution image immediately, with no interpolation?
I am working on a project that snaps an image from a video stream, also using DirectShow, and the response time is not as long as yours. In my experience, the response time has nothing to do with the streaming frame rate.
Usually a camera has its own default resolution. It cannot immediately change to another resolution setting and snap an image, so that is not the reason.
Could you please show me some code, and your camera's model?
