How can I leverage the camera to detect certain occurrences? - ios

This is kind of what a barcode scanner does, except I do not wish to detect a barcode (I will write the code for whatever I want to detect). How do I even set up the camera so it acts as a continuous scanner? That is, the user just presses a play button and the camera automatically scans for stuff. Just as an example, say I wish to run the scanner until the camera hits the event that the whole screen is pure black, at which point it will display the message "detected all black".

There is an older Apple Technical Q&A that details how to use AVFoundation to continuously generate low-resolution UIImages from a video capture session, which you could then sample and use for your detection:
https://developer.apple.com/library/ios/qa/qa1702/_index.html
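As a minimal sketch of that approach, applied to the all-black example above: configure an AVCaptureSession with an AVCaptureVideoDataOutput and test each frame in the delegate callback. The FrameScanner class, queue name, and brightness threshold of 16 are illustrative choices of mine, not from the Q&A, and error handling is omitted.

#import <AVFoundation/AVFoundation.h>

@interface FrameScanner : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation FrameScanner

// Starts the "continuous scanner": frames keep arriving until the session stops.
- (void)startScanning {
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetLow; // low resolution is enough for sampling

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [self.session addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("scanner.frames", DISPATCH_QUEUE_SERIAL)];
    [self.session addOutput:output];

    [self.session startRunning];
}

// Called once per captured frame; this is where your detection code goes.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixels = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixels, kCVPixelBufferLock_ReadOnly);

    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixels);
    size_t width = CVPixelBufferGetWidth(pixels);
    size_t height = CVPixelBufferGetHeight(pixels);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixels);

    // Naive all-black test: every pixel's B, G and R bytes under a small threshold.
    BOOL allBlack = YES;
    for (size_t y = 0; y < height && allBlack; y++) {
        uint8_t *row = base + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            if (row[x * 4] > 16 || row[x * 4 + 1] > 16 || row[x * 4 + 2] > 16) {
                allBlack = NO;
                break;
            }
        }
    }
    CVPixelBufferUnlockBaseAddress(pixels, kCVPixelBufferLock_ReadOnly);

    if (allBlack) {
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"detected all black"); // replace with your UI message
        });
    }
}
@end

QA1702 itself converts each frame to a UIImage first; sampling the pixel buffer directly, as above, skips that conversion when you only need raw pixel values.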

Related

Can ARKit (or ARSession) keep running in the background?

My question is - can an ARSession run in the background? Basically I need ARKit running, with all the per-frame information and camera intrinsics, but I DON'T want to render the camera feed on the screen, as happens with ARSCNView.
Not necessarily in a background thread or process.
Basically I just want to use the tracking information (images + camera position + camera Euler angles etc.) from ARKit and don't want to render anything in AR per se, or the camera feed.
Before everyone jumps on me - I know that Apple restricts GPU work in the background - case in point:
Execution of the command buffer was aborted due to an error during
execution. Insufficient Permission (to submit GPU work from
background) (IOAF code 6)
But there should be a way to use ARKit or an ARSession without the camera feed, with only the tracking information, right?
There is no way to do what you want: as soon as the ARSCNView or ARSKView is fully covered, AR tracking stops. There are a couple of reasons for this.
First, ARKit is for augmented reality, not tracking operations. If you aren't displaying reality (the camera feed) on the screen, then you cannot augment reality.
Second, and most important, Apple cares about user privacy, and if you could access the camera without showing the feed onscreen, the user would have no way of knowing when the camera was in use. This would leave open the possibility of apps spying on users without their knowledge. Apple will never allow this. Ever.
Also, AR hits the battery hard, so it only runs while it's actually being used.
Based on your comments it seems as though you don't actually want to create an AR experience anyway, so there may be a better way to get what you are after through heading information and the accelerometer, as sketched below.
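If plain device motion is enough, a rough Core Motion sketch (no camera and no ARKit involved; this example is mine, not from the answer) could look like this:

#import <CoreMotion/CoreMotion.h>

// E.g. in viewDidLoad; keep a strong reference to the manager (a property)
// for as long as you need updates.
CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;

// The magnetic-north reference frame gives a heading-referenced yaw.
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXMagneticNorthZVertical
                                                   toQueue:[NSOperationQueue mainQueue]
                                               withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (!motion) { return; }
    double roll  = motion.attitude.roll;   // Euler angles, loosely analogous
    double pitch = motion.attitude.pitch;  // to ARKit's camera orientation
    double yaw   = motion.attitude.yaw;
    CMAcceleration accel = motion.userAcceleration; // gravity already removed
    NSLog(@"rpy = %.2f %.2f %.2f  accel = %.2f %.2f %.2f",
          roll, pitch, yaw, accel.x, accel.y, accel.z);
}];

Note that unlike ARKit this gives no position or translation, only orientation and acceleration, so whether it fits depends on what you need the tracking data for.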

How to capture a photo automatically on iPhone and iPad

How to capture photo automatically in android phone? is about taking a picture automatically, without user interaction. This feature is needed in many applications. For example, when you are going to take a picture of a document, you expect the camera to capture it automatically once the full document (or its four corners) is inside the frame. So my question is: how do you do this on iPhone or iPad?
Recently I have been working with Cordova - does anyone know of existing plugins for this kind of camera operation? Thanks
EDIT:
This will be done in an app that has been given full access to the camera; the task is how to develop such an app.
Instead of capturing a photo, you should capture video frames. When a captured frame satisfies your requirements, stop capturing the video and proceed.
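In the AVFoundation terms used in the first question above, that pattern is roughly the following, inside the AVCaptureVideoDataOutput delegate; frameMeetsRequirements: and processCapturedFrame: are placeholders for your own test (e.g. document-corner detection) and follow-up:

// Called for each video frame while the session runs.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (![self frameMeetsRequirements:sampleBuffer]) {
        return; // keep scanning
    }
    [self.session stopRunning];               // freeze on the qualifying frame
    [self processCapturedFrame:sampleBuffer]; // treat this frame as the "photo"
}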

Disable camera shaking in ios

I am creating a simple camera app and I want to add 'image stabilization' so that when hands are shaking the camera does not twitch. Is it possible to do this in iOS?
You can do this by getting the raw image from the camera and only using a subset of the raw image frame, then programmatically picking a new subset from each raw image to use for the next frame. Needless to say, this is a large amount of work and should only be undertaken if you know what you are doing or want to have the most impressive video/picture-taking app.
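The core of that idea, as a sketch: shift a fixed-size crop window opposite the measured shake so the visible content stays put while the full frame moves underneath it. The shake offset is assumed to come from your own frame-to-frame motion estimate (e.g. feature tracking), and the margin value is arbitrary; none of this is a built-in API.

#import <CoreImage/CoreImage.h>

static CIImage *StabilizedFrame(CIImage *frame, CGVector shakeOffset) {
    CGRect full = frame.extent;
    CGFloat margin = 40.0; // border that gives the crop window room to move
    CGRect crop = CGRectInset(full, margin, margin);
    // Move the crop opposite the shake, clamped so it stays inside the frame.
    crop.origin.x = MIN(MAX(crop.origin.x - shakeOffset.dx, CGRectGetMinX(full)),
                        CGRectGetMaxX(full) - crop.size.width);
    crop.origin.y = MIN(MAX(crop.origin.y - shakeOffset.dy, CGRectGetMinY(full)),
                        CGRectGetMaxY(full) - crop.size.height);
    return [frame imageByCroppingToRect:crop];
}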
The iPhone 6+ has this built into the hardware, and I believe that is what the AVFoundation link in the previous comment is talking about.

How do I fire a camera connected on USB programmatically?

I want to make something like they have at US DMVs, where you sit down and it takes your picture, maybe like Photo Booth.
I want to connect a high-end camera via USB, fire the camera, and get the picture.
There's the Picture Transfer Protocol (http://en.wikipedia.org/wiki/Picture_Transfer_Protocol), a nasty little thing. Every camera I have held in my hands so far that claimed proper PTP support failed it somewhere. But in theory one can use PTP to remote control a camera, i.e. trigger the shutter, retrieve the picture and so on.
Rather than reimplementing the whole thing, I recommend you get a readily usable PTP library. There are some open source ones listed at http://ptp.sourceforge.net
The easiest method is probably to use OpenCV: http://opencv.willowgarage.com/wiki/
If you need a high-end camera: most digital SLRs have a tethered mode in which you can control the camera, fire the shutter and retrieve the image data. Each camera maker has a proprietary (but normally free) SDK.
For a webcam-type camera: these normally run in video mode, so you simply grab an image out of the video stream - as PaulR says, use OpenCV.
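For the webcam route, a minimal frame grab with OpenCV's old C API (callable directly from Objective-C since it is plain C; this assumes the OpenCV 2.x-era headers and omits error handling) might look like:

#include <opencv/highgui.h>

// Grab one frame from the first camera the OS exposes and save it to disk.
CvCapture *capture = cvCaptureFromCAM(0);
IplImage *frame = cvQueryFrame(capture); // owned by the capture, do not free
if (frame) {
    cvSaveImage("shot.png", frame, 0);
}
cvReleaseCapture(&capture);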

AV Foundation camera preview layer gets zoomed in, how to zoom out?

The application I am currently working on has, as its main functionality, continuous QR/barcode scanning using the ZXing library (http://code.google.com/p/zxing/). For continuous frame capture I initialize the AVCaptureSession, AVCaptureVideoDataOutput and AVCaptureVideoPreviewLayer as described in the Apple Q&A http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html.
My problem is that when I run the camera preview, the image I see through the video device is much larger (about 1.5x) than the image seen through the iPhone's still camera. Our customer needs to hold the iPhone around 5 cm from the barcode when scanning, but at that distance the whole QR code is not visible and decoding fails.
Why does the video camera on the iPhone 4 enlarge the image (as seen through the AVCaptureVideoPreviewLayer)?
This is a function of the AVCaptureSession video preset, accessible via the sessionPreset property. For example, after configuring your capture session, but before starting it, you would add:
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
See the documentation here:
iOS Reference Document
The default preset for video is 1280x720 (I think), which is a lower resolution than the maximum supported by the camera. By using the "Photo" preset, you're getting the raw camera data.
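Since not every device supports every preset, a slightly safer version of the snippet above guards the assignment:

// Fall back to the session's current preset if Photo is unavailable.
if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
}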
You see the same behaviour with the built-in iPhone Camera app. Switch between still and video capture modes and you'll notice that the default zoom level changes. You see a wider view in still mode, whereas video mode zooms in a bit.
My guess is that continuous video capture needs to use a smaller area of the camera sensor to work optimally. If it used the whole sensor, perhaps the system couldn't sustain 30 fps. Using a smaller area of the sensor gives the effect of "zooming in" to the scene.
I am answering my own question again. This was not answered even in the Apple Developer forum, so I filed a technical support request directly with Apple, and they replied that this is a known issue that will be fixed in a future release. So there is nothing we can do but wait and see.
