The application I am currently working on has, as its main functionality, continuous scanning of QR/bar codes using the ZXing library (http://code.google.com/p/zxing/). For continuous frame capture I initialize the AVCaptureSession, AVCaptureVideoDataOutput and AVCaptureVideoPreviewLayer as described in the Apple Q&A http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html.
My problem is that when I run the camera preview, the image seen through the video device is much larger (about 1.5x) than the image seen through the iPhone's still camera. Our customer needs to hold the iPhone about 5 cm from the bar code when scanning, but at that distance the whole QR code isn't visible and decoding fails.
Why does the video camera on the iPhone 4 enlarge the image (as seen through the AVCaptureVideoPreviewLayer)?
This is a function of the AVCaptureSession video preset, accessible by using the .sessionPreset property. For example, after configuring your captureSession, but before starting it, you would add
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
See the documentation here:
iOS Reference Document
The default preset for video is 1280x720 (I think), which is a lower resolution than the maximum supported by the camera. By using the "Photo" preset, you're getting the raw camera data.
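For reference, here is a rough Swift equivalent of that configuration (the line above is Objective-C); the device lookup and error handling are simplified and the preview-layer setup is only sketched:

import AVFoundation

let session = AVCaptureSession()

// The "Photo" preset gives the full sensor field of view, matching what the
// still camera shows, instead of the narrower default video preset.
session.sessionPreset = .photo

if let camera = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)
}

let previewLayer = AVCaptureVideoPreviewLayer(session: session)
// Add previewLayer to your view's layer hierarchy, then:
session.startRunning()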
You see the same behaviour with the built-in iPhone Camera app. Switch between still and video capture modes and you'll notice that the default zoom level changes. You see a wider view in still mode, whereas video mode zooms in a bit.
My guess is that continuous video capture needs to use a smaller area of the camera sensor to work optimally. If it used the whole sensor perhaps the system couldn't sustain 30 fps. Using a smaller area of the sensor gives the effect of "zooming in" to the scene.
I am answering my own question again. This was not answered even in the Apple Developer forum, so I filed a technical support request directly with Apple. They replied that this is a known issue and will be fixed in a future release. So there is nothing we can do but wait and see.
The iPhone 7 Plus and 8 Plus (and X) have an effect in the native camera app called "Portrait mode", which simulates a bokeh-like effect by using depth data to blur the background.
I want to add the capability to take photos with this effect in my own app.
I can see that in iOS 11, depth data is available. But I have no idea how to use this to achieve the effect.
Am I missing something -- is it possible to turn on this effect somewhere and just get the image with it applied, rather than having to try and make this complicated algorithm myself?
cheers
Unfortunately, Portrait mode and Portrait Lighting aren't open to developers as of iOS 11, so you would have to implement a similar effect on your own. "Capturing Depth in iPhone Photography" and "Image Editing with Depth" from this year's WWDC go into detail on how to capture and edit images with depth data.
There are two sample projects on the developer site that show you how to capture and visualize depth data using a Metal shader, and how to detect faces using AVFoundation. You could definitely use these to get started! If you search for AVCam in the Guides and Sample Code, they should be the first two that come up (I would post the links but Stack Overflow is only letting me add two).
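As a rough, hedged sketch of just the capture side in Swift (not the portrait-style blur itself, which you would still have to build), this is roughly how depth data delivery is enabled on AVCapturePhotoOutput with a dual-camera device; the function name is illustrative, error handling is simplified, and the delegate call is only indicated in comments:

import AVFoundation

// Sketch: configure a session whose photo output can deliver depth data.
func makeDepthCaptureSession() -> (session: AVCaptureSession, photoOutput: AVCapturePhotoOutput)? {
    let session = AVCaptureSession()
    session.sessionPreset = .photo

    // Depth delivery needs a depth-producing device, e.g. the dual camera.
    guard let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    let photoOutput = AVCapturePhotoOutput()
    guard session.canAddOutput(photoOutput) else { return nil }
    session.addOutput(photoOutput)

    // Must be enabled on the output before requesting it per photo.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    return (session, photoOutput)
}

// When taking the picture:
// let settings = AVCapturePhotoSettings()
// settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
// photoOutput.capturePhoto(with: settings, delegate: self)
// In photoOutput(_:didFinishProcessingPhoto:error:), photo.depthData is an AVDepthData
// that you can convert to a disparity map and use for your own background blur.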
I'm working on an app that records and saves video to the user's camera roll. My setup: I'm using an AVCaptureSession with an AVCaptureVideoPreviewLayer for the preview and AVCaptureMovieFileOutput for the saved file.
Under most circumstances, what is seen in the preview layer matches up with the saved video asset, but if I turn on stabilization (either AVCaptureVideoStabilizationMode.standard or AVCaptureVideoStabilizationMode.cinematic), and set the zoom factor very high (around 130), then the preview and the output become noticeably offset from each other.
The output is consistently above and slightly to the right of what's shown in the preview. I suspect this happens at smaller zoom factors as well, but the effect is minimal enough to only be noticeable at higher zoom factors.
Part of the reason for turning stabilization on in the first place is to more easily line objects up when zoomed in, so merely limiting zoom or turning off stabilization isn't really an option.
I'm curious to know why the preview and output aren't in sync, but ultimately I'm looking for a possible solution that lets me keep (1) zoom, (2) stabilization, and (3) an accurate preview.
UPDATE: Dec. 12
After messing around with this some more, it seems that sometimes the issue doesn't happen using cinematic stabilization, and my new theory is that it might have to do with specific combinations of AVCaptureDeviceFormats and stabilization settings.
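For context, the configuration being described looks roughly like the following Swift sketch; the zoom factor of 130 and the stabilization modes are the values mentioned above, the function name is illustrative, and the device lookup and error handling are simplified:

import AVFoundation

// Sketch: movie recording with stabilization and a high zoom factor.
func configureZoomedStabilizedCapture() -> (session: AVCaptureSession, output: AVCaptureMovieFileOutput)? {
    let session = AVCaptureSession()

    guard let camera = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    let movieOutput = AVCaptureMovieFileOutput()
    guard session.canAddOutput(movieOutput) else { return nil }
    session.addOutput(movieOutput)

    // Stabilization is set on the output's video connection.
    if let connection = movieOutput.connection(with: .video),
       connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .cinematic   // or .standard
    }

    // The high zoom factor is applied on the device, clamped to the format's maximum.
    do {
        try camera.lockForConfiguration()
        camera.videoZoomFactor = min(130, camera.activeFormat.videoMaxZoomFactor)
        camera.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }

    return (session, movieOutput)
}

// The preview comes from an AVCaptureVideoPreviewLayer(session:) added to the view,
// and recording starts with movieOutput.startRecording(to:recordingDelegate:).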
"How to capture photo automatically in android phone?" is about taking a picture automatically, without user interaction. This feature is needed in many applications. For example, when you take a picture of a document, you expect the camera to capture it automatically once the full document (or its four corners) is inside the frame. So my question is: how can the same be done on iPhone or iPad?
Recently I have been working with Cordova. Does anyone know of existing plugins for this kind of camera operation? Thanks.
EDIT:
This operation will be done in an app that is given full access to the camera, and the task is how to develop such an app.
Instead of capturing a photo, capture video frames. When a captured frame satisfies your requirements, stop capturing video and proceed.
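A minimal Swift sketch of that idea, using AVCaptureVideoDataOutput; frameSatisfiesRequirements is a hypothetical placeholder for whatever detection you need (e.g. all four document corners visible):

import AVFoundation

final class FrameScanner: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "frame.scanner.queue")

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Placeholder: your own detection logic goes here.
        if frameSatisfiesRequirements(sampleBuffer) {
            session.stopRunning()
            // Proceed with the captured frame (hand it to the rest of the app).
        }
    }

    private func frameSatisfiesRequirements(_ buffer: CMSampleBuffer) -> Bool {
        // Hypothetical check, e.g. "the full document is inside the frame".
        return false
    }
}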
This is kind of what a barcode scanner does, except I do not wish to detect a barcode (I will write the code for what I want to detect). How do I even set up the camera so it acts as a continuous scanner? Like the user just presses a play button and the camera automatically scans for stuff? Just as an example, say I wish to run the scanner until the camera sees that the whole screen is pure black, at which point it will display the message "detected all black".
There is an older Apple Technical Q&A that details how to use AVFoundation to continuously generate low resolution UIImages from a video capture session that you could then sample and use for your detection:
https://developer.apple.com/library/ios/qa/qa1702/_index.html
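As a rough sketch of the detection side, here is one hedged way to implement the "all black" example from the question by averaging each frame with Core Image; the function name and the 0.05 threshold are illustrative, and this would be called from the video data output's delegate callback:

import AVFoundation
import CoreImage

private let ciContext = CIContext()

// Returns true if the frame is (nearly) all black.
// Intended to be called from captureOutput(_:didOutput:from:).
func frameIsAllBlack(_ sampleBuffer: CMSampleBuffer) -> Bool {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return false }
    let image = CIImage(cvPixelBuffer: pixelBuffer)

    // Average the whole frame down to a single pixel.
    guard let filter = CIFilter(name: "CIAreaAverage",
                                parameters: [kCIInputImageKey: image,
                                             kCIInputExtentKey: CIVector(cgRect: image.extent)]),
          let averaged = filter.outputImage else { return false }

    var pixel = [UInt8](repeating: 0, count: 4)
    ciContext.render(averaged,
                     toBitmap: &pixel,
                     rowBytes: 4,
                     bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                     format: .RGBA8,
                     colorSpace: nil)

    let brightness = (Double(pixel[0]) + Double(pixel[1]) + Double(pixel[2])) / (3.0 * 255.0)
    return brightness < 0.05   // arbitrary "pure black" threshold
}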
I found some answers regarding using both the front and back cameras at the same time for AUDIO/VIDEO recording, which is impossible.
In detail here:
Can the iPhone4 record from both front and rear-facing camera at the same time?
However, is it possible to use both cameras at the same time to take pictures on iOS?
No, this is definitely not possible, I'm afraid.
Only one camera session can be used at a time with AVCaptureSession (the lower-level API for camera interaction on iOS).
If you try to invoke multiple sessions (from each camera) as soon as one session begins, the other will stop.
You could quickly alternate between sessions, but the images will not be taken in synchronicity.
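If you do try the alternating approach, a hedged Swift sketch of the idea might look like this; the still-capture calls are only indicated in comments, and the two images are necessarily taken a moment apart rather than simultaneously:

import AVFoundation

// One session per camera; only one may run at a time.
func makeSession(position: AVCaptureDevice.Position) -> AVCaptureSession? {
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: position),
          let input = try? AVCaptureDeviceInput(device: camera) else { return nil }
    let session = AVCaptureSession()
    guard session.canAddInput(input) else { return nil }
    session.addInput(input)
    return session
}

let backSession = makeSession(position: .back)
let frontSession = makeSession(position: .front)

// Alternate: stop one session before starting the other.
backSession?.startRunning()
// ... capture a still from the back camera's photo output ...
backSession?.stopRunning()
frontSession?.startRunning()
// ... capture a still from the front camera (taken noticeably later) ...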