I am developing a PhoneGap iOS application that uses the ZXing barcode scanner library, but I have a problem:
How do I implement camera auto focus?
Thank you.
My code:
-(NSString*)setUpCaptureSession {
    NSError* error = nil;
    AVCaptureSession* captureSession = [[[AVCaptureSession alloc] init] autorelease];
    self.captureSession = captureSession;

    AVCaptureDevice* device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (!device) return @"unable to obtain video capture device";

    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) return @"unable to obtain video capture device input";

    AVCaptureVideoDataOutput* output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    if (!output) return @"unable to obtain video capture output";

    NSDictionary* videoOutputSettings = [NSDictionary
        dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                      forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    output.alwaysDiscardsLateVideoFrames = YES;
    output.videoSettings = videoOutputSettings;
    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    if (![captureSession canSetSessionPreset:AVCaptureSessionPresetMedium]) {
        return @"unable to preset medium quality video capture";
    }
    captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    if ([captureSession canAddInput:input]) {
        [captureSession addInput:input];
    }
    else {
        return @"unable to add video capture device input to session";
    }

    if ([captureSession canAddOutput:output]) {
        [captureSession addOutput:output];
    }
    else {
        return @"unable to add video capture output to session";
    }

    // setup capture preview layer
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];

    // run on next event loop pass [captureSession startRunning]
    [captureSession performSelector:@selector(startRunning) withObject:nil afterDelay:0];

    return nil;
}
Unfortunately it appears that the plugin you are using doesn't expose the capture device directly. It does, however, expose the AVCaptureSession via the captureSession property. From this property you should be able to work backwards to get the AVCaptureDeviceInput:
AVCaptureSession *session = [zxing captureSession]; // Assuming zxing is the variable holding a reference to your zxing instance
NSArray *inputs = [session inputs];
AVCaptureDeviceInput *input = (AVCaptureDeviceInput *)inputs[0]; // Obtain the first input device
AVCaptureDevice *device = input.device;

NSError *error;
if ([device lockForConfiguration:&error])
{
    device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    [device unlockForConfiguration];
}
else
{
    // TODO Handle the device lock error
}
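As a defensive variation on the same lock/configure/unlock pattern (a sketch of my own, not part of the plugin's code), you can check isFocusModeSupported: before assigning the mode and optionally point the focus at the center of the frame, where a barcode usually sits:

NSError *error = nil;
if ([device lockForConfiguration:&error])
{
    // Prefer continuous autofocus; fall back to single-shot autofocus if needed.
    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    } else if ([device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
        device.focusMode = AVCaptureFocusModeAutoFocus;
    }
    // Optionally aim focus at the center of the frame.
    if (device.focusPointOfInterestSupported) {
        device.focusPointOfInterest = CGPointMake(0.5, 0.5);
    }
    [device unlockForConfiguration];
}
else
{
    NSLog(@"Could not lock capture device for configuration: %@", error);
}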
On iOS, is it possible to get the camera orientation (in degrees) like on Android?
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info);
int degrees = info.orientation;
I tried:
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *camera in devices) {
    AVCaptureDeviceInput *deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:nil];
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [session addInput:deviceInput];
    AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
    [session addOutput:output];
    AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeMetadata];
    NSLog(@"%d", (int)connection.videoOrientation);
}
but it returns "0" for all devices.
Thanks
AVCaptureConnection has the videoOrientation property.
You can also try using the device orientation if you need to transform the photo output somehow. Here you can find some instructions on how to work with device orientation, should you need them.
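If what you actually need is a degree value comparable to Android's Camera.CameraInfo.orientation, one option is to map AVCaptureVideoOrientation to degrees yourself. The helper below is a sketch, and the exact mapping is an assumption you may need to adjust to your own convention:

// Hypothetical helper: translate AVCaptureVideoOrientation into degrees.
// Note that 0 is not a valid AVCaptureVideoOrientation value, so logging a raw 0
// usually means the connection was nil or not fully set up yet.
static NSInteger VADegreesForVideoOrientation(AVCaptureVideoOrientation orientation)
{
    switch (orientation) {
        case AVCaptureVideoOrientationPortrait:           return 0;
        case AVCaptureVideoOrientationLandscapeRight:     return 90;
        case AVCaptureVideoOrientationPortraitUpsideDown: return 180;
        case AVCaptureVideoOrientationLandscapeLeft:      return 270;
    }
    return 0;
}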
I am trying to do real-time image processing on an iPhone 6 at 240 fps. The problem is that when I capture video at that speed, I can't process the image fast enough, since I need to sample each pixel to get an average. Reducing the image resolution would easily solve this problem, but I'm not able to figure out how to do it. The available AVCaptureDeviceFormats include options at 192x144 px, but only at 30 fps; all 240 fps options have larger dimensions. Here is how I am sampling the data:
- (void)startDetection
{
    const int FRAMES_PER_SECOND = 240;
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetLow;

    // Retrieve the back camera
    NSArray *devices = [AVCaptureDevice devices];
    AVCaptureDevice *captureDevice;
    for (AVCaptureDevice *device in devices)
    {
        if ([device hasMediaType:AVMediaTypeVideo])
        {
            if (device.position == AVCaptureDevicePositionBack)
            {
                captureDevice = device;
                break;
            }
        }
    }

    NSError *error;
    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
    [self.session addInput:input];
    if (error)
    {
        NSLog(@"%@", error);
    }

    // Find the max frame rate we can get from the given device
    AVCaptureDeviceFormat *currentFormat;
    for (AVCaptureDeviceFormat *format in captureDevice.formats)
    {
        NSArray *ranges = format.videoSupportedFrameRateRanges;
        AVFrameRateRange *frameRates = ranges[0];

        // Find the lowest resolution format at the frame rate we want.
        if (frameRates.maxFrameRate == FRAMES_PER_SECOND &&
            (!currentFormat ||
             (CMVideoFormatDescriptionGetDimensions(format.formatDescription).width < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width &&
              CMVideoFormatDescriptionGetDimensions(format.formatDescription).height < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height)))
        {
            currentFormat = format;
        }
    }

    // Tell the device to use the max frame rate.
    [captureDevice lockForConfiguration:nil];
    captureDevice.torchMode = AVCaptureTorchModeOn;
    captureDevice.activeFormat = currentFormat;
    captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
    captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
    [captureDevice setVideoZoomFactor:4];
    [captureDevice unlockForConfiguration];

    // Set the output
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

    // create a queue to run the capture on
    dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

    // setup our delegate
    [videoOutput setSampleBufferDelegate:self queue:captureQueue];

    // configure the pixel format
    videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                                 nil];
    videoOutput.alwaysDiscardsLateVideoFrames = NO;
    [self.session addOutput:videoOutput];

    // Start the video session
    [self.session startRunning];
}
Try the GPUImage library. Each filter has the method forceProcessingAtSize:. After forcing the resize on the GPU, you can retrieve the data with GPUImageRawDataOutput.
I got 60 fps while processing the image on the CPU with this method.
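A rough sketch of how those GPUImage pieces fit together (import path, filter choice, and block details depend on the GPUImage version you use, so treat this as an outline rather than drop-in code):

#import "GPUImage.h" // or the framework/pod header for your setup

// Capture with GPUImageVideoCamera, force the filter to render at a small size
// on the GPU, then read the shrunken frame back with GPUImageRawDataOutput.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                        cameraPosition:AVCaptureDevicePositionBack];

GPUImageGammaFilter *filter = [[GPUImageGammaFilter alloc] init]; // any lightweight filter will do
[filter forceProcessingAtSize:CGSizeMake(192.0, 144.0)];          // GPU-side downscale

GPUImageRawDataOutput *rawOutput =
    [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(192.0, 144.0)
                                 resultsInBGRAFormat:YES];
__weak GPUImageRawDataOutput *weakOutput = rawOutput;
[rawOutput setNewFrameAvailableBlock:^{
    GLubyte *bytes = [weakOutput rawBytesForImage];            // 192 x 144 BGRA pixels
    NSUInteger bytesPerRow = [weakOutput bytesPerRowInOutput];
    // ... average the pixels of the small frame here ...
}];

[videoCamera addTarget:filter];
[filter addTarget:rawOutput];
[videoCamera startCameraCapture];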
Right now I'm trying to allow users to take pictures in my app without using UIImagePickerController. I'm using AVCaptureSession and all the related classes to load a camera feed as a sublayer of a full-screen view on one of my view controllers. The code works, but unfortunately the camera is very slow to load; it usually takes 2-3 seconds. Here is my code:
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;

if ([session canSetSessionPreset:AVCaptureSessionPresetHigh])
    //Check size based configs are supported before setting them
    [session setSessionPreset:AVCaptureSessionPresetHigh];
[session setSessionPreset:AVCaptureSessionPreset1280x720];

CALayer *viewLayer = self.liveCameraFeed.layer;
//NSLog(@"viewLayer = %@", viewLayer);

AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = viewLayer.bounds;
[viewLayer addSublayer:captureVideoPreviewLayer];

AVCaptureDevice *device;
if (isFront)
{
    device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
else
{
    device = [self frontCamera];
}

AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];

NSError *error = nil;
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
    //NSLog(@"ERROR: trying to open camera: %@", error);
}
[session addInput:input];
[session startRunning];

stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
Is there any way to speed it up? I've already tried loading it on another thread using Grand Central Dispatch and NSThread, and though that stopped the app from freezing, it made the loading of the camera take even longer. Any help is appreciated.
In my case, I needed to wait for the session to start running:
dispatch_async(queue) {
    self.session.startRunning()
    dispatch_async(dispatch_get_main_queue()) {
        self.delegate?.cameraManDidStart(self)
        let layer = AVCaptureVideoPreviewLayer(session: self.session)
    }
}
Waiting for AVCaptureSession's startRunning function was my solution too. You can run startRunning in a global async block and then add your AVCaptureVideoPreviewLayer on the main thread.
Swift 4 sample:
DispatchQueue.global().async {
    self.captureSession.startRunning()
    DispatchQueue.main.async {
        let videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
    }
}
You can load the AVCaptureSession in viewWillAppear. That works for me: when I switch from another view to the view containing the AVCaptureSession, I see the camera running immediately.
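A rough Objective-C sketch of that idea (assuming the session has already been configured earlier, for example in viewDidLoad; the self.session property name is illustrative):

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    // Start (or resume) the already-configured session off the main thread so
    // the transition to this view is not blocked.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        if (!self.session.isRunning) {
            [self.session startRunning];
        }
    });
}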
For anyone interested, the solution I came up with was preloading the camera on a different thread and keeping it open.
I tried all of the above methods, but it was not as fast as Instagram or Facebook, so I loaded the AVCaptureDevice, AVCaptureVideoPreviewLayer, and AVCaptureSession in the parent screen and passed them as parameters to the child screen. It loaded very quickly.
I am trying to throttle my video capture framerate for my application, as I have found that it is impacting VoiceOver performance.
At the moment, it captures frames from the video camera, and then processes the frames using OpenGL routines as quickly as possible. I would like to set a specific framerate in the capture process.
I was expecting to be able to do this by using videoMinFrameDuration or minFrameDuration, but this seems to make no difference to performance. Any ideas?
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
    if ([device position] == AVCaptureDevicePositionBack)
    {
        backFacingCamera = device;
        // SET SOME OTHER PROPERTIES
    }
}

// Create the capture session
captureSession = [[AVCaptureSession alloc] init];

// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];

// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

// Start capturing
if ([backFacingCamera supportsAVCaptureSessionPreset:AVCaptureSessionPreset1920x1080])
{
    [captureSession setSessionPreset:AVCaptureSessionPreset1920x1080];
    captureDeviceWidth = 1920;
    captureDeviceHeight = 1080;
#if defined(VA_DEBUG)
    NSLog(@"Video AVCaptureSessionPreset1920x1080");
#endif
}
else
{
    // ... do some fall back stuff ...
}

// If you wish to cap the frame rate to a known value, such as 15 fps, set
// minFrameDuration.
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.supportsVideoMinFrameDuration)
    conn.videoMinFrameDuration = CMTimeMake(1, 2);
else
    videoOutput.minFrameDuration = CMTimeMake(1, 2);

if ([captureSession canAddInput:videoInput])
    [captureSession addInput:videoInput];

if ([captureSession canAddOutput:videoOutput])
    [captureSession addOutput:videoOutput];

if (![captureSession isRunning])
    [captureSession startRunning];
Any ideas? Am I missing something? Is this the best way to throttle?
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];

if (conn.supportsVideoMinFrameDuration)
    conn.videoMinFrameDuration = CMTimeMake(1, 2);
else
    videoOutput.minFrameDuration = CMTimeMake(1, 2);
Mike Ullrich's answer worked up until iOS 7. These two methods are unfortunately deprecated in iOS 7; you have to set activeVideo{Min|Max}FrameDuration on the AVCaptureDevice itself. Something like:
int fps = 30; // Change this value
AVCaptureDevice *device = ...; // Get the active capture device
[device lockForConfiguration:nil];
[device setActiveVideoMinFrameDuration:CMTimeMake(1, fps)];
[device setActiveVideoMaxFrameDuration:CMTimeMake(1, fps)];
[device unlockForConfiguration];
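One caveat worth adding (my own note, not from the original answer): asking for a duration the current activeFormat does not support can throw an exception, so it may be safer to confirm the rate against the format's supported ranges first. A sketch:

int fps = 30;
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    // Only apply the duration if the active format actually supports this rate.
    for (AVFrameRateRange *range in device.activeFormat.videoSupportedFrameRateRanges) {
        if (fps >= range.minFrameRate && fps <= range.maxFrameRate) {
            device.activeVideoMinFrameDuration = CMTimeMake(1, fps);
            device.activeVideoMaxFrameDuration = CMTimeMake(1, fps);
            break;
        }
    }
    [device unlockForConfiguration];
}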
Turns out you need to set both videoMinFrameDuration and videoMaxFrameDuration for either one to work, e.g.:
[conn setVideoMinFrameDuration:CMTimeMake(1,1)];
[conn setVideoMaxFrameDuration:CMTimeMake(1,1)];
Is there a way to take a picture in code on the iPhone without going through the Apple controls? I have seen a bunch of apps that do this, but I'm not sure what API call to use.
EDIT: As suggested in the comments below, I have now explicitly shown how the AVCaptureSession needs to be declared and initialized. It seems that a few people were doing the initialization wrong or declaring AVCaptureSession as a local variable in a method; that will not work.
The following code allows you to take a picture using AVCaptureSession without user input:
// Get all cameras in the application and find the front camera.
AVCaptureDevice *frontalCamera;
NSArray *allCameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

// Find the front camera.
for ( int i = 0; i < allCameras.count; i++ ) {
    AVCaptureDevice *camera = [allCameras objectAtIndex:i];
    if ( camera.position == AVCaptureDevicePositionFront ) {
        frontalCamera = camera;
    }
}

// If we did not find the camera, then do not take a picture.
if ( frontalCamera != nil ) {
    // Start the process of getting a picture.
    session = [[AVCaptureSession alloc] init];

    // Set up an input with the front camera and add it to the session.
    NSError *error;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:frontalCamera error:&error];
    if ( !error && [session canAddInput:input] ) {
        // Add the front camera to this session.
        [session addInput:input];

        // We need to capture a still image.
        AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];

        // Captured image settings.
        [output setOutputSettings:
            [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil]];

        if ( [session canAddOutput:output] ) {
            [session addOutput:output];

            AVCaptureConnection *videoConnection = nil;
            for (AVCaptureConnection *connection in output.connections) {
                for (AVCaptureInputPort *port in [connection inputPorts]) {
                    if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
                        videoConnection = connection;
                        break;
                    }
                }
                if (videoConnection) { break; }
            }

            // Finally take the picture.
            if ( videoConnection ) {
                [session startRunning];
                [output captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                    if (imageDataSampleBuffer != NULL) {
                        NSData *imageData = [AVCaptureStillImageOutput
                            jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                        UIImage *photo = [[UIImage alloc] initWithData:imageData];
                    }
                }];
            }
        }
    }
}
The session variable is of type AVCaptureSession and has been declared in the class's .h file (either as a property or as a private member of the class):
AVCaptureSession *session;
It will then need to be initialized somewhere, for instance in the class's init method:
session = [[AVCaptureSession alloc] init];
Yes, there are two ways to do this. One, available in iOS 3.0+, is to use the UIImagePickerController class, setting the showsCameraControls property to NO and setting the cameraOverlayView property to your own custom controls. Two, available in iOS 4.0+, is to configure an AVCaptureSession, providing it with an AVCaptureDeviceInput using the appropriate camera device and an AVCaptureStillImageOutput. The first approach is much simpler and works on more iOS versions, but the second approach gives you much greater control over photo resolution and file options.
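A minimal sketch of the first approach (myOverlayView and the shutter action are placeholders you would supply yourself):

UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;          // hide the stock Apple controls
picker.cameraOverlayView = myOverlayView; // your own buttons/labels on top of the preview
picker.delegate = self;                   // adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate
[self presentViewController:picker animated:YES completion:nil];

// Later, from your own shutter button:
[picker takePicture];
// The photo is delivered to imagePickerController:didFinishPickingMediaWithInfo:.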