I'm currently trying to use iOS 7's new AVFoundation APIs to scan Code 39 barcodes, but it's driving me crazy. I have to hold the phone a specific way, really still, for about 10 seconds for it to detect one. I compared it to RedLaser, ZBar, etc., and they could analyze the same code in about a second even if it was a little skewed. I'm not sure whether it's because of the way I load my capture session or something else. I'd appreciate the help. Any suggestions on how to improve performance?
Here's how I load the scanner in my viewDidLoad method:
// Initialize laser view
laserView = [[UIView alloc] init];
laserView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin | UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleBottomMargin;
laserView.layer.borderColor = [UIColor redColor].CGColor;
laserView.layer.borderWidth = 8;
laserView.layer.cornerRadius = 10;
[self.view addSubview:laserView];

// Start session
scannerSession = [[AVCaptureSession alloc] init];
scannerDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

// Define error messages
NSError *error = nil;

// Define input
scannerInput = [AVCaptureDeviceInput deviceInputWithDevice:scannerDevice error:&error];

// Check if device has a camera
if (scannerInput) {
    [scannerSession addInput:scannerInput];
} else {
    NSLog(@"Error: %@", error);
}

// Lock the configuration
BOOL success = [scannerDevice lockForConfiguration:nil];
if (success) {
    if ([scannerDevice isAutoFocusRangeRestrictionSupported]) {
        // Restrict the autofocus to near range (new in iOS 7)
        [scannerDevice setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
    }
}
// Unlock the configuration
[scannerDevice unlockForConfiguration];

// Define output & metadata object types
scannerOutput = [[AVCaptureMetadataOutput alloc] init];
[scannerOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[scannerSession addOutput:scannerOutput];
scannerOutput.metadataObjectTypes = [scannerOutput availableMetadataObjectTypes];

// Create video preview layer
scannerPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:scannerSession];
scannerPreviewLayer.frame = self.view.bounds;
scannerPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:scannerPreviewLayer];

// Start session
[scannerSession startRunning];
[self.view bringSubviewToFront:cancelButton];
[self.view bringSubviewToFront:laserView];
And:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    // Prepare laser view
    CGRect laser = CGRectZero;

    // Format date
    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"M/d"];

    // Format time
    NSDateFormatter *timeFormatter = [[NSDateFormatter alloc] init];
    [timeFormatter setDateFormat:@"h:ma"];

    // Define barcode types to recognize
    AVMetadataMachineReadableCodeObject *barCodeObject;
    NSString *idNumber = nil;
    NSArray *barCodeTypes = @[AVMetadataObjectTypeCode39Code];

    if ([metadataObjects count] > 1) {
        NSLog(@"%lu Barcodes Found.", (unsigned long)[metadataObjects count]);
    }

    // Get the string value for every barcode that matches the type we're looking for
    for (AVMetadataObject *metadata in metadataObjects) {
        for (NSString *type in barCodeTypes) {
            // If the barcode is the type we need, then get its data
            if ([metadata.type isEqualToString:type]) {
                barCodeObject = (AVMetadataMachineReadableCodeObject *)[scannerPreviewLayer transformedMetadataObjectForMetadataObject:(AVMetadataMachineReadableCodeObject *)metadata];
                laser = barCodeObject.bounds;
                idNumber = [(AVMetadataMachineReadableCodeObject *)metadata stringValue];
                break;
            }
        }

        // If an ID number was found
        if (idNumber != nil) {
            // Stop the session
            [scannerSession stopRunning];
            [self vibrate];
            NSLog(@"ID: %@", idNumber);
            break;
        }
        // If no ID number was found
        else {
            NSLog(@"No ID Found.");
        }
    }

    // Update laser
    laserView.frame = laser;
}
I had a similar problem with AVCaptureSession; the capture was very slow and sometimes took a long time to finish.
I don't know if my solution is the right one for you, but it can definitely be helpful to somebody else running into this problem, like I did.
AVCaptureSession *captureSession = [AVCaptureSession new];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;
With this code you force the camera to a high-quality preset.
Hope this will help someone.
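If you fold that into the question's setup code, it may be worth checking that the preset is supported before assigning it. A minimal sketch, assuming the question's scannerSession variable:

// Sketch: prefer a higher-quality preset when the session supports it.
// `scannerSession` is assumed to be the AVCaptureSession from the question.
if ([scannerSession canSetSessionPreset:AVCaptureSessionPresetHigh]) {
    scannerSession.sessionPreset = AVCaptureSessionPresetHigh;
}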
Try zooming in a little... videoDevice.videoZoomFactor = 2.0;
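Note that videoZoomFactor can only be changed while the device is locked for configuration, and it should stay within the active format's supported range. A hedged sketch, reusing the question's scannerDevice:

// Sketch: apply a modest zoom before scanning (device locking required).
// `scannerDevice` is assumed to be the AVCaptureDevice from the question.
NSError *zoomError = nil;
if ([scannerDevice lockForConfiguration:&zoomError]) {
    CGFloat zoom = MIN(2.0, scannerDevice.activeFormat.videoMaxZoomFactor);
    scannerDevice.videoZoomFactor = zoom;
    [scannerDevice unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for zoom: %@", zoomError);
}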
A lot of it has to do with the quality of the image (focus, glare, lighting, etc.). Under really good conditions, you should get scanning that is nearly instantaneous.
I suggest you put an NSLog in your delegate method, captureOutput:didOutputMetadataObjects:fromConnection:.
You will see that it gets called many times just for scanning a single barcode. Initializing an NSDateFormatter is a very expensive operation, per Why is allocating or initializing NSDateFormatter considered "expensive"?.
I would suggest creating the NSDateFormatters outside of the delegate method and reusing them instead of creating them every time the method is called. This should make your app more responsive.
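A minimal sketch of that reuse, assuming the formatters are only needed inside this class:

// Sketch: create the formatters once and reuse them across delegate callbacks.
static NSDateFormatter *dateFormatter = nil;
static NSDateFormatter *timeFormatter = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"M/d"];
    timeFormatter = [[NSDateFormatter alloc] init];
    [timeFormatter setDateFormat:@"h:ma"];
});

These statics could live at the top of the delegate method (or be properties set up in viewDidLoad); either way the allocation happens exactly once, no matter how many times the metadata callback fires.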
Related
I'm trying to use SFSpeechRecognizer, but I don't have a way to test whether I'm implementing it correctly, and since it's a relatively new class I couldn't find any sample code (and I don't know Swift). Am I making any unforgivable mistakes or missing something?
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
    if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {
        SFSpeechRecognizer *recognizer = [[SFSpeechRecognizer alloc] init];
        recognizer.delegate = self;

        SFSpeechAudioBufferRecognitionRequest *request = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
        request.contextualStrings = @[@"data", @"bank", @"databank"];

        SFSpeechRecognitionTask *task = [recognizer recognitionTaskWithRequest:request resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
            SFTranscription *transcript = result.bestTranscription;
            NSLog(@"%@", transcript);
        }];
    }
}];
I'm trying this too, but the code below works for me. After all, SFSpeechRecognizer and SFSpeechAudioBufferRecognitionRequest are not the same thing, so I think (I haven't tested this) you have to ask for different permissions. Have you asked for permission before, to use the microphone and speech recognition? OK, here's the code:
// Available on iOS 10 and above; limited to a maximum of about 1 minute per request and needs an internet connection.
// The audio can come from a recorded file or from the microphone.
NSLocale *local = [[NSLocale alloc] initWithLocaleIdentifier:@"es-MX"];
speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:local];

NSString *soundFilePath = [myDir stringByAppendingPathComponent:@"sound.m4a"];
NSURL *url = [[NSURL alloc] initFileURLWithPath:soundFilePath];

if (!speechRecognizer.isAvailable)
    NSLog(@"speechRecognizer is not available; maybe there is no internet connection");

SFSpeechURLRecognitionRequest *urlRequest = [[SFSpeechURLRecognitionRequest alloc] initWithURL:url];
urlRequest.shouldReportPartialResults = YES; // YES if you want to animate the writing

[speechRecognizer recognitionTaskWithRequest:urlRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    NSString *transcriptText = result.bestTranscription.formattedString;
    if (!error) {
        NSLog(@"%@", transcriptText);
    }
}];
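On the permissions point above, here is a minimal sketch of requesting both the microphone and speech recognition permissions (assuming iOS 10+ and that NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription keys are present in Info.plist):

#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>

// Sketch: request both permissions before starting any live recognition task.
[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (!granted) {
        NSLog(@"Microphone access was denied.");
        return;
    }
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {
            NSLog(@"Speech recognition authorized; safe to start a task.");
        } else {
            NSLog(@"Speech recognition not authorized (status %ld).", (long)status);
        }
    }];
}];

The microphone permission only matters for SFSpeechAudioBufferRecognitionRequest (live audio); a file-based SFSpeechURLRecognitionRequest only needs the speech recognition authorization.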
I'm currently creating an iOS application that uses the camera to capture 15 frames per second for 30 seconds (a total of 450 frames). The problem is that [self.session startRunning] (the last line of the code provided) doesn't seem to be working; I say this because I've set up an array called hues to take the average red values of each of the 450 frames that should be captured. However, even after starting and stopping detection, the array remains empty. What am I missing?
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
#interface ViewController ()
#property (nonatomic, strong) AVCaptureSession *session;
#property (nonatomic, strong) NSMutableArray *hues;
#end
const int SECONDS = 15;
const int FPS = 30;
#implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
self.hues = [[NSMutableArray alloc] init]; // initiate things (from old code)
self.session = [[AVCaptureSession alloc] init];
self.session.sessionPreset = AVCaptureSessionPresetLow;
NSInteger numberOfFramesCaptured = self.hues.count;
// initialize the session with proper settings (from docs)
NSError *error = nil;
AVCaptureDevice *captureDevice; // initialize captureDevice and input, and add input (from old code)
AVCaptureDeviceInput *videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
if ([self.session canAddInput:videoInput])
{ // fails
[self.session addInput:videoInput];
}
// find the max fps we can get from the given device (from old code)
AVCaptureDeviceFormat *currentFormat = [captureDevice activeFormat];
for (AVCaptureDeviceFormat *format in captureDevice.formats) // executes twice
{ // skips all of this
NSArray *ranges = format.videoSupportedFrameRateRanges;
AVFrameRateRange *frameRates = ranges[0];
// find the lowest resolution format at the frame rate we want (from old code)
if (frameRates.maxFrameRate == FPS && (!currentFormat || (CMVideoFormatDescriptionGetDimensions(format.formatDescription).width < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width && CMVideoFormatDescriptionGetDimensions(format.formatDescription).height < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height)))
{
currentFormat = format;
}
}
// tell the device to use the max frame rate (from old code)
[captureDevice lockForConfiguration:nil];
captureDevice.torchMode = AVCaptureTorchModeOn;
captureDevice.activeFormat = currentFormat;
captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FPS);
captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FPS);
[captureDevice unlockForConfiguration];
// set the output (from old code)
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// create a queue to run the capture on (from old code)
dispatch_queue_t queue = dispatch_queue_create("queue", NULL);
// setup our delegate (from old code)
[videoOutput setSampleBufferDelegate:self queue:queue];
// configure the pixel format (from old code)
videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
videoOutput.alwaysDiscardsLateVideoFrames = NO;
[self.session addOutput:videoOutput];
// start the video session
[self.session startRunning]; // PROBLEM OCCURS HERE
In your code, AVCaptureDevice *captureDevice; is declared but never initialized, so the session never gets a working camera input.
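A minimal sketch of the missing piece, replacing the corresponding lines in the question's viewDidLoad:

// Sketch: actually obtain the default camera before building the input.
AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (!captureDevice) {
    NSLog(@"No video capture device available.");
    return;
}
AVCaptureDeviceInput *videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];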
I need the iOS camera to take a picture without any input from the user. How would I go about doing this? This is my code so far:
- (void)initCapture {
    AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] init];

    AVCaptureStillImageOutput *newStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey,
                                    nil];
    [newStillImageOutput setOutputSettings:outputSettings];

    AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
    if ([newCaptureSession canAddInput:newVideoInput]) {
        [newCaptureSession addInput:newVideoInput];
    }
    if ([newCaptureSession canAddOutput:newStillImageOutput]) {
        [newCaptureSession addOutput:newStillImageOutput];
    }
    self.stillImageOutput = newStillImageOutput;
}
What else do I need to add and where do I go from here? I don't want to take a video, only a single still image. Also, how would I convert the image into a UIImage afterwards? Thanks
https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVCaptureSession_Class/Reference/Reference.html
Look at AVCaptureSession for how to take a picture without user input. You will need to add an AVCaptureDevice input for the camera to the session, as sketched below.
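For example (a sketch only, reusing the newCaptureSession from the question's initCapture), the camera input could be added like this:

// Sketch: create an input from the default camera and attach it to the session.
NSError *inputError = nil;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&inputError];
if (cameraInput && [newCaptureSession canAddInput:cameraInput]) {
    [newCaptureSession addInput:cameraInput];
} else {
    NSLog(@"Could not add camera input: %@", inputError);
}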
You've got an AVCaptureStillImageOutput, stillImageOutput. Plus you need to have stored your capture session where you can get at it later. Call it self.sess. Then:
AVCaptureConnection *vc = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[self.sess startRunning];
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:vc
                                                    completionHandler:^(CMSampleBufferRef buf, NSError *err) {
    NSData *data = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buf];
    UIImage *im = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        UIImageView *iv = [[UIImageView alloc] initWithFrame:someframe];
        iv.contentMode = UIViewContentModeScaleAspectFit;
        iv.image = im;
        [self.view addSubview:iv];
        [self.iv removeFromSuperview]; // in case we already did this
        self.iv = iv;
        [self.sess stopRunning];
    });
}];
As @Salil says, Apple's official AVFoundation documentation is very useful; take a look at it to understand the main concepts. Then you can download the sample code to see how to achieve the goal.
Here is the sample code to take still images with a preview layer.
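If the preview layer part is what you're after, a minimal sketch (assuming the question's newCaptureSession and a view controller's view) looks like this:

// Sketch: show what the camera sees while the session is running.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:newCaptureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];
[newCaptureSession startRunning];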
I am new to Objective-C and iOS development. I want to record video through code, and at run time I have to get each frame as raw data for some processing. How can I achieve this? Any help is appreciated. Thanks in advance. Here is my code so far:
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self setupCaptureSession];
}
The viewDidAppear method:
- (void)viewDidAppear:(BOOL)animated
{
    if (!_bpickeropen)
    {
        _bpickeropen = true;
        _picker = [[UIImagePickerController alloc] init];
        _picker.delegate = self;

        NSArray *sourceTypes = [UIImagePickerController availableMediaTypesForSourceType:picker.sourceType];
        if (![sourceTypes containsObject:(NSString *)kUTTypeMovie])
        {
            NSLog(@"device not supported");
            return;
        }

        _picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        _picker.mediaTypes = [NSArray arrayWithObjects:(NSString *)kUTTypeMovie, nil]; //, (NSString *)kUTTypeImage
        _picker.videoQuality = UIImagePickerControllerQualityTypeHigh;
        [self presentModalViewController:_picker animated:YES];
    }
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];

    // PROBLEMS:
    // 1. Here I am getting the raw bytes only once.
    // 2. After that I want to store these raw bytes as a binary file in the app path.

    // Do whatever with your bytes
    NSLog(@"bytes per row %zd", bytesPerRow);
    [dataForRawBytes writeToFile:[self datafilepath] atomically:YES];
    NSLog(@"Sample Buffer Data is %@\n", dataForRawBytes);

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
Here I am setting the delegate of the output. This creates and configures a capture session and starts it running:
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input)
    {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:
                            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    // output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    // [self setSession:session];
}
I appreciate any help. Thanks in advance.
You could look into the AVFoundation framework; it gives you access to the raw data generated by the camera.
This link is a good intro-level project for AVFoundation video camera usage.
In order to get individual frames from the video output, you could use the AVCaptureVideoDataOutput class from the AVFoundation framework.
Hope this helps.
EDIT: You could look at the delegate functions of AVCaptureVideoDataOutputSampleBufferDelegate, in particular the captureOutput:didOutputSampleBuffer:fromConnection: method. This will be called every time a new frame is captured.
If you do not know how delegates work, this link is a good example of delegates.
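As a rough illustration of how that delegate method could address the original problem (appending every captured frame's raw bytes to a single file), here is a sketch; self.rawDataFileHandle is a hypothetical NSFileHandle you would open for writing elsewhere:

// Sketch: called once per captured frame; append the frame's raw BGRA bytes to a file.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t length = CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer);
    NSData *frameData = [NSData dataWithBytes:CVPixelBufferGetBaseAddress(pixelBuffer) length:length];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // self.rawDataFileHandle is a hypothetical NSFileHandle opened for writing elsewhere.
    [self.rawDataFileHandle seekToEndOfFile];
    [self.rawDataFileHandle writeData:frameData];
}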
I am using the AVFoundation framework to capture video, and I am setting up a previewLayer on my view that shows an output size of 640 x 480. I would like the video cropped to this frame (so the user gets exactly what they see in the UI as output), but I am having trouble finding the point at which such a change should take place.
I would like to crop the video before or during capture, rather than after the user takes the video, since the user will be shooting multiple videos in a "wizard" and I am trying to limit the time between steps.
Is this even possible?
here is the setup code for the capture session:
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
newCaptureSession.sessionPreset = AVCaptureSessionPresetiFrame1280x720;

// Add inputs and output to the capture session
if ([newCaptureSession canAddInput:newVideoInput]) {
    [newCaptureSession addInput:newVideoInput];
}
if ([newCaptureSession canAddInput:newAudioInput]) {
    [newCaptureSession addInput:newAudioInput];
}
if ([newCaptureSession canAddOutput:newStillImageOutput]) {
    [newCaptureSession addOutput:newStillImageOutput];
}

[self setStillImageOutput:newStillImageOutput];
[self setVideoInput:newVideoInput];
[self setAudioInput:newAudioInput];
[self setSession:newCaptureSession];

// Set up the movie file output
NSURL *outputFileURL = [self tempFileURL];
AVCamRecorder *newRecorder = [[AVCamRecorder alloc] initWithSession:[self session] outputFileURL:outputFileURL];
[newRecorder setDelegate:self];

// Send an error to the delegate if video recording is unavailable
if (![newRecorder recordsVideo] && [newRecorder recordsAudio]) {
    NSString *localizedDescription = NSLocalizedString(@"Video recording unavailable", @"Video recording unavailable description");
    NSString *localizedFailureReason = NSLocalizedString(@"Movies recorded on this device will only contain audio. They will be accessible through iTunes file sharing.", @"Video recording unavailable failure reason");
    NSDictionary *errorDict = [NSDictionary dictionaryWithObjectsAndKeys:
                               localizedDescription, NSLocalizedDescriptionKey,
                               localizedFailureReason, NSLocalizedFailureReasonErrorKey,
                               nil];
    NSError *noVideoError = [NSError errorWithDomain:@"AVCam" code:0 userInfo:errorDict];
    if ([[self delegate] respondsToSelector:@selector(captureManager:didFailWithError:)]) {
        [[self delegate] captureManager:self didFailWithError:noVideoError];
    }
}
[self setRecorder:newRecorder];
The video recording start method:
- (void)startRecordingWithOrientation:(AVCaptureVideoOrientation)videoOrientation
{
    AVCaptureConnection *videoConnection = [AVCamUtilities connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self movieFileOutput] connections]];
    if ([videoConnection isVideoOrientationSupported])
        [videoConnection setVideoOrientation:videoOrientation];

    [[self movieFileOutput] startRecordingToOutputFileURL:[self outputFileURL] recordingDelegate:self];
}