AVCaptureMetadataOutput inverse colors - iOS

I am making an app that scans a barcode with inverted colors (black background and white bars). I have to use AVFoundation. Currently I am using AVCaptureMetadataOutput, and I can get it to work perfectly with a normal barcode. I need to invert the colors (white -> black and black -> white). Can I add a CIColorInvert filter to the input of the AVCaptureSession?
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view from its nib.
    mCaptureSession = [[AVCaptureSession alloc] init];

    AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if ([mCaptureSession canAddInput:videoInput]) {
        [mCaptureSession addInput:videoInput];
    } else {
        NSLog(@"Could not add video input: %@", [error localizedDescription]);
    }

    // Set up the metadata output with this class as its delegate so that if
    // metadata (a Code 39 barcode) is detected, the data is sent to this class.
    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([mCaptureSession canAddOutput:metadataOutput]) {
        [mCaptureSession addOutput:metadataOutput];
        [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
        [metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeCode39Code]];
    } else {
        NSLog(@"Could not add metadata output");
    }

    // Sets up what the camera sees as a layer of the view.
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:mCaptureSession];
    //CGRect frame = CGRectMake(0.0 - 50, 0.0, 1024.0, 1024.0 + 720.0);
    CGRect bounds = self.view.layer.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    previewLayer.bounds = bounds;
    previewLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
    NSArray *filters = [[NSArray alloc] initWithObjects:[CIFilter filterWithName:@"CIColorInvert"], nil];
    [previewLayer setFilters:filters];
    //[previewLayer setFrame:self.view.bounds];
    [self.view.layer addSublayer:previewLayer];

    // Starts the camera session.
    [mCaptureSession startRunning];
}
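For context, a filter attached to the preview layer only changes what is drawn on screen; AVCaptureMetadataOutput still receives the unmodified camera frames. A minimal sketch of inverting frames yourself, assuming a separate AVCaptureVideoDataOutput is added to the session with this class as its sample buffer delegate (the inverted image would then need its own detection path, since the metadata output cannot read it):

// Hypothetical AVCaptureVideoDataOutput delegate callback; assumes that output
// was added to mCaptureSession alongside the metadata output.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Swap black and white so the barcode looks "normal" again.
    CIFilter *invert = [CIFilter filterWithName:@"CIColorInvert"];
    [invert setValue:frame forKey:kCIInputImageKey];
    CIImage *inverted = invert.outputImage;

    // `inverted` now has dark bars on a light background; hand it to whatever
    // detector you use for the inverted code (AVCaptureMetadataOutput cannot
    // consume this image).
}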

Related

How to show the same camera video in two views

I am trying to show the same camera video in two different views; however, I only get video in one view. Could you help? The code is below:
-(void)showCameraPreview {
    self.camerPreviewCaptureSession = [[AVCaptureSession alloc] init];
    self.camerPreviewCaptureSession.sessionPreset = AVCaptureSessionPresetHigh;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput1 = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    [self.camerPreviewCaptureSession addInput:videoInput1];

    AVCaptureVideoPreviewLayer *newCaptureVideoViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
    newCaptureVideoViewLayer.frame = self.viewPreview.bounds;
    newCaptureVideoViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [newCaptureVideoViewLayer setFrame:CGRectMake(0.0, 0.0, self.viewPreview.bounds.size.width, self.viewPreview.bounds.size.height)];

    AVCaptureVideoPreviewLayer *newCameraViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
    newCameraViewLayer.frame = self.viewPreview1.bounds;
    newCameraViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [newCameraViewLayer setFrame:CGRectMake(0.0, 0.0, self.viewPreview1.bounds.size.width, self.viewPreview1.bounds.size.height)];

    [self.viewPreview1.layer addSublayer:newCameraViewLayer];
    [self.viewPreview.layer addSublayer:newCaptureVideoViewLayer];

    [self.camerPreviewCaptureSession startRunning];
}
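For reference, here is a minimal sketch of the same wiring factored into a hypothetical helper (attachPreviewLayerToView:session: is an assumed name), so both container views get their own AVCaptureVideoPreviewLayer on the shared session. A CALayer can only live in one superlayer, so each view needs its own preview layer:

// Hypothetical helper: builds a dedicated preview layer for one container view
// on the shared capture session.
- (AVCaptureVideoPreviewLayer *)attachPreviewLayerToView:(UIView *)containerView
                                                 session:(AVCaptureSession *)session
{
    AVCaptureVideoPreviewLayer *layer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    layer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    layer.frame = containerView.bounds;
    [containerView.layer addSublayer:layer];
    return layer;
}

// Usage (sketch):
// [self attachPreviewLayerToView:self.viewPreview  session:self.camerPreviewCaptureSession];
// [self attachPreviewLayerToView:self.viewPreview1 session:self.camerPreviewCaptureSession];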

How to correctly start a camera session using AVCaptureSession / AVCapture

I want to make an iOS app in Objective-C. Right now I'm stuck on attaching the preview layer to the AVCapture preview output. Could someone please tell me how to successfully start an image capture session using the AVCapture camera session in iOS with Objective-C? Any help is much appreciated. Thank you.
Here is an answer for AVCaptureSession:
-(void)capture
{
    NSError *error = nil;

    // Capture session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    // Find the back camera
    AVCaptureDevice *inputDevice = nil;
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in devices)
    {
        if ([camera position] == AVCaptureDevicePositionBack) // is back camera
        {
            inputDevice = camera;
            break;
        }
    }

    // Wrap the device in a device input before adding it to the session
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
    if ([session canAddInput:input]) {
        [session addInput:input];
    }

    // Output
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // Preview layer
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    previewLayer.frame = viewForCamera.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [viewForCamera.layer addSublayer:previewLayer];

    // Start the capture session
    [session startRunning];
}
Try this code to get the camera ID:
NSString *cameraID = nil;
NSArray *captureDeviceType = @[AVCaptureDeviceTypeBuiltInWideAngleCamera];
AVCaptureDeviceDiscoverySession *captureDevice =
    [AVCaptureDeviceDiscoverySession
        discoverySessionWithDeviceTypes:captureDeviceType
                              mediaType:AVMediaTypeVideo
                               position:AVCaptureDevicePositionUnspecified];
cameraID = [captureDevice.devices.lastObject localizedName];
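If you want the device itself rather than just its name, a brief usage sketch (here `session` is assumed to be an already-configured AVCaptureSession, and the first discovered device is taken):

// Use the discovered device as the session input (sketch).
AVCaptureDevice *camera = captureDevice.devices.firstObject;
NSError *inputError = nil;
AVCaptureDeviceInput *cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&inputError];
if (cameraInput && [session canAddInput:cameraInput]) {
    [session addInput:cameraInput];
}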

iOS Objective-C screenshot: sublayer not visible

I'm building an app where I want to take a snapshot from the camera and show it in a UIImageView. I'm able to take the snapshot, but the AVCaptureVideoPreviewLayer is not visible in the screenshot. Does anyone know how to do that?
Here is my code:
@implementation ViewController

CGRect imgRect;
AVCaptureVideoPreviewLayer *previewLayer;
AVCaptureVideoDataOutput *output;

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    // Capture session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    // Add device
    AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Input
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    if (!input)
    {
        NSLog(@"No Input");
    }
    [session addInput:input];

    // Output
    output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // Preview
    previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    CGFloat x = self.view.bounds.size.width * 0.5 - 128;
    imgRect = CGRectMake(x, 64, 256, 256);
    previewLayer.frame = imgRect;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:previewLayer];

    // Start capture session
    [session startRunning];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (IBAction)TakeSnapshot:(id)sender {
    self.imgResult.image = self.pb_takeSnapshot;
}

- (UIImage *)pb_takeSnapshot {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
A bit of help is very much appreciated.
Thank you in advance,
Gilbert Avezaat
You should use AVCaptureStillImageOutput to get an image from the camera connection. Here is how you could do it:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
stillImageOutput.outputSettings = @{
    AVVideoCodecKey: AVVideoCodecJPEG,
    (__bridge id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)
};

[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *image = [UIImage imageWithData:imageData];
}];
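The snippet above assumes a `connection` variable. A minimal way to obtain one, assuming `stillImageOutput` has already been added to a running session:

// Sketch: fetch the still image output's video connection before capturing.
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];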
First check whether the image is returned. If it is, then:
- (IBAction)TakeSnapshot:(id)sender {
    self.imgResult.image = self.pb_takeSnapshot;
    [self.view bringSubviewToFront:self.imgResult];
}
Hope it helps.

Applying CIFilter on AVFoundation camera feed fails

I am trying to apply a CIFilter to a live camera feed (and be able to capture a filtered still image).
I have seen some code on StackOverflow pertaining to the issue, but I haven't been able to get it to work.
My issue is that in the captureOutput method the filter seems to be applied correctly (I put a breakpoint in there and Quick Looked it), but I don't see it in my UIView (I see the original feed, without the filter).
Also I am not sure which output I should add to the session:
[self.session addOutput: self.stillOutput]; //AVCaptureStillImageOutput
[self.session addOutput: self.videoDataOut]; //AVCaptureVideoDataOutput
And which of those I should iterate through when looking for a connection (in findVideoConnection).
I am totally confused.
Here's some code:
viewDidLoad
-(void)viewDidLoad {
    [super viewDidLoad];
    self.shutterButton.userInteractionEnabled = YES;
    self.context = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @(YES)}];
    self.filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [self.filter setValue:@15 forKey:kCIInputRadiusKey];
}
prepare session
-(void)prepareSessionWithDevicePosition:(AVCaptureDevicePosition)position {
    AVCaptureDevice *device = [self videoDeviceWithPosition:position];
    self.currentPosition = position;

    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetPhoto;

    NSError *error = nil;
    self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if ([self.session canAddInput:self.deviceInput]) {
        [self.session addInput:self.deviceInput];
    }

    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;

    self.videoDataOut = [AVCaptureVideoDataOutput new];
    [self.videoDataOut setSampleBufferDelegate:self queue:dispatch_queue_create("bufferQueue", DISPATCH_QUEUE_SERIAL)];
    self.videoDataOut.alwaysDiscardsLateVideoFrames = YES;

    CALayer *rootLayer = [[self view] layer];
    rootLayer.masksToBounds = YES;
    CGRect frame = self.previewView.frame;
    previewLayer.frame = frame;
    [rootLayer insertSublayer:previewLayer atIndex:1];

    self.stillOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    self.stillOutput.outputSettings = outputSettings;
    [self.session addOutput:self.stillOutput];
    //tried [self.session addOutput: self.videoDataOut];
    //and it didn't work (the filtered image didn't show, and I also couldn't take pictures)

    [self findVideoConnection];
}
find video connection
-(void)findVideoConnection {
    for (AVCaptureConnection *connection in self.stillOutput.connections) {
        //also tried self.videoDataOut.connections
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqualToString:AVMediaTypeVideo]) {
                self.videoConnection = connection;
                break;
            }
        }
        if (self.videoConnection != nil) {
            break;
        }
    }
}
capture output, apply filter and put it in the CALayer
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // turn buffer into an image we can manipulate
    CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];

    // filter
    [self.filter setValue:result forKey:@"inputImage"];

    // render image
    CGImageRef blurredImage = [self.context createCGImage:self.filter.outputImage fromRect:result.extent];
    UIImage *img = [UIImage imageWithCGImage:blurredImage];
    //Did this to check whether the image was actually filtered.
    //And surprisingly it was.

    dispatch_async(dispatch_get_main_queue(), ^{
        //The image present in my UIView is for some reason not blurred.
        self.previewView.layer.contents = (__bridge id)blurredImage;
        CGImageRelease(blurredImage);
    });
}
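For what it's worth, the two outputs generally are not mutually exclusive; a session can carry both, with the video data output feeding captureOutput: for live filtering and the still image output used only when a picture is taken. A small sketch of that wiring, assuming it is placed inside prepareSessionWithDevicePosition: after the input is added (this is an illustration, not the original code):

// Sketch: add both outputs to the same session.
if ([self.session canAddOutput:self.videoDataOut]) {
    [self.session addOutput:self.videoDataOut];   // live frames -> captureOutput:didOutputSampleBuffer:fromConnection:
}
if ([self.session canAddOutput:self.stillOutput]) {
    [self.session addOutput:self.stillOutput];    // used only for captureStillImageAsynchronouslyFromConnection:
}
// For taking pictures, look up the connection on the still image output.
AVCaptureConnection *photoConnection = [self.stillOutput connectionWithMediaType:AVMediaTypeVideo];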

iOS7 AVCapture captureOutput never gets called

Please understand that I cannot upload the whole code here.
I have
@interface BcsProcessor : NSObject <AVCaptureMetadataOutputObjectsDelegate> {}
and BcsProcessor has setUpCaptureSession and captureOutput methods:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection

- (NSString *)setUpCaptureSession {
    NSError *error = nil;
    AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
    self.captureSession = captureSession;

    AVCaptureDevice *__block device = nil;
    if (self.isFrontCamera) {
        NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        [devices enumerateObjectsUsingBlock:^(AVCaptureDevice *obj, NSUInteger idx, BOOL *stop) {
            if (obj.position == AVCaptureDevicePositionFront) {
                device = obj;
            }
        }];
    } else {
        device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }

    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

    AVCaptureMetadataOutput *output = [[[AVCaptureMetadataOutput alloc] init] autorelease];
    output.metadataObjectTypes = output.availableMetadataObjectTypes;

    dispatch_queue_t outputQueue = dispatch_queue_create("com.1337labz.featurebuild.metadata", 0);
    [output setMetadataObjectsDelegate:self queue:outputQueue];

    captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
    if ([captureSession canAddInput:input]) {
        [captureSession addInput:input];
    }
    if ([captureSession canAddOutput:output]) {
        [captureSession addOutput:output];
    }

    // setup capture preview layer
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];

    // run [captureSession startRunning] on the next event loop pass
    [captureSession performSelector:@selector(startRunning) withObject:nil afterDelay:0];

    return nil;
}
So the code above sets up the session and adds the AVCaptureMetadataOutput, and BcsProcessor is supposed to receive the captured metadata. But my captureOutput method never receives any data, or even gets called.
I'd appreciate any help or comments.
First, make sure your input and output are correctly added to the session. You can check by logging captureSession.inputs and captureSession.outputs.
Second, make sure output.metadataObjectTypes is correctly set up, meaning availableMetadataObjectTypes is not empty. I believe it will be empty if you read it before adding the output to the session.
And finally, I don't see you adding the preview layer to the view's layer.
Try this after you init your layer with the session:
self.previewLayer.frame = self.view.layer.bounds;
[self.view.layer addSublayer:self.previewLayer];
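A compact sketch of those checks, assuming it replaces the corresponding portion of setUpCaptureSession (the NSLog lines are purely illustrative):

// Sketch: set the metadata types only after the output has been added to the
// session, so availableMetadataObjectTypes is populated, then verify the wiring.
if ([captureSession canAddOutput:output]) {
    [captureSession addOutput:output];
}
output.metadataObjectTypes = output.availableMetadataObjectTypes;

NSLog(@"inputs: %@", captureSession.inputs);
NSLog(@"outputs: %@", captureSession.outputs);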
