I am working on a video-related application. I am using the AVFoundation framework to capture images and video. I capture an image and show it in the next view, where the image view is a subview of a scroll view. It works fine on the iPhone 5, iPhone 5s and iPad, but on the iPhone 4, after capturing the image and attaching it to the scroll view, the app becomes slow: the scroll view does not scroll as smoothly as on the other devices. I cannot figure out where it has gone wrong. I am using the code below to capture images:
-(void)capturePhoto
{
_ciContext = [CIContext contextWithOptions:nil];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// - input
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:NULL];
NSError *error = nil;
if ([device lockForConfiguration:&error])
{
if ([device isFlashModeSupported:AVCaptureFlashModeOff])
{
device.flashMode = AVCaptureFlashModeOff;
}
[device unlockForConfiguration];
}
// - output
_dataOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
AVVideoCodecJPEG, AVVideoCodecKey, nil];
_dataOutput.outputSettings = outputSettings;
// - output
NSMutableDictionary *settings;
settings = [NSMutableDictionary dictionary];
[settings setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(__bridge id) kCVPixelBufferPixelFormatTypeKey];
_dataOutputVideo = [[AVCaptureVideoDataOutput alloc] init];
_dataOutputVideo.videoSettings = settings;
[_dataOutputVideo setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
_session = [[AVCaptureSession alloc] init];
[_session addInput:deviceInput];
[_session addOutput:_dataOutput];
[_session addOutput:_dataOutputVideo];
// _session.sessionPreset = AVCaptureSessionPresetPhoto;
_session.sessionPreset = AVCaptureSessionPresetHigh;
// _session.sessionPreset = AVCaptureSessionPresetMedium;
[_session startRunning];
// add gesture
// UIGestureRecognizer *gr = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(didTapGesture:)];
// gr.delegate = self;
// [self.touchView addGestureRecognizer:gr];
_focusView = [[UIView alloc] init];
CGRect imageFrame = _focusView.frame;
imageFrame.size.width = 80;
imageFrame.size.height = 80;
_focusView.frame = imageFrame;
_focusView.center = CGPointMake(160, 202);
CALayer *layer = _focusView.layer;
layer.shadowOffset = CGSizeMake(2.5, 2.5);
layer.shadowColor = [[UIColor blackColor] CGColor];
layer.shadowOpacity = 0.5;
layer.borderWidth = 2;
layer.borderColor = [UIColor yellowColor].CGColor;
[self.touchView addSubview:_focusView];
_focusView.alpha = 0;
_isShowFlash = NO;
[self.view bringSubviewToFront:self.touchView];
UIView *footerView = [self.view viewWithTag:2];
[self.view bringSubviewToFront:footerView];
}
Later I am attaching to scroll view like this:
scrollImgView=[[UIImageView alloc]initWithFrame:CGRectMake(0, 0, 320, 340)];
UIImage *image = [UIImage imageWithData:appdelegate.capturedImgData];
UIImage *tempImage=[self resizeImage:image withWidth:320 withHeight:340];
NSData *imgData=UIImageJPEGRepresentation(tempImage,1.0);//0.25f
NSLog(#"image is %#",image);
scrollImgView.image=[UIImage imageWithData:imgData];
// scrollImgView.contentMode = UIViewContentModeScaleAspectFit;
// UIViewContentModeScaleAspectFit;
[postScrollView addSubview:scrollImgView];
Please give me suggestions if anyone has faced the same problem.
Your code is fine, and it is not a problem with the device itself. The slowdown may be caused by:
1. a network problem
2. the device's memory already being fully loaded
3. some data conversions also taking time
Here:
UIImage *image = [UIImage imageWithData:appdelegate.capturedImgData];
UIImage *tempImage=[self resizeImage:image withWidth:320 withHeight:340]; //
NSData *imgData=UIImageJPEGRepresentation(tempImage,1.0);//0.25f
In the code above:
1. the data conversion is expensive, and the image goes through two conversions (decode, then re-encode), so the `NSData` handling should be optimized;
2. the third line re-encodes the image at full `quality` (compression quality 1.0), which also adds to the conversion time.
My suggestion is to do this work asynchronously, for example with:
-- SDWebImage
-- an asynchronous image view
-- an `NSOperationQueue`
-- `dispatch_async`, returning to the main queue only for the UI update
Use any one of these and the view should become responsive again. My own suggestion is SDWebImage; see the link "Loading takes a while when I set UIImage to NSData with a URL".
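As an illustration of that advice, a minimal sketch that does the decode and resize on a background queue and only touches the view hierarchy on the main queue, reusing the appdelegate, resizeImage:withWidth:withHeight:, postScrollView and scrollImgView names from the question:

// Decode and resize off the main thread so the scroll view stays responsive.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *image = [UIImage imageWithData:appdelegate.capturedImgData];
    UIImage *tempImage = [self resizeImage:image withWidth:320 withHeight:340];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Assign the resized UIImage directly; re-encoding it with
        // UIImageJPEGRepresentation and decoding it again is unnecessary work.
        scrollImgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 340)];
        scrollImgView.image = tempImage;
        [postScrollView addSubview:scrollImgView];
    });
});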
I solved the issue by changing
_session.sessionPreset = AVCaptureSessionPresetHigh to
_session.sessionPreset = AVCaptureSessionPresetMedium
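If you want to guard that change, a small sketch using canSetSessionPreset: with the _session name from the question (lower presets mean smaller frames, which older devices such as the iPhone 4 handle more easily):

// Prefer the lighter preset; fall back to High if Medium is unsupported.
if ([_session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
    _session.sessionPreset = AVCaptureSessionPresetMedium;
} else {
    _session.sessionPreset = AVCaptureSessionPresetHigh;
}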
Related
I am trying to show the same camera video in two different views; however, I only get the video in one view. Could you help? The code is below:
-(void) showCameraPreview{
self.camerPreviewCaptureSession =[[AVCaptureSession alloc] init];
self.camerPreviewCaptureSession.sessionPreset = AVCaptureSessionPresetHigh;
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *videoInput1 = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
[self.camerPreviewCaptureSession addInput:videoInput1];
AVCaptureVideoPreviewLayer *newCaptureVideoViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
newCaptureVideoViewLayer.frame = self.viewPreview.bounds;
newCaptureVideoViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[newCaptureVideoViewLayer setFrame:CGRectMake(0.0, 0.0, self.viewPreview.bounds.size.width, self.viewPreview.bounds.size.height )];
AVCaptureVideoPreviewLayer *newCameraViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
newCameraViewLayer.frame = self.viewPreview1.bounds;
newCameraViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[newCameraViewLayer setFrame:CGRectMake(0.0, 0.0, self.viewPreview1.bounds.size.width, self.viewPreview1.bounds.size.height )];
[self.viewPreview1.layer addSublayer:newCameraViewLayer];
[self.viewPreview.layer addSublayer:newCaptureVideoViewLayer];
[self.camerPreviewCaptureSession startRunning];
}
I am making an app that scans a barcode with inverted colors (black background & white bars). I have to use AVFoundation. Currently, I am using AVCaptureMetadataOutput, and I can get it to work perfectly with a normal barcode. I need to invert the colors (white -> black & black -> white, etc.). Can I add a CIColorInvert filter to the input of the AVCaptureSession?
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
mCaptureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
if([mCaptureSession canAddInput:videoInput]){
[mCaptureSession addInput:videoInput];
} else {
NSLog(#"Could not add video input: %#", [error localizedDescription]);
}
// set up metadata output and this class as its delegate so that if metadata (barcode 39) is detected it will send the data to this class
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
if([mCaptureSession canAddOutput:metadataOutput]){
[mCaptureSession addOutput:metadataOutput];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeCode39Code]];
} else {
NSLog(#"Could not add metadata output");
}
// sets up what the camera sees as a layer of the view
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:mCaptureSession];
//CGRect frame = CGRectMake(0.0 - 50, 0.0, 1024.0, 1024.0 + 720.0);
CGRect bounds=self.view.layer.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.bounds=bounds;
previewLayer.position=CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
NSArray *filters = [[NSArray alloc] initWithObjects:[CIFilter filterWithName:@"CIColorInvert"], nil];
[previewLayer setFilters:filters];
//[previewLayer setFrame:self.view.bounds];
[self.view.layer addSublayer:previewLayer];
//starts the camera session
[mCaptureSession startRunning];
}
I'm building an app where I want to take a snapshot from the camera and show it in a UIImageView. I'm able to take the snapshot, but the AVCaptureVideoPreviewLayer is not visible in the screenshot. Does anyone know how to do that?
Here is my code:
@implementation ViewController
CGRect imgRect;
AVCaptureVideoPreviewLayer *previewLayer;
AVCaptureVideoDataOutput *output;
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
//Capture Session
AVCaptureSession *session = [[AVCaptureSession alloc]init];
session.sessionPreset = AVCaptureSessionPresetPhoto;
//Add device
AVCaptureDevice *device =
[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
//Input
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
if (!input)
{
NSLog(#"No Input");
}
[session addInput:input];
//Output
output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
//Preview
previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
CGFloat x = self.view.bounds.size.width * 0.5 - 128;
imgRect = CGRectMake(x, 64, 256, 256);
previewLayer.frame = imgRect;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];
//Start capture session
[session startRunning];
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (IBAction)TakeSnapshot:(id)sender {
self.imgResult.image = self.pb_takeSnapshot;
}
- (UIImage *)pb_takeSnapshot {
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
@end
A bit of help is very much appreciated.
Thank you in advance
Gilbert Avezaat
You should use AVCaptureStillImageOutput to get an image from the camera connection. Here is how you could do it:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
// jpegStillImageNSDataRepresentation: below requires JPEG output, and AVVideoCodecKey
// cannot be combined with kCVPixelBufferPixelFormatTypeKey in outputSettings.
stillImageOutput.outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
}];
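The connection referenced above is not shown in the snippet. Assuming the still image output has been added to the capture session created in the question's viewDidLoad (called session there), one way to obtain it is this minimal sketch:

[session addOutput:stillImageOutput];
// Ask the output for its video connection once it is attached to the session.
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];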
First check whether the image is returned or not. If it is, then:
- (IBAction)TakeSnapshot:(id)sender {
self.imgResult.image = self.pb_takeSnapshot;
[self.view bringSubviewToFront:self.imgResult];
}
Hope it helps you.
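Spelling that check out explicitly, a small sketch using the names from the question:

- (IBAction)TakeSnapshot:(id)sender {
    UIImage *snapshot = [self pb_takeSnapshot];
    if (snapshot) {
        // Only update the image view if a snapshot was actually produced.
        self.imgResult.image = snapshot;
        [self.view bringSubviewToFront:self.imgResult];
    } else {
        NSLog(@"pb_takeSnapshot returned no image");
    }
}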
I am trying to apply a CIFilter onto live camera feed (and be able to capture a filtered still image).
I have seen on StackOverflow some code pertaining the issue, but I haven't been able to get it to work.
My issue is that in the method captureOutput the filter seems correctly applied (I put a breakpoint in there and QuickLooked it), but I don't see it in my UIView (I see the original feed, without the filter).
Also I am not sure which output I should add to the session:
[self.session addOutput: self.stillOutput]; //AVCaptureStillImageOutput
[self.session addOutput: self.videoDataOut]; //AVCaptureVideoDataOutput
And which of those I should iterate through when looking for a connection (in findVideoConnection).
I am totally confused.
Here's some code:
viewDidLoad
-(void)viewDidLoad {
[super viewDidLoad];
self.shutterButton.userInteractionEnabled = YES;
self.context = [CIContext contextWithOptions: @{kCIContextUseSoftwareRenderer : @(YES)}];
self.filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[self.filter setValue:@15 forKey:kCIInputRadiusKey];
}
prepare session
-(void)prepareSessionWithDevicePosition: (AVCaptureDevicePosition)position {
AVCaptureDevice* device = [self videoDeviceWithPosition: position];
self.currentPosition = position;
self.session = [[AVCaptureSession alloc] init];
self.session.sessionPreset = AVCaptureSessionPresetPhoto;
NSError* error = nil;
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice: device error: &error];
if ([self.session canAddInput: self.deviceInput]) {
[self.session addInput: self.deviceInput];
}
AVCaptureVideoPreviewLayer* previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession: self.session];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
self.videoDataOut = [AVCaptureVideoDataOutput new];
[self.videoDataOut setSampleBufferDelegate: self queue:dispatch_queue_create("bufferQueue", DISPATCH_QUEUE_SERIAL)];
self.videoDataOut.alwaysDiscardsLateVideoFrames = YES;
CALayer* rootLayer = [[self view] layer];
rootLayer.masksToBounds = YES;
CGRect frame = self.previewView.frame;
previewLayer.frame = frame;
[rootLayer insertSublayer: previewLayer atIndex: 1];
self.stillOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary* outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
self.stillOutput.outputSettings = outputSettings;
[self.session addOutput: self.stillOutput];
//tried [self.session addOutput: self.videoDataOut];
//and didn't work (filtered image didn't show, and also couldn't take pictures)
[self findVideoConnection];
}
find video connection
-(void)findVideoConnection {
for (AVCaptureConnection* connection in self.stillOutput.connections) {
//also tried self.videoDataOut.connections
for (AVCaptureInputPort* port in [connection inputPorts]) {
if ([[port mediaType] isEqualToString: AVMediaTypeVideo]) {
self.videoConnection = connection;
break;
}
}
if (self.videoConnection != nil) {
break;
}
}
}
capture output, apply filter and put it in the CALayer
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// turn buffer into an image we can manipulate
CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];
// filter
[self.filter setValue:result forKey:@"inputImage"];
// render image
CGImageRef blurredImage = [self.context createCGImage:self.filter.outputImage fromRect:result.extent];
UIImage* img = [UIImage imageWithCGImage: blurredImage];
//Did this to check whether the image was actually filtered.
//And surprisingly it was.
dispatch_async(dispatch_get_main_queue(), ^{
//The image present in my UIView is for some reason not blurred.
self.previewView.layer.contents = (__bridge id)blurredImage;
CGImageRelease(blurredImage);
});
}
I have a video, and I want to add a watermark at the end of it. The requirement is to blur the last second and put a GPUImageUIElement over the video. Here is what I want:
But it turns out to be like this:
I only want to blur the movie, not the label.
And here's my process:
self.originMovie = [[GPUImageMovie alloc] initWithAsset:video];
self.regularFilter = [[GPUImageFilter alloc] init];
self.blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
self.blendFilter.mix = 1.0;
self.combinationViewElement = [[GPUImageUIElement alloc] initWithView:self.combinationView];
self.regularFilter.frameProcessingCompletionBlock = ^(GPUImageOutput *output, CMTime time){
[weakSelf updateCombinationWithTimestamp:time];
[weakSelf.combinationViewElement update];
};
AVAssetTrack *videoTrack = [[video tracksWithMediaType:AVMediaTypeVideo] firstObject];
GPUImageUIElement *watermarkElement = [[GPUImageUIElement alloc] initWithView:self.watermarkView];
self.watermarkBlurFilter = [[GPUImageGaussianBlurPositionFilter alloc] init];
self.watermarkBlurFilter.blurSize = 0;
GPUImageFilter *filter = [[GPUImageFilter alloc] init];
filter.frameProcessingCompletionBlock = ^(GPUImageOutput *output , CMTime time) {
if (isnan(weakSelf.originMovie.progress)) {
return;
}
CGFloat duration = CMTimeGetSeconds(weakSelf.originMovie.asset.duration);
if ([weakSelf isWithinWatermarkDuration]) {
// Starting from the last second, blurSize increases linearly from 0 to 3
weakSelf.watermarkBlurFilter.blurSize = ((weakSelf.originMovie.progress * duration ) + (1 - duration)) * 3;
weakSelf.watermarkView.hidden = NO;
}
[watermarkElement update];
if (weakSelf.progressHandler) {
dispatch_async(dispatch_get_main_queue(), ^{
weakSelf.progressHandler(weakSelf.originMovie.progress);
});
}
};
GPUImageAlphaBlendFilter *watermarkBlendFilter = [[GPUImageAlphaBlendFilter alloc] init];
watermarkBlendFilter.mix = 1.0;
self.movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:self.videoURL size:videoTrack.naturalSize];
self.movieWriter.shouldPassthroughAudio = YES;
self.originMovie.audioEncodingTarget = self.movieWriter;
[self.originMovie enableSynchronizedEncodingUsingMovieWriter:self.movieWriter];
[self.originMovie addTarget:self.regularFilter];
[self.regularFilter addTarget:self.blendFilter];
[self.combinationViewElement addTarget:self.blendFilter];
[self.blendFilter addTarget:self.watermarkBlurFilter];
[self.watermarkBlurFilter addTarget:filter];
[filter addTarget:watermarkBlendFilter];
[watermarkElement addTarget:watermarkBlendFilter];
[watermarkBlendFilter addTarget:self.movieWriter];
[self.movieWriter startRecording];
[self.originMovie startProcessing];
The combinationViewElement is part of another process and isn't related to this question.
I don't know if I've made a mistake somewhere, so if anyone has any idea, please let me know. I'd appreciate it.
That's because you're adding your blur filter after the blend where you emboss the UI element onto your input movie. You need to move your blur filter so that it comes after self.regularFilter, but before its output is fed into self.blendFilter.
Having the blur after the blend will just blur the entire blended image.
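For illustration, a sketch of the reordered target chain the answer describes, reusing the property names from the question (the blur now sits between regularFilter and blendFilter; the rest of the chain is unchanged):

[self.originMovie addTarget:self.regularFilter];
// Blur the movie frames before any UI element is blended in,
// so only the movie content gets blurred, not the label.
[self.regularFilter addTarget:self.watermarkBlurFilter];
[self.watermarkBlurFilter addTarget:self.blendFilter];
[self.combinationViewElement addTarget:self.blendFilter];
[self.blendFilter addTarget:filter];
[filter addTarget:watermarkBlendFilter];
[watermarkElement addTarget:watermarkBlendFilter];
[watermarkBlendFilter addTarget:self.movieWriter];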