iOS capture image from GPUImage - ios

I am trying to capture an image with an app that uses GPUImage. I have the camera set up like this
self.videoCamera = [[GPUImageVideoCamera alloc]
initWithSessionPreset:AVCaptureSessionPresetHigh
cameraPosition:AVCaptureDevicePositionBack];
_videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
[_videoCamera startCameraCapture];
[_videoCamera addTarget:sample1ImageView];
and I use a custom filter:
radFilter = [[GPUImageCustomFilter alloc] init];
[_videoCamera addTarget:cusFilter];
[cusFilter addTarget:imageView];
I then use this code for the camera capture:
[_videoCamera pauseCameraCapture];
[radFilter forceProcessingAtSize:CGSizeMake(600, 600)];
[radFilter useNextFrameForImageCapture];
UIImage* capturedImage = [radFilter imageFromCurrentFramebuffer];
UIImageWriteToSavedPhotosAlbum(capturedImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
[_videoCamera resumeCameraCapture];
All I get is white pictures, with RGB values of 0,0,0.
I tried saving both in an IBAction and in a rac_signalForControlEvents, and I used dispatch, but nothing changed. Can anyone tell me what I am doing wrong?
Thank you,
Alex

Try using GPUImageStillCamera, like this.
In your .h file:
GPUImageStillCamera *stillCamera;
GPUImageFilter *selectedFilter;
GPUImageView *filterView;
In your .m file's viewDidLoad:
selectedFilter = [[GPUImageFilter alloc]init];
filterView=[[GPUImageView alloc]init];
stillCamera=[[GPUImageStillCamera alloc]initWithSessionPreset:AVCaptureSessionPresetPhoto cameraPosition:AVCaptureDevicePositionFront];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
[stillCamera addTarget:selectedFilter];
[selectedFilter addTarget:filterView];
[stillCamera startCameraCapture];
In the UIButton's click event for capturing the image, do this. I hope it helps:
[stillCamera capturePhotoAsImageProcessedUpToFilter:selectedFilter withCompletionHandler:^(UIImage *processedImage, NSError *error)
{
UIImageWriteToSavedPhotosAlbum(processedImage, self, nil, nil);
}];
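If you would rather keep the filter-chain capture from the question instead of switching to GPUImageStillCamera: one common cause of blank captures in that setup is ordering. In recent GPUImage versions a filter only retains a framebuffer for readback if useNextFrameForImageCapture is called before that frame is processed, so pausing the camera first means no frame ever arrives to be captured. Below is a minimal sketch of the reordered capture, assuming radFilter and cusFilter in the question refer to the same filter instance:
// Sketch: request the next rendered frame *before* it is processed,
// read it back with imageFromCurrentFramebuffer, then pause the camera.
[radFilter useNextFrameForImageCapture];
UIImage *capturedImage = [radFilter imageFromCurrentFramebuffer];
[_videoCamera pauseCameraCapture];
if (capturedImage != nil) {
UIImageWriteToSavedPhotosAlbum(capturedImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
[_videoCamera resumeCameraCapture];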

You can use this code to capture the image.
- (UIImage *) screenshot {
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
- (IBAction)btnCaptureClicked:(id)sender
{
[videoCamera pauseCameraCapture];
[filter useNextFrameForImageCapture];
//[filter imageFromCurrentFramebuffer];
UIImage *capturedImage= [self screenshot];
if(capturedImage != nil)
{
UIImageWriteToSavedPhotosAlbum(capturedImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
[videoCamera resumeCameraCapture];
}
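Both answers above pass image:didFinishSavingWithError:contextInfo: to UIImageWriteToSavedPhotosAlbum without showing the callback. Its signature is fixed by UIKit, so a minimal implementation looks like this:
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
// Called by UIKit once the write to the photo album has finished.
if (error != nil) {
NSLog(@"Error saving image: %@", error.localizedDescription);
} else {
NSLog(@"Image saved to the photo album");
}
}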

Related

Objective-C: Save zoomed capture image from camera with GPUImage?

I have implemented zoom functionality in the camera with GPUImage. But when I capture an image from the camera with zoom and save it, it is still saved as a normal picture (no zoom applied). Whatever mode I capture the image in, that is how it should be saved to the album. How can I solve this problem? Any suggestion will be great. Thanks guys. My code:
- (void)viewDidLoad {
[super viewDidLoad];
self.library = [[ALAssetsLibrary alloc] init];
[self setViewLayOut];
[self setupFilter];
[self setZoomFunctionlityOnCamera];
}
- (void)setupFilter;
{
videoCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
switch (filterType)
{
case GPUIMAGE_COLORINVERT:
{
self.title = #"Color Negative";
filter = [[GPUImageColorInvertFilter alloc] init];
};
break;
case GPUIMAGE_GRAYSCALE:
{
self.title = #"Black and White Positive";
filter = [[GPUImageGrayscaleFilter alloc] init];
};
break;
default: filter = [[GPUImageFilter alloc] init];
self.title = #"Color Positive";
break;
}
videoCamera.runBenchmark = YES;
filterView = (GPUImageView *)cameraView;
[filter addTarget:filterView];
[videoCamera addTarget:filter];
[videoCamera startCameraCapture];
}
- (IBAction)clickPhotoBtn:(id)sender {
if (!isCameraPermissionAccessed) {
[self showAccessDeinedMessage :@"Camera permission denied" withMessage:@"To enable, please go to settings and allow camera permission for this app."];
return;
}
[videoCamera capturePhotoAsJPEGProcessedUpToFilter:filter withCompletionHandler:^(NSData *processedJPEG, NSError *error){
if (error!=nil)
{
[self showErrorMessage:@"Unable to capture image" ];
return ;
}
else {
UIImage *image = [UIImage imageWithData:processedJPEG];
if (filterType == GPUIMAGE_GRAYSCALE) {
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
GPUImageColorInvertFilter *stillImageFilter = [[GPUImageColorInvertFilter alloc] init];
[stillImageSource addTarget:stillImageFilter];
[stillImageFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentFramebuffer];
UIImageWriteToSavedPhotosAlbum(currentFilteredVideoFrame, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
else {
UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
}
}];
}
Use the code below; it may be helpful to you:
+(UIImage*)croppedImageWithImage:(UIImage *)image zoom:(CGFloat)zoom
{
CGFloat zoomReciprocal = 1.0f / zoom;
CGPoint offset = CGPointMake(image.size.width * ((1.0f - zoomReciprocal) / 2.0f), image.size.height * ((1.0f - zoomReciprocal) / 2.0f));
CGRect croppedRect = CGRectMake(offset.x, offset.y, image.size.width * zoomReciprocal, image.size.height * zoomReciprocal);
CGImageRef croppedImageRef = CGImageCreateWithImageInRect([image CGImage], croppedRect);
UIImage* croppedImage = [[UIImage alloc] initWithCGImage:croppedImageRef scale:[image scale] orientation:[image imageOrientation]];
CGImageRelease(croppedImageRef);
return croppedImage;
}
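For example, assuming the current zoom factor is tracked in a property such as currentZoomLevel (a hypothetical name for whatever your zoom gesture updates) and the helper lives in the same class, it could be applied inside the capture completion handler before saving:
// Hypothetical usage: crop the captured JPEG to the on-screen zoom level, then save.
UIImage *image = [UIImage imageWithData:processedJPEG];
UIImage *zoomedImage = [[self class] croppedImageWithImage:image zoom:self.currentZoomLevel];
UIImageWriteToSavedPhotosAlbum(zoomedImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);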

GPUImage imageFromCurrentFramebuffer returning nil sometimes for GPUImageLookupFilter and its subclasses

I have been using GPUImage for my project and I am running into a problem where imageFromCurrentFramebuffer returns nil for some of the GPUImageLookupFilters.
I subclassed GPUImageFilterGroup, like GPUImageAmatorkaFilter does; my code is as follows:
-(MTLookupFilter *) initWithLookupImageName:(NSString *) lookupImageName {
self = [super init];
if (self) {
UIImage *image = [UIImage imageNamed:lookupImageName];
self.lookupImageSource = [[GPUImagePicture alloc] initWithImage:image];
GPUImageLookupFilter *lookupFilter = [[GPUImageLookupFilter alloc] init];
[self addFilter:lookupFilter];
[self.lookupImageSource addTarget:lookupFilter atTextureLocation:1];
[self.lookupImageSource processImage];
self.initialFilters = [NSArray arrayWithObjects:lookupFilter, nil];
self.terminalFilter = lookupFilter;
}
return self;
}
I have several objects of this class added to an array, and I use:
- (IBAction)filterAction:(id)sender {
NSInteger index = arc4random()%self.filtersArray.count;
id filter = self.filtersArray[index];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.fullImage];
UIImage *filteredimage = nil;
[stillImageSource addTarget:filter];
[stillImageSource processImage];
[filter useNextFrameForImageCapture];
filteredimage = [filter imageFromCurrentFramebuffer];
if (filteredimage) {
self.imageView.image = filteredimage;
} else {
NSLog(#"Filtered image is nil");
}
}
The image returned from imageFromCurrentFramebuffer is sometimes nil and I do not understand its cause. I would be thankful for any help. Sometimes the image is nil even for the stock filters GPUImageAmatorkaFilter, GPUImageSoftEleganceFilter and GPUImageMissEtikateFilter, so I know it is not a problem with my subclass.
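One thing worth checking, although I cannot confirm it is the cause here: in current GPUImage versions the framebuffer is only kept around for readback if useNextFrameForImageCapture is called before the source image is processed, and the GPUImagePicture must stay alive until imageFromCurrentFramebuffer returns. A sketch of that ordering applied to the code above:
// Sketch: request capture *before* processImage so the framebuffer is retained for readback.
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.fullImage];
[stillImageSource addTarget:filter];
[filter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *filteredimage = [filter imageFromCurrentFramebuffer];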

Add UIImage Element using GPUImage Framework

I am using Brad Larson's GPUImage framework to add a UIImage element. I have successfully added the image, but the main issue is that the image is getting stretched to the video's aspect ratio.
Here is my code:
GPUImageView *filterView = (GPUImageView *)self.view;
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
transformFilter=[[GPUImageTransformFilter alloc]init];
CGAffineTransform t=CGAffineTransformMakeScale(0.5, 0.5);
[(GPUImageTransformFilter *)filter setAffineTransform:t];
[videoCamera addTarget:transformFilter];
filter = [[GPUImageOverlayBlendFilter alloc] init];
[videoCamera addTarget:filter];
inputImage = [UIImage imageNamed:@"eye.png"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture forceProcessingAtSize:CGSizeMake(50, 50)];
[sourcePicture processImage];
[sourcePicture addTarget:filter];
[sourcePicture addTarget:transformFilter];
[filter addTarget:filterView];
[videoCamera startCameraCapture];
I have tried to use the transform filter before blending the image, but it isn't getting scaled.
I want the image to appear at the center. How do I do it?
Thanks
You are on the right track; you just have a few things out of place.
The following code will load an overlay image and apply a transformation to keep it at its actual size. By default it will be centered over the video.
GPUImageView *filterView = (GPUImageView *)self.view;
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
filter = [[GPUImageOverlayBlendFilter alloc] init];
transformFilter = [[GPUImageTransformFilter alloc]init];
[videoCamera addTarget:filter];
[transformFilter addTarget:filter];
// setup overlay image
inputImage = [UIImage imageNamed:@"eye.png"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
// determine the necessary scaling to keep image at actual size
CGFloat tx = inputImage.size.width / 480.0; // 480/640: based on video camera preset
CGFloat ty = inputImage.size.height / 640.0;
// apply transform to filter
CGAffineTransform t = CGAffineTransformMakeScale(tx, ty);
[(GPUImageTransformFilter *)transformFilter setAffineTransform:t];
//
[sourcePicture addTarget:filter];
[sourcePicture addTarget:transformFilter];
[sourcePicture processImage];
[filter addTarget:filterView];
[videoCamera startCameraCapture];

iOS - Setting blurred image on top of other views, odd issues

So, I've got an odd scenario.
In my iOS app, I'm trying to blur the content area of the screen when a popover is opened.
I have this working when using Core Image, but only with Gaussian blur; none of the other blurs work, which is odd.
I tried doing the same with GPUImage, and it blurs far faster, but doesn't actually put the view on top of the other views!
To summarize: in the source below, setBlurOnView works properly; however, setBlurOnViewWithGPUImage does not appear to work. The blur view (tag 6110) is created, but the app doesn't actually blur.
Note: This is on iOS 6, in the simulator.
Here's the relevant source:
// ScreenBlur.m
#import <QuartzCore/QuartzCore.h>
#import <CoreImage/CoreImage.h>
#import <GPUImage/GPUImage.h>
#import "ScreenBlur.h"
#import "GlobalData.h"
#import "Logger.h"
@implementation ScreenBlur
+ (void) setBlurOnViewWithGPUImage:(UIView*)view {
GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:[self captureScreenInRect:view.frame inView:view]];
GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
[imageSource addTarget:blur];
[imageSource processImage];
[self setImage:[imageSource imageFromCurrentlyProcessedOutput] toView:view];
}
+ (void) setBlurOnView:(UIView *)view {
//http://stackoverflow.com/questions/17041669/creating-a-blurring-overlay-view
CIImage *inputImage = [CIImage imageWithCGImage:[self captureScreenInRect:view.frame inView:view].CGImage];
//CIContext *context = [CIContext contextWithOptions:nil];
if ([GlobalData getInstance].ciContext == nil) {
[Logger Log:@"ciContext does not exist, creating..." fromClass:@"ScreenBlur"];
// [GlobalData getInstance].ciContext = [CIContext contextWithOptions:nil]; //cpu context
[GlobalData getInstance].ciContext = [CIContext contextWithEAGLContext:[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]];
}
//set up the blur filter
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:3.0f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
// CIGaussianBlur has a tendency to shrink the image a little,
// this ensures it matches up exactly to the bounds of our original image
CGImageRef cgImage = [[GlobalData getInstance].ciContext createCGImage:result fromRect:[inputImage extent]];
[self setImage:[UIImage imageWithCGImage:cgImage] toView:view];
}
+ (void) setImage:(UIImage*)blurredImage toView:(UIView*)view {
UIView *blurView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, blurredImage.size.width, blurredImage.size.height)];
[blurView setBackgroundColor:[UIColor colorWithPatternImage:blurredImage]];
[blurView setTag:6110];
//set the image as the foreground for the view
[view addSubview:blurView];
[view bringSubviewToFront:blurView];
}
//same as the method above, but resizes the screenshot before applying the blur for increased performance at the expense of image quality.
+ (void) setBlurOnViewPerformanceMode:(UIView *)view {
//http://stackoverflow.com/questions/17041669/creating-a-blurring-overlay-view
UIImage *screenShot = [self imageWithImage:[self captureScreenInRect:view.frame inView:view] scaledToSize:CGSizeMake(view.frame.size.width / 2, view.frame.size.height / 2)];
CIImage *inputImage = [CIImage imageWithCGImage:screenShot.CGImage];
//CIContext *context = [CIContext contextWithOptions:nil];
if ([GlobalData getInstance].ciContext == nil) {
[Logger Log:@"ciContext does not exist, creating..." fromClass:@"ScreenBlur"];
// [GlobalData getInstance].ciContext = [CIContext contextWithOptions:nil]; //cpu context
[GlobalData getInstance].ciContext = [CIContext contextWithEAGLContext:[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]];
}
//set up the blur filter
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:3.0f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
//CGImageRef cgImage = [[GlobalData getInstance].ciContext createCGImage:result fromRect:[inputImage extent]];
CGImageRef cgImage = [[GlobalData getInstance].ciContext createCGImage:result fromRect:[inputImage extent]];
[self setImage:[self imageWithImage:[UIImage imageWithCGImage:cgImage] scaledToSize:view.frame.size] toView:view];
}
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
//UIGraphicsBeginImageContext(newSize);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
+ (void) removeBlurFromView:(UIView *)view {
for (UIView *subView in view.subviews) {
if (subView.tag == 6110) {
[subView removeFromSuperview];
}
}
}
+ (UIImage *)captureScreenInRect:(CGRect)captureFrame inView:(UIView*) view {
CALayer *layer;
layer = view.layer;
UIGraphicsBeginImageContext(view.bounds.size);
CGContextClipToRect (UIGraphicsGetCurrentContext(),captureFrame);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenImage;
}
@end
And then in my view controller, it's simply called with
[ScreenBlur setBlurOnView:self.view];
I found a workaround for this (or, who knows, maybe this is how it was supposed to be done).
//ScreenBlur.m
+ (GPUImageView*) getBlurredImageWithGPUImageFromView:(UIView*)view {
GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:[self captureScreenInRect:view.frame inView:view] smoothlyScaleOutput:true];
GPUImageFastBlurFilter *blur = [[GPUImageFastBlurFilter alloc] init];
[blur setBlurPasses:3];
[imageSource addTarget:blur];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:view.frame];
[blur addTarget:filteredView];
[imageSource processImage];
return filteredView;
}
//ViewController.m
//blur the main screen
GPUImageView *blurred = [ScreenBlur getBlurredImageWithGPUImageFromView:self.view];
[blurred setTag:6110];
[self.view addSubview:blurred];
[self.view bringSubviewToFront:blurred];
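If you are on a GPUImage release where GPUImageFastBlurFilter is no longer available (it was folded into the Gaussian blur in later versions), a roughly equivalent sketch uses GPUImageGaussianBlurFilter and its blurRadiusInPixels property instead of multiple passes; the method name here is just a placeholder:
+ (GPUImageView *)getBlurredImageWithGaussianFromView:(UIView *)view {
// Same idea as above, but with the plain Gaussian blur filter.
GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:[self captureScreenInRect:view.frame inView:view] smoothlyScaleOutput:YES];
GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
blur.blurRadiusInPixels = 4.0; // rough stand-in for three fast-blur passes
[imageSource addTarget:blur];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:view.frame];
[blur addTarget:filteredView];
[imageSource processImage];
return filteredView;
}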

GPUImage Green Screen

I am trying to do a green screen effect using GPUImage. The effect I am trying to achieve is to play a movie of curtains opening and replace the white part of the movie with the image. This will display the curtains and then the curtains open to display the image.
I have the movie displaying correctly, and the white part of the movie shows as black, but the image does not display when the curtains open. What am I doing wrong?
NSURL *sampleURL = [[NSBundle mainBundle] URLForResource:@"CurtainsOpening" withExtension:@"m4v"];
GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
movieFile.playAtActualSpeed = YES;
NSLog(#"movie file = %#", movieFile);
GPUImageChromaKeyBlendFilter *filter = [[GPUImageChromaKeyBlendFilter alloc] init];
[(GPUImageChromaKeyBlendFilter *)filter setColorToReplaceRed:1.0 green:1.0 blue:1.0];
[(GPUImageChromaKeyBlendFilter *)filter setThresholdSensitivity:0.0]; //was 0.4
[movieFile addTarget:filter];
UIImage *inputImage = [UIImage imageNamed:@"curtains.jpg"];
NSLog(@"inputImage = %@", inputImage);
GPUImagePicture *overlayPicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
NSLog(@"overlayPicture = %@", overlayPicture);
[overlayPicture processImage];
[overlayPicture addTarget:filter];
//[movieFile addTarget:overlayPicture];
GPUImageView *view0 = [[GPUImageView alloc] initWithFrame:self.view.frame];
[view0 setFillMode:kGPUImageFillModeStretch];
NSLog(#"view0 = %#", view0);
[filter addTarget:view0];
[self.view addSubview:view0];
[view0 bringSubviewToFront:self.view];
NSLog(#"frame = %f %f", self.view.frame.size.width, self.view.frame.size.height);
[movieFile startProcessing];
I figured it out. If anyone wants to know, you need to make the GPUImagePicture variable an instance variable so that it is not released from memory when the method exits.
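In code, the change is simply to hold the picture in an instance variable (or strong property) so ARC keeps it alive while the movie is processing; MyViewController is a placeholder class name:
@interface MyViewController () {
GPUImagePicture *overlayPicture; // retained for the lifetime of the controller, not just the setup method
}
@end
// ...inside the setup method, assign to the ivar instead of a local variable:
overlayPicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[overlayPicture processImage];
[overlayPicture addTarget:filter];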
