dataProvider is 0x0 / nil (GPUImage Framework) - ios

I wrote some code that creates a filter which can be controlled via a UISlider, but when I slide the UISlider, the app crashes.
My code:
.m file:
- (void)viewDidLoad {
    [_sliderBrightness addTarget:self action:@selector(brightnessFilter) forControlEvents:UIControlEventValueChanged];
    _sliderBrightness.minimumValue = -1.0;
    _sliderBrightness.maximumValue = 1.0;
    _sliderBrightness.value = 0.0;
}
- (IBAction)sliderBrightness:(UISlider *)sender {
    CGFloat midpoint = [(UISlider *)sender value];
    [(GPUImageBrightnessFilter *)brightFilter setBrightness:midpoint - 0.1];
    [(GPUImageBrightnessFilter *)brightFilter setBrightness:midpoint + 0.1];
    [sourcePicture processImage];
}
- (void)brightnessFilter {
    UIImage *inputImage = _imgView.image;
    sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
    brightFilter = [[GPUImageBrightnessFilter alloc] init];
    GPUImageView *imgView2 = (GPUImageView *)self.view;
    [brightFilter useNextFrameForImageCapture];
    [sourcePicture addTarget:brightFilter];
    [sourcePicture processImage];
    UIImage *outputImage = [brightFilter imageFromCurrentFramebufferWithOrientation:0];
    [_imgView setImage:outputImage];
}
Error:
GPUImageFramebuffer.m:
    }
    else
    {
        [self activateFramebuffer];
        rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
        glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
        dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
        [self unlock]; // Don't need to keep this around anymore
    }
In this line of code:
[self activateFramebuffer];
Error message:
Thread 1: EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
Console:
self = (GPUImageFramebuffer *const) 0x10a0a6960
rawImagePixels = (GLubyte *) 0x190
dataProvider = (CGDataProviderRef) 0x0
renderTarget = (CVPixelBufferRef) 0x8
Maybe the dataProvider causes the crash, but I don't really know because I'm new to developing iOS apps.

This obviously isn't going to work (and shouldn't even compile) because GPUImageBrightnessFilter has no -setTopFocusLevel: or -setBottomFocusLevel: method. You copied this from my sample application without changing these methods to the one appropriate to a brightness filter (which is the brightness property).
It's also rather confusing (and potentially problematic) to have both a brightnessFilter instance variable and -brightnessFilter method. You probably want to rename the former to make it clear that's where you're performing your initial setup of the filter and source image. You'll also need to call that in your view controller's setup (after your Nib is loaded).
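Applied to a brightness filter, that restructuring might look roughly like the following. This is only a sketch that reuses the poster's sourcePicture, brightFilter, _sliderBrightness, and _imgView names, and it assumes the slider's Value Changed event is wired to -sliderBrightness: in the nib rather than added in code:
- (void)viewDidLoad {
    [super viewDidLoad];
    _sliderBrightness.minimumValue = -1.0;
    _sliderBrightness.maximumValue = 1.0;
    _sliderBrightness.value = 0.0;
    [self setupBrightnessFilter]; // renamed from -brightnessFilter; runs the one-time setup
}

- (void)setupBrightnessFilter {
    // Build the source picture and filter chain once, after the nib has loaded.
    sourcePicture = [[GPUImagePicture alloc] initWithImage:_imgView.image smoothlyScaleOutput:YES];
    brightFilter = [[GPUImageBrightnessFilter alloc] init];
    [sourcePicture addTarget:brightFilter];
}

- (IBAction)sliderBrightness:(UISlider *)sender {
    brightFilter.brightness = sender.value;       // the brightness property, in the range -1.0 to 1.0
    [brightFilter useNextFrameForImageCapture];   // must be called before each capture
    [sourcePicture processImage];
    _imgView.image = [brightFilter imageFromCurrentFramebufferWithOrientation:UIImageOrientationUp];
}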

Related

Image auto-rotates after using CIFilter

I am writing an app that lets users take a picture and then edit it. I am working on implementing tools with UISliders for brightness/contrast/saturation and am using the Core Image Filter class to do so. When I open the app, I can take a picture and display it correctly. However, if I choose to edit a picture, and then use any of the described slider tools, the image will rotate counterclockwise 90 degrees. Here's the code in question:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.navigationItem.hidesBackButton = YES; //hide default nav
    //get image to display
    DBConnector *dbconnector = [[DBConnector alloc] init];
    album.moments = [dbconnector getMomentsForAlbum:album.title];
    Moment *mmt = [album.moments firstObject];
    _imageView.image = [mmt.moment firstObject];
    CGImageRef aCGImage = _imageView.image.CGImage;
    CIImage *aCIImage = [CIImage imageWithCGImage:aCGImage];
    _editor = [CIFilter filterWithName:@"CIColorControls" keysAndValues:@"inputImage", aCIImage, nil];
    _context = [CIContext contextWithOptions:nil];
    [self startEditControllerFromViewController:self];
}
//cancel and finish buttons
- (BOOL)startEditControllerFromViewController:(UIViewController *)controller {
    [_cancelEdit addTarget:self action:@selector(cancelEdit:) forControlEvents:UIControlEventTouchUpInside];
    [_finishEdit addTarget:self action:@selector(finishEdit:) forControlEvents:UIControlEventTouchUpInside];
    return YES;
}
//adjust brightness
- (IBAction)brightnessSlider:(UISlider *)sender {
    [_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey:@"inputBrightness"];
    CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
    _imageView.image = [UIImage imageWithCGImage:cgiimg];
    CGImageRelease(cgiimg);
}
I believe that the problem stems from the brightnessSlider method, based on breakpoints that I've placed. Is there a way to stop the auto-rotating of my photo? If not, how can I rotate it back to the normal orientation?
Mere minutes after posting, I figured out the answer to my own question. Go figure. Anyway, I simply changed the slider method to the following:
- (IBAction)brightnessSlider:(UISlider *)sender {
    [_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey:@"inputBrightness"];
    CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
    UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
    CGFloat originalScale = _imageView.image.scale;
    _imageView.image = [UIImage imageWithCGImage:cgiimg scale:originalScale orientation:originalOrientation];
    CGImageRelease(cgiimg);
}
This simply records the original orientation and scale of the image, and re-sets them when the data is converted back to a UIImage. Hope this helps someone else!

Global variable is not being updated fast enough 50% of the time

I have a photo taking app. When the user presses the button to take a photo, I set a global NSString variable called self.hasUserTakenAPhoto equal to YES. This works perfectly 100% of the time when using the rear facing camera. However, it only works about 50% of the time when using the front facing camera and I have no idea why.
Below are the important pieces of code and a quick description of what they do.
Here is my viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.topHalfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height/2);
    self.takingPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.afterPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.bottomHalfView.frame = CGRectMake(0, 240, self.view.bounds.size.width, self.view.bounds.size.height/2);
    PFFile *imageFile = [self.message objectForKey:@"file"];
    NSURL *imageFileURL = [[NSURL alloc] initWithString:imageFile.url];
    imageFile = nil;
    self.imageData = [NSData dataWithContentsOfURL:imageFileURL];
    imageFileURL = nil;
    self.topHalfView.image = [UIImage imageWithData:self.imageData];

    //START CREATING THE SESSION
    self.session = [[AVCaptureSession alloc] init];
    [self.session setSessionPreset:AVCaptureSessionPresetPhoto];
    self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error;
    self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];
    if ([self.session canAddInput:self.deviceInput])
        [self.session addInput:self.deviceInput];
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    self.rootLayer = [[self view] layer];
    [self.rootLayer setMasksToBounds:YES];
    [_previewLayer setFrame:CGRectMake(0, 240, self.rootLayer.bounds.size.width, self.rootLayer.bounds.size.height/2)];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [self.rootLayer insertSublayer:_previewLayer atIndex:0];
    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    self.videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [self.session addOutput:self.videoOutput];
    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [self.videoOutput setSampleBufferDelegate:self queue:queue];
    [_session startRunning];
}
The important part of viewDidLoad starts at the //START CREATING THE SESSION comment.
I basically create the session and then start running it. I have set this view controller as an AVCaptureVideoDataOutputSampleBufferDelegate, so as soon as the session starts running, the method below starts being called as well.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    //Sample buffer data is being sent, but don't actually use it until self.hasUserTakenAPhoto has been set to YES.
    NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);
    if ([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {
        //Now that self.hasUserTakenAPhoto is equal to YES, grab the current sample buffer and use it for the value of self.image aka the captured photo.
        self.image = [self imageFromSampleBuffer:sampleBuffer];
    }
}
This code receives the video output from the camera every second, but I don't actually do anything with it until self.hasUserTakenAPhoto is equal to YES. Once that has a string value of YES, I use the current sampleBuffer from the camera and place it in my global variable called self.image.
So, here is when self.hasUserTakenAPhoto is actually set to YES.
Below is my IBAction code that is called when the user presses the button to capture a photo. A lot happens when this code runs, but really all that matters is the very first statement: self.hasUserTakenAPhoto = @"YES";
- (IBAction)stillImageCapture {
    self.hasUserTakenAPhoto = @"YES";
    [self.session stopRunning];
    if (self.inputDevice.position == 2) {
        self.image = [self selfieCorrection:self.image];
    } else {
        self.image = [self rotate:UIImageOrientationRight];
    }
    CGFloat widthToHeightRatio = _previewLayer.bounds.size.width / _previewLayer.bounds.size.height;
    CGRect cropRect;
    // Set the crop rect's smaller dimension to match the image's smaller dimension, and
    // scale its other dimension according to the width:height ratio.
    if (self.image.size.width < self.image.size.height) {
        cropRect.size.width = self.image.size.width;
        cropRect.size.height = cropRect.size.width / widthToHeightRatio;
    } else {
        cropRect.size.width = self.image.size.height * widthToHeightRatio;
        cropRect.size.height = self.image.size.height;
    }
    // Center the rect in the longer dimension
    if (cropRect.size.width < cropRect.size.height) {
        cropRect.origin.x = 0;
        cropRect.origin.y = (self.image.size.height - cropRect.size.height)/2.0;
        NSLog(@"Y Math: %f", (self.image.size.height - cropRect.size.height));
    } else {
        cropRect.origin.x = (self.image.size.width - cropRect.size.width)/2.0;
        cropRect.origin.y = 0;
        float cropValueDoubled = self.image.size.height - cropRect.size.height;
        float final = cropValueDoubled/2;
        finalXValueForCrop = final;
    }
    CGRect cropRectFinal = CGRectMake(cropRect.origin.x, finalXValueForCrop, cropRect.size.width, cropRect.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self.image CGImage], cropRectFinal);
    UIImage *image2 = [[UIImage alloc] initWithCGImage:imageRef];
    self.image = image2;
    CGImageRelease(imageRef);
    self.bottomHalfView.image = self.image;
    if ([self.hasUserTakenAPhoto isEqual:@"YES"]) {
        [self.takingPhotoView setHidden:YES];
        self.image = [self screenshot];
        [_afterPhotoView setHidden:NO];
    }
}
So basically: the viewDidLoad method runs and the session is started, the session sends everything the camera sees to the captureOutput method, and as soon as the user presses the "take a photo" button we set the string value of self.hasUserTakenAPhoto to YES and the session stops. Since self.hasUserTakenAPhoto is now equal to YES, the captureOutput method places the very last camera buffer into the self.image object for me to use.
I just can't figure this out because like I said it works 100% of the time when using the rear facing camera. However, when using the front facing camera it only works 50% of the time.
I have narrowed the problem down to the fact that self.hasUserTakenAPhoto does not update to YES fast enough when using the front facing camera. I know this because the second block of code I posted contains the statement NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);.
When this works correctly, and the user has just pressed the button to capture a photo (which also stops the session), the very last run of that NSLog prints the correct value of YES.
However, when it doesn't work correctly and doesn't update fast enough, the very last time it runs it still prints to the log with a value of null.
Any ideas on why self.hasUserTakenAPhoto does not update fast enough 50% of the time when using the front facing camera? Even if we can't figure that out, it doesn't matter. I just need help coming up with an alternate solution to this.
Thanks for the help.
I think it's a scheduling problem. At the return point of your methods
– captureOutput:didOutputSampleBuffer:fromConnection:
– captureOutput:didDropSampleBuffer:fromConnection:
add a CFRunLoopRun()
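Applied to the capture delegate method from the question, that suggestion would look something like this (a sketch only, reusing the poster's property and imageFromSampleBuffer: helper; whether CFRunLoopRun() actually cures the timing issue is the hypothesis above, not something verified here):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);
    if ([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {
        self.image = [self imageFromSampleBuffer:sampleBuffer];
    }
    CFRunLoopRun(); // run the current thread's run loop before returning, as suggested
}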

GPUImage RawDataInput and RawDataOutput rotates image

I'm trying to use GPUImage rawInput and rawOutput to do some custom stuff, but somehow my output is rotated.
I'm setting up my GPUImageVideoCamera like this:
self.videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
self.videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
self.videoCamera.runBenchmark = YES;
And then just this.
CGSize rawSize = CGSizeMake(640.0, 480.0);
GPUImageRawDataOutput *rawOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:rawSize resultsInBGRAFormat:YES];
GPUImageRawDataInput __block *rawInput = [[GPUImageRawDataInput alloc] initWithBytes:[rawOutput rawBytesForImage] size:rawSize];
__weak GPUImageRawDataOutput *weakRawOutput = rawOutput;
[rawOutput setNewFrameAvailableBlock:^{
    [weakRawOutput rawBytesForImage];
    [rawInput updateDataFromBytes:[weakRawOutput rawBytesForImage] size:rawSize];
    [rawInput processData];
}];
and of course
[self.videoCamera addTarget:rawOutput];
[rawInput addTarget:self.cameraView];
[self.videoCamera startCameraCapture];
This is what I get:
https://www.dropbox.com/s/yo19o7ryagk58le/2013-12-05%2018.02.56.png
Any ideas how to PROPERLY handle this?
EDIT:
I just found out that putting a sepia filter before rawOutput corrects the rotation, but a grayscale filter doesn't...
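For reference, the workaround described in the EDIT amounts to inserting a filter between the camera and the raw output. This is only a sketch mirroring the poster's finding, not an explanation of why sepia fixes the rotation while grayscale doesn't:
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[self.videoCamera addTarget:sepiaFilter];   // camera -> sepia -> rawOutput instead of camera -> rawOutput
[sepiaFilter addTarget:rawOutput];
[rawInput addTarget:self.cameraView];
[self.videoCamera startCameraCapture];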

GPUImage slow memory accumulation

I looked at forums to locate similar questions, but it seems that my issue is different.
I am using GPUImage; I built the framework using the bash script. I looked at the samples for creating filters and chaining them for processing.
But in my app I am facing a constant increase in memory consumption. There are no memory leaks; I profiled the app.
image is an image taken from the camera. I call the code described below in a for loop, and I see memory usage climb from 7 MB to 150 MB, after which the app is terminated.
Here is the code:
- (void)processImageInternal:(UIImage *)image
{
    GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:image];
    GPUImageLanczosResamplingFilter *lanczosResamplingFilter = [GPUImageLanczosResamplingFilter new];
    CGFloat scale = image.scale;
    CGSize size = image.size;
    CGFloat newScale = roundf(2*scale);
    CGSize newSize = CGSizeMake((size.width * scale)/newScale, (size.height * scale)/newScale);
    [lanczosResamplingFilter forceProcessingAtSize:newSize];
    [picture addTarget:lanczosResamplingFilter];
    GPUImageGrayscaleFilter *grayScaleFilter = [GPUImageGrayscaleFilter new];
    [grayScaleFilter forceProcessingAtSize:newSize];
    [lanczosResamplingFilter addTarget:grayScaleFilter];
    GPUImageMedianFilter *medianFilter = [GPUImageMedianFilter new];
    [medianFilter forceProcessingAtSize:newSize];
    [grayScaleFilter addTarget:medianFilter];
    GPUImageSharpenFilter *sharpenFilter = [GPUImageSharpenFilter new];
    sharpenFilter.sharpness += 2.0;
    [sharpenFilter forceProcessingAtSize:newSize];
    [medianFilter addTarget:sharpenFilter];
    GPUImageGaussianBlurFilter *blurFilter = [GPUImageGaussianBlurFilter new];
    blurFilter.blurSize = 0.5;
    [blurFilter forceProcessingAtSize:newSize];
    [sharpenFilter addTarget:blurFilter];
    GPUImageUnsharpMaskFilter *unsharpMask = [GPUImageUnsharpMaskFilter new];
    [unsharpMask forceProcessingAtSize:newSize];
    [blurFilter addTarget:unsharpMask];
    [picture processImage];
    image = [unsharpMask imageFromCurrentlyProcessedOutput];
}
The code is executed on a background thread.
Here is the calling code:
for (NSUInteger i = 0; i < 100; i++)
{
    NSLog(@" INDEX OF I : %d", i);
    [self processImageInternal:image];
}
I even added some cleanup logic to the - (void)processImageInternal:(UIImage *)image method:
- (void)processImageInternal:(UIImage *)image
{
    .........
    [picture processImage];
    image = [unsharpMask imageFromCurrentlyProcessedOutput];

    //clean up code....
    [picture removeAllTargets];
    [self releaseResourcesForGPUOutput:unsharpMask];
    [self releaseResourcesForGPUOutput:blurFilter];
    [self releaseResourcesForGPUOutput:sharpenFilter];
    [self releaseResourcesForGPUOutput:medianFilter];
    [self releaseResourcesForGPUOutput:lanczosResamplingFilter];
    [self releaseResourcesForGPUOutput:grayScaleFilter];
}
In the release method I am basically releasing whatever is possible to release: FBOs, textures, targets. Here is the code:
- (void)releaseResourcesForGPUOutput:(GPUImageOutput *)output
{
    if ([output isKindOfClass:[GPUImageFilterGroup class]])
    {
        GPUImageFilterGroup *group = (GPUImageFilterGroup *)output;
        for (NSUInteger i = 0; i < group.filterCount; i++)
        {
            GPUImageFilter *curFilter = (GPUImageFilter *)[group filterAtIndex:i];
            [self releaseResourcesForGPUFilter:curFilter];
        }
        [group removeAllTargets];
    }
    else if ([output isKindOfClass:[GPUImageFilter class]])
    {
        [self releaseResourcesForGPUFilter:(GPUImageFilter *)output];
    }
}
The unsharp mask is a GPUImage group filter, i.e. a composite of filters.
- (void)releaseResourcesForGPUFilter:(GPUImageFilter *)filter
{
    if ([filter respondsToSelector:@selector(releaseInputTexturesIfNeeded)])
    {
        [filter releaseInputTexturesIfNeeded];
        [filter destroyFilterFBO];
        [filter cleanupOutputImage];
        [filter deleteOutputTexture];
        [filter removeAllTargets];
    }
}
I did as Brad suggested: my code was called from the UI touch processing loop and autoreleased objects are drained, but in the Allocations instrument I cannot see anything strange. Still, my application is terminated.
Total allocation was 130 MB at the end, but for some reason my app was terminated.
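For clarity, the per-iteration draining referred to above presumably looks something like this (a sketch of the calling loop, not the poster's exact code):
for (NSUInteger i = 0; i < 100; i++)
{
    @autoreleasepool {
        NSLog(@" INDEX OF I : %lu", (unsigned long)i);
        [self processImageInternal:image];
    } // autoreleased objects created during this iteration are released here
}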
Here is how it looks (I included screenshots, the log from the device, and even a trace from Instruments). My app is called TesseractSample, but I completely switched off Tesseract usage, even its initialisation.
http://goo.gl/w5XrVb
In the log from the device I see that the maximum allowed rpages were used:
http://goo.gl/ImG2Wt
But I do not have a clue what that means. rpages == recent_max (167601)

How can I make a blind-down effect on an image in iOS?

I want the image to blind up/down when I roll the iPad. The effect should be like the Blind Down demo at
http://madrobby.github.com/scriptaculous/combination-effects-demo/
How can I do that?
I tried Apple's Reflection example, but I had performance issues since I have to redraw the image on every gyroscope update.
Here is the code:
- (void)viewDidLoad
{
    [super viewDidLoad];
    tmp = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"galata2.jpg"]];
    // Do any additional setup after loading the view, typically from a nib.
    NSUInteger reflectionHeight = imageView1.bounds.size.height * 1;
    imageView1 = [[UIImageView alloc] init];
    imageView1.image = [UIImage imageNamed:@"galata1.jpg"];
    [imageView1 sizeToFit];
    [self.view addSubview:imageView1];
    imageView2 = [[UIImageView alloc] init];
    //UIImageView *tmp = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"galata2.jpg"]];
    imageView2.image = [UIImage imageNamed:@"galata2.jpg"];
    [imageView2 sizeToFit];
    [self.view addSubview:imageView2];
    motionManager = [[CMMotionManager alloc] init];
    motionManager.gyroUpdateInterval = 1.0/10.0;
    [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
                                       withHandler:^(CMDeviceMotion *motion, NSError *error) {
        [self performSelectorOnMainThread:@selector(handleDeviceMotion:) withObject:motion waitUntilDone:YES];
    }];
}
////
- (void)handleDeviceMotion:(CMDeviceMotion *)motion {
    CMAttitude *attitude = motion.attitude;
    int rotateAngle = abs((int)degrees(attitude.roll));
    //CMRotationRate rotationRate = motion.rotationRate;
    NSLog(@"rotation rate = [Pitch: %f, Roll: %d, Yaw: %f]", degrees(attitude.pitch), abs((int)degrees(attitude.roll)), degrees(attitude.yaw));
    int section = (int)(rotateAngle / 30);
    int x = rotateAngle % 30;
    NSUInteger reflectionHeight = (1024/30)*x;
    NSLog(@"[x = %d]", reflectionHeight);
    imageView2.image = [self reflectedImage:tmp withHeight:reflectionHeight];
}
////
- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
    if (height == 0)
        return nil;
    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, fromImage.bounds.size.height);
    // create a 2 bit CGImage containing a gradient that will be used for masking the
    // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
    // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
    CGImageRef gradientMaskImage = CreateGradientImage(1, kImageHeight);
    // create an image by masking the bitmap of the mainView content with the gradient view
    // then release the pre-masked content bitmap and the gradient bitmap
    CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, fromImage.bounds.size.width, height), gradientMaskImage);
    CGImageRelease(gradientMaskImage);
    // In order to grab the part of the image that we want to render, we move the context origin to the
    // height of the image that we want to capture, then we flip the context so that the image draws upside down.
    //CGContextTranslateCTM(mainViewContentContext, 0.0, 0.0);
    //CGContextScaleCTM(mainViewContentContext, 1.0, -1.0);
    // draw the image into the bitmap context
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, fromImage.bounds.size.width, fromImage.bounds.size.height), fromImage.image.CGImage);
    // create CGImageRef of the main view bitmap content, and then release that bitmap context
    CGImageRef reflectionImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    // convert the finished reflection image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];
    // image is retained by the property setting above, so we can release the original
    CGImageRelease(reflectionImage);
    return theImage;
}
One way to do this is to use another covering view that gradually changes height with an animation.
If you have a view called theView that you want to cover, try something like this to reveal theView from underneath a cover view:
UIView *coverView = [[UIView alloc] initWithFrame:theView.frame];
coverView.backgroundColor = [UIColor whiteColor];
[theView.superview addSubview:coverView]; // this covers theView, adding it to the same view that the view is contained in
CGRect newFrame = theView.frame;
newFrame.size.height = 0;
newFrame.origin.y = theView.frame.origin.y + theView.frame.size.height;
[UIView animateWithDuration:1.5
                      delay:0.0
                    options:UIViewAnimationOptionRepeat
                 animations:^{
                     coverView.frame = newFrame;
                 }
                 completion:nil];
This should cover the view and then reveal it by changing the frame of the cover, moving it down while changing the height.
I haven't tried the code, but this is one direction you can take to create the blind effect. I have used similar code often, and it is very easy to work with. Also, it doesn't require knowing core animation.
