Global variable is not being updated fast enough 50% of the time - iOS

I have a photo-taking app. When the user presses the button to take a photo, I set a global NSString property called self.hasUserTakenAPhoto to @"YES". This works perfectly 100% of the time with the rear-facing camera. However, it only works about 50% of the time with the front-facing camera and I have no idea why.
Below are the important pieces of code and a quick description of what they do.
Here is my viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.topHalfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height/2);
    self.takingPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.afterPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.bottomHalfView.frame = CGRectMake(0, 240, self.view.bounds.size.width, self.view.bounds.size.height/2);
    PFFile *imageFile = [self.message objectForKey:@"file"];
    NSURL *imageFileURL = [[NSURL alloc] initWithString:imageFile.url];
    imageFile = nil;
    self.imageData = [NSData dataWithContentsOfURL:imageFileURL];
    imageFileURL = nil;
    self.topHalfView.image = [UIImage imageWithData:self.imageData];
    //START CREATING THE SESSION
    self.session = [[AVCaptureSession alloc] init];
    [self.session setSessionPreset:AVCaptureSessionPresetPhoto];
    self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error;
    self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];
    if ([self.session canAddInput:self.deviceInput])
        [self.session addInput:self.deviceInput];
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    self.rootLayer = [[self view] layer];
    [self.rootLayer setMasksToBounds:YES];
    [_previewLayer setFrame:CGRectMake(0, 240, self.rootLayer.bounds.size.width, self.rootLayer.bounds.size.height/2)];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [self.rootLayer insertSublayer:_previewLayer atIndex:0];
    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    self.videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [self.session addOutput:self.videoOutput];
    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [self.videoOutput setSampleBufferDelegate:self queue:queue];
    [_session startRunning];
}
The important part of viewDidLoad starts at the //START CREATING THE SESSION comment.
I basically create the session and then start running it. I have set this view controller as an AVCaptureVideoDataOutputSampleBufferDelegate, so as soon as the session starts running, the method below starts being called as well.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    //Sample buffer data is being sent, but don't actually use it until self.hasUserTakenAPhoto has been set to YES.
    NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);
    if ([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {
        //Now that self.hasUserTakenAPhoto is equal to YES, grab the current sample buffer and use it for the value of self.image aka the captured photo.
        self.image = [self imageFromSampleBuffer:sampleBuffer];
    }
}
This code receives the video output from the camera frame by frame, but I don't actually do anything with it until self.hasUserTakenAPhoto is equal to YES. Once that has a string value of YES, I take the current sampleBuffer from the camera and place it inside my global variable called self.image.
So, here is when self.hasUserTakenAPhoto is actually set to YES.
Below is my IBAction code that is called when the user presses the button to capture a photo. A lot happens when this code runs, but really all that matters is the very first statement: self.hasUserTakenAPhoto = @"YES";
-(IBAction)stillImageCapture {
    self.hasUserTakenAPhoto = @"YES";
    [self.session stopRunning];
    if (self.inputDevice.position == 2) {
        self.image = [self selfieCorrection:self.image];
    } else {
        self.image = [self rotate:UIImageOrientationRight];
    }
    CGFloat widthToHeightRatio = _previewLayer.bounds.size.width / _previewLayer.bounds.size.height;
    CGRect cropRect;
    // Set the crop rect's smaller dimension to match the image's smaller dimension, and
    // scale its other dimension according to the width:height ratio.
    if (self.image.size.width < self.image.size.height) {
        cropRect.size.width = self.image.size.width;
        cropRect.size.height = cropRect.size.width / widthToHeightRatio;
    } else {
        cropRect.size.width = self.image.size.height * widthToHeightRatio;
        cropRect.size.height = self.image.size.height;
    }
    // Center the rect in the longer dimension
    if (cropRect.size.width < cropRect.size.height) {
        cropRect.origin.x = 0;
        cropRect.origin.y = (self.image.size.height - cropRect.size.height)/2.0;
        NSLog(@"Y Math: %f", (self.image.size.height - cropRect.size.height));
    } else {
        cropRect.origin.x = (self.image.size.width - cropRect.size.width)/2.0;
        cropRect.origin.y = 0;
        float cropValueDoubled = self.image.size.height - cropRect.size.height;
        float final = cropValueDoubled/2;
        finalXValueForCrop = final;
    }
    CGRect cropRectFinal = CGRectMake(cropRect.origin.x, finalXValueForCrop, cropRect.size.width, cropRect.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self.image CGImage], cropRectFinal);
    UIImage *image2 = [[UIImage alloc] initWithCGImage:imageRef];
    self.image = image2;
    CGImageRelease(imageRef);
    self.bottomHalfView.image = self.image;
    if ([self.hasUserTakenAPhoto isEqual:@"YES"]) {
        [self.takingPhotoView setHidden:YES];
        self.image = [self screenshot];
        [_afterPhotoView setHidden:NO];
    }
}
So basically: viewDidLoad runs and the session is started, the session sends everything the camera sees to the captureOutput method, and as soon as the user presses the "take a photo" button the string value of self.hasUserTakenAPhoto is set to YES and the session stops. Since self.hasUserTakenAPhoto is now equal to YES, the captureOutput method places the very last camera buffer into the self.image object for me to use.
I just can't figure this out because, like I said, it works 100% of the time with the rear-facing camera but only about 50% of the time with the front-facing camera.
I have narrowed the problem down to the fact that self.hasUserTakenAPhoto does not update to YES fast enough when using the front-facing camera, and I know this because the second block of code I posted contains the statement NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);.
When this works correctly, and the user has just pressed the button to capture a photo (which also stops the session), the very last time that NSLog statement runs it prints the correct value of YES.
However, when it doesn't work correctly and doesn't update fast enough, the very last time it runs it still prints to the log with a value of null.
Any ideas on why self.hasUserTakenAPhoto does not update fast enough 50% of the time when using the front-facing camera? Even if we can't figure that out, it doesn't matter. I just need help coming up with an alternate solution to this.
Thanks for the help.

I think it's a scheduling problem. At the return point of your methods
– captureOutput:didOutputSampleBuffer:fromConnection:
– captureOutput:didDropSampleBuffer:fromConnection:
add a CFRunLoopRun()
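Since the question also asks for an alternate solution, here is a hedged sketch of a different approach (the property and structure are my own, not from the question): the delegate callback runs on the serial queue created in viewDidLoad, while the IBAction sets the flag on the main thread and immediately stops the session, so the last frames delivered may never observe the new value. Letting the delegate perform the capture once it sees the flag, and only stopping the session afterwards, removes that race:

// Sketch only: assumes a BOOL property in the class extension
// instead of the NSString flag.
@property (atomic) BOOL wantsPhoto;

- (IBAction)stillImageCapture {
    self.wantsPhoto = YES;
    // Do NOT stop the session here; the delegate stops it after
    // it has actually captured a frame.
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!self.wantsPhoto) return;
    self.wantsPhoto = NO;
    UIImage *captured = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.session stopRunning];
        self.image = captured;
        // ...continue with the cropping/UI work from stillImageCapture here...
    });
}

This way the photo is always taken from a frame that actually arrived after the button press, regardless of which camera is active.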

Related

iOS UIImageView memory not getting deallocated on ARC

I want to animate an image view in a circular path, and on tap of the image the image view needs to change to a new image. My problem is that the images I allocate to the image view are not deallocated, so the app receives memory warnings and crashes. I have searched and tried a lot of solutions for this problem with no luck. In my case I need to create all UI components from an Objective-C class. Here is the code for creating the image view and the animation.
@autoreleasepool {
    for (int i = 0; i < categories.count; i++)
    {
        NSString *categoryImage = [NSString stringWithFormat:@"%@_%ld.png", [categories objectAtIndex:i], (long)rating];
        if (paginationClicked) {
            if ([selectedCategories containsObject:[categories objectAtIndex:i]]) {
                categoryImage = [NSString stringWithFormat:@"sel_%@", categoryImage];
            }
        }
        UIImageView *imageView = [[UIImageView alloc] init];
        imageView.image = [self.mySprites objectForKey:categoryImage];
        imageView.contentMode = UIViewContentModeScaleAspectFit;
        imageView.clipsToBounds = NO;
        [imageView sizeToFit];
        imageView.accessibilityHint = [categories objectAtIndex:i];
        // imageView.frame = CGRectMake(location.x+sin(M_PI/2.5)*(self.view.frame.size.width*1.5),location.y+cos(M_PI/2.5)*(self.view.frame.size.width*1.5) , 150, 150);
        imageView.userInteractionEnabled = YES;
        imageView.multipleTouchEnabled = YES;
        UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                                    action:@selector(categoryTapGestureCaptured:)];
        singleTap.numberOfTapsRequired = 1;
        [imageView addGestureRecognizer:singleTap];
        [categoryView addSubview:imageView];
        CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
        UIBezierPath *path = [UIBezierPath bezierPath];
        [path addArcWithCenter:location
                        radius:self.view.frame.size.width*1.5
                    startAngle:0.8
                      endAngle:-0.3+(0.1*(i+1))
                     clockwise:NO];
        animation.path = path.CGPath;
        imageView.center = path.currentPoint;
        animation.fillMode = kCAFillModeForwards;
        animation.removedOnCompletion = NO;
        animation.duration = 1+0.25*i;
        animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn];
        // Apply it
        [imageView.layer addAnimation:animation forKey:@"animation.trash"];
    }
}
And this is the code to change the image on click.
for (UIImageView *subview in subviews) {
    NSString *key = [NSString stringWithFormat:@"%@_%ld.png", subview.accessibilityHint, (long)rating];
    if ([SelectedCategory isEqualToString:subview.accessibilityHint]) {
        NSString *tempSubCategory = [categoryObj objectForKey:SelectedCategory];
        if ([selectedCategories containsObject:SelectedCategory]) {
            subview.image = [self.mySprites objectForKey:key];
            [selectedCategories removeObject:SelectedCategory];
            if (tempSubCategory.length != 0) {
                subCategoriesAvailable = subCategoriesAvailable-1;
            }
            [self showNoPagination:subCategoriesAvailable+2];
        } else {
            if (selectedCategories.count != 2) {
                key = [NSString stringWithFormat:@"sel_%@", key];
                subview.image = [self.mySprites objectForKey:key];
                [selectedCategories addObject:SelectedCategory];
                if ([SelectedCategory isEqualToString:@"Other"]) {
                    [self showCommentDialog];
                } else {
                    if (tempSubCategory.length != 0) {
                        subCategoriesAvailable = subCategoriesAvailable+1;
                    }
                    [self showNoPagination:subCategoriesAvailable+2];
                }
            }
        }
        [self disableCategories];
        break;
    }
}
I don't know what I am doing wrong here. I tried setting things to nil in the for loop, but it didn't help.
Here is the code I use to remove the image views:
UIView *categoryView = [self.view viewWithTag:500];
NSArray *subviews = [categoryView subviews];
for (UIImageView *subview in subviews) {
    if (![selectedCategories containsObject:subview.accessibilityHint]) {
        [subview removeFromSuperview];
        subview.image = nil;
    }
}
Adding sprite reader code for reference
#import "UIImage+Sprite.h"
#import "XMLReader.h"
#implementation UIImage (Sprite)
+ (NSDictionary*)spritesWithContentsOfFile:(NSString*)filename
{
CGFloat scale = [UIScreen mainScreen].scale;
NSString* file = [filename stringByDeletingPathExtension];
if ([[UIScreen mainScreen] respondsToSelector:#selector(displayLinkWithTarget:selector:)] &&
(scale == 2.0))
{
file = [NSString stringWithFormat:#"%##2x", file];
}
NSString* extension = [filename pathExtension];
NSData* data = [NSData dataWithContentsOfFile:[NSString stringWithFormat:#"%#.%#", file,extension]];
NSError* error = nil;
NSDictionary* xmlDictionary = [XMLReader dictionaryForXMLData:data error:&error];
NSDictionary* xmlTextureAtlas = [xmlDictionary objectForKey:#"TextureAtlas"];
UIImage* image = [UIImage imageWithContentsOfFile:[NSString stringWithFormat:#"%#.%#", file,[[xmlTextureAtlas objectForKey:#"imagePath"]pathExtension]]];
CGSize size = CGSizeMake([[xmlTextureAtlas objectForKey:#"width"] integerValue],
[[xmlTextureAtlas objectForKey:#"height"] integerValue]);
if (!image || CGSizeEqualToSize(size, CGSizeZero)) return nil;
CGImageRef spriteSheet = [image CGImage];
NSMutableDictionary* tempDictionary = [[NSMutableDictionary alloc] init];
NSArray* xmlSprites = [xmlTextureAtlas objectForKey:#"sprite"];
for (NSDictionary* xmlSprite in xmlSprites)
{
CGRect unscaledRect = CGRectMake([[xmlSprite objectForKey:#"x"] integerValue],
[[xmlSprite objectForKey:#"y"] integerValue],
[[xmlSprite objectForKey:#"w"] integerValue],
[[xmlSprite objectForKey:#"h"] integerValue]);
CGImageRef sprite = CGImageCreateWithImageInRect(spriteSheet, unscaledRect);
// If this is a #2x image it is twice as big as it should be.
// Take care to consider the scale factor here.
[tempDictionary setObject:[UIImage imageWithCGImage:sprite scale:scale orientation:UIImageOrientationUp] forKey:[xmlSprite objectForKey:#"n"]];
CGImageRelease(sprite);
}
return [NSDictionary dictionaryWithDictionary:tempDictionary];
}
#end
Please help me to resolve this. Thanks in advance.
It looks like all the images are being retained by the (presumed) dictionary self.mySprites, as you are loading them with the call imageView.image = [self.mySprites objectForKey:categoryImage];
If you loaded the images into the dictionary with +[UIImage imageNamed:], then the dictionary initially contains only the compressed png images. Images are decompressed from png to bitmap as they are rendered to the screen, and these decompressed images use a large amount of RAM (that's the memory usage you're seeing labeled "ImageIO_PNG_Data"). If the dictionary is retaining them, then the memory will grow every time you render a new one to the screen, as the decompressed data is held inside the UIImage object retained by the dictionary.
Options available to you:
Store the image names in the self.mySprites dictionary, and load the images on demand. You should be aware that +[UIImage imageNamed:] implements an internal RAM cache to speed things up, so this might also cause memory issues if the images are big, as the cache doesn't clear quickly. If this is an issue, consider using +[UIImage imageWithContentsOfFile:], which doesn't cache images in RAM, although it requires a little additional code.
Re-implement self.mySprites as an NSCache. NSCache will start throwing things out when the memory pressure gets too high, so you'll need to handle the case that the image is not there when you expect it to be, and load it from disk (perhaps using the above techniques)
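A minimal sketch of that second option (the names here are mine, not from the question; and since these sprites are sliced out of a texture atlas, a cache miss means re-cutting the sprite from the sheet rather than reloading a standalone file):

@property (nonatomic, strong) NSCache *spriteCache; // replaces self.mySprites

- (UIImage *)spriteForKey:(NSString *)key
{
    UIImage *sprite = [self.spriteCache objectForKey:key];
    if (sprite == nil) {
        // Cache miss: NSCache evicted the image under memory pressure,
        // so re-slice this one sprite from the atlas on demand.
        sprite = [self reloadSpriteFromAtlas:key]; // hypothetical reload helper
        if (sprite) [self.spriteCache setObject:sprite forKey:key];
    }
    return sprite;
}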
CAKeyframeAnimation inherits from CAPropertyAnimation, which in turn inherits from CAAnimation.
If you look at the delegate of the CAAnimation class, it is declared as a strong reference:
/* The delegate of the animation. This object is retained for the
 * lifetime of the animation object. Defaults to nil. See below for the
 * supported delegate methods. */
@property(strong) id delegate;
Now you have added the animation to imageView.layer, so the layer is strongly retained by the animation reference.
You have also set
animation.removedOnCompletion = NO;
which prevents the animation from being removed from the layer on completion.
So if you are done with an image view, first call removeAllAnimations on its layer and then release the image view.
I think that because the CAAnimation strongly refers to the imageView's layer (raising its retain count), removing the imageView from its superview still leaves its retain count above zero, leading to the leak.
Is there any specific requirement to set
animation.removedOnCompletion = NO;
? Simply setting
animation.removedOnCompletion = YES;
could resolve the memory leak by itself.
Alternatively, you can remove the corresponding animation from the corresponding imageView's layer by implementing the CAAnimation delegate, as shown below -
/* Called when the animation either completes its active duration or
 * is removed from the object it is attached to (i.e. the layer). 'flag'
 * is true if the animation reached the end of its active duration
 * without being removed. */
- (void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag {
    if (flag) {
        NSLog(@"%@", @"The animation is finished. Do something here.");
    }
    //Get the reference of the corresponding imageView
    [imageView.layer removeAnimationForKey:@"animation.trash"];
}
UIView *categoryView = [self.view viewWithTag:500];
NSArray *subviews = [categoryView subviews];
for (UIImageView *subview in subviews) {
    if (![selectedCategories containsObject:subview.accessibilityHint]) {
        [subview.layer removeAnimationForKey:@"animation.trash"]; // either this
        //or// [subview.layer removeAllAnimations]; //alternatively
        [subview removeFromSuperview];
        subview.image = nil;
    }
}

Blurred Image for "thinking/loading" screen

I have used two tutorials to create a blurred image with a spinner that I want to use as a loading/thinking overlay:
http://www.sitepoint.com/all-purpose-loading-view-for-ios/
http://x-code-tutorials.com/2013/06/18/ios7-style-blurred-overlay-in-xcode/
It is working OK, but needs some modifications. I have 3 questions.
1st question:
After the button is clicked it seems to take a long time to actually come up. Any suggestions?
2nd question is:
The blurred image gets shifted to the left and down, either when it is captured or when it is set in the view. Any thoughts on why?
It seems that the higher the inputRadius is, the more the image shifts.
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 10] forKey: @"inputRadius"];
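(A guess at this one, not from the original thread: CIGaussianBlur pads the output extent outward by the blur radius, so rendering outputImage over its full extent shifts the visible content by that amount. Cropping the result back to the input's extent keeps it registered, and a larger radius means larger padding, which would match the observed behavior:

CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];
// Crop the padded output back to the original image's rectangle.
CIImage *cropped = [resultImage imageByCroppingToRect:imageToBlur.extent];
)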
3rd question:
I am trying to display this while the API back end is doing database work. If I don't call removeBlurredOverlay then it displays and works; however, if I call it after all the database work, the overlay doesn't display at all. Any thoughts? Does it need to be threaded?
BlurredOverlay.m
@implementation BlurredOverlay
+(BlurredOverlay *)loadBlurredOverlay:(UIView *)superView {
    BlurredOverlay *blurredOverlay = [[BlurredOverlay alloc] initWithFrame:superView.bounds];
    // Create a new image view, from the image made by our gradient method
    UIImageView *blurredBackground = [[UIImageView alloc] initWithImage:[self captureBlur:superView]];
    [blurredOverlay addSubview:blurredBackground];
    // This is the new stuff here ;)
    UIActivityIndicatorView *indicator =
    [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge];
    //set color
    [indicator setColor:UIColorFromRGB(0x72CE97)];
    // Set the resizing mask so it's not stretched
    indicator.autoresizingMask =
    UIViewAutoresizingFlexibleTopMargin |
    UIViewAutoresizingFlexibleRightMargin |
    UIViewAutoresizingFlexibleBottomMargin |
    UIViewAutoresizingFlexibleLeftMargin;
    // Place it in the middle of the view
    indicator.center = CGPointMake(superView.bounds.origin.x + (superView.bounds.size.width / 2), superView.bounds.origin.y + (superView.bounds.size.height / 2));
    // Add it into the spinnerView
    [blurredOverlay addSubview:indicator];
    // Start it spinning! Don't miss this step
    [indicator startAnimating];
    //blurredOverlay.backgroundColor = [UIColor blackColor];
    [superView addSubview:blurredOverlay];
    return blurredOverlay;
}
+ (UIImage *) captureBlur:(UIView *)superView {
    //Get a UIImage from the UIView
    UIGraphicsBeginImageContext(superView.frame.size);
    [superView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //Blur the UIImage
    CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
    [gaussianBlurFilter setValue:imageToBlur forKey: @"inputImage"];
    [gaussianBlurFilter setValue:[NSNumber numberWithFloat: 1] forKey: @"inputRadius"]; //change number to increase/decrease blur
    CIImage *resultImage = [gaussianBlurFilter valueForKey: @"outputImage"];
    //create UIImage from filtered image
    UIImage *blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
    return blurredImage;
}
-(void)removeBlurredOverlay{
    // Take me the hells out of the superView!
    [super removeFromSuperview];
}
@end
MainViewController.m
...
- (IBAction)loginButton:(id)sender {
    //Add a blur view to tell users the app is "thinking"
    BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view];
    NSInteger success = 0;
    //Check to see if the username or password textfields are empty or email field is in wrong format
    if ([self validFields]) {
        //Try to login user
        success = [self loginUser]; //loginUser sends the http to the back end API that does the database stuff
    }
    //If successful, go to the View
    if (success) {
        //Remove blurredOverlay
        //[blurredOverlay removeBlurredOverlay]; //This makes it not display at all
        //Segue to the main View
        [self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
    }
    else
    {
        //Remove blurredOverlay
        //[blurredOverlay removeBlurredOverlay]; //This makes it not display at all
        self.passwordTextField.text = @"";
    }
}
Here is my answer for Question 1:
I'm new to threads, so any advice would be greatly appreciated.
I added another method in BlurredOverlay.m to build the view from an already-blurred image.
I made the captureBlurredImage method public and called it on a background thread in viewDidAppear in LoginViewController.m, then passed the blurred image into the new loadBlurredOverlay. I also moved the login processing onto a thread. It is really fast now, however:
Question #3 still remains!!!!
If I call [blurredOverlay removeBlurredOverlay]; in LoginViewController.m, which calls [self removeFromSuperview]; in BlurredOverlay.m, the blurred image and spinner never come up. If I comment it out, it works like a charm, but then I can't get it to dismiss after the login processing is done.
Comments and help will be appreciated. I will edit this answer if we can get to the bottom of this.
BlurredOverlay.m
#import "BlurredOverlay.h"
#implementation BlurredOverlay
+(BlurredOverlay *)loadBlurredOverlay:(UIView *)superView :(UIImage *) blurredImage {
NSLog(#"In loadBlurredOverlay with parameter blurredImage: %#", blurredImage);
BlurredOverlay *blurredOverlay = [[BlurredOverlay alloc] initWithFrame:superView.bounds];
// Create a new image view, from the image made by our gradient method
UIImageView *blurredBackground = [[UIImageView alloc] initWithImage:blurredImage];
[blurredOverlay addSubview:blurredBackground];
// This is the new stuff here ;)
UIActivityIndicatorView *indicator =
[[UIActivityIndicatorView alloc]
initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge];
//set color
[indicator setColor:UIColorFromRGB(0x72CE97)];
// Set the resizing mask so it's not stretched
indicator.autoresizingMask =
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleBottomMargin |
UIViewAutoresizingFlexibleLeftMargin;
// Place it in the middle of the view
indicator.center = CGPointMake(superView.bounds.origin.x + (superView.bounds.size.width / 2), superView.bounds.origin.y + (superView.bounds.size.height / 2));
// Add it into the spinnerView
[blurredOverlay addSubview:indicator];
// Start it spinning! Don't miss this step
[indicator startAnimating];
//blurredOverlay.backgroundColor = [UIColor blackColor];
[superView addSubview:blurredOverlay];
[superView bringSubviewToFront:blurredOverlay];
return blurredOverlay;
}
+ (UIImage *) captureBlurredImage:(UIView *)superView {
//Get a UIImage from the UIView
UIGraphicsBeginImageContext(superView.frame.size);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName: #"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey: #"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 1] forKey: #"inputRadius"]; //change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey: #"outputImage"];
//create UIImage from filtered image
UIImage *blurrredImage = [[UIImage alloc] initWithCIImage:resultImage];
return blurrredImage;
}
-(void)removeBlurredOverlay{
// Take me the hells out of the superView!
[self removeFromSuperview];
}
#end
LoginViewController.m
...
-(void)viewDidAppear:(BOOL)animated
{
    //Get a blurred image of the view in a thread
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        self.blurredImage = [BlurredOverlay captureBlurredImage:self.view];
    });
}
...
//Send the username and password to backend for verification
//If verified, go to ViewController
- (IBAction)loginButton:(id)sender {
    __block BOOL success = false;
    //Add a blur view with spinner to tell user the app is processing login information
    BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view :self.blurredImage];
    //Login user in a thread
    dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        //Check to see if the username or password textfields are empty or email field is in wrong format
        if ([self validFields]) {
            //Try to login user
            success = [self loginUser];
        }
        else {
            success = false;
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            //If successful, go to the ViewController
            if (success) {
                //Remove blurredOverlay
                //[blurredOverlay removeBlurredOverlay];
                //Segue to the main ViewController
                [self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
            }
            else
            {
                //Remove blurredOverlay
                //[blurredOverlay removeBlurredOverlay];
                //Reset passwordTextField
                self.passwordTextField.text = @"";
            }
        });
    });
}
...
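A note on the remaining question (my reading, not from the original thread): dispatch_sync blocks the calling thread until its block returns, so when loginButton runs on the main thread the run loop never gets a chance to draw blurredOverlay before the login work finishes; that would explain why the overlay only appears when the removal call is commented out. A sketch of the async variant, with the removal re-enabled:

- (IBAction)loginButton:(id)sender {
    BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view :self.blurredImage];
    // dispatch_async returns immediately, so the main run loop can render
    // the overlay while the login work runs in the background.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        BOOL success = [self validFields] && [self loginUser];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Safe to remove now: the overlay was visible during the work.
            [blurredOverlay removeBlurredOverlay];
            if (success) {
                [self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
            } else {
                self.passwordTextField.text = @"";
            }
        });
    });
}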

UICollection View Scroll lag with SDWebImage

Background
I have searched around SO and the Apple forums. Quite a lot of people have talked about the performance of collection view cells with images. Most of them say it lags on scroll because the images are loaded on the main thread.
With SDWebImage, the images should be loading on a separate thread. However, it lags only in landscape mode in the iPad simulator.
Problem description
In portrait mode, the collection view loads 3 cells per row, with no lag or only insignificant delay.
In landscape mode, the collection view loads 4 cells per row, with obvious lag and a drop in frame rate.
I have profiled with the Core Animation instrument. The frame rate drops to about 8 fps when new cells appear. I am not sure which part causes such low performance in the collection view.
I hope someone knows the tricky part.
Here is the related code.
In The View Controller
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    ProductCollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"ProductViewCell" forIndexPath:indexPath];
    Product *tmpProduct = (Product*)_ploader.loadedProduct[indexPath.row];
    cell.product = tmpProduct;
    if (cellShouldAnimate) {
        cell.alpha = 0.0;
        [UIView animateWithDuration:0.2
                              delay:0
                            options:(UIViewAnimationOptionCurveLinear | UIViewAnimationOptionAllowUserInteraction)
                         animations:^{
                             cell.alpha = 1.0;
                         } completion:nil];
    }
    if (indexPath.row >= _ploader.loadedProduct.count - ceil((LIMIT_COUNT * 0.3)))
    {
        [_ploader loadProductsWithCompleteBlock:^(NSError *error){
            if (nil == error) {
                cellShouldAnimate = NO;
                [_collectionView reloadData];
                dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC), dispatch_get_main_queue(), ^{
                    cellShouldAnimate = YES;
                });
            } else if (error.code != 1) {
#ifdef DEBUG_MODE
                ULog(@"Error.des : %@", error.description);
#else
                CustomAlertView *alertView = [[CustomAlertView alloc]
                                              initWithTitle:@"Connection Error"
                                              message:@"Please retry."
                                              buttonTitles:@[@"OK"]];
                [alertView show];
#endif
            }
        }];
    }
    return cell;
}
PrepareForReuse in the collectionViewCell
- (void)prepareForReuse
{
    [super prepareForReuse];
    CGRect bounds = self.bounds;
    [_thumbnailImgView sd_cancelCurrentImageLoad];
    CGFloat labelsTotalHeight = bounds.size.height - _thumbnailImgView.frame.size.height;
    CGFloat brandToImageOffset = 2.0;
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        brandToImageOffset = 53.0;
    }
    CGFloat labelStartY = _thumbnailImgView.frame.size.height + _thumbnailImgView.frame.origin.y + brandToImageOffset;
    CGFloat nameLblHeight = labelsTotalHeight * 0.46;
    CGFloat priceLblHeight = labelsTotalHeight * 0.18;
    _brandLbl.frame = (CGRect){{15, labelStartY}, {bounds.size.width - 30, nameLblHeight}};
    CGFloat priceToNameOffset = 8.0;
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        priceToNameOffset = 18.0;
    }
    _priceLbl.frame = (CGRect){{5, labelStartY + nameLblHeight - priceToNameOffset}, {bounds.size.width-10, priceLblHeight}};
    [_spinner stopAnimating];
    [_spinner removeFromSuperview];
    _spinner = nil;
}
Override the setProduct method
- (void)setProduct:(Product *)product
{
    _product = product;
    // Add a spinner
    _spinner = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray];
    _spinner.center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    [self addSubview:_spinner];
    [_spinner startAnimating];
    _spinner.hidesWhenStopped = YES;
    __block UIActivityIndicatorView *tmpSpinner = _spinner;
    __block UIImageView *tmpImgView = _thumbnailImgView;
    ProductImage *thumbnailImage = _product.images[0];
    [_thumbnailImgView sd_setImageWithURL:[NSURL URLWithString:thumbnailImage.mediumURL]
                                completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType, NSURL *imageURL) {
        // dismiss the spinner
        [tmpSpinner stopAnimating];
        [tmpSpinner removeFromSuperview];
        tmpSpinner = nil;
        if (nil == error) {
            // Resize the incoming images
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                CGFloat imageHeight = image.size.height;
                CGFloat imageWidth = image.size.width;
                CGSize newSize = tmpImgView.bounds.size;
                CGFloat scaleFactor = newSize.width / imageWidth;
                newSize.height = imageHeight * scaleFactor;
                UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
                [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
                UIImage *small = UIGraphicsGetImageFromCurrentImageContext();
                UIGraphicsEndImageContext();
                dispatch_async(dispatch_get_main_queue(), ^{
                    tmpImgView.image = small;
                });
            });
            if (cacheType == SDImageCacheTypeNone) {
                tmpImgView.alpha = 0.0;
                [UIView animateWithDuration:0.2
                                      delay:0
                                    options:(UIViewAnimationOptionCurveLinear | UIViewAnimationOptionAllowUserInteraction)
                                 animations:^{
                                     tmpImgView.alpha = 1.0;
                                 } completion:nil];
            }
        } else {
            // loading error
            [tmpImgView setImage:[UIImage imageNamed:@"broken_image_small"]];
        }
    }];
    _brandLbl.text = [_product.brand.name uppercaseString];
    _nameLbl.text = _product.name;
    [_nameLbl sizeToFit];
    // Format the price
    NSNumberFormatter *floatFormatter = [[NSNumberFormatter alloc] init];
    [floatFormatter setNumberStyle:NSNumberFormatterDecimalStyle];
    [floatFormatter setDecimalSeparator:@"."];
    [floatFormatter setMaximumFractionDigits:2];
    [floatFormatter setMinimumFractionDigits:0];
    [floatFormatter setGroupingSeparator:@","];
    _priceLbl.text = [NSString stringWithFormat:@"$%@ USD", [floatFormatter stringFromNumber:_product.price]];
    if (_product.salePrice.intValue > 0) {
        NSString *rawStr = [NSString stringWithFormat:@"$%@ $%@ USD", [floatFormatter stringFromNumber:_product.price], [floatFormatter stringFromNumber:_product.salePrice]];
        NSMutableAttributedString *string = [[NSMutableAttributedString alloc] initWithString:rawStr];
        // Change all the text to red first
        [string addAttribute:NSForegroundColorAttributeName
                       value:[UIColor colorWithRed:157/255.0 green:38/255.0 blue:29/255.0 alpha:1.0]
                       range:NSMakeRange(0, rawStr.length)];
        // find the first space
        NSRange firstSpace = [rawStr rangeOfString:@" "];
        // Change from zero to space to gray color
        [string addAttribute:NSForegroundColorAttributeName
                       value:_priceLbl.textColor
                       range:NSMakeRange(0, firstSpace.location)];
        [string addAttribute:NSStrikethroughStyleAttributeName
                       value:@2
                       range:NSMakeRange(0, firstSpace.location)];
        _priceLbl.attributedText = string;
    }
}
SDWebImage is very admirable, but DLImageLoader is absolutely incredible, and a key piece of many big production apps:
https://stackoverflow.com/a/19115912/294884
It's amazingly easy to use.
To avoid the skimming problem, basically just introduce a delay before bothering to start downloading the image. It's essentially this simple:
dispatch_after_secs_on_main(0.4, ^
{
    if ( ! [urlWasThen isEqualToString:self.currentImage] )
    {
        // so in other words, in fact, after a short period of time,
        // the user has indeed scrolled away from that item.
        // (ie, the user is skimming)
        // this item is now some "new" item so of course we don't
        // bother loading "that old" item
        // ie, we now know the user was simply skimming over that item.
        // (just TBC in the preliminary clause above,
        // since the image is already in cache,
        // we'd just instantly load the image - even if the user is skimming)
        // NSLog(@" --- --- --- --- --- --- too quick!");
        return;
    }
    // a short time has passed, and indeed this cell is still "that" item
    // the user is NOT skimming, SO we start loading the image.
    //NSLog(@" --- not too quick ");
    [DLImageLoader loadImageFromURL:urlWasThen
                          completed:^(NSError *error, NSData *imgData)
    {
        if (self == nil) return;
        // some time has passed while the image was loading from the internet...
        if ( ! [urlWasThen isEqualToString:self.currentImage] )
        {
            // note that this is the "normal" situation where the user has
            // moved on from the image, so no need to load.
            //
            // in other words: in this case, not due to skimming,
            // but because SO much time has passed,
            // the user has moved on to some other part of the table.
            // we pointlessly loaded the image from the internet! doh!
            //NSLog(@" === === 'too late!' image load!");
            return;
        }
        UIImage *image = [UIImage imageWithData:imgData];
        self.someImage.image = image;
    }];
});
That's the "incredibly easy" solution.
IMO, after vast experimentation, it actually works considerably better than the more complex solution of tracking when the scroll is skimming.
once again, DLImageLoader makes all this extremely easy https://stackoverflow.com/a/19115912/294884
Note that the section of code above is just the "usual" way you load an image inside a cell.
Here's typical code that would do that:
-(void)imageIsNow:(NSString *)imUrl
{
    // call this routine to "set the image" on this cell.
    // note that "image is now" is a better name than "set the image"
    // Don't forget that cells very rapidly change contents, due to
    // the cell reuse paradigm on iOS.
    // this cell is being told that the image to be displayed is now this image.
    // being aware of scrolling/skimming issues, cache issues, etc,
    // utilise this information to appropriately load/whatever the image.
    self.someImage.image = nil; // that's UIImageView
    self.currentImage = imUrl;  // you need that string property
    [self loadImageInASecIfItsTheSameAs:imUrl];
}
-(void)loadImageInASecIfItsTheSameAs:(NSString *)urlWasThen
{
    // (note - at this point here the image may already be available
    // in cache. if so, just display it. I have omitted that
    // code for simplicity here.)
    // so, right here, "possibly load with delay" the image
    // exactly as shown in the code above .....
    dispatch_after_secs_on_main(0.4, ^
    ...etc....
    ...etc....
}
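For completeness, dispatch_after_secs_on_main is not a system API; it is presumably the answerer's own shorthand. A minimal definition consistent with how it is used above (my sketch):

// Not a system call: a small helper matching the usage above.
static void dispatch_after_secs_on_main(CGFloat seconds, dispatch_block_t block)
{
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(seconds * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), block);
}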
Again this is all easily possible due to DLImageLoader which is amazing. It is an amazingly solid library.

Image auto-rotates after using CIFilter

I am writing an app that lets users take a picture and then edit it. I am working on implementing tools with UISliders for brightness/contrast/saturation and am using the Core Image Filter class to do so. When I open the app, I can take a picture and display it correctly. However, if I choose to edit a picture, and then use any of the described slider tools, the image will rotate counterclockwise 90 degrees. Here's the code in question:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.navigationItem.hidesBackButton = YES; //hide default nav
    //get image to display
    DBConnector *dbconnector = [[DBConnector alloc] init];
    album.moments = [dbconnector getMomentsForAlbum:album.title];
    Moment *mmt = [album.moments firstObject];
    _imageView.image = [mmt.moment firstObject];
    CGImageRef aCGImage = _imageView.image.CGImage;
    CIImage *aCIImage = [CIImage imageWithCGImage:aCGImage];
    _editor = [CIFilter filterWithName:@"CIColorControls" keysAndValues:@"inputImage", aCIImage, nil];
    _context = [CIContext contextWithOptions: nil];
    [self startEditControllerFromViewController:self];
}
//cancel and finish buttons
- (BOOL) startEditControllerFromViewController: (UIViewController*) controller {
    [_cancelEdit addTarget:self action:@selector(cancelEdit:) forControlEvents:UIControlEventTouchUpInside];
    [_finishEdit addTarget:self action:@selector(finishEdit:) forControlEvents:UIControlEventTouchUpInside];
    return YES;
}
//adjust brightness
- (IBAction)brightnessSlider:(UISlider *)sender {
    [_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey: @"inputBrightness"];
    CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
    _imageView.image = [UIImage imageWithCGImage: cgiimg];
    CGImageRelease(cgiimg);
}
I believe that the problem stems from the brightnessSlider method, based on breakpoints that I've placed. Is there a way to stop the auto-rotating of my photo? If not, how can I rotate it back to the normal orientation?
Mere minutes after posting, I figured out the answer to my own question. Go figure. Anyway, I simply changed the slider method to the following:
- (IBAction)brightnessSlider:(UISlider *)sender {
    [_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey: @"inputBrightness"];
    CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
    UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
    CGFloat originalScale = _imageView.image.scale;
    _imageView.image = [UIImage imageWithCGImage: cgiimg scale:originalScale orientation:originalOrientation];
    CGImageRelease(cgiimg);
}
This simply records the original orientation and scale of the image, and re-sets them when the data is converted back to a UIImage. Hope this helps someone else!

GPUImage slow memory accumulation

I looked at the forums to locate similar questions, but it seems that my issue is different.
I am using GPUImage; I built the framework using the bash script. I looked at the samples for creating filters and chaining them for processing.
But in my app I am facing a constant increase in memory consumption. There are no memory leaks; I profiled my app.
image is an image taken from the camera. I call the code described below in a for loop. I witness the memory increase from 7 MB up to 150 MB, after which the app is terminated.
Here is the code:
- (void)processImageInternal:(UIImage *)image
{
    GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:image];
    GPUImageLanczosResamplingFilter *lanczosResamplingFilter = [GPUImageLanczosResamplingFilter new];
    CGFloat scale = image.scale;
    CGSize size = image.size;
    CGFloat newScale = roundf(2*scale);
    CGSize newSize = CGSizeMake((size.width *scale)/newScale, (size.height *scale)/newScale);
    [lanczosResamplingFilter forceProcessingAtSize:newSize];
    [picture addTarget:lanczosResamplingFilter];
    GPUImageGrayscaleFilter *grayScaleFilter = [GPUImageGrayscaleFilter new];
    [grayScaleFilter forceProcessingAtSize:newSize];
    [lanczosResamplingFilter addTarget:grayScaleFilter];
    GPUImageMedianFilter *medianFilter = [GPUImageMedianFilter new];
    [medianFilter forceProcessingAtSize:newSize];
    [grayScaleFilter addTarget:medianFilter];
    GPUImageSharpenFilter *sharpenFilter = [GPUImageSharpenFilter new];
    sharpenFilter.sharpness += 2.0;
    [sharpenFilter forceProcessingAtSize:newSize];
    [medianFilter addTarget:sharpenFilter];
    GPUImageGaussianBlurFilter *blurFilter = [GPUImageGaussianBlurFilter new];
    blurFilter.blurSize = 0.5;
    [blurFilter forceProcessingAtSize:newSize];
    [sharpenFilter addTarget:blurFilter];
    GPUImageUnsharpMaskFilter *unsharpMask = [GPUImageUnsharpMaskFilter new];
    [unsharpMask forceProcessingAtSize:newSize];
    [blurFilter addTarget:unsharpMask];
    [picture processImage];
    image = [unsharpMask imageFromCurrentlyProcessedOutput];
}
The code is executed on a background thread.
Here is the calling code:
for (NSUInteger i=0;i <100;i++)
{
NSLog(#" INDEX OF I : %d",i);
[self processImageInternal:image];
}
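Since everything created in processImageInternal: is autoreleased, one standard mitigation (the pool draining Brad suggested, shown here as a sketch) is to give each pass its own pool, so intermediates are freed per iteration instead of accumulating until the loop ends:

for (NSUInteger i = 0; i < 100; i++)
{
    @autoreleasepool {
        NSLog(@" INDEX OF I : %lu", (unsigned long)i);
        [self processImageInternal:image];
    } // autoreleased filters and images from this pass are released here
}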
I even added some cleanup logic to the - (void)processImageInternal:(UIImage *)image method:
- (void)processImageInternal:(UIImage *)image
{
    .........
    [picture processImage];
    image = [unsharpMask imageFromCurrentlyProcessedOutput];
    //clean up code....
    [picture removeAllTargets];
    [self releaseResourcesForGPUOutput:unsharpMask];
    [self releaseResourcesForGPUOutput:blurFilter];
    [self releaseResourcesForGPUOutput:sharpenFilter];
    [self releaseResourcesForGPUOutput:medianFilter];
    [self releaseResourcesForGPUOutput:lanczosResamplingFilter];
    [self releaseResourcesForGPUOutput:grayScaleFilter];
}
In the release method I basically release whatever can be released: FBOs, textures, targets. Here is the code:
- (void)releaseResourcesForGPUOutput:(GPUImageOutput *)output
{
    if ([output isKindOfClass:[GPUImageFilterGroup class]])
    {
        GPUImageFilterGroup *group = (GPUImageFilterGroup *)output;
        for (NSUInteger i = 0; i < group.filterCount; i++)
        {
            GPUImageFilter *curFilter = (GPUImageFilter *)[group filterAtIndex:i];
            [self releaseResourcesForGPUFilter:curFilter];
        }
        [group removeAllTargets];
    }
    else if ([output isKindOfClass:[GPUImageFilter class]])
    {
        [self releaseResourcesForGPUFilter:(GPUImageFilter *)output];
    }
}
The unsharp mask is a GPU group filter, i.e. a composite of filters.
- (void)releaseResourcesForGPUFilter:(GPUImageFilter *)filter
{
    if ([filter respondsToSelector:@selector(releaseInputTexturesIfNeeded)])
    {
        [filter releaseInputTexturesIfNeeded];
        [filter destroyFilterFBO];
        [filter cleanupOutputImage];
        [filter deleteOutputTexture];
        [filter removeAllTargets];
    }
}
I did as Brad suggested: my code was called in the UI touch-processing loop, and the autoreleased pool objects are drained, but in the Allocations instrument I cannot see anything strange. Yet my application is still terminated.
Total allocation was 130 MB at the end, but for some reason my app was terminated.
Here is what it looks like (I placed screenshots, the log from the device, and even the trace from Instruments). My app is called TesseractSample, but I completely switched off tesseract usage, even its initialisation.
http://goo.gl/w5XrVb
In the log from the device I see that the maximum allowed rpages were used:
http://goo.gl/ImG2Wt
But I do not have a clue what that means: rpages == recent_max (167601).
