Image auto-rotates after using CIFilter - iOS

I am writing an app that lets users take a picture and then edit it. I am working on implementing tools with UISliders for brightness/contrast/saturation and am using the Core Image Filter class to do so. When I open the app, I can take a picture and display it correctly. However, if I choose to edit a picture, and then use any of the described slider tools, the image will rotate counterclockwise 90 degrees. Here's the code in question:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
self.navigationItem.hidesBackButton = YES; //hide default nav
//get image to display
DBConnector *dbconnector = [[DBConnector alloc] init];
album.moments = [dbconnector getMomentsForAlbum:album.title];
Moment *mmt = [album.moments firstObject];
_imageView.image = [mmt.moment firstObject];
CGImageRef aCGImage = _imageView.image.CGImage;
CIImage *aCIImage = [CIImage imageWithCGImage:aCGImage];
_editor = [CIFilter filterWithName:@"CIColorControls" keysAndValues:@"inputImage", aCIImage, nil];
_context = [CIContext contextWithOptions: nil];
[self startEditControllerFromViewController:self];
}
//cancel and finish buttons
- (BOOL) startEditControllerFromViewController: (UIViewController*) controller {
[_cancelEdit addTarget:self action:@selector(cancelEdit:) forControlEvents:UIControlEventTouchUpInside];
[_finishEdit addTarget:self action:@selector(finishEdit:) forControlEvents:UIControlEventTouchUpInside];
return YES;
}
//adjust brightness
- (IBAction)brightnessSlider:(UISlider *)sender {
[_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey:@"inputBrightness"];
CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
_imageView.image = [UIImage imageWithCGImage: cgiimg];
CGImageRelease(cgiimg);
}
I believe that the problem stems from the brightnessSlider method, based on breakpoints that I've placed. Is there a way to stop the auto-rotating of my photo? If not, how can I rotate it back to the normal orientation?

Mere minutes after posting, I figured out the answer to my own question. Go figure. Anyway, I simply changed the slider method to the following:
- (IBAction)brightnessSlider:(UISlider *)sender {
[_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey:@"inputBrightness"];
CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
CGFloat originalScale = _imageView.image.scale;
_imageView.image = [UIImage imageWithCGImage: cgiimg scale:originalScale orientation:originalOrientation];
CGImageRelease(cgiimg);
}
This simply records the original orientation and scale of the image, and re-sets them when the data is converted back to a UIImage. Hope this helps someone else!
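Since the same orientation fix applies to the contrast and saturation sliders, one option is to factor the render step into a small helper. This is just a sketch, reusing the _editor, _context, and _imageView names from the code above; the contrast slider action is a hypothetical addition:
//Hypothetical helper: update one CIColorControls input, render the output,
//and keep the scale/orientation of the image currently shown in _imageView.
- (void)applyEditorValue:(CGFloat)value forKey:(NSString *)key {
    [_editor setValue:@(value) forKey:key];
    CIImage *output = _editor.outputImage;
    CGImageRef cgImage = [_context createCGImage:output fromRect:output.extent];
    _imageView.image = [UIImage imageWithCGImage:cgImage
                                           scale:_imageView.image.scale
                                     orientation:_imageView.image.imageOrientation];
    CGImageRelease(cgImage);
}
- (IBAction)brightnessSlider:(UISlider *)sender {
    [self applyEditorValue:sender.value forKey:@"inputBrightness"];
}
- (IBAction)contrastSlider:(UISlider *)sender {
    [self applyEditorValue:sender.value forKey:@"inputContrast"];
}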

Related

Blurred Image for "thinking/loading" screen

I have used two tutorials to create a blurred image with a spinner that I want to use for loading/thinking overlays:
http://www.sitepoint.com/all-purpose-loading-view-for-ios/
http://x-code-tutorials.com/2013/06/18/ios7-style-blurred-overlay-in-xcode/
It is working OK, but needs some modifications. I have three questions.
1st question:
After the button is clicked it seems to take a long time to actually come up. Any suggestions?
2nd question:
The blurred image gets shifted to the left and down, either when it is captured or when it is set in the view. Any thoughts on why?
It seems the higher the inputRadius value is, the more the image shifts.
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 10] forKey:@"inputRadius"];
3rd question:
I am trying to get this to display while the API back end is doing database work. If I don't call removeBlurredOverlay, it displays and works; however, if I call it after all the database work, it won't display at all. Any thoughts? Does this need to be threaded?
BlurredOverlay.m
@implementation BlurredOverlay
+(BlurredOverlay *)loadBlurredOverlay:(UIView *)superView {
BlurredOverlay *blurredOverlay = [[BlurredOverlay alloc] initWithFrame:superView.bounds];
// Create a new image view, from the image made by our gradient method
UIImageView *blurredBackground = [[UIImageView alloc] initWithImage:[self captureBlur:superView]];
[blurredOverlay addSubview:blurredBackground];
// This is the new stuff here ;)
UIActivityIndicatorView *indicator =
[[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge];
//set color
[indicator setColor:UIColorFromRGB(0x72CE97)];
// Set the resizing mask so it's not stretched
indicator.autoresizingMask =
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleBottomMargin |
UIViewAutoresizingFlexibleLeftMargin;
// Place it in the middle of the view
indicator.center = CGPointMake(superView.bounds.origin.x + (superView.bounds.size.width / 2), superView.bounds.origin.y + (superView.bounds.size.height / 2));
// Add it into the spinnerView
[blurredOverlay addSubview:indicator];
// Start it spinning! Don't miss this step
[indicator startAnimating];
//blurredOverlay.backgroundColor = [UIColor blackColor];
[superView addSubview:blurredOverlay];
return blurredOverlay;
}
+ (UIImage *) captureBlur:(UIView *)superView {
//Get a UIImage from the UIView
UIGraphicsBeginImageContext(superView.frame.size);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey:@"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 1] forKey:@"inputRadius"]; //change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];
//create UIImage from filtered image
UIImage *blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
return blurredImage;
}
-(void)removeBlurredOverlay{
// Take me the hells out of the superView!
[super removeFromSuperview];
}
@end
MainViewController.m
...
- (IBAction)loginButton:(id)sender {
//Add a blur view to tell users the app is "thinking"
BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view];
NSInteger success = 0;
//Check to see if the username or password textfields are empty or email field is in wrong format
if([self validFields]){
//Try to login user
success = [self loginUser]; //loginUser sends the http to the back end API that does the database stuff
}
//If successful, go to the View
if (success) {
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay]; //This makes it not display at all
//Seque to the main View
[self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
}
else
{
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay]; //This makes it not display at all
self.passwordTextField.text = @"";
}
}
Here is my answer for Question 1:
I'm new to threads, so any advice would be greatly appreciated.
I added another method in BlurredImageView.m that builds the view taking an already-blurred image as a parameter.
I made the captureBlurredImage method public, and called that in a thread in viewDidLoad in Login.m first, then passed the blurred image into the new loadBlurredOverlay. I also moved the login processing into a thread. It is really fast now, however:
Question #3 still remains!!!!
If I call [blurredOverlay removeBlurredOverlay]; in LoginViewController.m, which calls [self removeFromSuperview]; in BlurredOverlay.m, the blurred image and spinner never come up. If I comment it out, it works like a charm, but then I can't get it to dismiss after the login processing is done.
Comments and help will be appreciated. I will edit this answer if we can get to the bottom of this.
BlurredImage.m
#import "BlurredOverlay.h"
@implementation BlurredOverlay
+(BlurredOverlay *)loadBlurredOverlay:(UIView *)superView :(UIImage *) blurredImage {
NSLog(@"In loadBlurredOverlay with parameter blurredImage: %@", blurredImage);
BlurredOverlay *blurredOverlay = [[BlurredOverlay alloc] initWithFrame:superView.bounds];
// Create a new image view, from the image made by our gradient method
UIImageView *blurredBackground = [[UIImageView alloc] initWithImage:blurredImage];
[blurredOverlay addSubview:blurredBackground];
// This is the new stuff here ;)
UIActivityIndicatorView *indicator =
[[UIActivityIndicatorView alloc]
initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge];
//set color
[indicator setColor:UIColorFromRGB(0x72CE97)];
// Set the resizing mask so it's not stretched
indicator.autoresizingMask =
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleBottomMargin |
UIViewAutoresizingFlexibleLeftMargin;
// Place it in the middle of the view
indicator.center = CGPointMake(superView.bounds.origin.x + (superView.bounds.size.width / 2), superView.bounds.origin.y + (superView.bounds.size.height / 2));
// Add it into the spinnerView
[blurredOverlay addSubview:indicator];
// Start it spinning! Don't miss this step
[indicator startAnimating];
//blurredOverlay.backgroundColor = [UIColor blackColor];
[superView addSubview:blurredOverlay];
[superView bringSubviewToFront:blurredOverlay];
return blurredOverlay;
}
+ (UIImage *) captureBlurredImage:(UIView *)superView {
//Get a UIImage from the UIView
UIGraphicsBeginImageContext(superView.frame.size);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey:@"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 1] forKey:@"inputRadius"]; //change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];
//create UIImage from filtered image
UIImage *blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
return blurredImage;
}
-(void)removeBlurredOverlay{
// Take me the hells out of the superView!
[self removeFromSuperview];
}
@end
LoginViewController.m
...
-(void)viewDidAppear:(BOOL)animated
{
//Get a blurred image of the view in a thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
self.blurredImage = [BlurredOverlay captureBlurredImage:self.view];
});
}
...
//Send the username and password to backend for verification
//If verified, go to ViewController
- (IBAction)loginButton:(id)sender {
__block BOOL success = false;
//Add a blur view with spinner to tell user the app is processing login information
BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view :self.blurredImage];
//Login user in a thread
//Get a blurred image of the view in a thread
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
//Check to see if the username or password textfields are empty or the email field is in the wrong format
if([self validFields]){
//Try to login user
success = [self loginUser];
}
else {
success = false;
}
dispatch_async(dispatch_get_main_queue(), ^{
//If successful, go to the ViewController
if (success) {
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay];
//Seque to the main ViewController
[self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
}
else
{
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay];
//Reset passwordTextField
self.passwordTextField.text = @"";
}
});
});
}
...
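A possible culprit in the snippet above, and just an observation on the posted code: loginButton: runs on the main thread, and dispatch_sync blocks that thread until loginUser returns, so the overlay added right before it may never get a chance to draw. A minimal sketch of an asynchronous variant, keeping the names used above:
//Sketch: run the login work asynchronously so the main run loop can draw
//the overlay first, then hop back to the main queue to dismiss it.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    BOOL ok = [self validFields] && [self loginUser];
    dispatch_async(dispatch_get_main_queue(), ^{
        [blurredOverlay removeBlurredOverlay]; //safe to remove now that the work is done
        if (ok) {
            [self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
        } else {
            self.passwordTextField.text = @"";
        }
    });
});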

How to recognize particular image is animating during imageview animation?

I am creating a timer which plays a song for 6 seconds. But I want to know, whenever I press the button, which particular image is animating at that instant. For that I used CALayer, but it does not give me the image name.
This is my code:
-(void)playSong
{
timerButton.imageView.animationImages = [NSArray arrayWithObjects:[UIImage imageNamed:@"Timer6.png"],[UIImage imageNamed:@"Timer5.png"],
[UIImage imageNamed:@"Timer4.png"],[UIImage imageNamed:@"Timer3.png"],
[UIImage imageNamed:@"Timer2.png"],[UIImage imageNamed:@"Timer1.png"],
nil];
timerButton.imageView.animationDuration=6.0;
[self.player play];
[timerButton.imageView startAnimating];
}
- (void)handleLongPressGestures:(UILongPressGestureRecognizer *)sender
{
if (sender.state == UIGestureRecognizerStateBegan)
{
CALayer *player1 = timerButton.imageView.layer;
}
}
Please help me.
Unfortunately, you cannot get the image name from a UIImageView or UIImage object.
You could, however, set an accessibility identifier on a UIImage object which, in your case, could be its file name.
Then simply doing someUIImageObject.accessibilityIdentifier should return the file name.
(refer answer)
Example:
UIImage *image = [UIImage imageNamed:@"someImage.png"];
[image setAccessibilityIdentifier:@"someImage"];
NSLog(@"%@", image.accessibilityIdentifier);
Now... you'd expect timerButton.imageView.image.accessibilityIdentifier to give you the image name but that's not how it works when the UIImageView is animating.
This time, we need to access it via the UIImageViews animationImages array property.
For this, we will need some custom logic to get the UIImage object from this animationImages array.
We'll first record the time we started animating the images and with some basic maths, we can calculate the index of the animationImages array that is currently being displayed in the UIImageView.
(refer answer)
Answer (code):
-(void)playSong
{
NSMutableArray *arrImages = [[NSMutableArray alloc] init];
for(int i = 6 ; i > 0 ; i--) {
//Generate image file names: "Timer6.png", "Timer5.png", ..., "Timer1.png"
NSString *strImageName = [NSString stringWithFormat:@"Timer%d.png", i];
//create image object
UIImage *image = [UIImage imageNamed:strImageName];
//remember image object's filename
[image setAccessibilityIdentifier:strImageName];
[arrImages addObject:image];
}
[timerButton.imageView setAnimationImages:arrImages];
[timerButton.imageView setAnimationDuration:6.0f];
[timerButton.imageView setAnimationRepeatCount:0];
[self.player play];
[timerButton.imageView startAnimating];
//record start time
startDate = [NSDate date]; //declare "NSDate *startDate;" globally
}
- (void)handleLongPressGestures:(UILongPressGestureRecognizer *)sender
{
if (sender.state == UIGestureRecognizerStateBegan) {
[self checkCurrentImage];
}
}
-(void)checkCurrentImage
{
//logic to determine which image is being displayed from the animationImages array
NSTimeInterval durationElapsed = -1 * [startDate timeIntervalSinceNow];
NSTimeInterval durationPerFrame = timerButton.imageView.animationDuration / timerButton.imageView.animationImages.count;
int currentIndex = (int)(durationElapsed / durationPerFrame) % timerButton.imageView.animationImages.count;
UIImage *imageCurrent = timerButton.imageView.animationImages[currentIndex];
NSString *strCurrentImage = imageCurrent.accessibilityIdentifier;
NSLog(@"%@", strCurrentImage);
}
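For example, with six frames over six seconds, durationPerFrame is 1.0; if the long press lands 2.4 seconds after startDate, currentIndex works out to 2 and the log prints "Timer4.png" (the third image added to the array).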

dataProvider is 0x0 / nil (GPUImage Framework)

I wrote some code which creates a filter and can be controlled via a UISlider.
But if I slide the UISlider, the app crashes.
My code:
.m file:
- (void) viewDidLoad {
[_sliderBrightness addTarget:self action:@selector(brightnessFilter) forControlEvents:UIControlEventValueChanged];
_sliderBrightness.minimumValue = -1.0;
_sliderBrightness.maximumValue = 1.0;
_sliderBrightness.value = 0.0;
}
- (IBAction)sliderBrightness:(UISlider *)sender {
CGFloat midpoint = [(UISlider *)sender value];
[(GPUImageBrightnessFilter *)brightFilter setBrightness:midpoint - 0.1];
[(GPUImageBrightnessFilter *)brightFilter setBrightness:midpoint + 0.1];
[sourcePicture processImage];
}
- (void) brightnessFilter {
UIImage *inputImage = _imgView.image;
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
brightFilter = [[GPUImageBrightnessFilter alloc] init];
GPUImageView *imgView2 = (GPUImageView *)self.view;
[brightFilter useNextFrameForImageCapture];
[sourcePicture addTarget:brightFilter];
[sourcePicture processImage];
UIImage* outputImage = [brightFilter imageFromCurrentFramebufferWithOrientation:0];
[_imgView setImage:outputImage];
}
Error:
GPUImageFramebuffer.m:
}
else
{
[self activateFramebuffer];
rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
[self unlock]; // Don't need to keep this around anymore
}
In this line of code:
[self activateFramebuffer];
Error message:
Thread 1: EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
Console:
self = (GPUImageFramebuffer *const) 0x10a0a6960
rawImagePixels = (GLubyte *) 0x190
dataProvider = (CGDataProviderRef) 0x0
renderTarget = (CVPixelBufferRef) 0x8
Maybe the dataProvider causes the crash but I don't really know because I'm new in developing iOS apps.
This obviously isn't going to work (and shouldn't even compile) because GPUImageBrightnessFilter has no -setTopFocusLevel: or -setBottomFocusLevel: method. You copied this from my sample application without changing these methods to the one appropriate to a brightness filter (which is the brightness property).
It's also rather confusing (and potentially problematic) to have both a brightnessFilter instance variable and -brightnessFilter method. You probably want to rename the former to make it clear that's where you're performing your initial setup of the filter and source image. You'll also need to call that in your view controller's setup (after your Nib is loaded).
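For reference, here is a sketch of what the slider handler might look like once it only drives the brightness property. The variable names come from the question, and GPUImage's useNextFrameForImageCapture / imageFromCurrentFramebufferWithOrientation: capture pattern is assumed; the one-time setup is assumed to have run already:
- (IBAction)sliderBrightness:(UISlider *)sender {
    //Drive only the brightness property of the filter
    [(GPUImageBrightnessFilter *)brightFilter setBrightness:sender.value];
    //Ask GPUImage to keep the next processed frame around for capture
    [brightFilter useNextFrameForImageCapture];
    [sourcePicture processImage];
    //Pull the processed frame back out and show it
    _imgView.image = [brightFilter imageFromCurrentFramebufferWithOrientation:UIImageOrientationUp];
}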

Global variable is not being updated fast enough 50% of the time

I have a photo taking app. When the user presses the button to take a photo, I set a global NSString variable called self.hasUserTakenAPhoto equal to YES. This works perfectly 100% of the time when using the rear facing camera. However, it only works about 50% of the time when using the front facing camera and I have no idea why.
Below are the important pieces of code and a quick description of what they do.
Here is my viewDidLoad:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
self.topHalfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height/2);
self.takingPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
self.afterPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
self.bottomHalfView.frame = CGRectMake(0, 240, self.view.bounds.size.width, self.view.bounds.size.height/2);
PFFile *imageFile = [self.message objectForKey:@"file"];
NSURL *imageFileURL = [[NSURL alloc]initWithString:imageFile.url];
imageFile = nil;
self.imageData = [NSData dataWithContentsOfURL:imageFileURL];
imageFileURL = nil;
self.topHalfView.image = [UIImage imageWithData:self.imageData];
//START CREATING THE SESSION
self.session =[[AVCaptureSession alloc]init];
[self.session setSessionPreset:AVCaptureSessionPresetPhoto];
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];
if([self.session canAddInput:self.deviceInput])
[self.session addInput:self.deviceInput];
_previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:_session];
self.rootLayer = [[self view]layer];
[self.rootLayer setMasksToBounds:YES];
[_previewLayer setFrame:CGRectMake(0, 240, self.rootLayer.bounds.size.width, self.rootLayer.bounds.size.height/2)];
[_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.rootLayer insertSublayer:_previewLayer atIndex:0];
self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[self.session addOutput:self.videoOutput];
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[self.videoOutput setSampleBufferDelegate:self queue:queue];
[_session startRunning];
}
The Important part of viewDidLoad starts where I left the comment of //START CREATING THE SESSION
I basically create the session and then start running it. I have set this view controller as an AVCaptureVideoDataOutputSampleBufferDelegate, so as soon as the session starts running, the method below starts being called as well.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
//Sample buffer data is being sent, but don't actually use it until self.hasUserTakenAPhoto has been set to YES.
NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);
if([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {
//Now that self.hasUserTakenAPhoto is equal to YES, grab the current sample buffer and use it for the value of self.image aka the captured photo.
self.image = [self imageFromSampleBuffer:sampleBuffer];
}
}
This code is receiving the video output from the camera every second, but I don't actually do anything with it until self.hasUserTakenAPhoto is equal to YES. Once that has a string value of YES, then I use the current sampleBuffer from the camera and place it inside my global variable called self.image
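The imageFromSampleBuffer: helper isn't shown in the question; for context, here is a sketch along the lines of Apple's documented CMSampleBuffer-to-UIImage conversion, assuming the 32BGRA pixel format configured on the video output above:
//Sketch of the helper the question calls but doesn't show.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    //Wrap the BGRA pixel data in a bitmap context and make a CGImage from it
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}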
So, here is when self.hasUserTakenAPhoto is actually set to YES.
Below is my IBAction code that is called when the user presses the button to capture a photo. A lot happens when this code runs, but really all that matters is the very first statement of: self.hasUserTakenAPhoto = @"YES";
-(IBAction)stillImageCapture {
self.hasUserTakenAPhoto = @"YES";
[self.session stopRunning];
if(self.inputDevice.position == 2) {
self.image = [self selfieCorrection:self.image];
} else {
self.image = [self rotate:UIImageOrientationRight];
}
CGFloat widthToHeightRatio = _previewLayer.bounds.size.width / _previewLayer.bounds.size.height;
CGRect cropRect;
// Set the crop rect's smaller dimension to match the image's smaller dimension, and
// scale its other dimension according to the width:height ratio.
if (self.image.size.width < self.image.size.height) {
cropRect.size.width = self.image.size.width;
cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
cropRect.size.width = self.image.size.height * widthToHeightRatio;
cropRect.size.height = self.image.size.height;
}
// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
cropRect.origin.x = 0;
cropRect.origin.y = (self.image.size.height - cropRect.size.height)/2.0;
NSLog(@"Y Math: %f", (self.image.size.height - cropRect.size.height));
} else {
cropRect.origin.x = (self.image.size.width - cropRect.size.width)/2.0;
cropRect.origin.y = 0;
float cropValueDoubled = self.image.size.height - cropRect.size.height;
float final = cropValueDoubled/2;
finalXValueForCrop = final;
}
CGRect cropRectFinal = CGRectMake(cropRect.origin.x, finalXValueForCrop, cropRect.size.width, cropRect.size.height);
CGImageRef imageRef = CGImageCreateWithImageInRect([self.image CGImage], cropRectFinal);
UIImage *image2 = [[UIImage alloc]initWithCGImage:imageRef];
self.image = image2;
CGImageRelease(imageRef);
self.bottomHalfView.image = self.image;
if ([self.hasUserTakenAPhoto isEqual:@"YES"]) {
[self.takingPhotoView setHidden:YES];
self.image = [self screenshot];
[_afterPhotoView setHidden:NO];
}
}
So basically the viewDidLoad method runs and the session is started, the session is sending everything the camera sees to the captureOutput method, and then as soon as the user presses the "take a photo" button we set the string value of self.hasUserTakenAPhoto to YES, the session stops, and since self.hasUserTakenAPhoto is equal to YES now, the captureOutput method places the very last camera buffer into the self.image object for me to use.
I just can't figure this out because like I said it works 100% of the time when using the rear facing camera. However, when using the front facing camera it only works 50% of the time.
I have narrowed the problem down to the fact that self.hasUserTakenAPhoto does not update to YES fast enough when using the front facing camera, and I know this because of the NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto); statement in the second block of code I posted.
When this works correctly, and the user has just pressed the button to capture a photo (which also stops the session), the very last time that NSLog statement runs it prints the correct value of YES.
However, when it doesn't work correctly and doesn't update fast enough, the very last time it runs it still prints a value of null.
Any ideas on why self.hasUserTakenAPhoto does not update fast enough 50% of the time when using the front facing camera? Even if we can't figure that out, it doesn't matter. I just need help coming up with an alternate solution to this.
Thanks for the help.
I think it's a scheduling problem. At the return point of your methods
– captureOutput:didOutputSampleBuffer:fromConnection:
– captureOutput:didDropSampleBuffer:fromConnection:
add a CFRunLoopRun()

Can the new tintColor property of UIImageView in iOS 7 be used for animating images?

tintColor is a life saver, it takes app theming to a whole new (easy) level.
//the life saving bit is the new UIImageRenderingModeAlwaysTemplate mode of UIImage
UIImage *templateImage = [[UIImage imageNamed:@"myTemplateImage"] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
imageView.image = templateImage;
//set the desired tintColor
imageView.tintColor = color;
The above code will "paint" the image's non-transparent parts according to the UIImageView's tint color, which is oh so cool. No need for Core Graphics for something simple like that.
The problem I face is with animations.
Continuing from the above code:
//The array with the names of the images we want to animate
NSArray *imageNames = @[@"1", @"2", @"3", @"4", @"5"];
//The array with the actual images
NSMutableArray *images = [NSMutableArray new];
for (int i = 0; i < imageNames.count; i++)
{
[images addObject:[[UIImage imageNamed:[imageNames objectAtIndex:i]] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate]];
}
//We set the animation images of the UIImageView to the images in the array
imageView.animationImages = images;
//and start animating the animation
[imageView startAnimating];
The animation is performed correctly but the images use their original color (i.e. the color used in the gfx editing application) instead of the UIImageView's tintColor.
I am about to try to perform the animation myself (by doing something a little bit over the top, like looping through the images and setting the UIImageView's image property with an NSTimer delay so that the human eye can catch it).
Before doing that, I'd like to ask if the tintColor property of UIImageView is supposed to support what I'm trying to do with it, i.e. use it for animations.
Thanks.
Rather than animate the images myself, I decided to render the individual frames using a tint color and then let UIImage do the animation. I created a category on UIImage with the following methods:
+ (instancetype)animatedImageNamed:(NSString *)name tintColor:(UIColor *)tintColor duration:(NSTimeInterval)duration
{
NSMutableArray *images = [[NSMutableArray alloc] init];
short index = 0;
while ( index <= 1024 )
{
NSString *imageName = [NSString stringWithFormat:@"%@%d", name, index++];
UIImage *image = [UIImage imageNamed:imageName];
if ( image == nil ) break;
[images addObject:[image imageTintedWithColor:tintColor]];
}
return [self animatedImageWithImages:images duration:duration];
}
- (instancetype)imageTintedWithColor:(UIColor *)tintColor
{
CGRect imageBounds = CGRectMake( 0, 0, self.size.width, self.size.height );
UIGraphicsBeginImageContextWithOptions( self.size, NO, self.scale );
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM( context, 0, self.size.height );
CGContextScaleCTM( context, 1.0, -1.0 );
CGContextClipToMask( context, imageBounds, self.CGImage );
[tintColor setFill];
CGContextFillRect( context, imageBounds );
UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return tintedImage;
}
It works just like + [UIImage animatedImageNamed:duration:] (including looking for files named "image0", "image1", etc) except that it also takes a tint color.
Thanks to this answer for providing the image tinting code: https://stackoverflow.com/a/19152722/321527
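Usage would look something like this; the "spinner" frame names and the tint color are just placeholders:
//Assumes frames named "spinner0", "spinner1", ... exist in the bundle (hypothetical names)
UIImage *tintedAnimation = [UIImage animatedImageNamed:@"spinner"
                                             tintColor:[UIColor redColor]
                                              duration:1.0];
imageView.image = tintedAnimation; //UIImageView plays the animated image automatically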
.tintColor can probably handle it. I use NSTimers for UIButton's setTitleColor method all the time. Here's an example.
UPDATED: Tested and works on iPhone 5s iOS 7.1!
- (void)bringToMain:(UIImage *)imageNam {
timer = [NSTimer scheduledTimerWithTimeInterval:.002
target:self
selector:@selector(animateTint)
userInfo:nil
repeats:YES];
}
- (void)animateTint {
asd += 1.0f;
[imageView setTintColor:[UIColor colorWithRed:(asd / 100.0f) green:0.0f blue:0.0f alpha:1.0f]];
if (asd == 100) {
asd = 0.0f;
[timer invalidate];
}
}
