Blurred image for a "thinking/loading" screen - iOS

I have used 2 pages to create a blurred image with a spinner that I want to use for loading/thinking overlays:
http://www.sitepoint.com/all-purpose-loading-view-for-ios/
http://x-code-tutorials.com/2013/06/18/ios7-style-blurred-overlay-in-xcode/
It is working OK, but needs some modifications. I have 3 questions.
1st question:
After the button is clicked, the overlay seems to take a long time to actually come up. Any suggestions?
2nd question is:
The blurred image gets shifted to the left and down, either when it is captured or when it is set in the view. Any thoughts on why?
It seems like the higher the inputRadius value is, the more the image shifts.
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 10] forKey: @"inputRadius"];
3rd question:
I am trying to get this to display while the API backend is doing database work. If I don't call removeBlurredOverlay, it displays and works; however, if I call it after all the database work, it won't display at all. Any thoughts? Does it need to be threaded?
BlurredOverlay.m
@implementation BlurredOverlay
+(BlurredOverlay *)loadBlurredOverlay:(UIView *)superView {
BlurredOverlay *blurredOverlay = [[BlurredOverlay alloc] initWithFrame:superView.bounds];
// Create a new image view, from the image made by our gradient method
UIImageView *blurredBackground = [[UIImageView alloc] initWithImage:[self captureBlur:superView]];
[blurredOverlay addSubview:blurredBackground];
// This is the new stuff here ;)
UIActivityIndicatorView *indicator =
[[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge];
//set color
[indicator setColor:UIColorFromRGB(0x72CE97)];
// Set the resizing mask so it's not stretched
indicator.autoresizingMask =
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleBottomMargin |
UIViewAutoresizingFlexibleLeftMargin;
// Place it in the middle of the view
indicator.center = CGPointMake(superView.bounds.origin.x + (superView.bounds.size.width / 2), superView.bounds.origin.y + (superView.bounds.size.height / 2));
// Add it into the spinnerView
[blurredOverlay addSubview:indicator];
// Start it spinning! Don't miss this step
[indicator startAnimating];
//blurredOverlay.backgroundColor = [UIColor blackColor];
[superView addSubview:blurredOverlay];
return blurredOverlay;
}
+ (UIImage *) captureBlur:(UIView *)superView {
//Get a UIImage from the UIView
UIGraphicsBeginImageContext(superView.frame.size);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey: @"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 1] forKey: @"inputRadius"]; //change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey: @"outputImage"];
//create UIImage from filtered image
UIImage *blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
return blurredImage;
}
-(void)removeBlurredOverlay{
// Take me the hells out of the superView!
[super removeFromSuperview];
}
@end
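
A note on question 2: CIGaussianBlur samples beyond the edges of the source image, so the filter's output extent grows with inputRadius, and drawing that larger image at the original size makes it appear shifted (which matches the observation that a bigger radius means a bigger shift). A possible fix, shown only as a sketch and not taken from the linked tutorials, is to crop the blurred output back to the input extent and render it through a CIContext instead of using initWithCIImage: (the method name below is illustrative):

+ (UIImage *)captureCroppedBlur:(UIView *)superView {
    // Snapshot at screen scale so the capture is not taken at 1x and stretched.
    UIGraphicsBeginImageContextWithOptions(superView.bounds.size, NO, 0);
    [superView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CIImage *inputImage = [CIImage imageWithCGImage:viewImage.CGImage];
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setValue:inputImage forKey:kCIInputImageKey];
    [gaussianBlurFilter setValue:@10 forKey:kCIInputRadiusKey];

    // The blur enlarges the extent; crop back to the original so nothing shifts.
    CIImage *croppedOutput = [gaussianBlurFilter.outputImage imageByCroppingToRect:inputImage.extent];

    // Render through a CIContext; UIImageView does not reliably display
    // images created with initWithCIImage:.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:croppedOutput fromRect:inputImage.extent];
    UIImage *blurredImage = [UIImage imageWithCGImage:cgImage
                                                scale:viewImage.scale
                                          orientation:viewImage.imageOrientation];
    CGImageRelease(cgImage);
    return blurredImage;
}
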
MainViewController.m
...
- (IBAction)loginButton:(id)sender {
//Add a blur view to tell users the app is "thinking"
BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view];
NSInteger success = 0;
//Check to see if the username or password textfields are empty or email field is in wrong format
if([self validFields]){
//Try to login user
success = [self loginUser]; //loginUser sends the http to the back end API that does the database stuff
}
//If successful, go to the View
if (success) {
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay]; //This makes it not display at all
//Segue to the main View
[self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
}
else
{
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay]; //This makes it not display at all
self.passwordTextField.text = @"";
}
}

Here is my answer for Question 1:
I'm new to threads, so any advice would be greatly appreciated.
I added another method in BlurredOverlay.m to build the view, taking in an already-blurred image.
I made the captureBlurredImage method public and called it on a background thread in viewDidAppear in LoginViewController.m first, then passed the blurred image into the new loadBlurredOverlay. I also moved the login processing onto a background thread. It is really fast now; however:
Question #3 still remains!!!!
If I call [blurredOverlay removeBlurredOverlay]; in LoginViewController.m, which calls [self removeFromSuperview]; in BlurredOverlay.m, the blurred image and spinner never come up. If I comment it out, it works like a charm, but I can't get the overlay to dismiss after the login processing is done.
Comments and help will be appreciated. I will edit this answer if we can get to the bottom of this.
BlurredOverlay.m
#import "BlurredOverlay.h"
@implementation BlurredOverlay
+(BlurredOverlay *)loadBlurredOverlay:(UIView *)superView :(UIImage *) blurredImage {
NSLog(@"In loadBlurredOverlay with parameter blurredImage: %@", blurredImage);
BlurredOverlay *blurredOverlay = [[BlurredOverlay alloc] initWithFrame:superView.bounds];
// Create a new image view, from the image made by our gradient method
UIImageView *blurredBackground = [[UIImageView alloc] initWithImage:blurredImage];
[blurredOverlay addSubview:blurredBackground];
// This is the new stuff here ;)
UIActivityIndicatorView *indicator =
[[UIActivityIndicatorView alloc]
initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge];
//set color
[indicator setColor:UIColorFromRGB(0x72CE97)];
// Set the resizing mask so it's not stretched
indicator.autoresizingMask =
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleBottomMargin |
UIViewAutoresizingFlexibleLeftMargin;
// Place it in the middle of the view
indicator.center = CGPointMake(superView.bounds.origin.x + (superView.bounds.size.width / 2), superView.bounds.origin.y + (superView.bounds.size.height / 2));
// Add it into the spinnerView
[blurredOverlay addSubview:indicator];
// Start it spinning! Don't miss this step
[indicator startAnimating];
//blurredOverlay.backgroundColor = [UIColor blackColor];
[superView addSubview:blurredOverlay];
[superView bringSubviewToFront:blurredOverlay];
return blurredOverlay;
}
+ (UIImage *) captureBlurredImage:(UIView *)superView {
//Get a UIImage from the UIView
UIGraphicsBeginImageContext(superView.frame.size);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey: @"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 1] forKey: @"inputRadius"]; //change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey: @"outputImage"];
//create UIImage from filtered image
UIImage *blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
return blurredImage;
}
-(void)removeBlurredOverlay{
// Take me the hells out of the superView!
[self removeFromSuperview];
}
@end
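
UIColorFromRGB is not a UIKit symbol, so the listings above assume a helper macro along these lines (a common pattern, reproduced here as an assumption rather than the original project's definition):

// Assumed definition of the UIColorFromRGB helper used in the listings above.
#define UIColorFromRGB(rgbValue) \
    [UIColor colorWithRed:((float)((rgbValue & 0xFF0000) >> 16)) / 255.0 \
                    green:((float)((rgbValue & 0x00FF00) >> 8)) / 255.0 \
                     blue:((float)(rgbValue & 0x0000FF)) / 255.0 \
                    alpha:1.0]
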
LoginViewController.m
...
-(void)viewDidAppear:(BOOL)animated
{
//Get a blurred image of the view on a background thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
self.blurredImage = [BlurredOverlay captureBlurredImage:self.view];
});
}
...
//Send the username and password to backend for verification
//If verified, go to ViewController
- (IBAction)loginButton:(id)sender {
__block BOOL success = false;
//Add a blur view with spinner to tell user the app is processing login information
BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view :self.blurredImage];
//Login user in a thread
//Get a blurred image of the view in a thread
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
//Check to see if the username or password textfields are empty or the email field is in the wrong format
if([self validFields]){
//Try to login user
success = [self loginUser];
}
else {
success = false;
}
dispatch_async(dispatch_get_main_queue(), ^{
//If successful, go to the ViewController
if (success) {
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay];
//Segue to the main ViewController
[self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
}
else
{
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay];
//Reset passwordTextField
self.passwordTextField.text = @"";
}
});
});
}
...
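
On the remaining question 3, one likely culprit (an educated guess, not something confirmed in the original post): dispatch_sync blocks the main thread until the login work finishes, so the overlay is added and removed within the same run-loop pass and never gets drawn. A sketch of the same loginButton: flow using dispatch_async, with the overlay removed back on the main queue, would look roughly like this:

- (IBAction)loginButton:(id)sender {
    // Show the overlay immediately; the main thread stays free to draw it.
    BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view :self.blurredImage];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        BOOL success = [self validFields] ? [self loginUser] : NO;
        dispatch_async(dispatch_get_main_queue(), ^{
            // UIKit work, including removing the overlay, belongs on the main queue.
            [blurredOverlay removeBlurredOverlay];
            if (success) {
                [self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
            } else {
                self.passwordTextField.text = @"";
            }
        });
    });
}
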

Related

UICollectionView scroll lag with SDWebImage

Background
I have searched around SO and the Apple forums. Quite a lot of people have talked about the performance of collection view cells with images. Most of them say the lag on scroll comes from loading the image on the main thread.
By using SDWebImage, the images should be loaded on a separate thread. However, it lags only in landscape mode in the iPad simulator.
Problem description
In portrait mode, the collection view loads 3 cells per row, and there is no lag, or only an insignificant delay.
In landscape mode, the collection view loads 4 cells per row, and there is an obvious lag and a drop in frame rate.
I have checked with Instruments using the Core Animation tool. The frame rate drops to about 8 fps when new cells appear. I am not sure what is causing such low performance for the collection view.
I hope someone knows the tricky part.
Here is the related code.
In The View Controller
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
ProductCollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"ProductViewCell" forIndexPath:indexPath];
Product *tmpProduct = (Product*)_ploader.loadedProduct[indexPath.row];
cell.product = tmpProduct;
if (cellShouldAnimate) {
cell.alpha = 0.0;
[UIView animateWithDuration:0.2
delay:0
options:(UIViewAnimationOptionCurveLinear | UIViewAnimationOptionAllowUserInteraction)
animations:^{
cell.alpha = 1.0;
} completion:nil];
}
if(indexPath.row >= _ploader.loadedProduct.count - ceil((LIMIT_COUNT * 0.3)))
{
[_ploader loadProductsWithCompleteBlock:^(NSError *error){
if (nil == error) {
cellShouldAnimate = NO;
[_collectionView reloadData];
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC), dispatch_get_main_queue(), ^{
cellShouldAnimate = YES;
});
} else if (error.code != 1){
#ifdef DEBUG_MODE
ULog(@"Error.des : %@", error.description);
#else
CustomAlertView *alertView = [[CustomAlertView alloc]
initWithTitle:@"Connection Error"
message:@"Please retry."
buttonTitles:@[@"OK"]];
[alertView show];
#endif
}
}];
}
return cell;
}
prepareForReuse in the collection view cell
- (void)prepareForReuse
{
[super prepareForReuse];
CGRect bounds = self.bounds;
[_thumbnailImgView sd_cancelCurrentImageLoad];
CGFloat labelsTotalHeight = bounds.size.height - _thumbnailImgView.frame.size.height;
CGFloat brandToImageOffset = 2.0;
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
brandToImageOffset = 53.0;
}
CGFloat labelStartY = _thumbnailImgView.frame.size.height + _thumbnailImgView.frame.origin.y + brandToImageOffset;
CGFloat nameLblHeight = labelsTotalHeight * 0.46;
CGFloat priceLblHeight = labelsTotalHeight * 0.18;
_brandLbl.frame = (CGRect){{15, labelStartY}, {bounds.size.width - 30, nameLblHeight}};
CGFloat priceToNameOffset = 8.0;
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
priceToNameOffset = 18.0;
}
_priceLbl.frame = (CGRect){{5, labelStartY + nameLblHeight - priceToNameOffset}, {bounds.size.width-10, priceLblHeight}};
[_spinner stopAnimating];
[_spinner removeFromSuperview];
_spinner = nil;
}
Override the setProduct method
- (void)setProduct:(Product *)product
{
_product = product;
_spinner = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray];
_spinner.center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
[self addSubview:_spinner];
[_spinner startAnimating];
_spinner.hidesWhenStopped = YES;
// Add a spinner
__block UIActivityIndicatorView *tmpSpinner = _spinner;
__block UIImageView *tmpImgView = _thumbnailImgView;
ProductImage *thumbnailImage = _product.images[0];
[_thumbnailImgView sd_setImageWithURL:[NSURL URLWithString:thumbnailImage.mediumURL]
completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType, NSURL *imageURL) {
// dismiss the spinner
[tmpSpinner stopAnimating];
[tmpSpinner removeFromSuperview];
tmpSpinner = nil;
if (nil == error) {
// Resize the incoming images
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
CGFloat imageHeight = image.size.height;
CGFloat imageWidth = image.size.width;
CGSize newSize = tmpImgView.bounds.size;
CGFloat scaleFactor = newSize.width / imageWidth;
newSize.height = imageHeight * scaleFactor;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *small = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
dispatch_async(dispatch_get_main_queue(),^{
tmpImgView.image = small;
});
});
if (cacheType == SDImageCacheTypeNone) {
tmpImgView.alpha = 0.0;
[UIView animateWithDuration:0.2
delay:0
options:(UIViewAnimationOptionCurveLinear | UIViewAnimationOptionAllowUserInteraction)
animations:^{
tmpImgView.alpha = 1.0;
} completion:nil];
}
} else {
// loading error
[tmpImgView setImage:[UIImage imageNamed:@"broken_image_small"]];
}
}];
_brandLbl.text = [_product.brand.name uppercaseString];
_nameLbl.text = _product.name;
[_nameLbl sizeToFit];
// Format the price
NSNumberFormatter * floatFormatter = [[NSNumberFormatter alloc] init];
[floatFormatter setNumberStyle:NSNumberFormatterDecimalStyle];
[floatFormatter setDecimalSeparator:@"."];
[floatFormatter setMaximumFractionDigits:2];
[floatFormatter setMinimumFractionDigits:0];
[floatFormatter setGroupingSeparator:@","];
_priceLbl.text = [NSString stringWithFormat:@"$%@ USD", [floatFormatter stringFromNumber:_product.price]];
if (_product.salePrice.intValue > 0) {
NSString *rawStr = [NSString stringWithFormat:@"$%@ $%@ USD", [floatFormatter stringFromNumber:_product.price], [floatFormatter stringFromNumber:_product.salePrice]];
NSMutableAttributedString * string = [[NSMutableAttributedString alloc] initWithString:rawStr];
// Change all the text to red first
[string addAttribute:NSForegroundColorAttributeName
value:[UIColor colorWithRed:157/255.0 green:38/255.0 blue:29/255.0 alpha:1.0]
range:NSMakeRange(0,rawStr.length)];
// find the first space
NSRange firstSpace = [rawStr rangeOfString:@" "];
// Change from zero to space to gray color
[string addAttribute:NSForegroundColorAttributeName
value:_priceLbl.textColor
range:NSMakeRange(0, firstSpace.location)];
[string addAttribute:NSStrikethroughStyleAttributeName
value:@2
range:NSMakeRange(0, firstSpace.location)];
_priceLbl.attributedText = string;
}
}
SDWebImage is very admirable, but DLImageLoader is absolutely incredible, and a key piece of many big production apps:
https://stackoverflow.com/a/19115912/294884
It's amazingly easy to use.
To avoid the skimming problem, basically just introduce a delay before bothering to start downloading the image. Essentially, it's this simple:
dispatch_after_secs_on_main(0.4, ^
{
if ( ! [urlWasThen isEqualToString:self.currentImage] )
{
// so in other words, in fact, after a short period of time,
// the user has indeed scrolled away from that item.
// (ie, the user is skimming)
// this item is now some "new" item so of course we don't
// bother loading "that old" item
// ie, we now know the user was simply skimming over that item.
// (just TBC in the preliminary clause above,
// since the image is already in cache,
// we'd just instantly load the image - even if the user is skimming)
// NSLog(@" --- --- --- --- --- --- too quick!");
return;
}
// a short time has passed, and indeed this cell is still "that" item
// the user is NOT skimming, SO we start loading the image.
//NSLog(@" --- not too quick ");
[DLImageLoader loadImageFromURL:urlWasThen
completed:^(NSError *error, NSData *imgData)
{
if (self == nil) return;
// some time has passed while the image was loading from the internet...
if ( ! [urlWasThen isEqualToString:self.currentImage] )
{
// note that this is the "normal" situation where the user has
// moved on from the image, so no need to load.
//
// in other words: in this case, not due to skimming,
// but because SO much time has passed,
// the user has moved on to some other part of the table.
// we pointlessly loaded the image from the internet! doh!
//NSLog(@" === === 'too late!' image load!");
return;
}
UIImage *image = [UIImage imageWithData:imgData];
self.someImage.image = image;
}];
});
That's the "incredibly easy" solution.
IMO, after vast experimentation, it actually works considerably better than the more complex solution of tracking when the scroll is skimming.
Once again, DLImageLoader makes all this extremely easy: https://stackoverflow.com/a/19115912/294884
Note that the section of code above is just the "usual" way you load an image inside a cell.
Here's typical code that would do that:
-(void)imageIsNow:(NSString *)imUrl
{
// call this routine to "set the image" on this cell.
// note that "image is now" is a better name than "set the image"
// Don't forget that cells very rapidly change contents, due to
// the cell reuse paradigm on iOS.
// this cell is being told that, the image to be displayed is now this image
// being aware of scrolling/skimming issues, cache issues, etc,
// utilise this information to appropriately load/whatever the image.
self.someImage.image = nil; // that's UIImageView
self.currentImage = imUrl; // you need that string property
[self loadImageInASecIfItsTheSameAs:imUrl];
}
-(void)loadImageInASecIfItsTheSameAs:(NSString *)urlWasThen
{
// (note - at this point here the image may already be available
// in cache. if so, just display it. I have omitted that
// code for simplicity here.)
// so, right here, "possibly load with delay" the image
// exactly as shown in the code above .....
dispatch_after_secs_on_main(0.4, ^
...etc....
...etc....
}
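Note that dispatch_after_secs_on_main is not a system API; the answer presumably relies on a small convenience wrapper. A minimal sketch of such a helper, assuming it simply wraps dispatch_after on the main queue:

// Assumed convenience wrapper around GCD's dispatch_after; not part of the system headers.
static inline void dispatch_after_secs_on_main(double delayInSeconds, dispatch_block_t block)
{
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC)),
                   dispatch_get_main_queue(),
                   block);
}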
Again this is all easily possible due to DLImageLoader which is amazing. It is an amazingly solid library.

Image auto-rotates after using CIFilter

I am writing an app that lets users take a picture and then edit it. I am working on implementing tools with UISliders for brightness/contrast/saturation and am using the Core Image Filter class to do so. When I open the app, I can take a picture and display it correctly. However, if I choose to edit a picture, and then use any of the described slider tools, the image will rotate counterclockwise 90 degrees. Here's the code in question:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
self.navigationItem.hidesBackButton = YES; //hide default nav
//get image to display
DBConnector *dbconnector = [[DBConnector alloc] init];
album.moments = [dbconnector getMomentsForAlbum:album.title];
Moment *mmt = [album.moments firstObject];
_imageView.image = [mmt.moment firstObject];
CGImageRef aCGImage = _imageView.image.CGImage;
CIImage *aCIImage = [CIImage imageWithCGImage:aCGImage];
_editor = [CIFilter filterWithName:@"CIColorControls" keysAndValues:@"inputImage", aCIImage, nil];
_context = [CIContext contextWithOptions: nil];
[self startEditControllerFromViewController:self];
}
//cancel and finish buttons
- (BOOL) startEditControllerFromViewController: (UIViewController*) controller {
[_cancelEdit addTarget:self action:@selector(cancelEdit:) forControlEvents:UIControlEventTouchUpInside];
[_finishEdit addTarget:self action:@selector(finishEdit:) forControlEvents:UIControlEventTouchUpInside];
return YES;
}
//adjust brightness
- (IBAction)brightnessSlider:(UISlider *)sender {
[_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey: @"inputBrightness"];
CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
_imageView.image = [UIImage imageWithCGImage: cgiimg];
CGImageRelease(cgiimg);
}
I believe that the problem stems from the brightnessSlider method, based on breakpoints that I've placed. Is there a way to stop the auto-rotating of my photo? If not, how can I rotate it back to the normal orientation?
Mere minutes after posting, I figured out the answer to my own question. Go figure. Anyway, I simply changed the slider method to the following:
- (IBAction)brightnessSlider:(UISlider *)sender {
[_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey: @"inputBrightness"];
CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
CGFloat originalScale = _imageView.image.scale;
_imageView.image = [UIImage imageWithCGImage: cgiimg scale:originalScale orientation:originalOrientation];
CGImageRelease(cgiimg);
}
This simply records the original orientation and scale of the image, and re-sets them when the data is converted back to a UIImage. Hope this helps someone else!

Can the new tintColor property of UIImageview in iOS 7 be used for animating images?

tintColor is a lifesaver; it takes app theming to a whole new (easy) level.
//the life saving bit is the new UIImageRenderingModeAlwaysTemplate mode of UIImage
UIImage *templateImage = [[UIImage imageNamed:@"myTemplateImage"] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
imageView.image = templateImage;
//set the desired tintColor
imageView.tintColor = color;
The above code will "paint" the image's non-transparent parts according to the UIImageView's tint color, which is oh so cool. No need for Core Graphics for something simple like that.
The problem I face is with animations.
Continuing from the above code:
//The array with the names of the images we want to animate
NSArray *imageNames = @[@"1", @"2", @"3", @"4", @"5"];
//The array with the actual images
NSMutableArray *images = [NSMutableArray new];
for (int i = 0; i < imageNames.count; i++)
{
[images addObject:[[UIImage imageNamed:[imageNames objectAtIndex:i]] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate]];
}
//We set the animation images of the UIImageView to the images in the array
imageView.animationImages = images;
//and start animating the animation
[imageView startAnimating];
The animation is performed correctly but the images use their original color (i.e. the color used in the gfx editing application) instead of the UIImageView's tintColor.
I am about to try to perform the animation myself (by doing something a little bit over the top like looping through the images and setting the UIImageView's image property with a NSTimer delay so that the human eye can catch it).
Before doing that I'd like to ask if the tintColor property of UIImageView is supposed to support what I'm trying to do with it i.e use it for animations.
Thanks.
Rather than animate the images myself, I decided to render the individual frames using a tint color and then let UIImage do the animation. I created a category on UIImage with the following methods:
+ (instancetype)animatedImageNamed:(NSString *)name tintColor:(UIColor *)tintColor duration:(NSTimeInterval)duration
{
NSMutableArray *images = [[NSMutableArray alloc] init];
short index = 0;
while ( index <= 1024 )
{
NSString *imageName = [NSString stringWithFormat:@"%@%d", name, index++];
UIImage *image = [UIImage imageNamed:imageName];
if ( image == nil ) break;
[images addObject:[image imageTintedWithColor:tintColor]];
}
return [self animatedImageWithImages:images duration:duration];
}
- (instancetype)imageTintedWithColor:(UIColor *)tintColor
{
CGRect imageBounds = CGRectMake( 0, 0, self.size.width, self.size.height );
UIGraphicsBeginImageContextWithOptions( self.size, NO, self.scale );
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM( context, 0, self.size.height );
CGContextScaleCTM( context, 1.0, -1.0 );
CGContextClipToMask( context, imageBounds, self.CGImage );
[tintColor setFill];
CGContextFillRect( context, imageBounds );
UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return tintedImage;
}
It works just like + [UIImage animatedImageNamed:duration:] (including looking for files named "image0", "image1", etc) except that it also takes a tint color.
Thanks to this answer for providing the image tinting code: https://stackoverflow.com/a/19152722/321527
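A brief usage sketch, assuming the bundle contains frames named "spinner0", "spinner1", and so on (the names and the one-second duration are illustrative):

// Build the tinted animated image with the category above; UIImageView
// animates an animated UIImage automatically once it is assigned.
UIImage *spinner = [UIImage animatedImageNamed:@"spinner"
                                     tintColor:[UIColor redColor]
                                      duration:1.0];
imageView.image = spinner;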
.tintColor can probably handle it. I use NSTimers for UIButton's setTitleColor method all the time. Here's an example.
UPDATED: Tested and works on iPhone 5s iOS 7.1!
- (void)bringToMain:(UIImage *)imageNam {
timer = [NSTimer scheduledTimerWithTimeInterval:.002
target:self
selector:@selector(animateTint)
userInfo:nil
repeats:YES];
}
- (void)animateTint {
asd += 1.0f;
[imageView setTintColor:[UIColor colorWithRed:(asd / 100.0f) green:0.0f blue:0.0f alpha:1.0f]];
if (asd == 100) {
asd = 0.0f;
[timer invalidate];
}
}

Animating UIImageView with colorWithPatternImage

I have a UIImageView that is animated using the following code:
NSMutableArray *imageArray = [[NSMutableArray alloc] init];
for(int i = 1; i < 15; i++) {
NSString *str = [NSString stringWithFormat:@"marker_%i.png", i];
UIImage *img = [UIImage imageNamed:str];
if(img != nil) {
[imageArray addObject:img];
}
}
_imageContainer.animationImages = imageArray;
_imageContainer.animationDuration = 0.5f;
[_imageContainer startAnimating];
What I want now is to repeat the image to get a pattern. There is colorWithPatternImage, but that isn't made for animations.
I want the whole background filled with an animated pattern.
Instead of using very large images (960x640), I could use a 64x64 image, for example, and repeat it to fill the screen.
Is there any way to do this?
Leave your code as it is, but instead of using UIImageView, use my subclass:
//
// AnimatedPatternView.h
//
#import <UIKit/UIKit.h>
@interface AnimatedPatternView : UIImageView
@end
//
// AnimatedPatternView.m
//
#import "AnimatedPatternView.h"
@implementation AnimatedPatternView
-(void)setAnimationImages:(NSArray *)imageArray
{
NSMutableArray* array = [NSMutableArray arrayWithCapacity:imageArray.count];
for (UIImage* image in imageArray) {
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, 0);
UIColor* patternColor = [UIColor colorWithPatternImage:image];
[patternColor setFill];
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextFillRect(ctx, self.bounds);
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[array addObject:img];
}
[super setAnimationImages:array];
}
@end
If you create the view using interface builder you only need to set the class of the image view in the identity inspector.
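If the view is created in code instead, a usage sketch might look like this (the frame and the imageArray from the question are assumed to be available):

// The subclass turns each frame into a screen-filling pattern before animating.
AnimatedPatternView *patternView = [[AnimatedPatternView alloc] initWithFrame:self.view.bounds];
patternView.animationImages = imageArray; // e.g. the 64x64 tiles mentioned in the question
patternView.animationDuration = 0.5f;
[self.view addSubview:patternView];
[patternView startAnimating];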

How can I make a blind down effect for an image in iOS?

I want the image to blind up/down when I roll the iPad. The effect should be like the Blind Down demo here:
http://madrobby.github.com/scriptaculous/combination-effects-demo/
How can I do that?
I tried Apple's Reflection example, but I had performance issues since I have to redraw the image on every gyroscope update.
Here is the Code:
- (void)viewDidLoad
{
[super viewDidLoad];
tmp = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"galata2.jpg"]];
// Do any additional setup after loading the view, typically from a nib.
NSUInteger reflectionHeight = imageView1.bounds.size.height * 1;
imageView1 = [[UIImageView alloc] init];
imageView1.image = [UIImage imageNamed:@"galata1.jpg"];
[imageView1 sizeToFit];
[self.view addSubview:imageView1];
imageView2 = [[UIImageView alloc] init];
//UIImageView *tmp = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"galata2.jpg"]];
imageView2.image = [UIImage imageNamed:@"galata2.jpg"];
[imageView2 sizeToFit];
[self.view addSubview:imageView2];
motionManager = [[CMMotionManager alloc] init];
motionManager.gyroUpdateInterval = 1.0/10.0;
[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
withHandler: ^(CMDeviceMotion *motion, NSError *error){
[self performSelectorOnMainThread:@selector(handleDeviceMotion:) withObject:motion waitUntilDone:YES];
}];
}
////
- (void)handleDeviceMotion:(CMDeviceMotion*)motion{
CMAttitude *attitude = motion.attitude;
int rotateAngle = abs((int)degrees(attitude.roll));
//CMRotationRate rotationRate = motion.rotationRate;
NSLog(@"rotation rate = [Pitch: %f, Roll: %d, Yaw: %f]", degrees(attitude.pitch), abs((int)degrees(attitude.roll)), degrees(attitude.yaw));
int section = (int)(rotateAngle / 30);
int x = rotateAngle % 30;
NSUInteger reflectionHeight = (1024/30)*x;
NSLog(@"[x = %d]", reflectionHeight);
imageView2.image = [self reflectedImage:tmp withHeight:reflectionHeight];
}
////
- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
if(height == 0)
return nil;
// create a bitmap graphics context the size of the image
CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, fromImage.bounds.size.height);
// create a 2 bit CGImage containing a gradient that will be used for masking the
// main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
// function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
CGImageRef gradientMaskImage = CreateGradientImage(1, kImageHeight);
// create an image by masking the bitmap of the mainView content with the gradient view
// then release the pre-masked content bitmap and the gradient bitmap
CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, fromImage.bounds.size.width,height), gradientMaskImage);
CGImageRelease(gradientMaskImage);
// In order to grab the part of the image that we want to render, we move the context origin to the
// height of the image that we want to capture, then we flip the context so that the image draws upside down.
//CGContextTranslateCTM(mainViewContentContext, 0.0,0.0);
//CGContextScaleCTM(mainViewContentContext, 1.0, -1.0);
// draw the image into the bitmap context
CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, fromImage.bounds.size.width, fromImage.bounds.size.height), fromImage.image.CGImage);
// create CGImageRef of the main view bitmap content, and then release that bitmap context
CGImageRef reflectionImage = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished reflection image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];
// image is retained by the property setting above, so we can release the original
CGImageRelease(reflectionImage);
return theImage;
}
One way to do this is to use another covering view that gradually changes height by animation;
If you have a view called theView that you want to cover, try something like this to reveal theView underneath a cover view:
UIView *coverView = [[UIView alloc] initWithFrame:theView.frame];
coverView.backgroundColor = [UIColor whiteColor];
[theView.superview addSubview:coverView]; // this covers theView, adding it to the same view that theView is contained in
CGRect newFrame = theView.frame;
newFrame.size.height = 0;
newFrame.origin.y = theView.frame.origin.y + theView.frame.size.height;
[UIView animateWithDuration:1.5
delay: 0.0
options: UIViewAnimationOptionRepeat
animations:^{
coverView.frame = newFrame;
}
completion:nil
];
This should cover the view and then reveal it by changing the frame of the cover, moving it down while shrinking its height.
I haven't tried the code, but this is one direction you can take to create the blind effect. I have used similar code often, and it is very easy to work with. Also, it doesn't require knowing core animation.
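
If the blind should track the device roll rather than run on a timed loop, a variation (a sketch that reuses the angle math from the question's handleDeviceMotion:; coverView is assumed to be an ivar holding the white cover view created as above) is to set the cover's frame directly from the computed height:

- (void)handleDeviceMotion:(CMDeviceMotion *)motion {
    // degrees() and imageView1 come from the question; coverView is the assumed ivar.
    int rotateAngle = abs((int)degrees(motion.attitude.roll));
    int x = rotateAngle % 30;
    // Map the roll within the current 30-degree band to how much of the image is revealed.
    CGFloat revealed = (imageView1.bounds.size.height / 30.0) * x;
    CGRect coverFrame = imageView1.frame;
    coverFrame.origin.y += revealed;
    coverFrame.size.height -= revealed;
    coverView.frame = coverFrame;
}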
