I have a UIImageView that is animated using the following code:
NSMutableArray *imageArray = [[NSMutableArray alloc] init];
for(int i = 1; i < 15; i++) {
NSString *str = [NSString stringWithFormat:@"marker_%i.png", i];
UIImage *img = [UIImage imageNamed:str];
if(img != nil) {
[imageArray addObject:img];
}
}
_imageContainer.animationImages = imageArray;
_imageContainer.animationDuration = 0.5f;
[_imageContainer startAnimating];
What I want now is to repeat the image to get a pattern. There is colorWithPatternImage, but that isn't made for animations.
I want the whole background filled with an animated pattern.
Instead of using very large images (960x640), I could use a 64x64 image, for example, and repeat it to fill the screen.
Is there any way to do this?
Leave your code as it is, but instead of UIImageView use my subclass:
//
// AnimatedPatternView.h
//
#import <UIKit/UIKit.h>
@interface AnimatedPatternView : UIImageView
@end
//
// AnimatedPatternView.m
//
#import "AnimatedPatternView.h"
@implementation AnimatedPatternView
-(void)setAnimationImages:(NSArray *)imageArray
{
NSMutableArray* array = [NSMutableArray arrayWithCapacity:imageArray.count];
for (UIImage* image in imageArray) {
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, 0);
UIColor* patternColor = [UIColor colorWithPatternImage:image];
[patternColor setFill];
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextFillRect(ctx, self.bounds);
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[array addObject:img];
}
[super setAnimationImages:array];
}
@end
If you create the view using Interface Builder, you only need to set the class of the image view in the Identity Inspector.
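If you build the view in code instead, a minimal sketch might look like this (view controller context and image names assumed from the question):
// In your view controller, e.g. in viewDidLoad
AnimatedPatternView *patternView = [[AnimatedPatternView alloc] initWithFrame:self.view.bounds];

// Small 64x64 tiles; the subclass expands each one into a full-size pattern image
NSMutableArray *tiles = [[NSMutableArray alloc] init];
for (int i = 1; i < 15; i++) {
    UIImage *img = [UIImage imageNamed:[NSString stringWithFormat:@"marker_%i.png", i]];
    if (img != nil) {
        [tiles addObject:img];
    }
}

patternView.animationImages = tiles;
patternView.animationDuration = 0.5f;
[self.view addSubview:patternView];
[patternView startAnimating];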
Related
I have used two tutorials to create a blurred image with a spinner that I want to use for loading/thinking overlays:
http://www.sitepoint.com/all-purpose-loading-view-for-ios/
http://x-code-tutorials.com/2013/06/18/ios7-style-blurred-overlay-in-xcode/
It is working OK, but needs some modifications. I have three questions.
1st question:
After the button is clicked, the overlay seems to take a long time to actually come up. Any suggestions?
2nd question:
The blurred image gets shifted to the left and down, either when it is captured or when it is set in the view. Any thoughts on why?
It seems that the higher the inputRadius value is, the more the image shifts.
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 10] forKey: @"inputRadius"];
3rd question:
I am trying to get this to display while the API backend is doing database stuff. If I don't call removeBlurredOverlay then it displays and works; however, if I call it after all the database work, it won't display at all. Any thoughts? Does this need to be threaded?
BlurredOverlay.m
@implementation BlurredOverlay
+(BlurredOverlay *)loadBlurredOverlay:(UIView *)superView {
BlurredOverlay *blurredOverlay = [[BlurredOverlay alloc] initWithFrame:superView.bounds];
// Create a new image view, from the image made by our gradient method
UIImageView *blurredBackground = [[UIImageView alloc] initWithImage:[self captureBlur:superView]];
[blurredOverlay addSubview:blurredBackground];
// This is the new stuff here ;)
UIActivityIndicatorView *indicator =
[[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge];
//set color
[indicator setColor:UIColorFromRGB(0x72CE97)];
// Set the resizing mask so it's not stretched
indicator.autoresizingMask =
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleBottomMargin |
UIViewAutoresizingFlexibleLeftMargin;
// Place it in the middle of the view
indicator.center = CGPointMake(superView.bounds.origin.x + (superView.bounds.size.width / 2), superView.bounds.origin.y + (superView.bounds.size.height / 2));
// Add it into the spinnerView
[blurredOverlay addSubview:indicator];
// Start it spinning! Don't miss this step
[indicator startAnimating];
//blurredOverlay.backgroundColor = [UIColor blackColor];
[superView addSubview:blurredOverlay];
return blurredOverlay;
}
+ (UIImage *) captureBlur:(UIView *)superView {
//Get a UIImage from the UIView
UIGraphicsBeginImageContext(superView.frame.size);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey: @"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 1] forKey: @"inputRadius"]; //change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey: @"outputImage"];
//create UIImage from filtered image
UIImage *blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
return blurredImage;
}
-(void)removeBlurredOverlay{
// Take me the hells out of the superView!
[super removeFromSuperview];
}
@end
MainViewController.m
...
- (IBAction)loginButton:(id)sender {
//Add a blur view to tell users the app is "thinking"
BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view];
NSInteger success = 0;
//Check to see if the username or password textfields are empty or email field is in wrong format
if([self validFields]){
//Try to login user
success = [self loginUser]; //loginUser sends the http to the back end API that does the database stuff
}
//If successful, go to the View
if (success) {
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay]; //This makes it not display at all
//Seque to the main View
[self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
}
else
{
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay]; //This makes it not display at all
self.passwordTextField.text = @"";
}
}
Here is my answer for Question 1:
I'm new to threads, so any advice would be greatly appreciated.
I added another method in BlurredOverlay.m to build the view that takes in a pre-blurred image.
I made the captureBlurredImage method public and called it on a background queue in viewDidAppear in LoginViewController.m first, then passed the blurred image into the new loadBlurredOverlay. I also moved the login processing onto a background queue. It is really fast now, however:
Question 3 still remains!
If I call [blurredOverlay removeBlurredOverlay]; in LoginViewController.m, which calls [self removeFromSuperview]; in BlurredOverlay.m, the blurred image and spinner never come up. If I comment it out, it works like a charm, but then I can't get the overlay to dismiss after the login processing is done.
Comments and help will be appreciated. I will edit this answer if we can get to the bottom of this.
BlurredOverlay.m
#import "BlurredOverlay.h"
@implementation BlurredOverlay
+(BlurredOverlay *)loadBlurredOverlay:(UIView *)superView :(UIImage *) blurredImage {
NSLog(#"In loadBlurredOverlay with parameter blurredImage: %#", blurredImage);
BlurredOverlay *blurredOverlay = [[BlurredOverlay alloc] initWithFrame:superView.bounds];
// Create a new image view, from the image made by our gradient method
UIImageView *blurredBackground = [[UIImageView alloc] initWithImage:blurredImage];
[blurredOverlay addSubview:blurredBackground];
// This is the new stuff here ;)
UIActivityIndicatorView *indicator =
[[UIActivityIndicatorView alloc]
initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge];
//set color
[indicator setColor:UIColorFromRGB(0x72CE97)];
// Set the resizing mask so it's not stretched
indicator.autoresizingMask =
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleBottomMargin |
UIViewAutoresizingFlexibleLeftMargin;
// Place it in the middle of the view
indicator.center = CGPointMake(superView.bounds.origin.x + (superView.bounds.size.width / 2), superView.bounds.origin.y + (superView.bounds.size.height / 2));
// Add it into the spinnerView
[blurredOverlay addSubview:indicator];
// Start it spinning! Don't miss this step
[indicator startAnimating];
//blurredOverlay.backgroundColor = [UIColor blackColor];
[superView addSubview:blurredOverlay];
[superView bringSubviewToFront:blurredOverlay];
return blurredOverlay;
}
+ (UIImage *) captureBlurredImage:(UIView *)superView {
//Get a UIImage from the UIView
UIGraphicsBeginImageContext(superView.frame.size);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey: @"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 1] forKey: @"inputRadius"]; //change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey: @"outputImage"];
//create UIImage from filtered image
UIImage *blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
return blurredImage;
}
-(void)removeBlurredOverlay{
// Take me the hells out of the superView!
[self removeFromSuperview];
}
@end
LoginViewController.m
...
-(void)viewDidAppear:(BOOL)animated
{
//Get a blurred image of the view in a thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
self.blurredImage = [BlurredOverlay captureBlurredImage:self.view];
});
}
...
//Send the username and password to backend for verification
//If verified, go to ViewController
- (IBAction)loginButton:(id)sender {
__block BOOL success = false;
//Add a blur view with spinner to tell user the app is processing login information
BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view :self.blurredImage];
//Login user in a thread
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
//Check to see if the username or password textfields are empty or the email field is in the wrong format
if([self validFields]){
//Try to login user
success = [self loginUser];
}
else {
success = false;
}
dispatch_async(dispatch_get_main_queue(), ^{
//If successful, go to the ViewController
if (success) {
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay];
//Seque to the main ViewController
[self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
}
else
{
//Remove blurredOverlay
//[blurredOverlay removeBlurredOverlay];
//Reset passwordTextField
self.passwordTextField.text = @"";
}
});
});
}
...
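For what it's worth, the remaining symptom in question 3 is consistent with the main thread being blocked: dispatch_sync (and the original fully synchronous version) keeps the main thread busy until loginUser returns, so the overlay is added and removed in the same run-loop pass and never gets drawn. A sketch of the usual pattern, dispatching the slow work asynchronously and doing all UI work back on the main queue (method names taken from the code above):
- (IBAction)loginButton:(id)sender {
    // Show the overlay immediately; it can only be rendered if the main thread stays free
    BlurredOverlay *blurredOverlay = [BlurredOverlay loadBlurredOverlay:self.view :self.blurredImage];

    // Read the text fields on the main thread before going to the background
    BOOL fieldsValid = [self validFields];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        BOOL success = fieldsValid && [self loginUser];

        // Back on the main queue: now it is safe to remove the overlay and update the UI
        dispatch_async(dispatch_get_main_queue(), ^{
            [blurredOverlay removeBlurredOverlay];
            if (success) {
                [self performSegueWithIdentifier:@"loginSuccessSegue" sender:self];
            } else {
                self.passwordTextField.text = @"";
            }
        });
    });
}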
I'm having trouble getting my head around this; I've looked around for answers on here but either nothing directly applies to my question or I just can't make sense of it. (I am relatively new to this, so apologies if there is an obvious answer.)
I am inserting an array of UIImages (contained within a UIImageView) into a UIScrollView. I can programmatically scroll to points in the ScrollView, but I need to be able to identify by name which image is currently being shown after scrolling (so I can compare the image to one in another ScrollView).
How I have created my arrays and added the images to the ImageView and ScrollView is below.
ViewController.m
-(void)viewDidLoad {
...
// Store the names as strings
stringArr = [[NSMutableArray arrayWithObjects:
#"img0",
#"img1",
#"img2",
#"img3",
nil] retain];
// Add images to array
dataArr = [[NSMutableArray arrayWithObjects:
[UIImage imageNamed:[stringArr objectAtIndex:0]],
[UIImage imageNamed:[stringArr objectAtIndex:1]],
[UIImage imageNamed:[stringArr objectAtIndex:2]],
[UIImage imageNamed:[stringArr objectAtIndex:3]],
nil] retain];
// Use a dictionary to try and make it possible to retrieve an image by name
dataDictionary = [NSMutableDictionary dictionaryWithObjects:dataArr forKeys:stringArr];
i = 0;
currentY = 0.0f;
// Set up contents of scrollview
// I'm adding each of the four images four times, in a random order
for (imageCount = 0; imageCount < 4; imageCount++) {
// Add images from the array to image views inside the scroll view.
for (UIImage *image in dataArr)
{
int rand = arc4random_uniform(4);
UIImage *images = [dataArr objectAtIndex:rand];
imgView = [[UIImageView alloc] initWithImage:images];
imgView.contentMode = UIViewContentModeScaleAspectFit;
imgView.clipsToBounds = YES;
// I have tried to use this to tag each individual image
imgView.tag = i;
i++;
CGRect rect = imgView.frame;
rect.origin.y = currentY;
imgView.frame = rect;
currentY += imgView.frame.size.height;
[scrollReel addSubview:imgView];
[imgView release];
}
}
scrollReel.contentSize = CGSizeMake(100, currentY);
[self.view addSubview:scrollReel];
...
}
This is how I am working out where I am in the ScrollView (currentOffset), and also exactly which image I need to retrieve (symbolNo). The value of symbolNo is correct when I test it, but I am unsure how to use the value with respect to image name retrieval.
NSInteger currentOffset = scrollReel.contentOffset.y;
NSInteger symbolNo = (currentOffset / 100) + 1;
Thanks in advance for any assistance.
There is no way to do this. The UIImage object doesn't store its name once it's loaded.
You could get around this by using the tag property on the image views if all your images have numerical names.
Otherwise you'll need to find a new way to model your data.
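For example, sticking with the code from the question, the tag could store which of the four images was picked, and the name could then be recovered from stringArr afterwards (a rough sketch):
// When building the scroll view, remember which image was chosen for this image view
imgView.tag = rand; // rand is the index used to pick the image from dataArr

// Later, after scrolling, find the image view under the current offset and map its tag back to a name
CGFloat offsetY = scrollReel.contentOffset.y;
for (UIView *subview in scrollReel.subviews) {
    if ([subview isKindOfClass:[UIImageView class]] &&
        CGRectContainsPoint(subview.frame, CGPointMake(0, offsetY))) {
        NSString *imageName = [stringArr objectAtIndex:subview.tag];
        NSLog(@"Currently showing %@", imageName);
        break;
    }
}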
You basically need the reverse mapping of what you had. Here is a quick and dirty solution:
NSMutableDictionary *indexToImageMap = [NSMutableDictionary new];
for (imageCount = 0; imageCount < 4; imageCount++) {
// Add images from the array to image views inside the scroll view.
for (UIImage *image in dataArr)
{
int rand = arc4random_uniform(4);
UIImage *images = [dataArr objectAtIndex:rand];
imgView = [[UIImageView alloc] initWithImage:images];
imgView.contentMode = UIViewContentModeScaleAspectFit;
imgView.clipsToBounds = YES;
// I have tried to use this to tag each individual image
imgView.tag = i;
i++;
[indexToImageMap setObject:imgView forKey:[NSNumber numberWithInt:i]];
CGRect rect = imgView.frame;
rect.origin.y = currentY;
imgView.frame = rect;
currentY += imgView.frame.size.height;
[scrollReel addSubview:imgView];
[imgView release];
}
}
And to look it up you do
NSInteger currentOffset = scrollReel.contentOffset.y;
NSInteger symbolNo = (currentOffset / 100) + 1;
UIImageView *imageView = [indexToImageMap objectForKey:[NSNumber numberWithInt:symbolNo]];
Subclass UIImageView and add an imageName property. If I understand what you are asking, this should work.
#import <UIKit/UIKit.h>
@interface myImageView : UIImageView
{
__strong NSString *imageName;
}
@property (strong) NSString *imageName;
@end
#import "myImageView.h"
@implementation myImageView
@synthesize imageName;
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
}
return self;
}
@end
Then use a dictionary to keep everything, instead of an array plus a dictionary.
myImageView *imgView1 = [[myImageView alloc] init];
[imgView1 setImageName:@"image_name_here"];
[imgView1 setImage:[UIImage imageNamed:@"image_name_here"]];
NSMutableDictionary *dicti = [[NSMutableDictionary alloc] init];
[dicti setObject:imgView1 forKey:@"image_name_here_1"];
[dicti setObject:imgView2 forKey:@"image_name_here_2"];
[dicti setObject:imgView... forKey:@"image_name_here_..."];
When you find the image view, you can look up the image in the dictionary, because you now know the name of the image view.
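With the subclass in place, reading the name of the currently visible image after a scroll becomes straightforward (a sketch using the scroll view and offset from the question):
CGFloat currentOffset = scrollReel.contentOffset.y;
for (UIView *subview in scrollReel.subviews) {
    if ([subview isKindOfClass:[myImageView class]] &&
        CGRectContainsPoint(subview.frame, CGPointMake(0, currentOffset))) {
        NSString *visibleName = [(myImageView *)subview imageName];
        // Compare visibleName against the name found the same way in the other scroll view
        NSLog(@"Currently showing %@", visibleName);
        break;
    }
}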
For a certain UIImageView's animationImages, I wish to create two arrays - one for each direction.
I have PNG files I can load for one of the directions, and I wish to flip each image and add it to the second direction's array.
I tried to do something like this:
_movingLeftImages = [NSMutableArray array];
_movingRightImages = [NSMutableArray array];
int imagesCounter = 0;
while (imagesCounter <= 8)
{
NSString* imageName = [NSString stringWithFormat:@"movingImg-%i", imagesCounter];
UIImage* moveLeftImage = [UIImage imageNamed:imageName];
[_movingLeftImages addObject:moveLeftImage];
UIImage* moveRightImage = [UIImage imageWithCGImage:moveLeftImage.CGImage scale:moveLeftImage.scale orientation:UIImageOrientationUpMirrored];
[_movingRightImages addObject:moveRightImage];
imagesCounter++;
}
Upon the relevant event, I load the UIImageView animationImages with the relevant array; like this:
if ( /*should move LEFT event*/ )
{
movingImageView.animationImages = [NSArray arrayWithArray:_movingLeftImages];
}
if ( /*should move RIGHT event*/ )
{
movingImageView.animationImages = [NSArray arrayWithArray:_movingRightImages];
}
The images are not flipped. They remain as they were.
Any idea how to handle this?
Use this code for animating images:
UIImageView *animatedImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
NSArray *imageNames = @[@"bird1.png", @"bird2.png"];
// Build UIImage objects from the names; animationImages expects images, not strings
NSMutableArray *images = [NSMutableArray array];
for (NSString *name in imageNames) {
    [images addObject:[UIImage imageNamed:name]];
}
NSTimeInterval animTime = 1.0;
animatedImageView.animationImages = images;
animatedImageView.animationDuration = animTime;
animatedImageView.animationRepeatCount = 1;
[self.view addSubview:animatedImageView];
[animatedImageView startAnimating];
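If the orientation-based copies still come out unflipped, an alternative is to bake the mirroring into new bitmaps by redrawing each frame with a flipped transform; a sketch (MirroredImage is just a helper name):
// Returns a horizontally mirrored copy of an image drawn into a fresh bitmap
static UIImage *MirroredImage(UIImage *source)
{
    UIGraphicsBeginImageContextWithOptions(source.size, NO, source.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Flip the horizontal axis, then draw the image normally
    CGContextTranslateCTM(ctx, source.size.width, 0);
    CGContextScaleCTM(ctx, -1.0, 1.0);
    [source drawInRect:CGRectMake(0, 0, source.size.width, source.size.height)];
    UIImage *mirrored = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mirrored;
}

// In the loop from the question:
// [_movingRightImages addObject:MirroredImage(moveLeftImage)];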
tintColor is a lifesaver; it takes app theming to a whole new (easy) level.
//the life saving bit is the new UIImageRenderingModeAlwaysTemplate mode of UIImage
UIImage *templateImage = [[UIImage imageNamed:@"myTemplateImage"] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
imageView.image = templateImage;
//set the desired tintColor
imageView.tintColor = color;
The above code will "paint" the image's non-transparent parts according to the UIImageView's tint color, which is oh so cool. No need for Core Graphics for something simple like that.
The problem I face is with animations.
Continuing from the above code:
//The array with the names of the images we want to animate
NSArray *imageNames = @[@"1", @"2", @"3", @"4", @"5"];
//The array with the actual images
NSMutableArray *images = [NSMutableArray new];
for (int i = 0; i < imageNames.count; i++)
{
[images addObject:[[UIImage imageNamed:[imageNames objectAtIndex:i]] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate]];
}
//We set the animation images of the UIImageView to the images in the array
imageView.animationImages = images;
//and start animating the animation
[imageView startAnimating];
The animation is performed correctly but the images use their original color (i.e. the color used in the gfx editing application) instead of the UIImageView's tintColor.
I am about to try to perform the animation myself (by doing something a little bit over the top, like looping through the images and setting the UIImageView's image property with an NSTimer delay so that the human eye can catch it).
Before doing that, I'd like to ask if the tintColor property of UIImageView is supposed to support what I'm trying to do with it, i.e. use it for animations.
Thanks.
Rather than animate the images myself, I decided to render the individual frames using a tint color and then let UIImage do the animation. I created a category on UIImage with the following methods:
+ (instancetype)animatedImageNamed:(NSString *)name tintColor:(UIColor *)tintColor duration:(NSTimeInterval)duration
{
NSMutableArray *images = [[NSMutableArray alloc] init];
short index = 0;
while ( index <= 1024 )
{
NSString *imageName = [NSString stringWithFormat:@"%@%d", name, index++];
UIImage *image = [UIImage imageNamed:imageName];
if ( image == nil ) break;
[images addObject:[image imageTintedWithColor:tintColor]];
}
return [self animatedImageWithImages:images duration:duration];
}
- (instancetype)imageTintedWithColor:(UIColor *)tintColor
{
CGRect imageBounds = CGRectMake( 0, 0, self.size.width, self.size.height );
UIGraphicsBeginImageContextWithOptions( self.size, NO, self.scale );
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM( context, 0, self.size.height );
CGContextScaleCTM( context, 1.0, -1.0 );
CGContextClipToMask( context, imageBounds, self.CGImage );
[tintColor setFill];
CGContextFillRect( context, imageBounds );
UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return tintedImage;
}
It works just like +[UIImage animatedImageNamed:duration:] (including looking for files named "image0", "image1", etc.) except that it also takes a tint color.
Thanks to this answer for providing the image tinting code: https://stackoverflow.com/a/19152722/321527
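Usage is then a one-liner; assuming frames named "spinner0", "spinner1", and so on (hypothetical names):
// The frames are pre-tinted, so no tintColor is needed on the image view itself
imageView.image = [UIImage animatedImageNamed:@"spinner"
                                    tintColor:[UIColor redColor]
                                     duration:1.0];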
.tintColor can probably handle it. I use NSTimer with UIButton's setTitleColor: method all the time. Here's an example.
UPDATED: Tested and works on iPhone 5s iOS 7.1!
- (void)bringToMain:(UIImage *)imageNam {
timer = [NSTimer scheduledTimerWithTimeInterval:.002
target:self
selector:@selector(animateTint)
userInfo:nil
repeats:YES];
}
- (void)animateTint {
asd += 1.0f;
[imageView setTintColor:[UIColor colorWithRed:(asd/100.0f) green:0.0f blue:0.0f alpha:1.0f]];
if (asd == 100) {
asd = 0.0f;
[timer invalidate];
}
}
I have made a UIViewController which conforms to the UITableViewDataSource and UITableViewDelegate protocols and has a UITableView as its subview.
I have set the backgroundView property of the table to be a UIImageView in order to display an image as the background of the table.
In order to have custom spacing between the cells, I made the row height larger than I wanted and customised the cell's contentView to be the size I wanted, making it look like there is extra space (following this SO answer).
I wanted to add a blur to the cell so that the background was blurred, and I did this through Brad Larson's GPUImage framework. This works fine; however, since I want the background blur to update as it scrolls, the scrolling becomes very laggy.
My code is:
//Gets called from the -scrollViewDidScroll:(UIScrollView *)scrollView method
- (void)updateViewBG
{
UIImage *superviewImage = [self snapshotOfSuperview:self.tableView];
UIImage* newBG = [self applyTint:self.tintColour image:[filter imageByFilteringImage:superviewImage]];
self.layer.contents = (id)newBG.CGImage;
self.layer.contentsScale = newBG.scale;
}
//Code to create an image from the area behind the 'blurred cell'
- (UIImage *)snapshotOfSuperview:(UIView *)superview
{
CGFloat scale = 0.5;
if (([UIScreen mainScreen].scale > 1 || self.contentMode == UIViewContentModeScaleAspectFill)) {
CGFloat blockSize = 12.0f/5;
scale = blockSize/MAX(blockSize * 2, floor(self.blurRadius));
}
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, -self.frame.origin.x, -self.frame.origin.y);
NSArray *hiddenViews = [self prepareSuperviewForSnapshot:superview];
[superview.layer renderInContext:context];
[self restoreSuperviewAfterSnapshot:hiddenViews];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return snapshot;
}
-(UIImage*)applyTint:(UIColor*)colour image:(UIImage*)inImage{
UIImage *newImage;
if (colour) {
UIGraphicsBeginImageContext(inImage.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect area = CGRectMake(0, 0, inImage.size.width, inImage.size.height);
CGContextScaleCTM(ctx, 1, -1);
CGContextTranslateCTM(ctx, 0, -area.size.height);
CGContextSaveGState(ctx);
CGContextClipToMask(ctx, area, inImage.CGImage);
[[colour colorWithAlphaComponent:0.8] set];
CGContextFillRect(ctx, area);
CGContextRestoreGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeLighten);
CGContextDrawImage(ctx, area, inImage.CGImage);
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
} else {
newImage = inImage;
}
return newImage;
}
Now for the question:
Is there a better way to add the blur? Maybe so that the layer doesn't have to be rendered on each movement? iOS 7's Control Centre/Notification Centre seem to be able to do this without any lag.
Maybe with the GPUImageUIElement class? If so, how do I use this?
Another way I looked at was to create the blur on the background image initially and then crop out just the areas I needed; however, I couldn't get this to work, since the images may or may not be the same size as the screen, so scaling was a problem (using CGImageCreateWithImageInRect() with the rect being the cell's position in the table).
I also found out that I have to add the blur to the tableview itself with the frame being that of the cell, and the cell having a clear colour.
Thanks in advance
EDIT
Upon request, here is the code for the image cropping I attempted before:
- (void)updateViewBG
{
//self.bgImg is the pre-blurred image, -getContentViewFromCellFrame: is a convenience method to get just the content area from the whole cell (since the contentarea is smaller than the cell)
UIImage* bg = [self cropImage:self.bgImg
toRect:[LATableBlur getContentViewFromCellFrame:[self.tableView rectForRowAtIndexPath:self.cellIndexPath]]];
bg = [self applyTint:self.tintColour image:bg];
self.layer.contents = (id)bg.CGImage;
self.layer.contentsScale = bg.scale;
}
- (UIImage*)cropImage:(UIImage*)image toRect:(CGRect)frame
{
CGSize imgSize = [image size];
double heightRatio = imgSize.height/self.tableView.frame.size.height;
double widthRatio = imgSize.width/self.tableView.frame.size.width;
UIImage* cropped = [UIImage imageWithCGImage:CGImageCreateWithImageInRect(image.CGImage,
CGRectMake(frame.origin.x*widthRatio,
frame.origin.y*heightRatio,
frame.size.width*widthRatio,
frame.size.height*heightRatio))];
return cropped;
}
I managed to solve it with a solution that, at first, I didn't think would work.
Generating several blurred images is certainly not the way to go, as it costs a lot.
I used only one blurred image and cached it.
So I subclassed UITableViewCell:
@interface BlurredCell : UITableViewCell
@end
I implemented two class methods to access the cached images (blurred and normal ones):
+(UIImage *)normalImage
{
static dispatch_once_t onceToken;
static UIImage *_normalImage;
dispatch_once(&onceToken, ^{
_normalImage = [UIImage imageNamed:@"bg.png"];
});
return _normalImage;
}
I used REFrostedViewController's category on UIImage to generate the blurred image:
+(UIImage *)blurredImage
{
static dispatch_once_t onceToken;
static UIImage *_blurredImage;
dispatch_once(&onceToken, ^{
_blurredImage = [[UIImage imageNamed:@"bg.png"] re_applyBlurWithRadius:BlurredCellBlurRadius
tintColor:[UIColor colorWithWhite:1.0f
alpha:0.4f]
saturationDeltaFactor:1.8f
maskImage:nil];
});
return _blurredImage;
}
In order to have the effect of blurred frames inside the cell but still see the non-blurred image on the sides, I used two scroll views:
one with an image view holding the normal image, and the other with an image view holding the blurred image. I set the content size to be the size of the image, and the contentOffset is set through an interface.
So the table view ends up with each cell holding the whole background image but cropping it at a certain offset, while still appearing to show the entire image.
@implementation BlurredCell
- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
{
self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
if (self) {
// Initialization code
[self.contentView addSubview:self.normalScrollView];
[self.contentView addSubview:self.blurredScrollView];
}
return self;
}
-(UIScrollView *)normalScrollView
{
if (!_normalScrollView) {
_normalScrollView = [[UIScrollView alloc] initWithFrame:self.bounds];
_normalScrollView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
_normalScrollView.scrollEnabled = NO;
UIImageView *imageView =[[UIImageView alloc] initWithFrame:[UIScreen mainScreen].bounds];
imageView.contentMode = UIViewContentModeScaleToFill;
imageView.image = [BlurredCell normalImage];
_normalScrollView.contentSize = imageView.frame.size;
[_normalScrollView addSubview:imageView];
}
return _normalScrollView;
}
-(UIScrollView *)blurredScrollView
{
if (!_blurredScrollView) {
_blurredScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(BlurredCellPadding, BlurredCellPadding,
self.bounds.size.width - 2.0f * BlurredCellPadding,
self.bounds.size.height - 2.0f * BlurredCellPadding)];
_blurredScrollView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
_blurredScrollView.scrollEnabled = NO;
_blurredScrollView.contentOffset = CGPointMake(BlurredCellPadding, BlurredCellPadding);
UIImageView *imageView =[[UIImageView alloc] initWithFrame:[UIScreen mainScreen].bounds];
imageView.contentMode = UIViewContentModeScaleToFill;
imageView.image = [BlurredCell blurredImage];
_blurredScrollView.contentSize = imageView.frame.size;
[_blurredScrollView addSubview:imageView];
}
return _blurredScrollView;
}
-(void)setBlurredContentOffset:(CGFloat)offset
{
self.normalScrollView.contentOffset = CGPointMake(self.normalScrollView.contentOffset.x, offset);
self.blurredScrollView.contentOffset = CGPointMake(self.blurredScrollView.contentOffset.x, offset + BlurredCellPadding);
}
@end
setBlurredContentOffset: should be called each time the table view's content offset changes.
So in the table view delegate's implementation (the view controller) we do it in these two methods:
// For the first rows
-(void)tableView:(UITableView *)tableView willDisplayCell:(BlurredCell *)cell
forRowAtIndexPath:(NSIndexPath *)indexPath
{
[cell setBlurredContentOffset:cell.frame.origin.y];
}
// Each time the table view is scrolled
-(void)scrollViewDidScroll:(UIScrollView *)scrollView
{
for (BlurredCell *cell in [self.tableView visibleCells]) {
[cell setBlurredContentOffset:cell.frame.origin.y - scrollView.contentOffset.y];
}
}
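The cells themselves are dequeued as usual; a minimal sketch of the data source side, assuming BlurredCell was registered with the reuse identifier "BlurredCell" in viewDidLoad:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Assumes: [self.tableView registerClass:[BlurredCell class] forCellReuseIdentifier:@"BlurredCell"];
    BlurredCell *cell = [tableView dequeueReusableCellWithIdentifier:@"BlurredCell"
                                                        forIndexPath:indexPath];
    // Configure the cell's content here; the blurred background offset is kept
    // in sync by willDisplayCell: and scrollViewDidScroll: above.
    return cell;
}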
Here is a complete working demo