Fast blurring for UITableViewCell contentView background - iOS

I have made a UIViewController which conforms to the UITableViewDataSource and UITableViewDelegate protocols and has a UITableView as its subview.
I have set the backgroundView property of the table to be a UIImageView in order to display an image as the background of the table.
In order to have custom spacing between the cells, I made the row height larger than I wanted and customised the cell's contentView to be the size I wanted, making it look like there is extra space (following this SO answer).
I wanted to add a blur to the cell so that the background behind it appears blurred, and I did this through Brad Larson's GPUImage framework. This works fine; however, since I want the background blur to update as the table scrolls, scrolling becomes very laggy.
My code is:
// Gets called from the -scrollViewDidScroll:(UIScrollView *)scrollView method
- (void)updateViewBG
{
    UIImage *superviewImage = [self snapshotOfSuperview:self.tableView];
    UIImage *newBG = [self applyTint:self.tintColour image:[filter imageByFilteringImage:superviewImage]];
    self.layer.contents = (id)newBG.CGImage;
    self.layer.contentsScale = newBG.scale;
}
// Code to create an image from the area behind the 'blurred cell'
- (UIImage *)snapshotOfSuperview:(UIView *)superview
{
    CGFloat scale = 0.5;
    if (([UIScreen mainScreen].scale > 1 || self.contentMode == UIViewContentModeScaleAspectFill)) {
        CGFloat blockSize = 12.0f/5;
        scale = blockSize/MAX(blockSize * 2, floor(self.blurRadius));
    }
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, -self.frame.origin.x, -self.frame.origin.y);
    NSArray *hiddenViews = [self prepareSuperviewForSnapshot:superview];
    [superview.layer renderInContext:context];
    [self restoreSuperviewAfterSnapshot:hiddenViews];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}
- (UIImage *)applyTint:(UIColor *)colour image:(UIImage *)inImage
{
    UIImage *newImage;
    if (colour) {
        UIGraphicsBeginImageContext(inImage.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGRect area = CGRectMake(0, 0, inImage.size.width, inImage.size.height);
        CGContextScaleCTM(ctx, 1, -1);
        CGContextTranslateCTM(ctx, 0, -area.size.height);
        CGContextSaveGState(ctx);
        CGContextClipToMask(ctx, area, inImage.CGImage);
        [[colour colorWithAlphaComponent:0.8] set];
        CGContextFillRect(ctx, area);
        CGContextRestoreGState(ctx);
        CGContextSetBlendMode(ctx, kCGBlendModeLighten);
        CGContextDrawImage(ctx, area, inImage.CGImage);
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    } else {
        newImage = inImage;
    }
    return newImage;
}
Now for the question:
Is there a better way to add the blur, perhaps so that the layer doesn't have to be re-rendered on every movement? iOS 7's Control Centre/Notification Centre seem to be able to do this without any lag.
Maybe with the GPUImageUIElement class? If so, how do I use it?
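(For illustration, the usual GPUImageUIElement wiring looks roughly like the sketch below. Treat it as unverified: the class and property names are quoted from memory of the GPUImage framework, and cellContentFrame is a placeholder, not something from this project.)

    // Sketch only: capture a source view, blur it, and render the result into a GPUImageView.
    GPUImageUIElement *uiElement = [[GPUImageUIElement alloc] initWithView:self.tableView.backgroundView];
    GPUImageiOSBlurFilter *blurFilter = [[GPUImageiOSBlurFilter alloc] init];
    blurFilter.blurRadiusInPixels = 4.0;

    GPUImageView *blurredOutputView = [[GPUImageView alloc] initWithFrame:cellContentFrame]; // cellContentFrame is hypothetical
    [uiElement addTarget:blurFilter];
    [blurFilter addTarget:blurredOutputView];

    // Then, whenever the content behind the cell changes (e.g. in -scrollViewDidScroll:):
    [uiElement update];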
Another approach I looked at was to blur the background image once up front and then crop out just the areas I needed (using CGImageCreateWithImageInRect(), with the rect being the cell's position in the table). However, I couldn't get this to work: the image may or may not be the same size as the screen, so the scaling was a problem.
I also found out that I have to add the blur view to the table view itself, with its frame matching the cell's, and give the cell a clear background colour.
Thanks in advance
EDIT
Upon request, here is the code for the image cropping I attempted before:
- (void)updateViewBG
{
    // self.bgImg is the pre-blurred image; -getContentViewFromCellFrame: is a convenience method
    // to get just the content area from the whole cell (since the content area is smaller than the cell)
    UIImage *bg = [self cropImage:self.bgImg
                           toRect:[LATableBlur getContentViewFromCellFrame:[self.tableView rectForRowAtIndexPath:self.cellIndexPath]]];
    bg = [self applyTint:self.tintColour image:bg];
    self.layer.contents = (id)bg.CGImage;
    self.layer.contentsScale = bg.scale;
}
- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)frame
{
    CGSize imgSize = [image size];
    double heightRatio = imgSize.height/self.tableView.frame.size.height;
    double widthRatio = imgSize.width/self.tableView.frame.size.width;
    UIImage *cropped = [UIImage imageWithCGImage:CGImageCreateWithImageInRect(image.CGImage,
                                                                              CGRectMake(frame.origin.x*widthRatio,
                                                                                         frame.origin.y*heightRatio,
                                                                                         frame.size.width*widthRatio,
                                                                                         frame.size.height*heightRatio))];
    return cropped;
}
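(For reference, one likely culprit here is that CGImageCreateWithImageInRect() works in pixels while the cell rect is in points, and the crop above also never releases the CGImage it creates. A scale-aware sketch of the same crop, under the assumption that self.bgImg was pre-blurred at the table view's on-screen point size, might look like this:)

    // Map a rect given in the table view's point coordinates into the image's pixel
    // coordinates, crop, and rebuild the UIImage with the source image's scale so the
    // layer renders it at the right density.
    - (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)frame
    {
        CGFloat widthRatio  = image.size.width  / self.tableView.frame.size.width;
        CGFloat heightRatio = image.size.height / self.tableView.frame.size.height;

        CGRect pixelRect = CGRectMake(frame.origin.x * widthRatio * image.scale,
                                      frame.origin.y * heightRatio * image.scale,
                                      frame.size.width  * widthRatio * image.scale,
                                      frame.size.height * heightRatio * image.scale);

        CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
        UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                               scale:image.scale
                                         orientation:image.imageOrientation];
        CGImageRelease(croppedRef); // avoid leaking the intermediate CGImage
        return cropped;
    }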

I managed to solve it with a solution that, at first, I didn't think would work.
Generating several blurred images is certainly not the solution, as it is too expensive.
I used only one blurred image and cached it.
So I subclassed UITableViewCell:
@interface BlurredCell : UITableViewCell
@end
I implemented two class methods to access the cached images (blurred and normal ones)
+ (UIImage *)normalImage
{
    static dispatch_once_t onceToken;
    static UIImage *_normalImage;
    dispatch_once(&onceToken, ^{
        _normalImage = [UIImage imageNamed:@"bg.png"];
    });
    return _normalImage;
}
I used REFrostedViewController's category on UIImage to generate the blurred image
+ (UIImage *)blurredImage
{
    static dispatch_once_t onceToken;
    static UIImage *_blurredImage;
    dispatch_once(&onceToken, ^{
        _blurredImage = [[UIImage imageNamed:@"bg.png"] re_applyBlurWithRadius:BlurredCellBlurRadius
                                                                     tintColor:[UIColor colorWithWhite:1.0f
                                                                                                 alpha:0.4f]
                                                         saturationDeltaFactor:1.8f
                                                                     maskImage:nil];
    });
    return _blurredImage;
}
In order to get the effect of a blurred frame inside the cell while still seeing the non-blurred image on the sides, I used two scroll views: one holding an image view with the normal image, the other holding an image view with the blurred image. I set the content size of each to the size of its image, and the content offset is set through an interface.
So each cell ends up holding the whole background image but cropping it at a certain offset, while still appearing to show one continuous image.
@implementation BlurredCell

- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
{
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // Initialization code
        [self.contentView addSubview:self.normalScrollView];
        [self.contentView addSubview:self.blurredScrollView];
    }
    return self;
}
- (UIScrollView *)normalScrollView
{
    if (!_normalScrollView) {
        _normalScrollView = [[UIScrollView alloc] initWithFrame:self.bounds];
        _normalScrollView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
        _normalScrollView.scrollEnabled = NO;
        UIImageView *imageView = [[UIImageView alloc] initWithFrame:[UIScreen mainScreen].bounds];
        imageView.contentMode = UIViewContentModeScaleToFill;
        imageView.image = [BlurredCell normalImage];
        _normalScrollView.contentSize = imageView.frame.size;
        [_normalScrollView addSubview:imageView];
    }
    return _normalScrollView;
}
- (UIScrollView *)blurredScrollView
{
    if (!_blurredScrollView) {
        _blurredScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(BlurredCellPadding, BlurredCellPadding,
                                                                            self.bounds.size.width - 2.0f * BlurredCellPadding,
                                                                            self.bounds.size.height - 2.0f * BlurredCellPadding)];
        _blurredScrollView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
        _blurredScrollView.scrollEnabled = NO;
        _blurredScrollView.contentOffset = CGPointMake(BlurredCellPadding, BlurredCellPadding);
        UIImageView *imageView = [[UIImageView alloc] initWithFrame:[UIScreen mainScreen].bounds];
        imageView.contentMode = UIViewContentModeScaleToFill;
        imageView.image = [BlurredCell blurredImage];
        _blurredScrollView.contentSize = imageView.frame.size;
        [_blurredScrollView addSubview:imageView];
    }
    return _blurredScrollView;
}
- (void)setBlurredContentOffset:(CGFloat)offset
{
    self.normalScrollView.contentOffset = CGPointMake(self.normalScrollView.contentOffset.x, offset);
    self.blurredScrollView.contentOffset = CGPointMake(self.blurredScrollView.contentOffset.x, offset + BlurredCellPadding);
}

@end
setBlurredContentOffset: should be called each time the table view's content offset changes.
So in the table view delegate's implementation (the view controller), we do it in these two methods:
// For the first rows
- (void)tableView:(UITableView *)tableView willDisplayCell:(BlurredCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath
{
    [cell setBlurredContentOffset:cell.frame.origin.y];
}

// Each time the table view is scrolled
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    for (BlurredCell *cell in [self.tableView visibleCells]) {
        [cell setBlurredContentOffset:cell.frame.origin.y - scrollView.contentOffset.y];
    }
}
Here is a complete working demo
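Note that the snippets above reference two constants that aren't shown in the answer; they would need to be defined somewhere along these lines (the values here are only illustrative):

    // Assumed constants used by BlurredCell; pick values to taste.
    static const CGFloat BlurredCellBlurRadius = 20.0f;  // radius passed to re_applyBlurWithRadius:
    static const CGFloat BlurredCellPadding    = 10.0f;  // inset of the blurred area inside the cell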

Related

pass image with textField to another viewController

I have two view controllers. On the first I have an image view with a text field inside it; on the second view controller there is an image view. The first view controller has a done button; when the done button is tapped, I want the image view (with the text field rendered on it) to be passed to the second view controller's image view.
Is there any way to do this?
Please suggest something.
- (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame
{
    // Create a new context the size of the frame
    CGSize targetImageSize = CGSizeMake(frame.size.width, frame.size.height);
    // Check for retina image rendering option
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(targetImageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(targetImageSize);
    CGContextRef context2 = UIGraphicsGetCurrentContext();
    // The view to be rendered
    [[view layer] renderInContext:context2];
    // Get the rendered image
    UIImage *original_image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return original_image;
}
Try the function above to render the image, and the one below to merge two images:
- (UIImage *)mergeImage:(UIImage *)img
{
    CGSize offScreenSize = CGSizeMake(206, 432);
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(offScreenSize, NO, 0);
    else
        UIGraphicsBeginImageContext(offScreenSize);
    CGRect rect = CGRectMake(0, 0, 206, 432);
    [imgView.image drawInRect:rect];
    [img drawInRect:rect];
    UIImage *imagez = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imagez;
}
You can change the coordinates, width, and height according to your requirements.
Example:
This method is declared in ClassB.h
- (void) setProperties:(UIImage *)myImage MyLabel:(UILabel *)myLabel;
The above is implemented in ClassB.m:
- (void)setProperties:(UIImage *)myImage MyLabel:(UILabel *)myLabel
{
    // assign myImage and myLabel here
}
Then in ClassA
ClassB *classB = [[ClassB alloc] init];
[classB setProperties:myImage MyLabel:myLabel];
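For illustration, a hypothetical done-button action using the render function above might look like this (SecondViewController, receivedImage, and imageContainerView are placeholder names, not from the question):

    // Render the first controller's image view (with the text field drawn on top of it),
    // then hand the result to the second controller before presenting it.
    - (IBAction)doneTapped:(id)sender
    {
        UIImage *rendered = [self renderImageFromView:self.imageContainerView
                                             withRect:self.imageContainerView.bounds];
        SecondViewController *second = [[SecondViewController alloc] init];
        second.receivedImage = rendered;
        [self.navigationController pushViewController:second animated:YES];
    }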

How to get rid of a massive memory leak with a blurred image in a UITableView cell

Somehow, when I scroll to the bottom of my table (96 items only), I get 1 GB of memory usage (it increases for every cell that gets created). Each cell has an image with a blurred image in front of it, which uses a cropped version of the original image, with text on top of that. Pretty simple. I'm using the Apple-provided sample code for blurring images, available here: https://github.com/iGriever/TWSReleaseNotesView/blob/master/TWSReleaseNotesView/UIImage%2BImageEffects.m
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"itemCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath];
    NSDictionary *foodItem = [self.foodItems objectAtIndex:indexPath.row];
    // Set up the image view
    UIImageView *imageView = (UIImageView *)[cell viewWithTag:1];
    UIImage *foodImage = [UIImage imageNamed:[foodItem objectForKey:FOOD_IMAGE_FILE_KEY]];
    [imageView setImage:foodImage];
    [imageView setContentMode:UIViewContentModeScaleAspectFill];
    // Set up the label
    UILabel *labelView = (UILabel *)[cell viewWithTag:2];
    [labelView setFont:[UIFont flatFontOfSize:20.0]];
    labelView.text = @"Blah";
    // Set up the image view
    UIImageView *blurredView = (UIImageView *)[cell viewWithTag:3];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *blurredImage = [[self cropForBlur:foodImage] applyBlurWithRadius:4
                                                                        tintColor:[UIColor colorWithWhite:1.0 alpha:0.2]
                                                            saturationDeltaFactor:1.2
                                                                        maskImage:nil];
        dispatch_sync(dispatch_get_main_queue(), ^{
            blurredView.image = blurredImage;
        });
    });
    return cell;
}
Note: I know it is most likely the blur (as opposed to my cropping), as it only happens when I do the blur. It's also nothing to do with the async dispatch, since it still happens if I don't do that.
Yes I'm using ARC. Yes I'm using storyboards.
Here's cropForBlur:
- (UIImage *)cropForBlur:(UIImage *)originalImage
{
    CGSize size = [originalImage size];
    int startCroppingPosition = 100;
    if (size.height > size.width) {
        startCroppingPosition = size.height/2 + ((size.width / 320) * 45);
    } else {
        startCroppingPosition = size.height/2 + ((size.width / 320) * 45);
    }
    // WTF: Don't forget that CGImageCreateWithImageInRect believes that
    // the image is 180 rotated, so x and y are inverted, same for height and width.
    CGRect cropRect = CGRectMake(0, startCroppingPosition, size.width, ((size.width/320) * 35));
    CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], cropRect);
    UIImage *newImage = [UIImage imageWithCGImage:imageRef scale:(size.width/160) orientation:originalImage.imageOrientation];
    CGImageRelease(imageRef);
    return newImage;
}
I've also tried looking in Instruments: it shows that I'm using heaps of memory in total, but the big chunks don't show up in the breakdown. Weird.
Here's the bit in the left bar that says I'm using heaps of memory.
Here's the Allocations view in Instruments. I don't see how they could match up. I haven't got zombies on or anything (unless there's somewhere other than the edit scheme to change that).
Here's the Leaks view in Instruments after scrolling down a bit. It doesn't seem to show any real leaks, so I'm confused.
See the solution from this link. The problem is that the image has a different scale from your screen scale, so the final output image can be very large.
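One way to act on that advice (a sketch of mine, not taken from the linked answer) is to redraw the cropped image at screen scale and no larger than the blurred image view before applying the blur, so the blur never has to work on a full-resolution copy:

    // Redraw an image into a smaller, screen-scale context.
    // In the cell code above, blurredView.bounds.size would be a reasonable target size.
    - (UIImage *)downscaleImage:(UIImage *)image toSize:(CGSize)targetSize
    {
        UIGraphicsBeginImageContextWithOptions(targetSize, NO, [UIScreen mainScreen].scale);
        [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return resized;
    }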

Add image to the left of a UILabel

I have this relatively simple question, I think.
Imagine I have a UILabel with some text in it. I want an image to also be displayed (added) on the left or right side of the text.
Something like this:
http://www.zazzle.com/blue_arrow_button_left_business_card_templates-240863912615266256
Is there a way to do it using, say, UILabel methods? I couldn't find any.
Just in case someone else looks this up in the future: I would subclass UILabel and add an image property.
Then you can override the setters of the text and image properties.
- (void)setImage:(UIImage *)image {
    _image = image;
    [self repositionTextAndImage];
}

- (void)setText:(NSString *)text {
    [super setText:text];
    [self repositionTextAndImage];
}
In repositionTextAndImage, you can do your positioning calculation. The code I pasted just inserts an image on the left.
- (void)repositionTextAndImage {
    if (!self.imageView) {
        self.imageView = [[UIImageView alloc] init];
        [self addSubview:self.imageView];
    }
    self.imageView.image = self.image;
    CGFloat y = (self.frame.size.height - self.image.size.height) / 2;
    self.imageView.frame = CGRectMake(0, y, self.image.size.width, self.image.size.height);
}
Lastly, override drawTextInRect: and make sure you leave space on the left of your label so that it does not overlap with the image.
- (void)drawTextInRect:(CGRect)rect {
    // Leave some space to draw the image.
    UIEdgeInsets insets = {0, self.image.size.width + kImageTextSpacer, 0, 0};
    [super drawTextInRect:UIEdgeInsetsInsetRect(rect, insets)];
}
I just implemented a similar thing in my live project; hope it will be helpful.
- (void)setImageIcon:(UIImage *)image WithText:(NSString *)strText {
    NSTextAttachment *attachment = [[NSTextAttachment alloc] init];
    attachment.image = image;
    float offsetY = -4.5; // This can be dynamic with respect to the size of the image and the UILabel
    attachment.bounds = CGRectIntegral(CGRectMake(0, offsetY, attachment.image.size.width, attachment.image.size.height));
    NSMutableAttributedString *attachmentString = [[NSMutableAttributedString alloc] initWithAttributedString:[NSAttributedString attributedStringWithAttachment:attachment]];
    NSMutableAttributedString *myString = [[NSMutableAttributedString alloc] initWithString:strText];
    [attachmentString appendAttributedString:myString];
    _lblMail.attributedText = attachmentString;
}
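A possible call site, assuming the method lives on the controller that owns _lblMail (the icon name and text here are placeholders):

    [self setImageIcon:[UIImage imageNamed:@"mail_icon"] WithText:@"you@example.com"];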
Create a custom UIView that contains a UIImageView and a UILabel subview. You'll have to do some geometry logic within it to size the label to fit the image on the left or right, but it shouldn't be too much; see the sketch below.
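A minimal sketch of that container idea (all names and the 5pt gap are illustrative, not from the answer):

    // A view with an icon on the left and a label filling the remaining width.
    @interface IconLabelView : UIView
    @property (nonatomic, strong) UIImageView *imageView;
    @property (nonatomic, strong) UILabel *label;
    @end

    @implementation IconLabelView

    - (instancetype)initWithFrame:(CGRect)frame
    {
        self = [super initWithFrame:frame];
        if (self) {
            _imageView = [[UIImageView alloc] init];
            _label = [[UILabel alloc] init];
            [self addSubview:_imageView];
            [self addSubview:_label];
        }
        return self;
    }

    - (void)layoutSubviews
    {
        [super layoutSubviews];
        CGFloat imageSide = self.bounds.size.height; // square icon, full height of the view
        self.imageView.frame = CGRectMake(0, 0, imageSide, imageSide);
        self.label.frame = CGRectMake(imageSide + 5.0f, 0,
                                      self.bounds.size.width - imageSide - 5.0f,
                                      self.bounds.size.height);
    }

    @end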
Alternatively, create a UIImageView with your image and add the UILabel on top of it, like
[imageview addSubview:label];
and set the frame of the label according to your required position.

How can I make a blind down effect for an image in iOS?

I want the image to blind up/down when I roll the iPad. The effect should be like the Blind Down demo at
http://madrobby.github.com/scriptaculous/combination-effects-demo/
How can I do that?
I tried Apple's Reflection example, but I had performance issues since I have to redraw the image on every gyroscope update.
Here is the Code:
- (void)viewDidLoad
{
    [super viewDidLoad];
    tmp = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"galata2.jpg"]];
    // Do any additional setup after loading the view, typically from a nib.
    NSUInteger reflectionHeight = imageView1.bounds.size.height * 1;
    imageView1 = [[UIImageView alloc] init];
    imageView1.image = [UIImage imageNamed:@"galata1.jpg"];
    [imageView1 sizeToFit];
    [self.view addSubview:imageView1];

    imageView2 = [[UIImageView alloc] init];
    //UIImageView *tmp = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"galata2.jpg"]];
    imageView2.image = [UIImage imageNamed:@"galata2.jpg"];
    [imageView2 sizeToFit];
    [self.view addSubview:imageView2];

    motionManager = [[CMMotionManager alloc] init];
    motionManager.gyroUpdateInterval = 1.0/10.0;
    [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
                                       withHandler:^(CMDeviceMotion *motion, NSError *error) {
                                           [self performSelectorOnMainThread:@selector(handleDeviceMotion:) withObject:motion waitUntilDone:YES];
                                       }];
}
////
- (void)handleDeviceMotion:(CMDeviceMotion *)motion
{
    CMAttitude *attitude = motion.attitude;
    int rotateAngle = abs((int)degrees(attitude.roll));
    //CMRotationRate rotationRate = motion.rotationRate;
    NSLog(@"rotation rate = [Pitch: %f, Roll: %d, Yaw: %f]", degrees(attitude.pitch), abs((int)degrees(attitude.roll)), degrees(attitude.yaw));
    int section = (int)(rotateAngle / 30);
    int x = rotateAngle % 30;
    NSUInteger reflectionHeight = (1024/30)*x;
    NSLog(@"[x = %d]", reflectionHeight);
    imageView2.image = [self reflectedImage:tmp withHeight:reflectionHeight];
}
////
- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
    if (height == 0)
        return nil;
    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, fromImage.bounds.size.height);
    // create a 2 bit CGImage containing a gradient that will be used for masking the
    // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
    // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
    CGImageRef gradientMaskImage = CreateGradientImage(1, kImageHeight);
    // create an image by masking the bitmap of the mainView content with the gradient view
    // then release the pre-masked content bitmap and the gradient bitmap
    CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, fromImage.bounds.size.width, height), gradientMaskImage);
    CGImageRelease(gradientMaskImage);
    // In order to grab the part of the image that we want to render, we move the context origin to the
    // height of the image that we want to capture, then we flip the context so that the image draws upside down.
    //CGContextTranslateCTM(mainViewContentContext, 0.0, 0.0);
    //CGContextScaleCTM(mainViewContentContext, 1.0, -1.0);
    // draw the image into the bitmap context
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, fromImage.bounds.size.width, fromImage.bounds.size.height), fromImage.image.CGImage);
    // create CGImageRef of the main view bitmap content, and then release that bitmap context
    CGImageRef reflectionImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    // convert the finished reflection image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];
    // image is retained by the property setting above, so we can release the original
    CGImageRelease(reflectionImage);
    return theImage;
}
One way to do this is to use a covering view whose height you gradually change with an animation.
If you have a view called theView that you want to cover, try something like this to reveal theView from underneath a cover view:
UIView *coverView = [[UIView alloc] initWithFrame:theView.frame];
coverView.backgroundColor = [UIColor whiteColor];
[theView.superview addSubview:coverView]; // this covers theView, adding it to the same view that theView is contained in
CGRect newFrame = theView.frame;
newFrame.size.height = 0;
newFrame.origin.y = theView.frame.origin.y + theView.frame.size.height;
[UIView animateWithDuration:1.5
                      delay:0.0
                    options:UIViewAnimationOptionRepeat
                 animations:^{
                     coverView.frame = newFrame;
                 }
                 completion:nil];
This should cover the view and then reveal it by changing the frame of the cover, moving it down while shrinking its height.
I haven't tried the code, but this is one direction you can take to create the blind effect. I have used similar code often, and it is very easy to work with. It also doesn't require knowing Core Animation.

iOS: Drawing a CGImage into a subclass of CALayer during custom property animation in correct resolution

I'm at the end of my wisdom...
I have a view which should display an animated circular section that is pre-rendered and available as PNG files (both retina and non-retina versions, correctly named: pie_p_000.png vs. pie_p_000@2x.png). This section should be animated according to changes in a percentage value within the app. So I made a subclass of UIView which has a custom CALayer within its layer hierarchy. The plan is to implement a custom animatable property for the CALayer subclass and change pictures in the overridden method drawInContext:. So far so good: the plan worked, and the animation is shown when I change the percentage value using the view's setPercentageWithFloat: method (full source below).
The thing is: I really don't know why my iPhone 4 always shows the low-res image. I tried playing around with scale factors, but that didn't help. Either the images are presented at the right size but low-res, or they are presented at double size.
Overriding display: and setting the contents property directly has the effect that the animation doesn't appear (no image is presented during the animation); after the animation time, the final image is presented, and in that case the correct resolution image is shown.
By the way: the following code is not very sophisticated in error handling, flexibility, or elegance yet, as it is just an attempt to get the thing running ;) - so the image is still presented flipped and so on...
I hope that somebody has a hint for me.
Thanks
The view:
#import "ScrollBarView.h"
#import "ScrollBarLayer.h"

@implementation ScrollBarView
@synthesize sbl;

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
    }
    return self;
}

- (void)setImagesWithName:(NSString *)imageName {
    ScrollBarLayer *ssbl = [[ScrollBarLayer alloc] init];
    ssbl.frame = CGRectMake(0, 0, 30, 30);
    [ssbl setImagesWithName:imageName];
    [self.layer addSublayer:ssbl];
    self.sbl = ssbl;
    [ssbl release];
}

- (void)dealloc {
    self.sbl = nil;
    [super dealloc];
}

- (void)setPercentageWithFloat:(CGFloat)perc {
    if (perc > 1.0) {
        perc = 1.0;
    } else if (perc < 0) {
        perc = 0;
    }
    [self.sbl setPercentage:perc];

    CABasicAnimation *ba = [CABasicAnimation animationWithKeyPath:@"percentage"];
    ba.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
    ba.duration = 0.8;
    ba.toValue = [NSNumber numberWithFloat:perc];
    [self.sbl addAnimation:ba forKey:nil];
}

@end
The view can be configured to work with different image sets by passing different names to setImagesWithName:. Within this method, the view adds the ScrollBarLayer to its layer.
The Layer:
#import "ScrollBarLayer.h"

@implementation ScrollBarLayer
@synthesize filename;
@dynamic percentage;

- (id)init {
    self = [super init];
    if (self) {
    }
    return self;
}

- (void)setImagesWithName:(NSString *)imageName {
    self.frame = CGRectMake(0, 0, 30, 30);
    self.percentage = 0;
    self.filename = imageName;
}

+ (BOOL)needsDisplayForKey:(NSString *)key {
    if ([key isEqualToString:@"percentage"]) {
        return YES;
    }
    return [super needsDisplayForKey:key];
}

- (void)drawInContext:(CGContextRef)ctx {
    int imageIdx = (int)roundf((float)199 * self.percentage);
    NSString *thisfilename = [self.filename stringByAppendingString:[NSString stringWithFormat:@"%03d.png", imageIdx + 1]];
    UIImage *c = [UIImage imageNamed:thisfilename];
    CGImageRef img = [c CGImage];
    CGSize sz = CGSizeMake(CGImageGetWidth(img), CGImageGetHeight(img));
    // Note: the image context opened here is never read back; the drawing below goes into ctx.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(sz.width, sz.height), NO, 0.0);
    CGContextDrawImage(ctx, CGRectMake(0, 0, sz.width, sz.height), img);
    UIGraphicsEndImageContext();
}

- (void)dealloc {
    self.pictures = nil;
    self.filename = nil;
    [super dealloc];
}

@end
I could also use some light on this.
I think the issue is points versus pixels. You could check whether the device has a retina display, and if so, replace the CGRect passed as the second parameter of CGContextDrawImage(...) with one of half the width and height.
#define IS_RETINA_DISPLAY() ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] == YES && [[UIScreen mainScreen] scale] == 2.00)

...

UIGraphicsBeginImageContextWithOptions(CGSizeMake(sz.width, sz.height), NO, 0.0);
if (IS_RETINA_DISPLAY())
{
    CGContextDrawImage(ctx, CGRectMake(0, 0, sz.width/2.0f, sz.height/2.0f), img);
} else
{
    CGContextDrawImage(ctx, CGRectMake(0, 0, sz.width, sz.height), img);
}
UIGraphicsEndImageContext();
...
But I'm not sure this will give you a "smooth" result...
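Another angle worth trying (a sketch of mine, not verified against this exact project): a custom CALayer subclass does not pick up the screen scale automatically, so its backing store defaults to 1x no matter which PNG UIKit loads. Setting contentsScale where the layer is created lets drawInContext: render at retina resolution without any manual halving:

    // In ScrollBarView's setImagesWithName:, right after creating the layer.
    ScrollBarLayer *ssbl = [[ScrollBarLayer alloc] init];
    ssbl.contentsScale = [UIScreen mainScreen].scale; // match the device scale so the backing store is 2x on retina
    ssbl.frame = CGRectMake(0, 0, 30, 30);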
