I'm trying to make a photo slider similar to iOS's Photos app. I've seen PhotoScroller, and I'm using initWithContentsOfFile.
CATiledLayer seems like a good idea, except I have no way of pre-generating tiles, and the tiles take up a lot of space. The images are part of a document bundle synced with iCloud, and the photos are typically JPEG. From hours of reading, it seems that generating the tiles on the fly is slower than just loading the whole image.
It seems like the majority of the time is spent decompressing the image anyway, so moving that to a background queue and displaying a smaller placeholder in the meantime should work well. That's what I'm trying to do, and it works up to a point, but if I swipe without waiting for the image to load there is still a noticeable stutter, which sometimes causes the scroll view to hang momentarily (when paging).
This is the function that sets the image:
- (void)setImage:(UIImage *)image placeholder:(UIImage *)placeholder
{
    _image = image;
    self.zoomScale = 1.0;
    imageView.image = nil;
    imageSize = image.size;
    imageView.frame = CGRectMake(0, 0, image.size.width, image.size.height);

    MachTimer *timer = [MachTimer new];
    [timer start];
    imageView.image = placeholder;
    NSLog(@"placeholder: %f", [timer elapsedSeconds]);
    //imageView.layer.contents = (id)placeholder.CGImage;

    self.contentSize = image.size;
    [self setMaxMinZoomScalesForCurrentBounds];
    self.zoomScale = self.minimumZoomScale;

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        MachTimer *timer = [MachTimer new];
        [timer start];
        // Force decompression by drawing the image into a throwaway context.
        UIGraphicsBeginImageContext(CGSizeMake(1, 1));
        [image drawAtPoint:CGPointZero];
        UIGraphicsEndImageContext();
        NSLog(@"decode: %f", [timer elapsedSeconds]);

        dispatch_sync(dispatch_get_main_queue(), ^{
            if (_image == image) {
                MachTimer *timer = [MachTimer new];
                [timer start];
                imageView.image = image;
                NSLog(@"display: %f", [timer elapsedSeconds]);
                //imageView.layer.contents = (id)image.CGImage;
            }
        });
    });
}
My "placeholder" times are about 0.00005 - 0.00006 seconds (decompress and display), and they're 480px tall. My "decode" times (for full image) are about 0.8 to 1.2 seconds. "display" is about 0.0001 seconds (which is about 0.1 milliseconds).
So with those times the UI should be smooth as butter, but it isn't.
I've even tried to go as far as setting contents of a regular UIView to the CGImage. I've tried iOS 6.0's drawsAsynchronously, and that seems to make it a little worse.
What am I doing wrong?
EDIT:
I've pushed my sample project to GitHub:
https://github.com/lukescott/PhotoScroller
I would do the hard work (photo loading and decoding) on a background thread, and when it's done, hand the UI work (drawing or whatever) back to the main thread using one of the available tools (queues, messages, etc.).
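To sketch what I mean (this is an illustrative example, not your project's code: the method name is made up, and kCGImageSourceShouldCacheImmediately requires iOS 7+), ImageIO can be told to decode the bitmap right away on the background queue, leaving the main thread with only the cheap assignment:

#import <ImageIO/ImageIO.h>

- (void)loadDecodedImageAtURL:(NSURL *)fileURL completion:(void (^)(UIImage *decoded))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
        if (!source) return;
        // Ask ImageIO to decode now, on this background thread,
        // instead of lazily at first draw on the main thread.
        NSDictionary *options = @{(id)kCGImageSourceShouldCacheImmediately : @YES};
        CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)options);
        CFRelease(source);
        if (!cgImage) return;
        UIImage *image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion(image); // only the assignment happens here
        });
    });
}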
Before I ask my question, I should say I have read a lot about this and tried many approaches, but none has worked. I'm doing dozens of Core Image processing operations in a concurrent queue, and I need to wait for them to finish by using dispatch_barrier_async, so that only then do I perform my final render and go to the next view controller. Ironically, dispatch_barrier doesn't wait for my concurrent queue to finish. Why is that? Is it because I'm doing the Core Image processing on the wrong thread?
//Here is my concurrent queue.
dispatch_queue_t concurrentQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
I'm using it to process effects, as in this example:
-(void)setupEffects{
    //It's one of my effects as an example, which renders for previewing the effect.
    //(Excerpted from a switch over the selected effect.)
    case Effect4: {
        dispatch_async(concurrentQueue, ^{
            //BG
            self.firstCIFilter = [CIFilter filterWithName:@"CIHexagonalPixellate"
                withInputParameters:@{@"inputImage": [self getFirstCIImage], @"inputScale": @26}];
            self.lastSelectedInputImgforBG = [self applyCIEffectWithCrop];
            //FG
            self.firstCIFilter = [CIFilter filterWithName:@"CIPhotoEffectProcess"
                withInputParameters:@{@"inputImage": [self getFirstCIImage]}];
            self.fgImgWithEffect = [self applyCIEffect];
            dispatch_async(dispatch_get_main_queue(), ^{
                self.lastSelectedInputImgforFG = [self cropAndFadeAndRenderFGImage];
                [self saveEffect];
                [self loadEffectsWithIndex:effectIndex];
            });
        });
    }
//Once the user is done, it renders the image once again.
-(UIImage *)applyCIEffectWithCrop{
    __weak typeof(self) weakSelf = self;
    @autoreleasepool{
        weakSelf.firstCIContext = nil;
        weakSelf.firstResultCIImage = nil;
        weakSelf.croppingCIImage = nil;
        weakSelf.firstCIContext = [CIContext contextWithOptions:nil];
        weakSelf.firstResultCIImage = [weakSelf.firstCIFilter valueForKey:kCIOutputImageKey];
        weakSelf.croppingCIImage = [weakSelf.firstResultCIImage imageByCroppingToRect:
            CGRectMake(0, 0, weakSelf.affineClampImage1.size.width * scale,
                             weakSelf.affineClampImage1.size.height * scale)];
        return [UIImage imageFromCIImage:weakSelf.croppingCIImage
                                   scale:1.0
                             orientation:weakSelf.scaledDownInputImage.imageOrientation
                                 cropped:YES
                        withFirstCIImage:[weakSelf getFirstCIImage]];
    }
}
And then for my final render: this method needs to wait for setupEffects to finish and only then perform the segue, but it doesn't:
- (void)doneButtonAction {
    _finalRender = YES;
    CGFloat max = MAX(self.originalSizeInputImage.size.width, self.originalSizeInputImage.size.height);
    if (max <= 1700){
        //Do nothing for final render.
        self.scaledDownInputImage = self.originalSizeInputImage;
    }else{
        CGSize scaledDownSize = [self getScalingSizeForFinalRenderForImage:self.originalSizeInputImage];
        self.scaledDownInputImage = [self scaleThisImage:self.originalSizeInputImage scaledToFillSize:scaledDownSize];
    }
    imageRect = AVMakeRectWithAspectRatioInsideRect(self.scaledDownInputImage.size, self.viewWithLoadedImages.bounds);
    //Preparation for a high quality render with the high resolution input image.
    self.affineClampImage1 = [self affineClampImage];
    self.selectionCropAndBlurredImage = [self croppedFGtoGetBlurred];
    [self.imgData appendData:UIImagePNGRepresentation(self.scaledDownInputImage)];
    [self.effectClass getimageWithImageData:self.imgData];
    if (_effectMode) {
        //Applying effects again for the high resolution input image.
        [self setupEffects];
    }else{
        [self setupFilters];
    }
    dispatch_async(concurrentQueue, ^{
        //Rendering the high quality images in full resolution here.
        CGRect frame = CGRectMake(0.0, 0.0,
            self.lastSelectedInputImgforBG.size.width * self.lastSelectedInputImgforBG.scale,
            self.lastSelectedInputImgforBG.size.height * self.lastSelectedInputImgforBG.scale);
        UIGraphicsBeginImageContextWithOptions(frame.size, NO, 1.0);
        // Draw the transparent images on top of each other.
        [self.lastSelectedInputImgforBG drawInRect:frame];
        [self.lastSelectedInputImgforFG drawInRect:frame];
        self.tempImage = nil;
        self.tempImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    });
    dispatch_barrier_async(concurrentQueue, ^{
        //Getting the full resolution rendered image and going to the next
        //view controller when setupEffects and the render are finished...
        //which it doesn't wait for...
        self.finalHightqualityRenderedImage = self.tempImage;
        [self performSegueWithIdentifier:@"showShareVC" sender:self];
    });
}
I should mention that my code works without problems if I don't use my concurrent queue, but of course that blocks the UI until it's done, which is not my goal.
Your help will be truly appreciated.
I think the explanation is at the bottom of the dispatch_barrier_async documentation:
The queue you specify should be a concurrent queue that you create yourself using the dispatch_queue_create function. If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_async function.
So instead of grabbing DISPATCH_QUEUE_PRIORITY_BACKGROUND as in your first line of code, create concurrentQueue yourself using dispatch_queue_create.
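A minimal sketch of that fix (the queue label here is illustrative):

// A private concurrent queue; barriers only act as barriers on these.
dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.effects", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(concurrentQueue, ^{ /* effect work */ });
dispatch_async(concurrentQueue, ^{ /* render work */ });

dispatch_barrier_async(concurrentQueue, ^{
    // Runs only after every block submitted above has finished.
    dispatch_async(dispatch_get_main_queue(), ^{
        // Safe to read the results and perform the segue here.
    });
});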
I'm trying to combine two UIImages with the following code:
- (void)combineImage:(UIImage *)image WithFrame:(CGRect)frame Completion:(ImageProcessorCompletionBlock)block {
    __weak typeof(self) wSelf = self;
    dispatch_async(_queue, ^{
        if (wSelf) {
            typeof(wSelf) sSelf = wSelf;
            UIGraphicsBeginImageContextWithOptions(sSelf.originalImage.size, NO, 0.0);
            [sSelf.originalImage drawInRect:CGRectMake(0, 0, sSelf.originalImage.size.width, sSelf.originalImage.size.height)];
            [image drawInRect:frame];
            UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            dispatch_async(dispatch_get_main_queue(), ^{
                if (block) {
                    block(result);
                }
            });
        }
    });
}
That works, but when I checked the memory usage it scared me: every time I run the method, memory climbs and is never released. Sometimes I receive a memory warning. Can anyone tell me why, and give me a solution to the problem? Thanks a lot!
Finally I figured out the problem. In this line:
UIGraphicsBeginImageContextWithOptions(sSelf.originalImage.size, NO, 0.0);
the first parameter is the size of the image and the last one is the scale factor. I had already set the context size to the same size as the original image, but I also set the scale to 0.0, which means the scale factor of the device's main screen is used. So on a Retina device the result image's pixel dimensions are enlarged.
If I run the code several times, the result gets bigger and bigger, until it finally uses up the memory and I receive the warning.
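A one-line fix along these lines (assuming the combined result should keep the original image's scale) stops the growth:

// Use the source image's own scale so the pixel dimensions stay stable
// across repeated passes; 0.0 would substitute the main screen's scale.
UIGraphicsBeginImageContextWithOptions(sSelf.originalImage.size, NO, sSelf.originalImage.scale);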
I am trying to make an app that lets the user change the color of a UIImage. For that I am using this function I found:
- (UIImage *)imageWithTintColor:(UIColor *)color fraction:(CGFloat)fraction
{
    if (color)
    {
        UIImage *image;
        if ([UIScreen instancesRespondToSelector:@selector(scale)])
        {
            UIGraphicsBeginImageContextWithOptions([self size], NO, 0.f);
        }
        else
        {
            UIGraphicsBeginImageContext([self size]);
        }
        CGRect rect = CGRectZero;
        rect.size = [self size];
        [color set];
        UIRectFill(rect);
        [self drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];
        if (fraction > 0.0)
        {
            [self drawInRect:rect blendMode:kCGBlendModeSourceAtop alpha:fraction];
        }
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }
    return self;
}
Everything works, but the CG raster data keeps growing in memory.
I found the problem, and it was my own bad logic. I am using two views, one to display and one to work with (e.g. resize, move, rotate), and each time I was adding a subview to both, when one of them only needs to hold one at a time. A simple loop over the second view's subviews (secondView here stands in for that view):
for (UIView *view in secondView.subviews)
{
    [view removeFromSuperview];
}
did the trick for me
I had been fighting with my app, which suddenly would not launch properly, for some time. It turned out that when I switched a number of images' Render As setting to Template in the image asset file, it caused the app to totally bomb out. CG raster data was growing exponentially and finally caused the app to stop, with Xcode just saying
Lost connection with iPhone.. check connections etc
It would appear that during every launch the images get reprocessed for this Template setting, which consumed a disgusting amount of RAM and actually left the app unable to boot. To solve this, I lowered the resolution of the images - as simple as that.
I have a scroll view into which I load images from the net. I sometimes get memory warnings, which I assume are because I am doing something wrong with the image loader.
I am trying to fix the little things, and I just wanted to show the code here and hear whether there is more I can fix to get rid of the warnings.
At any time the scroller (iPad) has only 4-5 images: those from current page - 3 to current page + 3.
This is how I load the images (every image also gets a blur effect using Apple's classes):
(Should I allocate an image view every time? Can I improve anything here?)
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^
{
    NSData *imdata2 = [NSData dataWithContentsOfURL:url];
    dispatch_async(dispatch_get_main_queue(), ^
    {
        // Note: decoding, blurring and resizing all happen on the main queue here.
        UIImage *theImage = [UIImage imageWithData:imdata2 scale:1];
        UIImage *LightImage = [theImage applyLightEffect];
        UIImage *scaledImage = [resizer resizeImageToWidth:[Globals sharedGlobals].imagesWidth WithImage:theImage];
        CGRect viewSizeBack = CGRectMake(scroller.bounds.size.width * toPage, 0, scroller.bounds.size.width, scroller.bounds.size.height);
        int x = [Globals sharedGlobals].pageMargins;
        int y = ([UIScreen mainScreen].bounds.size.height - scaledImage.size.height) / 2;
        CGRect viewSizeFront = CGRectMake(x, y, scaledImage.size.width, scaledImage.size.height);
        UIImageView *backImageView = [[UIImageView alloc] initWithFrame:viewSizeBack];
        UIImageView *frontImageView = [[UIImageView alloc] initWithFrame:viewSizeFront];
        backImageView.layer.cornerRadius = 0.0;
        backImageView.layer.masksToBounds = YES;
        backImageView.image = LightImage;
        frontImageView.layer.cornerRadius = 0.0;
        frontImageView.layer.masksToBounds = YES;
        frontImageView.image = scaledImage;
        frontImageView.layer.borderWidth = 1.0;
        // UIColor components are 0-1, not 0-255 (255.0 just clamps to 1.0).
        frontImageView.layer.borderColor = [UIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:1.0].CGColor;
        [backImageView addSubview:frontImageView];
        backImageView.tag = toPage;
        frontImageView.tag = toPage;
        [scroller addSubview:backImageView];
    });
});
You should only ever have three images loaded at a maximum: the previous page (if it exists), the current page, and the next page.
Any images loaded beyond that are wasteful, because you can't see them and they're just taking up memory for no good reason. If the images aren't too big you can keep them in memory and purge them when you get a warning, but for large images this will still generally cause you issues.
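As a rough sketch of the purge (the method name is made up; the lookup assumes page views are tagged with their page index, as in your code):

// Drop any page views outside currentPage ± 1 so their images can be released.
- (void)purgePagesAroundPage:(NSInteger)currentPage
{
    for (UIView *pageView in scroller.subviews)
    {
        if (ABS(pageView.tag - currentPage) > 1)
        {
            [pageView removeFromSuperview];
        }
    }
}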
If you don't use ARC then add this:
[backImageView autorelease];
[frontImageView autorelease];
I have an iOS app that loads a set of images and their durations into arrays, then uses a timer to display the images like so:
- (void)fireTimer:(NSTimer *)theTimer
{
    self.image = [frames objectAtIndex:currentFrame];
    NSTimeInterval time = [[durations objectAtIndex:currentFrame] floatValue];
    self.timer = [NSTimer scheduledTimerWithTimeInterval:time target:self selector:@selector(fireTimer:) userInfo:nil repeats:NO];
    currentFrame++;
    if (currentFrame >= [frames count])
    {
        currentFrame = 0;
        [timer invalidate];
    }
}
To start the animation I call fireTimer and cycle through the images; when all the images have been shown I call [timer invalidate] to stop the animation. I cannot use UIImageView's startAnimating because I need a different duration for each image.
Right now I am NOT doing this on a background thread, so the animation is choppy because other processing happens while the images animate.
What is the best way to animate this in the background? Can I simply put the call to fireTimer in a block?
I know this may not be the best way to animate images on iOS, but I do not want to do a lot of refactoring on the code right now.
Thanks for any suggestions or examples of a better solution!
I would suggest you use UIImageView to animate your images: it is made for this task. If, as you say, some images need to remain on screen longer than others, you can just include them in the array multiple times. Say image2 needs to be displayed twice as long as image1 and image3; then initialize the animationImages array on your UIImageView as @[image1, image2, image2, image3].
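A minimal sketch of that approach (the frame images and per-frame time are illustrative):

UIImageView *imageView = [[UIImageView alloc] initWithFrame:frame];
imageView.animationImages = @[image1, image2, image2, image3]; // image2 gets two slots
imageView.animationDuration = 4 * 0.25; // total = frame count * per-frame time
imageView.animationRepeatCount = 1;     // 0 would repeat forever
[imageView startAnimating];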
Instead of using a timer, you could use performSelector:withObject:afterDelay:. Since withObject: takes an object, the index needs to be boxed in an NSNumber:
-(void)loopBackground:(NSNumber *)indexNumber
{
    NSUInteger index = [indexNumber unsignedIntegerValue];
    if (index < [self.durations count]) {
        self.image = [self.frames objectAtIndex:index];
        [self performSelector:@selector(loopBackground:)
                   withObject:@(index + 1)
                   afterDelay:[[self.durations objectAtIndex:index] floatValue]];
    }
}
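You would then kick the loop off with something like [self loopBackground:@0], using the NSNumber-boxed index shown above.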