I have a question about the underlying implementation of Core Image. I'm compositing some CIImages on top of each other. Not that many, about 5 or 6 of them. To save memory and improve performance, they all have their transparent pixels cropped. They are then drawn at offsets, so I'm using a CIAffineTransform filter to position them.
CIFilter *moveFilter = [CIFilter filterWithName:@"CIAffineTransform"];
My question is: does moveFilter.outputImage REALLY generate a new image, or does it just produce "render settings" that are later used to draw the actual image?
(If it is the former, that would mean I'm effectively rendering the image twice. That would be a huge flaw in the Core Image API, and it's hard to believe Apple designed it that way.)
Filters do not generate anything. outputImage does not generate anything. CIImage does not generate anything. All you are doing is constructing a chain of filters.
Rendering to a bitmap doesn't happen until you explicitly ask for it to happen. You do this in one of two ways:
Call CIContext createCGImage:fromRect:.
Actually draw a CIImage-based UIImage into a graphics context.
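For illustration, here is a minimal sketch of the first route, assuming a placeholder baseImage and offset (not from the question): the filter chain is just a recipe, and pixels are only produced by the createCGImage:fromRect: call at the end.

// Sketch only; `baseImage` and `offset` are placeholders for the caller's values.
CGImageRef renderPositionedImage(CIImage *baseImage, CGPoint offset)
{
    CIFilter *moveFilter = [CIFilter filterWithName:@"CIAffineTransform"];
    [moveFilter setValue:baseImage forKey:kCIInputImageKey];
    [moveFilter setValue:[NSValue valueWithCGAffineTransform:CGAffineTransformMakeTranslation(offset.x, offset.y)]
                  forKey:@"inputTransform"];
    CIImage *positioned = moveFilter.outputImage;   // still just a recipe: no pixels exist yet

    CIContext *context = [CIContext contextWithOptions:nil];
    // Only this call actually renders; the caller owns the returned CGImage.
    return [context createCGImage:positioned fromRect:positioned.extent];
}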
Related
I use Core Graphics to draw in a UIView, and cache the contents in a CGLayer.
One of its functions needs to duplicate a sub-area of the CGLayer and move it to a new location within the same layer. The traditional trick was to do this by drawing the layer into its own context.
However, the behavior of this trick is "undefined" according to the documentation, and it stopped working in iOS 12.
Is there an alternative way to do this efficiently? (I have tried drawing the sub-area into a CGImage and then drawing the resulting image back into the layer, but this approach seems rather slow and not very memory efficient.)
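For reference, a minimal sketch of that CGImage round-trip, with layer, srcRect (in the layer's bottom-left-origin coordinates) and destPoint standing in for the question's own values:

// Sketch only: copy `srcRect` of `layer` to `destPoint` within the same layer.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef tmp = CGBitmapContextCreate(NULL,
                                         (size_t)srcRect.size.width,
                                         (size_t)srcRect.size.height,
                                         8, 0, colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Render just the source sub-area into the temporary bitmap context.
CGContextDrawLayerAtPoint(tmp, CGPointMake(-srcRect.origin.x, -srcRect.origin.y), layer);
CGImageRef copied = CGBitmapContextCreateImage(tmp);

// Stamp the copy back into the layer's own context at the new position.
CGContextRef layerCtx = CGLayerGetContext(layer);
CGContextDrawImage(layerCtx,
                   CGRectMake(destPoint.x, destPoint.y, srcRect.size.width, srcRect.size.height),
                   copied);

CGImageRelease(copied);
CGContextRelease(tmp);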
I'm considering building an app that would make heavy use of a flood fill / paint bucket feature. The images I'd be coloring are simply like coloring book pages: white background, black borders. I'm debating which is better to use: UIImage (by manipulating pixel data) or drawing the images with Core Graphics and changing the fill color on touch.
With UIImage, I'm unable to account for retina images properly; it destroys the image when I write the context into a new UIImage, but I can probably figure it out. I'm open to tips, though...
With Core Graphics, I have no idea how to calculate which shape to fill when a user touches an area, or how to actually fill that area. I've searched but haven't turned up anything successful.
Overall, I believe the optimal solution is using CoreGraphics, since it'll be lighter overall and I won't have to keep several copies of the same image for different sizes.
Thoughts? Go easy on me! It's my first app and first SO question ;)
I'd suggest using Core Graphics.
Instead of images, define the shapes using CGPath or UIBezierPath, and use Core Graphics to stroke and/or fill them. Filling a shape is then as easy as switching the drawing mode from stroke-only to fill-and-stroke.
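A minimal sketch of that switch (the shapeIsFilled flag and the oval path are hypothetical, not from the answer):

// Inside drawRect: of the coloring view (sketch only).
UIBezierPath *shape = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(20, 20, 120, 160)];
shape.lineWidth = 3.0;

[[UIColor blackColor] setStroke];
if (self.shapeIsFilled) {          // hypothetical flag set when the user taps inside the shape
    [[UIColor redColor] setFill];
    [shape fill];                  // fill first so the black border stays crisp on top
}
[shape stroke];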
Creating even more complex shapes is made much easier with the "PaintCode" app (which lets you draw and creates the path code for you).
As your first app, I would suggest something with a little less custom graphics fiddling, though.
I'm confused as to why so much converting between image formats is needed in iOS. For example, if I load a jpg into a UIImage and then want to do face detection on it, I need to create a CIImage to pass to the CIDetector. Doesn't this represent a hit in both memory and performance?
Is this some legacy thing between Core Graphics, Core Image and UIKit (and probably OpenGL ES, but I don't work with that)? Is the hit trivial overall?
I'll do what I need to do, but I'd like to understand more about why this is needed. Also, I've sometimes run into issues doing conversions and gotten tangled up in the differences between the formats.
Update
OK, so I just got dinged again by my confusion over these formats (or the confusion OF these formats...). Wasted half an hour. Here is what I was doing:
Testing for faces in a local image, I created the needed CIImage with:
CIImage *ciImage = [image CIImage];
and was not getting any features back, no matter what orientation I passed in. I know this particular image has worked with CIDetectorTypeFace before, and that I have run into trouble with the CIImage format. I then tried creating the CIImage like this:
CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
and face detection works fine. Arrgh! I made sure with [image CIImage] that the resulting CIImage was not nil. So I'm confused. The first approach just gets a pointer, while the second creates a new CIImage. Does that make the difference?
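For reference, a typical CIDetector call consuming that CIImage looks roughly like this (the accuracy and orientation options here are illustrative, not from the original post):

CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
NSArray *features = [detector featuresInImage:ciImage
                                      options:@{CIDetectorImageOrientation : @1}];
for (CIFaceFeature *face in features) {
    NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
}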
Digging into the UIImage.h file I see the following:
// returns underlying CGImageRef or nil if CIImage based
@property(nonatomic,readonly) CGImageRef CGImage;
// returns underlying CIImage or nil if CGImageRef based
@property(nonatomic,readonly) CIImage *CIImage;
So I guess that is the key. Developer beware: test for nil...
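Given that nil behaviour, a small defensive helper (the helper name is mine, not part of UIKit) can cover both cases:

static CIImage *CIImageFromUIImage(UIImage *image)
{
    if (image.CIImage) {
        return image.CIImage;                             // UIImage was created from a CIImage
    }
    if (image.CGImage) {
        return [CIImage imageWithCGImage:image.CGImage];  // wrap the CGImage-backed case
    }
    return nil;
}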
The reason lies in the design. UIKit, Core Graphics and Core Image each do such fundamentally different things that there can't be one "grand central unified image format". These frameworks do cooperate well, and the conversions should be as optimized and as fast as possible, but image processing is always a relatively computationally expensive operation.
I have tried the popular UIImage+Resize category, with varying interpolation settings. I have tried scaling via Core Graphics methods and via CIFilters. However, I can never get a downsized image that doesn't look either slightly soft in focus or full of jagged artifacts. Is there another solution, or a third-party library, that would let me get a very crisp image?
It must be possible on the iPhone, because for instance the Photos app will show a crisp image even when pinching to scale it down.
You said CG, but did not specify your approach.
Using drawing or bitmap context:
CGContextSetInterpolationQuality(gtx, kCGInterpolationHigh);
CGContextSetShouldAntialias(gtx, true); // default varies by context type
CGContextDrawImage(gtx, rect, image);
and make sure your views and their layers are not resizing the image again. I've had good results with this. It's possible other views are affecting your view or the context. If it does not look good, try it in isolation to sanity check whether or not something is distorting your view/image.
If you are drawing to a bitmap, then you create the bitmap with the target dimensions, then draw to that.
Ideally, you will maintain the aspect ratio.
Also note that this can be quite CPU intensive -- repeatedly drawing/scaling in HQ will cost a lot of time, so you may want to create a resized copy instead (using CGBitmapContext).
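A hedged sketch of that resized-copy idea, drawing once into a CGBitmapContext at the target size (the function name and format choices here are assumptions):

// Returns a new CGImage scaled to the target size; caller releases with CGImageRelease.
CGImageRef CreateScaledImage(CGImageRef image, size_t targetWidth, size_t targetHeight)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, targetWidth, targetHeight,
                                             8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!ctx) return NULL;

    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextSetShouldAntialias(ctx, true);
    CGContextDrawImage(ctx, CGRectMake(0, 0, targetWidth, targetHeight), image);

    CGImageRef scaled = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return scaled;
}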
Here is the routine I wrote to do this. There is a bit of soft focus, though depending on how far you scale the original image it's not too bad. I'm scaling programmatic screenshot images.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
CGContextSetInterpolationQuality is what you are looking for.
You should try these category additions:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
When an image is scaled down, it is often a good idea to apply some sharpening.
The problem is, Core Image on iOS does not (yet) implement the sharpening filters (CISharpenLuminance, CIUnsharpMask), so you would have to roll your own. Or nag Apple till they implement these filters on iOS, too.
However, sharpen luminance and unsharp mask are fairly advanced filters, and in previous projects I have found that even a simple 3x3 kernel produces clearly visible and satisfactory results.
Hence, if you feel like working at the pixel level, you could get the image data out of a graphics context, bit-mask your way to the R, G and B values, and code graphics like it's 1999. It will be a bit like reinventing the wheel, though.
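For illustration only, a minimal 1999-style sketch of such a 3x3 sharpen pass over an RGBA8888 buffer; the kernel and the edge clamping are standard choices, not something from the answer:

// Sharpen `src` (width*height*4 bytes, RGBA8888) into `dst` of the same size.
static void SharpenRGBA8888(const uint8_t *src, uint8_t *dst, long width, long height)
{
    const int kernel[3][3] = { {0, -1, 0}, {-1, 5, -1}, {0, -1, 0} };
    for (long y = 0; y < height; y++) {
        for (long x = 0; x < width; x++) {
            for (int c = 0; c < 3; c++) {            // R, G, B; alpha is copied below
                int acc = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        long sy = MIN(height - 1, MAX(0, y + ky)); // clamp at the edges
                        long sx = MIN(width - 1, MAX(0, x + kx));
                        acc += kernel[ky + 1][kx + 1] * src[(sy * width + sx) * 4 + c];
                    }
                }
                dst[(y * width + x) * 4 + c] = (uint8_t)MIN(255, MAX(0, acc));
            }
            dst[(y * width + x) * 4 + 3] = src[(y * width + x) * 4 + 3];
        }
    }
}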
Maybe there are some standard graphics libraries around that can do this, too (ImageMagick?)
It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by Core Graphics (i.e. with CGBitmapContextCreateImage) than when drawing the CGImage that backs a UIImage. See this testing method:
- (void)showStrangePerformanceOfCGContextDrawImage
{
    /// Setup: load an image and start a context.
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);

    /// Why is this SO MUCH faster...
    NSDate *startingTimeForUIImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImage.CGImage); // draw the existing image into the context using the UIImage's backing
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);

    /// Create a new image from the context to use this time in CGContextDrawImage:
    CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);

    /// This is WAY slower, but why?? Using a pure CGImageRef (as opposed to one behind a UIImage) seems like it should be faster, but AT LEAST it should be the same speed!?
    NSDate *startingTimeForNakedGImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImageConverted);
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedGImageDrawing]);
}
So I guess the question is: #1, what may be causing this, and #2, is there a way around it, i.e. other ways to create a CGImageRef that may be faster? I realize I could convert everything to UIImages first, but that is such an ugly solution. I already have the CGContextRef sitting there.
UPDATE: This does not necessarily seem to be true when drawing small images. That may be a clue: the problem is amplified when large images (i.e. full-size camera pics) are used. 640x480 seems to be pretty similar in terms of execution time with either method.
UPDATE 2: OK, so I've discovered something new. It's actually NOT the backing of the CGImage that is changing the performance. I can flip-flop the order of the two steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems whichever you perform second will suffer from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method will be fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
UPDATE 3: Spoke too soon. The previous update holds true for images at 2048x2048, but stepping up to 1936x2592 (camera size) the naked CGImage method is still way slower, regardless of order of operations or memory situation. Maybe there are some CG internal limits that make a 16MB image efficient whereas the 21MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
UPDATE 4: I thought this might have to do with some memory caching thing, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile:] or with [UIImage imageNamed:].
UPDATE 5 (Day 2): After creating more questions than were answered yesterday, I have something solid today. What I can say for sure is the following:
The CGImages behind a UIImage don't use alpha (kCGImageAlphaNoneSkipLast). I thought that maybe they were faster to draw because my context WAS using alpha. So I changed the context to use kCGImageAlphaNoneSkipLast. This makes the drawing MUCH faster, UNLESS:
Drawing into a CGContextRef with a UIImage FIRST makes ALL subsequent image drawing slow.
I proved this by 1) first creating a non-alpha context (1936x2592), 2) filling it with randomly colored 2x2 squares, 3) full-frame drawing a CGImage into that context, which was FAST (.17 seconds), and 4) repeating the experiment, but filling the context with a drawn CGImage backing a UIImage. Subsequent full-frame image drawing was 6+ seconds. SLOWWWWW.
Somehow drawing into a context with a (Large) UIImage drastically slows all subsequent drawing into that context.
Well, after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above which was taking 6+ seconds now takes .1 seconds. YES. Here's what I discovered:
Homogenize your contexts and images with a single pixel format! The root of the question I asked boiled down to the fact that the CGImages inside a UIImage were using THE SAME PIXEL FORMAT as my context, and were therefore fast, while the standalone CGImages were in a different format and therefore slow. Inspect your images with CGImageGetAlphaInfo to see which pixel format they use. I'm using kCGImageAlphaNoneSkipLast EVERYWHERE now, as I don't need to work with alpha. If you don't use the same pixel format everywhere, Quartz will be forced to perform expensive pixel conversions for EACH pixel when drawing an image into a context. = SLOW
USE CGLayers! These make offscreen-drawing performance much better. How this works is basically as follows. 1) Create a CGLayer from the context using CGLayerCreateWithContext. 2) Do any drawing/setting of drawing properties on THIS LAYER'S CONTEXT, which is obtained with CGLayerGetContext. READ any pixels or information from the ORIGINAL context. 3) When done, "stamp" this CGLayer back onto the original context using CGContextDrawLayerAtPoint (a sketch follows the points below). This is FAST, as long as you keep in mind:
1) Release any CGImages created from a context (i.e. those created with CGBitmapContextCreateImage) BEFORE "stamping" your layer back into the CGContextRef using CGContextDrawLayerAtPoint. This gives a 3-4x speed increase when drawing that layer. 2) Keep your pixel format the same everywhere!! 3) Clean up CG objects AS SOON as you can. Things hanging around in memory seem to create strange slowdowns, probably because there are callbacks or checks associated with these strong references. Just a guess, but I can say that CLEANING UP MEMORY ASAP helps performance immensely.
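A hedged sketch of that workflow (baseCtx stands in for an existing bitmap context, assumed here to use the kCGImageAlphaNoneSkipLast format from point 1; the size and fill are arbitrary examples):

CGSize layerSize = CGSizeMake(1024, 1024);                 // arbitrary example size
CGLayerRef layer = CGLayerCreateWithContext(baseCtx, layerSize, NULL);
CGContextRef layerCtx = CGLayerGetContext(layer);          // shares the base context's pixel format

// 1) Do the offscreen drawing on the layer's own context.
CGContextSetFillColorWithColor(layerCtx, [UIColor redColor].CGColor);
CGContextFillRect(layerCtx, CGRectMake(0, 0, 100, 100));

// 2) Release any CGImages created from the base context BEFORE stamping,
//    e.g. CGImageRelease(scratchImage);

// 3) Stamp the layer back onto the original context, then clean up promptly.
CGContextDrawLayerAtPoint(baseCtx, CGPointZero, layer);
CGLayerRelease(layer);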
I had a similar problem. My application has to redraw a picture almost as large as the screen. The problem came down to drawing two images of the same resolution as fast as possible, neither rotated nor flipped, but scaled and positioned in different places on the screen each time. In the end, I was able to get ~15-20 FPS on iPad 1 and ~20-25 FPS on iPad 4. So... hope this helps someone:
Exactly as typewriter said, you have to use the same pixel format. Using one with AlphaNone gives a speed boost. But even more importantly, in my case the argb32_image call made numerous calls converting pixels from ARGB to BGRA. So the best bitmapInfo value for me was (at the time; there is a chance Apple will change something here in the future):
const CGBitmapInfo g_bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGContextDrawImage may work faster if the rectangle argument is made integral (via CGRectIntegral). This seems to have more effect when the image is scaled by a factor close to 1.
Using layers actually slowed things down for me. Probably something has changed in some internal calls since 2011.
Setting the interpolation quality for the context lower than the default (via CGContextSetInterpolationQuality) is important. I would recommend using (IS_RETINA_DISPLAY ? kCGInterpolationNone : kCGInterpolationLow). The IS_RETINA_DISPLAY macro is taken from here.
Make sure you get a CGColorSpaceRef from CGColorSpaceCreateDeviceRGB() or the like when creating the context. Some performance issues have been reported for using a fixed color space instead of requesting the device's (see the sketch at the end of this answer).
Inheriting the view class from UIImageView and simply setting self.image to the image created from the context proved useful for me. However, read about using UIImageView first if you want to do this, because it requires some changes in code logic (drawRect: isn't called anymore).
And if you can avoid scaling your image at the time of actual drawing, try to do so. Drawing a non-scaled image is significantly faster; unfortunately, for me that was not an option.
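A hedged sketch combining the context-related tips above (the size is an example, someCGImage is a placeholder, and the bitmapInfo value is the one reported in this answer; if CGBitmapContextCreate warns about an unsupported parameter combination on your OS version, fall back to kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();   // device color space, as recommended
CGContextRef ctx = CGBitmapContextCreate(NULL, 1024, 768, 8, 0, colorSpace,
                                         kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(colorSpace);

// Lower interpolation quality than the default.
CGContextSetInterpolationQuality(ctx, kCGInterpolationLow);

// Integral destination rects avoid sub-pixel work when the scale factor is close to 1.
CGRect dest = CGRectIntegral(CGRectMake(10.3, 20.7, 640.2, 480.9));
CGContextDrawImage(ctx, dest, someCGImage);                   // someCGImage: placeholder CGImageRef

CGContextRelease(ctx);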