I have a custom UIView that I want to render as a UIImage. The custom view is a subclass of UIImageView.
Inside this view, I am rendering some UI elements (drawing a bunch of circles over the image). The number of circles added can go up to the order of thousands.
I am using this simple code snippet to render the view as a UIImage:
// Create the UIImage (this code runs on the main thread, inside an @autoreleasepool block)
UIGraphicsBeginImageContextWithOptions(viewToSave.image.size, viewToSave.opaque, 1.0);
[viewToSave.layer renderInContext:UIGraphicsGetCurrentContext()];
imageToSave = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Since I'm rendering some stuff inside the .layer of the UIView,
// I don't think I can use "drawViewHierarchyInRect:afterScreenUpdates:"
Here are the memory allocations taken from Instruments, from an example with ~3000 circles added as sub-views:
Now here's the strange part: this runs fine, and I can render the image multiple times consecutively and save it to the image gallery, on devices like the iPhone 5, iPhone 5s, iPhone 6s, iPad Air 2, and iPad Mini 4... But the same code triggers a memory warning on the iPhone X and eventually crashes the application...
Unfortunately, I do not have access to an iPhone X and the person who reported this doesn't have access to a Mac, so I cannot investigate deeper.
I really don't know if I am doing anything wrong... Are you aware of anything different about the iPhone X? I've been struggling with this issue for quite a while...
I suspect the problem has to do with how CALayer's renderInContext: handles the drawing of thousands of views in a context that requires them to be scaled up. Would it be possible to try rendering the sub-layers yourself? Then compare in Instruments to verify whether it works better.
UIImage *imageToSave = [self imageFromSubLayers:viewToSave];
- (UIImage *)imageFromSubLayers:(UIImageView *)imageView {
CGSize size = imageView.image.size;
UIGraphicsBeginImageContextWithOptions(size, YES, 0.0); // opaque; scale 0.0 means the device's screen scale
CGContextRef context = UIGraphicsGetCurrentContext();
for (CALayer *layer in imageView.layer.sublayers)
[layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
As always, the answer lies in the smallest (and not included in the question) detail...
What I really didn't consider (and found out after quite a while) is that the iPhone X has a scale factor of @3x. This is the difference between the iPhone X and all the other devices that were running the code just fine...
At some point in my code, I was setting the .contentScaleFactor of the subviews to be equal to [UIScreen mainScreen].scale. This means that on higher-end devices, the image quality should be better.
For the iPhone X, [UIScreen mainScreen].scale returns 3.
For all of the other devices I tested with, [UIScreen mainScreen].scale returns 2. This means that on the iPhone X, the amount of memory used to render the image is much higher.
Fun fact number two: from another useful SO post, I found out that on the iPhone X, if you try to allocate more than 50% of its total amount of memory (1392 MB), the app crashes. For the iPhone 6s, on the other hand, the threshold is higher: 68% (1396 MB). This means that on some older devices you have more memory to work with than on the iPhone X.
Sorry for misleading; it was an honest mistake on my part. Thank you all for your answers!
I recently wrote a method in an app I'm making to convert UIViews into UIImages (so I could display gradients on progress views/tab bars). I ended up settling on the following code; I'm using it to render tab bar buttons, and it works on all devices, including the X.
Objective-C:
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:gradientView.bounds.size];
UIImage *gradientImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
[gradientView drawViewHierarchyInRect:gradientView.bounds afterScreenUpdates:YES];
}];
Swift 4:
let renderer = UIGraphicsImageRenderer(size: gradientView.bounds.size)
let image = renderer.image { ctx in
gradientView.drawHierarchy(in: gradientView.bounds, afterScreenUpdates: true)
}
I used this code in a sample project I wrote up; here are links to the project files:
Swift,
Objective-C
As you will see, both projects run perfectly on the iPhone X!
I know the following sounds weird, but try to make the target image one pixel larger than the one you are drawing. This solved it for me (my particular problem: [CALayer renderInContext:] crashes on the iPhone X).
In code:
UIGraphicsBeginImageContextWithOptions(
CGSizeMake(viewToSave.image.size.width + 1,
viewToSave.image.size.height + 1),
viewToSave.opaque,
1.0
);
I am using Apple's CIFunHouse demo in my project to apply filter effects. When I try to take a snapshot of the GLKView on an iPad Air by calling:
UIImage* imageCaptured = [(GLKView*)_videoPreviewView snapshot];
the generated UIImage has distortion (see image).
Any ideas on how to fix this?
Thanks in advance.
I am using captureStillImageAsynchronouslyFromConnection: to get an image sample buffer from the camera. After that I run the image through OpenGL (on the GPU) to apply filters, but unfortunately the iPhone 4 camera's output is bigger than the GPU's maximum texture size on that device.
Brad Larson's explanation: the iPhone 4 is a special case, in that it can take photos large enough (2592x1936) that they just exceed the maximum texture size of the GPU on those devices (2048x2048). This causes the processing to fail, currently. All other devices either don't take photos that large, or support larger texture sizes (the iPad 2, iPad 3, and iPhone 4S support these larger sizes).
So the code I have scales down the image, but I have to create a CMSampleBufferRef after resizing (on the iPhone 4 only) to cheat the capture process... Does anyone know how to get a CMSampleBufferRef from a CIImage?
Objective-C
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)
options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNull null], kCIImageColorSpace, nil]];
ciImage = [[ciImage imageByApplyingTransform:myScaleTransform] imageByCroppingToRect:myRect];
I'm using rather straightforward code to display a zoomable PDF in a scrollview, and it has been working beautifully on the iPad 2 and the original iPad. But it's staggeringly slow on the iPad 3. I know I'm pushing more pixels, but the rendering performance is simply unacceptable.
In iOS 5.0 and later, the tileSize property is arbitrarily clamped at 1024, which means tiles appear at half that size on the retina display. Has anyone found a way to overcome this limitation?
Otherwise, has anyone found a way to improve the speed of the CATiledLayer on the iPad 3?
Have you tried setting shouldRasterize to YES on the layer?
Did you run a time profiler on these draws and did you rule out the possibility of redundant draws?
I've had some weird double drawing, which was easily found using:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
NSLog(@"draw %@", NSStringFromCGRect(CGContextGetClipBoundingBox(context)));
// draw pdf
}
There's also a variety of settings to play with:
tiledLayer.levelsOfDetail = 2;
tiledLayer.levelsOfDetailBias = 4;
tiledLayer.tileSize = self.bounds.size;
CGContextSetInterpolationQuality(context, kCGInterpolationLow);
CGContextSetRenderingIntent(context, kCGRenderingIntentDefault);
self.contentScaleFactor = 1.0;
Hello people of the wasteland :),
Brief: There is a problem with GL_RGB internal texture format on iOS platform.
In my application I try to save some memory by using GL_RGB instead of GL_RGBA as an internal format.
I'm using the following code snippet to achieve this; nothing else has changed.
glTexImage2D(_textureTargetType,
0,
GL_RGB, // pixel internalFormat
texWidth, // image width
texHeight, // image height
0, // border
GL_RGBA, // pixel format
GL_UNSIGNED_BYTE, // pixel data type
bitmapData);
On Mac OS these changes went smoothly, no problems. But on iOS, particularly 4.3 (OpenGL ES 2.0), it gives me GL_INVALID_OPERATION every time I try to render textured polygons with this texture. Since nothing except this format has changed, I think the problem is an incompatibility of the GL_RGB internal format with OpenGL ES 2.0. This is just my guess; I'm no guru.
This doesn't work in the simulator or on a 4th-generation iPod touch.
Thank you for any reasonable suggestion.
According to the documentation, "internalformat must match format. No conversion between formats is supported during texture image processing." See the Khronos website. Desktop OpenGL does not have this limitation, so this code works on Mac OS, but not with the more limited OpenGL ES on iOS devices.