Preventing antialiasing from CALayer renderInContext - iOS

I have a very simple UIView containing a few black and white UIImageViews. If I take a screenshot via the physical buttons on the device, the resulting image looks exactly like what I see (as expected); if I examine the image at the pixel level, it is only black and white.
However, if I use the following snippet of code to perform the same action programmatically, the resulting image has what appears to be antialiasing applied: all the black pixels are surrounded by faint grey halos. There is no grey in my original scene - it's pure black and white - and the dimensions of the "screenshot" image are the same as the one I am generating programmatically, but I cannot figure out where the grey haloing is coming from.
UIView *printView = fullView;
// A scale of 0.0 means "use the device's screen scale" (2.0 on retina).
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[printView.layer renderInContext:ctx];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
UIGraphicsEndImageContext();
I've tried adding the following before the call to renderInContext in an attempt to prevent the antialiasing, but it has no noticeable effect:
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
Here is a sample of the two different outputs - the left side is what my code produces and the right side is a normal iOS screenshot:
Since I am trying to send the output of renderInContext to a monochrome printer, the grey pixels cause some ugly artifacts due to the printer's dithering algorithm.
So, how can I get renderInContext to produce the same pixel-level output as a real device screenshot - i.e. just black and white, as in my original scene?

It turns out the problem was related to the resolution of the underlying UIImage used by the UIImageView. The UIImage was built around a CGImage created with a data provider. The CGImage dimensions were specified in the same units as the parent UIImageView; however, I am using an iOS device with a retina display.
Because the CGImage dimensions were specified in non-retina size, renderInContext was upscaling the CGImage, and apparently this upscaling behaves differently from what the actual screen rendering does. (For some reason the real screen rendering upscaled without adding any grey pixels.)
To fix this, I created my CGImage with double the dimensions of the UIImageView, and my call to renderInContext now produces a much better black and white image. There are still a few grey pixels in some of the white areas, but it is a vast improvement over the original problem.
I finally figured this out by changing the call to UIGraphicsBeginImageContextWithOptions() to force a scale of 1.0, and noticing that the black pixels in the UIImageView no longer had a grey halo. When I forced UIGraphicsBeginImageContextWithOptions() to a scale factor of 2.0 (which is what it was defaulting to because of the retina display), the grey haloing appeared.
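A minimal sketch of the fix, assuming the black and white content is drawn by hand into a grayscale bitmap context (imageView and the elided drawing code are placeholders, not the original poster's exact setup):
// Create the backing CGImage at the screen's pixel resolution (2x on
// retina) so renderInContext never has to upscale it.
CGFloat screenScale = [UIScreen mainScreen].scale;
size_t pixelWidth  = (size_t)(imageView.bounds.size.width  * screenScale);
size_t pixelHeight = (size_t)(imageView.bounds.size.height * screenScale);
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef bitmap = CGBitmapContextCreate(NULL, pixelWidth, pixelHeight,
                                            8, 0, gray, (CGBitmapInfo)kCGImageAlphaNone);
// ... draw the pure black and white content into `bitmap` here ...
CGImageRef cgImage = CGBitmapContextCreateImage(bitmap);
// Passing the scale tells UIKit the image is already retina-sized.
imageView.image = [UIImage imageWithCGImage:cgImage
                                      scale:screenScale
                                orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
CGContextRelease(bitmap);
CGColorSpaceRelease(gray);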

I would try setting printView.layer.magnificationFilter and printView.layer.minificationFilter to kCAFilterNearest, so the layer is resampled without interpolation.
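For example, a minimal sketch (printView is the view from the question):
printView.layer.magnificationFilter = kCAFilterNearest;
printView.layer.minificationFilter = kCAFilterNearest;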
Are the images displayed in UIImageView instances? Is printView their superview?

Related

Objective-C How does snapchat make the text on top of an image/video so sharp and not pixelated?

My app allows users to place text on top of images, Snapchat-style, and then save the image to their device. I simply add the text view on top of the image and capture it using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text from a Snapchat image, it is significantly different. Snapchat's text on top of the image is significantly sharper than mine; mine looks very pixelated. Also, I am not compressing the image at all, just saving it as-is using ALAssetLibrary.
Thank you
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
which will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen is".
If your imageView is scaled to the size of the screen, then I think your jpeg will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
UIGraphicsBeginImageContext(imageSize);
// Draw the full-resolution background image first.
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
// Scale the CTM so the screen-sized overlay fills the larger offscreen image.
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the "scale" value so that the textOverlay view, which is probably at screen size, is scaled up to the offscreen image size.
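For instance, the scale could be computed like this (a sketch using the imageSize and textOverlay names from the snippet above):
// Ratio of the offscreen image size to the on-screen overlay size.
CGFloat scale = imageSize.width / textOverlay.bounds.size.width;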
Alternatively, and probably simpler, you can start with a larger UIImageView but put it within another UIView that scales it to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work at whatever resolution you choose for the UIImageView.

Force NSView drawing to behave as if display is retina

I've got a custom NSView which draws a chart in my app. I am generating a PDF which includes the image. In iOS I do this using code like this:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
[self drawRect:self.frame];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In iOS, the displays are retina, which means the image is very high resolution. However, I'm now trying to do this in my Mac app, and the quality of the image is poor because non-retina Macs will generate a non-high-res version of the image.
I would like to force my NSView to behave as if it were retina when I'm using it to generate an image. That way, when I put the image into my PDF, it'll be much higher resolution. Right now, it's very blurry and unattractive.
Even a Retina bitmap will still be blurry and unattractive when scaled up enough. Assuming the view draws its contents in drawRect:, rather than trying to render the view into a PDF at a fixed resolution, a better approach is to draw directly into a PDF graphics context. This will produce a nice scalable PDF. The drawing code will need to be factored so it can be used for both the view’s drawRect: and the PDF.
Also, the iOS documentation states you should never call drawRect: yourself. Call renderInContext: on the view‘s layer, or use the newer drawViewHierarchyInRect:afterScreenUpdates:.
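On the iOS side, a minimal sketch of that snapshot approach, replacing the direct drawRect: call (self is assumed to be the chart view):
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
// Renders the view hierarchy the way the screen does, at screen scale.
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();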
You can call -[NSView dataWithPDFInsideRect:] to get PDF data from the drawing in a view. For example:
NSData* data = [someView dataWithPDFInsideRect:someView.bounds];
[data writeToFile:@"/tmp/foo.pdf" atomically:YES];
Any vector drawing (e.g. text, Bezier paths, etc.) that your view and its subviews do will end up as scalable vector graphics in the PDF.

iOS Drawing image with objects (arrows) and saving to disk

I am creating an iOS drawing application that allows the user to place arrows on the page and position/stretch/rotate them. When the user saves the image, I must modify the background image with the PNG arrows at the exact coordinates they were dragged to.
The three drag-able parts will be UIImages with capInsets to keep the arrows looking normal when stretched. The canvas UIImageView will have the same aspect as the actual image, ensuring no black space is visible.
My question is this: after a drag-able part has been stretched, how do I save the result of the UIImageView (the smaller drag-able part) to disk/memory for later compositing onto the background image (the background UIImageView canvas)? It is important that the image I use to modify the canvas looks exactly the same as the UIImageView drag-able part that represented it - that is, it must retain exactly how the image is displayed, including the cap insets.
I am ignoring rotation of the drag-able parts for now because I suspect that would be the easy part.
Try using this code to get a layer screenshot from a desired view:
- (UIImage *)imageFromLayer:(CALayer *)layer {
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
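Note that UIGraphicsBeginImageContext creates a 1x context, so the snapshot will be non-retina. As discussed earlier in this thread, you may want a variant with an explicit scale (a sketch under that assumption):
- (UIImage *)retinaImageFromLayer:(CALayer *)layer {
    // A scale of 0.0 uses the main screen's scale, so the bitmap is
    // created at full retina resolution.
    UIGraphicsBeginImageContextWithOptions(layer.frame.size, NO, 0.0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}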

UIImage rendering not clear on iPad Mini

The following code block is used in my application to take a screenshot of the current screen of an iPad mini (768 x 1024):
UIImage *img;
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
In a different view controller, I present a UIScrollView with a width of 540 and a height of 290. I display the screen-capture UIImage in a UIImageView which I create programmatically with initWithFrame:, using a rectangle 250 wide by 250 high. The content size of the scroll view is 768 by 250.
Now, running the application, I display four rectangles and screenshot the screen using the above block of code. Transitioning to the UIScrollView, the image is not clear (and by "not clear" I mean some rectangles are missing sides while others are thicker). Is there a way to display the image more clearly? I know the image has to be scaled down from the original 768 by 1024 to 250 by 250. Could this be the problem? If so, what would be the best fix?
Edit:
Above is a screenshot of the image I want to capture.
Below is the UIImage in a UIImageView within a UIScrollView:
Cast each coordinate to an int, or use CGRectIntegral to do that directly on a CGRect. Fractional coordinates force antialiasing and make images blurry.
Try changing the content mode of your UIImageViews. If you use UIViewContentModeScaleAspectFill, you shouldn't see any extra space around the edges.
Update: From the screenshots you posted, it looks like this is just an effect of the built-in downscaling in UIKit. Try manually downscaling the image to fit using Core Graphics first. Alternatively, you might want to use something like the CILanczosScaleTransform Core Image filter (iOS 6+).
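A minimal sketch of the manual downscale, assuming fullImage is the 768 x 1024 screenshot and targetSize is the 250 x 250 display size (both names are placeholders):
// Redraw the large screenshot into a context of the final display size,
// so UIKit doesn't have to downscale it on the fly.
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
[fullImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();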

renderInContext: producing an image with blurry text

I am prerendering a composited image with a couple different UIImageViews and UILabels to speed up scrolling in a large tableview. Unfortunately, the main UILabel is looking a little blurry compared to other UILabels on the same view.
The black letters "PLoS ONE" are in a UILabel, and they look much blurrier than the words "Medical" or "Medicine". The logo "PLoS one" is probably similarly being blurred, but it's not as noticeable as the crisp text.
The entire magazine cover is a single UIImage assigned to a UIButton.
This is the code I'm using to draw the image. The magazineView is a rectangle that's 125 x 151 pixels.
I have tried different scaling qualities, but that has not changed anything. And it shouldn't, since the scaling shouldn't be different at all. The UIButton I'm assigning this image to is the exact same size as the magazineView.
UIGraphicsBeginImageContextWithOptions(magazineView.bounds.size, NO, 0.0);
[magazineView.layer renderInContext:UIGraphicsGetCurrentContext()];
[coverImage release];
coverImage = UIGraphicsGetImageFromCurrentImageContext();
[coverImage retain];
UIGraphicsEndImageContext();
Any ideas why it's blurry?
When I begin an image context and render into it right away, is the rendering happening on an even pixel, or do I need to manually set where that render is occurring?
Make sure that your label coordinates are integer values. If they are not whole numbers, they will appear blurry.
I think you need to use CGRectIntegral. For more information, see: What is the usage of CGRectIntegral? and the reference documentation for CGRectIntegral.
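For example, a sketch (nameLabel stands in for any label whose frame may have fractional coordinates):
// CGRectIntegral rounds the origin down and the size up to whole units,
// so the label lands on pixel boundaries and isn't antialiased.
nameLabel.frame = CGRectIntegral(nameLabel.frame);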
I came across the same problem today: my content got pixelated when producing an image from UILabel text.
We use UIGraphicsBeginImageContextWithOptions() to configure the drawing environment for rendering into a bitmap. It accepts three parameters:
size: The size of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
opaque: A Boolean flag indicating whether the bitmap is opaque. If the opaque parameter is YES, the alpha channel is ignored and the bitmap is treated as fully opaque.
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So we should use a proper scale factor with respect to the device display (1x, 2x, 3x) to fix this issue.
Swift 5 version:
UIGraphicsBeginImageContextWithOptions(frame.size, true, UIScreen.main.scale)
defer { UIGraphicsEndImageContext() } // balance the begin call on every return path
if let currentContext = UIGraphicsGetCurrentContext() {
    nameLabel.layer.render(in: currentContext)
    let nameImage = UIGraphicsGetImageFromCurrentImageContext()
    return nameImage
}
return nil
