renderInContext: producing an image with blurry text - iOS

I am prerendering a composited image with a couple different UIImageViews and UILabels to speed up scrolling in a large tableview. Unfortunately, the main UILabel is looking a little blurry compared to other UILabels on the same view.
The black letters "PLoS ONE" are in a UILabel, and they look much blurrier than the words "Medical" or "Medicine". The logo "PLoS one" is probably being blurred in the same way, but the blur is less noticeable there than on crisp text.
The entire magazine cover is a single UIImage assigned to a UIButton.
This is the code I'm using to draw the image. The magazineView is a rectangle that's 125 x 151 pixels.
I have tried different scaling qualities, but that has not changed anything. And it shouldn't, since the scaling shouldn't be different at all. The UIButton I'm assigning this image to is the exact same size as the magazineView.
// Render the composed magazineView layer into an offscreen bitmap.
// Passing 0.0 as the scale uses the main screen's scale factor.
UIGraphicsBeginImageContextWithOptions(magazineView.bounds.size, NO, 0.0);
[magazineView.layer renderInContext:UIGraphicsGetCurrentContext()];
// Manual memory management (pre-ARC): swap the old image for the new one.
[coverImage release];
coverImage = UIGraphicsGetImageFromCurrentImageContext();
[coverImage retain];
UIGraphicsEndImageContext();
Any ideas why it's blurry?
When I begin an image context and render into it right away, is the rendering happening on an even pixel, or do I need to manually set where that render is occurring?

Make sure that your label coordinates are integer values. If they are not whole numbers they will appear blurry.
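For example, a frame that comes out of centering math can easily land on a half-point boundary; rounding it before you render keeps the text sharp. A minimal sketch (the label name here is just a placeholder):
// Snap a hypothetical label's origin to whole points before rendering.
CGRect labelFrame = titleLabel.frame;
labelFrame.origin.x = round(labelFrame.origin.x);
labelFrame.origin.y = round(labelFrame.origin.y);
titleLabel.frame = labelFrame;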

I think you need to use CGRectIntegral. For more information, please see: What is the usage of CGRectIntegral? and the CGRectIntegral reference.
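Something like this (the label name is a placeholder):
// CGRectIntegral returns the smallest rect with integer origin and size
// that contains the original, so nothing lands on a fractional pixel.
nameLabel.frame = CGRectIntegral(nameLabel.frame);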

I came across the same problem today, where my content got pixelated when producing an image from UILabel text.
We use UIGraphicsBeginImageContextWithOptions() to configure the drawing environment for rendering into a bitmap which accepts three parameters:
size: The size of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
opaque: A Boolean flag indicating whether the bitmap is opaque. If the opaque parameter is YES, the alpha channel is ignored and the bitmap is treated as fully opaque.
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So we should use a proper scale factor with respect to the device display (1x, 2x, 3x) to fix this issue.
Swift 5 version:
UIGraphicsBeginImageContextWithOptions(frame.size, true, UIScreen.main.scale)
defer { UIGraphicsEndImageContext() }   // balance the begin call even on early return
if let currentContext = UIGraphicsGetCurrentContext() {
    nameLabel.layer.render(in: currentContext)
    let nameImage = UIGraphicsGetImageFromCurrentImageContext()
    return nameImage
}
return nil

Related

Get pixel values of image and crop the black part of the image : Objective C

Can I get the pixel values of an image and crop out its black part? For instance, I have an image surrounded by a black region, and I want the same image without that black part.
Any possible solution on how to do this? Any libraries/code?
I am using Objective C.
I have seen this solution to a similar question, but I don't understand it in detail. Please kindly provide detailed steps. Thanks.
Probably the fastest way of doing this is to iterate through the image and find the border pixels that are not black. Then redraw the image to a new context, clipping to the rect given by those border pixels.
By border pixels I mean the left-most, top-most, bottom-most and right-most non-black pixels. You can get the raw RGBA buffer from the UIImage, iterate through its width and height, and update the border values as appropriate. For instance, to get leftMostPixel you would first set it to some large value (or to the image width) and then, in the iteration, if the pixel is not black and leftMostPixel > x, set leftMostPixel = x.
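A rough sketch of that scan (the function name and the "black" threshold are just placeholders, and I'm assuming the image orientation is up; see the caveats below):
// Find the bounding rect of all non-black pixels by reading an RGBA copy of the image.
static CGRect boundingRectOfNonBlackPixels(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Redraw into a known RGBA8888 buffer so the bytes can be read directly.
    size_t bytesPerRow = width * 4;
    uint8_t *pixels = calloc(height * bytesPerRow, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    size_t leftMost = width, topMost = height, rightMost = 0, bottomMost = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            const uint8_t *p = pixels + y * bytesPerRow + x * 4;
            BOOL isBlack = (p[0] < 10 && p[1] < 10 && p[2] < 10); // arbitrary tolerance
            if (!isBlack) {
                if (x < leftMost)   leftMost = x;
                if (x > rightMost)  rightMost = x;
                if (y < topMost)    topMost = y;
                if (y > bottomMost) bottomMost = y;
            }
        }
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    if (rightMost < leftMost) return CGRectZero; // the image was entirely black
    return CGRectMake(leftMost, topMost,
                      rightMost - leftMost + 1, bottomMost - topMost + 1);
}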
Now that you have the 4 bounding values you can create a frame from them. To redraw just the target rectangle you can use various context-based tools, but probably the easiest is to create a view the size of the bounding rect, put an image view the size of the original image on it, and take a screenshot of that view. The image view's origin must be minus the origin of the bounding rect, though (we push it offscreen a bit).
You may encounter some issues with the orientation of the image, though. If the image has an orientation other than up, the raw data will not reflect that. So you need to take that into account when creating the bounding rect... Or redraw the image first so it is oriented correctly... Or you can even create a sub-buffer of the RGBA data, create a CGImage from that data, and apply the same orientation to the output UIImage as the input had.
So after getting the bounds there are quite a few possible procedures. Some are slower, some take more memory, and some are simply hard to code and full of edge cases.
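If the orientation issues don't bite you, one of the simpler ways to do the redraw is to crop the backing CGImage directly with CGImageCreateWithImageInRect, something like this (using the hypothetical helper sketched above):
// Crop the original image to the computed bounding rect (pixel coordinates).
CGRect bounds = boundingRectOfNonBlackPixels(original);
CGImageRef croppedRef = CGImageCreateWithImageInRect(original.CGImage, bounds);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:original.scale
                                 orientation:original.imageOrientation];
CGImageRelease(croppedRef);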

Objective-C How does snapchat make the text on top of an image/video so sharp and not pixelated?

In my app, users can place text on top of images, like Snapchat, and then save the image to their device. I simply add the text view on top of the image and capture the composed view using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text from a Snapchat image... it is significantly different. Snapchat's text on top of an image is significantly sharper than mine. Mine looks very pixelated. Also, I am not compressing the image at all, just saving it as-is using ALAssetLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
This will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen is".
If your imageView is scaled to the size of the screen, then I think your jpeg will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
UIGraphicsBeginImageContext(imageSize);                                  // offscreen context at the full image size
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)]; // draw the photo at full resolution
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);          // scale the CTM so the screen-sized overlay fills the image
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];       // render the text overlay on top
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the "scale" value to scale the textOverlay view, which is probably screen-sized, up to the offscreen image size (for example, scale = imageSize.width / textOverlay.bounds.size.width).
Alternatively, and probably simpler, you can start with a larger UIImageView but put it within another UIView that scales it to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work, at whatever resolution you choose for the UIImageView.

iOS7 UIImage drawAtPoint not retina

I am trying to draw an image with the following code:
[img drawAtPoint:CGPointZero];
But the problem is that on an iPhone with a retina display the image doesn't get drawn at retina scale. It seems like the image gets upscaled and then drawn.
I don't want to use drawInRect because the image is already the right size, and it's way slower to use drawInRect.
Any ideas?
You probably are not setting the appropriate scale factor. When you create the bitmap context one of the arguments is the scale:
void UIGraphicsBeginImageContextWithOptions(
    CGSize size,
    BOOL opaque,
    CGFloat scale
);
According to the official documentation scale is:
The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen.
You're probably passing 1.0f which will result in the issue you've described. Try passing 0.0f.
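So the full snippet could look something like this (just a sketch; img is your image and the variable names are placeholders):
// Passing 0.0 as the scale uses the main screen's scale factor (2.0 on retina).
UIGraphicsBeginImageContextWithOptions(img.size, NO, 0.0f);
[img drawAtPoint:CGPointZero];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();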

UIImage rendering not clear on iPad Mini

The following code block is used in my application to take a screenshot of the current screen of an iPad mini (768 x 1024):
UIImage *img;
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
In a different viewcontroller, I present a UIScrollView with a width of 540 and a height of 290. I display the screencapture UIImage in a UIImageView which I create programmatically initWithFrame with a rectangle width of 250 and height of 250. The content size of the scrollview is 768 by 250.
Now running the application, I display four rectangles and take a screenshot of the screen using the above block of code. Transitioning to the UIScrollView, the image is not clear (and by not clear, I mean some rectangles are missing sides while others are thicker). Is there a way to display the image more clearly? I know the image has to be scaled down from the original 768 x 1024 to 250 x 250. Could this be the problem? If so, what would be the best fix?
Edit:
Above is a screenshot of the image I want to capture.
Below is the UIImage in UIImageView within a UIScrollView:
Cast each coordinate to an int, or use CGRectIntegral to do that directly on a CGRect; fractional coordinates require anti-aliasing and make images blurry.
Try changing the content mode of your UIImageViews. If you use UIViewContentModeScaleAspectFill, you shouldn't see any extra space around the edges.
Update: From the screenshots you posted, it looks like this is just an effect of the built-in downscaling in UIKit. Try manually downscaling the image to fit using Core Graphics first. Alternatively, you might want to use something like the CILanczosScaleTransform Core Image filter (iOS 6+).
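Something along these lines (a sketch; the screenshot image and target size are placeholders):
// Redraw the full-size screenshot into a small context with high-quality interpolation.
CGSize targetSize = CGSizeMake(250, 250);
UIGraphicsBeginImageContextWithOptions(targetSize, YES, 0.0);
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[screenshot drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();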

Preventing antialiasing from CALayer renderInContext

I have a very simple UIView containing a few black and white UIImageViews. If I take a screenshot via the physical buttons on the device, the resulting image looks exactly like what I see (as expected) - if I examine the image at the pixel level it is only black and white.
However, if I use the following snippet of code to perform the same action programmatically, the resulting image has what appears to be anti-aliasing applied: all the black pixels are surrounded by faint grey halos. There is no grey in my original scene; it's pure black and white, and the dimensions of the "screenshot" image are the same as the one I am generating programmatically, but I cannot seem to figure out where the grey haloing is coming from.
UIView *printView = fullView;
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[printView.layer renderInContext:ctx];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
UIGraphicsEndImageContext();
I've tried adding the following before the call to renderInContext in an attempt to prevent the antialiasing, but it has no noticeable effect:
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
Here is a sample of the two different outputs - the left side is what my code produces and the right side is a normal iOS screenshot:
Since I am trying to send the output of my renderInContext to a monochrome printer, having grey pixels causes some ugly artifacting due to the printer's dithering algorithm.
So, how can I get renderInContext to produce the same pixel-level output of my views as a real device screenshot - i.e. just black and white as is what is in my original scene?
It turns out the problem was related to the resolution of the underlying UIImage being used by the UIImageView. The UIImage was backed by a CGImage created using a data provider. The CGImage dimensions were specified in the same units as the parent UIImageView; however, I am using an iOS device with a retina display.
Because the CGImage dimensions were specified in non-retina size, renderInContext was upscaling the CGImage and apparently this upscaling behaves differently than what is done by the actual screen rendering. (For some reason the real screen rendering upscaled without adding any grey pixels.)
To fix this, I created my CGImage with double the dimensions of the UIImageView, and then my call to renderInContext produced a much better black and white image. There are still a few grey pixels in some of the white areas, but it is a vast improvement over the original problem.
I finally figured this out by changing the call to UIGraphicsBeginImageContextWithOptions() to force a scale of 1.0, and I noticed that the UIImageView's black pixels no longer had a grey halo. When I forced UIGraphicsBeginImageContextWithOptions() to a scale factor of 2.0 (which is what it was defaulting to because of the retina display), the grey haloing appeared.
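Something like this, assuming the CGImage has already been created at double the point dimensions (the variable names are placeholders):
// Tell UIKit the bitmap is already 2x so renderInContext: does not upscale it again.
UIImage *retinaImage = [UIImage imageWithCGImage:doubleSizeCGImage
                                           scale:2.0
                                     orientation:UIImageOrientationUp];
imageView.image = retinaImage;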
I would try setting printView.layer.magnificationFilter and printView.layer.minificationFilter to kCAFilterNearest.
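In code, something like:
// Nearest-neighbour filtering avoids the interpolation that can introduce grey pixels.
printView.layer.magnificationFilter = kCAFilterNearest;
printView.layer.minificationFilter  = kCAFilterNearest;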
Are the images displayed in UIImageView instances? Is printView their superview?
