I've got a custom NSView that draws a chart in my app. I am generating a PDF that includes an image of the chart. On iOS I do this with code like this:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
[self drawRect:self.frame];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
On iOS the displays are Retina, so the image comes out at very high resolution. Now I'm trying to do the same thing in my Mac app, and the quality of the image is poor, because non-Retina Macs will generate a non-high-res version of the image.
I would like to force my NSView to behave as if it were Retina when I'm using it to generate an image. That way, when I put the image into my PDF, it will be much higher resolution. Right now it's very blurry and unattractive.
Even a Retina bitmap will still be blurry and unattractive when scaled up enough. Assuming the view draws its contents in drawRect:, a better approach than rendering the view into the PDF at a fixed resolution is to draw directly into a PDF graphics context. This produces a nicely scalable PDF. The drawing code will need to be factored so it can be used both by the view's drawRect: and by the PDF rendering.
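As a minimal sketch of that factoring (the method names drawChartInRect: and chartPDFData are hypothetical, and the PDF is built with a plain CGPDFContext):

// Shared drawing code, callable from both paths.
- (void)drawChartInRect:(NSRect)rect
{
    // ... the existing chart drawing (NSBezierPath, text, etc.) ...
}

- (void)drawRect:(NSRect)dirtyRect
{
    [self drawChartInRect:self.bounds];
}

// Renders the same drawing into a PDF context, producing vector output.
- (NSData *)chartPDFData
{
    NSMutableData *data = [NSMutableData data];
    CGRect mediaBox = NSRectToCGRect(self.bounds);
    CGDataConsumerRef consumer = CGDataConsumerCreateWithCFData((__bridge CFMutableDataRef)data);
    CGContextRef pdf = CGPDFContextCreate(consumer, &mediaBox, NULL);
    CGPDFContextBeginPage(pdf, NULL);
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithCGContext:pdf flipped:NO]];
    [self drawChartInRect:self.bounds];   // same drawing code, now vector
    [NSGraphicsContext restoreGraphicsState];
    CGPDFContextEndPage(pdf);
    CGPDFContextClose(pdf);
    CGContextRelease(pdf);
    CGDataConsumerRelease(consumer);
    return data;
}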
Also, the iOS documentation states you should never call drawRect: yourself. Call renderInContext: on the view's layer, or use the newer drawViewHierarchyInRect:afterScreenUpdates:.
You can call -[NSView dataWithPDFInsideRect:] to get PDF data from the drawing in a view. For example:
NSData* data = [someView dataWithPDFInsideRect:someView.bounds];
[data writeToFile:@"/tmp/foo.pdf" atomically:YES];
Any vector drawing (e.g. text, Bezier paths, etc.) that your view and its subviews do will end up as scalable vector graphics in the PDF.
When you put a UIImage into a UIImageView that is smaller than the view, and the content mode of the view is ScaleToFit, iOS enlarges the bitmap, as expected. What I never expected is that it blurs the edges of the pixels it has scaled up. I can see this might be a nice touch if you're looking at photographs, but in many other cases I want to see those nasty, hard, straight edges!
Does anyone know how you can configure a UIImage or UIImageView to enlarge with sharp pixel edges? That is: Let it look pixellated, not blurred.
Thanks!
If you want to scale up an image in a UIImageView with sharp edges, use the following property of CALayer.
imageview.layer.magnificationFilter = kCAFilterNearest;
It seems that magnificationFilter affects the interpolation method used for the contents of the view's layer. I recommend reading the explanation of the property in CALayer.h.
/* The filter types to use when rendering the `contents' property of
* the layer. The minification filter is used when to reduce the size
* of image data, the magnification filter to increase the size of
* image data. Currently the allowed values are `nearest' and `linear'.
* Both properties default to `linear'. */
@property(copy) NSString *minificationFilter, *magnificationFilter;
I hope that my answer is useful for you.
I would like to render text in iOS to a texture so I can draw it using OpenGL. I am using this code:
CGSize textSize = [m_string sizeWithAttributes: m_attribs];
CGSize frameSize = CGSizeMake(NextPowerOf2((NSInteger)(MAX(textSize.width, textSize.height))), NextPowerOf2((NSInteger)textSize.height));
UIGraphicsBeginImageContextWithOptions(frameSize, NO /*opaque*/ , 1.0 /*scale*/);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextSetTextDrawingMode(currentContext, kCGTextFillStroke);
CGContextSetLineWidth(currentContext, 1);
[m_string drawAtPoint:CGPointMake (0, 0) withAttributes:m_attribs];
When I try to use kCGTextFillStroke or kCGTextStroke I get this:
When I try to use kCGTextFill I get this:
Is there any way to get simple, one line clean text like this? (Taken from rendering on OS X)
This looks like a resolution issue, but setting that aside...
Since you are using iOS, I suggest you use a UIKit component, UILabel for instance. Set any parameters on the label you wish, including line break mode, number of lines, attributed text, fonts... You may call sizeToFit to get the minimum possible size of the label. You do not add the label to any other view; instead, create a UIImage from the view (you have quite a few answers for that on SO). Once you have the image, you may simply copy the raw RGBA data to the texture (again, loads of answers on how to get the RGBA data from a UIImage). And that is it. Well, you might want to check the content scale for Retina 2x and 3x devices, or handle those manually by increasing the font sizes by the corresponding factors.
This procedure might seem like a workaround and much slower than using Core Graphics directly, but the truth is quite far from that:
Creating a context with size and options creates an RGBA buffer just like the one backing a CGImage (the UIImage only wraps it).
Core Graphics is used to draw the view into the UIImage, so the procedure is essentially the same under the hood.
You still need to copy the data to the texture, but that is true in both cases. A small downside is that to access the raw RGBA data from the image you will need to copy (duplicate) it somewhere along the line, but that is a relatively quick operation, and most likely the same thing happens in your procedure.
So this procedure may consume slightly more resources (not much, and possibly even less), but you get unlimited power when it comes to drawing text.
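A minimal sketch of that flow (the method name textureDataForString: is my own, not from any library); the bytes come out ready for glTexImage2D with GL_RGBA:

- (NSData *)textureDataForString:(NSAttributedString *)string
{
    UILabel *label = [[UILabel alloc] init];
    label.attributedText = string;
    label.numberOfLines = 0;
    [label sizeToFit];

    // Render the label into an image at screen scale (Retina-aware).
    UIGraphicsBeginImageContextWithOptions(label.bounds.size, NO, 0.0);
    [label.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Copy the raw RGBA pixels out of the CGImage.
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    NSMutableData *pixels = [NSMutableData dataWithLength:width * height * 4];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels.mutableBytes, width, height,
                                             8, width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    return pixels;
}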
Well, eventually I rendered to a texture at doubled size and converted it to a UIImage with scale = 2, thereby taking advantage of the Retina display.
UIImage* pTheImage = UIGraphicsGetImageFromCurrentImageContext();
UIImage* pScaledImage = [UIImage imageWithCGImage:pTheImage.CGImage scale:2 orientation:pTheImage.imageOrientation];
Then I just use it as a texture for OpenGL drawing.
My app allows users to place text on top of images, like Snapchat, and then save the image to their device. I simply add the text view on top of the image and snapshot the result using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text on a Snapchat image, they are significantly different: Snapchat's text on top of the image is significantly sharper than mine. Mine looks very pixelated. Also, I am not compressing the image at all, just saving it as-is using ALAssetLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
This will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen's scale is".
If your imageView is scaled to the size of the screen, then I think your JPEG will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
UIGraphicsBeginImageContext(imageSize);   // imageSize is the larger offscreen size
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);   // scale applies to the overlay only
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the "scale" value to scale the textOverlay view, which is probably at screen size, to the offscreen image size.
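For instance, one plausible way to compute that scale, assuming the overlay spans the screen width:

// Hypothetical: map the overlay's on-screen width to the offscreen image width.
CGFloat scale = imageSize.width / textOverlay.bounds.size.width;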
Alternatively, and probably simpler, you can start with a larger UIImageView but put it within another UIView that scales it to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work at whatever resolution you choose for the UIImageView.
I have a very simple UIView containing a few black and white UIImageViews. If I take a screenshot via the physical buttons on the device, the resulting image looks exactly like what I see (as expected) - if I examine the image at the pixel level it is only black and white.
However, if I use the following snippet of code to perform the same action programmatically, the resulting image has what appears to be anti-aliasing applied: all the black pixels are surrounded by faint grey halos. There is no grey in my original scene; it's pure black and white, and the dimensions of the "screenshot" image are the same as the one I am generating programmatically, but I cannot seem to figure out where the grey haloing is coming from.
UIView *printView = fullView;
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[printView.layer renderInContext:ctx];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
UIGraphicsEndImageContext();
I've tried adding the following before the call to renderInContext in an attempt to prevent the antialiasing, but it has no noticeable effect:
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
Here is a sample of the two different outputs - the left side is what my code produces and the right side is a normal iOS screenshot:
Since I am trying to send the output of my renderInContext to a monochrome printer, having grey pixels causes some ugly artifacting due to the printer's dithering algorithm.
So, how can I get renderInContext: to produce the same pixel-level output as a real device screenshot, i.e. just black and white, as in my original scene?
It turns out the problem was related to the resolution of the underlying UIImage used by the UIImageView. The UIImage was a CGImage created using a data provider. The CGImage dimensions were specified in the same units as the parent UIImageView; however, I am using an iOS device with a Retina display.
Because the CGImage dimensions were specified at non-Retina size, renderInContext: upscaled the CGImage, and apparently this upscaling behaves differently than the actual screen rendering. (For some reason, the real screen rendering upscales without adding any grey pixels.)
To fix this, I created my CGImage with double the dimensions of the UIImageView; then my call to renderInContext: produced a much better black and white image. There are still a few grey pixels in some of the white areas, but it is a vast improvement over the original problem.
I finally figured this out by changing the call to UIGraphicsBeginImageContextWithOptions() to force a scale of 1.0, and noticed that the black pixels rendered by the UIImageView no longer had a grey halo. When I forced UIGraphicsBeginImageContextWithOptions() to a scale factor of 2.0 (which is what it was defaulting to because of the Retina display), the grey haloing appeared.
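In code form, the fix looks roughly like this; makeMonochromeCGImageWithSize() is a hypothetical stand-in for the existing data-provider code:

// Build the backing image at Retina pixel dimensions so renderInContext:
// never has to upscale it.
CGFloat scale = [UIScreen mainScreen].scale;            // 2.0 on Retina
CGSize pointSize = imageView.bounds.size;
CGImageRef cgImage = makeMonochromeCGImageWithSize(
    CGSizeMake(pointSize.width * scale, pointSize.height * scale));
imageView.image = [UIImage imageWithCGImage:cgImage
                                      scale:scale
                                orientation:UIImageOrientationUp];
CGImageRelease(cgImage);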
I would try setting
printView.layer.magnificationFilter = kCAFilterNearest;
printView.layer.minificationFilter = kCAFilterNearest;
Are the images displayed in UIImageView instances? Is printView their superview?
I have several SVG images that I would like to use in an iOS application; in short, I would like to turn the SVG images into UIImages (or CGImages).
My goal is that I should be able to load the images from the .svg files at an arbitrary size (assuming correct W/H ratio) and store them as UIImages or CGImages without any loss of image quality. (Note this all has to happen at runtime, pre-converting the images to various sized .png files and putting them in the App bundle isn't a viable option.)
Is this possible, and if so, how could I go about doing this? I have a good working knowledge of Core Graphics but I have never worked with vector graphics before.
SVGKit could help you - this library renders SVGs onto CALayer instances; from that, you can easily composite the image to a bitmap and make a CGImage of it.
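A minimal sketch of that approach, assuming SVGKit's SVGKImage API (imageNamed: and the CALayerTree property; verify these against the SVGKit version you use):

#import <SVGKit/SVGKit.h>

// chart.svg is a hypothetical file in the app bundle.
SVGKImage *svg = [SVGKImage imageNamed:@"chart.svg"];
svg.size = CGSizeMake(512.0, 512.0);    // render at whatever size you need
UIGraphicsBeginImageContextWithOptions(svg.size, NO, 0.0);
[svg.CALayerTree renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();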
I would not recommend converting a vector document to a bitmap in most instances, but if you must, SVGgh has a class SVGRenderer with a method -(UIImage*)asImageWithSize:(CGSize)maximumSize andScale:(CGFloat)scale which will create a UIImage.
This assumes SVGgh renders your SVGs properly; it doesn't support image effects, for instance.
#import <SVGgh/SVGgh.h>
-(UIImage*) imageFromSVGURL:(NSURL*)svgURL withSize:(CGSize)maximumSize andScale:(CGFloat)scale
{
    SVGRenderer* renderer = [[SVGRenderer alloc] initWithContentsOfURL:svgURL];
    return [renderer asImageWithSize:maximumSize andScale:scale];
}