NSTextAttachment image not showing when drawing using Core Text - iOS

For some reason I'm unable to get NSTextAttachment images to draw when using Core Text, although the same image displays fine when the NSAttributedString is added to a UILabel.
On iOS this rendering leaves empty spaces where the NSTextAttachments should be; on OS X, a placeholder [OBJECT] square is rendered for each NSTextAttachment instead. Is there something else that needs to be done in order to render images with Core Text?
The rendering code:
CGFloat contextHeight = CGBitmapContextGetHeight(context);
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)_attributedString);
CGPathRef path = CGPathCreateWithRect(CGRectMake(rect.origin.x,
                                                 contextHeight - rect.origin.y - rect.size.height,
                                                 rect.size.width,
                                                 rect.size.height), NULL);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);
CFRelease(framesetter);
CGPathRelease(path);
CGContextSaveGState(context);
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
CGContextTranslateCTM(context, 0.0f, contextHeight);
CGContextScaleCTM(context, 1.0f, -1.0f);
CTFrameDraw(frame, context);
CGContextRestoreGState(context);
CFRelease(frame);

The reason is simply that NSTextAttachment only works for rendering an NSAttributedString into a UIView/NSView. It can't be used to render into a regular CGContext.
There are two possible ways to solve the problem:
1. Create a UILabel, CATextLayer, or similar, and render it into the graphics context.
2. Use a CTRunDelegate to reserve empty spaces in the text, then loop through the lines to be rendered and draw the images into the CGContext yourself. The approach is detailed here: https://www.raywenderlich.com/4147/core-text-tutorial-for-ios-making-a-magazine-app. Expect a fair amount of work if you go down this route, but it works.
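If you go down the CTRunDelegate route, here is a rough sketch of the moving parts, not a drop-in replacement for NSTextAttachment. It assumes the image travels under a made-up attribute key (MyImageAttributeName) and that the frame's path was created at the flipped context's origin:
#import <CoreText/CoreText.h>

// Geometry callbacks: tell Core Text how much space to reserve.
// refCon is the UIImage, passed through unretained in this sketch,
// so the image must be kept alive elsewhere.
static void ImageRunDealloc(void *refCon) { /* image owned elsewhere */ }
static CGFloat ImageRunAscent(void *refCon)  { return ((__bridge UIImage *)refCon).size.height; }
static CGFloat ImageRunDescent(void *refCon) { return 0; }
static CGFloat ImageRunWidth(void *refCon)   { return ((__bridge UIImage *)refCon).size.width; }

// Building the placeholder for one image:
CTRunDelegateCallbacks callbacks = {
    .version    = kCTRunDelegateVersion1,
    .dealloc    = ImageRunDealloc,
    .getAscent  = ImageRunAscent,
    .getDescent = ImageRunDescent,
    .getWidth   = ImageRunWidth,
};
CTRunDelegateRef delegate = CTRunDelegateCreate(&callbacks, (__bridge void *)image);
// U+FFFC is the object-replacement character, the conventional placeholder.
NSMutableAttributedString *placeholder =
    [[NSMutableAttributedString alloc] initWithString:@"\uFFFC"];
[placeholder addAttribute:(NSString *)kCTRunDelegateAttributeName
                    value:(__bridge id)delegate
                    range:NSMakeRange(0, 1)];
// "MyImageAttributeName" is a made-up key used to find the image again at draw time.
[placeholder addAttribute:@"MyImageAttributeName" value:image range:NSMakeRange(0, 1)];
CFRelease(delegate);
// ...insert `placeholder` into _attributedString wherever the image belongs...

// After CTFrameDraw (context already flipped), draw the images into the holes:
CFArrayRef lines = CTFrameGetLines(frame);
CFIndex lineCount = CFArrayGetCount(lines);
CGPoint origins[lineCount];
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), origins);
for (CFIndex i = 0; i < lineCount; i++) {
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    for (id runObj in (__bridge NSArray *)CTLineGetGlyphRuns(line)) {
        CTRunRef run = (__bridge CTRunRef)runObj;
        NSDictionary *attrs = (__bridge NSDictionary *)CTRunGetAttributes(run);
        UIImage *runImage = attrs[@"MyImageAttributeName"];
        if (runImage == nil) continue;
        CGFloat ascent = 0;
        CGFloat width = CTRunGetTypographicBounds(run, CFRangeMake(0, 0), &ascent, NULL, NULL);
        CGFloat x = origins[i].x +
            CTLineGetOffsetForStringIndex(line, CTRunGetStringRange(run).location, NULL);
        // Line origins are relative to the frame's path; with the context
        // flipped, y is the baseline in Quartz coordinates.
        CGContextDrawImage(context, CGRectMake(x, origins[i].y, width, ascent), runImage.CGImage);
    }
}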

Related

Output String Using Core Text

The iOS CGContext documentation says that various string output functions are now deprecated in favor of Core Text. The documentation just says "Use Core Text instead."
If I have
NSString *string;
what would be the simple, currently approved method for drawing that text into the CGContext?
Here is my overridden drawRect: method to render an attributed string, with explanation comments inside. By the time this method is called, UIKit has configured the drawing environment appropriately for your view, and you can simply call whatever drawing methods and functions you need to render your content.
/*!
 * @abstract Draw the attributed string with Core Text into the current context.
 */
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.backgroundColor setFill];
    CGContextFillRect(context, rect);
    // Release the previous frame/framesetter; NULL the frame ivar so a
    // later release can't free it twice.
    if (_ctframe != NULL) { CFRelease(_ctframe); _ctframe = NULL; }
    if (_framesetter != NULL) CFRelease(_framesetter);
    // Creates an immutable framesetter object from an attributed string.
    // Use here the attributed string with which to construct the framesetter object.
    _framesetter = CTFramesetterCreateWithAttributedString(
        (__bridge CFAttributedStringRef)self.attributedString);
    // Creates a mutable graphics path.
    CGMutablePathRef mainPath = CGPathCreateMutable();
    if (!_path) {
        CGPathAddRect(mainPath, NULL,
                      CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height));
    } else {
        CGPathAddPath(mainPath, NULL, _path);
    }
    // This call creates a frame full of glyphs in the shape of the path
    // provided by the path parameter. The framesetter continues to fill
    // the frame until it either runs out of text or finds that the text
    // no longer fits.
    CTFrameRef drawFrame = CTFramesetterCreateFrame(_framesetter, CFRangeMake(0, 0),
                                                    mainPath, NULL);
    // Flip the context: Core Text draws with a bottom-left origin, UIKit
    // hands us a top-left one.
    CGContextSetTextMatrix(context, CGAffineTransformIdentity);
    CGContextTranslateCTM(context, 0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Draw the text.
    CTFrameDraw(drawFrame, context);
    // Clean up.
    if (drawFrame) CFRelease(drawFrame);
    CGPathRelease(mainPath);
}

Merge two PNG UIImages in iOS without losing transparency

I have two PNG images, both with transparency defined. I need to merge them into a new PNG image without losing any of the transparency in the result.
+ (UIImage *)combineImage:(UIImage *)firstImage colorImage:(UIImage *)secondImage
{
    UIGraphicsBeginImageContext(firstImage.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, 0, firstImage.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGRect rect = CGRectMake(0, 0, firstImage.size.width, firstImage.size.height);
    // draw white background to preserve color of transparent pixels
    CGContextSetBlendMode(context, kCGBlendModeDarken);
    [[UIColor whiteColor] setFill];
    CGContextFillRect(context, rect);
    CGContextSaveGState(context);
    CGContextRestoreGState(context);
    // draw original image
    CGContextSetBlendMode(context, kCGBlendModeDarken);
    CGContextDrawImage(context, rect, firstImage.CGImage);
    // tint image (losing alpha) - the luminosity of the original image is preserved
    CGContextSetBlendMode(context, kCGBlendModeDarken);
    //CGContextSetAlpha(context, .85);
    [[UIColor colorWithPatternImage:secondImage] setFill];
    CGContextFillRect(context, rect);
    CGContextSaveGState(context);
    CGContextRestoreGState(context);
    // mask by alpha values of original image
    CGContextSetBlendMode(context, kCGBlendModeDestinationIn);
    CGContextDrawImage(context, rect, firstImage.CGImage);
    // image drawing code here
    CGContextRestoreGState(context);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
I'd appreciate any help improving the performance of this code.
Thanks in advance.
First of all, those calls to CGContextSaveGState and CGContextRestoreGState, one right after the other with nothing in between, aren't doing anything for you. See this other answer for an explanation of what CGContextSaveGState and CGContextRestoreGState do: CGContextSaveGState vs UIGraphicsPushContext
Now, it's not 100% clear to me what you mean by "merging" the images. If you just want to draw one on top of the other and blend their colors using a standard blending mode, then you just need to change those blend mode calls to pass kCGBlendModeNormal (or leave out the calls to CGContextSetBlendMode entirely). If you want to mask the second image by the first image's alpha values, then you should draw the second image with the normal blend mode, then switch to kCGBlendModeDestinationIn and draw the first image.
I'm afraid I'm not really sure what you're trying to do with the image tinting code in the middle, but my instinct is that you won't end up needing it. You should be able to get most merging effects by just drawing one image, then setting the blending mode, then drawing the other image.
Also, the code you've got there under the comment "draw white background to preserve color of transparent pixels" might draw white through the whole image, but it certainly doesn't preserve the color of transparent pixels; it makes those pixels white! You should remove that code too, unless you really want your "transparent" color to be white.
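To make that concrete, here is a minimal sketch of the masking recipe described above (the method name is mine, not from the question); for a plain overlay, draw both images with the normal blend mode and skip the destination-in pass:
// Draw `second` on top, then keep only the pixels where `first` is opaque.
+ (UIImage *)image:(UIImage *)second maskedByAlphaOf:(UIImage *)first
{
    // NO = keep an alpha channel; 0.0 = use the device's screen scale.
    UIGraphicsBeginImageContextWithOptions(first.size, NO, 0.0);
    CGRect rect = CGRectMake(0, 0, first.size.width, first.size.height);
    // Draw the second image normally (UIImage drawing is already flipped correctly)...
    [second drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
    // ...then mask it by the first image's alpha.
    [first drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}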
I used the code from Vinay's question and Aaron's comments to develop this hybrid, which overlays any number of images:
/**
 Returns the images overlaid atop each other according to their array position,
 with the first image being bottom-most and the last image being top-most.
 - parameter images: The images to overlay.
 - parameter size: The size of the resulting image. Any images not matching this size will show a loss in fidelity.
 */
func combinedImageFromImages(images: [UIImage], withSize size: CGSize) -> UIImage
{
    // Set up the graphics context (allocation, translation/scaling, size)
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    CGContextTranslateCTM(context, 0, size.height)
    CGContextScaleCTM(context, 1.0, -1.0)
    let rect = CGRectMake(0, 0, size.width, size.height)
    // Combine the images
    for image in images {
        CGContextDrawImage(context, rect, image.CGImage)
    }
    let combinedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return combinedImage
}

Fast screenshot iOS

In my project I have to take a screenshot of the screen and apply a blur to create a frosted-glass effect. The content can move under the glass, and the blurred picture has to change with it.
I've used Accelerate.framework to speed up the blurring, and I've used OpenGL to draw the CIImage directly to a GLView.
Now I'm looking for a way to optimize taking the screenshot.
I use this method to get a screenshot of an area at the bottom of the screen:
CGSize size = CGSizeMake(rect.size.width, rect.size.height);
// get screenshot of self.view
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(nil, size.width, size.height, 8, 0, colorSpaceRef, kCGImageAlphaPremultipliedFirst);
CGContextClearRect(ctx, rect);
CGColorSpaceRelease(colorSpaceRef);
CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextTranslateCTM(ctx, 0.0, someView.frame.size.height);
CGContextScaleCTM(ctx, 1, -1);
//add mask
CGImageRef maskImage = [UIImage imageNamed:@"mask.png"].CGImage;
CGContextClipToMask(ctx, rect, maskImage);
[someView.layer renderInContext:ctx];
//get screenshot image
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
It works fine and fast if self.view has one or two subviews, but if there are several subviews (or it is a table view), everything starts to slow down.
So I'm trying to find a fast way to get the pixels from some rect on the screen, maybe using a low-level API.
If you just need the snapshot for some animations, try -snapshotViewAfterScreenUpdates: or -resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets:, which UIView provides. These methods return a UIView object without rendering into a bitmap image, so they are more efficient.
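If you do need actual pixels (for example, to feed Accelerate's blur), a rough sketch of the iOS 7+ snapshot APIs is below; someView and containerView stand in for views from your own hierarchy:
// For pure visual effects (no pixel access needed), a snapshot view is cheapest:
UIView *snapshot = [someView snapshotViewAfterScreenUpdates:NO];
[containerView addSubview:snapshot];

// When you need the bitmap, drawViewHierarchyInRect:afterScreenUpdates: is
// typically much faster than -renderInContext: for complex hierarchies:
UIGraphicsBeginImageContextWithOptions(someView.bounds.size, NO, 0.0);
[someView drawViewHierarchyInRect:someView.bounds afterScreenUpdates:NO];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();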

Convert PDF to UIImageView

I've found some code which gives me a UIImage out of a PDF file. It works, but I have two questions:
1. Is there a possibility to achieve a better quality for the UIImage? (See screenshot.)
2. I only see the first page in my UIImageView. Do I have to embed the file in a UIScrollView to show all pages? Or is it better to render just one page and use buttons to navigate through the pages?
P.S. I know that UIWebView can display PDF-Pages with some functionalities but I need it as a UIImage or at least in a UIView.
Bad quality Image:
Code:
- (UIImage *)image {
    UIGraphicsBeginImageContext(CGSizeMake(280, 320));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("ls.pdf"), NULL, NULL);
    CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL(pdfURL);
    // Flip the context: PDF pages use a bottom-left origin.
    CGContextTranslateCTM(context, 0.0, 320);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGPDFPageRef page = CGPDFDocumentGetPage(pdf, 4);
    CGContextSaveGState(context);
    CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, CGRectMake(0, 0, 280, 320), 0, true);
    CGContextConcatCTM(context, pdfTransform);
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Release the Core Foundation objects we own.
    CGPDFDocumentRelease(pdf);
    CFRelease(pdfURL);
    return resultingImage;
}
I know I'm a little late here, but I hope I can help someone else looking for an answer.
As to the questions asked:
I'm afraid the only way to achieve better image quality is to render a bigger image and let the UIImageView resize it for you. I don't think you can set the resolution, but using a bigger image may be a good choice. It won't take too long for the page to render, and the image will have better quality. PDF viewers render pages on demand at the current zoom level; that's why they seem to have "better quality".
As for rendering all the pages: you can get the number of pages in the document by calling CGPDFDocumentGetNumberOfPages(pdf), and with a simple for loop you can concatenate all the generated images into one single image. For displaying it, use a UIScrollView.
In my opinion, this approach is better than the one above, but you should try to optimize it, for example by always rendering only the current, the previous, and the next page. For nice scrolling transition effects, why not use a horizontal UIScrollView?
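A minimal sketch of that per-page loop; pageImageForPage:size: is a hypothetical helper wrapping rendering code like the rotation-aware block below, and pdfURL/imageSize are values you supply:
// Render every page of the document into its own UIImage.
CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL(pdfURL); // pdfURL: your file URL
size_t pageCount = CGPDFDocumentGetNumberOfPages(pdf);
NSMutableArray *pageImages = [NSMutableArray arrayWithCapacity:pageCount];
for (size_t i = 1; i <= pageCount; i++) { // PDF page numbers are 1-based
    CGPDFPageRef page = CGPDFDocumentGetPage(pdf, i);
    // pageImageForPage:size: is a hypothetical helper, not a system API.
    [pageImages addObject:[self pageImageForPage:page size:imageSize]];
}
CGPDFDocumentRelease(pdf);
// Lay the images out in a UIScrollView (or stitch them into one tall image).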
For more generic rendering code, i always do the rotation like this:
int rotation = CGPDFPageGetRotationAngle(page);
CGContextTranslateCTM(context, 0, imageSize.height); // move up by the height
CGContextScaleCTM(context, 1.0, -1.0);               // flip the context vertically
CGContextRotateCTM(context, -rotation * M_PI / 180); // rotate for the PDF's rotation angle
CGRect placement = CGContextGetClipBoundingBox(context); // get the flipped placement
CGContextTranslateCTM(context, placement.origin.x, placement.origin.y); // move to the correct place
// do all your drawing
CGContextDrawPDFPage(context, page);
// undo the rotations/scaling/translations
CGContextTranslateCTM(context, -placement.origin.x, -placement.origin.y);
CGContextRotateCTM(context, rotation * M_PI / 180);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextTranslateCTM(context, 0, -imageSize.height);
Steipete already mentioned setting the white background:
CGContextSetRGBFillColor(context, 1, 1, 1, 1);
CGContextFillRect(context, CGRectMake(0, 0, imageSize.width, imageSize.height));
The last thing to keep in mind: when exporting an image, set the quality to the maximum. For example:
UIImageJPEGRepresentation(image, 1);
What are you doing with the CGContextTranslateCTM(context, 0.0, 320); call?
You should extract the proper metrics from the PDF, with code like this:
cropBox = CGPDFPageGetBoxRect(page, kCGPDFCropBox);
rotate = CGPDFPageGetRotationAngle(page);
Also, as you see, the PDF might have rotation info, so you need to use CGContextTranslateCTM/CGContextRotateCTM/CGContextScaleCTM depending on the angle.
You also might wanna clip any content that is outside of the CropBox area, as PDFs have various viewports that you usually don't wanna display (e.g. for printers, so that seamless printing is possible) -> use CGContextClip.
Next, you're forgetting that the PDF reference defines a white background color. There are a lot of documents out there that don't define any background color at all - you'll get weird results if you don't draw a white background on your own --> CGContextSetRGBFillColor & CGContextFillRect.
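Putting those pieces together, here is a sketch (not the poster's exact code) of a page-drawing routine that fills a white background, applies the page's drawing transform, and clips to the CropBox; it assumes the context has already been flipped into Quartz coordinates:
void DrawPDFPage(CGContextRef context, CGPDFPageRef page, CGRect target)
{
    CGContextSaveGState(context);
    // PDF assumes a white page background; draw it ourselves.
    CGContextSetRGBFillColor(context, 1, 1, 1, 1);
    CGContextFillRect(context, target);
    // Map the page's CropBox into the target rect, honoring the page rotation.
    CGAffineTransform t = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox,
                                                       target, 0, true);
    CGContextConcatCTM(context, t);
    // Clip away content outside the CropBox (printer marks etc.);
    // the CropBox rect is in page coordinates, which now apply.
    CGContextClipToRect(context, CGPDFPageGetBoxRect(page, kCGPDFCropBox));
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
}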

Draw background image using CGContextDrawImage

I want to draw on a UIView that has a background image that I set using the code below:
- (void)setBackgroundImageFromData:(NSData *)imageData {
    UIImage *image = [UIImage imageWithData:imageData];
    int width = image.size.width;
    int height = image.size.height;
    CGSize size = CGSizeMake(width, height);
    CGRect imageRect = CGRectMake(0, 0, width, height);
    UIGraphicsBeginImageContext(size);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(currentContext, 0, height);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextDrawImage(currentContext, imageRect, image.CGImage);
    UIGraphicsEndImageContext();
}
The initial view is created using the code from Apple's GLPaint example. For the life of me, that background image is not shown. What am I missing?
Thanks!
You create a UIImage and an imageRect successfully. You then begin an image context to draw the image into, draw the image into the context, and end the context. The problem is that you just allow the context to expire without doing anything with it.
In UIKit you don't push new visuals upwards; you wait until you're asked to draw. Internal mechanisms cache your images and use them to move things about and otherwise redraw the screen at the usual 60fps.
If this is a custom UIView subclass, then you probably want to keep hold of the UIImage and composite it as part of your drawRect:. You can mark the contents of your UIView as changed by calling setNeedsDisplay - you'll then be asked to redraw your contents at some point in the future.
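A sketch of that first route, assuming a hypothetical backgroundImage property holding the decoded UIImage:
// Custom subclass route: keep the image and composite it when asked to draw.
- (void)drawRect:(CGRect)rect {
    [self.backgroundImage drawInRect:self.bounds]; // UIImage draws un-flipped in UIKit
    // ...draw the rest of the view's content on top...
}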
If this isn't a custom subclass, then the easiest thing to do is to wrap this view in an outer view and add a UIImageView behind it, to which you can set the UIImage.
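And a sketch of the second route; paintView stands in for the GLPaint canvas view:
// Decode the image once and let a UIImageView own it; UIKit handles redraws.
- (void)setBackgroundImageFromData:(NSData *)imageData {
    UIImage *image = [UIImage imageWithData:imageData];
    UIImageView *background = [[UIImageView alloc] initWithImage:image];
    background.frame = self.paintView.frame;
    // Keep the background below the painting view so strokes stay on top.
    [self.view insertSubview:background belowSubview:self.paintView];
}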
