I would like to render text on iOS into a texture so I can draw it using OpenGL. I am using this code:
CGSize textSize = [m_string sizeWithAttributes:m_attribs];
// Round the texture dimensions up to powers of two for OpenGL.
CGSize frameSize = CGSizeMake(NextPowerOf2((NSInteger)MAX(textSize.width, textSize.height)),
                              NextPowerOf2((NSInteger)textSize.height));
UIGraphicsBeginImageContextWithOptions(frameSize, NO /*opaque*/, 1.0 /*scale*/);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextSetTextDrawingMode(currentContext, kCGTextFillStroke);
CGContextSetLineWidth(currentContext, 1);
[m_string drawAtPoint:CGPointMake(0, 0) withAttributes:m_attribs];
When I try to use kCGTextFillStroke or kCGTextStroke I get this:
When I try to use kCGTextFill I get this:
Is there any way to get simple, clean one-line text like this? (Taken from rendering on OS X.)
This looks like a resolution issue, but setting that aside...
Since you are using iOS, I suggest you use a UI component, UILabel for instance. Set whatever parameters you wish on the label: line break mode, number of lines, attributed text, fonts... You may call sizeToFit to get the minimum possible size of the label. You do not add the label to any other view; instead you create a UIImage from the view (you will find quite a few answers for that on SO). Once you have the image you may simply copy the raw RGBA data to the texture (again, loads of answers on how to get the RGBA data from a UIImage). And that is it. You might want to check the content scale for retina 2x and 3x devices, or handle those manually by increasing the font sizes by the corresponding factors.
This procedure might seem like a workaround and might appear much slower than using Core Graphics directly, but the truth is quite far from that:

- Creating a context with a size and options creates an RGBA buffer, just as for a CGImage (the UIImage only wraps it).
- Core Graphics is used to draw the view into the UIImage, so the procedure is essentially the same under the hood.
- You still need to copy the data to the texture, but that holds in both cases. A small downside is that to access the raw RGBA data from the image you will need to copy (duplicate) the raw data somewhere along the line, but that is a relatively quick operation and most likely the same happens in your procedure.

So it is possible that this procedure consumes a bit more resources (not much, and possibly even less in practice), but you get unlimited power when it comes to drawing text.
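To make this concrete, here is a minimal sketch of the pipeline described above, assuming a plain one-line string; the variable names are illustrative and error handling is omitted:

UILabel *label = [[UILabel alloc] init];
label.text = @"Hello, texture";
label.font = [UIFont systemFontOfSize:17.0];
[label sizeToFit]; // shrink the label to the minimum size that fits the text

// Render the label's layer into a bitmap-backed image context.
UIGraphicsBeginImageContextWithOptions(label.bounds.size, NO, [UIScreen mainScreen].scale);
[label.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Copy the raw RGBA data out of the image by redrawing it into a
// bitmap context with a known pixel format.
CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;
uint8_t *rgba = calloc(height * bytesPerRow, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(rgba, width, height, 8, bytesPerRow, colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
// `rgba` now holds width * height * 4 bytes, ready to upload as a texture.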
Well, eventually I rendered to a texture at doubled size and converted it to a UIImage with scale = 2, thereby taking advantage of the retina display.
UIImage* pTheImage = UIGraphicsGetImageFromCurrentImageContext();
UIImage* pScaledImage = [UIImage imageWithCGImage:pTheImage.CGImage scale:2 orientation:pTheImage.imageOrientation];
Then I just use it as a texture for OpenGL drawing.
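For the upload itself, one common route is glTexImage2D with the raw RGBA bytes. A rough sketch, reusing the `rgba`, `width` and `height` variables from the extraction sketch above (illustrative, not necessarily how the original poster did it):

GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
             0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
free(rgba); // OpenGL keeps its own copy of the pixel data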
When you put a UIImage into a UIImageView that is smaller than the view, and the content mode of the view is ScaleToFit, iOS enlarges the bitmap, as expected. What I never expected is that it blurs the edges of the pixels it has scaled up. I can see this might be a nice touch if you're looking at photographs, but in many other cases I want to see those nasty, hard, straight edges!
Does anyone know how you can configure a UIImage or UIImageView to enlarge with sharp pixel edges? That is: Let it look pixellated, not blurred.
Thanks!
If you want to scale up an image in a UIImageView with sharp edges, use the following property of CALayer:
imageview.layer.magnificationFilter = kCAFilterNearest;
It seems that magnificationFilter affects the interpolation method used for the contents of a UIView. I recommend reading the explanation of the property in CALayer.h:
/* The filter types to use when rendering the `contents' property of
* the layer. The minification filter is used when to reduce the size
* of image data, the magnification filter to increase the size of
* image data. Currently the allowed values are `nearest' and `linear'.
* Both properties default to `linear'. */
@property(copy) NSString *minificationFilter, *magnificationFilter;
I hope that my answer is useful for you.
Can I get the pixel values of an image and crop its black part? For instance, I have this image:

And I want something like this, without the black part.
Any possible solution on how to do this? Any libraries/code?
I am using Objective-C.
I have seen this solution to a similar question, but I don't understand it in detail. Please kindly provide detailed steps. Thanks.
Probably the fastest way of doing this is to iterate through the image, find the border pixels that are not black, and then redraw the image into a new context clipped to the rect those border pixels define.
By border pixels I mean the left-most, top-most, bottom-most and right-most ones. You can get the raw RGBA buffer from the UIImage, then iterate over the width and height and update the border values when appropriate. For instance, to get leftMostPixel you would first set it to some large value (or to the image width), and then in the iteration, if the pixel is not black and leftMostPixel > x, set leftMostPixel = x.
Now that you have the 4 bounding values you can create a frame from them. To redraw just the target rectangle you may use various context-based tools, but probably the easiest is to create a view with the size of the bounding rect, put an image view with the size of the original image on it, and take a screenshot of the view. The image view's origin must be minus the origin of the bounding rect, though (we push it off-screen a bit).
You may encounter some issues with the orientation of the image, though. If the image has any orientation other than up, the raw data will not respect that. So you need to take that into account when creating the bounding rect... or redraw the image first so it is oriented correctly... or you can even create a sub-buffer of the RGBA data, create a CGImage from that data, and apply the same orientation to the output UIImage as the input had.
So after getting the bounds there are quite a few possible procedures. Some are slower, some take more memory, some are simply hard to code and have edge cases.
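As one concrete variant, here is a rough sketch of the bounds scan plus a Core Graphics crop (using CGImageCreateWithImageInRect instead of the screenshot approach). It assumes `image` is the source UIImage, that `rgba`, `width`, `height` and `bytesPerRow` hold its raw RGBA data as described above, and that the orientation is already up; the near-black tolerance is an arbitrary choice:

size_t minX = width, minY = height, maxX = 0, maxY = 0;
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        const uint8_t *p = rgba + y * bytesPerRow + x * 4;
        BOOL isBlack = (p[0] < 16 && p[1] < 16 && p[2] < 16); // tolerate near-black
        if (!isBlack) {
            if (x < minX) minX = x;
            if (x > maxX) maxX = x;
            if (y < minY) minY = y;
            if (y > maxY) maxY = y;
        }
    }
}

// Crop to the bounding rect (in pixel coordinates of the CGImage).
CGRect cropRect = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:image.scale
                                 orientation:image.imageOrientation];
CGImageRelease(croppedRef);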
I've got a custom NSView which draws a chart in my app. I am generating a PDF which includes the image. In iOS I do this using code like this:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
[self drawRect:self.frame];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
On iOS the displays are retina, which means the image is very high resolution; however, I'm trying to do this in my Mac app now, and the quality of the image is poor because non-retina Macs will generate a non-high-res version of the image.
I would like to force my NSView to behave as if it were retina when I'm using it to generate an image. That way, when I put the image into my PDF, it'll be much higher resolution. Right now, it's very blurry and not attractive.
Even a Retina bitmap will still be blurry and unattractive when scaled up enough. Assuming the view draws its contents in drawRect:, rather than trying to render the view into a PDF at a fixed resolution, a better approach is to draw directly into a PDF graphics context. This will produce a nice scalable PDF. The drawing code will need to be factored so it can be used for both the view’s drawRect: and the PDF.
Also, the iOS documentation states you should never call drawRect: yourself. Call renderInContext: on the view's layer, or use the newer drawViewHierarchyInRect:afterScreenUpdates:.
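Applied to the snippet from the question, the iOS side might look like this (a sketch; `self` is assumed to be the chart view):

UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
// Render the layer instead of calling drawRect: directly.
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
// Or, on iOS 7 and later:
// [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();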
You can call -[NSView dataWithPDFInsideRect:] to get PDF data from the drawing in a view. For example:
NSData* data = [someView dataWithPDFInsideRect:someView.bounds];
[data writeToFile:@"/tmp/foo.pdf" atomically:YES];
Any vector drawing (e.g. text, Bezier paths, etc.) that your view and its subviews do will end up as scalable vector graphics in the PDF.
I'm working with UIImage and, like everyone else, have to deal with retina and non-retina display adaptability. As far as I know, retina displays require double the pixels.
I'm wondering: could I simply use one large image with the same width/height ratio, and just resize it down to fit every device?
For example, say I made an original image at 200×200 pixels. Now I want to use it in the application at 20×20 and at 80×80 (two situations). Then I have to make four copies: img2020.png, img2020@2x.png, img8080.png and img8080@2x.png.
So if I want to use it in three situations with different sizes, I have to store six copies. Can I just use UIImage's resize function instead? I've tried a bit but cannot figure out the quality and performance implications.
Any ideas? Thanks a lot :)
All the native APIs expect you to use image.png and image@2x.png, so it may sometimes be difficult to use just one image and scale it depending on retina/non-retina. Moreover, using retina graphics on non-retina devices leads to heavier use of those devices' resources, causing battery drain. And, of course, if you have many images, that will decrease the performance of your application. In other words, there are reasons to use a double set of images, and you are better off doing that instead of scaling one large image.
You don't need to make six copies. You can use the 200×200 pixel image and set the image view's contentMode property to aspect fit. Or you can use the function below to resize images at run time.
- (UIImage *)Resize_Image:(UIImage *)image requiredHeight:(float)requiredheight andWidth:(float)requiredwidth
{
    float actualHeight = image.size.height;
    float actualWidth = image.size.width;

    if (actualWidth * requiredheight < actualHeight * requiredwidth)
    {
        // The image is proportionally taller than the target:
        // height is the binding constraint.
        actualWidth = requiredheight * (actualWidth / actualHeight);
        actualHeight = requiredheight;
    }
    else
    {
        // The image is proportionally wider than the target:
        // width is the binding constraint (note the height/width ratio).
        actualHeight = requiredwidth * (actualHeight / actualWidth);
        actualWidth = requiredwidth;
    }

    CGRect rect = CGRectMake(0.0, 0.0, actualWidth, actualHeight);
    // Pass 0.0 as the scale so the context matches the device's screen
    // scale (avoids blurry results on retina displays).
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
    [image drawInRect:rect];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
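Usage would then look something like this (the image name is hypothetical):

UIImage *original = [UIImage imageNamed:@"img200200"];
UIImage *icon  = [self Resize_Image:original requiredHeight:20.0 andWidth:20.0];
UIImage *thumb = [self Resize_Image:original requiredHeight:80.0 andWidth:80.0];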
I made some comparisons before. Letting iOS handle the resizing gives lower quality, sometimes really unacceptable results.
I feel lazy sometimes; my approach is to run it with the retina version, and if it looks bad, I create a low-res version.
If you're writing an iPhone-only app, most iPhones on the market have retina displays, so I don't think you should worry about the non-retina version. Just my opinion though.
I am prerendering a composited image with a couple of different UIImageViews and UILabels to speed up scrolling in a large tableview. Unfortunately, the main UILabel looks a little blurry compared to other UILabels on the same view.
The black letters "PLoS ONE" are in a UILabel, and they look much blurrier than the words "Medical" or "Medicine". The logo "PLoS one" is probably similarly being blurred, but it's not as noticeable as the crisp text.
The entire magazine cover is a single UIImage assigned to a UIButton.
(source: karlbecker.com)
This is the code I'm using to draw the image. The magazineView is a rectangle that's 125 x 151 pixels.
I have tried different scaling qualities, but that has not changed anything. And it shouldn't, since no scaling should be happening at all: the UIButton I'm assigning this image to is exactly the same size as the magazineView.
UIGraphicsBeginImageContextWithOptions(magazineView.bounds.size, NO, 0.0);
[magazineView.layer renderInContext:UIGraphicsGetCurrentContext()];
[coverImage release];
coverImage = UIGraphicsGetImageFromCurrentImageContext();
[coverImage retain];
UIGraphicsEndImageContext();
Any ideas why it's blurry?
When I begin an image context and render into it right away, is the rendering happening on an even pixel, or do I need to manually set where that render is occurring?
Make sure that your label coordinates are integer values. If they are not whole numbers they will appear blurry.
I think you need to use CGRectIntegral. For more information, please see: What is the usage of CGRectIntegral? and the CGRectIntegral reference.
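For example, a one-line fix is to snap the label's frame to whole-pixel coordinates before rendering (`label` here is a stand-in for your own view):

// CGRectIntegral floors the origin and grows the size so the result
// covers the original rect with integral values, e.g.
// {{10.25, 20.7}, {100.4, 30.0}} becomes {{10.0, 20.0}, {101.0, 31.0}}.
label.frame = CGRectIntegral(label.frame);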
I came across the same problem today: my content got pixelated when producing an image from UILabel text.
We use UIGraphicsBeginImageContextWithOptions() to configure the drawing environment for rendering into a bitmap. It accepts three parameters:
size: The size of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
opaque: A Boolean flag indicating whether the bitmap is opaque. If the opaque parameter is YES, the alpha channel is ignored and the bitmap is treated as fully opaque.
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So we should use a proper scale factor with respect to the device display (1x, 2x, 3x) to fix this issue.
Swift 5 version:
UIGraphicsBeginImageContextWithOptions(frame.size, true, UIScreen.main.scale)
if let currentContext = UIGraphicsGetCurrentContext() {
    nameLabel.layer.render(in: currentContext)
    let nameImage = UIGraphicsGetImageFromCurrentImageContext()
    // Balance the Begin call so the context doesn't leak.
    UIGraphicsEndImageContext()
    return nameImage
}