Overlay text as a texture in iOS

I tried to overlay text as a texture using the following code snippet:
UILabel *myLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 30)];
myLabel.text = @"Hello world!";
myLabel.font = [UIFont fontWithName:@"Helvetica" size:18];
myLabel.textColor = [UIColor colorWithRed:0/255.0 green:0/255.0 blue:0/255.0 alpha:1];
myLabel.backgroundColor = [UIColor clearColor];
UIGraphicsBeginImageContext(myLabel.bounds.size);
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, 30);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
[myLabel.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *layerImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But the text is not visible. To display the image as a texture using OpenGL, I tried the following code snippet:
/// here image is the layerImage created in above piece of code
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( context, 0, height - height );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
But the image is not getting displayed. Is there any other approach to display text as a texture using OpenGL in iOS?
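For reference, two things stand out in the snippets above: no texture object is ever generated and bound before glTexImage2D, and the `height - height` translation is a no-op, so the bitmap is never flipped. A minimal sketch of the whole path, assuming an OpenGL ES 2.0 context is already current (names here are illustrative, not from the original code):
// Create and bind a texture object first; glTexImage2D uploads into
// whatever texture is currently bound.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Non-power-of-two textures on ES 2.0 require clamped wrapping.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

CGImageRef cgImage = layerImage.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
void *pixels = calloc(width * height, 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, 4 * width,
    colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
// Flip the y-axis so the text is not uploaded upside down.
CGContextTranslateCTM(ctx, 0, height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
CGContextRelease(ctx);
free(pixels);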

Related

iOS - Text not looking good with OpenGL

I'm drawing text with OpenGL ES 2.0, but the result doesn't look good (especially the bottom of "t"):
(It looks kinda pixelated and blurred.)
I don't know exactly why, since I load the font at high resolution (200px or 400px, for example).
I draw each char into an image that composes the texture atlas, so all the chars can be displayed at any time without loading. I scale the texture with OpenGL (MVP matrix), but it shouldn't look this bad anyway.
Here's a UIImage of the char "t" that I retrieve with the method below, before it's processed with OpenGL:
(You can see it looks way better without OpenGL.)
Here's the code I use to create an image from a char:
- (id)initWithString:(NSString *)string dimensions:(CGSize)dimensions alignment:(NSTextAlignment)alignment fontName:(NSString *)name fontSize:(CGFloat)size {
NSUInteger width, height, i;
CGContextRef context;
void *data;
CGColorSpaceRef colorSpace;
UIFont *font;
font = [UIFont fontWithName:name size:size];
width = (NSUInteger)dimensions.width;
height = (NSUInteger)dimensions.height;
while(dimensions.width > kMaxTextureSize && size > 0) {
font = [UIFont fontWithName:name size: --size];
dimensions = [string sizeWithAttributes:@{NSFontAttributeName: font}];
width = (GLuint)NextPowerOfTwo((int)dimensions.width);
height = (GLuint)NextPowerOfTwo((int)dimensions.height);
}
colorSpace = CGColorSpaceCreateDeviceRGB();
data = calloc(1, width * height * 4);
context = CGBitmapContextCreate(data,
width, height,
8,
4 * width,
colorSpace,
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetRGBFillColor(context, 0.f, 0.f, 0.f, 1);
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0f, -1.0f); //NOTE: NSString draws in UIKit referential i.e. renders upside-down compared to CGBitmapContext referential
UIGraphicsPushContext(context);
NSMutableParagraphStyle *style = [[NSParagraphStyle defaultParagraphStyle] mutableCopy];
[style setLineBreakMode:NSLineBreakByWordWrapping];
UIColor *whiteColor = [UIColor whiteColor];
NSDictionary *attrsDictionary = @{
NSFontAttributeName: font,
NSBaselineOffsetAttributeName: @1.0F,
NSParagraphStyleAttributeName: style,
NSForegroundColorAttributeName: whiteColor
};
CGRect destRect = CGRectMake(0, 0, dimensions.width, dimensions.height);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetAllowsAntialiasing(context, YES);
CGContextSetShouldAntialias(context, YES);
CGContextSetShouldSmoothFonts(context, YES);
[string drawInRect:destRect
withAttributes:attrsDictionary];
CGImageRef cgImageFinal = CGBitmapContextCreateImage(context);
CGImageRelease(cgImageFinal); // the image is unused here; release it to avoid a leak
UIGraphicsPopContext();
self = [self initWithData:data pixelFormat:GL_RGBA
pixelsWide:width pixelsHigh:height];
CGContextRelease(context);
free(data);
return self;
}
And here's the code to retrieve an OpenGL texture from it:
- (id)initWithData:(const void *)data
pixelFormat:(GLenum)pixelFormat
pixelsWide:(NSUInteger)width
pixelsHigh:(NSUInteger)height {
if ((self = [super init])) {
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
switch (pixelFormat) {
case GL_RGBA:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
break;
case GL_RGB:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, data);
break;
case GL_ALPHA:
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, data);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@""];
}
self.width = width;
self.height = height;
}
return self;
}
Any help would be much appreciated!
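One likely contributor to the blur is minification without mipmaps: the texture code above uploads a single mip level and uses GL_LINEAR for GL_TEXTURE_MIN_FILTER, so the atlas aliases and smears when the MVP matrix scales it down. A small sketch of a mipmapped setup, assuming the texture is bound and has power-of-two dimensions (which NextPowerOfTwo already guarantees here):
// Use a mipmapped minification filter and build the mip chain
// after uploading level 0 with glTexImage2D.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D); // available in OpenGL ES 2.0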

How to write text on an image in iOS?

In my app I'm trying to add text on an image. I've done it using a post on Stack Overflow, like the following.
+(UIImage *)addText:(UIImage *)img text:(NSString *)text1{
int w = img.size.width;
int h = img.size.height;
//lon = h - lon;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1);
char* text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding];// "05/05/09";
CGContextSelectFont(context, "Arial", 18, kCGEncodingMacRoman);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetRGBFillColor(context, 255, 255, 255, 1);
//rotate text
CGContextSetTextMatrix(context, CGAffineTransformMakeRotation( -M_PI/4 ));
CGContextShowTextAtPoint(context, 4, 52, text, strlen(text));
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *result = [UIImage imageWithCGImage:imageMasked];
CGImageRelease(imageMasked); // release the CGImage to avoid a leak
return result;
}
When I write text using this code, the original image's colour changes, like the following:
Original Image
Output with Text
Can anyone explain why this is happening, and how to fix it?
Thanks in advance!
We need to create an image context, do the drawing, and get the final image from that context. You can try this method. Hope this helps you.
+ (UIImage *) addText:(UIImage *)img text:(NSString *)text
{
CGRect rect = [[UIScreen mainScreen] bounds];
// create a context (here sized to the screen; you could also use img.size)
UIGraphicsBeginImageContext(rect.size);
// draw image
[img drawInRect:rect];
UIFont* font = [UIFont systemFontOfSize:14.0];
/// Make a copy of the default paragraph style
NSMutableParagraphStyle* paragraphStyle = [[NSParagraphStyle defaultParagraphStyle] mutableCopy];
/// Set line break mode
paragraphStyle.lineBreakMode = NSLineBreakByTruncatingTail;
/// Set text alignment
paragraphStyle.alignment = NSTextAlignmentCenter;
NSDictionary *attributes = @{ NSFontAttributeName: font,
NSParagraphStyleAttributeName: paragraphStyle };
CGRect textRect = CGRectMake(20, 160.0, 280.0, 44);
/// draw text
[text drawInRect:textRect withAttributes:attributes];
// get as image
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
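For reference, a minimal call site for this method might look like the following (the class name, asset name, and image view are hypothetical, standing in for wherever the method lives):
UIImage *photo = [UIImage imageNamed:@"photo"]; // hypothetical asset name
UIImage *labelled = [ImageUtils addText:photo text:@"Hello world"]; // ImageUtils is a stand-in
self.imageView.image = labelled; // assumes an existing UIImageView outlet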
EDIT: This can only be run on the main thread! So it's pretty much useless.
+(UIImage *)imageFromView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Here's a new one using Core Graphics that can be used on any thread
+ (UIImage *)imageFromView:(UIView *)view
{
size_t width = view.bounds.size.width*2;
size_t height = view.bounds.size.height*2;
unsigned char *imageBuffer = (unsigned char *)malloc(width*height*4); // 4 bytes per pixel (RGBA)
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageBuffer, width, height, 8, width*4, colourSpace,kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colourSpace);
CGContextTranslateCTM(imageContext, 0.0, height);
CGContextScaleCTM(imageContext, 2.0, -2.0);
[view.layer renderInContext:imageContext];
CGImageRef outputImage = CGBitmapContextCreateImage(imageContext);
UIImage *image = [[UIImage alloc] initWithCGImage:outputImage];
CGImageRelease(outputImage);
CGContextRelease(imageContext);
free(imageBuffer);
return image;
}
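Note that the 2.0 here hard-codes the Retina scale. A variant that derives it from the screen instead (a sketch, not part of the original answer):
CGFloat scale = [UIScreen mainScreen].scale; // 1.0 or 2.0 depending on the device
size_t width = (size_t)(view.bounds.size.width * scale);
size_t height = (size_t)(view.bounds.size.height * scale);
// ...and later scale the CTM by the same factor instead of the literal 2.0:
CGContextScaleCTM(imageContext, scale, -scale);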
I am using this and it works fine for me:
- (UIImage*) drawText:(NSString*) text
inImage:(UIImage*) image
atPoint:(CGPoint) point
{
UIFont *font = [UIFont fontWithName:FONT_BOLD size:20];
UIGraphicsBeginImageContext(image.size);
[image drawInRect:CGRectMake(0,0,image.size.width,image.size.height)];
CGRect rect = CGRectMake(point.x, point.y, image.size.width, image.size.height);
[UIColorFromRGB(0x515151) set];
[text drawInRect:CGRectIntegral(rect) withFont:font];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}

drawAtPoint not working after converting from CGContextShowTextAtPoint

As CGContextShowTextAtPoint is deprecated in iOS 7, I would like to change CGContextShowTextAtPoint to drawAtPoint, but the text is not output.
//Add text to UIImage
-(UIImage *)addText:(UIImage *)img text:(NSString *)text1{
int w = 32;
int h = 32;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
//char* text= (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding];
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
// CGContextSelectFont(context, "Montserrat-Regular",12, kCGEncodingMacRoman);
// CGContextShowTextAtPoint(context,9,12,text, strlen(text));
//
[text1 drawAtPoint:CGPointMake(0, 0) withFont:[UIFont fontWithName:@"Montserrat-Regular" size:12]];
CGImageRef imgCombined = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *retImage = [UIImage imageWithCGImage:imgCombined];
CGImageRelease(imgCombined);
return retImage;
}
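The likely reason no text appears: drawAtPoint:withFont: renders into UIKit's current graphics context, and a context made with CGBitmapContextCreate is not current (and has a bottom-left origin). Pushing and flipping it first should fix that (a sketch under that assumption, using the method's own w/h variables):
// Flip the CTM so UIKit's top-left origin maps onto the
// bottom-left-origin bitmap context, then make it current.
CGContextTranslateCTM(context, 0, h); // h = bitmap height (32 here)
CGContextScaleCTM(context, 1.0, -1.0);
UIGraphicsPushContext(context);
[text1 drawAtPoint:CGPointMake(0, 0)
          withFont:[UIFont fontWithName:@"Montserrat-Regular" size:12]];
UIGraphicsPopContext();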
OK, got it to work:
//Add text to UIImage
-(UIImage *)addText:(UIImage *)img text:(NSString *)text1
{
UIGraphicsBeginImageContext(CGSizeMake(36, 36));
CGRect aRectangle = CGRectMake(0,0, 36.0f,36.0f);
[img drawInRect:aRectangle];
[text1 drawInRect : CGRectMake(0, 10, 36, 36)
withFont : [UIFont fontWithName:@"Montserrat-Regular" size:12]
lineBreakMode : NSLineBreakByTruncatingTail
alignment : NSTextAlignmentCenter ];
UIImage *theImage=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return theImage;
}
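Note that drawInRect:withFont:lineBreakMode:alignment: was itself deprecated in iOS 7; the attributed-string replacement looks roughly like this:
NSMutableParagraphStyle *style = [[NSParagraphStyle defaultParagraphStyle] mutableCopy];
style.lineBreakMode = NSLineBreakByTruncatingTail;
style.alignment = NSTextAlignmentCenter;
[text1 drawInRect:CGRectMake(0, 10, 36, 36)
   withAttributes:@{ NSFontAttributeName: [UIFont fontWithName:@"Montserrat-Regular" size:12],
                     NSParagraphStyleAttributeName: style }];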

glReadPixels only saves 1/4 screen size snapshots

I'm working on an Augmented Reality app for a client. The OpenGL and EAGL part has been done in Unity 3D and embedded as a view in my application.
What I need now is a button that snaps a screenshot of the OpenGL content, which is the backmost view.
I tried writing it myself, but when I tap a button with the assigned IBAction, it only saves 1/4 of the screen (the lower-left corner), though it does save it to the camera roll.
So basically, how can I make it save the entire screen size instead of just one fourth?
Here's my code for the method:
-(IBAction)tagBillede:(id)sender
{
UIImage *outputImage = nil;
CGRect s = CGRectMake(0, 0, 320, 480);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
if (!buffer) goto error;
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
if (!ref) goto error;
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
if (!iref) goto error;
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
if (!pixels) goto error;
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
if (!context) goto error;
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
if (!outputRef) goto error;
outputImage = [UIImage imageWithCGImage: outputRef];
if (!outputImage) goto error;
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
I suspect you are using a device with a Retina display, which is 640x960. You need to take the screen scale into account; it is 1.0 on non-Retina displays and 2.0 on Retina displays. Try initializing s like this:
CGFloat scale = UIScreen.mainScreen.scale;
CGRect s = CGRectMake(0, 0, 320 * scale, 480 * scale);
If the device is a retina device, you need to scale the opengl stuff yourself. You're actually specifying that you want the lower-left corner by only capturing half the width and half the height.
You need to double both your width and height for the retina screens, but realistically, you should be multiplying it by the screen's scale:
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
Thought I'd chime in and, at the same time, throw some gratitude :)
I got it working like a charm now, here's the cleaned up code:
UIImage *outputImage = nil;
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
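Rather than hard-coding 320x480 points times the scale, you could also ask GL for the framebuffer dimensions, which stays correct on any device (a sketch):
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport); // x, y, width, height in pixels
CGRect s = CGRectMake(0, 0, viewport[2], viewport[3]);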

Texture quality poor when zoomed out

I'm porting my app to iOS, and while it has the same GL calls regarding texture binding, the quality gets very poor when zoomed out (mipmaps are enabled). Is there something wrong with this texture loading code?
EDIT: I'm starting to think this is actually a Retina display issue.
NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"jpg"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
NSLog(@"Do real error checking here");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
// Flip the Y-axis
CGContextTranslateCTM (context, 0, height);
CGContextScaleCTM (context, 1.0, -1.0);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
It was a Retina display issue. After some googling I came to this article, which explains that the default scale factor is 1, which is why the textures looked bad on my device:
http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/SupportingHiResScreens/SupportingHiResScreens.html
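For completeness, the fix described there amounts to setting the scale factor on the GL view before allocating the renderbuffer storage; a sketch, assuming a CAEAGLLayer-backed UIView subclass:
// In the EAGL view's initializer, before renderbufferStorage:fromDrawable:
if ([self respondsToSelector:@selector(setContentScaleFactor:)]) {
    self.contentScaleFactor = [UIScreen mainScreen].scale; // 2.0 on Retina
}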
