iOS UIView render to texture -- preserve quality

I've implemented a program that renders a UILabel to a texture and then maps that texture onto a quad in OpenGL. To compare the two visually, I also rendered the same UILabel normally by adding it to the UIView hierarchy.
I immediately noticed that the texture-rendered UILabel is lower quality than the normally rendered one. I'm having trouble figuring out why, and would appreciate any advice.
I'm using the render-to-texture technique found here: Render contents of UIView as an OpenGL texture
Here is some of the relevant code:
// setup test UIView (label for now) to be rendered to texture
_testLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 100, 50)];
[_testLabel setText:@"yo"];
_testLabel.textAlignment = NSTextAlignmentCenter;
[_testLabel setBackgroundColor:RGB(0, 255, 0)];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
_testLabelContext = CGBitmapContextCreate(NULL, _testLabel.bounds.size.width, _testLabel.bounds.size.height, 8, 4*_testLabel.bounds.size.width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
// setup texture handle for view to be rendered to texture
glGenTextures(1, &_testLabelTextureHandle);
glBindTexture(GL_TEXTURE_2D, _testLabelTextureHandle);
// these must be defined for non-mipmapped NPOT textures (double check)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _testLabel.bounds.size.width, _testLabel.bounds.size.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
Then, in my render function:
GLubyte *labelPixelData = (GLubyte*)CGBitmapContextGetData(_testLabelContext);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _testLabel.bounds.size.width, _testLabel.bounds.size.height, GL_RGBA, GL_UNSIGNED_BYTE, labelPixelData);

Don't you need to multiply the render width and height by the @2x/@3x scale factor? OpenGL works in actual pixels.

The answer from @soprof is correct: you do need to scale the content by the main screen's scale factor whenever you create a view screenshot, and you need to use the same scaled dimensions later when uploading the data to the texture. Simply creating a larger context is not enough.
This actually goes for every bridge between OpenGL and a UIView. You need to understand that the coordinate system behaves as if 1x were used, which is very convenient for view layout. Apple uses the contentScaleFactor property on UIView to scale the internal content, but that scaling is not visible in the frame or bounds properties. So, for instance, to get an image from a UIView you would need to do something like this:
+ (UIImage *)imageFromView:(UIView *)view
{
    CGRect rect = [view bounds];
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, view.contentScaleFactor);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Notice the context is created with options, using the view's scale. But the problem does not stop here: the actual content scale factor is only set when the view is added to the view hierarchy, which means simply initializing a label is not enough for the scale factor to be correct (it will be 1.0f by default). So you can either set the scale manually on the view, assign it from the screen using [UIScreen mainScreen].scale, or insert that value directly when creating the context.
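For example, a minimal sketch of forcing the native scale onto a label created in code, before it is ever added to a window:

// Sketch: a label created in code reports contentScaleFactor == 1.0
// until it is added to a window, so set the screen's scale explicitly.
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 100, 50)];
label.contentScaleFactor = [UIScreen mainScreen].scale;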
Anyway, if you modify your code to include the scale, you will need the scaled frame at some point, which is always:
rect.size.width *= view.contentScaleFactor;
rect.size.height *= view.contentScaleFactor;
You need to use this scaled rect for all subsequent code, but do not change the frame of the view you are trying to draw, since the fonts will not scale with it. What might work, though, is applying a scale transform to the input view itself (if I remember correctly the frame scales with it, so even your code should then work naturally).
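Applied to the question's setup, that might look something like this (a sketch reusing the asker's names and assuming the scale comes from the main screen):

// Sketch: create the backing context and the GL texture at pixel size,
// not point size, and let the CTM scale the label's drawing up to match.
CGFloat scale = [UIScreen mainScreen].scale;
size_t texWidth = (size_t)(_testLabel.bounds.size.width * scale);
size_t texHeight = (size_t)(_testLabel.bounds.size.height * scale);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
_testLabelContext = CGBitmapContextCreate(NULL, texWidth, texHeight, 8, 4 * texWidth, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextScaleCTM(_testLabelContext, scale, scale); // the label still draws in points

// The texture must be allocated (and later updated) at the same pixel size.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)texWidth, (GLsizei)texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);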
As mentioned, this applies to every bridge between the two, so if you are creating a render buffer from a layer you will also need to apply the scale there. renderbufferStorage:GL_RENDERBUFFER fromDrawable: takes the UIView's layer, but you must explicitly set the scale on the layer, using layer.contentsScale = [UIScreen mainScreen].scale, to get a render buffer of the correct size. If you are using this and you failed to set the scale, the whole scene will render at 1x quality, and even fixing the label texture will not produce high enough quality.
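A sketch of the render buffer case (assuming an EAGLContext named context, a GLuint colorRenderbuffer, and a CAEAGLLayer-backed view):

// Sketch: without setting contentsScale, the render buffer comes back at 1x.
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.contentsScale = [UIScreen mainScreen].scale;

glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];

// These now report the size in pixels, e.g. 2x the point size on retina.
GLint backingWidth = 0, backingHeight = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);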

Related

Loading a PNG file to OpenGL ES

I'm trying to apply a certain texture to an object in OpenGL ES. This is how I load it:
- (GLuint)setupTexture:(NSString *)fileName {
    // 1
    CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
    if (!spriteImage) {
        NSLog(@"Failed to load image %@", fileName);
        exit(1);
    }
    size_t width = CGImageGetWidth(spriteImage);
    size_t height = CGImageGetHeight(spriteImage);
    GLubyte *spriteData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
    CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4,
        CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
    CGContextRelease(spriteContext);
    ...
}
Now, when I use the tile_floor.png file, the image is loaded and drawn on screen.
But when I use the wood.png file, all I get is a black object.
Why is it different? Is there any importance to the file dimensions (width or height)? I did not hard-code any dimensions or parameters, so all images should be loadable.
It's important to note that I don't get any errors in the console, and the program runs.
While you can use NPOT (non-power-of-two) images in many OpenGL ES implementations (all iOS devices support the extension GL_APPLE_texture_2D_limited_npot), you have to use the right edge mode:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
This makes the texture clamp to the edges of the 0-1 texture coordinate range. Your wood texture does not look like it needs to repeat anyway.
OK, I got it: OpenGL ES requires power-of-two image sizes (both width and height) for the default wrap modes.
Resizing the photo made it work.
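For what it's worth, if you take the power-of-two route, rounding a dimension up can be done with a small helper like this (a sketch using standard bit-twiddling for 32-bit values):

// Round v up to the next power of two, e.g. 300 -> 512. Assumes v > 0.
static uint32_t NextPowerOfTwo(uint32_t v) {
    v--;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}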

GLKView snapshot method: nil return value, getting an error

I can't figure out how to use the GLKView snapshot method.
I'm using a GLKView to render some OpenGL stuff. It all works; seems like I have it all set up correctly.
But when I try to take a snapshot, it fails: I get a nil return value and the following log message:
Error: CGImageCreate: invalid image size: 0 x 0.
Seems like this would mean the view itself is invalid for some reason, but it's not; everything works aside from this.
I've looked at a few code samples, and I'm not doing anything different.
So... anyone seen this before? Ideas?
I never figured out the problem above; however, I found an excellent workaround: this chunk of code reads the render buffer directly and saves it to a UIImage. Problem solved!
- (UIImage *)snapshotRenderBuffer {
    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer that is
    // already bound at this point, this call is redundant, but it is needed
    // if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the
    // renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    NSInteger dataLength = backingWidth * backingHeight * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to
    // ignore the alpha channel; otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(
        backingWidth, backingHeight, 8, 32, backingWidth * 4, colorspace,
        kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
        ref, NULL, true, kCGRenderingIntentDefault);

    // (sayeth abd)
    // This creates a context with the device pixel dimensions -- not points.
    // To be compatible with all devices, you're meant to keep everything as
    // points plus a scale factor; but this gives us the image at device
    // resolution for purposes of saving. So keep everything in device
    // resolution, and worry about it later...
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(backingWidth, backingHeight), NO, 0.0f);
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    // Note: glReadPixels returns rows bottom-up while CGImage rows are
    // top-down, so the result is vertically flipped relative to the GL scene;
    // flip the context first if that matters for your use.
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, backingWidth, backingHeight), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up (the original snippet leaked the image, provider, and colorspace)
    CGImageRelease(iref);
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(ref);
    free(data);

    return image;
}
Maybe this doesn't apply in your case, but the docs for the GLKView snapshot method say:
Never call this method inside your drawing function.
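In other words, take the snapshot from ordinary control flow instead of from glkView:drawInRect:. A minimal sketch (assumes the view controller's view is the GLKView; names are illustrative):

// Hypothetical action handler: the snapshot property is safe to read here,
// outside of the draw callback.
- (IBAction)saveSnapshot:(id)sender {
    GLKView *glkView = (GLKView *)self.view;
    UIImage *image = glkView.snapshot;
    if (image) {
        UIImageWriteToSavedPhotosAlbum(image, nil, NULL, NULL);
    }
}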

Can you load only a smaller rectangular portion of a larger on-disk image into memory?

On iOS and most mobile devices there is a restriction on the size of the image that you can load, due to memory constraints. Is it possible to have a large image on disk (say 5,000 x 5,000 pixels) but only read a smaller rectangle within that image (say 100x100) into memory for display?
In other words, do you need to load the entire image into memory if you just want to see a small subsection of it? If it's possible to load just the smaller portion, how can we do this?
This way, one could save a lot of space, much as sprite sheets do for repetitive content. Note that the overall goal is to minimize file size, so the large image should be compressed with JPEG, PNG, or some other kind of compression. I suspect video formats work this way, since you never load an entire video into memory.
Although I have not used these techniques myself, you might find the following Apple sample useful:
LargeImageDownsizing sample
You could do something with mapped NSData like this:
UIImage *pixelDataForRect(NSString *fileName, const CGRect pixelRect)
{
    // size of the sub-rectangle to extract
    uint32_t width = pixelRect.size.width;
    uint32_t height = pixelRect.size.height;

    // create the context
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef bitMapContext = UIGraphicsGetCurrentContext();
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, height);
    CGContextConcatCTM(bitMapContext, flipVertical);

    // open the image through a memory-mapped data provider (assume PNG compression)
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(
        (__bridge CFDataRef)[NSData dataWithContentsOfMappedFile:fileName]);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, YES, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);

    uint32_t imageWidth = CGImageGetWidth(image);
    uint32_t imageHeight = CGImageGetHeight(image);

    // offset the draw so only the requested sub-rectangle lands in the context
    CGRect drawRect = CGRectMake(-pixelRect.origin.x,
                                 -((imageHeight - pixelRect.origin.y) - height),
                                 imageWidth, imageHeight);
    CGContextDrawImage(bitMapContext, drawRect, image);
    CGImageRelease(image);

    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return retImage;
}
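Usage might look like this (hypothetical file name; pixelRect is given in the image's pixel coordinates):

NSString *path = [[NSBundle mainBundle] pathForResource:@"bigMap" ofType:@"png"];
UIImage *tile = pixelDataForRect(path, CGRectMake(2000, 2000, 100, 100));

Note that Core Graphics will still likely decode the entire PNG in order to draw it; the memory-mapped data only avoids keeping the compressed file in memory, which is why the tiling approach below scales better.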
Your best bet is using UIScrollView with CATiledLayer.
Check out the "Designing Apps with Scroll Views" presentation from WWDC 2010 for a description of how to do this:
https://developer.apple.com/videos/wwdc/2010/
The idea is to chop your large image into tiles, then use a UIScrollView to give the user a scrollable view of the image, loading only the sections of the image that are needed based on the scroll position. This is accomplished using CATiledLayer.
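A minimal sketch of such a tiled view, assuming the large image has been pre-chopped into 256x256 tile files (the tile naming scheme here is purely illustrative):

#import <QuartzCore/QuartzCore.h>

@interface TiledImageView : UIView
@end

@implementation TiledImageView

// Backing this view with a CATiledLayer makes drawRect: get called
// per tile, on demand, as the scroll view exposes new regions.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *layer = (CATiledLayer *)self.layer;
        layer.tileSize = CGSizeMake(256, 256);
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // rect covers one tile; load and draw only that piece.
    int col = (int)(rect.origin.x / rect.size.width);
    int row = (int)(rect.origin.y / rect.size.height);
    UIImage *tile = [UIImage imageNamed:
        [NSString stringWithFormat:@"tile_%d_%d.png", col, row]]; // hypothetical naming
    [tile drawInRect:rect];
}

@end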

Convert PDF to UIImageView

I've found some code that gives me a UIImage out of a PDF file. It works, but I have two questions:
Is there a way to achieve better quality in the UIImage? (See the screenshot below.)
I only see the first page in my UIImageView. Do I have to embed the file in a UIScrollView to show the whole document?
Or is it better to render just one page and use buttons to navigate through the pages?
P.S. I know that UIWebView can display PDF pages with some functionality, but I need it as a UIImage or at least in a UIView.
Bad quality image (screenshot omitted):
Code:
- (UIImage *)image {
    UIGraphicsBeginImageContext(CGSizeMake(280, 320));
    CGContextRef context = UIGraphicsGetCurrentContext();

    CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("ls.pdf"), NULL, NULL);
    CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL(pdfURL);
    CFRelease(pdfURL); // we own this URL and must release it

    // flip the context, since PDF coordinates have their origin at bottom-left
    CGContextTranslateCTM(context, 0.0, 320);
    CGContextScaleCTM(context, 1.0, -1.0);

    CGPDFPageRef page = CGPDFDocumentGetPage(pdf, 4);
    CGContextSaveGState(context);
    CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox,
                                                                  CGRectMake(0, 0, 280, 320), 0, true);
    CGContextConcatCTM(context, pdfTransform);
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
    CGPDFDocumentRelease(pdf); // missing in the original, which leaked the document

    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
I know I'm a little late here, but I hope I can help someone else looking for an answer.
As to the questions asked:
I'm afraid the only way to achieve better image quality is to render a bigger image and let the UIImageView resize it for you. I don't think you can set the resolution, but using a bigger image may be a good choice. It won't take too long for the page to render, and the image will have better quality. PDF viewers render on demand at the current zoom level; that's why they seem to have "better quality".
As to rendering all the pages: you can get the number of pages in the document by calling CGPDFDocumentGetNumberOfPages(pdf), and using a simple for loop you can concatenate all the generated page images into one single image. For displaying it, use a UIScrollView.
In my opinion this approach is better than the one above, but you should try to optimize it, for example by always rendering only the current, previous, and next pages. For nice scrolling transitions, consider a paged horizontal UIScrollView; a sketch of the page loop follows.
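A sketch of that page loop (imageForPDFPage:size: is a hypothetical helper wrapping the render code from the question, parameterized by page and output size):

NSURL *url = [[NSBundle mainBundle] URLForResource:@"ls" withExtension:@"pdf"];
CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url);
size_t pageCount = CGPDFDocumentGetNumberOfPages(pdf);
NSMutableArray *pageImages = [NSMutableArray arrayWithCapacity:pageCount];
for (size_t i = 1; i <= pageCount; i++) {       // PDF pages are 1-indexed
    CGPDFPageRef page = CGPDFDocumentGetPage(pdf, i);
    // imageForPDFPage:size: is a hypothetical helper; see the question's code.
    UIImage *img = [self imageForPDFPage:page size:CGSizeMake(560, 640)];
    if (img) [pageImages addObject:img];
}
CGPDFDocumentRelease(pdf);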
For more generic rendering code, I always do the rotation like this:
int rotation = CGPDFPageGetRotationAngle(page);
CGContextTranslateCTM(context, 0, imageSize.height);      // move up by the height
CGContextScaleCTM(context, 1.0, -1.0);                    // flip the context vertically
CGContextRotateCTM(context, -rotation * M_PI / 180);      // apply the PDF's rotation
CGRect placement = CGContextGetClipBoundingBox(context);  // get the flipped placement
CGContextTranslateCTM(context, placement.origin.x, placement.origin.y); // move to the correct place

// do all your drawing
CGContextDrawPDFPage(context, page);

// undo the rotations/scaling/translations
CGContextTranslateCTM(context, -placement.origin.x, -placement.origin.y);
CGContextRotateCTM(context, rotation * M_PI / 180);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextTranslateCTM(context, 0, -imageSize.height);
Steipete already mentioned setting the white background:
CGContextSetRGBFillColor(context, 1, 1, 1, 1);
CGContextFillRect(context, CGRectMake(0, 0, imageSize.width, imageSize.height));
So the last thing to keep in mind: when exporting an image, set the quality to the maximum. For example:
UIImageJPEGRepresentation(image, 1);
What are you doing with the CGContextTranslateCTM(context, 0.0, 320); call?
You should extract the proper metrics from the PDF, with code like this:
cropBox = CGPDFPageGetBoxRect(page, kCGPDFCropBox);
rotate = CGPDFPageGetRotationAngle(page);
Also, as you can see, the PDF may carry rotation info, so you need to use CGContextTranslateCTM/CGContextRotateCTM/CGContextScaleCTM depending on the angle.
You also might want to clip any content that is outside of the CropBox area, as PDFs can contain various viewports that you usually don't want to display (e.g. printer marks for seamless printing) -> use CGContextClip.
Next, you're forgetting that the PDF reference defines a white background color. There are a lot of documents out there that don't define any background color at all; you'll get weird results if you don't draw a white background on your own --> CGContextSetRGBFillColor & CGContextFillRect.
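Putting the clipping and background points together, a sketch (assuming context and page are set up as in the question):

// Clip to the crop box and paint the implicit white page background
// before drawing the page content.
CGRect cropBox = CGPDFPageGetBoxRect(page, kCGPDFCropBox);
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
CGContextFillRect(context, cropBox);
CGContextClipToRect(context, cropBox); // rect convenience for CGContextClip
CGContextDrawPDFPage(context, page);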

Drawing using renderInContext in a way that is analogous to UIView rendering

I have some UIViews that have different centers and transforms applied to them.
I want to reconstruct these views onto a bitmap context. (Basically, I want to take what the user has created on screen and render it into a movie file.)
I am able to get the views rendered into the context looking almost correct, but there seems to be an offset. I think the problem is that the UIImageView's center property is not reflected in the transforms I am applying, but I am unsure how to account for it.
Note that the UIViews are originally positioned/transformed relative to a 1024x768 iPad screen, whereas the video buffer is 352x288 pixels.
If I just add CGContextTranslateCTM(newContext, img.center.x, img.center.y), then everything looks completely off. Any ideas how to properly transform the view to the correct center?
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                colorSpace,
                                                kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextRotateCTM(newContext, M_PI_2);
CGContextScaleCTM(newContext, 1, -1);
for (int i = 0; i < [self.renderObjects count]; i++) {
    UIImageView *img = [self.renderObjects objectAtIndex:i];
    [img setNeedsDisplay];
    [img setBackgroundColor:[UIColor colorWithRed:1 green:1 blue:1 alpha:0.2]];
    CGContextSaveGState(newContext);
    CGContextScaleCTM(newContext, 0.375, 0.34);
    CGContextConcatCTM(newContext, img.transform);
    [img.layer renderInContext:newContext];
    CGContextRestoreGState(newContext);
}
Here is the code that made it work for me. Note that the 1024 and 768 appear because the UIImageViews were positioned in the iPad coordinate system. The rotations are still inverted, though, so if someone can find a general solution for that it would be great.
UIImageView *curr = ...; // the image view to render
[curr setNeedsDisplay];
CGContextSaveGState(newContext);
CGContextScaleCTM(newContext, height / 768.0, width / 1024.0);
CGContextTranslateCTM(newContext, 768 - curr.center.x, curr.center.y);
CGContextConcatCTM(newContext, curr.transform);
// shift so the transform pivots around the view's center, as UIKit does
CGContextTranslateCTM(newContext, -curr.bounds.size.width / 2, -curr.bounds.size.height / 2);
[curr.layer renderInContext:newContext];
CGContextRestoreGState(newContext);
