Dynamic image rendering on iOS - iPad

I have a programming task for an application I am writing for the iPad and the documentation is not clear about how to go about doing this. I am hoping for some good advice on approaching this problem.
Basically, I have a memory buffer that stores raw RGB for a 256x192 pixel image. This image is written to regularly, and I wish to display it in a 768x576 pixel area of the screen on each update call. I would like this to be relatively quick, and perhaps optimise it by only processing the areas of the image that actually change.
How would I go about doing this? My initial thought is to create a CGBitmapContext to manage the 256x192 image, then create a CGImage from that, then a UIImage from that, and change the image property of a UIImageView instance. This sounds like a rather slow process.
Am I on the right lines, or should I be looking at something different? Another note: this image must co-exist with other UIKit views on the screen.
Thanks for any help you can provide.

In my experience, obtaining an image from a bitmap context is actually very quick. The real performance hit, if any, will be in the drawing operations themselves. Since you are scaling the resultant image, you might obtain better results by creating the bitmap context at the final size, and drawing everything scaled to begin with.
If you do use a bitmap context, however, you must make sure to add an alpha channel (RGBA or ARGB), as CGBitmapContext does not support just RGB.
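For example, a minimal sketch of that suggestion (drawing at the final 768x576 size into an RGBA context and handing the result to a UIImageView; the variable names here are illustrative, not from the question):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// RGBA, 8 bits per component; the alpha channel is required, as noted above.
CGContextRef ctx = CGBitmapContextCreate(NULL, 768, 576, 8, 768 * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
// ... draw the scene here at the final scale ...
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
imageView.image = [UIImage imageWithCGImage:cgImage];   // imageView: an illustrative UIImageView
CGImageRelease(cgImage);
CGContextRelease(ctx);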

OK, I've come up with a solution. Thanks, Justin, for giving me the confidence to use the bitmap contexts. In the end I used this bit of code:
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
// Wrap the existing pixel buffer without copying it (kCFAllocatorNull means CF won't try to free it).
CFDataRef data = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault, (UInt8*)screenBitmap, sizeof(UInt32)*256*192, kCFAllocatorNull);
CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
// 256x192, 8 bits per component, 32 bits per pixel, sizeof(UInt32)*256 bytes per row.
CGImageRef image = CGImageCreate(256, 192, 8, 32, sizeof(UInt32)*256, colourSpace, bitmapInfo, provider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colourSpace);
CGDataProviderRelease(provider);
CFRelease(data);
self.image = [UIImage imageWithCGImage:image];
CGImageRelease(image);
Also note that screenBitmap is my UInt32 array of size 256x192, and self is a UIImageView-derived object. This code works well, but is it the right way of doing it?

Related

How to render font in iOS in simple one-line way?

I would like to render text in iOS to a texture, so I will be able to draw it using OpenGL. I am using this code:
CGSize textSize = [m_string sizeWithAttributes: m_attribs];
CGSize frameSize = CGSizeMake(NextPowerOf2((NSInteger)(MAX(textSize.width, textSize.height))), NextPowerOf2((NSInteger)textSize.height));
UIGraphicsBeginImageContextWithOptions(frameSize, NO /*opaque*/ , 1.0 /*scale*/);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextSetTextDrawingMode(currentContext, kCGTextFillStroke);
CGContextSetLineWidth(currentContext, 1);
[m_string drawAtPoint:CGPointMake (0, 0) withAttributes:m_attribs];
When I try to use kCGTextFillStroke or kCGTextStroke I get this:
When I try to use kCGTextFill I get this:
Is there any way to get simple, one line clean text like this? (Taken from rendering on OS X)
This looks like a resolution issue, but no matter that...
Since you are using iOS, I suggest you use a UIKit component, for instance a UILabel. Set whatever parameters you wish on the label: line break mode, number of lines, attributed text, fonts... You can call sizeToFit to get the minimum possible size of the label. You do not add the label to any other view; instead you create a UIImage from the view (there are quite a few answers for that on SO). Once you have the image, you simply copy the raw RGBA data to the texture (again, loads of answers on how to get the RGBA data from a UIImage), and that is it. You might also want to check the content scale for Retina 2x and 3x devices, or handle that manually by increasing the font sizes by the corresponding factors. A rough sketch of this route follows at the end of this answer.
This procedure might seem like a workaround and might seem much slower than using Core Graphics directly, but the truth is quite far from that:
Creating a context with a size and options creates an RGBA buffer, the same as for a CGImage (a UIImage only wraps it).
Core Graphics is used to draw the view into the UIImage, so the procedure is essentially the same under the hood.
You still need to copy the data to the texture, but that is true in both cases. A small downside is that to access the raw RGBA data from the image you will need to copy (duplicate) it somewhere down the line, but that is a relatively quick operation, and most likely the same happens in your procedure.
So this procedure may consume a bit more resources (not much, and possibly even less), but you do get unlimited power when it comes to drawing text.
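The rough sketch mentioned above (hedged: the text, font, and the final OpenGL upload are placeholders; renderInContext: needs QuartzCore):
#import <QuartzCore/QuartzCore.h>
UILabel *label = [[UILabel alloc] init];
label.text = @"Hello";
label.font = [UIFont systemFontOfSize:32];
label.textColor = [UIColor whiteColor];
label.backgroundColor = [UIColor clearColor];
[label sizeToFit];   // minimum size that fits the text
// Render the (unattached) label into a UIImage.
UIGraphicsBeginImageContextWithOptions(label.bounds.size, NO, 1.0);
[label.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *labelImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// From here, draw labelImage.CGImage into a CGBitmapContext to get at the raw
// RGBA bytes and pass them to glTexImage2D, exactly as with any other texture.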
Well, eventually I rendered to a texture at double size and converted it to a UIImage with scale = 2, thereby taking advantage of the Retina display.
UIImage* pTheImage = UIGraphicsGetImageFromCurrentImageContext();
UIImage* pScaledImage = [UIImage imageWithCGImage:pTheImage.CGImage scale:2 orientation:pTheImage.imageOrientation];
Then I just use it as a texture for OpenGL drawing.

How to create a CGColorSpaceRef that is of format RGBAh?

What is the analog to CGColorSpaceCreateDeviceRGB that creates an RGBAh color space? I need to feed this CGColorSpaceRef to a CIContext object (via context:withOptions:).
For clarity, here's what I'm looking for code-wise:
[CIContext contextWithEAGLContext:self.eaglContext
options:#{kCIContextWorkingColorSpace : /* something here of format RGBAh */}];
Thanks!
See the supported pixel formats in Apple's Quartz 2D documentation. You just need to pass the proper flags for 16 bits per component when creating the graphics context.
A CGColorSpaceRef does not have a pixel format (e.g. RGBAh), but you can ask a color space for its model via CGColorSpaceGetModel (e.g. kCGColorSpaceModelRGB).
CGImageRefs do have a pixel format, but do not support RGBAh.
CIImage, in contrast, does support RGBAh.
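As a small, hedged illustration of the distinction between a color space's model and a pixel format:
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
if (CGColorSpaceGetModel(space) == kCGColorSpaceModelRGB) {
    // The model only tells you the space is RGB; it says nothing about
    // bit depth or float components, so "RGBAh" is not expressible here.
}
CGColorSpaceRelease(space);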

How to encode emission or specular info in the alpha of an OpenGL texture

I have an OpenGL texture with a UV map on it. I've read about using the alpha channel to store some other value, which saves needing to load an extra map from somewhere. For example, you could store specular info (shininess), or an emission map, in the alpha, since you only need a float for that and the alpha isn't being used.
So I tried it. Writing the shader isn't the problem. I have all that part worked out. The problem is just getting all 4 channels into the texture like I want.
I have all the maps, so in PSD I put the base map in the RGB and the emission map in the A. But when you save as PNG, the alpha either doesn't save (if you add it as a new channel) or it trashes the RGB by premultiplying the transparency into the RGB (if you apply the map as a mask).
Apparently PNG files support transparency but not alpha channels per se. So there doesn't appear to be a way to control all 4 channels.
But I have read about doing this. So what format can I save in from PSD that I can load with my image loader on the iPhone?
NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:type];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
Does this method accept other file formats, like TIFF, which would allow me to control all 4 channels?
I could use texturetool to make a PVR... but from the docs it appears to also take a PNG as input.
EDIT:
First, to be clear, this is on the iPhone.
It might be Photoshop's fault. Like I said, there are two ways I can find to set up the document in my version of Photoshop (CC 14.2, Mac). One is to manually add a new channel and paste the map in there; it shows up as a red overlay. The second is to add a mask, Option-click it, and paste the alpha in there. In that case it shows the alpha as transparency, with the checkerboard in the alpha-zero areas. When I save as PNG, the alpha option greys out.
And when I load the PNG back into Photoshop it appears to be premultiplied. I can't get back to my full RGB data in Photoshop.
Is there a different tool I can use to merge the two maps into a PNG that will store it as PNG-32?
TIFF won't work because it doesn't store alpha either. Maybe I was thinking of TGA.
I also noticed this in my loader...
GLuint width = (GLuint)CGImageGetWidth(image.CGImage);
GLuint height = (GLuint)CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
// RGBA in memory, with the colour values premultiplied by alpha (the only alpha mode bitmap contexts support).
CGContextRef thisContext = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
if (flipImage)
{
    CGContextTranslateCTM (thisContext, 0, height);
    CGContextScaleCTM (thisContext, 1.0, -1.0);
}
CGColorSpaceRelease( colorSpace );
CGContextClearRect( thisContext, CGRectMake( 0, 0, width, height ) );
CGContextDrawImage( thisContext, CGRectMake( 0, 0, width, height ), image.CGImage );
// Bind the target texture before uploading, so glTexImage2D fills the right one.
glBindTexture(GL_TEXTURE_2D, textureInfo[texIndex].texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
When I create that context the option is kCGImageAlphaPremultipliedLast.
Maybe I do need to try the GLKit loader, but it appears that my PNG is premultiplied.
It is possible to create a PNG with an alpha channel, but you will not be able to read that PNG image using the built-in iOS APIs without premultiplication. The core issue is that CoreGraphics only supports premultiplied alpha, for performance reasons. You also have to be careful to disable Xcode's optimization of PNGs attached to the project file, because it does the premultiplication at compile time.
What you could do is compile and link in your own copy of libpng after turning off the Xcode PNG processing, and then read the file directly with libpng at the C level. But, honestly, this is kind of a waste of time.
Just save one image with the RGB values and another as grayscale, with the alpha values stored as 0-255 grayscale values. Those grayscale values can mean anything you want, and you will not have to worry about premultiplication messing things up. Your OpenGL code will just need to read from multiple textures, which is not a big deal.
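For illustration, a hedged sketch of the two-texture route: load the grayscale emission/specular map (shipped as its own image) into a single-channel texture, so no alpha premultiplication can touch it. The file name and texture handle below are placeholders, not from the question:
UIImage *emissionImage = [UIImage imageNamed:@"emission_gray.png"];   // hypothetical grayscale asset
size_t w = CGImageGetWidth(emissionImage.CGImage);
size_t h = CGImageGetHeight(emissionImage.CGImage);
void *gray = malloc(w * h);
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef grayCtx = CGBitmapContextCreate(gray, w, h, 8, w, graySpace, kCGImageAlphaNone);
CGColorSpaceRelease(graySpace);
CGContextDrawImage(grayCtx, CGRectMake(0, 0, w, h), emissionImage.CGImage);
CGContextRelease(grayCtx);
// Upload as a one-channel luminance texture; the shader samples it alongside the base map.
GLuint emissionTexture;
glGenTextures(1, &emissionTexture);
glBindTexture(GL_TEXTURE_2D, emissionTexture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed, 1 byte per pixel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei)w, (GLsizei)h, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, gray);
free(gray);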

How to combine an image with a mask into one single UIImage with Accelerate Framework?

This code combines an image and a grayscale mask image into one UIImage. It works but it is slow.
+ (UIImage*) maskImage:(UIImage *)image withMask:(UIImage *)mask
{
    CGImageRef imageReference = image.CGImage;
    CGImageRef maskReference = mask.CGImage;
    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskReference),
                                             CGImageGetHeight(maskReference),
                                             CGImageGetBitsPerComponent(maskReference),
                                             CGImageGetBitsPerPixel(maskReference),
                                             CGImageGetBytesPerRow(maskReference),
                                             CGImageGetDataProvider(maskReference),
                                             NULL, // Decode is null
                                             YES   // Should interpolate
                                             );
    CGImageRef maskedReference = CGImageCreateWithMask(imageReference, imageMask);
    CGImageRelease(imageMask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}
I think the Accelerate framework can help, but I am not sure.
There is vImage, and it can do alpha compositing. Or maybe what I'm looking for is called a "vImage transform" - not like CATransform3D, but "transforming" an image.
But what I need is to take a JPEG photo and make it transparent based on a mask.
Can Accelerate Framework be used for this? Or is there an alternative?
vImageOverwriteChannels_ARGB8888 is probably the API you want, provided that the image JPEG is opaque to start with. You can use vImageBuffer_InitWithCGImage to extract the source image as 8 bpc, 32 bpp, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little. This will give you a BGRA8888 image with opaque alpha. Get out the mask as an 8 bpc, 8 bpp, kCGImageAlphaNone image. Use vImageOverwriteChannels_ARGB8888 to overwrite the BGRA alpha with the new alpha channel. Then make a new CGImage with vImageCreateCGImageFromBuffer, modifying the format slightly to kCGImageAlphaFirst | kCGBitmapByteOrder32Little.
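A hedged sketch of those steps (error handling trimmed; it assumes the photo and the mask have identical dimensions, and the method name is illustrative):
#import <Accelerate/Accelerate.h>
+ (UIImage *)maskImageWithVImage:(UIImage *)image mask:(UIImage *)mask
{
    // BGRA8888 in memory, with the (opaque) alpha byte in the last position.
    vImage_CGImageFormat bgraFormat = {
        .bitsPerComponent = 8,
        .bitsPerPixel = 32,
        .colorSpace = CGColorSpaceCreateDeviceRGB(),
        .bitmapInfo = kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
    };
    vImage_Buffer photo;
    vImageBuffer_InitWithCGImage(&photo, &bgraFormat, NULL, image.CGImage, kvImageNoFlags);
    // The mask as a planar, 8-bit grayscale buffer.
    vImage_CGImageFormat planarFormat = {
        .bitsPerComponent = 8,
        .bitsPerPixel = 8,
        .colorSpace = CGColorSpaceCreateDeviceGray(),
        .bitmapInfo = kCGImageAlphaNone,
    };
    vImage_Buffer alpha;
    vImageBuffer_InitWithCGImage(&alpha, &planarFormat, NULL, mask.CGImage, kvImageNoFlags);
    // Overwrite the last of the four channels (the alpha byte in this BGRA layout); 0x1 selects it.
    vImageOverwriteChannels_ARGB8888(&alpha, &photo, &photo, 0x1, kvImageNoFlags);
    // Reinterpret the buffer as non-premultiplied alpha and wrap it in a CGImage/UIImage.
    bgraFormat.bitmapInfo = kCGImageAlphaFirst | kCGBitmapByteOrder32Little;
    vImage_Error err = kvImageNoError;
    CGImageRef maskedRef = vImageCreateCGImageFromBuffer(&photo, &bgraFormat, NULL, NULL, kvImageNoFlags, &err);
    UIImage *result = [UIImage imageWithCGImage:maskedRef];
    CGImageRelease(maskedRef);
    CGColorSpaceRelease(bgraFormat.colorSpace);
    CGColorSpaceRelease(planarFormat.colorSpace);
    free(photo.data);
    free(alpha.data);
    return result;
}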
You can also try flattening the mask into the image by taking maskedReference above and decoding it directly to BGRA kCGImageAlphaFirst. This only really works well if the image and the mask are the same size. Otherwise some resampling occurs, which is time consuming.
I don't know whether either of these is really going to be faster or not. It would be useful to look at an Instruments time profile to see where your time is going. vImageOverwriteChannels_ARGB8888 is probably only a tiny bit of the work to be done here. Depending on the format of the original image, quite a lot of work for colorspace conversion and image format conversion can occur behind the scenes in vImageBuffer_InitWithCGImage and vImageCreateCGImageFromBuffer. The key to speed here (and with the competing CG path) is to minimize the workload by making intelligent choices.
Sometimes trying some things, and then filing a bug against Apple if nothing works well, can yield an informed response. A trivially reproducible example is usually key.

What is the CoreGraphics equivalent of UIKit's contentScaleFactor?

What is the CoreGraphics equivalent of UIKit's contentScaleFactor?
I am creating a PDF using the UIKit PDF creation functions, which allow rendering to a PDF context. I have a requirement, however, to DEGRADE the quality of the generated PDF. We have achieved this already (rendering to a UIView) using UIKit's contentScaleFactor property, which is the conversion factor between graphics space and pixel space. However, I need to apply this magic & voodoo to a CGContextRef, without a UIView, and I don't know what I should do.
Any other suggestions as to how to degrade the PDF quality would be much appreciated.
Thanks
Edit: My input is a PDF document. I am re-creating a PDF from another PDF using CoreGraphics, but the process CAN be slow, depending on the graphical intensity of some PDF pages.
When you create your context, specify a width and height that are a fraction of your original PDF's:
CGContextRef context = CGBitmapContextCreate(NULL,
                                             pdfSize.width / 4,
                                             pdfSize.height / 4,
                                             8,
                                             0,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
then do your drawing here, scaling it down as appropriate. Then you could do:
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *reflectionImage = [UIImage imageWithCGImage:cgImage scale:4.0 orientation:UIImageOrientationUp];
Or you could draw cgImage into a new, enlarged context; it depends on what you are trying to do.
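A small, hedged sketch of the "draw it scaled down" step, assuming you already have a CGPDFPageRef called page (an assumption, not shown above) and the quarter-size context from the snippet above:
// Scale the CTM so the full page fits the quarter-size bitmap.
CGContextScaleCTM(context, 0.25, 0.25);
CGRect mediaBox = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
CGContextTranslateCTM(context, -mediaBox.origin.x, -mediaBox.origin.y);
CGContextDrawPDFPage(context, page);   // render the page into the small context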
