Is it possible to apply an alpha (transparency) value to just a region of a single image? For example, could one half of the image be drawn fully opaque and the other half semi-transparent?
The easiest solution would be to break the image apart, save the alpha as part of a PNG, and then arrange the UIImageViews flush against each other.
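For instance (a minimal sketch, assuming the image has already been split into two PNGs; left.png and right.png are hypothetical names, and the sizes match the drawing code below):

// Two image views placed flush against each other; only the right half gets an alpha.
UIImageView *leftView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"left.png"]];
UIImageView *rightView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"right.png"]];
leftView.frame = CGRectMake(0, 0, 116, 189);
rightView.frame = CGRectMake(116, 0, 117, 189);
rightView.alpha = 0.4f; // the transparency applies only to this region
[self.view addSubview:leftView];
[self.view addSubview:rightView];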
Otherwise, I wrote this quick code in a regular view that does the same with a single image (I'm relatively new to Core Graphics, so I'm sure there are better ways of doing this; also, in my example the two regions are side by side):
- (void)drawRect:(CGRect)rect {
    // GET THE CONTEXT, THEN FLIP THE COORDS (my view is 189 points tall)
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform flip = CGAffineTransformMake(1, 0, 0, -1, 0, 189);
    CGContextConcatCTM(context, flip);
    // GET THE IMAGE REF
    UIImage *targetImage = [UIImage imageNamed:@"test.jpg"];
    CGImageRef imageRef = targetImage.CGImage;
    // SET THE COORDS
    CGRect imageCoords = CGRectMake(0, 0, 116, 189);
    CGRect imageCoordsTwo = CGRectMake(116, 0, 117, 189);
    // CUT UP THE IMAGE INTO TWO IMAGES
    CGImageRef firstImage = CGImageCreateWithImageInRect(imageRef, imageCoords);
    CGImageRef secondImage = CGImageCreateWithImageInRect(imageRef, imageCoordsTwo);
    // DRAW THE FIRST IMAGE, SAVE THE STATE, THEN SET THE TRANSPARENCY AMOUNT
    CGContextDrawImage(context, imageCoords, firstImage);
    CGContextSaveGState(context);
    CGContextSetAlpha(context, 0.4f);
    // DRAW THE SECOND IMAGE, RESTORE THE STATE
    CGContextDrawImage(context, imageCoordsTwo, secondImage);
    CGContextRestoreGState(context);
    // TIDY UP
    CGImageRelease(firstImage);
    CGImageRelease(secondImage);
}
I have looked at the various ways people tint an image (I want to apply a red layer), and the only one I have gotten to work is ridiculously elaborate. Is there a simpler way?
// 1. Tint the Image
NSString *name = @"Skyline.png";
UIImage *imgBottomCrop = [UIImage imageNamed:name];
// begin a new image context, to draw our colored image onto
UIGraphicsBeginImageContext(imgBottomCrop.size);
// get a reference to that context we created
CGContextRef context = UIGraphicsGetCurrentContext();
// set the fill color
[[UIColor redColor] setFill];
// translate/flip the graphics context (for transforming from CG* coords to UI* coords)
CGContextTranslateCTM(context, 0, imgBottomCrop.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// set the blend mode to color burn, and draw the original image
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rectBottomCrop = CGRectMake(0, 0, imgBottomCrop.size.width, imgBottomCrop.size.height);
CGContextDrawImage(context, rectBottomCrop, imgBottomCrop.CGImage);
// set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
CGContextClipToMask(context, rectBottomCrop, imgBottomCrop.CGImage);
CGContextAddRect(context, rectBottomCrop);
CGContextDrawPath(context, kCGPathFill);
// generate a new UIImage from the graphics context we drew onto
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// 2. Display the Image
displayPicture2.image = coloredImg;
If the image is static (doesn't change while the application is running), the easiest way would be to store a second, already-tinted image and just load it.
If it's not static, you're doing it correctly. Another way I can think of is storing a half-transparent red image and displaying it over your image.
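A minimal sketch of that overlay idea, using a half-transparent red view instead of a stored image (imageView is an assumed, pre-existing UIImageView):

// Lay a half-transparent red view over the image; no offscreen drawing needed.
UIView *redOverlay = [[UIView alloc] initWithFrame:imageView.frame];
redOverlay.backgroundColor = [[UIColor redColor] colorWithAlphaComponent:0.5f];
redOverlay.userInteractionEnabled = NO; // let touches pass through to the image
[imageView.superview addSubview:redOverlay];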
I am having trouble understanding why the following method returns images that are visibly pixelated. I have double-checked the size of the image, and it is fine. What's more, without tinting, the image edges are smooth and show no pixelation.
The alternative tinting method, based on the iOS 7 UIImageView tintColor property, works fine; however, I would love to find out what is wrong with the following code, because it seems to work for everybody but me. Thanks!
- (UIImage *)imageTintedWithColor:(UIColor *)color
{
    if (color) {
        UIImage *img = self; // The method is part of a UIImage category, hence the "self"
        UIGraphicsBeginImageContext(img.size);
        // get a reference to that context we created
        CGContextRef context = UIGraphicsGetCurrentContext();
        // set the fill color
        [color setFill];
        // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
        CGContextTranslateCTM(context, 0, img.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        // set the blend mode to color burn, and draw the original image
        CGContextSetBlendMode(context, kCGBlendModeColorBurn);
        CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
        CGContextDrawImage(context, rect, img.CGImage);
        // switch to source-in, then fill a rectangle so the color only lands where the image has pixels
        CGContextSetBlendMode(context, kCGBlendModeSourceIn);
        CGContextAddRect(context, rect);
        CGContextDrawPath(context, kCGPathFill);
        // generate a new UIImage from the graphics context we drew onto
        UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // return the color-burned image
        return coloredImg;
    }
    return self;
}
Change this line:
UIGraphicsBeginImageContext(img.size);
to:
UIGraphicsBeginImageContextWithOptions(img.size, NO, 0);
UIGraphicsBeginImageContext always creates a context with a scale of 1.0, so on a Retina display the result is rendered at half the screen resolution and looks pixelated; passing 0 as the scale parameter makes the context use the device's screen scale instead. If your images will never have any transparency, change the NO to YES.
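For reference, the iOS 7 tintColor approach the question mentions looks roughly like this (imageView and the image name are assumptions, not from the original code):

// Render the image as a template so tintColor recolors it (iOS 7 and later).
UIImage *templateImage = [[UIImage imageNamed:@"icon.png"] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
imageView.image = templateImage;
imageView.tintColor = [UIColor redColor];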
I'm trying to overlay a color on a UIImage, but only on the left half of the image (I'm using code from http://coffeeshopped.com/2010/09/iphone-how-to-dynamically-color-a-uiimage to overlay the color). The code I have now is:
- (UIImage *)imageWithColor:(UIColor *)color {
    // begin a new image context, to draw our colored image onto
    CGSize size = CGSizeMake(self.line.image.size.width / 2, self.line.image.size.height);
    UIGraphicsBeginImageContextWithOptions(size, NO, [[UIScreen mainScreen] scale]);
    // get a reference to that context we created
    CGContextRef context = UIGraphicsGetCurrentContext();
    // set the fill color
    [color setFill];
    // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
    CGContextTranslateCTM(context, 0, size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // set the blend mode to overlay, and draw the original image
    CGContextSetBlendMode(context, kCGBlendModeOverlay);
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    CGContextDrawImage(context, rect, self.line.image.CGImage);
    // set a mask that matches the shape of the image, then draw (overlay) a colored rectangle
    CGContextClipToMask(context, rect, self.line.image.CGImage);
    CGContextAddRect(context, rect);
    CGContextDrawPath(context, kCGPathFill);
    // generate a new UIImage from the graphics context we drew onto
    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // return the colored image
    return coloredImg;
}
I thought setting the size to half the width would work, but everything still gets colored. I guess I'm missing something very fundamental. Any ideas?
In
CGContextAddRect(context, rect);
you are adding a rectangle that covers the full size of the context, so the colored fill lands on everything you drew into it.
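One way to act on that, sketched under the assumption that the rest of the method stays as posted: create the context at the full image size, draw the whole image, and fill only a half-width rectangle.

// Create the context at the full image size...
CGSize fullSize = self.line.image.size;
UIGraphicsBeginImageContextWithOptions(fullSize, NO, [[UIScreen mainScreen] scale]);
CGContextRef context = UIGraphicsGetCurrentContext();
[color setFill];
CGContextTranslateCTM(context, 0, fullSize.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGRect fullRect = CGRectMake(0, 0, fullSize.width, fullSize.height);
CGContextSetBlendMode(context, kCGBlendModeOverlay);
CGContextDrawImage(context, fullRect, self.line.image.CGImage);
CGContextClipToMask(context, fullRect, self.line.image.CGImage);
// ...but add only the left half before filling, so the right half keeps its original colors
CGContextAddRect(context, CGRectMake(0, 0, fullSize.width / 2, fullSize.height));
CGContextDrawPath(context, kCGPathFill);
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();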
I have a UIImage which is an image of a mountainous landscape. I need to show locations on this image by turning the corresponding 2-3 pixels at each spot red. How do I accomplish this? Thanks!
You can draw the image into a graphics context like this:
CGRect imageRect = CGRectMake(0, 0, width, height);
UIGraphicsBeginImageContext(imageRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Save the current state of the graphics context
CGContextSaveGState(context);
CGContextDrawImage(context, imageRect, image.CGImage);
And then just draw a point on it wherever you want, like this:
// CGContextFillRect(context, CGRectMake(x, y, 1, 1)); // original version
// Fixed according to @gsempe's comment, so the dot stays one pixel at any image scale:
CGContextFillRect(context, CGRectMake(x, y, 1.0 / image.scale, 1.0 / image.scale));
Then just save it back to a UIImage:
CGContextRestoreGState(context);
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
You should also take care of the image's orientation. Here is a good article on it.
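Putting the pieces together, a minimal sketch of the whole round trip (the method name is hypothetical, and drawInRect: is used here because it handles the coordinate flip and orientation for you):

// Returns a copy of `image` with a small red marker at `point` (in points).
- (UIImage *)imageByMarkingPoint:(CGPoint)point inImage:(UIImage *)image {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // drawInRect: draws the image upright, so no manual CTM flip is needed
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    // mark the location with a small red square
    CGContextFillRect(context, CGRectMake(point.x, point.y, 2, 2));
    UIImage *marked = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return marked;
}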
I used the Time Profiler tool to identify that 95% of the time is spent calling the function CGContextDrawImage.
In my app, a lot of duplicate images are repeatedly being chopped from a sprite map and drawn to the screen. I was wondering if it is possible to cache the output of CGContextDrawImage in an NSMutableDictionary; then, if the same sprite is requested again, it can just be pulled from the cache rather than doing all the work of clipping and rendering it again. This is what I've got, but I have not been too successful:
Definitions
if (cache == nil) cache = [[NSMutableDictionary alloc] init];
// Identifier based on the name of the sprite and the location within the sprite.
NSString *identifier = [NSString stringWithFormat:@"%@-%d", filename, frame];
Adding to cache
CGRect clippedRect = CGRectMake(0, 0, clipRect.size.width, clipRect.size.height);
CGContextClipToRect(context, clippedRect);
// create a rect equivalent to the full size of the image;
// offset the rect by the X and Y we want to start the crop
// from, in order to cut off anything before them
CGRect drawRect = CGRectMake(clipRect.origin.x * -1,
                             clipRect.origin.y * -1,
                             atlas.size.width,
                             atlas.size.height);
// draw the image into our clipped context using our offset rect
CGContextDrawImage(context, drawRect, atlas.CGImage);
[cache setObject:UIGraphicsGetImageFromCurrentImageContext() forKey:identifier];
UIGraphicsEndImageContext();
Rendering a cached sprite
There is probably a better way to render the CGImage (which is my ultimate caching goal), but at the moment I'm just looking to render the cached image out successfully; so far this has not worked.
UIImage *cachedImage = [cache objectForKey:identifier];
if (cachedImage) {
    NSLog(@"Cached %@", identifier);
    CGRect imageRect = CGRectMake(0,
                                  0,
                                  cachedImage.size.width,
                                  cachedImage.size.height);
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0);
    else
        UIGraphicsBeginImageContext(imageRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Use draw for now, just to see if the image renders out OK
    CGContextDrawImage(context, imageRect, cachedImage.CGImage);
    UIGraphicsEndImageContext();
}
Yes, it's possible to cache a rendered image. Below is a sample of how it's done:
+ (UIImage *)getRenderedImage:(UIImage *)image targetSize:(CGSize)targetSize
{
    CGRect targetRect = CGRectIntegral(CGRectMake(0, 0, targetSize.width, targetSize.height)); // should be used by your drawing code
    CGImageRef imageRef = image.CGImage; // should be used by your drawing code
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // TODO: draw and clip your image onto the context here
    // (CGContextDrawImage and CGContextClipToRect calls)
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
This way, you get a rendered copy of the source image. Because you have the context during rendering, you are free to draw whatever you want; you just need to determine the output size beforehand.
The resulting image is just an instance of UIImage that you can put into an NSMutableDictionary for later use.
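A hypothetical usage sketch tying this back to the dictionary cache from the question (SpriteHelper and spritePosition are made-up names; cache, identifier, atlas, and clipRect mirror the question's code):

// Look the sprite up first; render and store it only on a cache miss.
UIImage *sprite = [cache objectForKey:identifier];
if (sprite == nil) {
    sprite = [SpriteHelper getRenderedImage:atlas targetSize:clipRect.size];
    [cache setObject:sprite forKey:identifier];
}
// Subsequent frames can draw the pre-rendered sprite directly,
// e.g. from inside drawRect: where a current context exists.
[sprite drawAtPoint:spritePosition];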