Remove masking from a UIImage in iOS

I am masking a UIImage in iOS, and now I want to remove that mask. How can I achieve this? This is my code for masking the image:
UIColor *header1Color = [UIColor colorWithRed:0.286 green:0.286 blue:0.286 alpha:0.1];
UIImage *img = self.calorie_image.image;
// int width = img.size.width;   //308
// int height = img.size.height; //67
UIGraphicsBeginImageContext(img.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[header1Color setFill];
// translate/flip the graphics context (for transforming from CG* coords to UI* coords)
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// set the blend mode to color burn, and draw the original image
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
//calorie_value = 230;
int x = 200;
int y = 0;
int mwidth = 120;
int mheight = 67;
NSString *zone = @"";
NSArray *min = [[NSArray alloc] initWithObjects:@"0", @"1200", @"1800", @"2200", nil];
NSArray *max = [[NSArray alloc] initWithObjects:@"1200", @"1800", @"2200", @"3500", nil];
NSArray *x_values = [[NSArray alloc] initWithObjects:@"42", @"137", @"200", @"320", nil];
NSArray *mwidth_values = [[NSArray alloc] initWithObjects:@"278", @"183", @"120", @"0", nil];
NSArray *zones = [[NSArray alloc] initWithObjects:@"red", @"green", @"orange", @"red", nil];
for (int i = 0; i < 4; i++) {
    if (calorie_value >= [min[i] integerValue] && calorie_value <= [max[i] integerValue]) {
        zone = zones[i];
        x = [x_values[i] integerValue];
        mwidth = [mwidth_values[i] integerValue];
        break;
    }
}
if ([[DiabetaDbTransaction getToadyCalories] integerValue] > 0) {
    CGContextClearRect(context, CGRectMake(x, y, mwidth, mheight));
}
// set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context, kCGPathFill);
// generate a new UIImage from the graphics context we drew onto
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.calorie_image.image = coloredImg;

How about storing the original image in an ivar, and restoring it when you want to remove the mask?
@implementation ViewController {
    UIImage *originalImage;
}
Before you apply the mask to the image, save the original:
originalImage = self.calorie_image.image;
Then, in the function that removes the mask, simply assign the original image back instead of the masked one:
self.calorie_image.image = originalImage;
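A minimal sketch of that approach (the `applyMask`/`removeMask` method names and the ivar name here are illustrative, not from the original code):

```objc
// Illustrative sketch: cache the unmasked image before drawing the mask.
@interface ViewController () {
    UIImage *originalImage; // the image as it was before masking
}
@end

- (void)applyMask {
    originalImage = self.calorie_image.image;  // save the original first
    // ... run the existing masking code here, which ends by
    // assigning the masked result to self.calorie_image.image ...
}

- (void)removeMask {
    self.calorie_image.image = originalImage;  // restore the unmasked image
}
```

This avoids trying to "undo" the Core Graphics drawing, which is not reversible once the pixels have been composited.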

Related

How to place the `UIColor` on `UIImage` at particular (X,Y) position?

I am trying to place a UIColor at a particular (X,Y) position of a UIImage, but I am not able to get it working.
My code looks like the below.
The following method returns the UIColor at a particular (X,Y) position of a UIImage:
- (UIColor *)isWallPixel:(UIImage *)image xCoordinate:(int)x yCoordinate:(int)y {
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8 *data = CFDataGetBytePtr(pixelData);
    int pixelInfo = ((image.size.width * y) + x) * 4; // The image is png
    UInt8 red = data[pixelInfo];       // If you need this info, enable it
    UInt8 green = data[pixelInfo + 1]; // If you need this info, enable it
    UInt8 blue = data[pixelInfo + 2];  // If you need this info, enable it
    UInt8 alpha = data[pixelInfo + 3]; // I need only this info for my maze game
    CFRelease(pixelData);
    UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
    return color;
}
// Original image; I get the image pixels from this image.
UIImage *image = [UIImage imageNamed:@"plus.png"];
// Image to convert into
UIImage *imagedata = [[UIImage alloc] init];
// size of the original image
int Width = image.size.width;
int height = image.size.height;
// need another blank image of this size
CGRect rect = CGRectMake(0.0f, 0.0f, Width, height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
for (int i = 0; i < Width; i++) {
    for (int j = 0; j < height; j++) {
        // Here I get the image pixel color from the method above
        UIColor *colordatra = [self isWallPixel:image xCoordinate:i yCoordinate:j];
        CGContextSetFillColorWithColor(context, [colordatra CGColor]);
        rect.origin.x = i;
        rect.origin.y = j;
        CGContextDrawImage(context, rect, imagedata.CGImage);
        imagedata = UIGraphicsGetImageFromCurrentImageContext();
    }
}
UIGraphicsEndImageContext();
Please note that I want to get the UIColor at a particular position and place that UIColor on another blank image at the same position.
With the above code I am not able to get this; please let me know where I have made a mistake.
Thanks in advance.
You can do this with a UIView, since you already have the x, y, width and height:
UIView *view = [[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)];
view.backgroundColor = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
[self.YourImageView addSubview:view];

iOS drawing over UIImage

I need to draw over an image graffiti style, as below. My problem is that I need to have the capability to erase the lines I've drawn without erasing sections of the UIImage. At the moment I'm considering using one image for the background image and another image, with a transparent background, for the graffiti drawing, then combining the two once the drawing is complete. Is there a better way?
- (void)drawRect:(CGRect)rect
{
    // Get drawing context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Create drawing layer if required
    if (drawingLayer == nil)
    {
        drawingLayer = CGLayerCreateWithContext(context, bounds.size, NULL);
        CGContextRef layerContext = CGLayerGetContext(drawingLayer);
        CGContextScaleCTM(layerContext, scale, scale);
        self.viewRect = CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.width);
        NSLog(@"%f %f", self.viewRect.size.width, self.viewRect.size.width);
    }
    // Draw paths from array
    int arrayCount = [pathArray count];
    for (int i = 0; i < arrayCount; i++)
    {
        path = [pathArray objectAtIndex:i];
        UIBezierPath *bezierPath = path.bezierPath;
        CGContextRef layerContext = CGLayerGetContext(drawingLayer);
        CGContextAddPath(layerContext, bezierPath.CGPath);
        CGContextSetLineWidth(layerContext, path.width);
        CGContextSetStrokeColorWithColor(layerContext, path.color.CGColor);
        CGContextSetLineCap(layerContext, kCGLineCapRound);
        if (activeColor == 4)
        {
            CGContextSetBlendMode(layerContext, kCGBlendModeClear);
        }
        else
        {
            CGContextSetBlendMode(layerContext, kCGBlendModeNormal);
        }
        CGContextStrokePath(layerContext);
    }
    if (loadedImage == NO)
    {
        loadedImage = YES;
        CGContextRef layerContext = CGLayerGetContext(drawingLayer);
        CGContextSaveGState(layerContext);
        UIGraphicsBeginImageContext(self.viewRect.size);
        CGContextTranslateCTM(layerContext, 0, self.viewRect.size.height);
        CGContextScaleCTM(layerContext, 1.0, -1.0);
        CGContextDrawImage(layerContext, self.viewRect, self.image.CGImage);
        UIGraphicsEndImageContext();
        CGContextRestoreGState(layerContext);
    }
    [pathArray removeAllObjects];
    CGContextDrawLayerInRect(context, self.viewRect, drawingLayer);
}

How to crop the detected face

I use Core Image to detect the face, and I want to crop the face after detection. This is the snippet I use for face detection:
-(void)markFaces:(UIImageView *)facePicture {
    CIImage *image = [CIImage imageWithCGImage:imageView.image.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:image];
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);
    for (CIFaceFeature *faceFeature in features)
    {
        // Get the face rect: translate Core Image coordinates to UIKit coordinates
        const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        faceView = [[UIView alloc] initWithFrame:faceRect];
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        UIGraphicsBeginImageContext(faceView.bounds.size);
        [faceView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // Blur the UIImage with a CIFilter
        CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
        CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
        [gaussianBlurFilter setValue:imageToBlur forKey:@"inputImage"];
        [gaussianBlurFilter setValue:[NSNumber numberWithFloat:10] forKey:@"inputRadius"];
        CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];
        UIImage *endImage = [[UIImage alloc] initWithCIImage:resultImage];
        // Place the UIImage in a UIImageView
        UIImageView *newView = [[UIImageView alloc] initWithFrame:self.view.bounds];
        newView.image = endImage;
        [self.view addSubview:newView];
        CGFloat faceWidth = faceFeature.bounds.size.width;
        [imageView addSubview:faceView];
        // LEFT EYE
        if (faceFeature.hasLeftEyePosition)
        {
            const CGPoint leftEyePos = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
            UIView *leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
                                                                           leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f,
                                                                           faceWidth*EYE_SIZE_RATE,
                                                                           faceWidth*EYE_SIZE_RATE)];
            NSLog(@"Left Eye X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",
                  leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
                  leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f,
                  faceWidth*EYE_SIZE_RATE, faceWidth*EYE_SIZE_RATE);
            leftEyeView.backgroundColor = [[UIColor magentaColor] colorWithAlphaComponent:0.3];
            leftEyeView.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;
            [imageView addSubview:leftEyeView];
        }
        // RIGHT EYE
        if (faceFeature.hasRightEyePosition)
        {
            const CGPoint rightEyePos = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
            UIView *rightEye = [[UIView alloc] initWithFrame:CGRectMake(rightEyePos.x - faceWidth*EYE_SIZE_RATE*0.5,
                                                                        rightEyePos.y - faceWidth*EYE_SIZE_RATE*0.5,
                                                                        faceWidth*EYE_SIZE_RATE,
                                                                        faceWidth*EYE_SIZE_RATE)];
            NSLog(@"Right Eye X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",
                  rightEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
                  rightEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f,
                  faceWidth*EYE_SIZE_RATE, faceWidth*EYE_SIZE_RATE);
            rightEye.backgroundColor = [[UIColor blueColor] colorWithAlphaComponent:0.2];
            rightEye.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;
            [imageView addSubview:rightEye];
        }
        // MOUTH
        if (faceFeature.hasMouthPosition)
        {
            const CGPoint mouthPos = CGPointApplyAffineTransform(faceFeature.mouthPosition, transform);
            UIView *mouth = [[UIView alloc] initWithFrame:CGRectMake(mouthPos.x - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                     mouthPos.y - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                     faceWidth*MOUTH_SIZE_RATE,
                                                                     faceWidth*MOUTH_SIZE_RATE)];
            NSLog(@"Mouth X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",
                  mouthPos.x - faceWidth*MOUTH_SIZE_RATE*0.5f,
                  mouthPos.y - faceWidth*MOUTH_SIZE_RATE*0.5f,
                  faceWidth*MOUTH_SIZE_RATE, faceWidth*MOUTH_SIZE_RATE);
            mouth.backgroundColor = [[UIColor greenColor] colorWithAlphaComponent:0.3];
            mouth.layer.cornerRadius = faceWidth*MOUTH_SIZE_RATE*0.5;
            [imageView addSubview:mouth];
        }
    }
}
What I want is just to crop the face.
You can easily crop the face using the function below; it is tested and works properly.
-(void)faceWithFrame:(CGRect)frame {
    CGRect rect = frame;
    CGImageRef imageRef = CGImageCreateWithImageInRect([self.imageView.image CGImage], rect);
    UIImage *cropedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // CGImageCreateWithImageInRect returns a +1 reference
    self.cropedImg.image = cropedImage;
}
You just pass the face frame, and the function above will give you the cropped face image.
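One caveat worth noting: `CGImageCreateWithImageInRect` works in the pixel coordinates of the underlying `CGImage`, while a face rect that has been converted to UIKit coordinates is in points. On Retina images the rect should therefore be scaled by `image.scale` first. A rough sketch, assuming `faceRect` holds the detected face rectangle in points:

```objc
// Sketch: scale a point-based rect into pixel coordinates before cropping.
UIImage *source = self.imageView.image;
CGFloat scale = source.scale;  // 1.0 on non-Retina, 2.0 on Retina
CGRect pixelRect = CGRectMake(faceRect.origin.x * scale,
                              faceRect.origin.y * scale,
                              faceRect.size.width * scale,
                              faceRect.size.height * scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(source.CGImage, pixelRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:scale
                                 orientation:source.imageOrientation];
CGImageRelease(croppedRef);
```

Passing the scale and orientation back into `imageWithCGImage:scale:orientation:` keeps the cropped image consistent with the source.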

Drawing Text on UILabel with a gradient image

I have a UILabel subclass that draws some text in its drawRect: method using a pattern image. This pattern image is a gradient image I create.
-(void)drawRect:(CGRect)rect
{
    UIColor *color = [UIColor colorWithPatternImage:[self gradientImage]];
    [color set];
    [someText drawInRect:rec withFont:font lineBreakMode:UILineBreakModeWordWrap alignment:someAlignment];
}
The gradient image method is:
-(UIImage *)gradientImage
{
    NSData *colorData = [[NSUserDefaults standardUserDefaults] objectForKey:@"myColor"];
    UIColor *co = [NSKeyedUnarchiver unarchiveObjectWithData:colorData];
    colors = [NSArray arrayWithObjects:co, co, [UIColor whiteColor], co, co, co, co, nil];
    NSArray *gradientColors = colors;
    CGFloat width;
    CGFloat height;
    CGSize textSize = [someText sizeWithFont:font];
    width = textSize.width;
    height = textSize.height;
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // push context to make it current (needed because we are not drawing in a UIView)
    UIGraphicsPushContext(context);
    // draw gradient
    CGGradientRef gradient;
    CGColorSpaceRef rgbColorspace;
    // set uniform distribution of color locations
    size_t num_locations = [gradientColors count];
    CGFloat locations[num_locations];
    for (int k = 0; k < num_locations; k++)
    {
        // locations must start at 0.0 and end at 1.0, equally filling the domain
        locations[k] = k / (CGFloat)(num_locations - 1);
    }
    // create C array from color array
    CGFloat components[num_locations * 4];
    for (int i = 0; i < num_locations; i++)
    {
        UIColor *color = [gradientColors objectAtIndex:i];
        NSAssert(color.canProvideRGBComponents, @"Color components could not be extracted from StyleLabel gradient colors.");
        components[4*i+0] = color.red;
        components[4*i+1] = color.green;
        components[4*i+2] = color.blue;
        components[4*i+3] = color.alpha;
    }
    rgbColorspace = CGColorSpaceCreateDeviceRGB();
    gradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
    CGRect currentBounds = self.bounds;
    CGPoint topCenter = CGPointMake(CGRectGetMidX(currentBounds), 0.0f);
    CGPoint midCenter = CGPointMake(CGRectGetMidX(currentBounds), CGRectGetMidY(currentBounds));
    CGContextDrawLinearGradient(context, gradient, topCenter, midCenter, 0);
    CGGradientRelease(gradient);
    CGColorSpaceRelease(rgbColorspace);
    // pop context
    UIGraphicsPopContext();
    // get a UIImage from the image context
    UIImage *gradientImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up drawing environment
    UIGraphicsEndImageContext();
    return gradientImage;
}
Everything works fine: the text is drawn with a nice glossy effect. My problem is that if I change the x or y position of the text inside the UILabel and then call [someText drawInRect:rec]... to redraw, the gloss effect is generated differently. It seems the pattern image changes as the text position changes.
The following is the effect for the frame rec = CGRectMake(0, 10, self.frame.size.width, self.frame.size.height);
The following is the effect for the frame rec = CGRectMake(0, 40, self.frame.size.width, self.frame.size.height);
I hope I have explained my question well; I could not find an exact answer elsewhere. Thanks in advance.
You might want to look at using an existing third-party solution like FXLabel.

iOS: merging text with an image doesn't work with non-Western characters

I'm trying to develop localized help files. This works great with Western languages, but non-Western characters are not displayed (only the image file is output). The same happens whether the text is encoded in UTF-8 or UTF-16.
I don't like posting all this code, but I just can't track down where the issue is.
Any help much appreciated!
- (IBAction)segmentedControlChanged:(UISegmentedControl *)sender {
    UIImage *helpImage;
    NSArray *helpText;
    if (sender.selectedSegmentIndex == 1) {
        helpImage = [UIImage imageNamed:@"HelpImage.png"];
        helpText = [NSArray arrayWithObjects:
                    [NSDictionary dictionaryWithObjectsAndKeys:
                     NSLocalizedString(@"Tab1 - sample text", nil), kHelpTextKeyString,
                     @"Arial", kHelpTextKeyFontName,
                     //@"Helvetica-Bold", kHelpTextKeyFontName,
                     [NSNumber numberWithInt:20], kHelpTextKeyFontSize,
                     [[UIColor blackColor] CGColor], kHelpTextKeyColor,
                     CGRectCreateDictionaryRepresentation(CGRectMake(30.0, 55.0, 200.0, 28.0)), kHelpTextKeyRect,
                     nil],
                    //CGRectCreateDictionaryRepresentation(CGRectMake(38.0, 55.0, 200.0, 28.0)), kHelpTextKeyRect,
                    //CGRectCreateDictionaryRepresentation(CGRectMake(30.0, 55.0, 200.0, 28.0)), kHelpTextKeyRect,
                    [NSDictionary dictionaryWithObjectsAndKeys:
                     [NSArray arrayWithObjects:
                      NSLocalizedString(@"sample text ", nil),
                      NSLocalizedString(@" ", nil),
                      NSLocalizedString(@"more sample text", nil),
                      nil], kHelpTextKeyString,
                     @"Helvetica-Light", kHelpTextKeyFontName,
                     [NSNumber numberWithInt:10], kHelpTextKeyFontSize,
                     [[UIColor blackColor] CGColor], kHelpTextKeyColor,
                     CGRectCreateDictionaryRepresentation(CGRectMake(10.0, 80.0, 200.0, 28.0)), kHelpTextKeyRect,
                     nil],
                    nil];
    }
    // display actual image
    [self displaySelectedHelpImage:helpImage withTextArray:helpText];
}
// merge selected help image with text
- (void)displaySelectedHelpImage:(UIImage *)orgImage withTextArray:(NSArray *)textArr {
    CGImageRef cgImage = [orgImage CGImage];
    int pixelsWide = CGImageGetWidth(cgImage);
    int pixelsHigh = CGImageGetHeight(cgImage);
    int bitsPerComponent = CGImageGetBitsPerComponent(cgImage); //8; // fixed
    int bitsPerPixel = CGImageGetBitsPerPixel(cgImage); //bitsPerComponent * numberOfComponents;
    int bytesPerRow = CGImageGetBytesPerRow(cgImage); //(pixelsWide * bitsPerPixel) / 8; // bytes
    int byteCount = (bytesPerRow * pixelsHigh);
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(cgImage); //CGColorSpaceCreateDeviceRGB();
    // Allocate data
    NSMutableData *data = [NSMutableData dataWithLength:byteCount];
    // Create a bitmap context
    CGContextRef context = CGBitmapContextCreate([data mutableBytes], pixelsWide, pixelsHigh, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast); //kCGImageAlphaNoneSkipLast); //kCGImageAlphaOnly);
    // Set the blend mode to copy to avoid any alteration of the source data
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    // Set alpha
    CGContextSetAlpha(context, 1.0);
    // Color image
    //CGContextSetRGBFillColor(context, 1, 1, 1, 1.0);
    //CGContextFillRect(context, CGRectMake(0.0, 0.0, pixelsWide, pixelsHigh));
    // Draw the image to extract the alpha channel
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, pixelsWide, pixelsHigh), cgImage);
    // add text to image
    // Changes the origin of the user coordinate system in a context
    //CGContextTranslateCTM(context, pixelsWide, pixelsHigh);
    // Rotate context upright
    //CGContextRotateCTM(context, -180. * M_PI/180);
    for (NSDictionary *dic in textArr) {
        CGContextSelectFont(context,
                            //todo
                            [[dic objectForKey:kHelpTextKeyFontName] UTF8String],
                            [[dic objectForKey:kHelpTextKeyFontSize] intValue],
                            kCGEncodingMacRoman);
        CGContextSetCharacterSpacing(context, 2);
        CGContextSetTextDrawingMode(context, kCGTextFillStroke);
        CGColorRef color = (CGColorRef)[dic objectForKey:kHelpTextKeyColor];
        CGRect rect;
        CGRectMakeWithDictionaryRepresentation((CFDictionaryRef)[dic objectForKey:kHelpTextKeyRect], &rect);
        CGContextSetFillColorWithColor(context, color);
        CGContextSetStrokeColorWithColor(context, color);
        if ([[dic objectForKey:kHelpTextKeyString] isKindOfClass:[NSArray class]]) {
            for (NSString *str in [dic objectForKey:kHelpTextKeyString]) {
                CGContextShowTextAtPoint(context,
                                         rect.origin.x,
                                         pixelsHigh - rect.origin.y,
                                         [str cStringUsingEncoding:[NSString defaultCStringEncoding]],
                                         [str length]);
                rect.origin.y += [[dic objectForKey:kHelpTextKeyFontSize] intValue];
            }
        } else {
            CGContextShowTextAtPoint(context,
                                     rect.origin.x,
                                     pixelsHigh - rect.origin.y,
                                     [[dic objectForKey:kHelpTextKeyString] cStringUsingEncoding:[NSString defaultCStringEncoding]],
                                     [[dic objectForKey:kHelpTextKeyString] length]);
        }
    }
    // The pixels have now been copied into our NSData object, so discard the context and make an image.
    CGContextRelease(context);
    // Create a data provider for our data object (NSMutableData is toll-free bridged to CFMutableDataRef, which is compatible with CFDataRef)
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((CFMutableDataRef)data);
    // Create our new image with the same size as the original image
    //CGImageRef maskingImage = CGImageMaskCreate(pixelsWide, pixelsHigh, bitsPerComponent, bitsPerPixel, bytesPerRow, dataProvider, NULL, YES);
    CGImageRef finalImage = CGImageCreate(pixelsWide,
                                          pixelsHigh,
                                          bitsPerComponent,
                                          bitsPerPixel,
                                          bytesPerRow,
                                          colorSpace,
                                          kCGBitmapByteOrderDefault,
                                          dataProvider,
                                          NULL,
                                          YES,
                                          kCGRenderingIntentDefault);
    // And release the provider.
    CGDataProviderRelease(dataProvider);
    UIImage *theImage = [UIImage imageWithCGImage:finalImage];
    // remove old scroll view
    if (scrollView) {
        [scrollView removeFromSuperview];
    }
    // construct new scroll view sized according to the image
    UIScrollView *tempScrollView = [[UIScrollView alloc] initWithFrame:containerView.bounds];
    tempScrollView.contentSize = theImage.size;
    scrollView = tempScrollView;
    // construct an image view (sized at zero) and assign the help image to it
    UIImageView *tempImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, theImage.size.width, 0.0)];
    [tempImageView setImage:theImage];
    // add image view to scroll view and scroll view to container view
    [tempScrollView addSubview:tempImageView];
    [containerView addSubview:tempScrollView];
    // animate
    [UIView beginAnimations:@"ResizeImageView" context:NULL];
    [UIView setAnimationDuration:1.0];
    // recover actual image size through animation
    [tempImageView setFrame:CGRectMake(0.0, 0.0, theImage.size.width, theImage.size.height)];
    [UIView setAnimationDelegate:self];
    [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
    [UIView commitAnimations];
}
You're drawing text using CGContextShowTextAtPoint, which has poor support for non-ASCII text.
Similar issues: one, two. Apple's documentation explains the issue.
Use a higher-level API to draw the text, such as the methods in UIKit/UIStringDrawing.h, like -[NSString drawAtPoint:withFont:].
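A rough sketch of that change, assuming it replaces the CGContextShowTextAtPoint calls inside the loop above (same `context`, `dic`, `rect`, and `pixelsHigh` variables). UIKit string drawing is Unicode-aware, but it draws in a flipped coordinate system, so the bitmap context has to be made current and flipped first:

```objc
// Hypothetical replacement for the CGContextShowTextAtPoint calls.
// UIKit string drawing handles Unicode, so non-Western text renders correctly.
UIGraphicsPushContext(context);                // make the bitmap context current for UIKit
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0, pixelsHigh); // flip into UIKit's top-left origin
CGContextScaleCTM(context, 1.0, -1.0);
UIFont *font = [UIFont fontWithName:[dic objectForKey:kHelpTextKeyFontName]
                               size:[[dic objectForKey:kHelpTextKeyFontSize] intValue]];
NSString *str = [dic objectForKey:kHelpTextKeyString];
[str drawAtPoint:rect.origin withFont:font];   // uses the context's current fill color
CGContextRestoreGState(context);
UIGraphicsPopContext();
```

This also removes the lossy round-trip through `cStringUsingEncoding:` and the MacRoman-only `CGContextSelectFont`, which are what drop the non-Western glyphs.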
