I'm taking a signature and saving it with an imageMask. The imageMask renders properly, but the main signature is distorted: it comes out looking like two lines instead of the drawn signature.
Here is my code:
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 1.0); //retina res
[self.imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
[imageView.image drawInRect:CGRectMake(0, 0, 703, 273)];
[maskImages.image drawAtPoint:CGPointMake(10, 10) blendMode:kCGBlendModeNormal alpha:0.2];
[lblAckNo drawTextInRect:CGRectMake(320, 230, 100, 50)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 1.0);
NSString *jpgPath = @"/Users/kumaralakshmanna/Pictures/Test.jpg";
[imgData writeToFile:jpgPath atomically:YES];
Here are the screenshots: this is what I draw, and this is what I'm getting.
Any solution to overcome this issue? Thanks.
Make sure that you are drawing using the same CGSize. You are probably using two different sizes to capture the image and to draw it, so the result gets stretched.
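For example, a minimal sketch that captures and draws with one shared size (signatureView and maskImage are hypothetical stand-ins for your own view and overlay):
//Use one CGSize for both the context and every draw call.
CGSize size = signatureView.bounds.size;
UIGraphicsBeginImageContextWithOptions(size, NO, 0); //0 = device's native scale
[signatureView.layer renderInContext:UIGraphicsGetCurrentContext()];
//Draw the overlay relative to the same size, not hard-coded pixel values.
[maskImage drawInRect:CGRectMake(0, 0, size.width, size.height) blendMode:kCGBlendModeNormal alpha:0.2];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();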
I want to add text to an image programmatically, but I can't seem to find out how to do it. I found one solution on here, but a lot of it is deprecated, so it doesn't work...
Please help!
EDIT:
Here's my code:
UIImage *backgroundImage = image;
NSMutableDictionary *stringAttributes = [NSMutableDictionary dictionary];
[stringAttributes setObject:[UIFont fontWithName:@"Helvetica" size:20] forKey:NSFontAttributeName];
[stringAttributes setObject:[UIColor whiteColor] forKey:NSForegroundColorAttributeName];
//A negative stroke width strokes AND fills the text; a positive value strokes only.
[stringAttributes setObject:[NSNumber numberWithFloat:-2.0] forKey:NSStrokeWidthAttributeName];
[stringAttributes setObject:[UIColor blackColor] forKey:NSStrokeColorAttributeName];
//A drawing context must be open before any drawInRect: call.
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, 0);
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
NSString *myString = [NSString stringWithFormat:@"Yolo"];
[myString drawInRect:CGRectMake(0, 0, 200, 50) withAttributes:stringAttributes];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageView.image = result;
NEW EDIT:
I'd like to clarify some things so the question is easier to understand. My app lets the user send a photo they have taken, via text message or email, and I want to add some pre-written text from strings onto the photo.
So my question is: how do I get the text from the strings onto the photo?
It took forever, but I figured it out.
Code:
NSMutableDictionary *stringAttributes = [NSMutableDictionary dictionary];
[stringAttributes setObject:[UIFont fontWithName:@"Avenir Book" size:250] forKey:NSFontAttributeName];
[stringAttributes setObject:[UIColor whiteColor] forKey:NSForegroundColorAttributeName];
NSString *placeString = [NSString stringWithFormat:@"Text here."];
CGSize size = [placeString sizeWithAttributes:stringAttributes];
//Create a bitmap context into which the text will be rendered.
UIGraphicsBeginImageContext(size);
//Render the text.
[placeString drawAtPoint:CGPointMake(0.0, 0.0) withAttributes:stringAttributes];
//Retrieve the text-only image and close this context.
UIImage *imagene = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Composite the text image over the photo.
UIImage *mergedImage = _imageView.image;
CGSize newSize = mergedImage.size;
UIGraphicsBeginImageContext(newSize);
//Draw the photo at full opacity.
[mergedImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
//Position the text near the top-left corner.
CGRect drawingRect = (CGRect) {.size = size};
drawingRect.origin.x = (newSize.width - size.width) * 0.01;
drawingRect.origin.y = (newSize.height - size.height) * 0.03;
[imagene drawInRect:drawingRect blendMode:kCGBlendModeNormal alpha:1];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageView.image = newImage;
Here's a general approach; some code is missing, but it gives you the important bits.
CGContextRef ctx = CGBitmapContextCreate(...);
CGContextDrawImage(ctx, myContextRect, myimage);
//Note: these Core Graphics text calls were deprecated in iOS 7.
CGContextSelectFont(ctx, "Helvetica", 10.0, kCGEncodingMacRoman);
CGContextSetCharacterSpacing(ctx, 1.7);
CGContextSetTextDrawingMode(ctx, kCGTextFill);
CGContextShowTextAtPoint(ctx, 100.0, 100.0, "SOME TEXT", 9);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); //release the CGImage we created
This will give you an image you can put in a view, or share via UIActivityViewController. I'm sure you can work out the bits.
The approach is:
1) Create a bitmap context.
2) Render the image.
3) Add the text.
4) Save the context to an image.
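Since CGContextSelectFont and CGContextShowTextAtPoint were deprecated in iOS 7, here is a sketch of those same four steps using an off-screen UIKit context and the NSString drawing additions instead (myImage and the text are hypothetical placeholders):
//1) Create a bitmap context (UIKit manages the underlying CGContext).
UIGraphicsBeginImageContextWithOptions(myImage.size, NO, 0);
//2) Render the image.
[myImage drawAtPoint:CGPointZero];
//3) Add the text via the NSString UIKit additions.
NSDictionary *attrs = @{ NSFontAttributeName : [UIFont systemFontOfSize:24],
                         NSForegroundColorAttributeName : [UIColor whiteColor] };
[@"SOME TEXT" drawAtPoint:CGPointMake(100, 100) withAttributes:attrs];
//4) Save the context to an image.
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();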
Simply place a UILabel on top of a UIImageView in IB and position them as desired using constraints. Then set the text into the label as desired.
EDIT:
Now that I know you want to be able to save the result, I'd suggest using UIGraphicsBeginImageContextWithOptions to create an off-screen graphics context. I find it easier to deal with than CGContext and the Core Graphics calls.
Draw into the context, then fetch an image from it.
Make the context the size of your image, and pass in a scale of 0 to use the device's native scale.
Your code might look something like this:
//Create an off-screen graphics context for drawing.
//(myImage and myString are placeholders for your own image and text.)
CGSize imageSize = myImage.size;
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
//Draw your image.
[myImage drawAtPoint:CGPointMake(0, 0)];
//Draw your string using one of the drawAtPoint or drawInRect methods
//available in the NSString UIKit additions.
[myString drawAtPoint:CGPointMake(20, 20)
       withAttributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:17] }];
//Fetch the resulting image from the context.
UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
//End the off-screen context.
UIGraphicsEndImageContext();
I am trying to resize a PNG that has transparent sections. First I used:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
[sourceImage drawInRect:CGRectMake(0,0, newSize.width, newSize.height)];
UIImage* targetImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but the resultant image is opaque. Then I read that drawInRect draws the image as opaque by default, so I modified the drawInRect line to:
[sourceImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.0];
The resultant image is blank. I think there should be some combination of parameters in drawInRect that will retain the transparency of the image.
I have searched other threads, and everywhere I see the generic resizing code, but nowhere does it talk about images with transparent portions.
Does anybody have any ideas?
CGRect rect = CGRectMake(0, 0, newSize.width, newSize.height);
UIGraphicsBeginImageContext(rect.size);
[sourceImage drawInRect:rect];
UIImage *picture1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(picture1);
textureColor = [UIImage imageWithData:imageData];
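If the alpha channel still gets lost, a transparency-preserving resize might look like this sketch: pass NO for opaque so the context keeps its alpha channel, draw at full opacity (alpha 1.0, not 0.0), and save as PNG rather than JPEG:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0); //opaque NO keeps the alpha channel
[sourceImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *pngData = UIImagePNGRepresentation(resized); //PNG keeps transparency; JPEG would flatten it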
Basically I have a main UIImage, which acts as a background/border. Within that UIImage I have two more UIImages, vertically split with a gap around them so you can still see a border of the main background UIImage. On each side I have a UILabel to describe the images. Below is a picture of what I mean, to help put it into context.
What I want to achieve is to make this into one image, while keeping all of the current positions, layouts, image content modes (Aspect Fill), label sizes, and label background colours the same. I also want this image to be the same quality so it still looks good.
I have looked at many other Stack Overflow questions and have so far come up with the following, but it has these problems:
Doesn't position the image labels in their correct places and sizes
Doesn't have the background colour for the labels or the main image
Doesn't draw the images as Aspect Fill (like the UIImageViews), so the outside of each picture is shown as well and isn't cropped properly, like in the example above.
Below is my code so far. Can anyone help me achieve the result in the image above, please? I am fairly new to iOS development and am struggling a bit:
-(UIImage *)renderImagesForSharing{
CGSize newImageSize = CGSizeMake(640, 640);
NSLog(@"CGSize %@", NSStringFromCGSize(newImageSize));
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0);
[self.mainImage.layer renderInContext:UIGraphicsGetCurrentContext()];
[self.beforeImageSide.image drawInRect:CGRectMake(0,0,(newImageSize.width/2),newImageSize.height)];
[self.afterImageSize.image drawInRect:CGRectMake(320,0,(newImageSize.width/2),newImageSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
[self.beforeLabel drawTextInRect:CGRectMake(60.0f, 0.0f, 200.0f, 50.0f)];
[self.afterLabel drawTextInRect:CGRectMake(0.0f, 0.0f, 100.0f, 50.0f)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 0.9);
UIImage *imagePNG = [UIImage imageWithData:imgData]; //wrap a UIImage around the JPEG representation
UIGraphicsEndImageContext();
return imagePNG;
}
Thank you in advance for any help guys!
I don't understand why you want to use drawInRect: to accomplish this task.
Since you already have the images and everything else, you can easily create a view laid out as shown in your picture, then take a screenshot of it like this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0); //you can use PNG too
[theImageData writeToFile:@"example.jpeg" atomically:YES]; //use a writable path inside the app sandbox
Change self.view to the view you just created.
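For the write to actually succeed on a device, the path must point somewhere writable; a minimal sketch (the file name is just an example):
//Build a path inside the app's Documents directory, which is writable.
NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
NSString *jpegPath = [documentsPath stringByAppendingPathComponent:@"example.jpeg"];
[theImageData writeToFile:jpegPath atomically:YES];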
This will give you some idea:
UIGraphicsBeginImageContextWithOptions(DiagnosisView.bounds.size, DiagnosisView.opaque, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor redColor] set];
CGContextFillRect(ctx, DiagnosisView.bounds); //fill the view's own bounds, not its frame
[DiagnosisView.layer renderInContext:ctx];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSString *imagePath = [KdiagnosisFolderPath stringByAppendingPathComponent:FileName];
NSData *pngData = UIImagePNGRepresentation(img);
[pngData writeToFile:imagePath atomically:YES];
pngData = nil; imagePath = nil;
I developed an app on iOS 5 and iOS 6. After I upgraded to Xcode 5 and iOS 7, I have some new bugs to play with.
The main one is that the colorMasking no longer works. The exact same code still compiles and works on a phone running iOS 6; on iOS 7, the masked color is still there. I tried to find the answer on Google but haven't found one. Is this an iOS 7 bug, or does anybody know of a better way of doing color masking?
Here is the code:
- (UIImage*) processImage :(UIImage*) image
{
UIImage *inputImage = [UIImage imageWithData:UIImageJPEGRepresentation(image, 1.0)];
const float colorMasking[6]={100.0, 255.0, 0.0, 100.0, 100.0, 255.0};
CGImageRef imageRef = CGImageCreateWithMaskingColors(inputImage.CGImage, colorMasking);
UIImage* finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return finalImage;
}
Here are a couple of Stack Overflow posts I found that helped me get it working on iOS 6 in the first place:
Transparency iOS
iOS color to transparent in UIImage
I have stumbled across some strange behavior of CGImageCreateWithMaskingColors in conjunction with UIImagePNGRepresentation. This may or may not be related to your problem. I have found that if I use CGImageCreateWithMaskingColors and immediately add that image to an image view, the transparency appears to have been applied correctly.
But in iOS 7, if I then:
take this image from CGImageCreateWithMaskingColors and create an NSData using UIImagePNGRepresentation; and
reload the image from that NSData using imageWithData,
then the resulting image no longer has its transparency.
To confirm this, I used writeToFile for this NSData and examined the saved file in a tool like Photoshop; the file does not have any transparency applied.
This only manifests itself in iOS 7; in iOS 6 it's fine.
But if I take the image from step 1 and round-trip it through drawInRect, the same process of saving and subsequently loading the image works fine.
This following code illustrates the issue:
- (UIImage*) processImage :(UIImage*) inputImage
{
const CGFloat colorMasking[6] = {255.0, 255.0, 255.0, 255.0, 255.0, 255.0}; //CGImageCreateWithMaskingColors expects CGFloat, not float
CGImageRef imageRef = CGImageCreateWithMaskingColors(inputImage.CGImage, colorMasking);
UIImage* finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
// If I put this image in an image view, I see the transparency fine.
self.imageView.image = finalImage; // this works
// But if I save it to disk and the file does _not_ have any transparency
NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
NSString *pathWithoutTransparency = [documentsPath stringByAppendingPathComponent:@"image-but-no-transparency.png"];
NSData *data = UIImagePNGRepresentation(finalImage);
[data writeToFile:pathWithoutTransparency atomically:YES]; // save it so I can check out the file in Photoshop
// In iOS 7, the following imageview does not honor the transparency
self.imageView2.image = [UIImage imageWithData:data]; // this does not work in iOS 7
// but, if I round-trip the original image through `drawInRect` one final time,
// the transparency works
UIGraphicsBeginImageContextWithOptions(finalImage.size, NO, 1.0);
[finalImage drawInRect:CGRectMake(0, 0, finalImage.size.width, finalImage.size.height)];
UIImage *anotherRendition = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
data = UIImagePNGRepresentation(anotherRendition);
NSString *pathWithTransparency = [documentsPath stringByAppendingPathComponent:@"image-with-transparency.png"];
[data writeToFile:pathWithTransparency atomically:YES];
// But this image is fine
self.imageView3.image = [UIImage imageWithContentsOfFile:pathWithTransparency]; // this does work
return anotherRendition;
}
I was loading a JPEG which for some reason loads with an alpha channel, which won't work when masking, so here I recreate the CGImage ignoring the alpha channel. There may be a better way of doing this, but it works!
- (UIImage *)imageWithChromaKeyMasking {
const CGFloat colorMasking[6]={255.0,255.0,255.0,255.0,255.0,255.0};
CGImageRef oldImage = self.CGImage;
CGBitmapInfo oldInfo = CGImageGetBitmapInfo(oldImage);
CGBitmapInfo newInfo = (oldInfo & (UINT32_MAX ^ kCGBitmapAlphaInfoMask)) | kCGImageAlphaNoneSkipLast;
CGDataProviderRef provider = CGImageGetDataProvider(oldImage); //"Get" rule: we don't own this, so we must not release it
CGImageRef newImage = CGImageCreate(self.size.width, self.size.height, CGImageGetBitsPerComponent(oldImage), CGImageGetBitsPerPixel(oldImage), CGImageGetBytesPerRow(oldImage), CGImageGetColorSpace(oldImage), newInfo, provider, NULL, false, kCGRenderingIntentDefault);
CGImageRef im = CGImageCreateWithMaskingColors(newImage, colorMasking);
CGImageRelease(newImage); //we created newImage, so we release it
UIImage *ret = [UIImage imageWithCGImage:im];
CGImageRelease(im);
return ret;
}
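A hypothetical usage, assuming the method above is declared in a UIImage category (the asset name is made up):
UIImage *photo = [UIImage imageNamed:@"photo.jpg"];
UIImage *masked = [photo imageWithChromaKeyMasking]; //white pixels become transparent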
So the app I'm working on lets users write text, and the app translates that text into separate UILabels (one UILabel that contains only the text, and another UILabel that is the same width as the text but with a transparent-colour background). I want to combine all the UILabels with the UIImage in the background to create a new UIImage.
So far I have the following code, but it does not produce any tangible results; I would love anyone's help with this problem:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 416), NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[_imgViewFinal.layer renderInContext:ctx];
CGContextSaveGState(ctx);
for (int i = 0; i < _allLabels.count; i++) {
UILabel *tempLabelBg = [_allBgs objectAtIndex:i];
CGContextTranslateCTM(ctx, tempLabelBg.frame.origin.x, tempLabelBg.frame.origin.y);
CGContextSaveGState(ctx);
[tempLabelBg.layer renderInContext:ctx];
CGContextRestoreGState(ctx);
UILabel *tempLabelText = [_allLabels objectAtIndex:i];
[tempLabelText drawTextInRect:tempLabelText.frame];
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 1.0);
UIImage * imagePNG = [UIImage imageWithData:imgData];
UIGraphicsEndImageContext();
*I know that drawTextInRect does not use the context ref I create. I'm not sure how to draw the text into the context. I don't think I'm using CGContextSaveGState and CGContextRestoreGState properly either.
UIGraphicsBeginImageContextWithOptions(myView.bounds.size, myView.opaque, 0.0);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
where myView is the view holding whatever you want in the image (img) we're creating. I didn't test it with labels, but it should work fine.
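If you do want to keep the per-label drawing approach from the question, a sketch of a corrected loop might look like this (assuming each text label shares its background label's origin): save the state before translating so each restore undoes that label's translation, and draw each label in its own bounds coordinates since the CTM already positions it:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 416), NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[_imgViewFinal.layer renderInContext:ctx];
for (NSUInteger i = 0; i < _allLabels.count; i++) {
    UILabel *bg = [_allBgs objectAtIndex:i];
    UILabel *text = [_allLabels objectAtIndex:i];
    CGContextSaveGState(ctx); //save FIRST, so the restore undoes the translation
    CGContextTranslateCTM(ctx, bg.frame.origin.x, bg.frame.origin.y);
    [bg.layer renderInContext:ctx];
    [text drawTextInRect:text.bounds]; //bounds, not frame: the CTM already positions us
    CGContextRestoreGState(ctx);
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();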