I developed an app on iOS 5 and iOS 6. After I upgraded to Xcode 5 and iOS 7, I have some new bugs to play with.
The main one is that the color masking no longer works. The exact same code still compiles and works on a phone running iOS 6; on iOS 7, the masked color is still there. I tried to find an answer on Google, but haven't found one. Is this a bug in iOS 7, or does anybody know of a better way of doing color masking?
Here is the code:
- (UIImage *)processImage:(UIImage *)image
{
    // Round-trip through JPEG to strip the alpha channel (masking requires an image without alpha)
    UIImage *inputImage = [UIImage imageWithData:UIImageJPEGRepresentation(image, 1.0)];
    const float colorMasking[6] = {100.0, 255.0, 0.0, 100.0, 100.0, 255.0};
    CGImageRef imageRef = CGImageCreateWithMaskingColors(inputImage.CGImage, colorMasking);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return finalImage;
}
Here are a couple of StackOverflow posts I found that helped me get it working on iOS 6 in the first place:
Transparency iOS
iOS color to transparent in UIImage
I have stumbled across some strange behavior of CGImageCreateWithMaskingColors in conjunction with UIImagePNGRepresentation. This may or may not be related to your problem. I have found that if I use CGImageCreateWithMaskingColors and immediately add that image to an image view, the transparency appears to have been applied correctly.
But in iOS 7, if I then take the image from CGImageCreateWithMaskingColors, create an NSData from it using UIImagePNGRepresentation, and reload the image from that NSData using imageWithData, the resulting image no longer has its transparency.
To confirm this, if I write this NSData to a file with writeToFile and examine the saved image in a tool like Photoshop, I can confirm that the file does not have any transparency applied.
This only manifests itself in iOS 7. In iOS 6 it's fine.
But if I take the image in step 1 and roundtrip it through drawInRect, the same process of saving the image and subsequently loading it works fine.
The following code illustrates the issue:
- (UIImage *)processImage:(UIImage *)inputImage
{
    const CGFloat colorMasking[6] = {255.0, 255.0, 255.0, 255.0, 255.0, 255.0};
    CGImageRef imageRef = CGImageCreateWithMaskingColors(inputImage.CGImage, colorMasking);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    // If I put this image in an image view, I see the transparency fine.
    self.imageView.image = finalImage; // this works

    // But if I save it to disk, the file does _not_ have any transparency.
    NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
    NSString *pathWithoutTransparency = [documentsPath stringByAppendingPathComponent:@"image-but-no-transparency.png"];
    NSData *data = UIImagePNGRepresentation(finalImage);
    [data writeToFile:pathWithoutTransparency atomically:YES]; // save it so I can check out the file in Photoshop

    // In iOS 7, the following image view does not honor the transparency.
    self.imageView2.image = [UIImage imageWithData:data]; // this does not work in iOS 7

    // But if I round-trip the original image through drawInRect one final time,
    // the transparency works.
    UIGraphicsBeginImageContextWithOptions(finalImage.size, NO, 1.0);
    [finalImage drawInRect:CGRectMake(0, 0, finalImage.size.width, finalImage.size.height)];
    UIImage *anotherRendition = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    data = UIImagePNGRepresentation(anotherRendition);
    NSString *pathWithTransparency = [documentsPath stringByAppendingPathComponent:@"image-with-transparency.png"];
    [data writeToFile:pathWithTransparency atomically:YES];

    // But this image is fine.
    self.imageView3.image = [UIImage imageWithContentsOfFile:pathWithTransparency]; // this does work

    return anotherRendition;
}
I was loading a JPEG which, for some reason, loads with an alpha channel, and masking won't work on an image with an alpha channel. So here I recreate the CGImage, ignoring the alpha channel. There may be a better way of doing this, but it works!
- (UIImage *)imageWithChromaKeyMasking {
    const CGFloat colorMasking[6] = {255.0, 255.0, 255.0, 255.0, 255.0, 255.0};

    CGImageRef oldImage = self.CGImage;
    CGBitmapInfo oldInfo = CGImageGetBitmapInfo(oldImage);
    // Keep everything except the alpha info, then mark the alpha bytes as "skip".
    CGBitmapInfo newInfo = (oldInfo & ~kCGBitmapAlphaInfoMask) | kCGImageAlphaNoneSkipLast;

    // CGImageGetDataProvider follows the Get rule: we do not own the provider,
    // so we must not release it.
    CGDataProviderRef provider = CGImageGetDataProvider(oldImage);
    CGImageRef newImage = CGImageCreate(self.size.width, self.size.height,
                                        CGImageGetBitsPerComponent(oldImage),
                                        CGImageGetBitsPerPixel(oldImage),
                                        CGImageGetBytesPerRow(oldImage),
                                        CGImageGetColorSpace(oldImage),
                                        newInfo, provider, NULL, false,
                                        kCGRenderingIntentDefault);

    CGImageRef im = CGImageCreateWithMaskingColors(newImage, colorMasking);
    CGImageRelease(newImage); // we own newImage (Create rule), so release it
    UIImage *ret = [UIImage imageWithCGImage:im];
    CGImageRelease(im);
    return ret;
}
I'm implementing a react-native component for iOS and need to return a UIImage.
What I have is return [UIImage imageNamed:@"myAsset"];, which works, but the image presented is way too small.
How can I load an asset image and make it bigger?
Another aspect: the implementation returning the UIImage is invoked quite often for objects we draw onto the screen, and they are all the same, so scaling on every call is maybe not a good idea; but I have no idea how to give assets a size. Last but not least, it's a PDF asset.
Here's what I've tried. Searching for image scaling led me to this:
NSURL *url = [NSBundle.mainBundle URLForResource:@"myAsset" withExtension:nil];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data scale:0.5];
return image;
but url comes back nil, so it's not working.
Then I found this:
UIImage *image = [UIImage imageNamed:@"myAsset"];
NSData *rawData = (__bridge NSData *)CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
return [UIImage imageWithData:rawData scale:0.5];
This somehow also doesn't work at all.
I hope you can help me; thank you in advance.
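You can redraw the image at whatever size you need; here is a simple resize helper (Swift 2-era syntax, matching the age of the question):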
func resizeImage(image: UIImage, newWidth: CGFloat, newHeight: CGFloat) -> UIImage {
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    image.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
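Since the question notes that the method is invoked frequently for identical objects, it may also be worth caching the scaled result instead of redrawing on every call. A minimal Objective-C sketch (the helper name scaledAssetNamed:size: is hypothetical, not a react-native or UIKit API):

// Hypothetical caching wrapper -- the method name and key format are illustrative.
+ (UIImage *)scaledAssetNamed:(NSString *)name size:(CGSize)size {
    static NSCache *cache;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        cache = [[NSCache alloc] init];
    });
    NSString *key = [NSString stringWithFormat:@"%@-%@", name, NSStringFromCGSize(size)];
    UIImage *cached = [cache objectForKey:key];
    if (cached) {
        return cached; // same object drawn often: reuse the scaled image
    }
    UIImage *image = [UIImage imageNamed:name];
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // 0.0 = screen scale
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (scaled) {
        [cache setObject:scaled forKey:key];
    }
    return scaled;
}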
I have a question about the correct use of UIImage and the drawInRect: method.
I am using an iPad 4 and Xcode 6.1.1 on iOS 8.1.2; I have tried both with and without ARC.
So, I have an image, "1.jpg".
Image Properties:
Dimension: 7500 x 8871 pixels
Resolution: 72 pixels/inch
Color Space: RGB
Alpha Channel: NO
I need to rescale the original image "1.jpg"
I use this code:
UIImage *originalImage = [UIImage imageNamed:@"1.jpg"];
CGSize newSize = CGSizeMake(4096, 4096);
originalImage = [self GetScaledImage:originalImage andSize:newSize];
//----Scaling Method----------------------//
- (UIImage *)GetScaledImage:(UIImage *)inputImage andSize:(CGSize)inputSize
{
    UIImage *newImage = [[[UIImage alloc] initWithCGImage:inputImage.CGImage] autorelease];
    if (newImage)
    {
        UIGraphicsBeginImageContext(inputSize);
        [newImage drawInRect:CGRectMake(0, 0, inputSize.width, inputSize.height)];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return newImage;
}
My question:
if I use newSize (4096, 4096):
UIImage *originalImage = [UIImage imageNamed:@"1.jpg"];
CGSize newSize = CGSizeMake(4096, 4096);
Memory now: 3.1 MB
originalImage = [self GetScaledImage:originalImage andSize:newSize];
Memory now: 4.3 MB
and it works correctly.
But if I use newSize (3000, 4096), I get this:
UIImage *originalImage = [UIImage imageNamed:@"1.jpg"];
CGSize newSize = CGSizeMake(3000, 4096);
Memory now: 3.1 MB
originalImage = [self GetScaledImage:originalImage andSize:newSize];
Memory now: 52 MB
and it does not work correctly.
And the magic: newSize (4096, 3000) also works correctly, yet (3000, 4096) does not.
So, my question: How does it work?
You can solve the problem by using the following method instead.
#import <ImageIO/ImageIO.h> // for the CGImageSource APIs

+ (UIImage *)scaleImageWithData:(NSData *)data
                       withSize:(CGSize)size
                          scale:(CGFloat)scale
                    orientation:(UIImageOrientation)orientation {
    CGFloat maxPixelSize = MAX(size.width, size.height);
    CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)data, nil);
    NSDictionary *options = @{(__bridge id)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
                              (__bridge id)kCGImageSourceThumbnailMaxPixelSize: @(maxPixelSize)};
    // CGImageSourceCreateThumbnailAtIndex decodes straight to the target size,
    // so the full-resolution bitmap is never inflated in memory.
    CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, (__bridge CFDictionaryRef)options);
    UIImage *resultImage = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
    CGImageRelease(imageRef);
    CFRelease(sourceRef);
    return resultImage;
}
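Usage might look like this sketch (SomeClass stands in for wherever the method lives, and loading "1.jpg" via URLForResource: assumes it is a bundle resource rather than an asset-catalog entry):

// Hypothetical call site -- class name and resource loading are assumptions.
NSData *imageData = [NSData dataWithContentsOfURL:
                     [[NSBundle mainBundle] URLForResource:@"1" withExtension:@"jpg"]];
UIImage *scaled = [SomeClass scaleImageWithData:imageData
                                       withSize:CGSizeMake(3000, 4096)
                                          scale:[UIScreen mainScreen].scale
                                    orientation:UIImageOrientationUp];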
I suppose there is no magic: the CGImage object is not deallocated because something still owns it. To check this, assign image.CGImage to a variable and then release it with
CFRelease(cgImage);
Also, try clearing the graphics context after all processing is done.
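An alternative to releasing things by hand: under ARC, you can wrap the scaling in an explicit autorelease pool so the intermediate images are drained promptly. A hedged sketch, reusing the question's method:

// Sketch (assuming ARC): intermediates created inside the pool are released at its end.
UIImage *result = nil;
@autoreleasepool {
    UIImage *originalImage = [UIImage imageNamed:@"1.jpg"];
    result = [self GetScaledImage:originalImage andSize:CGSizeMake(3000, 4096)];
}
// result is held by a strong reference, so it survives the pool.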
Basically I have a main UIImage, which acts as a background/border. Within that UIImage I have two more UIImages, vertically split, with a gap around them so you can still see a border from the main background UIImage. On each side I have a UILabel to describe the images. Below is a picture of what I mean, to put this into context.
What I want to achieve is to merge this into one image, keeping all of the current positions, layouts, image layouts (Aspect Fill), label sizes, and label background colours the same. I also want this image to be the same quality, so it still looks good.
I have looked at many other StackOverflow questions and have so far come up with the following, but it has these problems:
Doesn't position the image labels in their correct places and sizes
Doesn't have the background colour for the labels or the main image
Doesn't render the images as Aspect Fill (like the UIImageViews), so the outside of each picture is shown as well and isn't cropped properly, like in the example above
Below is my code so far. Can anyone help me achieve the result above? I am fairly new to iOS development and am struggling a bit:
- (UIImage *)renderImagesForSharing {
    CGSize newImageSize = CGSizeMake(640, 640);
    NSLog(@"CGSize %@", NSStringFromCGSize(newImageSize));

    UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0);
    [self.mainImage.layer renderInContext:UIGraphicsGetCurrentContext()];

    [self.beforeImageSide.image drawInRect:CGRectMake(0, 0, newImageSize.width / 2, newImageSize.height)];
    [self.afterImageSize.image drawInRect:CGRectMake(320, 0, newImageSize.width / 2, newImageSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
    [self.beforeLabel drawTextInRect:CGRectMake(60.0f, 0.0f, 200.0f, 50.0f)];
    [self.afterLabel drawTextInRect:CGRectMake(0.0f, 0.0f, 100.0f, 50.0f)];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    NSData *imgData = UIImageJPEGRepresentation(image, 0.9);
    UIImage *imagePNG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG representation
    UIGraphicsEndImageContext();
    return imagePNG;
}
Thank you in advance for any help guys!
I don't understand why you want to use drawInRect: to accomplish this task.
Since you have the images and everything else, you can simply build a view laid out as shown in your picture, then take a snapshot of it like this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0); // you can use PNG too
[theImageData writeToFile:@"example.jpeg" atomically:YES];
Change self.view to the view you just created.
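For example, a rough sketch (the frames and subviews are placeholders for your actual layout; contentMode is what gives you the Aspect Fill cropping the drawInRect: approach was missing):

// Hypothetical composition view -- frames are placeholders.
UIView *composite = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 640, 640)];
composite.backgroundColor = self.mainImage.backgroundColor;

UIImageView *beforeView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 60, 300, 520)];
beforeView.image = self.beforeImageSide.image;
beforeView.contentMode = UIViewContentModeScaleAspectFill; // Aspect Fill, as in the original layout
beforeView.clipsToBounds = YES; // crop whatever overflows the frame
[composite addSubview:beforeView];
// ... add the second image view and the two labels the same way ...

UIGraphicsBeginImageContextWithOptions(composite.bounds.size, NO, 0.0);
[composite.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();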
This should give you some idea:
UIGraphicsBeginImageContextWithOptions(DiagnosisView.bounds.size, DiagnosisView.opaque, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor redColor] set];
CGContextFillRect(ctx, DiagnosisView.bounds); // fill in the view's own coordinate space, not its frame
[DiagnosisView.layer renderInContext:ctx];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSString *imagePath = [KdiagnosisFolderPath stringByAppendingPathComponent:FileName];
NSData *pngData = UIImagePNGRepresentation(img);
[pngData writeToFile:imagePath atomically:YES];
pngData = nil;
imagePath = nil;
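On iOS 7 and later, drawViewHierarchyInRect:afterScreenUpdates: is an alternative to renderInContext: that is usually faster; a minimal sketch using the same view:

// iOS 7+ snapshot alternative (sketch).
UIGraphicsBeginImageContextWithOptions(DiagnosisView.bounds.size, DiagnosisView.opaque, 0.0);
[DiagnosisView drawViewHierarchyInRect:DiagnosisView.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();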
I'm capturing a signature and saving it with an image mask. The image mask renders properly, but the main signature behaves abnormally, appearing as just a couple of lines.
Here is my code:
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 1.0); // retina res
[self.imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
[imageView.image drawInRect:CGRectMake(0, 0, 703, 273)];
[maskImages.image drawAtPoint:CGPointMake(10, 10) blendMode:kCGBlendModeNormal alpha:0.2];
[lblAckNo drawTextInRect:CGRectMake(320, 230, 100, 50)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
[[UIColor redColor] set];
NSData *imgData = UIImageJPEGRepresentation(image, 1.0);
UIGraphicsEndImageContext();
NSString *jpgPath = @"/Users/kumaralakshmanna/Pictures/Test.jpg";
[UIImageJPEGRepresentation(image, 1.0) writeToFile:jpgPath atomically:YES];
Here are screenshots of what I'm getting.
Any solution to overcome this issue? Thanks.
Make sure that you are drawing using the same CGSize. You are probably using two different sizes to capture the image and to draw it, so it gets stretched.
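In other words, derive every rect from the one size you pass to UIGraphicsBeginImageContextWithOptions. A sketch reusing the names from the question (the hard-coded 703x273 rect is what I suspect causes the stretching):

// Sketch: one CGSize drives both the context and the draw rects.
CGSize size = imageView.bounds.size;
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
[imageView.image drawInRect:CGRectMake(0, 0, size.width, size.height)];
[maskImages.image drawAtPoint:CGPointMake(10, 10) blendMode:kCGBlendModeNormal alpha:0.2];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();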
How can I clear out the magenta part of a UIImage and make it transparent?
I've looked through numerous answers and links on SO and nothing works (e.g., How to make one color transparent on a UIImage? — answer 1 removes everything but red, and answer 2 apparently doesn't work because of Why is CGImageCreateWithMaskingColors() returning nil in this case?).
Update:
If I use CGImageCreateWithMaskingColors with the UIImage, I get a nil value. If I remove the alpha channel (I represent the image as JPEG and read it back), CGImageCreateWithMaskingColors returns an image painted on a black background.
Update 2, the code:
Returning nil:
const float colorMasking[6] = {222, 255, 222, 255, 222, 255};
CGImageRef imageRef = CGImageCreateWithMaskingColors(anchorWithMask.CGImage, colorMasking);
NSLog(@"image ref %@", imageRef);
// this is going to return a nil imageRef.
UIImage *image = [UIImage imageWithCGImage:imageRef];
Returning an image with a black background (which is normal, since there is no alpha channel):
UIImage *inputImage = [UIImage imageWithData:UIImageJPEGRepresentation(anchorWithMask, 1.0)];
const float colorMasking[6] = {222, 255, 222, 255, 222, 255};
CGImageRef imageRef = CGImageCreateWithMaskingColors(inputImage.CGImage, colorMasking);
NSLog(@"image ref %@", imageRef);
// imageRef is NOT nil.
UIImage *image = [UIImage imageWithCGImage:imageRef];
Update 3:
I got it working by adding the alpha channel after the masking process.
UIImage *image = [UIImage imageNamed:@"image.png"];
const float colorMasking[6] = {1.0, 1.0, 0.0, 0.0, 1.0, 1.0};
CGImageRef masked = CGImageCreateWithMaskingColors(image.CGImage, colorMasking);
image = [UIImage imageWithCGImage:masked];
CGImageRelease(masked); // avoid leaking the masked CGImage
You receive nil because the parameters you send are invalid. If you open the Apple documentation, you will see this:
components
An array of color components that specify a color or range of colors to mask the image with. The array must contain 2N values {min[1], max[1], ... min[N], max[N]} where N is the number of components in the color space of image. Each value in components must be a valid image sample value. If image has integer pixel components, then each value must be in the range [0 .. 2**bitsPerComponent - 1] (where bitsPerComponent is the number of bits/component of image). If image has floating-point pixel components, then each value may be any floating-point number which is a valid color component.
You can quickly open the documentation by Option-clicking a function or class name, like CGImageCreateWithMaskingColors.
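So for a typical 8-bit-per-component RGB image, every value must fall in 0–255, and masking magenta means a high red range, a low green range, and a high blue range. A hedged sketch using the question's anchorWithMask (the exact thresholds are guesses you would tune):

// Sketch: 8-bit RGB magenta range; thresholds are illustrative.
// Layout is {minR, maxR, minG, maxG, minB, maxB}, each in [0 .. 2**bitsPerComponent - 1].
// The source image must have no alpha channel, as established earlier in the thread.
const CGFloat colorMasking[6] = {200, 255, 0, 80, 200, 255};
CGImageRef masked = CGImageCreateWithMaskingColors(anchorWithMask.CGImage, colorMasking);
if (masked) {
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    // use result ...
}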
I made this static method, which removes a white background; you can use it, replacing the mask with the color range you want to remove:
+ (UIImage *)processImage:(UIImage *)image
{
    // Use CGFloat (not float): CGImageCreateWithMaskingColors expects CGFloat
    // components, and on 64-bit devices CGFloat is a double.
    const CGFloat colorMasking[6] = {222, 255, 222, 255, 222, 255};
    CGImageRef imageRef = CGImageCreateWithMaskingColors(image.CGImage, colorMasking);
    UIImage *imageB = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return imageB;
}
I did this with CIImage, where the post-processing output from Vision is a pixelBuffer:
let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
let filteredImage = ciImage.applyingFilter("CIMaskToAlpha")
self.picture.image = UIImage(ciImage: filteredImage)