iOS load scaled UIImage from assets

I'm implementing a react-native component for iOS and need to return a UIImage.
What I have is return [UIImage imageNamed:@"myAsset"]; which works, but the image presented is far too small.
How can I load an asset image at a larger size?
Another aspect is that the implementation returning the UIImage is invoked quite often, for objects we draw onto the screen, but they are all identical, so scaling on every call is probably not a good idea; I just have no idea how to give an asset a size. Last but not least, it's a PDF asset.
What I've tried so far: I searched for image scaling and came up with this:
NSURL *url = [NSBundle.mainBundle URLForResource:@"myAsset" withExtension:nil];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data scale:0.5];
return image;
but url comes back nil, so this doesn't work.
Then I found this:
UIImage *image = [UIImage imageNamed:@"myAsset"];
NSData *rawData = (__bridge NSData *) CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
return [UIImage imageWithData:rawData scale:0.5];
which somehow doesn't work at all either.
Now I hope maybe you can help me and thank you in advance.

func resizeImage(image: UIImage, newWidth: CGFloat, newHeight: CGFloat) -> UIImage {
    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
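The question also notes that the method returning the UIImage is invoked frequently for identical objects, so rescaling on every call is wasteful. A minimal Objective-C sketch of caching the scaled result, my own assumption rather than part of the thread (the asset name "myAsset" comes from the question; the 24 x 24 point target size is a placeholder):
static UIImage *ScaledAssetImage(void) {
    // Render the asset at the target size exactly once, then reuse it.
    static UIImage *cached = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        UIImage *original = [UIImage imageNamed:@"myAsset"];
        CGSize targetSize = CGSizeMake(24.0, 24.0); // placeholder size in points
        UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0); // 0.0 = use screen scale
        [original drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        cached = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    });
    return cached;
}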

Related

Download image through AFNetworking & edit it

I'm trying to download some image files from a remote server through AFNetworking (UIImageView+AFNetworking.h), retrieve the UIImage from each, and edit that image. Editing means drawing the downloaded image on top of another PNG file (a background image, a UIImage).
I have tried several code blocks and finally I'm stuck here: I'm getting only a black box and can't see the actual server image.
- (UIImage *)downloadImages:(NSString *)url {
    UIImageView *downloadedImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 40, 40)];
    [downloadedImageView setImageWithURL:[NSURL URLWithString:url]
                        placeholderImage:[UIImage imageNamed:@"Loading_image"]];
    UIGraphicsBeginImageContextWithOptions(downloadedImageView.bounds.size, downloadedImageView.opaque, 0.0);
    [downloadedImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Then I'm calling that function in a for loop:
for (int i = 0; i < [Images count]; i++) {
    NSString *image_Url = [NSString stringWithFormat:@"%@%@", imageURL, imagename];
    UIImage *downloadimage = [[UIImage alloc] init];
    downloadimage = [self downloadImages:image_Url];
    UIImage *bottomImage = [UIImage imageNamed:@"map_marker_black"];
    CGSize newSize = CGSizeMake(60, 60);
    UIGraphicsBeginImageContext(newSize);
    [backgroundImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    [downloadimage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.7];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Where am I going wrong? Your help is highly appreciated. Thanks a lot.
A few issues. Let's start with the download code. There is no reason to create an image view and draw it just to obtain an image. Use the image data directly:
- (UIImage *)downloadImage:(NSString *)url {
    NSData *imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:url]];
    UIImage *image = [UIImage imageWithData:imageData];
    return image;
}
The big issue with your original code is that the AFNetworking method you used loads the image in the background, so you were always drawing the "Loading_image" placeholder rather than the downloaded image.
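If you do want to keep AFNetworking for the download, its completion-based variant in UIImageView+AFNetworking only hands you the image once it has actually arrived. A rough sketch, my own rather than part of the original answer:
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
[downloadedImageView setImageWithURLRequest:request
                           placeholderImage:[UIImage imageNamed:@"Loading_image"]
                                    success:^(NSURLRequest *request, NSHTTPURLResponse *response, UIImage *image) {
                                        // Only here is the downloaded image available for compositing.
                                        downloadedImageView.image = image;
                                    }
                                    failure:nil];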
Now the drawing code:
// Only need to load this once
UIImage *backgroundImage = [UIImage imageNamed:@"map_marker_black"];
// What is the loop used for?
for (NSUInteger i = 0; i < [Images count]; i++) {
    // Where do imageURL and imagename come from?
    NSString *imageURLString = [NSString stringWithFormat:@"%@%@", imageURL, imagename];
    UIImage *downloadImage = [self downloadImage:imageURLString];
    CGRect newRect = CGRectMake(0, 0, 60, 60);
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    [backgroundImage drawInRect:newRect];
    [downloadImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:0.7];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // Do something here with newImage
    UIGraphicsEndImageContext();
}
Note the comments in the code.

Saving UIImage in custom pixel dimension

I have the implementation below to save a UIView as an image to the device's photo album. It works correctly; however, the saved image adapts to the device's screen resolution. For instance, if I run it on an iPhone 5, the saved image will be 640 x 640 px. My goal is to save custom-sized images, such as 1800 x 1800 px, on every device. I would really appreciate an example or any guidance that helps me find the right solution. Any other tips are welcome; what matters is producing images with a custom pixel size, even if that requires a different implementation.
- (IBAction)saveImg:(id)sender {
    // self.fullVw holds the image that I want to save
    UIImage *imageToSave = [self.fullVw pb_takeSnapshot];
    NSData *pngData = UIImagePNGRepresentation(imageToSave);
    UIImage *imageToSave2 = [UIImage imageWithData:pngData];
    UIImageWriteToSavedPhotosAlbum(imageToSave2, nil, nil, nil);
}
// This method is in a UIView category
- (UIImage *)pb_takeSnapshot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, 0.0);
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
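One way to get a fixed pixel size, sketched here as an assumption rather than an answer from this thread: render into a context whose point size equals the desired pixel size and pin the scale to 1.0, so the result no longer depends on the device. The name pb_takeSnapshotWithPixelSize: is a hypothetical companion to the category method above.
// Hypothetical companion in the same UIView category: renders the view
// hierarchy at an explicit pixel size. A scale of 1.0 makes points == pixels.
- (UIImage *)pb_takeSnapshotWithPixelSize:(CGSize)pixelSize {
    UIGraphicsBeginImageContextWithOptions(pixelSize, self.opaque, 1.0);
    [self drawViewHierarchyInRect:CGRectMake(0, 0, pixelSize.width, pixelSize.height)
               afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Calling [self.fullVw pb_takeSnapshotWithPixelSize:CGSizeMake(1800, 1800)] would then produce the 1800 x 1800 px output the question asks for, at the cost of stretching the view if its aspect ratio differs.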

Memory leak with UIImage method "drawInRect"

I have a question about the correct use of UIImage and its drawInRect method.
I'm using an iPad 4, Xcode 6.1.1, and iOS 8.1.2; I have tried both with and without ARC.
So, I have an image, "1.jpg".
Image properties:
Dimensions: 7500 x 8871 pixels
Resolution: 72 pixels/inch
Color Space: RGB
Alpha Channel: NO
I need to rescale the original image "1.jpg".
I use this code:
UIImage *originalImage = [UIImage imageNamed:@"1.jpg"];
CGSize newSize = CGSizeMake(4096, 4096);
originalImage = [self GetScaledImage:originalImage andSize:newSize];
//---- Scaling method ----//
- (UIImage *)GetScaledImage:(UIImage *)inputImage andSize:(CGSize)inputSize
{
    UIImage *newImage = [[[UIImage alloc] initWithCGImage:inputImage.CGImage] autorelease]; // non-ARC variant
    if (newImage)
    {
        UIGraphicsBeginImageContext(inputSize);
        [newImage drawInRect:CGRectMake(0, 0, inputSize.width, inputSize.height)];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return newImage;
}
My question: if I use newSize (4096, 4096):
UIImage *originalImage = [UIImage imageNamed:@"1.jpg"];
CGSize newSize = CGSizeMake(4096, 4096);
Memory now: 3.1 MB
originalImage = [self GetScaledImage:originalImage andSize:newSize];
Memory now: 4.3 MB
and it works correctly.
But if I use newSize (3000, 4096), I get this:
UIImage *originalImage = [UIImage imageNamed:@"1.jpg"];
CGSize newSize = CGSizeMake(3000, 4096);
Memory now: 3.1 MB
originalImage = [self GetScaledImage:originalImage andSize:newSize];
Memory now: 52 MB
and it works incorrectly.
And the strange part: newSize (4096, 3000) also works correctly, but (3000, 4096) does not.
So, my question: how does this work?
You can solve the problem by using the following method instead.
+ (UIImage *)scaleImageWithData:(NSData *)data
                       withSize:(CGSize)size
                          scale:(CGFloat)scale
                    orientation:(UIImageOrientation)orientation {
    CGFloat maxPixelSize = MAX(size.width, size.height);
    CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)data, nil);
    NSDictionary *options = @{(__bridge id)kCGImageSourceCreateThumbnailFromImageAlways: (__bridge id)kCFBooleanTrue,
                              (__bridge id)kCGImageSourceThumbnailMaxPixelSize: [NSNumber numberWithFloat:maxPixelSize]};
    CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, (__bridge CFDictionaryRef)options);
    UIImage *resultImage = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
    CGImageRelease(imageRef);
    CFRelease(sourceRef);
    return resultImage;
}
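A usage sketch, assuming the data is loaded from the question's "1.jpg" and that the method lives on a helper class here called ImageScaler (a placeholder name):
NSString *path = [[NSBundle mainBundle] pathForResource:@"1" ofType:@"jpg"];
NSData *imageData = [NSData dataWithContentsOfFile:path];
UIImage *scaled = [ImageScaler scaleImageWithData:imageData
                                         withSize:CGSizeMake(3000, 4096)
                                            scale:1.0
                                      orientation:UIImageOrientationUp];
Because ImageIO produces the thumbnail directly from the encoded data, the scaled result is created without round-tripping through a full-size drawing context, which is what sidesteps the memory spike the question describes.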
I suppose there is no magic. The CGImage object is not deallocated because it is still owned. To handle this, assign image.CGImage to a variable and then release it with
CFRelease(cgImage);
Also try to clear the graphics context after all processing is done.

CGImageCreateWithMaskingColors Doesn't Work with iOS7

I've developed an app on iOS 5 and iOS 6. After I upgraded to Xcode 5 and iOS 7, I have some new bugs to play with.
The main one is that the color masking no longer works. The exact same code still compiles and works on a phone with iOS 6; on iOS 7, the masked color is still there. I tried to find an answer on Google but haven't found one. Is this a bug in iOS 7, or does anybody know a better way of doing color masking?
Here is the code:
- (UIImage *)processImage:(UIImage *)image
{
    UIImage *inputImage = [UIImage imageWithData:UIImageJPEGRepresentation(image, 1.0)];
    const float colorMasking[6] = {100.0, 255.0, 0.0, 100.0, 100.0, 255.0};
    CGImageRef imageRef = CGImageCreateWithMaskingColors(inputImage.CGImage, colorMasking);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return finalImage;
}
Here are a couple of Stack Overflow posts that helped me get it working on iOS 6 in the first place:
Transparency iOS
iOS color to transparent in UIImage
I have stumbled across some strange behavior of CGImageCreateWithMaskingColors in conjunction with UIImagePNGRepresentation. This may or may not be related to your problem. I have found that if I use CGImageCreateWithMaskingColors and immediately add the resulting image to an image view, the transparency appears to have been applied correctly.
But in iOS 7, if I then:
take this image from CGImageCreateWithMaskingColors and create an NSData using UIImagePNGRepresentation; and
reload the image from that NSData using imageWithData,
then the resulting image no longer has its transparency.
To confirm this, if I writeToFile for this NSData and examine the saved image in a tool like Photoshop, I can confirm that the file does not have any transparency applied.
This only manifests itself in iOS 7; on iOS 6 it's fine.
But if I take the image in step 1 and round-trip it through drawInRect, the same process of saving the image and subsequently loading it works fine.
The following code illustrates the issue:
- (UIImage *)processImage:(UIImage *)inputImage
{
    const float colorMasking[6] = {255.0, 255.0, 255.0, 255.0, 255.0, 255.0};
    CGImageRef imageRef = CGImageCreateWithMaskingColors(inputImage.CGImage, colorMasking);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    // If I put this image in an image view, I see the transparency fine.
    self.imageView.image = finalImage; // this works

    // But if I save it to disk, the file does _not_ have any transparency.
    NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
    NSString *pathWithoutTransparency = [documentsPath stringByAppendingPathComponent:@"image-but-no-transparency.png"];
    NSData *data = UIImagePNGRepresentation(finalImage);
    [data writeToFile:pathWithoutTransparency atomically:YES]; // save it so I can check out the file in Photoshop

    // In iOS 7, the following image view does not honor the transparency.
    self.imageView2.image = [UIImage imageWithData:data]; // this does not work in iOS 7

    // But if I round-trip the original image through drawInRect one final time,
    // the transparency works.
    UIGraphicsBeginImageContextWithOptions(finalImage.size, NO, 1.0);
    [finalImage drawInRect:CGRectMake(0, 0, finalImage.size.width, finalImage.size.height)];
    UIImage *anotherRendition = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    data = UIImagePNGRepresentation(anotherRendition);
    NSString *pathWithTransparency = [documentsPath stringByAppendingPathComponent:@"image-with-transparency.png"];
    [data writeToFile:pathWithTransparency atomically:YES];

    // But this image is fine.
    self.imageView3.image = [UIImage imageWithContentsOfFile:pathWithTransparency]; // this does work

    return anotherRendition;
}
I was loading a JPEG which, for some reason, loads with an alpha channel, which won't work when masking, so here I recreate the CGImage ignoring the alpha channel. There may be a better way of doing this, but this works!
- (UIImage *)imageWithChromaKeyMasking {
    const CGFloat colorMasking[6] = {255.0, 255.0, 255.0, 255.0, 255.0, 255.0};
    CGImageRef oldImage = self.CGImage;
    CGBitmapInfo oldInfo = CGImageGetBitmapInfo(oldImage);
    // Replace the alpha info with "skip last" so the alpha channel is ignored.
    CGBitmapInfo newInfo = (oldInfo & (UINT32_MAX ^ kCGBitmapAlphaInfoMask)) | kCGImageAlphaNoneSkipLast;
    // CGImageGetDataProvider follows the Get rule: we don't own the provider, so we must not release it.
    CGDataProviderRef provider = CGImageGetDataProvider(oldImage);
    // Use the CGImage's pixel dimensions, not self.size, which is in points.
    CGImageRef newImage = CGImageCreate(CGImageGetWidth(oldImage), CGImageGetHeight(oldImage),
                                        CGImageGetBitsPerComponent(oldImage),
                                        CGImageGetBitsPerPixel(oldImage),
                                        CGImageGetBytesPerRow(oldImage),
                                        CGImageGetColorSpace(oldImage),
                                        newInfo, provider, NULL, false,
                                        kCGRenderingIntentDefault);
    CGImageRef im = CGImageCreateWithMaskingColors(newImage, colorMasking);
    UIImage *ret = [UIImage imageWithCGImage:im];
    CGImageRelease(newImage);
    CGImageRelease(im);
    return ret;
}
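Assuming the method above is declared in a UIImage category, usage is a one-liner (the file name below is a placeholder):
UIImage *jpeg = [UIImage imageNamed:@"photo.jpg"]; // hypothetical JPEG with a stray alpha channel
UIImage *masked = [jpeg imageWithChromaKeyMasking];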

UIImagePNGRepresentation returns nil data?

I am trying to make thumbnail images and save them to the documents directory.
The problem is that when I try to convert the thumbnail image to NSData, it returns nil.
Here is my code:
UIImage *thumbNailimage = [image thumbnailImage:40 transparentBorder:0.2 cornerRadius:0.2 interpolationQuality:1.0];
NSData *thumbNailimageData = UIImagePNGRepresentation(thumbNailimage); // returns nil
[thumbNailimageData writeToFile:[DOCUMENTPATH stringByAppendingPathComponent:@"1.png"] atomically:NO];
So what is the problem? I have also tried UIImageJPEGRepresentation, but it doesn't work for me either.
Thanks.
Try this:
UIGraphicsBeginImageContext(originalImage.size);
[originalImage drawInRect:CGRectMake(0, 0, originalImage.size.width, originalImage.size.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This creates a copy of the original UIImage. You can then call UIImagePNGRepresentation and it will work correctly.
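For completeness, the follow-up call on the copy would then be (a sketch; newImage is the copy created above):
NSData *pngData = UIImagePNGRepresentation(newImage); // no longer nil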
Try this code:
- (void)createThumbnail
{
    UIImage *originalImage = imgView2.image; // give your original image
    CGSize destinationSize = CGSizeMake(25, 25); // give your desired thumbnail size
    UIGraphicsBeginImageContext(destinationSize);
    [originalImage drawInRect:CGRectMake(0, 0, destinationSize.width, destinationSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    NSData *thumbNailimageData = UIImagePNGRepresentation(newImage);
    UIGraphicsEndImageContext();
    [thumbNailimageData writeToFile:[NSHomeDirectory() stringByAppendingPathComponent:@"1.png"] atomically:NO];
}
Hope this helps you. Happy coding!
To Swift programmers: Rickster's answer helped me a lot! UIImageJPEGRepresentation was crashing my app when selecting certain images. I'm sharing my extension of UIImage (a category, in Objective-C terms).
import UIKit

extension UIImage {
    /**
     Creates the UIImageJPEGRepresentation out of a UIImage.
     - Returns: Data
     */
    func generateJPEGRepresentation() -> Data {
        let newImage = self.copyOriginalImage()
        let newData = UIImageJPEGRepresentation(newImage, 0.75)
        return newData!
    }

    /**
     Copies the original image, which fixes the crash when extracting Data from a UIImage.
     - Returns: UIImage
     */
    private func copyOriginalImage() -> UIImage {
        UIGraphicsBeginImageContext(self.size)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
