difference between UIImageView.image = ... and UIImageView.layer.contents = ... - ios

Two ways to set a UIImage on a UIImageView:
First:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
Second:
self.imageview.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]);
What is the difference between the two ways?
Which one is better?
In fact, what I want to do is display part of one PNG in a UIImageView.
There are two ways:
First:
UIImage *image = [UIImage imageNamed:@"clothing.png"];
CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rect); // takes a CGImageRef, not a UIImage
self.imageview.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // Create-rule object, so release it
Second:
self.imageview2.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]); // way 2
self.imageview2.layer.contentsRect = rect;
Which one is better? Why? Thanks!

First:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
By using this you can directly assign your image to any UIImageView, while in the
Second:
self.imageview.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]);
you cannot assign a UIImage directly to the layer; you need to put a CGImage into layer.contents instead.
So the first way is the better choice. Thank you.

The first option is better. The __bridge cast exists only to hand the CGImage across the ARC boundary, which is extra ceremony the first form avoids entirely.

Of course the better way is the first one:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
Indeed, this is what a UIImageView is built for.
For clarification, every view (UIView, UIImageView, etc.) has a .layer property that holds its visual content, and here you are putting an image into it directly. You could have achieved the same result with a plain UIView. But in terms of performance and clarity, you should (indeed, have to) use the .image property.
Edit :
Even with your new edit, the first option is still the better one.
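One detail worth flagging for the cropping case: CGImageCreateWithImageInRect takes its rect in pixel coordinates of the CGImage, while layer.contentsRect is measured in unit coordinates (0.0 to 1.0). So the same rect cannot be passed to both, as in the question; the contentsRect route needs a conversion along these lines (a minimal sketch, assuming rect is in pixels of the underlying CGImage):
// Minimal sketch: convert a pixel-space rect into the unit coordinate
// space that layer.contentsRect expects. `rect` is assumed to be in pixels.
UIImage *image = [UIImage imageNamed:@"clothing.png"];
CGFloat w = CGImageGetWidth(image.CGImage);
CGFloat h = CGImageGetHeight(image.CGImage);
self.imageview2.layer.contents = (__bridge id)image.CGImage;
self.imageview2.layer.contentsRect = CGRectMake(rect.origin.x / w,
                                                rect.origin.y / h,
                                                rect.size.width / w,
                                                rect.size.height / h);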

You can do that, but then it is your responsibility to modify the image so that image.CGImage draws correctly with respect to imageOrientation, contentMode, and so on. If you set the image with imageView.image = image;, it is Apple's responsibility to do that.
Here is an example of the problem it causes:
Because image.CGImage doesn't carry the image orientation, if you set it directly and the source image is not UIImageOrientationUp, the image will appear rotated, unless you "fix the orientation" of the source image as shown below. (You can easily get an image that is not UIImageOrientationUp; just take a photo with your iPhone.)
// UIImage category method: redraws the bitmap so it is physically upright.
- (UIImage *)fixOrientation {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    // drawInRect: honors imageOrientation, so the context receives upright pixels.
    [self drawInRect:(CGRect){CGPointZero, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
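For reference, usage could look like this (a hypothetical snippet; it assumes fixOrientation is declared in a UIImage category):
// Hypothetical usage, assuming fixOrientation lives in a UIImage category.
UIImage *photo = info[UIImagePickerControllerOriginalImage]; // e.g. from an image picker
UIImage *normalized = [photo fixOrientation];
self.imageview.layer.contents = (__bridge id)normalized.CGImage;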
Here are two screenshots: the first sets image.CGImage directly on layer.contents, the second sets the image through imageView.image.
So always use imageView.image unless you know what you're doing.

Related

Xcode: Image becomes filled with color on UIBarItem while scaling, function initWithImage(UIImage)

Please help me find a solution to the following problem (iOS, Xcode).
I have a UIBarButtonItem component, the button that switches languages on the toolbar, created with initWithImage:. The problem is that when I try to scale the image it becomes filled with color... The picture is too big for the toolbar. I have changed the size of the picture manually, but it looks ugly (I need proper scaling).
I have tried to solve the problem:
1. through Scale
2. through Frame size
3. and variations of the two above :)
Some code:
UIImage *image;
image = [[UIImage imageNamed:@"image.png"] imageWithRenderingMode:UIImageRenderingModeAlwaysOriginal];
[self initWithImage:image
              style:UIBarButtonItemStyleBordered // or UIBarButtonItemStylePlain
             target:target
             action:action];
- (UIImage *)imageWithImages:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    // In the next line, pass 0.0 to use the current device's pixel scaling factor
    // (and thus account for Retina resolution). Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Please help me with this. It is quite a simple but tricky task; I have already spent several hours on it... Thank you in advance :)
You should not use the initWithImage: method.
If you want to display the exact image, use the initWithCustomView: method instead.
For example:
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image"]];
UIBarButtonItem *barButtonItem = [[UIBarButtonItem alloc] initWithCustomView:imageView];
You can scale down the image before setting it to the image view if you want to.
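Putting the pieces together, a sketch (it reuses the imageWithImages:scaledToSize: helper from the question; the 24x24 point size is an assumption, pick whatever fits your toolbar):
// Scale first using the question's helper (24x24 is an assumed toolbar-friendly size),
// then wrap the result in a custom view so UIBarButtonItem displays it as-is.
UIImage *scaled = [self imageWithImages:[UIImage imageNamed:@"image.png"] scaledToSize:CGSizeMake(24, 24)];
UIImageView *imageView = [[UIImageView alloc] initWithImage:scaled];
UIBarButtonItem *item = [[UIBarButtonItem alloc] initWithCustomView:imageView];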

How to add several UIImages to one base UIImage in iOS

I know there has to be a way to take one base image (in my case a transparent 4096x4096 base layer) and add several dozen (up to 100) very small (100x100) images to it at different locations, BUT I cannot figure out how to do it. Assume for the following that imageArray has many elements, each containing an NSDictionary with the following structure:
@{
    @"imagename" : imagename,
    @"x" : xYDict[@"x"],
    @"y" : xYDict[@"y"]
}
Now, here is the code (which is not working; all I get is a transparent image). The performance is STILL horrible, about 1-2 seconds per "image" in the loop, and the memory use is massive.
NSString *PNGHeatMapPath = [[myGizmoClass dataDir] stringByAppendingPathComponent:@"HeatMap.png"];
@autoreleasepool
{
    CGSize theFinalImageSize = CGSizeMake(4096, 4096);
    NSLog(@"populateHeatMapJSON: creating blank image");
    UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 0.0);
    UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
    for (NSDictionary *theDict in imageArray)
    {
        NSLog(@"populateHeatMapJSON: adding: %@", theDict[@"imagename"]);
        UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
        [image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
    }
    UIGraphicsEndImageContext();
    NSData *imageData = UIImagePNGRepresentation(theFinalImage);
    theFinalImage = nil;
    [imageData writeToFile:PNGHeatMapPath atomically:YES];
    NSLog(@"populateHeatMapJSON: PNGHeatMapPath = %@", PNGHeatMapPath);
}
I KNOW I am misunderstanding UIGraphics contexts... can someone help me out? I am trying to superimpose a bunch of UIImages on the blank transparent canvas at certain x,y locations, get the final image, and save it.
PS: This is also, if the simulator is anything to go by, leaking like heck. It goes up to 2 or 3 GB...
Thanks in advance.
EDIT:
I did find this and tried it, but it requires me to redraw the entire image, copy that image to a new one, and add to that new one. I was hoping for a "base image", "add this image", "add that image", "add the other image" flow that yields the new base image without all the copying:
Is this as good as it gets?
+ (UIImage *)drawImage:(UIImage *)fgImage
               inImage:(UIImage *)bgImage
               atPoint:(CGPoint)point
{
    UIGraphicsBeginImageContextWithOptions(bgImage.size, FALSE, 0.0);
    [bgImage drawInRect:CGRectMake(0, 0, bgImage.size.width, bgImage.size.height)];
    [fgImage drawInRect:CGRectMake(point.x, point.y, fgImage.size.width, fgImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
You are getting the image with UIGraphicsGetImageFromCurrentImageContext() before you have actually drawn anything. Move this call to after your loop and before the UIGraphicsEndImageContext().
Your drawing actually occurs on the line
[image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
The function UIGraphicsGetImageFromCurrentImageContext() gives you an image of the current contents of the context, so you need to ensure that all your drawing is done before you grab the image.
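Putting that into the question's loop, a minimal sketch of the corrected ordering (same variables as the question):
UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 0.0);
for (NSDictionary *theDict in imageArray)
{
    UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
    [image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
}
// Grab the composite only after all drawing is done, then close the context.
UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();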

How to get UIImage to send new image and not original image to server

The user picks an image with the picker, and the following code manipulates the image:
UIImage *image = info[UIImagePickerControllerOriginalImage];
_imageView.image = image;
_imageView.contentMode = UIViewContentModeScaleAspectFill;
_imageView.layer.cornerRadius = 10;
_imageView.clipsToBounds = YES;
_imageView.layer.shouldRasterize = YES;
_imageView.layer.rasterizationScale = [[UIScreen mainScreen] scale];
When I send the image to the server, it sends the original image and not the new image. I am guessing this is because UIImageView does not actually change the image but just makes on-the-fly visual changes on top of the original image.
Can someone point me in the right direction: what do I need to learn in order to make permanent changes to an image (or create a new image), or is there a simple way to make the changes I've made permanent?
Thanks
I suppose you are talking about getting a new image as it appears on screen (with the corner radius set, etc.), so you might want to try something like this:
// Use the view's bounds for both the context size and the draw rect;
// otherwise the snapshot may land outside the context.
UIGraphicsBeginImageContextWithOptions(self.imageView.bounds.size, NO, 0.0);
[self.imageView drawViewHierarchyInRect:self.imageView.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
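Then upload the snapshot rather than the image view's original image. Assuming your upload code takes NSData (an assumption about your networking layer), something like:
// Serialize the rendered snapshot; JPEG shown here, UIImagePNGRepresentation also works.
NSData *uploadData = UIImageJPEGRepresentation(newImage, 0.9);
// Hand uploadData to your networking code in place of the original image.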

Memory increases when merging multiple high resolution images into single image, iOS

I have to merge multiple images (all high resolution) into a single one, and it consumes a lot of memory. I saved the original images to a local directory and set resized versions on image views placed at different locations on the main image. When it is time to save the final merged image, I read the original images back from the local directory. Here the memory increases, which causes a crash (out of memory) for larger numbers of images.
Here is the code for retrieving an original image from the local directory:
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
Is there any other way to get images from the local directory without loading them into memory?
Thanks in advance
There is no way to load an image without it going into memory. With some image formats you could, in theory, implement your own reader that scales the image down while reading the file, so that the original size never ends up in memory, but that would require a lot of work for little gain.
Overall you would be better off just saving the different sizes of images as separate files and loading only the correct size (you seem to be scaling them based on the screen size, so there are not that many different versions required).
If you do keep resizing them on the fly, try to ensure that you get rid of the original versions as soon as possible, i.e., don't keep any image reference that is no longer required, and perhaps wrap the whole thing in @autoreleasepool (assuming ARC is being used):
@autoreleasepool {
    UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
    // Scale down, then drop the full-size reference as soon as possible.
    UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
    originalImage = nil;
    imageView.image = pThumbImage;
    pThumbImage = nil;
    // …
}
Similarly, treat any other image handling that creates intermediate versions the same way, i.e., get rid of references that are no longer required as soon as possible (such as by assigning nil or letting them fall out of scope), and put @autoreleasepool { … } around subsections that may generate temporary objects.
Found a solution; posting it as an answer to my own question since it might help other people. Reference: the Image I/O Programming Guide.
As an alternative to imageWithContentsOfFile:, one can use an image source.
Here is the code showing how I use it:
UIImage *originalWMImage = [self createCGImageFromFile:yourImagePath];
The method createCGImageFromFile: gets the image content without loading it all into memory.
- (UIImage *)createCGImageFromFile:(NSString *)path
{
    // Get the URL for the pathname passed to the function.
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageRef myImage = NULL;
    CGImageSourceRef myImageSource;
    CFDictionaryRef myOptions = NULL;
    CFStringRef myKeys[2];
    CFTypeRef myValues[2];
    // Set up options if you want them. The options here are for
    // caching the image in a decoded form and for using floating-point
    // values if the image format supports them.
    myKeys[0] = kCGImageSourceShouldCache;
    myValues[0] = (CFTypeRef)kCFBooleanTrue;
    myKeys[1] = kCGImageSourceShouldAllowFloat;
    myValues[1] = (CFTypeRef)kCFBooleanTrue;
    // Create the options dictionary.
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 2,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
    // Create an image source from the URL.
    myImageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)url, myOptions);
    CFRelease(myOptions);
    // Make sure the image source exists before continuing.
    if (myImageSource == NULL) {
        fprintf(stderr, "Image source is NULL.");
        return nil;
    }
    // Create an image from the first item in the image source.
    myImage = CGImageSourceCreateImageAtIndex(myImageSource, 0, NULL);
    CFRelease(myImageSource);
    // Make sure the image exists before continuing.
    if (myImage == NULL) {
        fprintf(stderr, "Image not created from image source.");
        return nil;
    }
    UIImage *result = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage); // CGImageSourceCreateImageAtIndex follows the Create rule.
    return result;
}
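If the goal is a downscaled image without decoding the full-size bitmap at all, Image I/O can also produce a size-capped thumbnail straight from the source. A minimal sketch (the method name and the maxPixelSize parameter are illustrative, not from the original answer):
// Sketch: build a thumbnail whose longest side is at most maxPixelSize,
// so the full-resolution bitmap is never decoded into memory.
- (UIImage *)thumbnailFromFile:(NSString *)path maxPixelSize:(CGFloat)maxPixelSize
{
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (source == NULL) return nil;
    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // honor EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)
    };
    CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (thumb == NULL) return nil;
    UIImage *result = [UIImage imageWithCGImage:thumb];
    CGImageRelease(thumb); // Create-rule object, release after wrapping
    return result;
}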
Here is the code where the resized image is simply assigned to the image view; then I perform scaling and rotation on the image view:
UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
[imageView setImage:pThumbImage];
And here is the saving code; it runs inside a for loop (up to the number of images to merge onto the main image):
// Get the size of the background canvas.
CGFloat backgroundWidth = canvasSize.width;
CGFloat backgroundHeight = canvasSize.height;
// Image view holding the stamp to be merged.
UIImageView *imageView = [[UIImageView alloc] initWithImage:stampImage];
[imageView setFrame:CGRectMake(0, 0, stampFrameSize.size.width, stampFrameSize.size.height)];
// Rotate the image view.
CGAffineTransform currentTransform = imageView.transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, radian);
[imageView setTransform:newTransform];
// Scale the image view.
CGRect imageFrame = [imageView frame];
// Create the final stamp view.
UIView *finalStamp = nil;
finalStamp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageFrame.size.width, imageFrame.size.height)];
// Center the stamp image inside its container.
[imageView setCenter:CGPointMake(imageFrame.size.width / 2, imageFrame.size.height / 2)];
[finalStamp addSubview:imageView];
// Render the stamp view into an image.
UIGraphicsBeginImageContext(finalStamp.frame.size);
[finalStamp.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *pfinalMainImage = nil;
// Compose the final image with the stamp.
UIGraphicsBeginImageContext(CGSizeMake(backgroundWidth, backgroundHeight));
[canvasImage drawInRect:CGRectMake(0, 0, backgroundWidth, backgroundHeight)];
[viewImage drawInRect:CGRectMake(stampFrameSize.origin.x, stampFrameSize.origin.y, stampFrameSize.size.width, stampFrameSize.size.height) blendMode:kCGBlendModeNormal alpha:fAlphaValue];
pfinalMainImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Everything is okay up to here; the problem occurs while saving or generating the merged image.
This is an old question, but I had to face something similar recently, so here is my answer.
I had to merge a lot of images into one and had the same problem: the memory increased until the app crashed. The functions I had created returned UIImage, and that was the problem; ARC was not releasing them in time, so I changed them to return CGImageRef and released those explicitly at the proper time.
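The answer above doesn't include code, but the idea might look roughly like this (a minimal sketch, reusing the canvasSize and canvasImage variables from the question):
// Hold the composite as a manually-released CGImageRef instead of an
// autoreleased UIImage, so the memory is reclaimed deterministically.
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0.0);
[canvasImage drawInRect:(CGRect){CGPointZero, canvasSize}];
// ... draw each stamp image here, as in the question's loop ...
CGImageRef merged = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
UIGraphicsEndImageContext();
// Use `merged` (e.g. write it out), then release it as soon as you are done:
CGImageRelease(merged);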

iOS7's drawViewHierarchyInRect doesn't work?

From what I've read, iOS 7's new drawViewHierarchyInRect: is supposed to be faster than CALayer's renderInContext:. And according to this and this, it should be a simple matter of calling:
[myView drawViewHierarchyInRect:myView.frame afterScreenUpdates:YES];
instead of
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
However, when I try this, I just get blank images. Here is the full code that does the capture, where self is a subclass of UIView:
// YES = opaque. Ignores alpha channel, so less memory is used.
// This method for some reasons renders the
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, self.window.screen.scale); // Still slow.
if ([AIMAppDelegate isOniOS7OrNewer])
    [self drawViewHierarchyInRect:self.frame afterScreenUpdates:YES]; // Doesn't work!
else
    [self.layer renderInContext:UIGraphicsGetCurrentContext()]; // Works!
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
contentImageView.image = image; // this is empty if done using iOS 7's way
and contentImageView is a UIImageView that is added as a subView to self during initialization.
Additionally, the drawing that I want captured in the image is contained in other sub-views that are also added to self as a sub-view during initialization (including contentImageView).
Any ideas why this is failing when using drawViewHierarchyInRect?
* Update *
I get an image if I draw a specific sub-view, such as:
[contentImageView drawViewHierarchyInRect:contentImageView.frame afterScreenUpdates:YES];
or
[self.curvesView drawViewHierarchyInRect:self.curvesView.frame afterScreenUpdates:YES];
however I need all the visible sub-views combined into one image.
Try it with self.bounds rather than self.frame: it's possible you're getting an image of your view rendered outside the boundaries of the image context you've created.
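In code, the suggested fix would look something like this (a sketch based on the question's own capture snippet):
// Size the context from bounds and draw into bounds, so the view lands
// inside the image context instead of being offset by its frame origin.
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, self.window.screen.scale);
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();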
