Why doesn't imageWithCGImage:scale:orientation: work? - ios

This is my question:
I want to rotate my image once I know the angle it should be rotated by. Here is my code:
UIImage *image = imageView.image;
UIImage *originalImage = imageView.image;
CGAffineTransform transform = imageView.transform;
if (CGAffineTransformEqualToTransform(transform, CGAffineTransformRotate(CGAffineTransformIdentity, M_PI_2))) {
    image = [UIImage imageWithCGImage:originalImage.CGImage scale:originalImage.scale orientation:UIImageOrientationRight];
} else if (CGAffineTransformEqualToTransform(transform, CGAffineTransformRotate(CGAffineTransformIdentity, M_PI))) {
    image = [UIImage imageWithCGImage:originalImage.CGImage scale:originalImage.scale orientation:UIImageOrientationDown];
} else if (CGAffineTransformEqualToTransform(transform, CGAffineTransformRotate(CGAffineTransformIdentity, M_PI_2 * 3))) {
    image = [UIImage imageWithCGImage:originalImage.CGImage scale:originalImage.scale orientation:UIImageOrientationLeft];
} else if (CGAffineTransformEqualToTransform(transform, CGAffineTransformRotate(CGAffineTransformIdentity, M_PI * 2))) {
    image = originalImage; // UIImageOrientationUp
}
As I hoped, the image should show up rotated. But after this code runs, the image is still what it was, as if the method imageWithCGImage:scale:orientation: didn't work.
Can someone tell me why? Thanks.

You use
UIImage *image = imageView.image;
to get a reference to imageView.image, but then you assign your changed image to the local variable image, which is just a pointer. Reassigning it does not touch imageView.image. If you want imageView.image to change, you have to set it again.
Add this at the end, after your code:
imageView.image = image;

You need to set the image again (after all that code):
imageView.image = image;

This is something that you need to fundamentally understand: when you take a pointer to another object and then point it at something else, the original object is unaffected. Think of it this way:
Lunch *myLunch = myFriend.currentLunch;
This means that your lunch is currently your friend's lunch.
myLunch = [[MisoRamen alloc] init];
Now your lunch is a different lunch.
[myLunch insertWeirdSauce];
You just inserted a sauce into your own lunch, while your friend's lunch remains safe. If you want to change your friend's lunch, you have to do this:
Lunch *newLunch = [[MisoRamen alloc] init];
[newLunch insertWeirdSauce];
myFriend.currentLunch = newLunch;
Now your friend will gasp in shock as he/she eats a lunch with weird sauce in it.
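Mapped back to the question, the same pattern applies (a minimal sketch; the other rotation branches are elided, see the question for the full if/else chain):
UIImage *image = imageView.image; // a local pointer to the current image
image = [UIImage imageWithCGImage:image.CGImage
                            scale:image.scale
                      orientation:UIImageOrientationRight]; // only the local pointer changed so far
imageView.image = image; // this assignment is what actually updates the view on screen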


Difference between UIImageView.image = ... and UIImageView.layer.contents =

There are two ways to set a UIImage on a UIImageView:
First:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
Second:
self.imageview.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]);
What is the difference between the two ways?
Which one is better?
In fact, what I want to do is display part of one PNG in a UIImageView.
There are two ways:
First:
UIImage *image = [UIImage imageNamed:@"clothing.png"];
CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rect);
self.imageview.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // Create rule: this reference is owned and must be released
Second:
self.imageview2.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]); // way 2
self.imageview2.layer.contentsRect = rect;
Which one is better? Why? Thanks!
First:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
With this you can directly assign your image to any UIImageView, while in the
Second:
self.imageview.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]);
you cannot assign the image directly to the layer; you need to put a CGImage into it yourself.
So the first way is generally best. Thank you.
The first option is better; the __bridge cast is only needed because layer.contents takes an id while CGImage is a Core Foundation type, a handoff you have to manage yourself under ARC.
Of course the better way is the first one:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
Indeed, this is what a UIImageView is built for.
For clarification: every view (UIView, UIImageView, etc.) has a .layer property that carries its visual content. With the second way you are putting the image into that layer directly; you could achieve the same result with a plain UIView. But in terms of performance and clarity, you should use the .image property.
Edit:
Even with your new edit, the first option is still the better one.
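That said, if you do go the layer route for cropping, note that layer.contentsRect is in the unit coordinate space (0 to 1), while CGImageCreateWithImageInRect takes pixels. A minimal sketch of both approaches (the crop rectangles are assumed examples):
// Crop via Core Graphics: the rect is in pixels of the underlying CGImage.
UIImage *image = [UIImage imageNamed:@"clothing.png"];
CGRect pixelRect = CGRectMake(0, 0, 100, 100); // assumed example crop
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
self.imageview.image = [UIImage imageWithCGImage:croppedRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(croppedRef); // we own the CGImage created above
// Crop via the layer: contentsRect is normalized to 0..1.
self.imageview2.layer.contents = (__bridge id)image.CGImage;
self.imageview2.layer.contentsRect = CGRectMake(0, 0, 0.25, 0.25); // top-left quarter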
You can do that, but then it is your responsibility to prepare the image so that image.CGImage draws correctly with respect to imageOrientation, contentMode, and so on. If you set the image with imageView.image = image;, that is Apple's responsibility.
Here is an example of the kind of problem this causes:
image.CGImage does not contain the image orientation, so if you set it directly and the source image is not UIImageOrientationUp, the image will appear rotated, unless you "fix the orientation" of the source image like this. (You can easily get an image that is not UIImageOrientationUp: just take a photo with your iPhone.)
// In a category on UIImage:
- (UIImage *)fixOrientation {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    // Redraw the image into an "up"-oriented bitmap context.
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:(CGRect){0, 0, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
Here are two screenshots: the first sets image.CGImage directly on layer.contents, the second sets the image via imageView.image.
So always use imageView.image unless you know what you're doing.
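If you must set layer.contents directly, running the source image through the category above first avoids the rotation problem. A short usage sketch (assuming fixOrientation is declared in a UIImage category as shown):
UIImage *photo = ...; // e.g. straight from the camera, often not "up"
UIImage *normalized = [photo fixOrientation];
imageView.layer.contents = (__bridge id)normalized.CGImage;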

How to add several UIImages to one base UIImage in iOS

I know there has to be a way to take one base image (in my case a transparent 4096x4096 base layer) and add several dozen (up to 100) very small (100x100) images to it at different locations. BUT I cannot figure out how to do it. Assume for the following that imageArray has many elements, each containing an NSDictionary with the following structure:
@{
    @"imagename": imagename,
    @"x": xYDict[@"x"],
    @"y": xYDict[@"y"]
}
Now, here is the code (which is not working: all I get is a transparent image), and the performance is STILL horrible: about 1-2 seconds per image in the loop, plus massive memory use.
NSString *PNGHeatMapPath = [[myGizmoClass dataDir] stringByAppendingPathComponent:@"HeatMap.png"];
@autoreleasepool
{
    CGSize theFinalImageSize = CGSizeMake(4096, 4096);
    NSLog(@"populateHeatMapJSON: creating blank image");
    UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 0.0);
    UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
    for (NSDictionary *theDict in imageArray)
    {
        NSLog(@"populateHeatMapJSON: adding: %@", theDict[@"imagename"]);
        UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
        [image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
    }
    UIGraphicsEndImageContext();
    NSData *imageData = UIImagePNGRepresentation(theFinalImage);
    theFinalImage = nil;
    [imageData writeToFile:PNGHeatMapPath atomically:YES];
    NSLog(@"populateHeatMapJSON: PNGHeatMapPath = %@", PNGHeatMapPath);
}
I KNOW I am misunderstanding UIGraphics contexts; can someone help me out? I am trying to superimpose a bunch of UIImages on the blank transparent canvas at certain x,y locations, get the final image, and save it.
PS: This is also, if the simulator is anything to go by, leaking like heck. It goes up to 2 or 3 GB.
Thanks in advance.
EDIT:
I did find this and tried it, but it requires me to redraw the entire image, copy that image to a new one, and add to that new one. I was hoping for a "base image", "add this image", "add that image", "add the other image" flow that yields the new base image without all the copying:
Is this as good as it gets?
+ (UIImage *)drawImage:(UIImage *)fgImage
               inImage:(UIImage *)bgImage
               atPoint:(CGPoint)point
{
    UIGraphicsBeginImageContextWithOptions(bgImage.size, FALSE, 0.0);
    [bgImage drawInRect:CGRectMake(0, 0, bgImage.size.width, bgImage.size.height)];
    [fgImage drawInRect:CGRectMake(point.x, point.y, fgImage.size.width, fgImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
You are getting the image with UIGraphicsGetImageFromCurrentImageContext() before you have actually drawn anything. Move this call to after your loop and before UIGraphicsEndImageContext().
Your drawing actually occurs on the line
[image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
The function UIGraphicsGetImageFromCurrentImageContext() gives you an image of the current contents of the context, so you need to ensure that you have done all your drawing before you grab the image.
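Applied to the code in the question, a minimal sketch of the corrected ordering (same names as in the question):
CGSize theFinalImageSize = CGSizeMake(4096, 4096);
UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 0.0);
for (NSDictionary *theDict in imageArray)
{
    UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
    [image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
}
// Grab the composited image only after all drawing is done,
// and before the context is ended.
UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
As a side note on the memory use: [UIImage imageNamed:] caches every image it loads, so for one-off compositing of many files, [UIImage imageWithContentsOfFile:] may keep the footprint lower.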

How to get UIImage to send new image and not original image to server

The user picks an image with the picker, and the following code manipulates the image:
UIImage *image = info[UIImagePickerControllerOriginalImage];
_imageView.image = image;
_imageView.contentMode = UIViewContentModeScaleAspectFill;
_imageView.layer.cornerRadius = 10;
_imageView.clipsToBounds = YES;
_imageView.layer.shouldRasterize = YES;
_imageView.layer.rasterizationScale = [[UIScreen mainScreen] scale];
When I send the image to the server, it sends the original image and not the new one. I am guessing this is because UIImageView does not actually change the image but just applies on-the-fly visual changes to the original.
Can someone point me in the right direction: what do I need to learn in order to make permanent changes to an image (or create a new image), or is there a simple way to make the changes I've made permanent?
Thanks
I suppose you're talking about getting a new image that looks the way it appears on screen (with the corner radius set, etc.), so you might want to try something like this:
// Size the context to the view's bounds and snapshot the rendered hierarchy.
UIGraphicsBeginImageContextWithOptions(self.imageView.bounds.size, NO, 0.0);
[self.imageView drawViewHierarchyInRect:self.imageView.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
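To actually send the snapshot, encode it to data first; a short sketch (the upload mechanism itself is whatever you already use):
NSData *imageData = UIImageJPEGRepresentation(newImage, 0.9); // or UIImagePNGRepresentation(newImage)
// Hand imageData to your existing upload code instead of the original image.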

Memory increases when merging multiple high-resolution images into a single image, iOS

I have to merge multiple images (all high resolution) into a single one, which takes a lot of memory. I saved the original images to a local directory and set resized versions on image views placed at different locations over the main image. At the time of saving the final merged image, I read the original images back from the local directory. That is where the memory grows, which causes a crash (due to memory pressure) for larger numbers of images.
Here is the code for retrieving an original image from the local directory:
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
Is there any other way to get images from the local directory without loading them entirely into memory?
Thanks in advance
There is no way to load an image without it going into memory. With some image formats you could, in theory, implement your own reader that scales the image down while reading the file, so that the original size never ends up in memory, but that would require a lot of work for little gain.
Overall you would be better off just saving the different sizes of images as separate files and loading only the correct size (you seem to be scaling them based on the screen size, so there are not that many different versions required).
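A sketch of that idea: when you first scale an image down, also write the scaled version out, and from then on load only that file (getThumbImagePath: here is a hypothetical counterpart to the question's getOriginalImagePath:):
// After scaling the original once, persist the small version...
UIImage *thumb = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
NSData *thumbData = UIImagePNGRepresentation(thumb);
[thumbData writeToFile:[self getThumbImagePath:imageview.tag] atomically:YES];
// ...and on later loads read only the small file:
UIImage *thumbAgain = [UIImage imageWithContentsOfFile:[self getThumbImagePath:imageview.tag]];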
If you do keep resizing them on the fly, try to ensure that you get rid of the original versions as soon as possible, i.e., don't keep any image reference longer than required, and perhaps wrap the whole thing in @autoreleasepool (assuming ARC is being used):
@autoreleasepool {
    UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
    UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
    originalImage = nil; // drop the full-size image as soon as it is no longer needed
    imageView.image = pThumbImage;
    pThumbImage = nil;
    // …
}
Similarly, treat any other image handling that creates intermediate versions the same way, i.e., get rid of references that are no longer required as soon as possible (by assigning nil or having them fall out of scope), and put @autoreleasepool { … } around subsections that may generate temporary objects.
Found a solution; posting it as an answer to my own question, as it might help other people. (Reference: the Image I/O Programming Guide.)
As an alternative to imageWithContentsOfFile:, one can use an image source.
Here is the code showing how I use it:
UIImage *originalWMImage = [self createCGImageFromFile:yourImagePath];
The method createCGImageFromFile: gets the image content without loading it all into memory up front:
- (UIImage *)createCGImageFromFile:(NSString *)path
{
    // Get the URL for the pathname passed to the function.
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageRef myImage = NULL;
    CGImageSourceRef myImageSource;
    CFDictionaryRef myOptions = NULL;
    CFStringRef myKeys[2];
    CFTypeRef myValues[2];
    // Set up options if you want them. The options here are for
    // caching the image in a decoded form and for using floating-point
    // values if the image format supports them.
    myKeys[0] = kCGImageSourceShouldCache;
    myValues[0] = (CFTypeRef)kCFBooleanTrue;
    myKeys[1] = kCGImageSourceShouldAllowFloat;
    myValues[1] = (CFTypeRef)kCFBooleanTrue;
    // Create the options dictionary.
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 2,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
    // Create an image source from the URL.
    myImageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)url, myOptions);
    CFRelease(myOptions);
    // Make sure the image source exists before continuing.
    if (myImageSource == NULL) {
        fprintf(stderr, "Image source is NULL.");
        return nil;
    }
    // Create an image from the first item in the image source.
    myImage = CGImageSourceCreateImageAtIndex(myImageSource, 0, NULL);
    CFRelease(myImageSource);
    // Make sure the image exists before continuing.
    if (myImage == NULL) {
        fprintf(stderr, "Image not created from image source.");
        return nil;
    }
    UIImage *result = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage); // Create rule: we own myImage and must release it
    return result;
}
Here is the code where the image is resized and simply assigned to the image view; I then perform scaling and rotation on the image view:
UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:pOriginalImage];
[imageView setImage:pThumbImage];
And here is the saving step; this code runs inside a for loop (over the number of images to merge onto the main image):
// Get the size of the background image.
CGFloat backgroundWidth = canvasSize.width;
CGFloat backgroundHeight = canvasSize.height;
// Image view to be merged.
UIImageView *imageView = [[UIImageView alloc] initWithImage:stampImage];
[imageView setFrame:CGRectMake(0, 0, stampFrameSize.size.width, stampFrameSize.size.height)];
// Rotate the image view.
CGAffineTransform currentTransform = imageView.transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, radian);
[imageView setTransform:newTransform];
// Scale the image view.
CGRect imageFrame = [imageView frame];
// Create the final stamp view.
UIView *finalStamp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageFrame.size.width, imageFrame.size.height)];
// Center the stamp image view and add it to the stamp view.
[imageView setCenter:CGPointMake(imageFrame.size.width / 2, imageFrame.size.height / 2)];
[finalStamp addSubview:imageView];
// Create an image from the stamp view.
UIGraphicsBeginImageContext(finalStamp.frame.size);
[finalStamp.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *pFinalImage = nil;
// Create the final image with the stamp.
UIGraphicsBeginImageContext(CGSizeMake(backgroundWidth, backgroundHeight));
[canvasImage drawInRect:CGRectMake(0, 0, backgroundWidth, backgroundHeight)];
[viewImage drawInRect:CGRectMake(stampFrameSize.origin.x, stampFrameSize.origin.y, stampFrameSize.size.width, stampFrameSize.size.height) blendMode:kCGBlendModeNormal alpha:fAlphaValue];
pFinalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Everything is okay up to here; the problem occurs while saving or generating the merged image.
This is an old question, but I had to face something like this recently, so here is my answer.
I had to merge a lot of images into one and had the same problem: memory increased until the app crashed. The functions I had created returned UIImage, and that was the problem; ARC was not releasing them in time, so I changed them to return CGImageRef and released those explicitly at the proper time.
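A minimal sketch of that pattern (the helper names here are hypothetical): have the merge step return a CGImageRef the caller owns, release it at a known point, and drain temporaries per iteration:
for (NSUInteger i = 0; i < stampCount; i++) {
    @autoreleasepool {
        // Hypothetical helper following the Create rule: the caller owns the result.
        CGImageRef merged = MyCreateMergedImage(canvasRef, stampRefs[i]);
        MyWriteImageToDisk(merged, i); // hypothetical: persist the result
        CGImageRelease(merged); // released deterministically, not left to autorelease
    }
}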

Check for the same images after flipping cards (Objective-C)

I have 4 cards, and I can flip them. I want to check whether two flipped cards show the same image; if they match, keep both images shown, and if not, flip them back.
Would you please give me some tutorial or sample code?
Thanks in advance!
Edit:
Here are my images:
UIImage *img1 = [UIImage imageNamed:@"card_front.png"];
[self addCardAtX:90 y:120 andFrontImage:img1 andTag:1];
[self addCardAtX:230 y:120 andFrontImage:img1 andTag:1];
[self addCardAtX:90 y:340 andFrontImage:img1 andTag:1];
[self addCardAtX:230 y:340 andFrontImage:img1 andTag:1];
- (void)addCardAtX:(CGFloat)x y:(CGFloat)y andFrontImage:(UIImage *)img1 andTag:(int)tag
{
    UIImage *img2 = [UIImage imageNamed:@"card_back.png"];
    CardView *cv = [[CardView alloc] initWithFrontImage:img1 backImage:img2];
    CGRect f = cv.frame;
    f.origin.x = x - (f.size.width / 2.0);
    f.origin.y = y - (f.size.height / 2.0);
    cv.frame = f;
    [self.view addSubview:cv];
}
You can compare images by NAME or URL/path.
But in Objective-C you can also compare two objects with
if ([object1 isEqual:object2])
Take a BOOL variable, imgCompare = NO;, and check the condition:
if ([imgObject1 isEqual:imgObject2])
    imgCompare = YES;
else
    imgCompare = NO;
Thanks :)
You can check if images are equal by using...
BOOL imagesAreTheSame = [image1 isEqual:image2];
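One caveat worth adding (not from the answers above): -isEqual: on UIImage only reliably returns YES when both variables point at the same instance, e.g. two [UIImage imageNamed:] calls with the same name returning the cached object. A hedged sketch of the alternatives (card1/card2 stand for whatever objects hold each card's image and tag):
// Same cached instance (works for images loaded via imageNamed:):
BOOL sameInstance = [card1.image isEqual:card2.image];
// Content comparison via encoded bytes (slower, but instance-independent):
BOOL sameBytes = [UIImagePNGRepresentation(card1.image) isEqual:UIImagePNGRepresentation(card2.image)];
// For a matching game, comparing the tags you assigned is usually cleanest:
BOOL sameCard = (card1.tag == card2.tag);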
