I know there has to be a way to take one base image (in my case a transparent 4096x4096 base layer) and add several dozen (up to 100) very small (100x100) images to it at different locations, BUT I cannot figure out how to do it. Assume for the following that imageArray has many elements, each containing an NSDictionary with the following structure:
@{
    @"imagename": imagename,
    @"x": xYDict[@"x"],
    @"y": xYDict[@"y"]
}
Now, here is the code that is not working. All I get is a transparent image, the performance is STILL horrible (about 1-2 seconds per image in the loop), and memory use is massive:
NSString *PNGHeatMapPath = [[myGizmoClass dataDir] stringByAppendingPathComponent:@"HeatMap.png"];
@autoreleasepool
{
    CGSize theFinalImageSize = CGSizeMake(4096, 4096);
    NSLog(@"populateHeatMapJSON: creating blank image");
    UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 0.0);
    UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
    for (NSDictionary *theDict in imageArray)
    {
        NSLog(@"populateHeatMapJSON: adding: %@", theDict[@"imagename"]);
        UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
        [image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
    }
    UIGraphicsEndImageContext();
    NSData *imageData = UIImagePNGRepresentation(theFinalImage);
    theFinalImage = nil;
    [imageData writeToFile:PNGHeatMapPath atomically:YES];
    NSLog(@"populateHeatMapJSON: PNGHeatMapPath = %@", PNGHeatMapPath);
}
I KNOW I am misunderstanding UIGraphics contexts...can someone help me out? I am trying to superimpose a bunch of UIImages on the blank transparent canvas at certain x,y locations, get the final image and save it.
PS: This is also, if the simulator is anything to go by, leaking like heck. Memory goes up to 2 or 3 GB...
Thanks in advance.
EDIT:
I did find this and tried it, but it requires me to redraw the entire image, copy it to a new one, and add to that new one. I was hoping for a "base image", "add this image", "add that image", "add the other image" flow that yields the new base image without all the copying:
Is this as good as it gets?
+ (UIImage *)drawImage:(UIImage *)fgImage
               inImage:(UIImage *)bgImage
               atPoint:(CGPoint)point
{
    UIGraphicsBeginImageContextWithOptions(bgImage.size, NO, 0.0);
    [bgImage drawInRect:CGRectMake(0, 0, bgImage.size.width, bgImage.size.height)];
    [fgImage drawInRect:CGRectMake(point.x, point.y, fgImage.size.width, fgImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
You are grabbing the image with UIGraphicsGetImageFromCurrentImageContext() before you have actually drawn anything. Move this call to after your loop and before UIGraphicsEndImageContext().
Your drawing actually occurs on the line
[image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
The function UIGraphicsGetImageFromCurrentImageContext() gives you an image of the current contents of the context, so you need to ensure that you have done all your drawing before you grab the image.
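Here is a minimal corrected sketch of the loop from the question (assuming the same imageArray and PNGHeatMapPath), with the grab moved to after the drawing. Two hedged side notes on the memory complaints: passing 0.0 as the scale makes the context use the device's screen scale, so on a 2x device the 4096x4096 canvas is allocated at 8192x8192 pixels; and imageNamed: caches every image for the lifetime of the app, so imageWithContentsOfFile: may be a better fit for one-shot compositing.

@autoreleasepool
{
    // Scale 1.0 keeps the canvas at exactly 4096x4096 pixels.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(4096, 4096), NO, 1.0);
    for (NSDictionary *theDict in imageArray)
    {
        UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
        // drawAtPoint: draws the image at its natural size at (x, y).
        [image drawAtPoint:CGPointMake([theDict[@"x"] doubleValue],
                                       [theDict[@"y"] doubleValue])];
    }
    // Grab the composed image only after all drawing is done.
    UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [UIImagePNGRepresentation(theFinalImage) writeToFile:PNGHeatMapPath atomically:YES];
}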
Related
There are two ways to set a UIImage on a UIImageView.
First:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
Second:
self.imageview.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]);
What is the difference between the two ways?
Which one is better?
In fact, what I want to do is display part of one PNG in a UIImageView.
There are two ways:
First:
UIImage *image = [UIImage imageNamed:@"clothing.png"];
CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rect);
self.imageview.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Second:
self.imageview2.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]); // way 2
self.imageview2.layer.contentsRect = rect;
Which one is better? Why? Thanks!
First:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
With this you can assign your image directly to any UIImageView, while in the
Second:
self.imageview.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]);
you cannot assign the image directly to the layer; you have to put a CGImage into it.
So the first way is the better choice. Thank you.
The first option is better; the __bridge cast is only needed because ARC does not manage Core Foundation types such as CGImage.
Of course the better way is the first one:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
Indeed, this is what a UIImageView is built for.
For clarification, every view (UIView, UIImageView, etc.) has a .layer property holding its visual content, and here you are putting an image into it, so you could have achieved the same result with a plain UIView. But in terms of performance and clarity, you should use the .image property.
Edit:
Even with your new edit, the first option is still the better one.
You can do that, but then it's your responsibility to make the image.CGImage draw correctly with respect to imageOrientation, contentMode, and so on. If you set the image with imageView.image = image;, it's Apple's responsibility to do that.
Here is an example that causes the problem:
Since image.CGImage doesn't carry the image orientation, if you set it directly and the source image is not UIImageOrientationUp, the image will appear rotated, unless you "fix the orientation" of the source image as shown below. (To get an image that is not UIImageOrientationUp, just take a photo with your iPhone.)
// A UIImage category method: redraws the image so the orientation is baked into the pixels.
- (UIImage *)fixOrientation {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:(CGRect){0, 0, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
Here are two screenshots: the first sets image.CGImage directly on layer.contents, the second sets the image via imageView.image.
So always use imageView.image unless you know what you're doing.
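One more detail worth noting for the cropping variant: layer.contentsRect is measured in unit coordinates (0.0 to 1.0), while CGImageCreateWithImageInRect takes a rect in pixels of the CGImage, so the same rect cannot be passed to both. A minimal sketch, assuming the same imageview/imageview2 outlets and a pixel-space crop rect:

UIImage *image = [UIImage imageNamed:@"clothing.png"];
CGRect pixelRect = CGRectMake(0, 0, 100, 100); // crop region in pixels (assumed)

// First way: Core Graphics crop; the rect is in pixels of the CGImage.
CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
self.imageview.image = [UIImage imageWithCGImage:imageRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
CGImageRelease(imageRef); // Create rule: we own this reference

// Second way: contentsRect is in unit coordinates, so normalize first.
CGFloat w = CGImageGetWidth(image.CGImage);
CGFloat h = CGImageGetHeight(image.CGImage);
self.imageview2.layer.contents = (__bridge id)image.CGImage;
self.imageview2.layer.contentsRect = CGRectMake(pixelRect.origin.x / w,
                                                pixelRect.origin.y / h,
                                                pixelRect.size.width / w,
                                                pixelRect.size.height / h);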
I have to merge multiple images (all high resolution) into a single one, and it takes lots of memory. I save the original images to a local directory and set resized copies on image views, placed at different locations on the main image. When saving the final merged image, I then read the original images back from the local directory. At that point memory use spikes, which causes a crash when the number of images gets high.
Here is the code for retrieving an original image from the local directory:
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
Is there any other way to get images from the local directory without loading them into memory?
Thanks in advance
There is no way to load an image without it going into memory. With some image formats you could, in theory, implement your own reader that scales the image down while reading the file, so that the original size never ends up in memory, but that would require a lot of work for little gain.
Overall you would be better off just saving the different sizes of images as separate files and loading only the correct size (you seem to be scaling them based on the screen size, so there are not that many different versions required).
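For instance, here is a sketch of that approach. getThumbImagePath: is a hypothetical helper; scaleImageToSize:imageWithImage: and AppScreenBound are taken from your code. The screen-sized file is generated and saved once, and only that file is ever loaded afterwards:

NSString *thumbPath = [self getThumbImagePath:imageview.tag]; // hypothetical helper
if (![[NSFileManager defaultManager] fileExistsAtPath:thumbPath])
{
    @autoreleasepool {
        // One-time cost: load the big original, scale it down, save the result.
        UIImage *original = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
        UIImage *thumb = [self scaleImageToSize:AppScreenBound.size imageWithImage:original];
        [UIImageJPEGRepresentation(thumb, 0.9) writeToFile:thumbPath atomically:YES];
    }
}
// From now on only the small file is loaded into memory.
imageView.image = [UIImage imageWithContentsOfFile:thumbPath];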
If you do keep resizing them on the fly, try to ensure that you get rid of the original versions as soon as possible, i.e., don't keep any image reference that is no longer required, and perhaps wrap the whole thing in @autoreleasepool (assuming ARC is being used):
@autoreleasepool {
    UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
    UIImage *thumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
    originalImage = nil;
    imageView.image = thumbImage;
    thumbImage = nil;
    // … ?
}
Similarly treat any other image handling that creates intermediate versions, i.e., get rid of references no longer required as soon as possible (such as by assigning nil or having them fall out of scope), and put @autoreleasepool { … } around subsections that may generate temporary objects.
I found a solution and am posting it as an answer to my own question; it might help other people. The reference is the Image I/O Programming Guide.
As an alternative to imageWithContentsOfFile:, one can use an image source.
Here is how I use it:
UIImage *originalWMImage = [self createCGImageFromFile:your-image-path];
The method createCGImageFromFile: gets the image content without loading it all into memory:
- (UIImage *)createCGImageFromFile:(NSString *)path
{
    // Get the URL for the pathname passed to the function.
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageRef myImage = NULL;
    CGImageSourceRef myImageSource;
    CFDictionaryRef myOptions = NULL;
    CFStringRef myKeys[2];
    CFTypeRef myValues[2];

    // Set up options if you want them. The options here are for
    // caching the image in a decoded form and for using floating-point
    // values if the image format supports them.
    myKeys[0] = kCGImageSourceShouldCache;
    myValues[0] = (CFTypeRef)kCFBooleanTrue;
    myKeys[1] = kCGImageSourceShouldAllowFloat;
    myValues[1] = (CFTypeRef)kCFBooleanTrue;

    // Create the options dictionary.
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 2,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);

    // Create an image source from the URL.
    myImageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)url, myOptions);
    CFRelease(myOptions);

    // Make sure the image source exists before continuing.
    if (myImageSource == NULL) {
        fprintf(stderr, "Image source is NULL.");
        return nil;
    }

    // Create an image from the first item in the image source.
    myImage = CGImageSourceCreateImageAtIndex(myImageSource, 0, NULL);
    CFRelease(myImageSource);

    // Make sure the image exists before continuing.
    if (myImage == NULL) {
        fprintf(stderr, "Image not created from image source.");
        return nil;
    }

    // Wrap the CGImage, then release it to avoid leaking (Create rule).
    UIImage *result = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage);
    return result;
}
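Note that this is adapted from the Image I/O Programming Guide example, which enables caching of the decoded image. If the goal is to keep decoded bitmaps out of memory for as long as possible, it is arguably better to disable that cache instead:

myKeys[0] = kCGImageSourceShouldCache;
myValues[0] = (CFTypeRef)kCFBooleanFalse; // don't keep the decoded bitmap around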
Here is the code where the resized image is simply assigned to the image view. I then perform scaling and rotation on the image view:
UIImage *thumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
[imageView setImage:thumbImage];
And here is the saving code; it runs inside a for loop, once per image to merge onto the main image:
// Get the size of the background canvas.
CGFloat backgroundWidth = canvasSize.width;
CGFloat backgroundHeight = canvasSize.height;

// Image view to be merged.
UIImageView *imageView = [[UIImageView alloc] initWithImage:stampImage];
[imageView setFrame:CGRectMake(0, 0, stampFrameSize.size.width, stampFrameSize.size.height)];

// Rotate the image view.
CGAffineTransform currentTransform = imageView.transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, radian);
[imageView setTransform:newTransform];

// Scale the image view.
CGRect imageFrame = [imageView frame];

// Create the final stamp view.
UIView *finalStamp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageFrame.size.width, imageFrame.size.height)];

// Center the stamp image inside it.
[imageView setCenter:CGPointMake(imageFrame.size.width / 2, imageFrame.size.height / 2)];
[finalStamp addSubview:imageView];

// Create an image from the stamp view.
UIGraphicsBeginImageContext(finalStamp.frame.size);
[finalStamp.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Create the final image with the stamp drawn onto the canvas.
UIGraphicsBeginImageContext(CGSizeMake(backgroundWidth, backgroundHeight));
[canvasImage drawInRect:CGRectMake(0, 0, backgroundWidth, backgroundHeight)];
[viewImage drawInRect:CGRectMake(stampFrameSize.origin.x, stampFrameSize.origin.y, imageFrame.size.width, imageFrame.size.height) blendMode:kCGBlendModeNormal alpha:fAlphaValue];
UIImage *finalMergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Everything is okay here; the problem occurs while saving or generating the merged image.
This is an old question, but I faced something like this recently, so here is my answer.
I had to merge a lot of images into one and had the same problem: memory grew until the app crashed. The functions I had written returned UIImage, and that was the problem; ARC was not releasing the autoreleased images in time, so I changed them to return CGImageRef and released those explicitly at the proper time.
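Here is a minimal sketch of that pattern; mergeImage:ontoCanvas:atRect: is a hypothetical helper name. The point is that the caller owns the returned CGImageRef and releases it deterministically, instead of waiting for an autorelease pool to drain:

// Hypothetical helper: returns a +1 retained CGImageRef (Create rule).
- (CGImageRef)mergeImage:(UIImage *)stamp ontoCanvas:(UIImage *)canvas atRect:(CGRect)rect
{
    UIGraphicsBeginImageContext(canvas.size);
    [canvas drawInRect:CGRectMake(0, 0, canvas.size.width, canvas.size.height)];
    [stamp drawInRect:rect];
    // Grab the bitmap as a retained CGImage rather than an autoreleased UIImage.
    CGImageRef merged = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
    UIGraphicsEndImageContext();
    return merged; // caller must CGImageRelease()
}

The caller can then CGImageRelease() each intermediate result as soon as the next iteration has consumed it, so peak memory stays near one canvas-sized bitmap rather than one per loop iteration.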
I am making an application related to images, and I have multiple images on my screen. I take a screenshot of that, but it should not capture my whole screen: a little of the topmost and bottommost parts should not be shown in it.
I have a navigation bar on top and some buttons at the bottom, and I don't want to capture those buttons or the navigation bar in my screenshot image.
Below is my code for the screenshot:
- (UIImage *)screenshot
{
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, YES, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
After taking the screenshot, I use it in a Facebook share method with the code below:
UIImage *image12 =[self screenshot];
[mySLComposerSheet addImage:image12];
The easiest way to achieve this would be to add a UIView that holds all the content you want a screenshot of, and then call drawViewHierarchyInRect: on that UIView instead of the main one.
Something like this:
- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(contentView.bounds.size, YES, [UIScreen mainScreen].scale);
    // Use bounds, not frame: frame is in superview coordinates and would offset the drawing.
    [contentView drawViewHierarchyInRect:contentView.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Hope this helps!
You can use my code below to take a screenshot of a view.
I have added a condition to pick the crop size for the device.
With this code the image is saved in your Documents folder, and from there you can share it on Facebook or anywhere else you want.
CGSize size = self.view.bounds.size;
CGRect cropRect;
if ([self isPad])
{
    cropRect = CGRectMake(110, 70, 300, 300);
}
else
{
    if (IS_IPHONE_5)
    {
        cropRect = CGRectMake(55, 25, 173, 152);
    }
    else
    {
        cropRect = CGRectMake(30, 25, 164, 141);
    }
}

/* Get the entire on-screen map as an image */
UIGraphicsBeginImageContext(size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

/* Crop the desired region */
CGImageRef imageRef = CGImageCreateWithImageInRect(mapImage.CGImage, cropRect);
UIImage *cropImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);

/* Save the cropped image
UIImageWriteToSavedPhotosAlbum(cropImage, nil, nil, nil); */

// Save to the Documents folder.
NSData *imageData = UIImageJPEGRepresentation(cropImage, 1.0);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *imagename = @"Pic.jpg";
NSString *fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imagename];
//NSLog(@"full path %@", fullPathToFile);
[imageData writeToFile:fullPathToFile atomically:NO];
Hope it helps you.
Use this code:
- (IBAction)captureScreen:(id)sender
{
    // Size the context to the view actually being rendered.
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}
sample project www.cocoalibrary.blogspot.com
http://www.bobmccune.com/2011/09/08/screen-capture-in-ios-apps/
Another option is snapshotViewAfterScreenUpdates:, but it is only available in iOS 7.0 and later.
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
This method captures the current visual contents of the screen from the render server and uses them to build a new snapshot view. You can use the returned snapshot view as a visual stand-in for the screen’s contents in your app. For example, you might use a snapshot view to facilitate a full screen animation. Because the content is captured from the already rendered content, this method reflects the current visual appearance of the screen and is not updated to reflect animations that are scheduled or in progress. However, this method is faster than trying to render the contents of the screen into a bitmap image yourself.
https://developer.apple.com/library/ios/documentation/uikit/reference/UIScreen_Class/Reference/UIScreen.html#//apple_ref/occ/instm/UIScreen/snapshotViewAfterScreenUpdates:
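A minimal usage sketch; note that the result is a UIView, not a UIImage, so it works as an on-screen stand-in for transitions and animations but cannot be saved to disk directly:

// iOS 7+: cheap visual stand-in for the current view contents.
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:NO];
[self.view addSubview:snapshot];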
I would like to take an image and duplicate it, then enlarge the duplicate to 105% and overlay it on the original image.
What is the correct way to do this on iOS?
This is your basic code for drawing the image and then saving it as an image again:
- (UIImage *)renderImage:(UIImage *)image atSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)];
    // draw anything else into the context
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Where it says "draw anything else into the context" you can draw the duplicate at the scaled size by setting the appropriate rect to draw in. Then call the renderImage: method with whatever size you want the full image to be. You can use CGContextSetAlpha to set the transparency.
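Putting that together, here is a hedged sketch of the duplicate-and-overlay. The centered placement and the 0.5 alpha are assumptions; the question only fixes the 105% factor:

- (UIImage *)overlayScaledCopyOnImage:(UIImage *)image
{
    CGSize size = image.size;
    UIGraphicsBeginImageContext(size);

    // 1. Draw the original at full size.
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];

    // 2. Draw the 105% copy, centered, with transparency (edges clip evenly).
    CGFloat w = size.width * 1.05;
    CGFloat h = size.height * 1.05;
    CGRect overlayRect = CGRectMake((size.width - w) / 2.0,
                                    (size.height - h) / 2.0, w, h);
    CGContextSetAlpha(UIGraphicsGetCurrentContext(), 0.5); // assumed alpha
    [image drawInRect:overlayRect];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}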
I have a drawing app where you have one UIImageView that serves as the "drawing layer." You have another UIImageView beneath it that is the "image layer," containing the image you are drawing on. I like having this separation. However, I want the user to be able to "save and email" the drawing they have made on top of the image as one unified image. How do I do this?
Your UIImageView instances must be part of a UIView hierarchy, so all you need to do is paint the top containing UIView into a context:
UIGraphicsBeginImageContext(CGSizeMake(width, height));
[container.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Or, if that gives you trouble, successively paint the two image views into a context:
UIGraphicsBeginImageContext(CGSizeMake(width, height));
[image1.layer renderInContext:UIGraphicsGetCurrentContext()];
[image2.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
From there you can write the data wherever you choose:
NSData *data = UIImagePNGRepresentation(fimage);
Pass that data into the MailComposer setup.
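For example, a minimal sketch, assuming the MessageUI framework is linked and the presenting controller adopts MFMailComposeViewControllerDelegate:

#import <MessageUI/MessageUI.h>

// Check availability first; the composer throws on devices with no mail account.
if ([MFMailComposeViewController canSendMail]) {
    MFMailComposeViewController *mailComposer = [[MFMailComposeViewController alloc] init];
    mailComposer.mailComposeDelegate = self;
    [mailComposer setSubject:@"My drawing"];
    // Attach the merged PNG produced above.
    [mailComposer addAttachmentData:data mimeType:@"image/png" fileName:@"drawing.png"];
    [self presentViewController:mailComposer animated:YES completion:nil];
}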