Save a view as an image in its original size - iOS

There are many threads about how to save a view as an image, and all of them give the same answer:
CGRect rect = [myView bounds];
UIGraphicsBeginImageContextWithOptions(rect.size,YES,[[UIScreen mainScreen] scale]);
CGContextRef context = UIGraphicsGetCurrentContext();
[myView.layer renderInContext:context];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
My problem is that I have a view containing several joined image views, each holding an image. I want to save that "holder" view at a size that corresponds to the size of the images inside it. So if I have two 400x400 images side by side, I'd like to save a 400x800 image, regardless of whether they are displayed at, let's say, 30x60.
Also, the code above only captures what's on the screen and leaves out the rest. For instance, if I wanted to capture an entire scroll view at its contentSize, that's not possible.
Any ideas?

If you want to get an image of the entire contentSize of a UIScrollView, take a look at this.
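For the original question, here is a minimal sketch (not taken from any of the linked answers) of both ideas: rendering a view into a bitmap larger than its on-screen size by passing an explicit scale factor, and capturing a scroll view's full contentSize by temporarily growing its frame. The method names are illustrative.
- (UIImage *)imageOfView:(UIView *)view atScale:(CGFloat)scale {
    // scale is output pixels per point, e.g. 400.0 / 30.0 for the 30x60 -> 400x800 case above
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

- (UIImage *)imageOfScrollViewContent:(UIScrollView *)scrollView {
    // Temporarily expand the frame to the full contentSize so renderInContext:
    // sees everything, then restore the original geometry.
    CGPoint savedOffset = scrollView.contentOffset;
    CGRect savedFrame = scrollView.frame;
    scrollView.contentOffset = CGPointZero;
    scrollView.frame = CGRectMake(0, 0, scrollView.contentSize.width, scrollView.contentSize.height);

    UIGraphicsBeginImageContextWithOptions(scrollView.contentSize, NO, 0.0);
    [scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    scrollView.contentOffset = savedOffset;
    scrollView.frame = savedFrame;
    return image;
}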

Related

How to get a screenshot of a CATiledLayer based TilingView

I'm trying to take a screenshot of the current screen contents in my app, where my TilingView (based on CATiledLayer) displays a number of large, transparent tiled images.
I also added some subviews to the TilingView, which are magically captured in the screenshot; however, the underlying contents of the TilingView are not captured!?
The following code snippet takes a snapshot of the visible screen, which seems to work well for a view hierarchy that is NOT based on CATiledLayer, but unfortunately doesn't work for my setup. Even if I pass the topmost superview of the TilingView (the actual UIViewController.view), my snapshot shows only the status bar, navigation bar, the TilingView's subviews, and the tab bar, but again NOT the TilingView's contents.
- (UIImage *)captureView:(UIView *)viewToCapture {
    CGRect rect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [viewToCapture.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Does anybody see what I'm missing here? Do I need to delve deeper into the Core Graphics display stack with some CG API calls unknown to me? Thanks in advance.
By searching some more on Stack Overflow, I found code which seems to do what I wanted. Basically I need to change the above method to:
- (UIImage *)captureView:(UIView *)viewToCapture {
    UIGraphicsBeginImageContextWithOptions(viewToCapture.bounds.size, NO, [UIScreen mainScreen].scale);
    [viewToCapture drawViewHierarchyInRect:viewToCapture.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Thanks go to the question/answer at: How to get a screenshot of a view containing GPUImageView?
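For example, a hypothetical call site that snapshots the view controller's view (including the CATiledLayer-backed TilingView) and writes the result to the photo album:
UIImage *snapshot = [self captureView:self.view];
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil);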

Crop an area of an oversized image to what is currently showing on screen

I have an oversized image loaded in an image view that goes out of bounds both vertically and horizontally.
The end user can scroll around the image (the oversized image view is inside a scroll view), and when they find an area they like, I would like to crop out the part of the image that is shown on the screen (much like a screenshot, but only of the imageView.image). I'm then going to put that into a different image view.
I can't seem to work out how to take that "screenshot" of the part of the image view's image that is currently showing on the screen.
You can use CGImageCreateWithImageInRect to create a subimage of the displayed image. Use the scroll view's contentOffset and bounds to build the rect from which you want to create the image.
CGRect rect = CGRectMake(scrollView.contentOffset.x, scrollView.contentOffset.y, CGRectGetWidth(scrollView.bounds), CGRectGetHeight(scrollView.bounds));
CGImageRef subImageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
If you zoom your scroll view, you will need to take the zoomScale into account too.
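A hedged sketch of that zoom-aware variant, building on the snippet above (if the image view is not displayed at the image's native pixel size, you would additionally need to scale by that ratio):
CGFloat zoom = scrollView.zoomScale;
CGRect visibleRect = CGRectMake(scrollView.contentOffset.x / zoom,
                                scrollView.contentOffset.y / zoom,
                                CGRectGetWidth(scrollView.bounds) / zoom,
                                CGRectGetHeight(scrollView.bounds) / zoom);

CGImageRef visibleImageRef = CGImageCreateWithImageInRect([originalImage CGImage], visibleRect);
UIImage *visibleImage = [UIImage imageWithCGImage:visibleImageRef
                                            scale:originalImage.scale
                                      orientation:originalImage.imageOrientation];
CGImageRelease(visibleImageRef);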
I ended up using the following code to achieve what I was looking for. Thank you to Karl for his input, and thank you to iNoob, whose answer to a previous question [located here on StackOverflow][1] I used for mine.
Just use the code below to take a "screenshot". Set anything you don't want in the image to .hidden = YES; before the code to keep it out of the screenshot, and set it back to .hidden = NO; afterwards to bring it back.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
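For example, with a hypothetical toolbar outlet standing in for whatever you want excluded from the capture:
self.toolbar.hidden = YES;   // keep it out of the screenshot
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.toolbar.hidden = NO;    // restore it afterwards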

How to save 2 UIImageViews in one image to the camera roll

I have two UIImageViews, one over the other, and I'd like to save them as one single file to the camera roll.
UIImageWriteToSavedPhotosAlbum(self.myimage.image, nil, nil, nil);
The images lie on top of each other and are the same size. The top one has some alpha so that the other one shows through.
You can do it this way:
Add both of your image views to one UIView, then take a screenshot of that UIView and store the generated image wherever you want.
Here is the code to take the screenshot:
// Code to take a screenshot
- (void)takeScreenshot
{
    // Replace self.view with the name of the view containing your image views
    UIGraphicsBeginImageContext(self.view.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];

    // Get a UIImage from the image context
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();

    // Clean up the drawing environment
    UIGraphicsEndImageContext();

    // Then save the image to your desired destination, e.g. the camera roll
    UIImageWriteToSavedPhotosAlbum(screenImage, nil, nil, nil);
}
Hope it helps. Happy coding!

Crop UIImage from a transformed UIImageView

I am letting the user capture an image from the camera or picking one from the library.
I display this image in a UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect, as this does not take the scaling into account. The transform is applied to the image view like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
This is driven by a gesture recognizer.
I assume what is happening is: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image, it does exactly that, with no regard to the scaling or positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage *)img toRect:(CGRect)rect {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale, rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.imageView.image.scale orientation:self.imageView.image.imageOrientation];
    // UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}
I pass in a cropRect from a view that is a subview of my main view (the square overlay box, like in UIImagePickerController). The main UIView has a UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get "what you see is what you get" cropping, and which factors must I take into account? Suggestions on whether I should implement the hierarchy or scaling differently are also welcome.
Try a simple trick. Apple has samples on its site that show how to zoom into a photo in code. Once you are done zooming, take the frame size of the bounding view and render into a graphics context of that size. E.g. a UIView contains a scroll view which holds the zoomed image; the scroll view zooms, and so does your image. Now take the frame size of your bounding UIView, create an image context from it, render into it, and save the result as a new image. Tell me if that makes sense.
Cheers :)
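A minimal sketch of that trick, assuming boundingView is the clipping view whose frame defines the crop and which contains the zoomed image view (the name is illustrative):
UIGraphicsBeginImageContextWithOptions(boundingView.bounds.size, NO, 0.0);
[boundingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();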

Merge two UIView into one UIImage

I've got two UIImageViews: the first one is lying on top of the other (e.g. an overlay).
I now want to take a screenshot of the whole thing.
Note that before this step, I allow the user to change the overlay by panning, scaling and ROTATING it, so I must keep track of their editing.
So, here's the homework:
rotating the context based on the view's transform rotation value
positioning at the origin where the user finished panning the overlay
calculating the size of the overlay view (it's always a rectangle, however!)
I'm gonna merge them inside a similar piece of code:
UIGraphicsBeginImageContext...
...
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but... what fits best in place of the "..."?
Example code is well accepted!
Thanks
UIGraphicsBeginImageContext(firstImage.size);
[firstImage drawAtPoint:CGPointMake(0,0)];
[secondImage drawAtPoint:CGPointMake(0,0)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
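That draws both images at the same origin, but it does not replay the overlay's pan/scale/rotate. Below is a hedged sketch of a transform-aware variant that renders the layers instead of raw images; baseView and overlayView are illustrative names for the two image views, and it assumes baseView sits at its superview's origin.
UIGraphicsBeginImageContextWithOptions(baseView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Draw the untouched base view first.
[baseView.layer renderInContext:ctx];

// Replay the overlay's position and transform on the context, then draw it.
// The transform is applied around the overlay's center (its default anchor point).
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, overlayView.center.x, overlayView.center.y);
CGContextConcatCTM(ctx, overlayView.transform);
CGContextTranslateCTM(ctx, -overlayView.bounds.size.width / 2.0, -overlayView.bounds.size.height / 2.0);
[overlayView.layer renderInContext:ctx];
CGContextRestoreGState(ctx);

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
A simpler alternative, if both image views share a common container view, is to render that container's layer directly, since renderInContext: composites subviews with their affine transforms applied.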
