Screenshot at a required CGRect of a UIView - iOS

I made a 3x3 cell layout with UIViews, so there are 9 UIViews.
I am trying to screenshot these 9 UIViews individually, but whatever I try, I can only screenshot the first UIView.
Here is a screenshot of the main view that holds the 9 subviews:
I want to screenshot the second tile by passing _tile2, but the result I get is always _tile1.
Here is the code:
[self saveImage:[self screenshotTile:_tile2]]; // _tile1, _tile2, ..., _tile9 all give the same result
What I expect: if I pass _tile1, it should screenshot the _tile1 area of self.view, and if I pass _tile2, it should screenshot the _tile2 area of self.view. But whatever I pass, it only captures the _tile1 area of the view.
- (void)saveImage:(UIImage *)img {
    UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
}

- (UIImage *)screenshotTile:(UIView *)imgV {
    UIGraphicsBeginImageContext(imgV.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    _tileImg1 = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return _tileImg1;
}
I tried this solution, which suggests shifting the image after taking the screenshot, but it didn't work either:
UIGraphicsBeginImageContext(sshot.frame.size);
[sourceImage drawAtPoint:CGPointMake(-50, -100)]; // I tried -imgV.frame.origin.x and -imgV.frame.origin.y, but it didn't work for me

You pass a tile to the screenshotTile: method, but in that method you never actually use the tile. You probably want to send renderInContext: to imgV.layer, not to self.view.layer.
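A minimal sketch of that suggestion (it assumes each tile draws its own content; if the content actually lives in self.view, see the translate-CTM approach in the accepted answer below):

- (UIImage *)screenshotTile:(UIView *)imgV {
    UIGraphicsBeginImageContext(imgV.frame.size);
    // Render the tile's own layer instead of the whole parent view's layer.
    [imgV.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *tileImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tileImage;
}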

After a few tries, I got it working with this code:
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, -imgV.frame.origin.x, -imgV.frame.origin.y);
[self.view.layer renderInContext:c];
I need to shift the context's x and y axes according to the tile's frame. The context always starts at (0, 0), so I need to translate it by the tile's origin before rendering.
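Put together, a sketch of the complete method built on that idea (it assumes the tiles are direct subviews of self.view, so their frames are in self.view's coordinate space):

- (UIImage *)screenshotTile:(UIView *)imgV {
    UIGraphicsBeginImageContext(imgV.frame.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    // Shift the context so the tile's origin maps to (0, 0) in the bitmap.
    CGContextTranslateCTM(c, -imgV.frame.origin.x, -imgV.frame.origin.y);
    [self.view.layer renderInContext:c];
    UIImage *tileImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tileImage;
}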
While doing this, I ran into one weird error that stopped the images from saving to the camera roll:
Connection to assetsd was interrupted or assetsd died
I am not sure why or how it occurred. After a few more tries the error disappeared, and I don't even know how to reproduce it.

Related

How to optimize memory usage in UIImage

I am trying to take a screenshot of a UIWebView and send it with an observer to a UIImageView in another class.
I am using this method to take the screenshot:
- (UIImage *)takeScreenshoot {
    @autoreleasepool {
        UIGraphicsBeginImageContext(CGSizeMake(self.view.frame.size.width, self.view.frame.size.height));
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.webPage.layer renderInContext:context];
        UIImage *__weak screenShot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return screenShot;
    }
}
But then I have a problem. Every time I take a screenshot with this method, memory usage grows by about 10-15 MB and is never released. And since I take a screenshot in every webViewDidFinishLoad, you can imagine how much memory that consumes.
How can I fix that issue?
If possible, try to use -[UIScreen snapshotViewAfterScreenUpdates:], which returns a UIView.
This is a snapshot of the currently displayed content (a snapshot of the app).
Apple even says this method "is faster than trying to render the contents of the screen into a bitmap image yourself."
According to your code, you are passing this bitmap image just to display it in some other UIImageView, so I think using the UIScreen method is appropriate here.
To display the UIWebView part only:
Create another UIView instance and set its frame to the frame of your web view.
Now add the snapshot view as a subview of that created view, positioned so that only the web view portion is visible, as in the sketch below.
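A minimal sketch of that idea, assuming iOS 7+ and that self.webPage is the web view (every name other than the UIKit calls is an assumption):

// Snapshot the whole screen as a lightweight view (no bitmap is allocated).
UIView *screenSnapshot = [[UIScreen mainScreen] snapshotViewAfterScreenUpdates:NO];

// Container clipped to the web view's frame, so only that region shows through.
UIView *webViewSnapshot = [[UIView alloc] initWithFrame:self.webPage.frame];
webViewSnapshot.clipsToBounds = YES;

// Offset the screen snapshot so the web view's area lines up with the container.
CGRect webRectOnScreen = [self.webPage convertRect:self.webPage.bounds toView:nil];
screenSnapshot.frame = CGRectMake(-webRectOnScreen.origin.x,
                                  -webRectOnScreen.origin.y,
                                  screenSnapshot.frame.size.width,
                                  screenSnapshot.frame.size.height);
[webViewSnapshot addSubview:screenSnapshot];
// webViewSnapshot can now be added wherever the snapshot should appear.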
Try calling CGContextRelease(context); after you have got your screenshot.
Or, as @Greg said, remove that line and use UIGraphicsGetCurrentContext() directly.

Why does my programmatically created screenshot look so bad on iOS 7?

I am trying to implement sharing from my app to Facebook.
I used this code to take the screenshot:
CGSize imageSize = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height);
UIGraphicsBeginImageContext(imageSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
It works great on iOS 6, but on iOS 7 the image looks very bad.
I used this answer: iOS: what's the fastest, most performant way to make a screenshot programmatically?
to try to fix it, and it helped, but the screenshot still looks bad.
The screenshot has a different color cast, and some objects (like labels) don't show up in the captured image.
Any help?
---- Update ----
I managed to fix most of the missing objects by changing their properties to retain (strong) instead of weak. My main remaining problem is my table view, which shows up as a big white block (it is supposed to be transparent, with labels in white text, so all you see is white cells). I tried setting the table's background to clearColor, but that didn't help.
---- Last Update ----
There are wonderful answers here, but they don't really address my situation. I wanted to make it work on a device running iOS 7 without switching to the iOS 7 SDK, since changing the project SDK at this point, when the project is almost done, takes too much effort.
Anyway, here is the piece of code that finally solved my issue.
This change alone solved the problem:
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0f);
instead of:
UIGraphicsBeginImageContext(imageSize);
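For completeness, a sketch of the capture with that one-line change applied (same variable names as in the question):

CGSize imageSize = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height);
// Passing 0.0 for the scale uses the device's screen scale, which fixes the
// poor-looking result on Retina devices under iOS 7.
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0f);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();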
New APIs were added in iOS 7 that provide an efficient way of getting a snapshot:
snapshotViewAfterScreenUpdates: renders the view into a UIView with unmodifiable content
resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets: the same thing, but with resizable insets
drawViewHierarchyInRect:afterScreenUpdates: the same thing if you need all subviews to be drawn too (like labels, buttons...)
You can use the returned UIView for any UI effect, or render it into an image as you did if you need to export it; see the sketch below.
I don't know how well these new methods perform versus the one you provided (although I remember Apple engineers saying the new API is more efficient).
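A minimal sketch of the view-based variant mentioned above (no bitmap is created unless you later render it yourself):

// Lightweight snapshot of the current view, usable like any other UIView.
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:YES];
snapshot.frame = self.view.bounds;
[self.view addSubview:snapshot]; // e.g. to freeze the UI behind a transition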
You can try this:
- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
On iOS 8, this is how I take a screenshot:
1. Added a UIImageView outlet and a method to take the screenshot in the .h file:
@property (weak, nonatomic) IBOutlet UIImageView *imageView;
- (IBAction)takeSnapShot:(id)sender;
2. Added a code snippet in the .m file that takes the screenshot and sets it on the UIImageView:
- (IBAction)takeSnapShot:(id)sender
{
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapShotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.imageView.image = snapShotImage;
}
Below is the output I got.
On iOS 7 you can get glitches if you use
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES]
during an ongoing animation. Set afterScreenUpdates:NO to get rid of the glitches.
Make sure that opaque is set to NO, as in the sketch below.
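A minimal sketch of that advice (the view name is a placeholder):

UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO /* opaque = NO */, 0.0);
// afterScreenUpdates:NO avoids the glitches seen during ongoing animations on iOS 7.
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();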

Taking a Screenshot (UIImage) from UIView takes far too long

I have the following method to take a screenshot (UIImage) of a UIView, and it is far too slow:
+ (UIImage *)imageWithView:(UIView *)view
{
    CGSize size = view.bounds.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
On my iPad I have an app that needs this method to make a copy of a view that is dragged and dropped. The view has rounded corners and therefore is not opaque (which, I found out, makes no difference even if I set the opaque parameter to YES).
The view being captured also contains a UITableView with fairly complex entries.
Do you have any suggestions on how I can improve the speed of the screenshotting? Right now, for a slightly larger table view (maybe 20 entries), it takes about one second.
The view is already on screen and rendered correctly, so I just need the pixels to put into a UIImageView.
I need to support iOS 6+.
I use this same code to take screenshots of really complex views. I think your bottleneck is using a big image for the drag & drop; maybe you can resize the UIImage.
In my case the performance on an iPad 2 is about 100 ms per screenshot.
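If resizing the snapshot is an option, a hedged sketch of downscaling the captured UIImage (the helper name and the 0.5 factor are just examples):

// Downscale an existing UIImage to reduce the cost of moving it around during drag & drop.
- (UIImage *)downscaledImage:(UIImage *)image byFactor:(CGFloat)factor
{
    CGSize targetSize = CGSizeMake(image.size.width * factor, image.size.height * factor);
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}

// Usage: UIImage *dragImage = [self downscaledImage:snapshot byFactor:0.5];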

Crop an area of oversized image to what is currently showing onscreen

I have an oversized image loaded in an image view that goes out of bounds both vertically and horizontally.
The end user can scroll around the image (the oversized image view is in a scroll view), and when they find an area that they like, I would like to crop out the part of the image that is shown on the screen (much like a screenshot, but only of the imageView.image). I am then going to put that into a different image view.
I can't work out how to take this "screenshot" of the area of the image view's image that is currently showing on the screen.
You can use CGImageCreateWithImageInRect to create a subimage of the displayed image. Use contentOffset and the scroll view's bounds to create the rect from which you want to create the image.
CGRect rect = CGRectMake(scrollView.contentOffset.x, scrollView.contentOffset.y, CGRectGetWidth(scrollView.bounds), CGRectGetHeight(scrollView.bounds));
CGImageRef subImageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
If you zoom your scroll view, you will need to take the zoomScale into account too; see the sketch below.
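For example, a sketch that divides by the zoom scale (assuming the image is shown at its natural pixel size at zoomScale 1.0):

CGFloat zoom = scrollView.zoomScale;
CGRect rect = CGRectMake(scrollView.contentOffset.x / zoom,
                         scrollView.contentOffset.y / zoom,
                         CGRectGetWidth(scrollView.bounds) / zoom,
                         CGRectGetHeight(scrollView.bounds) / zoom);
CGImageRef subImageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
UIImage *visibleImage = [UIImage imageWithCGImage:subImageRef];
CGImageRelease(subImageRef); // CGImageCreateWithImageInRect returns a +1 reference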
I ended up using the following code to achieve what I was looking for. Thank you to Karl for his input, and thank you to iNoob, whose answer to a previous question [located here on Stack Overflow][1] I used for mine.
Just use the code below to take a "screenshot". Set anything you don't want in the image to hidden = YES before the code to hide it from the screenshot, and set it back to hidden = NO afterwards to bring it back.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
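For example, to exclude a particular view from the capture (the toolbar property here is just a placeholder):

self.toolbar.hidden = YES;   // hide anything you don't want in the screenshot

UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

self.toolbar.hidden = NO;    // bring it back afterwards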

Merge two UIView into one UIImage

I've got two UIImageViews: the first one is lying on top of the other (e.g. an overlay).
I now want to take a screenshot of the whole thing.
Note that before that step, I allow the user to change the overlay by panning, scaling and rotating it, so I must keep track of their edits.
So, here's the homework:
rotate the context based on the view's transform rotation value
position it at the origin where the user finished panning the overlay
calculate the size of the overlay view (it's always a rectangle, however!)
I'm going to merge them inside a similar piece of code:
UIGraphicsBeginImageContext...
...
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but... what best fits in place of the "..."?
Example code is very welcome!
Thanks
UIGraphicsBeginImageContext(firstImage.size);
[firstImage drawAtPoint:CGPointMake(0,0)];
[secondImage drawAtPoint:CGPointMake(0,0)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
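If the overlay's pan/scale/rotate state is already applied to its view's transform, another hedged option is to render the common superview's layer, which bakes those transforms in for you (this assumes both views share a container view, here called containerView):

UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0);
// renderInContext: draws the container and all of its subviews, including the
// overlay with its current affine transform, so no manual rotation math is needed.
[containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();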
