Screenshot should not include the whole screen - iOS

I am making an application related to images. I have multiple images on my screen and I take a screenshot of it, but the screenshot should not include the whole screen.
A small part of the topmost and bottommost areas should not appear in it.
I have a navigation bar at the top and some buttons at the bottom, and I don't want to capture those buttons or the navigation bar in my screenshot image.
Below is my code for the screenshot:
-(UIImage *) screenshot
{
    // Render the whole view hierarchy into an image context
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, YES, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
After taking the screenshot I use it in my Facebook share method with the code below:
UIImage *image12 =[self screenshot];
[mySLComposerSheet addImage:image12];
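A minimal sketch of how mySLComposerSheet could be set up with the Social framework, assuming it is an SLComposeViewController (the initial text is just an example):

#import <Social/Social.h>

if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeFacebook]) {
    SLComposeViewController *mySLComposerSheet =
        [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeFacebook];
    [mySLComposerSheet setInitialText:@"My images"];   // example text
    UIImage *image12 = [self screenshot];
    [mySLComposerSheet addImage:image12];
    [self presentViewController:mySLComposerSheet animated:YES completion:nil];
}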

The easiest way to achieve this would be to add a UIView that holds all the content you want in the screenshot, and then call drawViewHierarchyInRect: on that UIView instead of on the main view.
Something like this:
-(UIImage *) screenshot {
    // contentView is the container holding only the content to capture
    UIGraphicsBeginImageContextWithOptions(contentView.bounds.size, YES, [UIScreen mainScreen].scale);
    [contentView drawViewHierarchyInRect:contentView.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Hope this helps!

You can use my code below to take a screenshot of a view.
I have added a condition to pick the crop rect depending on the device.
With this code the image is saved in your Documents folder, and from there you can share it on Facebook or anywhere else you want.
CGSize size = self.view.bounds.size;
CGRect cropRect;

if ([self isPad])
{
    cropRect = CGRectMake(110, 70, 300, 300);
}
else
{
    if (IS_IPHONE_5)
    {
        cropRect = CGRectMake(55, 25, 173, 152);
    }
    else
    {
        cropRect = CGRectMake(30, 25, 164, 141);
    }
}

/* Get the entire on-screen view as an image */
UIGraphicsBeginImageContext(size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

/* Crop the desired region */
CGImageRef imageRef = CGImageCreateWithImageInRect(mapImage.CGImage, cropRect);
UIImage *cropImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);

/* Save the cropped image to the photo album:
UIImageWriteToSavedPhotosAlbum(cropImage, nil, nil, nil); */

// Save to the Documents folder
NSData *imageData = UIImageJPEGRepresentation(cropImage, 1.0);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *imagename = @"Pic.jpg";
NSString *fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imagename];
//NSLog(@"full path %@", fullPathToFile);
[imageData writeToFile:fullPathToFile atomically:NO];
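If hard-coded crop rectangles feel fragile, the same idea can be driven by the actual bar sizes. A rough sketch, assuming iOS 7+ (for topLayoutGuide) and a hypothetical bottomBar outlet for the view that holds the bottom buttons; since UIGraphicsBeginImageContext creates a scale-1 context, points and pixels coincide here:

// Sketch: compute the crop rect from the real bar sizes instead of hard-coding it
CGFloat topInset = self.topLayoutGuide.length;   // status bar + navigation bar (iOS 7+)
CGFloat bottomInset = self.view.bounds.size.height - CGRectGetMinY(self.bottomBar.frame); // bottomBar is an assumed outlet
CGRect cropRect = CGRectMake(0, topInset,
                             self.view.bounds.size.width,
                             self.view.bounds.size.height - topInset - bottomInset);

UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fullImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGImageRef croppedRef = CGImageCreateWithImageInRect(fullImage.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);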
Hope it helps you.

Use this code:
-(IBAction)captureScreen:(id)sender
{
    // The context size should match the view being rendered
    UIGraphicsBeginImageContext(self.view.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}
Sample project: www.cocoalibrary.blogspot.com
http://www.bobmccune.com/2011/09/08/screen-capture-in-ios-apps/

You can use snapshotViewAfterScreenUpdates:, but it is only available in iOS 7.0 and later.
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
This method captures the current visual contents of the screen from the render server and uses them to build a new snapshot view. You can use the returned snapshot view as a visual stand-in for the screen’s contents in your app. For example, you might use a snapshot view to facilitate a full screen animation. Because the content is captured from the already rendered content, this method reflects the current visual appearance of the screen and is not updated to reflect animations that are scheduled or in progress. However, this method is faster than trying to render the contents of the screen into a bitmap image yourself.
https://developer.apple.com/library/ios/documentation/uikit/reference/UIScreen_Class/Reference/UIScreen.html#//apple_ref/occ/instm/UIScreen/snapshotViewAfterScreenUpdates:
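A minimal usage sketch; note that this returns a UIView (good as a visual stand-in for transitions), so if an actual UIImage is needed for sharing, drawViewHierarchyInRect:afterScreenUpdates: is the companion API:

// Cheap visual stand-in for the current screen contents (iOS 7+)
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:NO];
[self.view addSubview:snapshot];

// To obtain a UIImage instead (also iOS 7+):
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:NO];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();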

Related

Programmatic Screenshot with Camera View

I am using the code below to take a screenshot of the screen, but it replaces the camera view with solid black (all other UI elements are fine; I need the camera view as well as the UI elements).
All answers to similar questions that I've found are either extremely old/deprecated or in Swift. If anyone has a simple Objective-C solution to this problem, it would be much appreciated.
Thanks!
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    UIGraphicsBeginImageContextWithOptions(self.view.window.bounds.size, NO, [UIScreen mainScreen].scale);
} else {
    UIGraphicsBeginImageContext(self.view.window.bounds.size);
}
[self.view.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *imageData = UIImagePNGRepresentation(image);
if (imageData) {
    [imageData writeToFile:@"Screenshot.png" atomically:YES];
    NSLog(@"Screenshot write successful");
} else {
    NSLog(@"Error while taking screenshot");
}
If I understand correctly, this can help you capture the camera/video content: https://stackoverflow.com/a/37789235/1678018
You can then add that image onto the screen image (which you get with UIGraphicsGetImageFromCurrentImageContext).
An example of adding a UIImage on top of another UIImage: Add UIImage on top of another UIImage
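A minimal sketch of that compositing step (the method name is made up for illustration; both images are placeholders):

// Sketch: draw an overlay image on top of a base image and return the result
- (UIImage *)imageByCompositing:(UIImage *)overlay onTopOf:(UIImage *)base
{
    UIGraphicsBeginImageContextWithOptions(base.size, NO, base.scale);
    [base drawInRect:CGRectMake(0, 0, base.size.width, base.size.height)];
    [overlay drawInRect:CGRectMake(0, 0, base.size.width, base.size.height)];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}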

UIPickerView gets darker on screenshot

I had to alter the navigation under certain circumstances, and due to the complexity of the transitions I take a screenshot and display it until the transition has finished. In most cases that works pretty well, but there is one point that bothers me. I have a view controller with two picker views:
But the screenshot is not working well on this VC. I get this:
The code that takes the screenshot is the following in both cases:
- (UIImage *)takeScreenshot {
    CALayer *layer = [[UIApplication sharedApplication] keyWindow].layer;
    UIGraphicsBeginImageContext(layer.frame.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
Does anyone know how this could happen?
You could try a different method for the screenshot. In iOS 7 Apple introduced some methods for fast view snapshotting.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Here is an answer from Apple that provides more info on how the two methods work. While that user encountered a problem and was advised to use the old way of snapshotting the view, I never had any problem with it. Maybe they have fixed it since then.
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
[data writeToFile:@"image.png" atomically:YES];
If you have a retina display, replace the first line with the code below:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(self.window.bounds.size);
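Alternatively, passing 0 as the scale argument to UIGraphicsBeginImageContextWithOptions uses the device's main screen scale automatically, so the respondsToSelector: check is not strictly needed:

UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, 0); // 0 means "use the main screen's scale"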

How to add several UIImages to one base UIImage in iOS

I know there has to be a way to take one base image (in my case a transparent 4096x4096 base layer) and add several dozen (up to 100) very small (100x100) images to it at different locations. But I cannot figure out how to do it. Assume for the following that imageArray has many elements, each containing an NSDictionary with the following structure:
@{
    @"imagename": imagename,
    @"x": xYDict[@"x"],
    @"y": xYDict[@"y"]
}
Now, here is the code (which is not working; all I get is a transparent image), and the performance is still horrible: about 1-2 seconds per image in the loop, plus massive memory use.
NSString *PNGHeatMapPath = [[myGizmoClass dataDir] stringByAppendingPathComponent:@"HeatMap.png"];

@autoreleasepool
{
    CGSize theFinalImageSize = CGSizeMake(4096, 4096);
    NSLog(@"populateHeatMapJSON: creating blank image");
    UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 0.0);
    UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
    for (NSDictionary *theDict in imageArray)
    {
        NSLog(@"populateHeatMapJSON: adding: %@", theDict[@"imagename"]);
        UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
        [image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
    }
    UIGraphicsEndImageContext();
    NSData *imageData = UIImagePNGRepresentation(theFinalImage);
    theFinalImage = nil;
    [imageData writeToFile:PNGHeatMapPath atomically:YES];
    NSLog(@"populateHeatMapJSON: PNGHeatMapPath = %@", PNGHeatMapPath);
}
I know I am misunderstanding UIGraphics contexts; can someone help me out? I am trying to superimpose a bunch of UIImages on a blank transparent canvas at certain x,y locations, get the final image, and save it.
PS: If the simulator is anything to go by, this is also leaking badly; memory use climbs to 2 or 3 GB.
Thanks in advance.
EDIT:
I did find the method below and tried it, but it requires redrawing the entire base image and copying it into a new image for every small image added. I was hoping for something like "take the base image", "add this image", "add that image", "add the other image", and get the new base image back without all the copying:
Is this as good as it gets?
+(UIImage*) drawImage:(UIImage*) fgImage
inImage:(UIImage*) bgImage
atPoint:(CGPoint) point
{
UIGraphicsBeginImageContextWithOptions(bgImage.size, FALSE, 0.0);
[bgImage drawInRect:CGRectMake( 0, 0, bgImage.size.width, bgImage.size.height)];
[fgImage drawInRect:CGRectMake( point.x, point.y, fgImage.size.width, fgImage.size.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
You are getting the image with UIGraphicsGetImageFromCurrentImageContext() before you have actually drawn anything. Move this call to after your loop and before UIGraphicsEndImageContext().
Your drawing actually occurs on the line
[image drawInRect:CGRectMake([theDict[#"x"] doubleValue],[theDict[#"y"] doubleValue],image.size.width,image.size.height)];
The function UIGraphicsGetImageFromCurrentImageContext() gives you an image of the current contents of the context, so you need to ensure that you have done all your drawing before you grab the image.
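Putting that together, a corrected sketch of the loop (same names as in the question; the explicit 1.0 scale is an added assumption so the bitmap stays 4096x4096 pixels on retina devices):

UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 1.0); // scale 1.0 keeps the bitmap at 4096x4096 pixels
for (NSDictionary *theDict in imageArray)
{
    UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
    [image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
}
UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext(); // grab the image only after all drawing is done
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(theFinalImage);
[imageData writeToFile:PNGHeatMapPath atomically:YES];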

Merge multiple views into one UIView

In my app I have to take a picture and add the following information on top of it:
Weather forecast
Temperature
GPS location
So far I have obtained this information using GPS and a web service for the weather forecast (OpenWeatherMap). I did it like this:
I take the picture with the standard UIImagePicker
I put a button on my interface to show the picture to the user
When the user presses the button, the app opens a new view controller in which I show the picture just taken, and I added two UILabels (one for the temperature and one for the location) and a UIImageView (to show an icon for the weather forecast). The labels and the image view are laid out directly in the storyboard.
Now I need to merge the picture with the two UILabels and the UIImageView. Is there a way to merge them into a single UIImageView?
I have to do this to save the picture together with the weather forecast and location.
UPDATE
I created a button to save the picture with the labels and the image view, and the code I wrote is this:
- (IBAction)buttonSavePicture:(UIButton *)sender {
[self.imageView addSubview:self.labelPlace];
[self.imageView addSubview:self.labelTemperature];
[self.imageView addSubview:self.imageViewWeather];
UIGraphicsBeginImageContext(self.imageView.bounds.size);
[self.imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentationDirectory, NSUserDomainMask, YES);
NSString *filePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:self.filename];
[UIImageJPEGRepresentation(finalImage, 1) writeToFile:filePath atomically:YES];
}
But when I look in the Documents directory to check whether the picture was saved correctly, I can't find it.
Yes, you can easily do it by capturing them. Follow these steps:
Create a small parent view in the storyboard and put all the controls you want to capture inside it. Create an outlet, say captureView.
Call the following method when you need it:
-(void)capture{
UIGraphicsBeginImageContext(self.captureView.bounds.size);
[self.captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//FINAL OUTPUT
self.imageView.image=capturedImage;
}
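If you also want the captured image written to disk, a small sketch follows; note that it uses NSDocumentDirectory, not NSDocumentationDirectory, which is a different search path (the file name is just an example):

// Sketch: write the captured image as a JPEG into the app's Documents directory
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *filePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"capture.jpg"];
[UIImageJPEGRepresentation(capturedImage, 1.0) writeToFile:filePath atomically:YES];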
Cheers.
If you are using iOS 7, have a look at snapshotViewAfterScreenUpdates: and drawViewHierarchyInRect:afterScreenUpdates:. The first returns a single UIView of everything on screen; the second is used to include and capture subviews such as labels, and you can then save the result as a UIImage.
CGSize imgSize = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height);
UIGraphicsBeginImageContextWithOptions(imgSize, NO , 0.0f);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *weatherImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(weatherImage, nil, nil, nil); // save to the Saved Photos album
If everything went right, you should have your "screenshot" in the photo album.
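If you prefer the iOS 7 snapshot API mentioned above, the renderInContext: line can be swapped for drawViewHierarchyInRect:afterScreenUpdates: (a sketch using the same imgSize as above):

UIGraphicsBeginImageContextWithOptions(imgSize, NO, 0.0f);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES]; // captures subviews such as labels (iOS 7+)
UIImage *weatherImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(weatherImage, nil, nil, nil); // save to the Saved Photos album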

Save scene to camera roll

I'm working on an augmented reality app for iPhone and I'm using the "ImageTargets" sample code from the Vuforia SDK. I'm using my own images as targets and my own model to augment the scene (just a few vertices in OpenGL). The next thing I want to do is save the scene to the camera roll when a button is pushed. I created the button as well as the method the button responds to. Here comes the tricky part: when I press the button the method gets called and an image is saved, but the image is completely white, showing only the button icon (like this: http://tinypic.com/r/16c2kjq/5).
- (void)saveImage {
    UIGraphicsBeginImageContext(self.view.layer.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(viewImage, self,
        @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
    contextInfo:(void *)contextInfo {
    NSLog(@"Image Saved");
}
I have these two methods in the ImageTargetsParentViewController class, but I also tried saving the view from ARParentViewController (and even moved the methods to that class). Has anyone found a solution to this? I'm not sure which view to save, or whether there are any tricky parts to saving a view that contains OpenGL ES content. Thanks for any reply.
Try to use this code to save the photo:
- (void)saveImage {
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *imagee = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Crop to the visible screen area before saving
    CGRect rect = CGRectMake(0, 0, 320, 480);
    CGImageRef imageRef = CGImageCreateWithImageInRect([imagee CGImage], rect);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
}
