Programmatic Screenshot with Camera View - iOS

I am using the code below to take a screenshot of the screen, but it renders the camera view as solid black (all other UI elements are fine; I need the camera view as well as the UI elements).
All answers to similar questions that I've found are either extremely old/deprecated, or in Swift. If anyone has a simple, Obj-C solution to this problem, it would be much appreciated.
Thanks!
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    UIGraphicsBeginImageContextWithOptions(self.view.window.bounds.size, NO, [UIScreen mainScreen].scale);
} else {
    UIGraphicsBeginImageContext(self.view.window.bounds.size);
}
[self.view.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(image);
if (imageData) {
    [imageData writeToFile:@"Screenshot.png" atomically:YES];
    NSLog(@"Screenshot write successful");
} else {
    NSLog(@"Error while taking screenshot");
}

If I understand correctly, this can help you capture the video frame: https://stackoverflow.com/a/37789235/1678018
You can then draw that frame into the screen image you already have from UIGraphicsGetImageFromCurrentImageContext().
For an example of drawing one UIImage on top of another, see: Add UIImage on top of another UIImage
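A minimal sketch of that compositing step; cameraImage (the current video frame, e.g. obtained via the linked answer) and uiImage (the screenshot from the question's code) are assumed to already exist. Note that if the window screenshot is opaque black over the preview area, you may need to snapshot only the overlay views rather than the whole window:
- (UIImage *)compositeCameraImage:(UIImage *)cameraImage withUI:(UIImage *)uiImage {
    CGSize size = uiImage.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, uiImage.scale);
    // Draw the camera frame first so the UI elements end up on top of it.
    [cameraImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    [uiImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}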

Related

Capture snapshot taken by iOS when going to background

Whenever an iOS application goes to the background, the system takes a snapshot of it to present in the multitask switcher. Is there any chance to get access to this screenshot? I know it corresponds to the top window, and that this could be edited from
- (void)applicationDidEnterBackground:(UIApplication *)application
But the results are not the ones expected, and I would like to know what exactly iOS is snapshotting.
Use this code to capture the snapshot:
#import <QuartzCore/QuartzCore.h>

if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imgData = UIImagePNGRepresentation(image);
if (imgData)
    [imgData writeToFile:@"screenshot.png" atomically:YES];
else
    NSLog(@"error while taking screenshot");
The App Programming Guide for iOS describes this snapshot behavior. Besides obscuring or replacing sensitive information before the snapshot is taken, on iOS 7 you can also call ignoreSnapshotOnNextApplicationLaunch, which according to its documentation prevents the snapshot from being used as the app's launch image.
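A minimal sketch of both steps in the app delegate; _coverView is a hypothetical ivar used only to obscure sensitive content while the app is snapshotted:
- (void)applicationDidEnterBackground:(UIApplication *)application {
    // Obscure sensitive UI before the system takes its snapshot.
    _coverView = [[UIView alloc] initWithFrame:self.window.bounds];
    _coverView.backgroundColor = [UIColor blackColor];
    [self.window addSubview:_coverView];
    // iOS 7+: don't reuse the snapshot as the launch image next time.
    [application ignoreSnapshotOnNextApplicationLaunch];
}

- (void)applicationWillEnterForeground:(UIApplication *)application {
    // Restore the real UI once the app is active again.
    [_coverView removeFromSuperview];
    _coverView = nil;
}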
Hope this will be helpful.

UIPickerView gets darker on screenshot

I had to alter the navigation under certain circumstances, and due to the complexity of the transitions I take a screenshot and display it until the transition is finished. In most cases that works pretty well, but there is one point that bothers me. I have a view controller with two picker views:
But the screenshot does not work well on this VC: the picker views come out noticeably darker.
The code that takes the screenshot is the following in both cases:
- (UIImage *)takeScreenshot {
    CALayer *layer = [[UIApplication sharedApplication] keyWindow].layer;
    UIGraphicsBeginImageContext(layer.frame.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext(); // the original snippet never closed the context
    return screenshot;
}
Does anyone know how this could happen?
You could try a different method for the screenshot. In iOS 7 Apple introduced methods for fast view snapshotting.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Here is an answer from Apple that provides more info on how the two methods work. While that user encountered some problems and was advised to use the old way of snapshotting the view, I never had any problem with it. Maybe they fixed it since then.
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
[data writeToFile:@"image.png" atomically:YES];
If you have a Retina display, replace the first line with the code below:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(self.window.bounds.size);

How to add several UIImages to one base UIImage in iOS

I know there has to be a way to take one base image (in my case a transparent 4096x4096 base layer) and add several dozen (up to 100) very small (100x100) images to it at different locations. BUT I cannot figure out how to do it. Assume for the following that imageArray has many elements, each containing an NSDictionary with the following structure:
@{
    @"imagename": imagename,
    @"x": xYDict[@"x"],
    @"y": xYDict[@"y"]
}
Now, here is the code (which is not working; all I get is a transparent image). The performance is STILL horrible, about 1-2 seconds per "image" in the loop, and memory use is massive.
NSString *PNGHeatMapPath = [[myGizmoClass dataDir] stringByAppendingPathComponent:@"HeatMap.png"];
@autoreleasepool
{
    CGSize theFinalImageSize = CGSizeMake(4096, 4096);
    NSLog(@"populateHeatMapJSON: creating blank image");
    UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 0.0);
    UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
    for (NSDictionary *theDict in imageArray)
    {
        NSLog(@"populateHeatMapJSON: adding: %@", theDict[@"imagename"]);
        UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
        [image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
    }
    UIGraphicsEndImageContext();
    NSData *imageData = UIImagePNGRepresentation(theFinalImage);
    theFinalImage = nil;
    [imageData writeToFile:PNGHeatMapPath atomically:YES];
    NSLog(@"populateHeatMapJSON: PNGHeatMapPath = %@", PNGHeatMapPath);
}
I KNOW I am misunderstanding UIGraphics contexts...can someone help me out? I am trying to superimpose a bunch of UIImages on the blank transparent canvas at certain x,y locations, get the final image and save it.
PS: If the simulator is anything to go by, this is also leaking like heck; it goes up to 2 or 3 GB...
Thanks in advance.
EDIT:
I did find the following and tried it, but it requires me to redraw the entire base image, copy it into a new one, and add to that new one. I was hoping for a "base image", "add this image", "add that image", "add the other image" flow that yields the new base image without all the copying:
Is this as good as it gets?
+ (UIImage *)drawImage:(UIImage *)fgImage
               inImage:(UIImage *)bgImage
               atPoint:(CGPoint)point
{
    UIGraphicsBeginImageContextWithOptions(bgImage.size, FALSE, 0.0);
    [bgImage drawInRect:CGRectMake(0, 0, bgImage.size.width, bgImage.size.height)];
    [fgImage drawInRect:CGRectMake(point.x, point.y, fgImage.size.width, fgImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
You are getting the image with UIGraphicsGetImageFromCurrentImageContext() before you have actually drawn anything. Move this call to after your loop and before the UIGraphicsEndImageContext() call.
Your drawing actually occurs on the line
[image drawInRect:CGRectMake([theDict[@"x"] doubleValue], [theDict[@"y"] doubleValue], image.size.width, image.size.height)];
The function UIGraphicsGetImageFromCurrentImageContext() gives you an image of the current contents of the context, so you need to ensure that you have done all your drawing before you grab the image.
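A sketch of the corrected flow with the question's variable names, grabbing the image only after the loop:
@autoreleasepool
{
    CGSize theFinalImageSize = CGSizeMake(4096, 4096);
    UIGraphicsBeginImageContextWithOptions(theFinalImageSize, NO, 0.0);
    for (NSDictionary *theDict in imageArray)
    {
        // imageNamed: caches every image for the lifetime of the app;
        // [UIImage imageWithContentsOfFile:] avoids the cache if memory stays high.
        UIImage *image = [UIImage imageNamed:theDict[@"imagename"]];
        [image drawInRect:CGRectMake([theDict[@"x"] doubleValue],
                                     [theDict[@"y"] doubleValue],
                                     image.size.width, image.size.height)];
    }
    // Only now is the context fully drawn, so grab the composite here.
    UIImage *theFinalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *imageData = UIImagePNGRepresentation(theFinalImage);
    [imageData writeToFile:PNGHeatMapPath atomically:YES];
}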

Screenshot should not capture the whole screen

I am making an application related to images and have multiple images on my screen. I take a screenshot of them, but it should not capture the whole screen: small parts at the very top and very bottom must be left out. I have a navigation bar on top and some buttons at the bottom, and I don't want to capture the buttons or the navigation bar in my screenshot image.
Below is my code for the screenshot.
- (UIImage *)screenshot
{
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, YES, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
After taking the screenshot, I use it in a Facebook share method with the code below:
UIImage *image12 =[self screenshot];
[mySLComposerSheet addImage:image12];
The easiest way to achieve this would be to add a UIView that holds all the content you want in the screenshot, and then call drawViewHierarchyInRect: on that view instead of on the main view.
Something like this:
- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(contentView.bounds.size, YES, [UIScreen mainScreen].scale);
    // Draw into bounds, not frame, so the content is not offset in the image.
    [contentView drawViewHierarchyInRect:contentView.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Hope this helps!
You can use the code below to take a screenshot of a view.
I have put in a condition to pick the crop size per device.
With this code the image is saved in your Documents folder, and from there you can share it on Facebook or anywhere else you want.
CGSize size = self.view.bounds.size;
CGRect cropRect;
if ([self isPad])
{
    cropRect = CGRectMake(110, 70, 300, 300);
}
else
{
    if (IS_IPHONE_5)
    {
        cropRect = CGRectMake(55, 25, 173, 152);
    }
    else
    {
        cropRect = CGRectMake(30, 25, 164, 141);
    }
}
/* Get the entire on-screen map as an image */
UIGraphicsBeginImageContext(size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Crop the desired region */
CGImageRef imageRef = CGImageCreateWithImageInRect(mapImage.CGImage, cropRect);
UIImage *cropImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
/* Save the cropped image to the photo album:
   UIImageWriteToSavedPhotosAlbum(cropImage, nil, nil, nil); */
// Save to the Documents folder
NSData *imageData = UIImageJPEGRepresentation(cropImage, 1.0);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *imagename = @"Pic.jpg";
NSString *fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imagename];
// NSLog(@"full path %@", fullPathToFile);
[imageData writeToFile:fullPathToFile atomically:NO];
Hope it helps you.
Use this code:
- (IBAction)captureScreen:(id)sender
{
    // Size the context to the view being rendered (the original used webview.frame.size).
    UIGraphicsBeginImageContext(self.view.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}
Sample project: www.cocoalibrary.blogspot.com
http://www.bobmccune.com/2011/09/08/screen-capture-in-ios-apps/
You can also use snapshotViewAfterScreenUpdates:, but it is only available in iOS 7.0 and later.
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
This method captures the current visual contents of the screen from the render server and uses them to build a new snapshot view. You can use the returned snapshot view as a visual stand-in for the screen’s contents in your app. For example, you might use a snapshot view to facilitate a full screen animation. Because the content is captured from the already rendered content, this method reflects the current visual appearance of the screen and is not updated to reflect animations that are scheduled or in progress. However, this method is faster than trying to render the contents of the screen into a bitmap image yourself.
https://developer.apple.com/library/ios/documentation/uikit/reference/UIScreen_Class/Reference/UIScreen.html#//apple_ref/occ/instm/UIScreen/snapshotViewAfterScreenUpdates:
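A minimal usage sketch; note that this returns a UIView to display on screen, not a UIImage you can write to disk:
// iOS 7+: capture the screen's current contents as a lightweight snapshot view.
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:NO];
// Use it as a visual stand-in, e.g. underneath a full-screen animation.
[self.view addSubview:snapshot];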

Save scene to camera roll

I'm working on an augmented reality app for iPhone and I'm using the sample code "ImageTargets" from the Vuforia SDK. I'm using my own images as templates and my own model to augment the scene (just a few vertices in OpenGL). The next thing I want to do is save the scene to the camera roll after pushing a button. I created the button as well as the method the button responds to. Here comes the tricky part: when I press the button the method gets called and the image is properly saved, but the image is completely white, showing only the button icon (like this http://tinypic.com/r/16c2kjq/5).
- (void)saveImage {
    UIGraphicsBeginImageContext(self.view.layer.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, self,
        @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
  contextInfo:(void *)contextInfo {
    NSLog(@"Image Saved");
}
I have these two methods in the ImageTargetsParentViewController class, but I also tried saving the view from ARParentViewController (and even moved the methods to that class). Has anyone found a solution to this? I'm not sure which view to save, and whether there are any tricky parts to saving a view that contains OpenGL ES. Thanks for any reply.
Try this code to save the photo:
- (void)saveImage {
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *imagee = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGRect rect = CGRectMake(0, 0, 320, 480);
    CGImageRef imageRef = CGImageCreateWithImageInRect([imagee CGImage], rect);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
}
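Note, though, that renderInContext: only captures what Core Animation itself rendered; it cannot see OpenGL ES content, which is likely why the saved image comes out white. A common alternative is to read the GL framebuffer with glReadPixels while your GL context is current, right after the frame has been rendered. A minimal sketch, assuming a framebuffer of `size` pixels:
#import <OpenGLES/ES2/gl.h>

- (UIImage *)snapshotOpenGLViewWithSize:(CGSize)size {
    GLint width = (GLint)size.width, height = (GLint)size.height;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength);
    // Read RGBA pixels from the currently bound framebuffer.
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    provider, NULL, true, kCGRenderingIntentDefault);
    // OpenGL's origin is bottom-left, so flip vertically while drawing.
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, 0.0, size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), iref);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(iref);
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(provider);
    free(data);
    return image;
}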
