Save scene to camera roll - iOS

I'm working on an augmented reality app for iPhone and I'm using the "ImageTargets" sample code from the Vuforia SDK. I use my own images as targets and my own model to augment the scene (just a few vertices in OpenGL). The next thing I want to do is save the scene to the camera roll when a button is pushed. I created the button as well as the method the button responds to. Here comes the tricky part: when I press the button the method gets called and an image is saved, but the image is completely white, showing only the button icon (like this: http://tinypic.com/r/16c2kjq/5).
- (void)saveImage {
    UIGraphicsBeginImageContext(self.view.layer.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, self,
        @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
    contextInfo:(void *)contextInfo {
    NSLog(@"Image Saved");
}
I have these two methods in the ImageTargetsParentViewController class, but I also tried saving the view from ARParentViewController (and even moved the methods to that class). Has anyone found a solution to this? I'm not sure which view to save, and whether there are any tricky parts to saving a view that contains OpenGL ES content. Thanks for any reply.
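The white image is most likely because renderInContext: only captures what Core Animation composites; the contents of a CAEAGLLayer are drawn by OpenGL ES and come back blank. A common workaround is to read the pixels straight out of the GL framebuffer with glReadPixels and build a UIImage from them. Below is a minimal sketch of that approach (not Vuforia-specific); it assumes your EAGL context is current, the framebuffer you draw into is still bound (so you typically call it from the render loop right after drawing, before the buffer is presented), and backingWidth/backingHeight are placeholders for your renderbuffer's dimensions:

#import <OpenGLES/ES2/gl.h> // or ES1/gl.h, depending on your context

- (UIImage *)snapshotGLViewWithWidth:(NSInteger)backingWidth height:(NSInteger)backingHeight {
    // Read the raw RGBA pixels out of the currently bound framebuffer
    NSInteger dataLength = backingWidth * backingHeight * 4;
    GLubyte *buffer = (GLubyte *)malloc(dataLength);
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, (GLsizei)backingWidth, (GLsizei)backingHeight,
                 GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, dataLength, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(backingWidth, backingHeight, 8, 32, backingWidth * 4,
                                       colorSpace,
                                       kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);

    // OpenGL's origin is bottom-left, UIKit's is top-left, so flip vertically
    UIGraphicsBeginImageContext(CGSizeMake(backingWidth, backingHeight));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0.0, backingHeight);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0, 0, backingWidth, backingHeight), cgImage);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    free(buffer);
    return image;
}

The resulting UIImage can then be passed to UIImageWriteToSavedPhotosAlbum as in the code above.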

Try using this code to save the photo:
- (void)saveImage {
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *imagee = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Crop to the screen rect (hard-coded here for a 320x480 display)
    CGRect rect = CGRectMake(0, 0, 320, 480);
    CGImageRef imageRef = CGImageCreateWithImageInRect([imagee CGImage], rect);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
}

Related

Core Graphics image blending doesn't release memory

I am blending two images with the following code:
+ (UIImage *)xxx_blendedImageWithFirstImage:(UIImage *)image
                                secondImage:(UIImage *)secondImage
                            renderedInFrame:(CGRect)frame
                                      alpha:(CGFloat)alpha {
    UIGraphicsBeginImageContextWithOptions(frame.size, NO, UIScreen.mainScreen.scale);
    [image drawInRect:frame];
    [secondImage drawInRect:frame blendMode:kCGBlendModeNormal alpha:alpha];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This code is called in an IBAction for a UISlider, so it runs on every slider position change. The memory footprint climbs to about 230 MB and then the app fails due to memory pressure (screenshot omitted).
How can I make this code work properly?
Perform this action on a background thread if you have connected the IBAction to the Value Changed event in the xib, and make sure to do all UI changes on the main thread.
Otherwise, connect the IBAction to the Touch Up Inside event instead; then the method is called once when you release the slider after a drag, not for every value change during the drag.
Hope it will solve your problem.
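As a sketch of that background-thread suggestion (the firstImage, secondImage, and imageView properties here are hypothetical, not from the original question):

- (IBAction)sliderValueChanged:(UISlider *)slider {
    CGFloat alpha = slider.value;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Blend off the main thread (UIGraphics image contexts are
        // thread-safe since iOS 4)
        UIImage *blended = [UIImage xxx_blendedImageWithFirstImage:self.firstImage
                                                       secondImage:self.secondImage
                                                   renderedInFrame:self.imageView.bounds
                                                             alpha:alpha];
        dispatch_async(dispatch_get_main_queue(), ^{
            // All UI updates stay on the main thread
            self.imageView.image = blended;
        });
    });
}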
After adding @autoreleasepool:
+ (UIImage *)xxx_blendedImageWithFirstImage:(UIImage *)image
                                secondImage:(UIImage *)secondImage
                            renderedInFrame:(CGRect)frame
                                      alpha:(CGFloat)alpha {
    UIImage *newImage = nil;
    @autoreleasepool {
        UIGraphicsBeginImageContextWithOptions(frame.size, NO, UIScreen.mainScreen.scale);
        [image drawInRect:frame];
        [secondImage drawInRect:frame blendMode:kCGBlendModeNormal alpha:alpha];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return newImage;
}
Maybe there should be some manual release somewhere, but the memory footprint now looks much healthier (screenshot omitted).

Screenshot should not capture the whole screen

I am making an application related to images, with multiple images on my screen. I take a screenshot of that screen, but it should not capture the whole screen.
A small part of the topmost and bottommost areas should not be shown: I have a navigation bar on top and some buttons at the bottom, and I don't want to capture those in my screenshot image.
Below is my code for the screenshot.
-(UIImage *)screenshot
{
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, YES, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
After taking the screenshot I use it in the Facebook share method with the code below:
UIImage *image12 =[self screenshot];
[mySLComposerSheet addImage:image12];
The easiest way to achieve this would be to add a UIView that holds all the content you want in the screenshot, and then call drawViewHierarchyInRect: on that UIView instead of the main view.
Something like this:
-(UIImage *)screenshot {
    // contentView holds only the content to capture (no nav bar or buttons)
    UIGraphicsBeginImageContextWithOptions(contentView.bounds.size, YES, [UIScreen mainScreen].scale);
    [contentView drawViewHierarchyInRect:contentView.frame afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Hope this helps!
You can use my code below to take a screenshot of a view.
It includes a condition to pick the crop size per device.
With this code the image is saved in your Documents folder, and from there you can share it on Facebook or anywhere else you want.
CGSize size = self.view.bounds.size;
CGRect cropRect;
// isPad and IS_IPHONE_5 are the author's own device-check helpers
if ([self isPad])
{
    cropRect = CGRectMake(110, 70, 300, 300);
}
else
{
    if (IS_IPHONE_5)
    {
        cropRect = CGRectMake(55, 25, 173, 152);
    }
    else
    {
        cropRect = CGRectMake(30, 25, 164, 141);
    }
}

/* Get the entire on-screen view as an image */
UIGraphicsBeginImageContext(size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

/* Crop the desired region */
CGImageRef imageRef = CGImageCreateWithImageInRect(mapImage.CGImage, cropRect);
UIImage *cropImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);

/* Save the cropped image to the photo album:
UIImageWriteToSavedPhotosAlbum(cropImage, nil, nil, nil); */

// Save to the Documents folder
NSData *imageData = UIImageJPEGRepresentation(cropImage, 1.0);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *imagename = [NSString stringWithFormat:@"Pic.jpg"];
NSString *fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imagename];
//NSLog(@"full path %@", fullPathToFile);
[imageData writeToFile:fullPathToFile atomically:NO];
Hope it helps you.
Use this code:
-(IBAction)captureScreen:(id)sender
{
    // The context size should match the layer being rendered
    UIGraphicsBeginImageContext(self.view.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}
Sample project: www.cocoalibrary.blogspot.com
See also: http://www.bobmccune.com/2011/09/08/screen-capture-in-ios-apps/
You can use snapshotViewAfterScreenUpdates:, but it is only available in iOS 7.0 and later.
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
This method captures the current visual contents of the screen from the render server and uses them to build a new snapshot view. You can use the returned snapshot view as a visual stand-in for the screen’s contents in your app. For example, you might use a snapshot view to facilitate a full screen animation. Because the content is captured from the already rendered content, this method reflects the current visual appearance of the screen and is not updated to reflect animations that are scheduled or in progress. However, this method is faster than trying to render the contents of the screen into a bitmap image yourself.
https://developer.apple.com/library/ios/documentation/uikit/reference/UIScreen_Class/Reference/UIScreen.html#//apple_ref/occ/instm/UIScreen/snapshotViewAfterScreenUpdates:
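For completeness, snapshotViewAfterScreenUpdates: returns a UIView, not a UIImage, so it suits visual stand-ins rather than saving to the camera roll. A small sketch of both options on iOS 7+:

// Fast visual stand-in (no bitmap produced)
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:NO];
snapshot.frame = self.view.bounds;
[self.view addSubview:snapshot];

// If you need an actual UIImage, drawViewHierarchyInRect: is the
// bitmap-producing counterpart (also iOS 7+)
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();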

Crop Image using CGRect

I have been trying to do this forever. I have a camera overlay, and I want my final image to be the part of the photo visible inside the overlay's square.
What I did was make a CGRect with dimensions equal to the square in the camera overlay. Then I tried cropping the image using this function.
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
I called it like this:
CGRect rect = CGRectMake(10, 72, 300, 300);
UIImage *realImage = [self imageByCropping:[self.capturedImages objectAtIndex:0] toRect:rect];
What I get is a poor-quality image with the wrong orientation.
::EDIT::
With Nitin's answer I can crop the correct part of the screen, but the problem is that it crops the view that follows the camera view, the confirmation view. I suspect this is because Nitin's code uses
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
and the view controller in which all this happens is the view controller for the confirmation view. I will try to explain this with a small map:
CameraOverlay.xib (used to create the overlay) <===== CameraOverlayViewController ---------> ConfirmationView
So when the view controller is first invoked (via a button on the tab bar), it opens the camera (UIImagePickerController) with the overlay over it. Once the user takes a picture, the image is shown on the ConfirmationView.
What I think is happening is that when these lines are executed, the view at that time is the ConfirmationView:
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 1.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
Note: I call the function in the imagePickerController:didFinishPickingMediaWithInfo: method.
Refer to the Drawing and Printing Guide for iOS.
The default coordinate system is different between Core Graphics and UIKit; I think your issue is caused by this.
Using these may help you solve it:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0.0, rect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
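Building on that, here is a hedged variant of imageByCropping:toRect: that also accounts for the image's scale and orientation, the usual causes of "poor quality, wrong orientation" when cropping a camera photo. It assumes the crop rect is given in the image's point coordinate space; a fully orientation-correct crop would first redraw the image upright, but this covers the common case:

- (UIImage *)imageByCroppingRespectingScale:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    // CGImage coordinates are in pixels, while UIKit rects are in points
    CGFloat scale = imageToCrop.scale;
    CGRect pixelRect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                                  rect.size.width * scale, rect.size.height * scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(imageToCrop.CGImage, pixelRect);
    // Re-apply the source's scale and orientation so UIKit displays the crop upright
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef
                                                scale:scale
                                          orientation:imageToCrop.imageOrientation];
    CGImageRelease(imageRef);
    return croppedImage;
}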

Resizing UIImage to post to Twitter sheet - iOS

I am trying to resize my image in order to attach it to a Twitter sheet, but I am getting the error "No known class method for selector 'imageWithImage:scaledToSize:'".
- (void)twitterButtonPressed {
    UIImage *iconImage = [UIImage imageNamed:@"male_small_0.png"];
    // I am having a problem with the following line
    UIImage *iconImage2 = [UIImage imageWithImage:iconImage scaledToSize:CGSizeMake(73.0, 73.0)];
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
You call imageWithImage:scaledToSize: on UIImage, but your method is implemented in what I assume is your view controller. To make it work, change twitterButtonPressed to:
- (void)twitterButtonPressed {
    UIImage *iconImage = [UIImage imageNamed:@"male_small_0.png"];
    // Call the method on self, where it is actually implemented
    UIImage *iconImage2 = [self imageWithImage:iconImage scaledToSize:CGSizeMake(73.0, 73.0)];
}
A better solution would be to create a category on UIImage with imageWithImage:scaledToSize: in it. Then, when you import this category, you don't need the method in your view controller anymore and you can leave twitterButtonPressed as-is and it'll work.
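A sketch of that category, reusing the method body from the question (the UIImage+Scaling file name is just a convention):

// UIImage+Scaling.h
@interface UIImage (Scaling)
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize;
@end

// UIImage+Scaling.m
@implementation UIImage (Scaling)
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
@end

With the category imported, the original call site compiles unchanged:
UIImage *iconImage2 = [UIImage imageWithImage:iconImage scaledToSize:CGSizeMake(73.0, 73.0)];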

Make a UIImage from an MKMapView

I want to create a UIImage from an MKMapView. The map is correctly displayed in the view, but the UIImage produced is just a gray image. Here's the relevant snippet:
UIGraphicsBeginImageContext(mapView.bounds.size);
[mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Anyone know how to make a UIImage using MapKit?
I am using the same code, tested with iOS SDK 4.1, and it works fine. When the map is already displayed to the user and the user presses the button, this action is called:
UIImage *image = [mapView renderToImage];
And here is the wrapper function, implemented as a UIView category:
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
So, the problem is not in that code part.
On iOS 7 there is a new MapKit API for this called MKMapSnapshotter, so you don't need to create a map view, load the tiles, and capture a graphics context yourself.
Take a look at https://developer.apple.com/library/ios/documentation/MapKit/Reference/MKMapSnapshotter_class/Reference/Reference.html
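A minimal MKMapSnapshotter sketch (iOS 7+), assuming an existing mapView to copy the region and size from:

#import <MapKit/MapKit.h>

MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.size = mapView.frame.size;
options.scale = [UIScreen mainScreen].scale;

MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"Snapshot failed: %@", error);
        return;
    }
    UIImage *mapImage = snapshot.image; // rendered off-screen, no visible map view needed
}];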
Here is the improved function for Retina displays:
@implementation UIView (Ext)
- (UIImage *)renderToImage
{
    // IMPORTANT: using weak link on UIKit
    if (UIGraphicsBeginImageContextWithOptions != NULL)
    {
        // A scale of 0.0 uses the device's main screen scale (Retina-aware)
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
Hey Loren, there are multiple layers in the mapView. I think the first one is the map and the second one is the Google layer. They might have changed something in MapKit after 3.1. You can try:
[[[mapView.layer sublayers] objectAtIndex:1] renderInContext:UIGraphicsGetCurrentContext()];
You can also try
CGRect rect = [mapView bounds];
CGImageRef mapImage = [mapView createSnapshotWithRect:rect];
Hope this helps.
Note that the map view may not have finished loading, so the image may be gray. Since mapViewDidFinishLoadingMap: will not always be called, you should get the UIImage in mapViewDidFinishRenderingMap:fullyRendered: instead. The code would look like this:
- (UIImage *)renderToImage:(MKMapView *)mapView
{
    UIGraphicsBeginImageContext(mapView.bounds.size);
    [mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
If you are calling this immediately after initializing the map (maybe in viewDidLoad?), you could get a gray image because the map has not finished drawing yet. Try:
- Calling the capture code with performSelector:withObject:afterDelay:, using a short delay (even 0 seconds might work, so it fires right after the current method finishes)
- If you are adding annotations, calling the capture code in the didAddAnnotationViews delegate method
Edit:
On the simulator, using performSelector, a zero delay works. On the device, a longer delay is required (about 5 seconds).
However, if you add annotations (and you capture in the didAddAnnotationViews method), it works immediately on both the simulator and device.
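A sketch of both suggestions together (renderToImage: is the helper from earlier in this thread; the 5-second figure is the device-side delay mentioned above):

- (void)viewDidLoad {
    [super viewDidLoad];
    // Give the map time to draw before capturing (about 5s on device)
    [self performSelector:@selector(captureMap) withObject:nil afterDelay:5.0];
}

- (void)captureMap {
    UIImage *mapImage = [self renderToImage:self.mapView];
    // ... use mapImage
}

// Or, when adding annotations, capture as soon as their views are in place
- (void)mapView:(MKMapView *)mapView didAddAnnotationViews:(NSArray *)views {
    UIImage *mapImage = [self renderToImage:mapView];
    // ... use mapImage
}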
