Remove status bar strip from screen capture - iOS

When I take a screenshot programmatically with the code below, it leaves a white strip where the status bar should be. I know the status bar itself cannot be captured, but I want the blank white strip cropped out.
- (UIImage *)captureView:(UIView *)view
{
    CALayer *layer = [[UIApplication sharedApplication] keyWindow].layer;
    CGFloat scale = [UIScreen mainScreen].scale;
    UIGraphicsBeginImageContextWithOptions(layer.frame.size, NO, scale);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

I am duplicating my answer from another question (https://stackoverflow.com/a/16067463/837244); it should solve your problem:
I wrote the small class below a while ago; you can make use of it. The second method takes a screenshot of the whole screen (it comes from Apple's guide, so it is safe to rely on), and the first method, which I added, crops a rect out of that screenshot and handles different scales (Retina or regular). Hope it helps.
#import "ScreenshotTaker.h"
#import <QuartzCore/QuartzCore.h>
#implementation ScreenshotTaker
+(UIImage *) captureRectOfScreen:(CGRect) rect
{
UIImage *wholeScreen = [ScreenshotTaker screenshot];
//Add status bar height
rect.origin.y += UIInterfaceOrientationIsLandscape([UIApplication sharedApplication].statusBarOrientation) ? [[UIApplication sharedApplication] statusBarFrame].size.width : [[UIApplication sharedApplication] statusBarFrame].size.height;
//NSLog(#"%#",NSStringFromCGSize([wholeScreen size]));
CGFloat scale = wholeScreen.scale;
rect.origin.x *= scale;
rect.origin.y *= scale;
rect.size.width *= scale;
rect.size.height *= scale;
    CGImageRef croppedCGImage = CGImageCreateWithImageInRect([wholeScreen CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedCGImage
                                           scale:wholeScreen.scale
                                     orientation:wholeScreen.imageOrientation];
    CGImageRelease(croppedCGImage); // CGImageCreateWithImageInRect returns a +1 reference, so release it
//NSLog(#"Whole Screen Capt :%# Scale: %f",NSStringFromCGSize([wholeScreen size]), wholeScreen.scale);
//NSLog(#"Rect to Crop :%# Cropped :%#",NSStringFromCGRect(rect), NSStringFromCGSize([cropped size]));
return cropped;
}
+ (UIImage *)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}

@end
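To drop the blank status bar strip in your case, you could then crop the capture to your view's area. A rough usage sketch (my addition, assuming self.view fills the area below the status bar):

UIImage *snapshot = [ScreenshotTaker captureRectOfScreen:self.view.bounds];
// captureRectOfScreen: offsets the rect by the status bar height itself, so the strip is excluded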

Related

Screenshot of animated Image

I have an image view whose image changes every few seconds with a curl animation. I want to record a video of this, so I take a screenshot every second and build the video from those frames.
But I am unable to take the screenshot while the image is animating; at that moment it returns a screenshot in which the image is static, not mid-animation.
I am using the code below to take the screenshot:
{
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[[window layer] presentationLayer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
Does anyone have an idea about this?
Thanks in advance.
-Austin
I think you should be using:
[[[[[window rootViewController] view] layer] presentationLayer] renderInContext:context];
in order to capture the current state of the animation, not the origin or destination states.
According to your code, you seem to be snapshotting the whole screen, so just try this:
[[UIScreen mainScreen] snapshotViewAfterScreenUpdates:NO]
Otherwise, snapshot only the animated view:
[yourAnimatedView snapshotViewAfterScreenUpdates:NO]
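If you need UIImage frames for the video rather than snapshot views, here is a minimal sketch (the method name is my assumption, not from the question): render the animated view's presentation layer on each timer tick, so the captured frame reflects the in-flight animation state. Note that some transition effects (such as page curl) are composited by Core Animation and may still not appear in renderInContext:.

- (UIImage *)frameOfAnimatedView:(UIView *)animatedView
{
    // The presentation layer holds the in-flight animated values; fall back to the model layer if there is none
    CALayer *layerToRender = [animatedView.layer presentationLayer] ?: animatedView.layer;
    UIGraphicsBeginImageContextWithOptions(animatedView.bounds.size, NO, 0);
    [layerToRender renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return frame;
}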

How can I remove the blur in screenshots?

I have been testing how to post screenshot images to Facebook. All the images I've taken have a little blur. How can I remove the blur in the screenshots? Below is the method that takes the image.
- (UIImage *)takeASnapshot
{
    UIView *glView = [[CCDirector sharedDirector] view];

    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        UIGraphicsBeginImageContextWithOptions(glView.bounds.size, NO, [UIScreen mainScreen].scale);
    else
        UIGraphicsBeginImageContext(glView.bounds.size);

    [glView drawViewHierarchyInRect:glView.bounds afterScreenUpdates:YES];

    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    NSData *imageData = UIImageJPEGRepresentation(screenShot, 0.60);
    screenShot = [UIImage imageWithData:imageData];
    UIGraphicsEndImageContext();

    return screenShot;
}
Try this.
- (UIImage *)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
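A side note on the original snippet (an observation, not part of the answer above): re-encoding the capture as JPEG at 0.60 quality introduces compression artifacts that read as blur. If you need an NSData payload, PNG is lossless:

NSData *imageData = UIImagePNGRepresentation(screenShot); // instead of UIImageJPEGRepresentation(screenShot, 0.60)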

iOS programmatic screenshot of single pixel

I am trying to get the RGBA colour of a single pixel programmatically on an iOS device. I currently have a method, but it involves taking a screenshot of the whole screen and then finding the RGB colour of the central pixel. Because I want to find the central pixel at every time-step of the game, this causes a large lag in the app's performance while the screenshot is generated. For this reason I wish to capture only a single pixel of the screen, but I cannot find a method for this anywhere online... Here is my current method:
UIImage *screenImage = [self screenshot];
CGPoint pointToCheck = CGPointMake(0.5, 0.5);
UIColor *color = [self colorFromImage:screenImage sampledAtPoint:pointToCheck];

- (UIColor *)colorFromImage:(UIImage *)image sampledAtPoint:(CGPoint)p
{
    CGImageRef cgImage = [image CGImage];
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    const UInt8 *data = CFDataGetBytePtr(bitmapData);

    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    int col = p.x * (width - 1);
    int row = p.y * (height - 1);
    const UInt8 *pixel = data + row * bytesPerRow + col * 4;

    UIColor *returnColor = [UIColor colorWithRed:pixel[0] / 255. green:pixel[1] / 255. blue:pixel[2] / 255. alpha:1.0];
    CFRelease(bitmapData);
    return returnColor;
}
- (UIImage *)screenshot
{
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
Follow the code here to get the color of a single pixel:
iOS -- detect the color of a pixel?
Hope it helps :)
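To address the performance concern specifically, one option is to render only the pixel you care about instead of the whole screen. A minimal sketch (the method name is mine, and the point is expected in key-window coordinates; like renderInContext: elsewhere in this thread, it will not capture OpenGL content):

- (UIColor *)colorOfScreenPixelAtPoint:(CGPoint)point
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // A 1x1 RGBA bitmap backed by the 4 bytes above
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // Shift the context so the requested point lands on our single pixel
    CGContextTranslateCTM(context, -point.x, -point.y);

    // Render only the key window; loop over all windows as above if you also need popovers or alerts
    [[[UIApplication sharedApplication] keyWindow].layer renderInContext:context];
    CGContextRelease(context);

    return [UIColor colorWithRed:pixel[0] / 255.0
                           green:pixel[1] / 255.0
                            blue:pixel[2] / 255.0
                           alpha:pixel[3] / 255.0];
}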

UIPopoverController missing side images in screenshot

I'm trying to take a screenshot of my interface, but the code I'm using misses the status bar and the sides of the UIPopoverController. I hear it may be impossible to include the status bar (is this true?), but the sides of the UIPopoverController are what really concern me. Here is the screenshot code:
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen]) {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        // Render the layer hierarchy to the current context
        [[window layer] renderInContext:context];
        // Restore the context
        CGContextRestoreGState(context);
    }
}

// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// rotate the image based on the interface orientation
double rotation = 0;
switch ([UIApplication sharedApplication].statusBarOrientation) {
    case UIInterfaceOrientationLandscapeLeft:
        rotation = M_PI/2;
        break;
    case UIInterfaceOrientationLandscapeRight:
        rotation = -M_PI/2;
        break;
    case UIInterfaceOrientationPortraitUpsideDown:
        rotation = M_PI;
        break;
    default:
        break;
}

// calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];
CGAffineTransform t = CGAffineTransformMakeRotation(rotation);
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;

// Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
// Rotate the image context
CGContextRotateCTM(bitmap, rotation);
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), [image CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

return newImage;
Also I've created a sample project which demonstrates this.
The issue is related to how UIPopoverController is presented. It's actually not a UIViewController; its content is presented in its own window. You need to update your drawing code to something more like:
CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        // Render the layer hierarchy to the current context
        [[window layer] renderInContext:context];
        // Restore the context
        CGContextRestoreGState(context);
    }
}
I haven't tested this, so give it a shot and let me know how it goes. :)

Screenshot of UIImageView after rotation

I want to take a screenshot of a UIImageView after rotating it through some angle.
I use the code below, but the screenshot I get is partially cut off at the boundaries, so I am unable to capture the whole view.
Here imgDisplayImage is my UIImageView instance and lastRotation is the total rotation angle.
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = self.imgDisplayImage.frame.size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextTranslateCTM(context, [self.imgDisplayImage center].x, [self.imgDisplayImage center].y);
CGContextRotateCTM(context, lastRotation);
CGContextTranslateCTM(context,
                      -self.imgDisplayImage.frame.size.width * [[self.imgDisplayImage layer] anchorPoint].x,
                      -self.imgDisplayImage.frame.size.height * [[self.imgDisplayImage layer] anchorPoint].y);
[self.imgDisplayImage.layer renderInContext:context];
CGContextRestoreGState(context);

// Retrieve the screenshot image
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);
Try using the following code; here, Mailimage is the final screenshot.
- (UIImage *)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    Mailimage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return Mailimage;
}
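If you only need the rotated image view rather than the whole screen, the clipping in the original snippet likely comes from mixing frame and bounds (and from using the view's centre in its superview's coordinates). A sketch of a fix (untested, variable names taken from the question): size the context to the rotated bounding box and rotate the layer around the box centre.

CGSize boxSize = self.imgDisplayImage.frame.size; // frame is the bounding box of the rotated view
UIGraphicsBeginImageContextWithOptions(boxSize, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();

// Rotate around the centre of the box, then shift back by half the (unrotated) bounds
CGContextTranslateCTM(context, boxSize.width / 2.0, boxSize.height / 2.0);
CGContextRotateCTM(context, lastRotation);
CGContextTranslateCTM(context,
                      -self.imgDisplayImage.bounds.size.width / 2.0,
                      -self.imgDisplayImage.bounds.size.height / 2.0);

[self.imgDisplayImage.layer renderInContext:context];

UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);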
