How to take a screenshot of the entire screen on a jailbroken iOS device?

I need to take a screenshot of the whole screen, including the status bar. I use CARenderServerRenderDisplay to do this; it works correctly on iPad but not on iPhone 6 Plus. In the part of the code marked with asterisks, setting width = screenSize.width * scale and height = screenSize.height * scale causes a crash. If I instead swap them, so that width = screenSize.height * scale and height = screenSize.width * scale, it runs but produces a distorted image (screenshot omitted). I've tried a lot without finding the reason. Does anyone know what's going on? I hope I've described it clearly enough.
- (void)snapshot
{
    CGFloat scale = [UIScreen mainScreen].scale;
    CGSize screenSize = [UIScreen mainScreen].bounds.size;
    //*********** the place where the problem appears
    size_t width = screenSize.height * scale;
    size_t height = screenSize.width * scale;
    //***********
    size_t bytesPerElement = 4;
    OSType pixelFormat = 'ARGB';
    size_t bytesPerRow = bytesPerElement * width;
    size_t surfaceAllocSize = bytesPerRow * height;
    NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], kIOSurfaceIsGlobal,
        [NSNumber numberWithUnsignedLong:bytesPerElement], kIOSurfaceBytesPerElement,
        [NSNumber numberWithUnsignedLong:bytesPerRow], kIOSurfaceBytesPerRow,
        [NSNumber numberWithUnsignedLong:width], kIOSurfaceWidth,
        [NSNumber numberWithUnsignedLong:height], kIOSurfaceHeight,
        [NSNumber numberWithUnsignedInt:pixelFormat], kIOSurfacePixelFormat,
        [NSNumber numberWithUnsignedLong:surfaceAllocSize], kIOSurfaceAllocSize,
        nil];
    IOSurfaceRef destSurf = IOSurfaceCreate((__bridge CFDictionaryRef)properties);
    IOSurfaceLock(destSurf, 0, NULL);
    CARenderServerRenderDisplay(0, CFSTR("LCD"), destSurf, 0, 0);
    IOSurfaceUnlock(destSurf, 0, NULL);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (width * height * 4), NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8,
                                       8 * 4, IOSurfaceGetBytesPerRow(destSurf),
                                       colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                       provider, NULL, YES, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    // Release everything we created (the original code leaked all of these).
    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    CFRelease(destSurf);
}

If you are in a jailbroken environment, you can use the private UIKit function _UICreateScreenUIImage:
OBJC_EXTERN UIImage *_UICreateScreenUIImage(void);
// ...
- (void)takeScreenshot {
    UIImage *screenImage = _UICreateScreenUIImage();
    // do something with your screenshot
}
This function uses CARenderServerRenderDisplay internally for fast rendering of the entire device screen. It replaces the UICreateScreenImage and UIGetScreenImage functions, which were removed from the arm64 version of the iOS 7 SDK.

Related

glReadPixels returns incorrect image for iPhone 6, but works ok for iPad and iPhone 5

I'm using the following code to read an image from an OpenGL ES scene:
- (UIImage *)drawableToCGImage
{
    CGRect myRect = self.bounds;
    NSInteger myDataLength = myRect.size.width * myRect.size.height * 4;
    glFinish();
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    int width = myRect.size.width;
    int height = myRect.size.height;
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer2);
    // Flip vertically: OpenGL rows start at the bottom, CGImage rows at the top.
    for (int y1 = 0; y1 < height; y1++) {
        for (int x1 = 0; x1 < width * 4; x1++) {
            buffer[(height - 1 - y1) * width * 4 + x1] = buffer2[y1 * 4 * width + x1];
        }
    }
    free(buffer2);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * myRect.size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(myRect.size.width, myRect.size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
It works perfectly on iPad and older iPhones, but I noticed that on iPhone 6 (both device and simulator) the output shows monochrome glitches.
What could it be?
Also, here is my code for the CAEAGLLayer properties:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    @YES, kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
Could somebody shed some light on this crazy magic, please?
Thanks to @MaticOblak, I've figured out the problem.
The buffer was filled incorrectly because the float values of the rect size were not rounded to integers (and this bites only for the iPhone 6 dimensions). Integer values should be used instead.
UPDATE: my issue was fixed with the following code:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = viewport[3];

How to go from vImage_Buffer to CVPixelBufferRef

I'm recording live video in my iOS app. On another Stack Overflow page, I found that you can use vImage_Buffer to process the frames.
The problem is that I have no idea how to get from the output vImage_Buffer back to a CVPixelBufferRef.
Here is the code given in the other answer:
NSInteger cropX0 = 100,
          cropY0 = 100,
          cropHeight = 100,
          cropWidth = 100,
          outWidth = 480,
          outHeight = 480;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;
int startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = baseAddress + startpos;
unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};
vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"error %ld", err);
And now I need to convert outBuff to a CVPixelBufferRef.
I assume I need to use vImageBuffer_CopyToCVPixelBuffer, but I'm not sure how.
My first attempts failed with an EXC_BAD_ACCESS:
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst, // BGRX8888
    .colorSpace = NULL, // sRGB
};
vImageBuffer_CopyToCVPixelBuffer(&outBuff,
                                 &format,
                                 pixelBuffer,
                                 NULL,
                                 NULL,
                                 kvImageNoFlags); // Here is the crash!
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Any idea?
CVPixelBufferRef pixbuffer = NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
    [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
    [NSNumber numberWithInt:480], kCVPixelBufferWidthKey,
    [NSNumber numberWithInt:480], kCVPixelBufferHeightKey,
    nil];
CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               480,
                                               480,
                                               kCVPixelFormatType_32BGRA,
                                               outImg,
                                               4 * outWidth, // row bytes of the 480-wide output; the original passed the source buffer's bytesPerRow, which does not match outImg
                                               NULL,
                                               NULL,
                                               (__bridge CFDictionaryRef)options,
                                               &pixbuffer);
You should create a new pixel buffer like the one above.
Just in case: if you want a cropped live video feed in your interface, use an AVPlayerLayer, AVCaptureVideoPreviewLayer, and/or other CALayer subclasses, and use the layer's bounds, frame, and position to map your 100x100-pixel area into the 480x480 area.
Notes on vImage for your question (different circumstances may call for different settings):
CVPixelBufferCreateWithBytes will not work with vImageBuffer_CopyToCVPixelBuffer(), because that function needs to copy the vImage_Buffer data into a "clean" or "empty" CVPixelBuffer.
No need for locking/unlocking here, but make sure you know when to lock and when not to lock pixel buffers.
Your inBuff vImage_Buffer just needs to be initialized from the pixel buffer data, not filled in manually (unless you know how to use CGContexts etc. to init the pixel grid); use vImageBuffer_InitWithCVPixelBuffer().
vImageScale_ARGB8888 scales the entire CVPixelBuffer data to a smaller or larger rectangle; it will not scale a portion/crop area of one buffer into another buffer.
When you use vImageBuffer_CopyToCVPixelBuffer(), the vImageCVImageFormatRef and vImage_CGImageFormat need to be filled out correctly:
CGColorSpaceRef dstColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
vImage_CGImageFormat format = {
    .bitsPerComponent = 16,
    .bitsPerPixel = 64,
    .bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder16Big,
    .colorSpace = dstColorSpace
};
vImageCVImageFormatRef vformat = vImageCVImageFormat_Create(kCVPixelFormatType_4444AYpCbCr16,
                                                            kvImage_ARGBToYpCbCrMatrix_ITU_R_709_2,
                                                            kCVImageBufferChromaLocation_Center,
                                                            format.colorSpace,
                                                            0);
CVPixelBufferRef destBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      480,
                                      480,
                                      kCVPixelFormatType_4444AYpCbCr16,
                                      NULL,
                                      &destBuffer);
NSParameterAssert(status == kCVReturnSuccess && destBuffer != NULL);
vImage_Error err = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &format, destBuffer, vformat, 0, kvImagePrintDiagnosticsToConsole);
NOTE: these are settings for 64-bit ProRes with alpha; adjust for 32-bit formats.

Why does my cv::Mat become grey?

Here is how I implement the AVCaptureVideoDataOutputSampleBufferDelegate:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
CGRect videoRect = CGRectMake(0.0f, 0.0f, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
AVCaptureVideoOrientation videoOrientation = [[[_captureOutput connections] objectAtIndex:0] videoOrientation];
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
cv::Mat my_mat = cv::Mat(videoRect.size.height, videoRect.size.width, NULL, baseaddress, 0); //<<<<----HERE
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Here is how I set the capture format:
OSType format = kCVPixelFormatType_32BGRA;
// Check that the YUV format is available before selecting it (iPhone 3 does not support it)
if ([_captureOutput.availableVideoCVPixelFormatTypes containsObject:
        [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]]) {
    format = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;
}
_captureOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format]
                                                           forKey:(id)kCVPixelBufferPixelFormatTypeKey];
The problem happens because NULL is passed as the 3rd parameter, which is the Mat type. It should be CV_8UC4 for a 4-channel BGRA image:
cv::Mat my_mat = cv::Mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress);
(Note that if the YUV format was selected above, plane 0 is the single-channel luma plane, and CV_8UC1 would be the appropriate type instead.)

Why does AVAssetWriter inflate the video file?

Strange problem. I take frames from a video file (.mov) and write them with AVAssetWriter to another file without any explicit processing. Actually, I just copy each frame from one memory buffer to another and then flush them through the pixel buffer adaptor. Then I take the resulting file, delete the original, put the resulting file in place of the original, and repeat the operation. The interesting thing is that the size of the file constantly grows! Can somebody explain why?
if (adaptor.assetWriterInput.readyForMoreMediaData == YES) {
    CVImageBufferRef cvimgRef = nil;
    CMTime lastTime = CMTimeMake(fcounter++, 30);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);
    CMSampleBufferRef framebuffer = nil;
    CGImageRef frameImg = nil;
    if ([asr status] == AVAssetReaderStatusReading) {
        framebuffer = [asset_reader_output copyNextSampleBuffer];
        frameImg = [self imageFromSampleBuffer:framebuffer withColorSpace:rgbColorSpace];
    }
    if (frameImg && screenshot) {
        //CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(framebuffer);
        CVReturn stat = CVPixelBufferLockBaseAddress(screenshot, 0);
        pxdata = CVPixelBufferGetBaseAddress(screenshot);
        bufferSize = CVPixelBufferGetDataSize(screenshot);
        // Get the number of bytes per row for the pixel buffer.
        bytesPerRow = CVPixelBufferGetBytesPerRow(screenshot);
        // Get the pixel buffer width and height.
        width = CVPixelBufferGetWidth(screenshot);
        height = CVPixelBufferGetHeight(screenshot);
        // Create a Quartz direct-access data provider that uses data we supply.
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, pxdata, bufferSize, NULL);
        CGImageAlphaInfo ai = CGImageGetAlphaInfo(frameImg);
        size_t bpx = CGImageGetBitsPerPixel(frameImg);
        CGColorSpaceRef fclr = CGImageGetColorSpace(frameImg);
        // Create a bitmap image from data supplied by the data provider.
        CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, rgbColorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big, dataProvider, NULL, true, kCGRenderingIntentDefault);
        CGDataProviderRelease(dataProvider);
        stat = CVPixelBufferLockBaseAddress(finalPixelBuffer, 0);
        pxdata = CVPixelBufferGetBaseAddress(finalPixelBuffer);
        bytesPerRow = CVPixelBufferGetBytesPerRow(finalPixelBuffer);
        CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width, imgsize.height, 8, bytesPerRow, rgbColorSpace, kCGImageAlphaNoneSkipLast);
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(frameImg), CGImageGetHeight(frameImg)), frameImg);
        //CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        //CGImageRef myMaskedImage;
        const float myMaskingColors[6] = { 0, 0, 0, 1, 0, 0 };
        CGImageRef myColorMaskedImage = CGImageCreateWithMaskingColors(cgImage, myMaskingColors);
        //CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(myColorMaskedImage), CGImageGetHeight(myColorMaskedImage)), myColorMaskedImage);
        [adaptor appendPixelBuffer:finalPixelBuffer withPresentationTime:presentTime];
    }
}
Well, the mystery seems to be solved: the problem was an inappropriate codec configuration.
This is the set of configuration options I use now, and it seems to do the job:
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:1100000], AVVideoAverageBitRateKey,
    [NSNumber numberWithInt:5], AVVideoMaxKeyFrameIntervalKey,
    nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.width], AVVideoWidthKey,
    [NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.height], AVVideoHeightKey,
    codecSettings, AVVideoCompressionPropertiesKey,
    nil];
AVAssetWriterInput *writerInput = [AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
                   outputSettings:videoSettings];
Now the file size still grows, but at a much slower pace. There is a tradeoff between file size and video quality: reducing the size affects the quality.
