How to take a clear screenshot in iOS

I am working on an iOS application in which taking a clear screenshot is required. I've tried:
func captureView() -> UIImage {
    // hide controls if needed
    let rect: CGRect = self.imageView.frame
    UIGraphicsBeginImageContext(rect.size)
    let context: CGContextRef = UIGraphicsGetCurrentContext()!
    self.view.layer.renderInContext(context)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
But the quality of the image is not good. Please suggest something to improve it.

Change the line
UIGraphicsBeginImageContext(rect.size)
to
UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
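Applied to the function from the question, that change would look like this (a sketch; passing 0.0 as the scale renders at the device's main screen scale, which is what restores sharpness on Retina displays):
func captureView() -> UIImage {
    // hide controls if needed
    let rect: CGRect = self.imageView.frame
    // 0.0 = render at the device's screen scale instead of 1x
    UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
    let context: CGContextRef = UIGraphicsGetCurrentContext()!
    self.view.layer.renderInContext(context)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}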

Add the code block below in a new file, and there you go.
extension UIView {
    // Convert a UIView to a UIImage
    func captureView() -> UIImage {
        // Gradually increase the number for higher resolution.
        let scale: CGFloat = 1.0
        UIGraphicsBeginImageContextWithOptions(bounds.size, opaque, scale)
        layer.renderInContext(UIGraphicsGetCurrentContext()!)
        let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
Now you can call captureView() on UIView itself or any of its subclasses to get a UIImage of the view.
You can change the value of scale passed to UIGraphicsBeginImageContextWithOptions to get a higher-resolution image.
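Usage is then a one-liner on any view (profileImageView below is just an illustrative property name):
// Capture any UIView (or UIView subclass) as a UIImage
let snapshot: UIImage = profileImageView.captureView()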

You can use the private IOSurface framework to do so:
IOMobileFramebufferConnection connect;
kern_return_t result;
IOSurfaceRef screenSurface = NULL;
io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
if (!framebufferService)
    framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
if (!framebufferService)
    framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));
result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &screenSurface);
uint32_t aseed;
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
uint32_t width = IOSurfaceGetWidth(screenSurface);
uint32_t height = IOSurfaceGetHeight(screenSurface);
CFMutableDictionaryRef dict;
int pitch = width*4, size = width*height*4;
int bPE=4;
char pixelFormat[4] = {'A','R','G','B'};
dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch));
CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width));
CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height));
CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));
IOSurfaceRef destSurf = IOSurfaceCreate(dict);
void* outAcc;
IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
CFRelease(outAcc);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (width * height * 4), NULL);
CGImageRef cgImage = CGImageCreate(width, height, 8,
                                   8*4, IOSurfaceGetBytesPerRow(destSurf),
                                   CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                   provider, NULL,
                                   YES, kCGRenderingIntentDefault);
UIImage *img = [UIImage imageWithCGImage:cgImage];
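One caveat with the snippet above: none of the objects it creates are ever released, which matters if you capture repeatedly. A minimal cleanup sketch, assuming the variable names above:
CGImageRelease(cgImage);         // the UIImage holds its own reference to the CGImage
CGDataProviderRelease(provider); // the CGImage retains its data provider
CFRelease(dict);
// Release destSurf only after you are completely done with img:
// the data provider reads the surface's pixels without copying them.
// CFRelease(destSurf);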

Use
UIGraphicsBeginImageContextWithOptions(yourSize, opaque, scale)
instead of
UIGraphicsBeginImageContext(rect.size)

The last parameter of UIGraphicsBeginImageContextWithOptions is the scale. If you only need high resolution, try setting the scale to 1.0.
func captureView() -> UIImage {
    // hide controls if needed
    let rect: CGRect = self.imageView.frame
    UIGraphicsBeginImageContextWithOptions(rect.size, true, 1.0) // add this line
    let context: CGContextRef = UIGraphicsGetCurrentContext()!
    self.view.layer.renderInContext(context)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}

Related

Rotate and filter a video stream in iOS

Hello there. I am rotating and applying image filters with GPUImage on a live video stream.
The task is consuming more time than expected, resulting in overheating of the iPhone.
Can anybody help me optimise my code?
Following is the code I use:
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // Return if the sample buffer is invalid
    if (!CMSampleBufferIsValid(sampleBuffer)) {
        return;
    }
    // Get a CGImage from the sample buffer
    CGImageRef cgImageFromBuffer = [self cgImageFromSampleBuffer:sampleBuffer];
    if (!cgImageFromBuffer || (cgImageFromBuffer == NULL)) {
        return;
    }
    // We need to perform the rotation
    UIImage *rotatedPlainImage = [UIUtils rotateImage:[UIImage imageWithCGImage:cgImageFromBuffer] byDegree:90];
    if (rotatedPlainImage == nil) {
        CFRelease(cgImageFromBuffer);
        return;
    }
    // Apply the image filter using GPUImage on the CGImage
    CGImageRef filteredCGImage = [self.selectedPublishFilter newCGImageByFilteringCGImage:rotatedPlainImage.CGImage];
    // Convert back into a CMSampleBuffer
    CMSampleBufferRef outputBuffer = [self getSampleBufferUsingCIByCGInput:filteredCGImage andProvidedSampleBuffer:sampleBuffer];
    // Pass to Red5Pro's custom encoder for the live stream
    [self.encoder encodeFrame:outputBuffer ofType:r5_media_type_video_custom];
    // Release data if needed
    CFRelease(outputBuffer);
    CFRelease(filteredCGImage);
    CFRelease(cgImageFromBuffer);
}
- (CGImageRef)cgImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!
    return newImage;
}
- (CMSampleBufferRef)getSampleBufferUsingCIByCGInput:(CGImageRef)imageRef andProvidedSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CIImage *nm = [CIImage imageWithCGImage:imageRef];
    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(kCFAllocatorSystemDefault, (size_t)nm.extent.size.width, (size_t)nm.extent.size.height, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    [ciContext render:nm toCVPixelBuffer:pixelBuffer];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CMSampleTimingInfo sampleTime = {
        .duration = CMSampleBufferGetDuration(sampleBuffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
        .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
    };
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
    CMSampleBufferRef oBuf;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
    CVPixelBufferRelease(pixelBuffer);
    CFRelease(videoInfo);
    return oBuf;
}
I ended up using OpenGL ES 2.0 and the Accelerate framework:
the Accelerate framework to rotate the CMSampleBuffer, and
an OpenGL-backed CIContext to make the CIImage render quickly into the CVPixelBuffer.
Now, without a filter, the time is 3-8 milliseconds; with filters it is 7-21 milliseconds.
@implementation ColorsVideoSource {
    CIContext *coreImageContext;
}

- (instancetype)init {
    if ((self = [super init]) != nil) {
        EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        GLKView *glView = [[GLKView alloc] initWithFrame:CGRectMake(0.0, 0.0, 360.0, 480.0) context:glContext];
        coreImageContext = [CIContext contextWithEAGLContext:glView.context];
    }
    return self;
}
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    if (!CMSampleBufferIsValid(sampleBuffer)) {
        return;
    }
    CVPixelBufferRef rotateBuffer = [self correctBufferOrientation:sampleBuffer];
    CGImageRef cgImageFromBuffer = [self cgImageFromImageBuffer:rotateBuffer];
    if (!cgImageFromBuffer || (cgImageFromBuffer == NULL)) {
        return;
    }
    UIImage *rotatedPlainImage = [UIImage imageWithCGImage:cgImageFromBuffer];
    if (rotatedPlainImage == nil) {
        CFRelease(rotateBuffer);
        CFRelease(cgImageFromBuffer);
        return;
    }
    if (_currentFilterType == SWPublisherFilterNone) {
        if (_needPreviewImage) {
            _previewImage = rotatedPlainImage;
        }
        CMSampleTimingInfo sampleTime = {
            .duration = CMSampleBufferGetDuration(sampleBuffer),
            .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
            .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
        };
        CMVideoFormatDescriptionRef videoInfo = NULL;
        CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, rotateBuffer, &videoInfo);
        CMSampleBufferRef oBuf;
        CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, rotateBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
        CFRelease(videoInfo);
        if (!self.pauseEncoding) {
            @try {
                [self.encoder encodeFrame:oBuf ofType:r5_media_type_video_custom];
            } @catch (NSException *exception) {
                NSLog(@"Encoder error: %@", exception);
            }
        }
        CFRelease(oBuf);
    }
    else {
        CGImageRef filteredCGImage = [self.selectedPublishFilter newCGImageByFilteringCGImage:rotatedPlainImage.CGImage];
        if (_needPreviewImage) {
            _previewImage = [UIImage imageWithCGImage:filteredCGImage];
        }
        CMSampleBufferRef outputBuffer = [self getSampleBufferUsingCIByCGInput:filteredCGImage andProvidedSampleBuffer:sampleBuffer];
        if (!self.pauseEncoding) {
            @try {
                [self.encoder encodeFrame:outputBuffer ofType:r5_media_type_video_custom];
            } @catch (NSException *exception) {
                NSLog(@"Encoder error: %@", exception);
            }
        }
        CFRelease(outputBuffer);
        CFRelease(filteredCGImage);
    }
    CFRelease(rotateBuffer);
    CFRelease(cgImageFromBuffer);
}
#pragma mark - Methods Refactored GPUImage - Devanshu
- (CVPixelBufferRef)correctBufferOrientation:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t currSize = bytesPerRow * height * sizeof(unsigned char);
    size_t bytesPerRowOut = 4 * height * sizeof(unsigned char);
    void *srcBuff = CVPixelBufferGetBaseAddress(imageBuffer);
    /* rotationConstant:
     * 0 -- rotate 0 degrees (simply copy the data from src to dest)
     * 1 -- rotate 90 degrees counterclockwise
     * 2 -- rotate 180 degrees
     * 3 -- rotate 270 degrees counterclockwise
     */
    uint8_t rotationConstant = 3;
    unsigned char *dstBuff = (unsigned char *)malloc(currSize);
    vImage_Buffer inbuff = {srcBuff, height, width, bytesPerRow};
    vImage_Buffer outbuff = {dstBuff, width, height, bytesPerRowOut};
    uint8_t bgColor[4] = {0, 0, 0, 0};
    vImage_Error err = vImageRotate90_ARGB8888(&inbuff, &outbuff, rotationConstant, bgColor, 0);
    if (err != kvImageNoError) NSLog(@"%ld", err);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CVPixelBufferRef rotatedBuffer = NULL;
    CVPixelBufferCreateWithBytes(NULL,
                                 height,
                                 width,
                                 kCVPixelFormatType_32BGRA,
                                 outbuff.data,
                                 bytesPerRowOut,
                                 freePixelBufferDataAfterRelease,
                                 NULL,
                                 NULL,
                                 &rotatedBuffer);
    return rotatedBuffer;
}
void freePixelBufferDataAfterRelease(void *releaseRefCon, const void *baseAddress)
{
    // Free the memory we malloced for the vImage rotation
    free((void *)baseAddress);
}
- (CGImageRef)cgImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    return [self cgImageFromImageBuffer:imageBuffer];
}

- (CGImageRef)cgImageFromImageBuffer:(CVImageBufferRef)imageBuffer // Create a CGImageRef from image buffer data
{
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return newImage;
}
- (CMSampleBufferRef)getSampleBufferUsingCIByCGInput:(CGImageRef)imageRef andProvidedSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CIImage *theCoreImage = [CIImage imageWithCGImage:imageRef];
    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(kCFAllocatorSystemDefault, (size_t)theCoreImage.extent.size.width, (size_t)theCoreImage.extent.size.height, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    [coreImageContext render:theCoreImage toCVPixelBuffer:pixelBuffer];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CMSampleTimingInfo sampleTime = {
        .duration = CMSampleBufferGetDuration(sampleBuffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
        .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
    };
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
    CMSampleBufferRef oBuf;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
    CVPixelBufferRelease(pixelBuffer);
    CFRelease(videoInfo);
    return oBuf;
}
NSLog(@"start rotate");
CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CFAbsoluteTime t1 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur to ciimage: %@", @(t1-t0));
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *newImage = [ciimage imageByApplyingCGOrientation:kCGImagePropertyOrientationRight];
CFAbsoluteTime t2 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur rotate ciimage: %@", @(t2-t1));
CVPixelBufferRef newPixelBuffer = nil;
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
CVPixelBufferCreate(kCFAllocatorDefault, height, width, kCVPixelFormatType_32BGRA, nil, &newPixelBuffer);
CFAbsoluteTime t3 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur alloc pixel: %@", @(t3-t2));
[_ciContext render:newImage toCVPixelBuffer:newPixelBuffer];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CFAbsoluteTime t4 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur render pixel: %@", @(t4-t3));
//
CMSampleTimingInfo sampleTimingInfo = {
    .duration = CMSampleBufferGetDuration(sampleBuffer),
    .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
    .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
//
CMVideoFormatDescriptionRef videoInfo = nil;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, newPixelBuffer, &videoInfo);
CMSampleBufferRef newSampleBuffer = nil;
CMSampleBufferCreateForImageBuffer(kCFAllocatorMalloc, newPixelBuffer, true, nil, nil, videoInfo, &sampleTimingInfo, &newSampleBuffer);
CFAbsoluteTime t5 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur create CMSample: %@", @(t5-t4));
// release
CVPixelBufferRelease(newPixelBuffer);
CFAbsoluteTime t6 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur end rotate: %@", @(t6-t0));
return newSampleBuffer;

About recording video with IOSurface

A feature needs to be implemented: recording the screen as video in the background.
I use an NSTimer to repeatedly take screenshots via IOSurface. The following is the code:
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:0.1f target:self selector:@selector(ScreenShot) userInfo:nil repeats:YES];
- (void)ScreenShot {
    io_service_t framebufferService;
    framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
    if (!framebufferService) {
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
    }
    if (!framebufferService) {
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));
    }
    IOMobileFramebufferConnection connect;
    kern_return_t result;
    IOSurfaceRef ScreenSurface;
    result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
    result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &ScreenSurface);
    uint32_t aseed;
    uint32_t Width;
    uint32_t Height;
    int Pitch;
    int size;
    int bPE;
    CFMutableDictionaryRef dict;
    IOSurfaceLock(ScreenSurface, kIOSurfaceLockReadOnly, &aseed);
    Width = IOSurfaceGetWidth(ScreenSurface);
    Height = IOSurfaceGetHeight(ScreenSurface);
    Pitch = Width * 4;
    size = Width * Height * 4;
    bPE = 4;
    char pixelFormat[4] = {'A','R','G','B'};
    dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
    CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &Pitch));
    CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
    CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &Width));
    CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &Height));
    CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
    CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));
    CGDataProviderRef provider;
    IOSurfaceRef destSurf;
    void *outAcc;
    CGImageRef cgImage;
    destSurf = IOSurfaceCreate(dict);
    IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
    IOSurfaceAcceleratorTransferSurface(outAcc, ScreenSurface, destSurf, dict, NULL);
    IOSurfaceUnlock(ScreenSurface, kIOSurfaceLockReadOnly, &aseed);
    CFRelease(outAcc);
    provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (Width * Height * 4), NULL);
    cgImage = CGImageCreate(Width, Height, 8, 8*4, IOSurfaceGetBytesPerRow(destSurf), CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little, provider, NULL, YES, kCGRenderingIntentDefault);
}
The screenshots succeed.
Now the problems are:
after running for about a minute, it prompts "Received memory warning";
when sliding the screen, the screenshot method is delayed.
Or is there some other way to record the device's screen as video in the background?

Using IOSurface to take screenshots in iOS 7 in games

I'm trying to take screenshots from the background using IOSurface. Here's my code:
IOMobileFramebufferConnection connect;
kern_return_t result;
IOSurfaceRef screenSurface = NULL;
io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
if (!framebufferService)
    framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
if (!framebufferService)
    framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));
result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, (CoreSurfaceBufferRef*)&screenSurface);
uint32_t aseed;
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
uint32_t width = IOSurfaceGetWidth(screenSurface);
uint32_t height = IOSurfaceGetHeight(screenSurface);
CFMutableDictionaryRef dict;
int pitch = width*4, size = width*height*4;
int bPE=4;
char pixelFormat[4] = {'A','R','G','B'};
dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch));
CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width));
CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height));
CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));
IOSurfaceRef destSurf = IOSurfaceCreate(dict);
IOSurfaceAcceleratorRef outAcc;
IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
CFRelease(outAcc);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (width * height * 4), NULL);
CGImageRef cgImage = CGImageCreate(width, height, 8,
                                   8*4, IOSurfaceGetBytesPerRow(destSurf),
                                   CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                   provider, NULL,
                                   YES, kCGRenderingIntentDefault);
UIImage *img = [UIImage imageWithCGImage:cgImage];
saveImage(img);
I combined the code from "IOSurface - IOS Private API - Capture screenshot in background" and "IOSurfaces - Artefacts in video and unable to grab video surfaces" with little modification. The code works well on the home screen and in most apps, but it returns a black screen in games. Any ideas how to fix that? Thanks.

IOSurface - IOS Private API - Capture screenshot in background

I want to capture a screenshot from a background service. A private API is just fine as I don't need to submit to the App Store. I have already tried UIGetScreenImage and it does not work from a background app.
I am using the following code, which I got from SO. IOSurfaceCreate returns NULL for me. Any help appreciated.
CFMutableDictionaryRef dict;
IOSurfaceRef screenSurface = NULL;
char pixelFormat[4] = {'A','R','G','B'};
dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
uint32_t width ;
uint32_t height;
void *pitch;
void *bPE;
void *size;
CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch));
CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width));
CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height));
CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));
IOSurfaceRef destSurf = IOSurfaceCreate(dict);
IOSurfaceAcceleratorRef outAcc;
IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
CFDictionaryRef ed = (__bridge CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys: nil];
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, ed, NULL);
uint32_t aseed;
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (width*height*4), NULL);
CGImageRef cgImage=CGImageCreate(width, height, 8, 8*4, IOSurfaceGetBytesPerRow(destSurf), CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little, provider, NULL, YES, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage: cgImage];
UIImageWriteToSavedPhotosAlbum(image, self_object, nil, nil);
CGImageRelease(cgImage);
CFRelease(destSurf);
After combining code from two places, I was able to save a photo to the photo album (with my application in the background):
IOSurfaces - Artefacts in video and unable to grab video surfaces
https://github.com/tomcool420/SMFramework/blob/master/SMFScreenCapture.m
Please try it out and let me know if it works for you.
- (void)SavePhoto
{
    IOMobileFramebufferConnection connect;
    kern_return_t result;
    IOSurfaceRef screenSurface = NULL;
    io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));
    result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
    result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &screenSurface);
    uint32_t aseed;
    IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    uint32_t width = IOSurfaceGetWidth(screenSurface);
    uint32_t height = IOSurfaceGetHeight(screenSurface);
    //int m_width = 320;
    //int m_height = 480;
    CFMutableDictionaryRef dict;
    int pitch = width*4, size = width*height*4;
    int bPE = 4;
    char pixelFormat[4] = {'A','R','G','B'};
    dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
    CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch));
    CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
    CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width));
    CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height));
    CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
    CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));
    IOSurfaceRef destSurf = IOSurfaceCreate(dict);
    void *outAcc;
    IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
    IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
    IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    CFRelease(outAcc);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, IOSurfaceGetBaseAddress(destSurf), (width * height * 4), NULL);
    CGImageRef cgImage = CGImageCreate(width, height, 8,
                                       8*4, IOSurfaceGetBytesPerRow(destSurf),
                                       CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                       provider, NULL,
                                       YES, kCGRenderingIntentDefault);
    UIImage *img = [UIImage imageWithCGImage:cgImage];
    UIImageWriteToSavedPhotosAlbum(img, self, nil, nil);
    // MOST RELEVANT PART OF CODE
    //CVPixelBufferCreateWithBytes(NULL, width, height, kCVPixelFormatType_32BGRA, IOSurfaceGetBaseAddress(destSurf), IOSurfaceGetBytesPerRow(destSurf), NULL, NULL, NULL, &sampleBuffer);
}

Memory leak in CoreImage/CoreVideo

I'm building an iOS app that does some basic detection.
I get the raw frames from AVCaptureVideoDataOutput, convert the CMSampleBufferRef to a UIImage, resize the UIImage, and then convert it to a CVPixelBufferRef.
As far as I can tell with Instruments, the leak is in the last part, where I convert the CGImage to a CVPixelBufferRef.
Here's the code I use:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    videof = [[ASMotionDetect alloc] initWithSampleImage:[self resizeSampleBuffer:sampleBuffer]];
    // ASMotionDetect is my class for detection and I use videof to calculate the movement
}

- (UIImage *)resizeSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    UIImage *img;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!
    img = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    newContext = nil;
    img = [self resizeImageToSquare:img];
    return img;
}

- (UIImage *)resizeImageToSquare:(UIImage *)_temp {
    UIImage *img;
    int w = _temp.size.width;
    int h = _temp.size.height;
    CGRect rect;
    if (w > h) {
        rect = CGRectMake((w-h)/2, 0, h, h);
    } else {
        rect = CGRectMake(0, (h-w)/2, w, w);
    }
    //
    img = [self crop:_temp inRect:rect];
    return img;
}

- (UIImage *)crop:(UIImage *)image inRect:(CGRect)rect {
    UIImage *sourceImage = image;
    CGRect selectionRect = rect;
    CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size);
    CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect);
    UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef scale:1.0 orientation:image.imageOrientation];
    CGImageRelease(resultImageRef);
    return resultImage;
}
And in my detection class I have:
- (id)initWithSampleImage:(UIImage *)sampleImage {
    if ((self = [super init])) {
        _frame = new CVMatOpaque();
        _histograms = new CVMatNDOpaque[kGridSize * kGridSize];
        [self extractFrameFromImage:sampleImage];
    }
    return self;
}

- (void)extractFrameFromImage:(UIImage *)sampleImage {
    CGImageRef imageRef = [sampleImage CGImage];
    CVImageBufferRef imageBuffer = [self pixelBufferFromCGImage:imageRef];
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Collect some information required to extract the frame.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    // Extract the frame, convert it to grayscale, and shove it in _frame.
    cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow);
    cv::cvtColor(frame, frame, CV_BGR2GRAY);
    _frame->matrix = frame;
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGImageRelease(imageRef);
}

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pxbuffer = NULL;
    int width = CGImageGetWidth(image)*2;
    int height = CGImageGetHeight(image)*2;
    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, [NSNumber numberWithInt:width], kCVPixelBufferWidthKey, [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil];
    CVPixelBufferPoolRef pixelBufferPool;
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                 height, 8, width*4, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    /* here is the problem: */
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
With Instruments I found out that the problem is with the CVPixelBufferRef allocations, but I don't understand why - can someone see the problem?
Thank you.
In -pixelBufferFromCGImage:, neither pxbuffer nor pixelBufferPool is released. That makes sense for pxbuffer, as it is the return value, but not for pixelBufferPool - you create and leak one per call of the method.
A quick fix would be to:
release pixelBufferPool in -pixelBufferFromCGImage:
release pxbuffer (the return value of -pixelBufferFromCGImage:) in -extractFrameFromImage:
You should also rename -pixelBufferFromCGImage: to -createPixelBufferFromCGImage: to make it clear that it returns a retained object.
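A sketch of those two releases, keeping the rest of both methods exactly as posted above (only the changed lines are shown; the elided parts are marked with comments):
- (CVPixelBufferRef)createPixelBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pxbuffer = NULL;
    CVPixelBufferPoolRef pixelBufferPool;
    // ... create the pool, create pxbuffer from it, and draw the image as before ...
    CVPixelBufferPoolRelease(pixelBufferPool); // release the pool created in this call
    return pxbuffer;                           // still retained; the caller must release it
}

- (void)extractFrameFromImage:(UIImage *)sampleImage {
    CVImageBufferRef imageBuffer = [self createPixelBufferFromCGImage:[sampleImage CGImage]];
    // ... lock the buffer, wrap it in a cv::Mat, convert to grayscale as before ...
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CVPixelBufferRelease(imageBuffer); // balance the +1 returned by createPixelBufferFromCGImage:
}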
