glReadPixels() returns black image - iOS

I'm using AVCaptureSession for a live camera feed and then rendering some images on the camera overlay view. I'm not using any EAGLView, just overlaying images on the AVCaptureSession's preview layer. I want to take a screenshot of the live camera feed together with the overlay image. I searched some links and finally ended up with glReadPixels(), but when I implement this code it returns a black image. I've only added the OpenGLES.framework library and imported it.
- (void)viewDidLoad
{
[super viewDidLoad];
[self setCaptureSession:[[AVCaptureSession alloc] init]];
[self addVideoInputFrontCamera:NO]; // set to YES for Front Camera, No for Back camera
[self addStillImageOutput];
[self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] ];
[[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CGRect layerRect = [[[self view] layer] bounds];
[[self previewLayer]setBounds:layerRect];
[[self previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect),CGRectGetMidY(layerRect))];
[[[self view] layer] addSublayer:[self previewLayer]];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(saveImageToPhotoAlbum) name:kImageCapturedSuccessfully object:nil];
[[self captureSession] startRunning];
UIImageView *dot =[[UIImageView alloc] initWithFrame:CGRectMake(50,50,200,200)];
dot.image=[UIImage imageNamed:@"draw.png"];
[self.view addSubview:dot];
}
Capturing the live camera feed with overlay content using glReadPixels():
- (UIImage*) glToUIImage
{
CGFloat scale = [[UIScreen mainScreen] scale];
// CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
CGRect s = CGRectMake(0, 0, 768.0f * scale, 1024.0f * scale);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context1, transform);
CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context1);
UIImage *outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context1);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
NSLog(#"Screenshot size: %d, %d", (int)[outputImage size].width, (int)[outputImage size].height);
return outputImage;
}
-(void)screenshot:(id)sender{
[self glToUIImage];
}
But it returns a black image.

glReadPixels() won't work with an AV Foundation preview layer. There's no OpenGL ES context to capture pixels from, and even if there was, you'd need to capture from it before the scene was presented to the display.
If what you're trying to do is to capture an image overlaid on live video from the camera, my GPUImage framework could handle that for you. All you'd need to do would be to set up a GPUImageVideoCamera, a GPUImagePicture instance for what you needed to overlay, and a blend filter of some sort. You would then feed the output to a GPUImageView for display, and be able to capture still images from the blend filter at any point. The framework handles the heavy lifting for you with all of this.
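For illustration, here is a rough sketch of that setup. The class and method names come from GPUImage's public API as I know it (useNextFrameForImageCapture / imageFromCurrentFramebuffer are from its more recent releases, so check the headers of the version you use), and videoCamera, overlayPicture, blendFilter and filterView are placeholder ivars/properties you would hold yourself, so treat this as a starting point rather than a drop-in solution:
// Rough sketch only; see the note above about API versions.
#import "GPUImage.h"   // or <GPUImage/GPUImage.h> depending on how you integrate it

- (void)setUpBlendedPreview
{
    // Live camera source
    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetHigh
                                                       cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    // Static overlay image
    overlayPicture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"draw.png"]];

    // Blend camera + overlay, then display the result in a GPUImageView
    // that is already in the view hierarchy
    blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    [videoCamera addTarget:blendFilter];
    [overlayPicture addTarget:blendFilter];
    [blendFilter addTarget:filterView];

    [videoCamera startCameraCapture];
    [overlayPicture processImage];
}

- (UIImage *)captureBlendedStill
{
    // Ask the blend filter to keep its next rendered frame and return it as a UIImage
    [blendFilter useNextFrameForImageCapture];
    return [blendFilter imageFromCurrentFramebuffer];
}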

Related

Landscape screenshot returns portrait mode screen

I'm taking an OpenGL screenshot. If the camera is in portrait mode when I take the snapshot, it returns a portrait-mode image. But if I rotate the camera from portrait into landscape mode and then take a screenshot, it still returns a portrait-mode screenshot. My camera view shows the live stream full screen, yet the screenshot is saved as 1024x768.
ImageTargetsEAGLView.mm:
- (BOOL)presentFramebuffer
{
if (_takePhotoFlag1)
{
UIImage *snapshot = [self glToUIImage1];
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil);
NSLog(@"Screenshot size: %d, %d", (int)snapshot.size.width, (int)snapshot.size.height);
_takePhotoFlag1 = NO;
}
// setFramebuffer must have been called before presentFramebuffer, therefore
// we know the context is valid and has been set for this (render) thread
// Bind the colour render buffer and present it
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
return [context presentRenderbuffer:GL_RENDERBUFFER];
}
- (UIImage*) glToUIImage1
{
UIImage *outputImage = nil;
UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
if (UIInterfaceOrientationIsLandscape(orientation))
{
NSLog(#"landscape screen");
CGRect screenBounds = [[UIScreen mainScreen] bounds];
// CGFloat scale = [[UIScreen mainScreen] scale];
// CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
CGRect s = CGRectMake(0, 0, 1024 , 768);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context1, transform);
CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context1);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context1);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
}else{
NSLog(#"portrait screen");
// CGFloat scale = [[UIScreen mainScreen] scale];
// CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
CGRect s = CGRectMake(0, 0, 768, 1024);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context1, transform);
CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context1);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context1);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
}
return outputImage;
}
I don't get the same issue with your code that you do, but I do get artifacting, etc. I would suggest using the method found here: http://www.unagames.com/blog/daniele/2011/10/opengl-es-screenshots-ios
It worked perfectly for me as a drop-in replacement for yours (if you are using a GLKViewController, you just give it self.view as the EAGLView), and it actually pulls the correct size for the screenshot from the OpenGL ES context, so you know it's always correct.
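The key point of that approach is to ask OpenGL ES for the backing store's real size instead of hard-coding 768x1024 or 1024x768. A minimal sketch of that idea, assuming your colour renderbuffer is the colorRenderbuffer used in the presentFramebuffer method above:
// Minimal sketch: query the bound colour renderbuffer for its real size,
// then read exactly that many pixels.
GLint backingWidth = 0, backingHeight = 0;
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);   // the colour renderbuffer you draw into
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

uint8_t *buffer = (uint8_t *)malloc((size_t)backingWidth * backingHeight * 4);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// ...then build the CGImage from `buffer` exactly as in glToUIImage1, using
// backingWidth/backingHeight instead of the hard-coded sizes, and free(buffer).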

Save screenshot with overlay image OpenGLES

I'm using Vuforia for an Augmented Reality application. When I detect an image I can display/render a 3D object and a UIImageView, and I can take a screenshot of the 3D object, but I can't save the normal image. I'm just displaying a normal UIImageView holding images. Do I need to render a 2D image instead of a normal UIImageView?
Render 3D:
- (void)renderFrameQCAR
{
[self setFramebuffer];
// Clear colour and depth buffers
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Render video background and retrieve tracking state
QCAR::State state = QCAR::Renderer::getInstance().begin();
QCAR::Renderer::getInstance().drawVideoBackground();
glEnable(GL_DEPTH_TEST);
// We must detect if background reflection is active and adjust the culling direction.
// If the reflection is active, this means the pose matrix has been reflected as well,
// therefore standard counter clockwise face culling will result in "inside out" models.
if (offTargetTrackingEnabled) {
glDisable(GL_CULL_FACE);
} else {
glEnable(GL_CULL_FACE);
}
glCullFace(GL_BACK);
if(QCAR::Renderer::getInstance().getVideoBackgroundConfig().mReflection == QCAR::VIDEO_BACKGROUND_REFLECTION_ON)
glFrontFace(GL_CW); //Front camera
else
glFrontFace(GL_CCW); //Back camera
for (int i = 0; i < state.getNumTrackableResults(); ++i) {
// Get the trackable
// _numResults = state.getNumTrackableResults();
[self performSelectorOnMainThread:@selector(DisplayPhotoButton) withObject:nil waitUntilDone:YES];
const QCAR::TrackableResult* result = state.getTrackableResult(i);
const QCAR::Trackable& trackable = result->getTrackable();
//const QCAR::Trackable& trackable = result->getTrackable();
QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());
// OpenGL 2
QCAR::Matrix44F modelViewProjection;
if (offTargetTrackingEnabled) {
SampleApplicationUtils::rotatePoseMatrix(90, 1, 0, 0,&modelViewMatrix.data[0]);
SampleApplicationUtils::scalePoseMatrix(kObjectScaleOffTargetTracking, kObjectScaleOffTargetTracking, kObjectScaleOffTargetTracking, &modelViewMatrix.data[0]);
} else {
SampleApplicationUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScaleNormal, &modelViewMatrix.data[0]);
SampleApplicationUtils::scalePoseMatrix(kObjectScaleNormal, kObjectScaleNormal, kObjectScaleNormal, &modelViewMatrix.data[0]);
}
SampleApplicationUtils::multiplyMatrix(&vapp.projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
glUseProgram(shaderProgramID);
if (offTargetTrackingEnabled) {
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)buildingModel.vertices);
glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)buildingModel.normals);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)buildingModel.texCoords);
} else {
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)teapotVertices);
glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)teapotNormals);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)teapotTexCoords);
}
glEnableVertexAttribArray(vertexHandle);
glEnableVertexAttribArray(normalHandle);
glEnableVertexAttribArray(textureCoordHandle);
// Choose the texture based on the target name
int targetIndex = 0; // "stones"
if (!strcmp(trackable.getName(), "chips"))
targetIndex = 1;
else if (!strcmp(trackable.getName(), "tarmac"))
targetIndex = 2;
glActiveTexture(GL_TEXTURE0);
if (offTargetTrackingEnabled) {
glBindTexture(GL_TEXTURE_2D, augmentationTexture[3].textureID);
} else {
glBindTexture(GL_TEXTURE_2D, augmentationTexture[targetIndex].textureID);
}
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&modelViewProjection.data[0]);
glUniform1i(texSampler2DHandle, 0 /*GL_TEXTURE0*/);
if (offTargetTrackingEnabled) {
glDrawArrays(GL_TRIANGLES, 0, buildingModel.numVertices);
} else {
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT, (const GLvoid*)teapotIndices);
}
SampleApplicationUtils::checkGlError("EAGLView renderFrameQCAR");
}
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glDisableVertexAttribArray(vertexHandle);
glDisableVertexAttribArray(normalHandle);
glDisableVertexAttribArray(textureCoordHandle);
QCAR::Renderer::getInstance().end();
[self presentFramebuffer];
}
Display UIImageView when cameraview open:
- (id)initWithFrame:(CGRect)frame appSession:(SampleApplicationSession *) app
{
self = [super initWithFrame:frame];
if (self) {
vapp = app;
// takePhotoFlag = NO;
// [self DisplayPhotoButton];
// Enable retina mode if available on this device
if (YES == [vapp isRetinaDisplay]) {
[self setContentScaleFactor:2.0f];
}
// Load the augmentation textures
for (int i = 0; i < NUM_AUGMENTATION_TEXTURES; ++i) {
augmentationTexture[i] = [[Texture alloc] initWithImageFile:[NSString stringWithCString:textureFilenames[i] encoding:NSASCIIStringEncoding]];
}
// Create the OpenGL ES context
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
// The EAGLContext must be set for each thread that wishes to use it.
// Set it the first time this method is called (on the main thread)
if (context != [EAGLContext currentContext]) {
[EAGLContext setCurrentContext:context];
}
// Generate the OpenGL ES texture and upload the texture data for use
// when rendering the augmentation
for (int i = 0; i < NUM_AUGMENTATION_TEXTURES; ++i) {
GLuint textureID;
glGenTextures(1, &textureID);
[augmentationTexture[i] setTextureID:textureID];
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, [augmentationTexture[i] width], [augmentationTexture[i] height], 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)[augmentationTexture[i] pngData]);
}
offTargetTrackingEnabled = NO;
[self loadBuildingsModel];
[self initShaders];
_takePhotoFlag = NO;
[self DisplayPhotoButton];
}
return self;
}
- (void)DisplayPhotoButton
{
UIImage *closeButtonImage = [UIImage imageNamed:@"back.png"];
// UIImage *closeButtonTappedImage = [UIImage imageNamed:@"button_close_pressed.png"];
CGRect aRect = CGRectMake(20,20,
closeButtonImage.size.width,
closeButtonImage.size.height);
photo = [UIButton buttonWithType:UIButtonTypeCustom];
photo.frame = aRect;
photo.userInteractionEnabled=YES;
[photo setImage:closeButtonImage forState:UIControlStateNormal];
// photo.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin;
[photo addTarget:self action:@selector(takePhoto) forControlEvents:UIControlEventTouchUpInside];
[self addSubview:photo];
UIButton *arrowButton = [UIButton buttonWithType:UIButtonTypeCustom];
[arrowButton setImage:[UIImage imageNamed:@"back_btn.png"] forState:UIControlStateNormal];
// [overlayButton setFrame:CGRectMake(80, 420, 60, 30)];
[arrowButton setFrame:CGRectMake(100, 10, 40, 40)];
[arrowButton addTarget:self action:@selector(showActionSheet:forEvent:) forControlEvents:UIControlEventTouchUpInside];
[self addSubview:arrowButton];
}
OpenGL screenshot:
- (UIImage*) glToUIImage
{
UIImage *outputImage = nil;
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context1, transform);
CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context1);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context1);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
return outputImage;
}
Save from present frame buffer:
- (BOOL)presentFramebuffer
{
if (_takePhotoFlag)
{
[self glToUIImage];
_takePhotoFlag = NO;
}
// setFramebuffer must have been called before presentFramebuffer, therefore
// we know the context is valid and has been set for this (render) thread
// Bind the colour render buffer and present it
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
return [context presentRenderbuffer:GL_RENDERBUFFER];
}
You can either take an OpenGL screenshot or a UI screenshot. To combine the two, I suggest you capture both images. This will sound silly, but it is probably the fastest and most robust way to do it:
Take a screenshot from openGL (as you already do)
Create an image view from that screenshot
Insert that image view on your main view
Take a UI screenshot* of the main view
Remove the image view
*By UI screenshot I mean something like this:
+ (UIImage *)imageFromView:(UIView *)view {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, .0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
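Put together, the capture step might look roughly like this. This is a sketch only: it assumes it lives on the view controller that owns both the GL view and the overlay views, that the imageFromView: helper above is on the same class, and that glScreenshot stands in for whatever glReadPixels-based method you already use (e.g. your glToUIImage):
- (UIImage *)combinedScreenshot
{
    // 1. Capture the OpenGL content (glScreenshot is a stand-in for your
    //    existing glReadPixels-based method)
    UIImage *glImage = [self glScreenshot];

    // 2. Temporarily put it behind the UIKit overlay content
    UIImageView *glImageView = [[UIImageView alloc] initWithImage:glImage];
    glImageView.frame = self.view.bounds;
    [self.view insertSubview:glImageView atIndex:0];

    // 3. Capture the whole view hierarchy (GL snapshot + overlays)
    UIImage *combined = [[self class] imageFromView:self.view];

    // 4. Remove the temporary image view again
    [glImageView removeFromSuperview];
    return combined;
}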
If you see a black background instead of the background image, there is probably an issue generating the CGImage somewhere in the pipeline, where the alpha channel is being skipped or premultiplied incorrectly (a very common mistake).
EDIT: Getting image from read pixels RGBA data:
This is what I used for getting a UIImage from raw RGBA data. Note that this procedure does not handle orientation, but you can modify it to take the orientation as an argument as well and then use imageWithCGImage:scale:orientation:.
UIImage *imageFromRawData(uint8_t *data, int width, int height) {
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,data,width*height*4,NULL);
CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width*4, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaLast, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
return newImage;
}
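For illustration, a sketch of that orientation-aware variant might look like this (the caller passes in the screen scale and a UIImageOrientation matching the interface orientation at capture time; this version also releases the intermediate Core Graphics objects):
// Sketch of the orientation-aware variant mentioned above.
UIImage *imageFromRawDataWithOrientation(uint8_t *data, int width, int height,
                                         CGFloat scale, UIImageOrientation orientation) {
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, data, (size_t)width * height * 4, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                        kCGImageAlphaLast, provider, NULL, NO,
                                        kCGRenderingIntentDefault);
    // The scale and orientation make the UIImage display correctly for the
    // interface orientation the pixels were captured in.
    UIImage *newImage = [UIImage imageWithCGImage:imageRef
                                            scale:scale
                                      orientation:orientation];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return newImage;
}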

Add sublayer over CALayer

I've been developing with Vuforia SDK where I want to put video over a layer.
When the augmented image is recognized, it calls a method to show the targetOverlayView (UIView) layer.
[booksDelegate setOverlayLayer:self.targetOverlayView.layer];
The setOverlayLayer method looks like this:
- (void)setOverlayLayer:(CALayer *)overlayLayer {
UIImage* image = nil;
UIGraphicsBeginImageContext(overlayLayer.frame.size);
{
[overlayLayer renderInContext: UIGraphicsGetCurrentContext()];
image = UIGraphicsGetImageFromCurrentImageContext();
}
UIGraphicsEndImageContext();
// Get the inner CGImage from the UIImage wrapper
CGImageRef cgImage = image.CGImage;
// Get the image size
NSInteger width = CGImageGetWidth(cgImage);
NSInteger height = CGImageGetHeight(cgImage);
// Record the number of channels
NSInteger channels = CGImageGetBitsPerPixel(cgImage)/CGImageGetBitsPerComponent(cgImage);
// Generate a CFData object from the CGImage object (a CFData object represents an area of memory)
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
unsigned char* pngData = new unsigned char[width * height * channels];
const int rowSize = width * channels;
const unsigned char* pixels = (unsigned char*)CFDataGetBytePtr(imageData);
// Copy the row data from bottom to top
for (int i = 0; i < height; ++i) {
memcpy(pngData + rowSize * i, pixels + rowSize * (height - 1 - i), width * channels);
}
glClearColor(0.0f, 0.0f, 0.0f, QCAR::requiresAlpha() ? 0.0f : 1.0f);
if (!trackingTIDSet) {
glGenTextures(1, &tID);
}
glBindTexture(GL_TEXTURE_2D, tID);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, (GLvoid*)pngData);
trackingTIDSet = YES;
trackingTextureAvailable = YES;
delete[] pngData;
CFRelease(imageData);
renderState = RS_OVERLAY;
}
It shows a blank screen. Even if I change the background colour of targetOverlayView, it is still only a black screen.
targetOverlayView.m :
self.layer.backgroundColor = [UIColor orangeColor].CGColor;
And this is where I try to put the AVPlayer layer on the layer, but still no luck, it's blank:
AVURLAsset *avasset = [[AVURLAsset alloc] initWithURL:URL1 options:nil];
AVPlayerItem *item = [[AVPlayerItem alloc] initWithAsset:avasset];
player = [[[AVPlayer alloc] initWithPlayerItem:item]retain];
playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
CGSize size = self.layer.bounds.size;
float x = size.width/2.0-187.0;
float y = size.height/2.0 - 125.0;
playerLayer.frame = CGRectMake(x, y, 374, 250);
[self.layer insertSublayer:playerLayer atIndex:0];
[player play];
How can I present a video over the layer? FYI, the video is playing, because I can hear the sound.
Thanks..

Why is glReadPixels() failing in this code in iOS 6.0?

The following is code I use for reading an image from an OpenGL ES scene:
-(UIImage *)getImage{
GLint width;
GLint height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
NSLog(#"%d %d",width,height);
NSInteger myDataLength = width * height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < height; y++)
{
for(int x = 0; x < width * 4; x++)
{
buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
free(buffer);
free(buffer2);
return myImage;
}
This is working in iOS 5.x and lower versions, but on iOS 6.0 this is now returning a black image. Why is glReadPixels() failing on iOS 6.0?
CAEAGLLayer *eaglLayer = (CAEAGLLayer *) self.layer;
eaglLayer.drawableProperties = @{
kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
};
Set kEAGLDrawablePropertyRetainedBacking to YES (I do not know why this tip works so well).
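If you'd rather not enable retained backing, the other option (hinted at in the first answer above, and what the presentFramebuffer methods earlier on this page do) is to read the pixels before the frame is presented. A rough sketch, where _takeSnapshotFlag and snapshotImage are hypothetical ivars you would add yourself:
- (BOOL)presentFramebuffer
{
    // Hypothetical flag set by your screenshot button; the point is only that
    // glReadPixels runs before presentRenderbuffer:, while the frame is still valid.
    if (_takeSnapshotFlag)
    {
        snapshotImage = [self getImage];   // the getImage method from the question
        _takeSnapshotFlag = NO;
    }
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    return [context presentRenderbuffer:GL_RENDERBUFFER];
}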
Try this method to get your screenshot image. The output image is Mailimage:
- (UIImage*)screenshot
{
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
Mailimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return Mailimage;
}

glReadPixels only saves 1/4 screen size snapshots

I'm working on an Augmented Reality app for a client. The OpenGL and EAGL part has been done in Unity 3D, and implemented into a View in my application.
What I need now is a button that snaps a screenshot of the OpenGL content, which is the backmost view.
I tried writing it myself, but when I tap the button with the assigned IBAction, it only saves 1/4 of the screen (the lower left corner), though it does save it to the camera roll.
So basically, how can I make it save the entire screen size instead of just one fourth?
Here's my code for the method:
-(IBAction)tagBillede:(id)sender
{
UIImage *outputImage = nil;
CGRect s = CGRectMake(0, 0, 320, 480);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
if (!buffer) goto error;
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
if (!ref) goto error;
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
if (!iref) goto error;
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
if (!pixels) goto error;
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
if (!context) goto error;
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
if (!outputRef) goto error;
outputImage = [UIImage imageWithCGImage: outputRef];
if (!outputImage) goto error;
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
I suspect you are using a device with a Retina display, which is 640x960. You need to take the screen scale into account; it is 1.0 on non-Retina displays and 2.0 on Retina displays. Try initializing s like this:
CGFloat scale = UIScreen.mainScreen.scale;
CGRect s = CGRectMake(0, 0, 320 * scale, 480 * scale);
If the device is a Retina device, you need to scale the OpenGL stuff yourself. You're actually specifying that you want the lower-left corner by only capturing half the width and half the height.
You need to double both your width and height for Retina screens, but realistically, you should be multiplying by the screen's scale:
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
Thought I'd chime in and, at the same time, throw some gratitude :)
I got it working like a charm now. Here's the cleaned-up code:
UIImage *outputImage = nil;
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
