UIView to CGContext coordinate system in landscape orientation - iOS

I am trying to capture screen shots and make a movie. Everything works fine in portrait mode, but my app is only a landscape app.
I am using this method to add the UIImage.CGImageRef into the video:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image withSize:(CGSize)frameSize
{
    CVPixelBufferRef pxbuffer;
    if (pixelBufferPool == NULL) {
        NSLog(@"pixelBufferPool is null!");
        return nil;
    }
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"failed to create pixel buffer pool pixel buffer %i", status);
        return nil;
    }
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
                                                 frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    //CGAffineTransform t = CGAffineTransformIdentity;
    //t = CGAffineTransformRotate(t, 0/*M_PI_2*/);
    //t = CGAffineTransformScale(t, -1.0, 1.0);
    //CGContextConcatCTM(context, t);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
The video looks as if the images are skewed. I know it's because the coordinate systems of UIImage and CGContext are different, hence the skewed frames, but I don't know how to fix it.
I tried rotating, scaling, etc., but everything needs to be applied in the correct order.
What is the coordinate system of a UIImage in a landscape app, and the coordinate system of the CGContext?
What is the correct transform that I need?

Actually, if the app launches in landscape mode, the frame of the view should already have the correct size, so you don't need to rotate your context.
You only have to flip the y-axis of the context, like this:
CGContextScaleCTM(c, 1, -1);
CGContextTranslateCTM(c, 0, -size.height);
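Applied to the pixelBufferFromCGImage: method from the question, that flip goes right after the bitmap context is created and before CGContextDrawImage. A minimal sketch, reusing the context and frameSize variables from the code above:
    // Flip the y-axis of the bitmap context so the CGImage draws right-side up.
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextTranslateCTM(context, 0, -frameSize.height);
    CGContextDrawImage(context,
                       CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)),
                       image);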

Could this be happening because the size of the CGContext doesn't match the size of the CVPixelBuffer? When you draw your image, you probably walk past the end of the row and start drawing pixels on the following row, because your landscape image is wider than the pixel buffer.
What if you created your CGContext with a size that's based on the pixel buffer's size, and then if the pixel buffer size doesn't match the frameSize, then you apply your rotation transform?
Or, alternatively, when you're in landscape mode, re-create the pixel buffer pool with a different size that matches the landscape size.
(I'm not familiar with the CoreVideo API, so that's just a guess, though)
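A hedged sketch of that first suggestion, sizing the bitmap context from the pixel buffer itself so the row stride always matches what Core Video allocated (it reuses pxbuffer, pxdata and rgbColorSpace from the question's method):
    size_t bufWidth       = CVPixelBufferGetWidth(pxbuffer);
    size_t bufHeight      = CVPixelBufferGetHeight(pxbuffer);
    size_t bufBytesPerRow = CVPixelBufferGetBytesPerRow(pxbuffer);   // may be padded beyond 4 * width

    CGContextRef context = CGBitmapContextCreate(pxdata, bufWidth, bufHeight,
                                                 8, bufBytesPerRow, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);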

Related

iOS OpenGL: Why has scaling my renderbuffer for retina shrunk my image, and how do I fix it?

I'm working on an augmented reality project using a Retina iPad, but the two layers, the camera feed and the OpenGL overlay, are not making use of the high-resolution screen. The camera feed is drawn to a texture, which appears to be scaled and sampled, whereas the overlay uses a blocky 4-pixel scale-up.
I have looked through a bunch of questions and added the following lines to my EAGLView class.
In initWithCoder, before calling setupFrameBuffer and setupRenderBuffer:
self.contentScaleFactor = [[UIScreen mainScreen] scale];
and in setupFrameBuffer
float screenScale = [[UIScreen mainScreen] scale];
float width = self.frame.size.width;
float height = self.frame.size.height;
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width * screenScale, height * screenScale);
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width*screenScale, height*screenScale, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
The last two lines are simply modified to include the scale factor.
Running this code gives me the following results:
As you can see, the image now only fills the lower left quarter of the screen, but I can confirm the image is only scaled, not cropped. Can anyone help me work out why this is?
It's not actually being scaled; you are drawing the frames at a size you defined before allowing the render buffer to be 2x the size in both directions.
Most likely what is going on is that you defined your sizing in terms of pixels rather than the more general OpenGL coordinate space, which runs from -1 to 1 in both the x and y directions (at least when you are working in 2D, as you are).
Also, calling:
float width = self.frame.size.width;
float height = self.frame.size.height;
will return a size that is NOT the retina size. If you NSLog those values out, you will see that even on a retina device they are reported in points (a layout unit), not pixels.
The way I have chosen to obtain the view's actual size in pixels is:
GLint myWidth = 0;
GLint myHeight = 0;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &myWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &myHeight);
In iOS, I have been using the below code as my setup:
-(void)setupView:(GLView*)theView
{
    const GLfloat zNear = 0.00, zFar = 1000.0, fieldOfView = 45.0;
    GLfloat size;
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
    //CGRect rect = theView.bounds;
    GLint width, height;
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &width);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &height);
    // NSLog(@"setupView rect width = %d, height = %d", width, height);
    glFrustumf(-size, size, -size / ((float)width / (float)height),
               size / ((float)width / (float)height), zNear, zFar);
    glViewport(0, 0, width, height);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
The above routine is used within code I am testing on both retina and non-retina setups, and it is working just fine. This setupView routine is an override point within a view controller.
Here's the solution I found.
I found out that in my draw function I was setting the glViewport size based on glview.bounds.size, which is in points rather than pixels. Switching it to be based on framebufferWidth and framebufferHeight solved the problem.
glViewport(0, 0, m_glview.framebufferWidth, m_glview.framebufferHeight);

UIImageView rotation without moving Image

Hi,
I want to rotate my UIImageView without moving the whole PNG. The code is only there to test what happens:
_fanImage.transform = CGAffineTransformMakeRotation(45);
It turns, but the whole image moves. What can I do so that this doesn't happen?
You can try something like this; you should rotate the UIImage rather than the UIImageView. The method below is meant to live on UIImage itself (e.g. in a category), since it refers to self.size and self.CGImage.
- (UIImage *)imageWithTransform:(CGAffineTransform)transform
{
    CGRect rect = CGRectMake(0, 0, self.size.height, self.size.width);
    CGImageRef imageRef = self.CGImage;
    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                self.size.width,
                                                self.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));
    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);
    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, rect, imageRef);
    // Get the resulting image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);
    return newImage;
}
I think you mean that you want your image view to rotate around its center point. Is that right? If so, that's what a view should do by default.
You should do a search on "Translating, Scaling, and Rotating Views" in the Xcode documentation and read the resulting article.
Note that all of iOS's angles are specified in radians, not degrees.
Your sample images aren't really helpful, since we can't see the frame that the image view is drawn into. It's almost impossible to tell what your image views are doing and what they are supposed to be doing instead based on the pictures you linked from your dropbox.
A full 360 degrees is 2π radians.
You should use
CGFloat degrees = 45;
CGFloat radians = degrees/180*M_PI;
_fanImage.transform = CGAffineTransformMakeRotation(radians);
That will fix the rotation amount for your code, but probably not the rotation position.
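If the fan is supposed to spin around some point other than the view's center, one option (sketched here as an assumption about what you're after) is to move the layer's anchorPoint before applying the transform:
    // Rotate around a custom pivot: moving the anchorPoint shifts the layer,
    // so compensate by updating the layer's position to the matching point.
    CGPoint pivot = CGPointMake(0.5, 1.0);   // bottom-center, as an example
    CGRect frame = _fanImage.frame;
    _fanImage.layer.anchorPoint = pivot;
    _fanImage.layer.position = CGPointMake(frame.origin.x + pivot.x * frame.size.width,
                                           frame.origin.y + pivot.y * frame.size.height);
    _fanImage.transform = CGAffineTransformMakeRotation(45.0 / 180.0 * M_PI);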

How to detect when the non-transparent part of a UIImage is in contact with the non-transparent part of another UIImage

I am having trouble accomplishing something that I thought would be much easier. I am trying to run a method whenever a non-transparent part of a picture inside a UIImage touches a non-transparent part of an image contained within another UIImage. I have included an example to help explain my question.
As you can see in the image above, I have two triangles that are each inside a UIImage. The triangles are both PNG pictures. Only the triangle is visible because the background has been made transparent. Each UIImage is inside its own UIImageView. I want to be able to run a method when the visible part of one triangle touches the visible part of the other triangle. Can someone please help me?
The brute force solution to this problem is to create a 2D array of bools for each image, where each array entry is true for an opaque pixel, and false for the transparent pixels. If CGRectIntersectsRect returns true (indicating a possible collision), then the code scans the two arrays (with appropriate offsets depending on relative positions) to check for an actual collision. That gets complicated, and is computationally intensive.
One alternative to the brute force method is to use OpenGLES to do all of the work. This is still a brute force solution, but it offloads the work to the GPU, which is much better at such things. I'm not an expert on OpenGLES, so I'll leave the details to someone else.
A second alternative is to place restrictions on the problem that allow it to be solved more easily. For example, given two triangles A and B, collisions can only occur if one of the vertices of A is contained within the area of B, or if one of the vertices of B is in A. This problem can be solved using the UIBezierPath class in Objective-C. A UIBezierPath can be used to create a path in the shape of a triangle, and then its containsPoint: method can be used to check whether a vertex of the opposing triangle lies inside the area of the target triangle.
In summary, the solution is to add a UIBezierPath property to each object. Initialize the UIBezierPath to approximate the object's shape. If CGRectIntersectsRect indicates a possible collision, then check if the vertices of one object are contained in the area of the other object using the containsPoint: method.
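A minimal sketch of that check, assuming each object exposes its triangle's vertices in a common coordinate space (the vertexA/vertexB/vertexC properties are illustrative, not real API):
    // Triangular path for object B.
    UIBezierPath *pathB = [UIBezierPath bezierPath];
    [pathB moveToPoint:objectB.vertexA];
    [pathB addLineToPoint:objectB.vertexB];
    [pathB addLineToPoint:objectB.vertexC];
    [pathB closePath];

    // Coarse bounding-box test first, then the precise vertex-in-area test.
    if (CGRectIntersectsRect(objectA.frame, objectB.frame)) {
        if ([pathB containsPoint:objectA.vertexA] ||
            [pathB containsPoint:objectA.vertexB] ||
            [pathB containsPoint:objectA.vertexC]) {
            // collision detected
        }
    }
For a complete test you would repeat the check the other way around, with A's path and B's vertices.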
What I did was:
counted the number of non-alpha pixels in image A
did the same for image B
merged the A + B images into one image, C
compared the resulting pixel counts
If the pixel count is lower after merging, then we have a hit:
if (C.count < A.count + B.count) -> we have a hit
+ (int)countPoints:(UIImage *)img
{
    CGImageRef cgImage = img.CGImage;
    NSUInteger width = img.size.width;
    NSUInteger height = img.size.height;
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = 1;                      // alpha-only: one byte per pixel
    size_t bytesPerRow = width * bytesPerPixel;
    size_t dataSize = bytesPerRow * height;
    unsigned char *bitmapData = malloc(dataSize);
    memset(bitmapData, 0, dataSize);
    // An alpha-only bitmap context takes a NULL color space.
    CGContextRef bitmap = CGBitmapContextCreate(bitmapData, width, height, bitsPerComponent,
                                                bytesPerRow, NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextTranslateCTM(bitmap, 0, img.size.height);
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), cgImage);
    int p = 0;
    int i = 0;
    while (i < width * height) {
        if (bitmapData[i] > 0) {
            p++;
        }
        i++;
    }
    free(bitmapData);
    bitmapData = NULL;
    CGContextRelease(bitmap);
    bitmap = NULL;
    //NSLog(@"points: %d", p);
    return p;
}
+ (UIImage *)marge:(UIImage *)imageA withImage:(UIImage *)imageB
{
    // Assumes both images are the same size.
    CGSize itemSize = CGSizeMake(imageA.size.width, imageA.size.height);
    UIGraphicsBeginImageContext(itemSize);
    CGRect rect = CGRectMake(0, 0, itemSize.width, itemSize.height);
    [imageA drawInRect:rect];
    [imageB drawInRect:rect];
    UIImage *overlappedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return overlappedImage;
}
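Putting the two helpers together, the check described above might look like this (a sketch; ImageUtils just stands in for whatever class the two class methods live on):
    int countA = [ImageUtils countPoints:imageA];
    int countB = [ImageUtils countPoints:imageB];
    UIImage *merged = [ImageUtils marge:imageA withImage:imageB];
    int countC = [ImageUtils countPoints:merged];

    if (countC < countA + countB) {
        // the non-transparent parts overlap -> we have a hit
    }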

Taking an iOS Screenshot Mixing OpenGL ES and UIKit from a parent class

I know this question gets answered quite a bit, but my situation seems different. I'm trying to write a top-level function where I can take a screenshot of my app at any time, be it OpenGL ES or UIKit, and I won't have access to the underlying classes to make any changes.
The code I've been trying works for UIKit, but returns a black screen for the OpenGL ES parts:
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
    UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        for (UIView *subview in window.subviews)
        {
            CAEAGLLayer *eaglLayer = (CAEAGLLayer *)subview.layer;
            if ([eaglLayer respondsToSelector:@selector(drawableProperties)]) {
                NSLog(@"responds");
                /*eaglLayer.drawableProperties = @{
                    kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
                    kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
                };*/
                UIImageView *glImageView = [[UIImageView alloc] initWithImage:[self snapshotx:subview]];
                glImageView.transform = CGAffineTransformMakeScale(1, -1);
                [glImageView.layer renderInContext:context];
                //CGImageRef iref = [self snapshot:subview withContext:context];
                //CGContextDrawImage(context, CGRectMake(0.0, 0.0, 640, 960), iref);
            }
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
and
- (UIImage *)snapshotx:(UIView *)eaglview
{
    GLint backingWidth, backingHeight;
    //glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    //don't know how to access the renderbuffer if i can't directly access the below code
    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    8,
                                    32,
                                    width * 4,
                                    colorspace,
                                    // Fix from Apple implementation
                                    // (was: kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast).
                                    kCGBitmapByteOrderDefault,
                                    ref,
                                    NULL,
                                    true,
                                    kCGRenderingIntentDefault);
    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
Any advice on how to mix the two without having the ability to modify the classes in the rest of the application?
Thanks!
I see what you tried to do there, and it is not really a bad concept. There does seem to be one big problem, though: you cannot just call glReadPixels at any time you want. First, you should make sure the buffer is actually filled with the pixels you need, and second, it must be called on the same thread the GL rendering is running on...
If the GL views are not yours, you may have real trouble calling that screenshot method; you need to call something that triggers binding their internal context, and if they are animating you will have to know when a cycle has finished to ensure the pixels you receive are the same as the ones presented on screen.
Anyway, if you get past all of that, you will still probably need to "jump" between threads or wait for a cycle to finish. In that case I suggest you use blocks that return the screenshot image, passed as a method parameter, so you can catch it whenever it is returned. That said, it would be best if you could override some methods on the GL views to return the screenshot image via a callback block, and write some recursive system on top of that.
To sum up, you need to handle multithreading, setting the context, binding the correct framebuffer, and waiting for everything to be rendered. All of this may make it impossible to create a screenshot method that simply works for any application, view, or system without overriding some internal methods.
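As a rough illustration of the block idea (not the actual API of any GL view; renderQueue, glContext and glView are all assumptions about how the rendering is set up):
    typedef void (^ScreenshotHandler)(UIImage *screenshot);

    - (void)captureGLScreenshotWithCompletion:(ScreenshotHandler)handler
    {
        // Hop onto whatever queue/thread owns the GL context before reading pixels.
        dispatch_async(self.renderQueue, ^{
            [EAGLContext setCurrentContext:self.glContext];
            UIImage *image = [self snapshotx:self.glView];   // readback routine from the question
            dispatch_async(dispatch_get_main_queue(), ^{
                if (handler) handler(image);
            });
        });
    }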
Note that you simply aren't allowed to take a whole-device screenshot (like the one produced by pressing the home and lock buttons together) from inside your application. The reason the UIView part is so easy to capture is that a UIView can be redrawn into a graphics context independently of the screen; it's as if you took a GL pipeline, bound it to your own buffer and context, and drew into it, which would let you grab its contents independently and on any thread.
Actually, I'm trying to do something similar: I'll post in full when I've ironed it out, but in brief:
use your superview's layer's renderInContext method
in the subviews which use openGL, implement the layer delegate's drawLayer:inContext: method
to render your view into the context, use a CVOpenGLESTextureCacheRef
Your superview's layer will call renderInContext: on each of its sublayers; by implementing the delegate method, your GLView can respond for its layer.
Using a texture cache is much, much faster than glReadPixels: that will probably be a bottleneck.
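A rough sketch of that drawLayer:inContext: hook, using a plain framebuffer readback like the question's snapshotx: rather than the faster texture-cache route (the glSnapshot method is a hypothetical stand-in for that readback):
    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
    {
        // Called when a parent layer's renderInContext: reaches this GL-backed layer.
        UIImage *snapshot = [self glSnapshot];   // hypothetical GL readback into a UIImage
        UIGraphicsPushContext(ctx);
        [snapshot drawInRect:layer.bounds];
        UIGraphicsPopContext();
    }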
Sam

Creating a higher-resolution PNG image programmatically

So I have some copypasta way to create a PNG file and it's working splendidly; however, when I go to print this PNG it appears kind of "blurry", like a low-resolution image. Is there any way to create the PNG at a higher resolution?
Here's my current code:
- (UIImage *)renderScrollViewToImage
{
    UIImage *image = nil;
    UIGraphicsBeginImageContext(self.scrollView.contentSize);
    {
        CGPoint savedContentOffset = self.scrollView.contentOffset;
        CGRect savedFrame = self.scrollView.frame;
        self.scrollView.contentOffset = CGPointZero;
        self.scrollView.frame = CGRectMake(0, 0, _scrollView.contentSize.width, _scrollView.contentSize.height);
        [self.scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();
        _scrollView.contentOffset = savedContentOffset;
        _scrollView.frame = savedFrame;
    }
    UIGraphicsEndImageContext();
    if (image != nil) {
        return image;
    }
    return nil;
}
try replacing:
UIGraphicsBeginImageContext(self.scrollView.contentSize);
with
UIGraphicsBeginImageContextWithOptions(self.scrollView.contentSize, NO, 0.0);
which will take the Retina scale factor into account. From the docs:
UIGraphicsBeginImageContextWithOptions
Creates a bitmap-based graphics context with the specified options.
void UIGraphicsBeginImageContextWithOptions(
CGSize size,
BOOL opaque,
CGFloat scale
);
Parameters
size
The size (measured in points) of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function. To get the size of the bitmap in pixels, you must multiply the width and height values by the value in the scale parameter.
opaque
A Boolean flag indicating whether the bitmap is opaque. If you know the bitmap is fully opaque, specify YES to ignore the alpha channel and optimize the bitmap’s storage. Specifying NO means that the bitmap must include an alpha channel to handle any partially transparent pixels.
scale
The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
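If the screen's own scale still isn't enough for print, you can also pass an explicit scale factor instead of 0.0. A sketch of the same call from renderScrollViewToImage with a hand-picked factor (2.0 is illustrative, not a required value):
    // Render at 2x the point size; larger factors give more pixels for print
    // at the cost of memory.
    CGFloat printScale = 2.0;
    UIGraphicsBeginImageContextWithOptions(self.scrollView.contentSize, NO, printScale);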
