How to resize OpenGL ES renderbuffer on iOS?

I am working on an application and a lot of the code is based on the GLPaint sample from Apple.
In the GLPaint sample the framebuffer and colorbuffer are destroyed and recreated in layoutSubviews.
I load an image from the image picker and resize it so its width and height fit within the maximum texture size. I then set the GL view's frame to the same size.
When I resize my view and layoutSubviews is called a second time, calling renderbufferStorage:fromDrawable: on the context returns NO, and therefore my FBO is incomplete. This is the exact same code that initially set up the FBO and color buffer.
What's the proper way to resize the renderbuffer?
Code: https://gist.github.com/1340465

I'm pretty sure there is no way to resize a renderbuffer in place. The only way is to destroy it and recreate it when the target view is resized.
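A minimal sketch of that recreate-on-resize pattern, using OpenGL ES 2.0 names (ES 1.1 uses the OES-suffixed equivalents) and assuming GLPaint-style ivars (context, framebuffer, colorRenderbuffer, backingWidth, backingHeight). One common reason renderbufferStorage:fromDrawable: returns NO the second time is that the old renderbuffer still holds storage for the layer, so tear the old objects down first:

- (void)layoutSubviews {
    [super layoutSubviews];
    [EAGLContext setCurrentContext:context];

    // Destroy the old attachments before allocating new storage;
    // renderbufferStorage:fromDrawable: can fail if the previous
    // renderbuffer is still alive for the same layer.
    if (colorRenderbuffer) {
        glDeleteRenderbuffers(1, &colorRenderbuffer);
        colorRenderbuffer = 0;
    }
    if (framebuffer) {
        glDeleteFramebuffers(1, &framebuffer);
        framebuffer = 0;
    }

    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    if (![context renderbufferStorage:GL_RENDERBUFFER
                         fromDrawable:(CAEAGLLayer *)self.layer]) {
        NSLog(@"renderbufferStorage:fromDrawable: failed");
        return;
    }
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderbuffer);

    // Cache the new backing size for glViewport etc.
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

    NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE,
             @"Framebuffer incomplete after resize");
}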

Related

iOS: drawing lines on an imageView and combining the lines and the image into a new image; the image's size changed

I have an imageView, and say its size is the screen size. It displays an image which has a larger size, and the imageView's content mode is set to scaleAspectFill. Then I draw some lines on the imageView using UIBezierPath.
Later I would like to generate a new image which includes the lines I drew, using drawViewHierarchyInRect. The problem is that the new image's size is the imageView's size, since drawViewHierarchyInRect only works like taking a snapshot. How can I combine the original image with the lines I drew while keeping the image's size?
You want to use the method UIGraphicsBeginImageContextWithOptions to create an off-screen context of the desired size. (In your case, the size of the image.)
Then draw your image into the context, draw the lines on top, and extract your composite image from the context. Finally, dispose of the context.
There is tons of sample code online showing how to use UIGraphicsBeginImageContextWithOptions. It's quite easy.
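For example, a minimal sketch, where originalImage stands in for your full-size UIImage and path for a UIBezierPath already scaled from view coordinates into image coordinates (both names are placeholders):

UIGraphicsBeginImageContextWithOptions(originalImage.size, NO, originalImage.scale);
// Draw the original image at full size, then stroke the path on top of it.
[originalImage drawInRect:CGRectMake(0, 0, originalImage.size.width, originalImage.size.height)];
[[UIColor redColor] setStroke];
[path stroke];
UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();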

Force NSView drawing to behave as if display is retina

I've got a custom NSView which draws a chart in my app. I am generating a PDF which includes the image. In iOS I do this using code like this:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
[self drawRect:self.frame];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
On iOS the displays are Retina, which means the image is very high resolution. However, I'm now trying to do this in my Mac app, and the quality of the image is poor because non-Retina Macs generate a non-high-res version of the image.
I would like to force my NSView to behave as if it were Retina when I'm using it to generate an image. That way, when I put the image into my PDF, it'll be much higher resolution. Right now, it's very blurry and not attractive.
Even a Retina bitmap will still be blurry and unattractive when scaled up enough. Assuming the view draws its contents in drawRect:, rather than trying to render the view into a PDF at a fixed resolution, a better approach is to draw directly into a PDF graphics context. This will produce a nice scalable PDF. The drawing code will need to be factored so it can be used for both the view’s drawRect: and the PDF.
Also, the iOS documentation states you should never call drawRect: yourself. Call renderInContext: on the view's layer, or use the newer drawViewHierarchyInRect:afterScreenUpdates:.
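A minimal sketch of the draw-into-a-PDF-context approach on the Mac (macOS 10.10+ for graphicsContextWithCGContext:flipped:), where chartView is your view and drawChartInRect: is a hypothetical factored-out routine that your drawRect: would also call:

NSMutableData *pdfData = [NSMutableData data];
CGDataConsumerRef consumer = CGDataConsumerCreateWithCFData((__bridge CFMutableDataRef)pdfData);
CGRect mediaBox = CGRectMake(0, 0, 612, 792); // one US Letter page at 72 dpi
CGContextRef pdfContext = CGPDFContextCreate(consumer, &mediaBox, NULL);
CGPDFContextBeginPage(pdfContext, NULL);

// Route AppKit drawing into the PDF context, run the shared drawing
// code, then restore the previous context.
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithCGContext:pdfContext flipped:NO]];
[chartView drawChartInRect:mediaBox];
[NSGraphicsContext restoreGraphicsState];

CGPDFContextEndPage(pdfContext);
CGPDFContextClose(pdfContext);
CGContextRelease(pdfContext);
CGDataConsumerRelease(consumer);

Because the drawing goes straight into the PDF context, any Core Graphics paths and text stay vector and scale cleanly, regardless of the display's resolution.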
You can call -[NSView dataWithPDFInsideRect:] to get PDF data from the drawing in a view. For example:
NSData* data = [someView dataWithPDFInsideRect:someView.bounds];
[data writeToFile:@"/tmp/foo.pdf" atomically:YES];
Any vector drawing (e.g. text, Bezier paths, etc.) that your view and its subviews do will end up as scalable vector graphics in the PDF.

OpenGLES 2.0 texture rendering

I'm creating a drawing app and I need the end result to be saved as a PNG image. But I then need to be able to edit the image with further drawing.
Is a framebuffer object the way to go here? Rendering into an offscreen texture?
It depends on how you want to edit the image afterwards. There are two parts to your question:
1) Saving the image as a PNG
2) Editing the image after drawing to it
1) It is straightforward to save a framebuffer drawing as a PNG. There is a similar question for OpenGL ES 1.x (http://stackoverflow.com/questions/5062978/how-can-i-dump-opengl-renderbuffer-to-png-or-jpg-image) that should be a good base to work from.
2) It depends on how soon you want to edit the image. If you are editing the image continuously throughout the program, keep everything in memory in the framebuffer and only write to a PNG when you are done editing. If you need to draw on top of the image at a later time (for instance, when you reopen the program), save it as a PNG and then load the PNG as a texture for a new framebuffer when you want to edit the image again. When you draw to this new framebuffer, you will be drawing on top of the texture (which was your previous image).
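For part 1, a minimal sketch of the readback (OpenGL ES 2.0, assuming the FBO you drew into is currently bound, width/height are its dimensions, and the alpha is premultiplied):

GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, width * height * 4, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                    kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                    provider, NULL, NO, kCGRenderingIntentDefault);

// glReadPixels returns rows bottom-up; drawing the CGImage into a flipped
// UIKit context turns the image right side up again.
UIGraphicsBeginImageContext(CGSizeMake(width, height));
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imageRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *png = UIImagePNGRepresentation(image);

CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
free(pixels);

For part 2, the saved PNG can later be uploaded with glTexImage2D and attached as the color texture of a new framebuffer, so that subsequent drawing lands on top of the previous image.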

Again edit of edited UIImage in iOS app

In my iOS app, I am placing several UIImages on one background UIImage and then saving the overall composite, with all the subimages added to it, by taking a screenshot programmatically.
Now I want to change the positions of the UIImage subviews in that saved image, so I want to know how to detect each subview image's position, given that I captured the whole thing as a screenshot.
Record each subview's frame converted to window coordinates. The image's pixel coordinates will match the frame values at 1x, or double them on Retina displays. The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has some convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates.
EDIT: to deal with aspect-fit content, you have to do the math yourself. You know the frame of the imageView, and you can ask the image for its size. Comparing the aspect ratios tells you in which dimension the image completely fills the view, and from that you can compute the other dimension (which will be smaller than the imageView frame). Divide the difference between the view dimension and the image dimension by two, and that gives you the offset of the image inside the view. Now you can save the frame of the image as it is displayed in the view.
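A minimal sketch of that aspect-fit math (all names are illustrative):

// Compute the rect the image actually occupies inside an aspect-fit image view.
static CGRect AspectFitRect(CGSize imageSize, CGRect viewBounds) {
    CGFloat scale = MIN(viewBounds.size.width / imageSize.width,
                        viewBounds.size.height / imageSize.height);
    CGSize fitted = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
    return CGRectMake(viewBounds.origin.x + (viewBounds.size.width - fitted.width) / 2.0,
                      viewBounds.origin.y + (viewBounds.size.height - fitted.height) / 2.0,
                      fitted.width, fitted.height);
}

// Record it in window coordinates (passing nil converts to the window):
CGRect fitRect = AspectFitRect(imageView.image.size, imageView.bounds);
CGRect inWindow = [imageView convertRect:fitRect toView:nil];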

Blackberry Scrolling in Canvas

Hi friends, I have searched many threads but I haven't found any solution for scrolling vertically when the bitmaps are drawn using Graphics in the paint method.
Please help me.
If you are trying to (as Jonathan said) scroll through an image that is larger than the screen, and you are manually painting inside your Graphics paint method without any helper functions, I would use two bitmaps: one as a buffer for your image and another for the actual frame:
1. Keep your large_image in a "buffer" bitmap, and create another bitmap, the same size as the screen, to use as a canvas that gets painted to the screen.
2. Clip large_image to the size of the screen at the region you want to show in the next frame, and save that clipped bitmap into your canvas.
3. Draw the canvas bitmap.
4. Scroll the clipping of large_image (move the x and y values) into your "canvas" bitmap.
5. Repeat steps 3 and 4 until the scrolling ends.
Hope that is clear: think of your canvas as a camera taking smaller snapshots of your large_image, with the large_image moving so that each snapshot in sequence creates the scrolling effect.
Cheers, and hope it helps!
