How to apply a Gaussian Blur while rendering a CATiledLayer?

I'm using CATiledLayer to render some 2D graphics in the draw(in:) callback.
The scene is composed of elements such as open and filled paths, images, etc. that are drawn procedurally using the painter's model. Some areas need to be blurred, and there may then be non-blurred graphics drawn on top of them.
I believe a Gaussian blur has to be applied to a CIImage, but I don't know the best way to create a CIImage in this scenario. I've spent a fair bit of time searching for a solution but haven't come up with anything. I would like to avoid composing the scene in one or more offscreen bitmaps and blitting the result back to the CALayer.
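For reference, the Core Image route I have in mind looks roughly like this: a minimal sketch, assuming cgImage is a snapshot of the already-rendered region (which is exactly the kind of offscreen step I'd like to avoid):

    import CoreImage
    import UIKit

    // Minimal sketch of the Core Image route: wrap an existing CGImage
    // (assumed to be a snapshot of the region to blur) and run CIGaussianBlur.
    func blurred(_ cgImage: CGImage, radius: Double) -> CGImage? {
        let input = CIImage(cgImage: cgImage)
        guard let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)
        guard let output = filter.outputImage else { return nil }
        // The blur expands the image extent, so crop back to the original rect.
        return CIContext().createCGImage(output, from: input.extent)
    }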

Related

How to use affineTransform on subset of CAEAGLLayer (iOS)?

I am playing around with OpenGL and Core Animation and have been able to do affine transforms on OpenGL layers; everything works great. I'm looking for help with transforming a subset of a layer, meaning the top half or bottom quarter, rotating only those pixels while keeping the rest of the layer untouched.
Alternatively, if I have one OpenGL layer, would it be possible to split it into two (top and bottom sections)? Then I could perform the transforms as needed. I cannot access the subviews in the layer, only the layer as a whole.
Any advice would be appreciated.
To do it in the view pipeline you would need multiple views. In general there is nothing wrong with that, but you will need to do a bit of work drawing to each of the views so that they appear as a single whole. If you are using standard projection matrices such as glOrtho, you only need to split the border parameters (top, bottom, left, and right) according to your view split.
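As a rough illustration, here is what the split might look like for two stacked views sharing one logical scene (a minimal Swift sketch assuming OpenGL ES 1.1's fixed-function glOrthof; the 50/50 vertical split is just an example):

    import OpenGLES

    // Sketch: two stacked views sharing one logical scene of size (w, h).
    // Each view keeps the same left/right but splits bottom/top, so together
    // they reproduce a single glOrthof(0, w, 0, h, -1, 1) projection.
    func setProjection(forTopHalf isTop: Bool, w: GLfloat, h: GLfloat) {
        glMatrixMode(GLenum(GL_PROJECTION))
        glLoadIdentity()
        if isTop {
            glOrthof(0, w, h / 2, h, -1, 1)   // bottom = h/2, top = h
        } else {
            glOrthof(0, w, 0, h / 2, -1, 1)   // bottom = 0, top = h/2
        }
    }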
To do it with OpenGL directly there are multiple ways; which to choose depends on your needs.
One way is to use the viewport. It describes what part of the buffer you are drawing to, so you can split the drawing into multiple draw calls that target different regions. This is generally more useful for a view-within-a-view situation.
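A minimal sketch of the viewport split (Swift; the two draw closures and the 50/50 split are just placeholders):

    import OpenGLES

    // Sketch: draw the same framebuffer in two passes, each clipped to half
    // of it via glViewport (pixel coordinates, origin at the lower left).
    func drawSplit(width: GLsizei, height: GLsizei,
                   drawBottom: () -> Void, drawTop: () -> Void) {
        glViewport(0, 0, width, height / 2)          // bottom half
        drawBottom()
        glViewport(0, height / 2, width, height / 2) // top half
        drawTop()
    }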
Probably the best way would be to draw the whole scene to an FBO (framebuffer object) with an attached texture, then create the sprites (rectangles) you want to animate and draw parts of the texture onto those rectangles.
You still need a system that can animate within OpenGL. To achieve that you need matrix interpolation. It takes a bit of work but is generally worth it, as you get total control over the animations and how they are done. Note that because of the rotations you will need to do the interpolation in a polar form, which means converting the three basis vectors (the top-left 3x3 part of the matrix) to angle + radius and interpolating those.
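Here is the idea reduced to 2D, as a rough Swift sketch: decompose each matrix into angle + radius, interpolate those, and rebuild. A real version would also handle angle wrap-around and shear:

    import Foundation
    import simd

    // Sketch: interpolate rotation+scale matrices in polar form. Lerping
    // the matrix entries directly would shrink the basis vectors mid-rotation.
    func interpolate(from a: simd_float2x2, to b: simd_float2x2, t: Float) -> simd_float2x2 {
        // Decompose the first basis vector into angle + radius.
        func polar(_ m: simd_float2x2) -> (angle: Float, radius: Float) {
            (atan2(m.columns.0.y, m.columns.0.x), simd_length(m.columns.0))
        }
        let (a0, r0) = polar(a)
        let (a1, r1) = polar(b)
        let angle = a0 + (a1 - a0) * t     // wrap-around handling omitted
        let radius = r0 + (r1 - r0) * t
        return simd_float2x2(columns: (
            SIMD2<Float>(cos(angle), sin(angle)) * radius,
            SIMD2<Float>(-sin(angle), cos(angle)) * radius
        ))
    }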

cocos2d - is it faster to draw a primitive shape (e.g. ccDrawCircle, drawDot) or draw a sprite?

I have to draw a number of sprites and also a very large circle, and I'm wondering whether it is better to draw that circle using ccDrawCircle or to make a circle sprite and draw it as a regular sprite. In terms of memory, I'd be using up a lot to store basically nothing but a giant circle; but is it faster to do all the drawing as one batch? What if I had to split it into two batches, one for the circle and another for the other sprites? Is it faster to draw a primitive shape or to draw a sprite?
ccDraw* methods aren't batch-drawn and are meant for debugging; they are not particularly efficient.
CCDrawNode is a much better option as it can batch-draw primitives, but it doesn't allow primitives to be removed without recreating them all.
Batched sprites are usually the best option. How much this affects fps depends on your use case and target devices. The problem is that sprites don't "scale" well, so it's best to have each required size of a shape as a separate image. On the other hand, you can tint (colorize) a grayscale image to get more color variety without needing a separate image for each shape-and-color combination.

Is there a way to draw a pixel in SpriteKit?

I am trying to put together a ray tracing routine, and the width/shade of every pixel along a single ray will change over the length of the line. Is there a way in SpriteKit to draw a single pixel on the screen, or should I be doing this using UIImage?
SpriteKit isn't a pixel drawing API. It's more of a "moving pictures around on the screen" API. There are a couple of cases, though, where it makes sense to do custom drawing, and SpriteKit (as of iOS 8 and OS X 10.10) has a few facilities for this.
If you want to create custom art to apply to sprites, with that art being mostly static (that is, not needing to be redrawn frequently as SpriteKit animates the scene), just use the drawing API of your choice — Core Graphics, OpenGL, emacs, whatever — to create an image. Then you can stick that image in an SKTexture object to apply to sprites in your scene.
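For example, something like this (a Swift sketch using UIGraphicsImageRenderer, which is iOS 10+; on iOS 8 you'd use UIGraphicsBeginImageContextWithOptions instead):

    import SpriteKit
    import UIKit

    // Sketch: draw custom art once with Core Graphics, then hand it to
    // SpriteKit as a texture.
    func makeDotSprite(diameter: CGFloat) -> SKSpriteNode {
        let size = CGSize(width: diameter, height: diameter)
        let image = UIGraphicsImageRenderer(size: size).image { ctx in
            ctx.cgContext.setFillColor(UIColor.white.cgColor)
            ctx.cgContext.fillEllipse(in: CGRect(origin: .zero, size: size))
        }
        return SKSpriteNode(texture: SKTexture(image: image))
    }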
To directly munge the bits inside a texture, use SKMutableTexture. This doesn't really provide drawing facilities — you just get to work with the raw image data. (And while you could layer drawing tools on top of that — for example, by creating a CG image context from the data and rewriting the data with CG results — that'd slow you down too much to keep up with the animation framerate.)
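A minimal sketch of what that raw access looks like (assuming the buffer is tightly packed RGBA8, which is what SKMutableTexture uses as far as I know):

    import SpriteKit

    // Sketch: poke raw RGBA bytes in an SKMutableTexture. The closure hands
    // you the texture's backing buffer; SpriteKit uploads it afterwards.
    let texture = SKMutableTexture(size: CGSize(width: 64, height: 64))
    texture.modifyPixelData { pointer, length in
        guard let base = pointer?.assumingMemoryBound(to: UInt8.self) else { return }
        for i in stride(from: 0, to: length, by: 4) {
            base[i]     = 255   // R
            base[i + 1] = 0     // G
            base[i + 2] = 0     // B
            base[i + 3] = 255   // A
        }
    }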
If you need high-framerate, per-pixel drawing, your best bet is to do that entirely on the GPU. Use SKShader for that.
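For instance, a tiny per-pixel shader attached to a sprite might look like this (a sketch; v_tex_coord and gl_FragColor are the GLSL-style symbols SpriteKit exposes to custom shaders):

    import SpriteKit

    // Sketch: a per-pixel fragment shader attached to a sprite node.
    let shaderSource = """
    void main() {
        vec2 uv = v_tex_coord;
        gl_FragColor = vec4(uv.x, uv.y, 0.0, 1.0); // per-pixel gradient
    }
    """
    let spriteWithShader = SKSpriteNode(color: .white, size: CGSize(width: 200, height: 200))
    spriteWithShader.shader = SKShader(source: shaderSource)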
If your application is redrawing every frame, then you can use an offscreen buffer to dynamically update pixels, i.e. an SKMutableTexture. I'm not sure how slow it will be, though.

OpenGL ES 2 2D layered drawing

I am rewriting an iPad drawing application with OpenGL ES2 instead of Core Graphics.
I have already written a subclass of GLKView that can draw line segments, and I can just drag a GLKView into my storyboard and set its custom class. So far drawing works, but I also want to implement layers like in Photoshop and GIMP.
I thought of creating a separate GLKView for each layer and letting UIKit handle the blending and reordering, but that won't allow blend modes and may not perform well.
So far, I think doing everything in one GLKView is the best solution. I guess I will have to use some kind of buffer as a layer. My app should also be able to handle undo/redo, so maybe I will have to use textures to store previous data.
However, I am new to OpenGL, so my question is:
How should I implement layers?
Since the question is very broad, here is a broad and general answer that should give you some starting points for more detailed research.
Probably a good way would be to manage the individual layers as individual textures. With framebuffer objects (FBOs) you can render directly into a texture when drawing inside a layer; each texture then (more or less) persistently stores the image of a single layer. To combine the layers, you render each layer texture one over the other (in the appropriate order, whatever that may be) on a simple textured quad, with the blending functions you need.
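A rough Swift/OpenGL ES 2 sketch of the texture-per-layer plumbing (the names, sizes, and RGBA8 format are my assumptions, not a complete implementation):

    import OpenGLES

    // Sketch: one color texture per layer, attached to a shared FBO so
    // strokes can be rendered directly into the layer.
    func makeLayerTexture(width: GLsizei, height: GLsizei) -> GLuint {
        var tex: GLuint = 0
        glGenTextures(1, &tex)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        return tex
    }

    func drawInto(layer tex: GLuint, fbo: GLuint, draw: () -> Void) {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                               GLenum(GL_TEXTURE_2D), tex, 0)
        draw() // issue the layer's draw calls here
    }

    // Compositing: bind the default framebuffer, enable blending, and draw
    // one textured quad per layer, bottom to top, e.g. with
    //   glEnable(GLenum(GL_BLEND))
    //   glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))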

Example code for Resizing an image using DirectX

I know it is possible, and a lot faster than using GDI+. However, I haven't found any good example of using DirectX to resize an image and save it to disk. I have implemented this over and over in GDI+; that's not difficult. However, GDI+ does not use any hardware acceleration, and I was hoping to get better performance by tapping into the graphics card.
You can load the image as a texture, texture-map it onto a quad, and draw that quad at any size on the screen. That will do the scaling. Afterwards you can grab the pixel data from the screen and store it in a file or process it further.
It's easy. The basic texturing DirectX examples that come with the SDK can be adjusted to do just this.
However, it is slow. Not the rendering itself, but the transfer of pixel data from the screen to a memory buffer.
Imho it would be much simpler and faster to just write a little code that resizes an image using bilinear scaling from one buffer to another.
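Something along these lines; a minimal buffer-to-buffer bilinear resize, shown in Swift for consistency with the rest of this page (the algorithm is the same in any language), assuming tightly packed RGBA8 pixels:

    // Sketch: CPU bilinear resize from one RGBA8 buffer to another.
    func resizeBilinear(src: [UInt8], srcW: Int, srcH: Int,
                        dstW: Int, dstH: Int) -> [UInt8] {
        var dst = [UInt8](repeating: 0, count: dstW * dstH * 4)
        for y in 0..<dstH {
            let fy = Float(y) * Float(srcH - 1) / Float(max(dstH - 1, 1))
            let y0 = Int(fy), y1 = min(y0 + 1, srcH - 1)
            let ty = fy - Float(y0)
            for x in 0..<dstW {
                let fx = Float(x) * Float(srcW - 1) / Float(max(dstW - 1, 1))
                let x0 = Int(fx), x1 = min(x0 + 1, srcW - 1)
                let tx = fx - Float(x0)
                for c in 0..<4 {
                    // Sample the four neighbours and blend horizontally,
                    // then vertically.
                    let p00 = Float(src[(y0 * srcW + x0) * 4 + c])
                    let p10 = Float(src[(y0 * srcW + x1) * 4 + c])
                    let p01 = Float(src[(y1 * srcW + x0) * 4 + c])
                    let p11 = Float(src[(y1 * srcW + x1) * 4 + c])
                    let top = p00 + (p10 - p00) * tx
                    let bottom = p01 + (p11 - p01) * tx
                    dst[(y * dstW + x) * 4 + c] = UInt8(top + (bottom - top) * ty)
                }
            }
        }
        return dst
    }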
Do you really need to use DirectX? GDI+ does the job well for resizing images. In DirectX you don't really need to resize images, as most likely you'll be displaying them as textures. Since textures can only be applied to 3D objects (triangles/polygons/meshes), the size of the 3D object and the viewport determines the actual displayed image size. If you need to scale the texture within the 3D object, just adjust the texture coordinates or the texture matrix.
To manipulate the texture, you can use alpha blending, masking, and all sorts of texture manipulation techniques, if that's what you're looking for. To manipulate individual pixels as in GDI+, I still think GDI+ is the way to go. DirectX was never meant for image manipulation.
