iOS: is Core Graphics implemented on top of OpenGL?

I have found one diagram that shows Core Graphics implemented above OpenGL, and another that puts it alongside OpenGL. I would think Apple would be smart to give each equal access to the graphics hardware, but then again, I don't know much about the graphics chip they are using... maybe it is 3D all the way?
Does anybody here know the specifics?

Yes, on iOS Core Graphics (Quartz) appears to be layered on top of OpenGL ES for drawing that targets the screen, although not in a way that is explicitly exposed to developers.
Core Graphics takes vector elements (lines, arcs, etc.) and some raster ones (images) and processes them for display to the screen or for other forms of output (PDF files, printing, etc.). If the target is the screen on iOS, those vector elements will be hosted in a CALayer, either directly or through the backing layer of a UIView.
These Core Animation layers are effectively wrappers around rectangular textures on the GPU, which is how Core Animation can provide the smooth translation, scaling, and rotation of layers on even the original iPhone hardware. I can't find a reference for it right now, but at least one WWDC presentation states that OpenGL ES is used by Core Animation to communicate with the GPU to perform this hardware acceleration. Something similar can be observed on the new dual-GPU MacBook Pros, where the more powerful GPU kicks in when interacting with an application using Core Animation.
Because Core Graphics rasterizes the vector and raster elements into a CALayer when drawing to the screen, and a CALayer effectively wraps around an OpenGL ES texture, I would place OpenGL ES below Core Graphics on iOS, but only for the case where Core Graphics is rendering to the screen. The reason for the side-by-side placement in the hierarchy you saw may be due to three factors: on the Mac, not all views are layer-backed, so they may not be hardware accelerated in the same way; we can't really interact with the OpenGL ES backing of standard UI elements, so from a developer's point of view they are distinct concepts; and Core Graphics can be used to render to offscreen contexts, like PDF files or images.
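To make that layering concrete, here is a minimal sketch (Swift; the class name is illustrative) of the path described above: the vector drawing below is rasterized by Core Graphics on the CPU into the view's backing CALayer, and Core Animation then composites that layer, effectively a GPU texture, to the screen.

    import UIKit

    // Core Graphics rasterizes this vector drawing (CPU) into the view's
    // backing CALayer; Core Animation then composites the layer's contents
    // (effectively a GPU texture) to the screen.
    class CircleView: UIView {
        override func draw(_ rect: CGRect) {
            guard let context = UIGraphicsGetCurrentContext() else { return }
            context.setFillColor(UIColor.blue.cgColor)
            context.addEllipse(in: rect.insetBy(dx: 10, dy: 10))
            context.fillPath()
        }
    }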

Core Graphics and OpenGL are two completely separate systems. Look at the diagram at the linked source, which shows both listed at the same level. The description of Core Graphics lower on the same page also indicates that it is the lowest-level native drawing system.
Also see the About OpenGL ES page. It shows that OpenGL code runs directly on the GPU, and if you scroll down you will see that there are some things an application that uses OpenGL cannot do. Obviously, if CG were based on OpenGL, you would never be able to do those things.
Finally, look at the Drawing Model page for iOS. At the top, it compares OpenGL to native drawing, indicating that they work separately from each other.

Core Graphics and OpenGL are separate technologies. UIKit and AppKit are built on top of both, as well as on Core Animation. You can see the graphics technology stack in Apple's documentation (Core Animation Programming Guide).

As of iOS 9, Core Graphics on iOS is based on Apple's Metal framework, not OpenGL.

Related

Reasons to use CoreGraphics instead of SpriteKit?

SpriteKit runs efficiently on the GPU.
CoreGraphics runs on the CPU.
I can't think of any drawing that CoreGraphics can do that SpriteKit can't do.
Given this, can you name reasons why someone might still prefer CoreGraphics over SpriteKit for new apps?
It's not an "either/or" question, because there are disparities in their abilities.
Core Graphics can make very complex imagery, with incredibly sophisticated build ups of layers with differing effects and content. But most of all, it's very good at drawing shapes and lines at a quality that no other iOS framework matches. As Apple says:
Core Graphics... provides low-level, lightweight 2D rendering with unmatched output fidelity. You use this framework to handle path-based drawing, transformations, color management, offscreen rendering, patterns, gradients and shadings, image data management, image creation, and image masking, as well as PDF document creation, display, and parsing.
https://developer.apple.com/reference/coregraphics
You won't find PDF creation, image creation (texture creation, yes, but not image creation), complex gradients, color management, complex patterns, transforms, or offscreen rendering with a context in SpriteKit.
Similarly, you won't find the kind of anti-aliasing Core Graphics offers anywhere in SpriteKit.
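As one concrete example, here is a sketch of the kind of PDF output SpriteKit has no equivalent for (Swift; UIGraphicsPDFRenderer requires iOS 11+, and the destination URL is an illustrative placeholder):

    import UIKit

    // Render a one-page PDF with Core Graphics, something SpriteKit
    // simply cannot do. The drawing itself is ordinary CGContext work.
    func writeSamplePDF(to url: URL) throws {
        let pageRect = CGRect(x: 0, y: 0, width: 612, height: 792) // US Letter, in points
        let renderer = UIGraphicsPDFRenderer(bounds: pageRect)
        try renderer.writePDF(to: url) { context in
            context.beginPage()
            let cg = context.cgContext
            cg.setFillColor(UIColor.darkGray.cgColor)
            cg.fill(CGRect(x: 72, y: 72, width: 200, height: 100))
        }
    }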
If you want to integrate your image-making creations into UIKit applications, you're far better off using a blend of Core Graphics, Core Image, and Core Animation than attempting to use SpriteKit for that kind of image creation and animation in an app.
Use SpriteKit for games that suitably benefit from the focus on Sprites as the primary graphic content.
You might, for example, choose Core Animation and Core Graphics for games that focus on more dynamic content, or that demand higher-quality programmatically created content than you can get from SpriteKit alone. Or you could use Core Graphics to make content for sprites at a higher quality than you'll ever get out of SKShapeNode.
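A rough sketch of that blend (Swift; UIGraphicsImageRenderer requires iOS 10+, and the function name is illustrative): rasterize a smooth, anti-aliased shape with Core Graphics, then hand it to SpriteKit as texture content.

    import SpriteKit
    import UIKit

    // Draw an anti-aliased ring with Core Graphics, then wrap it in an
    // SKTexture so SpriteKit can composite it like any other sprite.
    func makeRingSprite(diameter: CGFloat) -> SKSpriteNode {
        let size = CGSize(width: diameter, height: diameter)
        let image = UIGraphicsImageRenderer(size: size).image { rendererContext in
            let cg = rendererContext.cgContext
            cg.setStrokeColor(UIColor.white.cgColor)
            cg.setLineWidth(4)
            cg.strokeEllipse(in: CGRect(origin: .zero, size: size).insetBy(dx: 4, dy: 4))
        }
        return SKSpriteNode(texture: SKTexture(image: image))
    }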
So... horses for courses.
The courses being, basically:
A) Sprites and Simple 2D rendering and drawing
B) All kinds of graphics, dynamic drawing and much higher demands in quality and output types
or
C) A bit of a blend of both

iOS: why does overriding drawRect resort to software rendering?

I'm not a huge fan of the iOS graphics APIs and their documentation, and I have been trying for a while to form a high-level view of the rendering process, but I only have bits and pieces of information. Essentially, I am trying to understand (again, at a high level):
1) The role of the Core Graphics and Core Animation APIs in the rendering pipeline, all the way from a CGContext to the front framebuffer.
2) And along the way (this has been the most confusing and least elaborated part of the documentation), which tasks are performed by the CPU and which by the GPU.
With Swift and Metal out, I'm hoping the APIs will be revisited.
Have you started with the WWDC videos? They cover many of these details extensively. For example, this year's Advanced Graphics & Animations for iOS Apps is a good starting point. The Core Image talks are generally useful as well (I haven't watched this year's yet). I highly recommend going back to previous years; they have had excellent discussions of the CPU/GPU pipeline. The WWDC 2012 session Core Image Techniques was very helpful. And of course, learning to use Instruments effectively is just as important as understanding the implementations.
Apple does not typically provide low-level implementation details in the main documentation. The implementation details are not interface promises, and Apple changes them from time to time to improve performance for the majority of applications. This can sometimes degrade performance on corner cases, which is one reason you should avoid being clever with performance tricks.
But the WWDC videos have exactly what you're describing, and will walk you through the rendering pipeline and how to optimize it. The recommendations they make tend to be very stable from release to release and device to device.
1) The role of the Core Graphics and Core Animation APIs in the rendering pipeline, all the way from a CGContext to the front framebuffer.
Core Graphics is a drawing library that implements the same primitives as PDF or PostScript. So you feed it bitmaps and various types of path and it produces pixels.
Core Animation is a compositor. It produces the screen display by compositing buffers (known as layers) from video memory. While compositing it may apply a transform, moving, rotating, adding perspective or doing something else to each layer. It also has a timed animation subsystem that can make timed adjustments to any part of that transform without further programmatic intervention.
UIKit wires things up so that you use Core Graphics to draw your view's contents into a layer whenever the contents themselves change; that work primarily involves the CPU. For things like animations and transitions, you then usually end up applying or scheduling compositing adjustments; that work primarily involves the GPU.
2) And along the way (this has been the most confusing and least elaborated part of the documentation), which tasks are performed by the CPU and which by the GPU.
Individual layer drawing: CPU
Transforming and compositing layers to build up the display: GPU
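A sketch of how that split looks in code (Swift; class and function names are illustrative): the draw(_:) override below is rasterized on the CPU only when the contents are invalidated, while the transform animation is handled frame by frame by the Core Animation compositor on the GPU.

    import UIKit

    // Individual layer drawing (CPU): Core Graphics rasterizes draw(_:)
    // into the view's backing layer only when the view is invalidated.
    class BadgeView: UIView {
        override func draw(_ rect: CGRect) {
            UIColor.red.setFill()
            UIBezierPath(ovalIn: rect).fill()
        }
    }

    // Compositing (GPU): the compositor animates the already-rasterized
    // layer; draw(_:) is not called again during the animation.
    func spin(_ view: UIView) {
        let animation = CABasicAnimation(keyPath: "transform.rotation.z")
        animation.toValue = CGFloat.pi * 2
        animation.duration = 1
        view.layer.add(animation, forKey: "spin")
    }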
iOS: why does overriding drawRect resort to software rendering?
It doesn't 'resort' to anything. The exact same pipeline is applied whether you wrote the relevant drawRect: or Apple did.
With Swift and Metal out, I'm hoping the APIs will be revisited.
Swift and Metal are completely orthogonal to this issue. The APIs are very well formed and highly respected. Your issues with them are — as you freely recognise — lack of understanding. There is no need to revisit them and Apple has given no indication that it will be doing so.

OpenGL ES 2 2D layered drawing

I am rewriting an iPad drawing application using OpenGL ES 2 instead of Core Graphics.
I have already written a subclass of GLKView that can draw line segments, and I can simply drag a GLKView into my storyboard and set its custom class. So far, drawing works, but I also want to implement layers like those in Photoshop and GIMP.
I thought of creating a separate GLKView for each layer and letting UIKit handle the blending and reordering, but that would not allow blend modes and might not perform well.
So far, I think doing everything in one GLKView is the best solution. I guess I will have to use some kind of buffer for each layer. My app should also be able to handle undo/redo, so maybe I will have to use textures to store previous data.
However, I am new to OpenGL, so my question is:
How should I implement layers?
Since the question is very broad, here is a broad and general answer that should give you some starting points for more detailed research.
Probably a good way would be to manage the individual layers as individual textures. With the use of framebuffer objects (FBOs) you can easily render directly into a texture for drawing inside the layers. Each texture would (more or less) persistently store the image of a single layer. For combining the layers you would then render each of the layer textures one over the other (in the appropriate order, whatever that may be) using a simple textured quad and with the blending functions you need.
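A minimal sketch of that texture-per-layer idea (Swift against OpenGL ES 2, assuming an EAGLContext is already current; all names are illustrative):

    import OpenGLES

    // Each layer gets a texture plus a framebuffer object (FBO), so
    // strokes can be rendered directly into the layer's texture.
    func makeLayer(width: GLsizei, height: GLsizei) -> (fbo: GLuint, texture: GLuint) {
        var texture: GLuint = 0
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)

        var fbo: GLuint = 0
        glGenFramebuffers(1, &fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                               GLenum(GL_TEXTURE_2D), texture, 0)
        return (fbo, texture)
    }

To draw into a layer, bind its FBO and render the stroke geometry; to composite, bind the default framebuffer, enable blending, and draw one textured quad per layer in back-to-front order.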

I've started using Stage3D. Which of these classes are usable in Stage3D?

Are these classes supported in Stage3D? Or do equivalents or similar classes exist?
flash.display.BitmapData;
flash.display.GraphicsSolidFill;
flash.display.GraphicsStroke;
flash.display.GraphicsPath;
flash.display.IGraphicsData;
flash.display.Shape;
flash.filters.BlurFilter;
flash.geom.ColorTransform;
Stage3D is an entirely different, fairly low-level beast. The classes you list there are all related to the traditional Flash DisplayList, which is a CPU-driven rendering engine, so no, they don't exist there, per se. But there's much more to it than that:
If you're using the raw Stage3D APIs (example tutorial here), then it feels very much like OpenGL programming. You're loading vertex buffers, index buffers, and textures into the GPU, and defining vertex and fragment shader programs in an assembly language called AGAL. All this gets you a cross-platform, hardware-accelerated application that's probably very fast, but it's very different from the traditional Flash DisplayList. Can you get gradients, filters, and vector shapes? Sure, but probably with custom shaders and the like, not using those classes.
In some applications, it makes sense to use the traditional DisplayList for interactive UI controls on top of the Stage3D hardware accelerated backdrop. The DisplayList sits on top of the Stage3D plane, so this is entirely possible.
However, if such low-level 3D programming is not what you're interested in, you can choose to build on top of a framework. There are many Stage3D frameworks - some are intended for creating 3D applications, others are intended for 2D (but using the underlying 3D acceleration for speed). Adobe has a list of these frameworks here.
For example, Starling is a Stage3D framework that's intended to mimic the traditional Flash DisplayList, so it'll get you close to some of the classes you've mentioned above - check out their demo and API docs for specifics.
Another technique that Flash enables is blitting, which generates bitmaps on the fly for the 3D pipeline. You can draw any Flash DisplayObjects you like (shapes, drawn gradients, filtered objects, whatever) into Bitmaps (that's the blit), then push those Bitmaps into the 3D acceleration framework. You can blit individual objects separately, or blit the entire stage into one full-screen texture using this technique. But you have to be careful about how often and how much you upload new textures to the GPU, because this can affect performance significantly. In fact, a significant performance consideration in GPU programming is the ability to batch several bitmaps into a single texture.
So there are many facets to consider when thinking about transitioning from the traditional DisplayList to Stage3D. Hope this helps. :)

Confusion regarding Quartz 2D, Core Graphics, Core Animation, and Core Image

I am working on a project that requires some image processing. I asked a question about it and got a very good solution; here is the link: create whole new image in iOS by selecting different properties.
But now I want to learn this in more detail, and I am confused about where to start: Quartz 2D, Core Animation, Core Graphics, or Core Image.
Apple's documentation says this about Quartz 2D:
The Quartz 2D API is part of the Core Graphics framework, so you may see Quartz referred to as Core Graphics or, simply, CG.
and this about Core Graphics:
The Core Graphics framework is a C-based API that is based on the Quartz advanced drawing engine.
This is confusing: how do the two relate to each other?
Core Animation also covers concepts such as coordinates, bounds, and frames, which are likewise required for drawing images, and Core Image was introduced in iOS 5.
Where should I start learning, and in what sequence should I learn all of these?
Quartz and Core Graphics are effectively synonymous. I tend to avoid using "Quartz" because the term is very prone to confusion (indeed, the framework that includes Core Animation is "QuartzCore," confusing matters further).
I would say:
Learn Core Graphics (CoreGraphics.framework) if you need high performance vector drawing (lines, rectangles, circles, text, etc.), perhaps intermingled with bitmap/raster graphics with simple modifications (e.g. scaling, rotation, borders, etc.). Core Graphics is not particularly well suited for more advanced bitmap operations (e.g. color correction). It can do a lot in the way of bitmap/raster operations, but it's not always obvious or straightforward. In short, Core Graphics is best for "Illustrator/Freehand/OmniGraffle" type uses.
Learn Core Animation (inside QuartzCore.framework) if, well, you need to animate content. Basic animations (such as moving a view around the screen) can be accomplished entirely without Core Animation, using basic UIView functionality, but if you want to do fancier animation, Core Animation is your friend. Somewhat unintuitively, Core Animation is also home to the CALayer family of classes, which in addition to being animatable allow you to do some more interesting things, like quick (albeit poorly performing) view shadows and 3D transforms (giving you what might be thought of as "poor man's OpenGL"). But it's mainly used for animating content (or content properties, such as color and opacity).
Learn Core Image (inside QuartzCore.framework) if you need high performance, pixel-accurate image processing. This could be everything from color correction to lens flares to blurs and anything in between. Apple publishes a filter reference that enumerates the various pre-built Core Image filters that are available. You can also write your own, though this isn't necessarily for the faint of heart. In short, if you need to implement something like "[pick your favorite photo editor] filters" then Core Image is your go-to.
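As a tiny sketch of that Core Image workflow (Swift; the function name is illustrative), here is a prebuilt filter applied to a UIImage:

    import CoreImage
    import UIKit

    // Run the prebuilt CISepiaTone filter over a UIImage. The guards
    // keep the sketch safe; real code might report the failures.
    func sepiaVersion(of image: UIImage) -> UIImage? {
        guard let input = CIImage(image: image),
              let filter = CIFilter(name: "CISepiaTone") else { return nil }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(0.8, forKey: kCIInputIntensityKey)
        let context = CIContext() // GPU-backed where available
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }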
Does that clarify matters?
Core Animation is a technology that relies much more on OpenGL, which means it's GPU-bound.
Core Graphics, on the other hand, uses the CPU for rendering. It's a lot more precise (pixel-wise) than Core Animation, but it will use your CPU.
