SpriteKit runs efficiently on the GPU.
CoreGraphics runs on the CPU.
I can't think of any drawing that CoreGraphics can do that SpriteKit can't do.
Given this, can you name reasons why someone might still prefer CoreGraphics over SpriteKit for new apps?
It's not an "either/or" question, because there are disparities in their abilities.
Core Graphics can make very complex imagery, with incredibly sophisticated build-ups of layers with differing effects and content. But most of all, it's very good at drawing shapes and lines at a quality that no other iOS framework matches. As Apple says:
Core Graphics... provides low-level, lightweight 2D rendering with
unmatched output fidelity. You use this framework to handle path-based
drawing, transformations, color management, offscreen rendering,
patterns, gradients and shadings, image data management, image
creation, and image masking, as well as PDF document creation,
display, and parsing.
https://developer.apple.com/reference/coregraphics
You won't find PDF creation, image creation (texture creation, yes, but not image creation), complex gradients, color management, complex patterns, transforms, or offscreen rendering with a context in SpriteKit.
Similarly, you won't find the kind of anti-aliasing Core Graphics offers anywhere in SpriteKit.
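To make one of those gaps concrete, here is a minimal sketch of PDF creation through UIKit's wrapper around Core Graphics; the page size and the shape drawn are arbitrary example values, and SpriteKit has no equivalent for any of this:

    import UIKit

    // A minimal sketch: vector drawing straight into a PDF, something SpriteKit can't do.
    // The page size (612 x 792 points, US Letter) is just an example value.
    let renderer = UIGraphicsPDFRenderer(bounds: CGRect(x: 0, y: 0, width: 612, height: 792))
    let pdfData = renderer.pdfData { context in
        context.beginPage()
        let cg = context.cgContext
        cg.setStrokeColor(UIColor.black.cgColor)
        cg.setLineWidth(2)
        cg.addEllipse(in: CGRect(x: 100, y: 100, width: 200, height: 200))
        cg.strokePath()
    }
    // pdfData can now be written to disk, printed, or shared.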
If you want to integrate your creations from image making into UIKit applications, you're far better off using a blend of Core Graphics, Core Image and Core Animation than even attempting to use SpriteKit for that kind of image creation and animation in an app.
Use SpriteKit for games that suitably benefit from the focus on Sprites as the primary graphic content.
You might, for example, choose Core Animation and Core Graphics for games that focus on more dynamic content or a demand for higher quality programmatically created content than you can get from just SpriteKit. Or you could use Core Graphics to make content for sprites at a higher quality than you'll ever get out of SKShapeNode.
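As a rough illustration of that last point, here is a minimal sketch of drawing an anti-aliased shape with Core Graphics and handing it to SpriteKit as a texture; the size, line width and colors are arbitrary example values:

    import SpriteKit
    import UIKit

    // A minimal sketch: draw an anti-aliased ring with Core Graphics, then wrap it in an SKTexture.
    let size = CGSize(width: 128, height: 128)
    let image = UIGraphicsImageRenderer(size: size).image { context in
        let cg = context.cgContext
        cg.setStrokeColor(UIColor.white.cgColor)
        cg.setLineWidth(6)
        cg.strokeEllipse(in: CGRect(origin: .zero, size: size).insetBy(dx: 8, dy: 8))
    }
    let ringTexture = SKTexture(image: image)
    let ringSprite = SKSpriteNode(texture: ringTexture)  // crisper than SKShapeNode output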
So... horses for courses.
The courses being, basically:
A) Sprites and simple 2D rendering and drawing
B) All kinds of graphics, dynamic drawing and much higher demands in quality and output types
or
C) A bit of a blend of both
Related
I am trying to put together a ray tracing routine, and the width/shade of every pixel in a single ray will change over the length of the line. Is there a way in SpriteKit to draw a single pixel on the screen? Or should I be doing this using UIImage?
SpriteKit isn't a pixel drawing API. It's more of a "moving pictures around on the screen" API. There are a couple of cases, though, where it makes sense to do custom drawing, and SpriteKit (as of iOS 8 and OS X 10.10) has a few facilities for this.
If you want to create custom art to apply to sprites, with that art being mostly static (that is, not needing to be redrawn frequently as SpriteKit animates the scene), just use the drawing API of your choice — Core Graphics, OpenGL, emacs, whatever — to create an image. Then you can stick that image in an SKTexture object to apply to sprites in your scene.
To directly munge the bits inside a texture, use SKMutableTexture. This doesn't really provide drawing facilities — you just get to work with the raw image data. (And while you could layer drawing tools on top of that — for example, by creating a CG image context from the data and rewriting the data with CG results — that'd slow you down too much to keep up with the animation framerate.)
If you need high-framerate, per-pixel drawing, your best bet is to do that entirely on the GPU. Use SKShader for that.
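A minimal sketch of that last option; the shader math here is just an illustrative placeholder, while v_tex_coord and u_time are symbols SpriteKit provides to custom shaders:

    import SpriteKit

    // A minimal sketch: a custom fragment shader evaluated per pixel on the GPU.
    let source = """
    void main() {
        float shade = 0.5 + 0.5 * sin(u_time + v_tex_coord.x * 20.0);
        gl_FragColor = vec4(vec3(shade), 1.0);
    }
    """
    let node = SKSpriteNode(color: .white, size: CGSize(width: 256, height: 256))
    node.shader = SKShader(source: source)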
If your application is redrawing every frame, then you can use an offscreen buffer to dynamically update pixels, aka SKMutableTexture. I'm not sure how slow it will be.
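For what it's worth, here is a minimal sketch of updating an SKMutableTexture's raw bytes, assuming the default RGBA8888 layout; the pattern written into the pixels is just a placeholder:

    import SpriteKit

    // A minimal sketch: poke raw RGBA bytes in an SKMutableTexture.
    let texture = SKMutableTexture(size: CGSize(width: 256, height: 256))
    texture.modifyPixelData { pointer, length in
        guard let base = pointer else { return }
        let pixels = base.bindMemory(to: UInt8.self, capacity: length)
        for i in stride(from: 0, to: length, by: 4) {
            pixels[i]     = 255               // red
            pixels[i + 1] = UInt8(i % 256)    // green (placeholder pattern)
            pixels[i + 2] = 0                 // blue
            pixels[i + 3] = 255               // alpha
        }
    }
    let spriteNode = SKSpriteNode(texture: texture)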
I am rewriting an iPad drawing application with OpenGL ES2 instead of Core Graphics.
I have already written a subclass of GLKView that can draw line segments, and I can just drag a GLKView into my storyboard and set it to my custom class. So far, drawing works, but I also want to implement layers like in Photoshop and GIMP.
I thought of creating one GLKView per layer and letting UIKit handle the blending and reordering, but that won't allow blend modes and may not perform well.
So far, I think doing everything in one GLKView is the best solution. I guess I will have to use some kind of buffer as a layer. My app should also be able to handle undo/redo, so maybe I will have to use textures to store previous data.
However, I am new to OpenGL, so my question is:
How should I implement layers?
Since the question is very broad, here is a broad and general answer that should give you some starting points for more detailed researches.
Probably a good way would be to manage the individual layers as individual textures. With the use of framebuffer objects (FBOs) you can easily render directly into a texture for drawing inside the layers. Each texture would (more or less) persistently store the image of a single layer. For combining the layers you would then render each of the layer textures one over the other (in the appropriate order, whatever that may be) using a simple textured quad and with the blending functions you need.
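A minimal sketch of that setup in OpenGL ES 2 terms, written here in Swift; the size values and variable names are placeholders, and the drawing and compositing passes are only described in comments:

    import OpenGLES

    // A minimal sketch: one layer = one texture attached to a framebuffer object (FBO).
    // layerWidth/layerHeight are placeholder values for the canvas size.
    let layerWidth: GLsizei = 1024
    let layerHeight: GLsizei = 1024

    var layerTexture: GLuint = 0
    glGenTextures(1, &layerTexture)
    glBindTexture(GLenum(GL_TEXTURE_2D), layerTexture)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, layerWidth, layerHeight, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    var layerFramebuffer: GLuint = 0
    glGenFramebuffers(1, &layerFramebuffer)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), layerFramebuffer)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), layerTexture, 0)

    // To draw into the layer: bind layerFramebuffer, set the viewport, render the strokes.
    // To composite: bind the GLKView's default framebuffer and draw each layer texture as a
    // textured quad, back to front, with the blend function you want (e.g. via glBlendFunc).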
Are these classes supported in Stage3D? Or are there equivalents or similar classes that exist?
flash.display.BitmapData;
flash.display.GraphicsSolidFill;
flash.display.GraphicsStroke;
flash.display.GraphicsPath;
flash.display.IGraphicsData;
flash.display.Shape;
flash.filters.BlurFilter;
flash.geom.ColorTransform;
Stage3D is an entirely different, fairly low-level beast. Those classes you list there are all related to the traditional Flash DisplayList, which is a CPU-driven rendering engine, so no, they don't exist, per se. But there's much more to it than that:
If you're using the raw Stage3D APIs (example tutorial here), then it feels very much like OpenGL programming. You're loading vertex buffers, index buffers, and textures into the GPU, and defining vertex and fragment shader programs in an assembly language called AGAL. All this gets you a cross-platform, hardware-accelerated application that's probably very fast, but it's very different from the traditional Flash DisplayList. Can you get gradients, filters and vector shapes? Sure, but probably with custom shaders and such, not by using those classes.
In some applications, it makes sense to use the traditional DisplayList for interactive UI controls on top of the Stage3D hardware accelerated backdrop. The DisplayList sits on top of the Stage3D plane, so this is entirely possible.
However, if such low-level 3D programming is not what you're interested in, you can choose to build on top of a framework. There are many Stage3D frameworks - some are intended for creating 3D applications, others are intended for 2D (but using the underlying 3D acceleration for speed). Adobe has a list of these frameworks here.
For example, Starling is a Stage3D framework that's intended to mimic the traditional Flash DisplayList, so it'll get you close to some of the classes you've mentioned above - check out their demo and API docs for specifics.
Another technique that Flash enables is blitting, which generates Bitmaps for 3D acceleration on the fly. You can draw into Bitmaps (aka blit) any Flash DisplayObjects you like (Shapes, drawn gradients, with filters, whatever), then push those Bitmaps into the 3D acceleration framework. You can blit individual objects separately, or blit the entire stage into one full-screen texture using this technique. But you have to be careful how often and how much you upload new textures into the GPU, because this can affect performance significantly. In fact, a significant performance consideration in GPU programming is the ability to batch several bitmaps into a single texture.
So there are many facets to consider when thinking about transitioning from the traditional DisplayList to Stage3D. Hope this helps. :)
I am working on a project which requires some image processing. I also asked a question about it and got a very good solution; here is the link: create whole new image in iOS by selecting different properties.
But now I want to learn this in more detail, and I am confused about where to start: Quartz 2D, Core Animation, Core Graphics, or Core Image?
Apple's documents say, regarding Quartz 2D, that
The Quartz 2D API is part of the Core Graphics framework, so you may
see Quartz referred to as Core Graphics or, simply, CG.
And the Apple docs say about Core Graphics that
The Core Graphics framework is a C-based API that is based on the
Quartz advanced drawing engine.
This is confusing; how do the two relate to each other?
Core Animation contains all the concepts of coordinates, bounds, frames, etc., which are also required for drawing images.
And Core Image was introduced in iOS 5.
Where should I start learning, or in what sequence should I learn all of these?
Quartz and Core Graphics are effectively synonymous. I tend to avoid using "Quartz" because the term is very prone to confusion (indeed, the framework that includes Core Animation is "QuartzCore," confusing matters further).
I would say:
Learn Core Graphics (CoreGraphics.framework) if you need high performance vector drawing (lines, rectangles, circles, text, etc.), perhaps intermingled with bitmap/raster graphics with simple modifications (e.g. scaling, rotation, borders, etc.). Core Graphics is not particularly well suited for more advanced bitmap operations (e.g. color correction). It can do a lot in the way of bitmap/raster operations, but it's not always obvious or straightforward. In short, Core Graphics is best for "Illustrator/Freehand/OmniGraffle" type uses.
Learn Core Animation (inside QuartzCore.framework) if, well, you need to animate content. Basic animations (such as moving a view around the screen) can be accomplished entirely without Core Animation, using basic UIView functionality, but if you want to do fancier animation, Core Animation is your friend. Somewhat unintuitively, Core Animation is also home to the CALayer family of classes, which in addition to being animatable allow you to do some more interesting things, like quick (albeit poorly performing) view shadows and 3D transforms (giving you what might be thought of as "poor man's OpenGL"). But it's mainly used for animating content (or content properties, such as color and opacity).
Learn Core Image (CoreImage.framework on iOS) if you need high-performance, pixel-accurate image processing. This could be everything from color correction to lens flares to blurs and anything in between. Apple publishes a filter reference that enumerates the various pre-built Core Image filters that are available. You can also write your own, though this isn't necessarily for the faint of heart. In short, if you need to implement something like "[pick your favorite photo editor] filters" then Core Image is your go-to.
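For instance, here is a minimal Core Image sketch; the input image, the filter choice and the radius are placeholder assumptions, not a prescription:

    import CoreImage
    import UIKit

    // A minimal sketch: run a built-in Gaussian blur filter over a UIImage.
    // `sourceImage` stands in for whatever UIImage you already have.
    func blurred(_ sourceImage: UIImage, radius: Double = 8.0) -> UIImage? {
        guard let input = CIImage(image: sourceImage),
              let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)
        let context = CIContext()  // performs the actual (GPU-backed) rendering
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }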
Does that clarify matters?
Core Animation is a technology that relies a lot more on OpenGL, which means it's GPU-bound.
Core Graphics on the other hand uses the CPU for rendering. It's a lot more precise (pixel-wise) than Core Animation, but will use your CPU.
I have found one diagram that shows Core Graphics implemented above OpenGL, and another that puts it alongside OpenGL. I would think that Apple would be smart to give each equal access to the graphics hardware but then again, I don't know much about the graphics chip they are using... maybe it is 3D all the way?
Does anybody here know the specifics?
Yes, on iOS Core Graphics (Quartz) appears to be layered on top of OpenGL ES for drawing that targets the screen, although not in an explicit way that we have access to.
Core Graphics takes vector elements (lines, arcs, etc.) and some raster ones (images) and processes them for display to the screen or for other forms of output (PDF files, printing, etc.). If the target is the screen on iOS, those vector elements will be hosted in a CALayer, either directly or through the backing layer of a UIView.
These Core Animation layers are effectively wrappers around rectangular textures on the GPU, which is how Core Animation can provide the smooth translation, scaling, and rotation of layers on even the original iPhone hardware. I can't find a reference for it right now, but at least one WWDC presentation states that OpenGL ES is used by Core Animation to communicate with the GPU to perform this hardware acceleration. Something similar can be observed on the new dual-GPU MacBook Pros, where the more powerful GPU kicks in when interacting with an application using Core Animation.
Because Core Graphics rasterizes the vector and raster elements into a CALayer when drawing to the screen, and a CALayer effectively wraps around an OpenGL ES texture, I would place OpenGL ES below Core Graphics on iOS, but only for the case where Core Graphics is rendering to the screen. The reason for the side-by-side placement in the hierarchy you saw may be due to three factors: on the Mac, not all views are layer-backed, so they may not be hardware accelerated in the same way; we can't really interact with the OpenGL ES backing of standard UI elements, so from a developer's point of view they are distinct concepts; and Core Graphics can be used to render to offscreen contexts, like PDF files or images.
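To make that path concrete, here is a minimal sketch of the usual route from Core Graphics into a layer: a UIView subclass whose draw(_:) output becomes the contents of its backing CALayer. The class name and the drawing are made up for illustration:

    import UIKit

    // A minimal sketch: Core Graphics drawing whose rasterized result is stored in the
    // view's backing CALayer, which Core Animation then composites on the GPU.
    class RingView: UIView {
        override func draw(_ rect: CGRect) {
            guard let context = UIGraphicsGetCurrentContext() else { return }
            context.setStrokeColor(UIColor.blue.cgColor)
            context.setLineWidth(4)
            context.strokeEllipse(in: bounds.insetBy(dx: 4, dy: 4))
        }
    }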
Core Graphics and OpenGL are two completely separate systems. Look at the linked diagram (source), which shows both listed at the same level. The description of Core Graphics lower on the same page also indicates that it is the lowest-level native drawing system.
Also see the About OpenGL ES page. It shows that the OpenGL code runs directly on the GPU, and if you scroll down you will see that there are some things which cannot be done with an application that uses OpenGL. Obviously, if CG was based on OpenGL, you wouldn't be able to do those things ever.
Finally, look at the Drawing Model page for iOS. At the top, it compares OpenGL to native drawing, indicating that they work separately from each other.
Core Graphics and OpenGL are separate technologies. UIKit and AppKit are built on top of both, as well as on Core Animation. You can see the graphics technology stack in Apple's documentation (the Core Animation Programming Guide).
As of iOS 9, Core Graphics on iOS is based on Apple's Metal framework, not OpenGL.