How to draw gizmos in iOS with MetalKit & Metal? - ios

Is there any way to integrate SceneKit object gizmos in Metal, or do I have to implement them from scratch? I want to rotate, scale, and translate objects in a simple way with a Blender-style gizmo. I've researched some libraries, but OpenGL has a lot more options than Metal.
Thanks.

Unfortunately you can't integrate SceneKit object gizmos, because there are no bridging APIs. Gizmos themselves are just primitives, so I don't think it will be difficult for you to write your own solution.
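To illustrate the "gizmos are just primitives" point, here is a minimal sketch of a translate gizmo assembled from cylinder and cone primitives. It uses SceneKit for brevity; in Metal you would submit the same arrow geometry as vertex buffers. The function names and dimensions are illustrative, not from any library.

```swift
import SceneKit
import UIKit

// One colored arrow (shaft + tip) per axis, Blender-style.
func makeAxisArrow(color: UIColor, rotation: SCNVector4) -> SCNNode {
    let shaft = SCNNode(geometry: SCNCylinder(radius: 0.02, height: 1.0))
    shaft.position = SCNVector3(0, 0.5, 0)

    let tip = SCNNode(geometry: SCNCone(topRadius: 0, bottomRadius: 0.06, height: 0.2))
    tip.position = SCNVector3(0, 1.1, 0)

    let arrow = SCNNode()
    for part in [shaft, tip] {
        part.geometry?.firstMaterial?.diffuse.contents = color
        arrow.addChildNode(part)
    }
    arrow.rotation = rotation
    return arrow
}

// Three arrows rotated onto the X, Y, and Z axes form the gizmo.
func makeTranslateGizmo() -> SCNNode {
    let gizmo = SCNNode()
    gizmo.addChildNode(makeAxisArrow(color: .green, rotation: SCNVector4(0, 0, 0, 0)))         // Y (up)
    gizmo.addChildNode(makeAxisArrow(color: .red,   rotation: SCNVector4(0, 0, 1, -.pi / 2)))  // X
    gizmo.addChildNode(makeAxisArrow(color: .blue,  rotation: SCNVector4(1, 0, 0, .pi / 2)))   // Z
    return gizmo
}
```

For interaction you would hit-test against each arrow and map the drag delta onto the corresponding axis.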

Related

Can we develop LiDAR apps using ARKit with SceneKit?

I have read on many forums that if we want to develop a LiDAR application, we need to use RealityKit instead of SceneKit. I am in the middle of developing Apple's LiDAR tutorial, but instead of using RealityKit, I used SceneKit. Now I have a problem, since SceneKit doesn't offer the sceneUnderstanding feature for rendering graphics. So I basically want to know:
Can't we develop LiDAR applications using ARKit with SceneKit?
Can we achieve sceneUnderstanding feature using SceneKit?
Can we develop LiDAR apps without using sceneUnderstanding?
Really appreciate your answers and comments. Thank you.
You can use scene understanding with any renderer, but only RealityKit comes with built-in integration for this feature.
The ARWorldTrackingConfiguration comes with a sceneReconstruction flag that can be enabled.
Then ARKit creates ARMeshAnchor instances for you, delivered through the ARSessionDelegate and ARSCNViewDelegate methods.
However, because SceneKit does not come with out-of-the-box support for these features, you would have to build the visualization or physics interaction yourself based on the ARMeshAnchor properties.
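The steps above can be sketched as follows: enable scene reconstruction on the configuration, then convert each incoming ARMeshAnchor's Metal buffers into SceneKit geometry by hand (the class name and wireframe styling are illustrative choices, not required).

```swift
import ARKit
import SceneKit

final class MeshViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        let config = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh  // LiDAR devices only
        }
        sceneView.session.run(config)
    }

    // Called when ARKit adds an anchor; for ARMeshAnchor we attach a wireframe.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        let mesh = meshAnchor.geometry

        // Wrap ARKit's Metal vertex buffer in a SceneKit geometry source.
        let vertexSource = SCNGeometrySource(
            buffer: mesh.vertices.buffer,
            vertexFormat: mesh.vertices.format,
            semantic: .vertex,
            vertexCount: mesh.vertices.count,
            dataOffset: mesh.vertices.offset,
            dataStride: mesh.vertices.stride)

        // Copy the face index buffer into a geometry element.
        let faceData = Data(
            bytes: mesh.faces.buffer.contents(),
            count: mesh.faces.buffer.length)
        let element = SCNGeometryElement(
            data: faceData,
            primitiveType: .triangles,
            primitiveCount: mesh.faces.count,
            bytesPerIndex: mesh.faces.bytesPerIndex)

        let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
        geometry.firstMaterial?.fillMode = .lines  // draw as wireframe
        node.addChildNode(SCNNode(geometry: geometry))
    }
}
```

You would also want to refresh the geometry in the corresponding didUpdate delegate method, since ARKit keeps revising the reconstructed mesh.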

iOS - Combining SpriteKit and Metal

Is it possible to combine SpriteKit with Metal? If it is, how could one combine Metal particles and SKNodes in a physics world so that they collide with each other? What's the usual approach for this kind of requirement?
Thanks
They are two totally different technologies. Sprite Kit is a framework that abstracts all of the rendering work for you as well as provides you with a built-in physics engine. Whereas Metal is purely a low-level GPU-accelerated graphics API which gives you complete control over the rendering process. It is similar to OpenGL ES but with much less overhead.
Sprite Kit will use Metal (on eligible devices) to render your scene. You don't need to do a single thing because Sprite Kit handles all rendering behind-the-scenes.
You don't combine them, they are two totally different frameworks. If you are looking to add physics to Metal then you will either need to write your own physics engine or use an already existing engine like Box2D (which I believe Sprite Kit uses internally).
This appears to be possible now using SKRenderer which allows you to mix SpriteKit and Metal (by the looks of it adding SpriteKit to Metal and vice versa).
It's iOS 11+, macOS 10.13+ and tvOS 11+.
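A minimal sketch of the SKRenderer approach: the renderer draws an SKScene into a command buffer and render pass that you own, so SpriteKit content can be layered with your own Metal draws. The function name and parameters are illustrative; in a real app you would create the SKRenderer once and reuse it each frame.

```swift
import SpriteKit
import Metal

// Draw a SpriteKit scene inside your own Metal frame (iOS 11+).
func drawSpriteKitOverlay(scene: SKScene,
                          device: MTLDevice,
                          commandBuffer: MTLCommandBuffer,
                          renderPass: MTLRenderPassDescriptor,
                          time: TimeInterval,
                          viewport: CGRect) {
    let renderer = SKRenderer(device: device)  // in practice, create once and keep
    renderer.scene = scene
    renderer.update(atTime: time)              // advance actions and physics
    renderer.render(withViewport: viewport,
                    commandBuffer: commandBuffer,
                    renderPassDescriptor: renderPass)
}
```

Because you control the render pass, you can draw your Metal particles first and the SpriteKit scene on top (or vice versa) within the same frame.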

Developing Shaders With SpriteKit

I've read that one of the drawbacks of SpriteKit is that you're unable to develop shaders if you use it.
However, I read a post here on SO that suggests otherwise:
How to apply full-screen SKEffectNode for post-processing in SpriteKit
Can you develop your own shaders if you decide to use SpriteKit?
Thanks
It is not supported in iOS 7, but iOS 8 will support custom shaders. For more information, view the pre-release documentation of SKShader.
An SKShader object holds a custom OpenGL ES fragment shader. Shader objects are used to customize the drawing behavior of many different kinds of nodes in Sprite Kit.
Sprite Kit does not provide an interface for using custom OpenGL shaders. The SKEffectNode class lets you use Core Image filters to post-process parts of a Sprite Kit scene, though. Core Image provides a number of built-in filters that might do some of what you're after, and on OS X you can create custom filter kernels using a language similar to GLSL.
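On iOS 8 and later, attaching a custom fragment shader is a few lines. A minimal sketch (the tint effect and the asset name "character" are made up for illustration; `u_texture` and `v_tex_coord` are SKShader's built-in symbols):

```swift
import SpriteKit

// A GLSL-style fragment shader that samples the sprite's texture
// and shifts it toward red.
let shaderSource = """
void main() {
    vec4 color = texture2D(u_texture, v_tex_coord);
    gl_FragColor = vec4(color.r, color.g * 0.5, color.b * 0.5, color.a);
}
"""

let sprite = SKSpriteNode(imageNamed: "character")  // hypothetical asset
sprite.shader = SKShader(source: shaderSource)
```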

OpenGL vs Cocos2d: What to choose?

I understand that cocos2d is a really simple API, and that I can use it to make simple or large 2D (and sometimes even 3D) games/applications. I also understand that OpenGL is more complicated, a lower-level API, etc.
Question: What is better for implementing 2D/3D games? Why do we need to learn OpenGL if we have simple frameworks like cocos2d? What you can do with OpenGL that you can't do with cocos2d?
Thanks in advance!
What is better for implementing 2D/3D games?
Hard to tell, but a higher-level API is always there to make things easier for you. For example, say you are writing a 2D shoot'em up. You will likely use a game loop, you will want to use sprites and make them move on the screen, and you may want animations like explosions. You'll end up writing your own higher-level API to do those things. Cocos2D has already solved those problems for you, as any other framework should have.
Why do we need to learn OpenGL if we have simple frameworks like cocos2d?
In case you'd like to customize the standard behaviour of a framework, especially the drawing part, you should get into OpenGL. If there is something you'd like to have that doesn't come out of the box, you may find yourself reimplementing a base framework class. For example, look at the shaders used in Cocos2D 2.0. If you'd like some special blending mode, like a tinting effect, you won't get it for free. There is a colour attribute on CCSprite, but it may not produce the result you're expecting. So you'll have to write your own shader and plug it into the sprite you'd like to display differently.
What you can do with OpenGL that you can't do with cocos2d?
This comparison doesn't really work, since cocos2d uses OpenGL for the drawing part to build up that higher-level API and make your life easier as a game developer.
Cocos2d is a wrapper around the 2D features of OpenGL (according to http://www.cocos2d-iphone.org/about). Under the hood it uses OpenGL ES to implement its features. This is good because it means there is minimal performance overhead, so you can start using its simpler API without first committing to the definitely steeper learning path of OpenGL.
It only has strong 2D support, however, and if you plan to write 3D games later you lose all the benefits of Cocos2d: why would you rebuild a 3D rendering engine on top of a 2D framework that under the hood already uses a very capable 3D API? You'd lose performance to a lot of unnecessary work.
So the simple answer is: for 2D, Cocos2d; for 3D, OpenGL.
If you want to start OpenGL ES, this is a very good tutorial for beginners: http://iphonedevelopment.blogspot.it/2009/05/opengl-es-from-ground-up-table-of.html

When to use OpenGL at iOS

I am planning to develop a game which is actually a 2D collect'em up style game. There will not be much real-time graphics synthesis; mostly there will be sprites floating around.
I know what I'm asking does not have a "100% right" answer, but I just want to hear opinions: when do you recommend using OpenGL, and when would you go straight to moving NSObjects on the screen with animations?
IMO it is always better to use OpenGL for graphics: moving, resizing, and other transformations. And I suggest not starting from scratch but using an existing solution like http://www.cocos2d-iphone.org/
OpenGL is a 3D API, so if you are writing a 3D app it is your only rational choice. If you are writing a 2D app, you would be better off using Core Graphics, since 2D animations are well supported and it is really quite easy to work with, much easier than OpenGL at any rate. For example, rendering text is easy in a 2D app using UIKit and Core Graphics, but it is not trivial in OpenGL.
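As an illustration of how little code 2D text rendering takes with UIKit and Core Graphics (the font size and colors here are arbitrary choices), compare this with building a glyph atlas by hand in OpenGL:

```swift
import UIKit

// Render a string into a UIImage using UIKit's Core Graphics-backed renderer.
func renderTextImage(_ text: String) -> UIImage {
    let attributes: [NSAttributedString.Key: Any] = [
        .font: UIFont.systemFont(ofSize: 24),
        .foregroundColor: UIColor.white
    ]
    // Measure the string, then draw it into an image context of that size.
    let size = (text as NSString).size(withAttributes: attributes)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        (text as NSString).draw(at: .zero, withAttributes: attributes)
    }
}
```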

Resources