Blender file to Xcode - iOS

We want to make a 3D game for the Apple iPad and we are thinking about the possibility of importing 3D models from Blender into Xcode.
Is there a way to do that?
We want to use OpenGL ES 2.0.

Xcode isn't a game engine or 3D SDK. You can't 'import' Blender files into Xcode. What you can do is take your 3D assets and use them within your app, either directly through OpenGL (rather low level) or via a 3D engine such as Unity (easier).
Either way, there are a number of questions already on Stackoverflow that you might find useful:
Opengl ES how to import a 3D model and map textures to it on runtime
Iphone + OpenGL ES + Blender Model: Rotation by Touch
Choosing 3D Engine for iOS in C++
...I highly recommend you take a look at what options are out there, decide on the best way to implement your 3D game (be it raw OpenGL or an engine), and go from there.

Related

Can we develop LiDAR apps using ARKit with SceneKit?

I have read on many forums that if we want to develop a LiDAR application, we need to use RealityKit instead of SceneKit. I am in the middle of working through an Apple LiDAR tutorial, but instead of using RealityKit I used SceneKit. Now I have a problem, since SceneKit doesn't offer the sceneUnderstanding feature to render graphics. So I want to know, basically:
Can't we develop LiDAR applications using ARKit with SceneKit?
Can we achieve sceneUnderstanding feature using SceneKit?
Can we develop LiDAR apps without using sceneUnderstanding?
Really appreciate your answers and comments. Thank you.
You can use scene understanding with any renderer. But only RealityKit comes with integration for this feature.
The ARWorldTrackingConfiguration comes with a sceneReconstruction flag that can be enabled.
Then ARKit creates ARMeshAnchor instances and delivers them to you through the ARSessionDelegate and ARSCNViewDelegate methods.
However, because SceneKit does not come with out-of-the-box support for these features, you would have to build the visualization or physics interaction yourself based on the ARMeshAnchor properties.
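As a minimal sketch (assuming an ARSCNView outlet and a LiDAR-equipped device), enabling mesh reconstruction and receiving the resulting ARMeshAnchor instances alongside SceneKit might look like this:

```swift
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        // Scene reconstruction requires a device with a LiDAR scanner.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        }
        sceneView.session.run(configuration)
    }

    // ARKit delivers the reconstructed geometry as ARMeshAnchor instances.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        // SceneKit has no built-in visualization for ARMeshGeometry, so you would
        // convert meshAnchor.geometry into an SCNGeometry yourself at this point.
        print("Mesh anchor with \(meshAnchor.geometry.faces.count) faces")
    }
}
```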

Scanning a 3d object in ARKit via video camera?

This is probably an insanely hard question. So far ARKit works with 3D models which are built in 3D modelling software. I was wondering if there was a way to use the iPhone camera to scan a 3D object (let's say a car), then use it in ARKit.
Any open source projects available which do this on other platforms or iOS?
You are looking for software in the "photogrammetry" category. There are various software tools that will stitch your photos into 3D models, but one option is Autodesk Remake. There is a free version.
ARKit/RealityKit on an iPad/iPhone with a LiDAR scanner lets you reconstruct the current scene and obtain 3D geometry with an occlusion material applied. This geometry allows you to occlude any object, including a human being, and to physically "interact" with the generated mesh. The LiDAR scanner's working distance is up to 5 meters.
However, scanning a car isn't a good idea due to the paint's high reflectivity.
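A minimal RealityKit sketch of that setup (assuming a LiDAR-equipped device) could look like the following; the occlusion and physics options are what provide the occlusion material and mesh interaction mentioned above:

```swift
import ARKit
import RealityKit

// Illustrative setup; in a real app the ARView would live in your view hierarchy.
let arView = ARView(frame: .zero)

let config = ARWorldTrackingConfiguration()
// Mesh reconstruction is only available on LiDAR-equipped devices.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh
}

// RealityKit applies an occlusion material and collision shapes
// to the reconstructed mesh for you.
arView.environment.sceneUnderstanding.options = [.occlusion, .physics]
arView.session.run(config)
```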

how to make "Azimuthal Equidistant Projection" in iOS

I'm thinking about creating an iOS app that transforms a 3D sphere into a 2D image using the azimuthal equidistant projection. Here is a nice sample of this type of projection.
Azimuthal Map, Anywhere
I'm new to 3D programming, so I would like to ask for advice on which framework/tool is good to use in this case. These are the options that I know of:
Unity (+ OpenGL?)
SceneKit (+ CoreGraphics?)
Processing + Processing.js (inside WebView)
Please tell me if there are other solutions. I'd also be glad if you could tell me whether there is any sample code or an open source library for this projection.
Thanks in advance!
This can easily be done using shaders and does not require external libraries.
See http://rogerallen.github.io/webgl/2014/01/27/azimuthal-equidistant-projection/
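For reference, the projection math itself is compact; whether you evaluate it per-pixel in a fragment shader or on the CPU, the formula is the same. A minimal sketch in Swift (the function name and parameters are illustrative, not from any framework):

```swift
import Foundation

/// Projects a latitude/longitude pair (in radians) onto a plane using the
/// azimuthal equidistant projection centered on (lat0, lon0).
func azimuthalEquidistant(lat: Double, lon: Double,
                          lat0: Double, lon0: Double) -> (x: Double, y: Double) {
    let dLon = lon - lon0
    // Angular distance c from the projection center (great-circle distance).
    let cosC = sin(lat0) * sin(lat) + cos(lat0) * cos(lat) * cos(dLon)
    let c = acos(min(max(cosC, -1), 1))
    // k scales each point so its distance from the center is preserved.
    let k = c < 1e-9 ? 1.0 : c / sin(c)
    let x = k * cos(lat) * sin(dLon)
    let y = k * (cos(lat0) * sin(lat) - sin(lat0) * cos(lat) * cos(dLon))
    return (x, y)
}
```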
I would highly recommend using C++ 3D libraries such as GXmap and VES/VTK.
GXmap is a small virtual globe and map program. Apart from showing an ordinary globe view of the earth, it can also generate Azimuthal equidistant projection maps suitable for amateur radio use.
VES is the VTK OpenGL ES Rendering Toolkit. It is a C++ library for mobile devices using OpenGL ES 2.0.

Workflow between Blender and Xcode (SceneKit)

I'm using Blender to create a landscape for a game being built with SceneKit.
As it is just a landscape, I won't be using any animations, so before I dive too deep into Blender I'm wondering: should I be using Blender to create the geometry and then create my own materials in SceneKit?
I could still create the shadow, emission, specular, etc. maps in Blender, but would there be a performance benefit or penalty to doing it this way?
Also, if this is a path I could take, should I then be exporting as .dae, or is there a way to export to a normal map that Xcode would be happy with?
SceneKit supports materials exported in DAE from Blender. It doesn't support every possible shading option that Blender has, but unless you're doing exotic stuff it should cover most of what you're looking for.
At run time there's no difference between materials loaded from DAE and those created programmatically.
What you do want to think about at authoring/export time is stuff like real-time versus static lighting/shadows and high-poly geometry versus baked normal maps. In other words, material performance is more about how the materials are set up (complexity) than where they're set up (imported or at run time). See WWDC 2014 session 610: Building a Game with SceneKit for some tips.
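To illustrate the "no difference at run time" point, here is a minimal sketch of assigning a material in code to geometry loaded from a Blender-exported DAE; the file, node, and texture names are assumptions for the example:

```swift
import SceneKit
import UIKit

// Load the Blender-exported scene and find the landscape node (names assumed).
let scene = SCNScene(named: "landscape.dae")
let landscape = scene?.rootNode.childNode(withName: "Terrain", recursively: true)

// A material built in code behaves exactly like one imported from the DAE.
let material = SCNMaterial()
material.diffuse.contents  = UIImage(named: "terrain_diffuse")  // assumed texture
material.normal.contents   = UIImage(named: "terrain_normal")   // baked in Blender
material.specular.contents = UIColor(white: 0.2, alpha: 1.0)
landscape?.geometry?.firstMaterial = material
```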

Developing Shaders With SpriteKit

I've read that one of the downfalls of SpriteKit is that you're unable to develop shaders if you use it.
However, I read a post here on SO that suggest otherwise:
How to apply full-screen SKEffectNode for post-processing in SpriteKit
Can you develop your own shaders if you decide to use SpriteKit?
Thanks
It is not supported in iOS 7, but iOS 8 will support custom shaders. For more information, view the pre-release documentation of SKShader.
An SKShader object holds a custom OpenGL ES fragment shader. Shader objects are used to customize the drawing behavior of many different kinds of nodes in Sprite Kit.
Sprite Kit does not provide an interface for using custom OpenGL shaders. The SKEffectNode class lets you use Core Image filters to post-process parts of a Sprite Kit scene, though. Core Image provides a number of built-in filters that might do some of what you're after, and on OS X you can create custom filter kernels using a language similar to GLSL.
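For reference, once SKShader is available (iOS 8+), attaching a custom fragment shader to a node is straightforward. A minimal sketch, where the tint effect and texture name are purely illustrative (u_texture and v_tex_coord are symbols Sprite Kit supplies to every shader):

```swift
import SpriteKit

// A tiny GLSL fragment shader that samples the node's texture and tints it.
let source = """
void main() {
    vec4 color = texture2D(u_texture, v_tex_coord);
    gl_FragColor = vec4(color.r, color.g * 0.5, color.b * 0.5, color.a);
}
"""

let shader = SKShader(source: source)
let sprite = SKSpriteNode(imageNamed: "spaceship")   // assumed texture name
sprite.shader = shader
```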
