COLLADA 3D content creation, how? - iOS

I am looking for a tool to get started with simple 3D objects for Scene Kit. I know there are a lot of professional tools out there, but blindly buying a program just to try out how things work would be wasted money.
How do I create 3D content for a Scene Kit scene, beyond using the built-in geometry and geometry creation tools within Scene Kit?

Short answer: Blender.
Long answer: Blender, because it's free and as good as any other 3D app (they are unfortunately all quite bad), but take this to the Apple Developer forums if you'd like more info, as this question is not appropriate for Stack Overflow. The way you're thinking about this is also off: no app is a "COLLADA editor". That would be like calling Photoshop a JPEG editor.
The Unity forum is also a great place to hear a bunch of opinions; unlike any of Apple's tools other than Logic or perhaps Final Cut, it has a wide-ranging user base of technically-minded artists from whom to gather opinions.

Use Polygon Modelling.
COLLADA content is created in 3D modelling programs. There are three main ways of creating 3D content: polygon modelling, clay/brush style push/pull modelling of highly complex meshes, and what's most commonly known as NURBS modelling.
Polygon modelling involves building shapes by manipulating primitive pieces of geometry and attaching them to one another, progressively constructing complex shapes. It has several massive advantages for game engine content creation that have made it, far and away, the most popular way of making content for game engines.
Namely:
Strict and absolute control of the amount and type of geometry (performance)
Strict and absolute seam and inter-object connectivity control (animation)
Absolute texture control and material/shader control via geometry and bitmaps with 1:1 UVW mapping (ideal for expressing textures onto geometry with great performance)
Well designed and commonly used smoothing and tessellation algorithms nondestructive to, and considerate of, the above points
Given these massive benefits for game content creation, it's imperative you learn polygon modelling first, for any and all game content creation. Nearly all polygon modellers can export a COLLADA-compatible format.
Understanding polygon modelling will also give you an understanding of how Geometry Shaders work and what they act on. Geometry Shaders are a relatively new addition to Scene Kit that provide vastly more interesting ways to manipulate geometry than the basic geometry creation tools within Scene Kit.
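For a concrete taste, here is a minimal sketch using SceneKit's shader modifier API with the geometry entry point (SCNShadable); the amplitude value and the sphere are illustrative, not from the original post:

import SceneKit

// A geometry shader modifier snippet: displaces each vertex along its
// normal, animated with the built-in u_time uniform.
let geometryModifier = """
float amplitude = 0.1;
_geometry.position.xyz += _geometry.normal * amplitude * sin(u_time);
"""

let sphere = SCNSphere(radius: 1.0)
// Primitive geometries come with a default material; both SCNMaterial
// and SCNGeometry conform to SCNShadable.
sphere.firstMaterial?.shaderModifiers = [.geometry: geometryModifier]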

Related

Visualization of voxels/octrees in SceneKit

I want to render voxel/octree objects (basically cubes of different sizes) in iOS; so far, I have been looking at SceneKit to perform the visualization. The goal is to render 10 to 15k voxels in a scene, e.g. rendering an OctoMap for robotics applications (such as: https://www.youtube.com/watch?v=yKNzTg25RM8).
In the SceneKit framework we rendered these cube objects as SCNNodes with an SCNBox geometry each. The performance degrades significantly when we use that many geometry objects, so we are unsure whether this is the best approach. We also discovered MDLVoxelArray for this purpose but have not used it yet, and I have not been able to find proper documentation or references for including it in SceneKit. Could MDLVoxelArray be a solution to visualize voxels properly?
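For reference, a minimal sketch of the approach described above (names are illustrative): one SCNNode with its own SCNBox per voxel, which with 10-15k voxels means thousands of draw calls and matches the performance degradation described.

import SceneKit

// Build a scene with one box node per voxel. Each node is a separate
// draw call, which is what hurts performance at 10-15k voxels.
func makeVoxelScene(voxels: [(center: SCNVector3, size: CGFloat)]) -> SCNScene {
    let scene = SCNScene()
    for voxel in voxels {
        let box = SCNBox(width: voxel.size, height: voxel.size,
                         length: voxel.size, chamferRadius: 0)
        let node = SCNNode(geometry: box)
        node.position = voxel.center
        scene.rootNode.addChildNode(node)
    }
    return scene
}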

What is NeRF(Neural Radiance Fields) used for?

Recently I have been studying the paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (https://www.matthewtancik.com/nerf), and I am wondering: what is it used for? Will there be any applications of NeRF?
The results of this technique are very impressive, but what is it used for? I keep thinking about this question over and over. It is very realistic and the quality is great, but we don't want to watch the camera swing around all the time, right?
Personally, I see some limitations in this technique:
It cannot generate views that were never seen in the input images; the technique interpolates between existing views.
Long training and rendering times: according to the authors, it takes 12 hours to train a scene and 30 seconds to render one frame.
The view is static and not interactive.
I don't know if it is appropriate to compare NeRF with panoramas and 360° images/videos; essentially they are different. NeRF uses deep learning to generate new views, while the others basically capture scenes with a smartphone/camera plus some computer vision techniques. Still, the long training time makes NeRF less competitive in this application area. Am I correct?
Another use I can think of is product rendering; however, NeRF shows no advantage compared to rendering with 3D software. Commercial advertisements usually require animation and special effects, which 3D software can definitely do better.
A potential use of NeRF might be 3D reconstruction, but that seems out of scope, even though it is able to do it. Why would we use NeRF for 3D reconstruction? Why not other reconstruction techniques? The unique feature of NeRF is its ability to create photo-realistic views; if we use NeRF only for 3D reconstruction, that feature becomes pointless.
Does anyone have new ideas? I would like to know.
Why do we need to use NeRF for 3D reconstruction?
The alternative would be multi-view stereo, which produces point clouds of finite resolution and is susceptible to illumination changes. If you then render such a point cloud without non-trivial post-processing, it will not look photorealistic.
I don't know if it is appropriate to compare NeRF with Panorama and 360° image/video,
Well, if you are dealing with an exactly flat scene and simple lighting (i.e. ambient light and Lambertian objects), then you can use panorama techniques for new view synthesis. In general, though, it won't produce the result you expect; you have to know the depth to interpolate correctly.
When it comes to the practical limitations (slow; does not model deformations), NeRF should be considered a milestone that provided a proof of concept that representing a surface as a level set of an MLP-modelled function can result in sharp renderings. There is already good progress in addressing those limitations, and multiple works apply this idea to practical tasks.
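For context, the core of the paper is a volume-rendering formulation: the MLP predicts a density \sigma and a view-dependent colour \mathbf{c} at each 3D point, and the expected colour of a camera ray \mathbf{r}(t) = \mathbf{o} + t\mathbf{d} is

C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt,
\qquad T(t) = \exp\!\left( -\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds \right),

so occlusion and view-dependent effects fall out of the integral itself, which is why the renders look photorealistic without an explicit mesh.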

How to create a hole in the box in SceneKit?

I'm using SceneKit to create a 3D Room for a Swift iOS app.
I'm using multiple boxes and placing them together to create the different walls of the room. I also want to add doors and windows to the room, for which I need to cut holes into the walls. This seems like a very common scenario, yet I couldn't find any relevant answers out there.
I know there are multiple ways of doing it -
Simplest being: don't cut the box; place another box with a door or window texture on it.
But I do want to keep a light source outside of the room and have it flow into the room through these doors and windows.
Create multiple boxes for a single wall and put them together to make the geometry.
My last resort, maybe.
Create custom geometry.
Feels too complicated, since it requires me to draw each triangle myself. Not sure.
But what I was actually expecting -
Subtract geometries from geometries?
A library that already handles these complexities?
Any pointers would be very helpful.
Thanks.
SceneKit offers some awesome potential, but it's not a substitute for a 3D modelling program. If you want something much beyond assembling primitives and extruding in a plane, you should think about constructing your model in a dedicated 3D package and exporting it into SceneKit as a .dae file. You might take a look at Blender: it's free and readily available on the net. I suspect it can easily do what you want, and the learning curve will be compensated by the higher-level functions of a graphics program versus coding.
I think @bpedit described the best approach.
A weak second choice would be to use SCNShape to build your geometry. That still leaves you the problem of constructing a Bézier path that matches your wall layout/topology; see the sketch below. It might be a helpful hack in the short term, saving you from an immediate learning curve in modelling software, but I predict you'll still eventually move to a tool like Blender, SketchUp, Cheetah 3D, or Maya.
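A minimal sketch of that SCNShape route, assuming SCNShape honors the path's even-odd fill rule so an inner subpath becomes a hole (the dimensions here are illustrative):

import SceneKit
import UIKit

// Outline of a wall with a rectangular window subpath cut out of it.
let wallOutline = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 4, height: 3))
let window = UIBezierPath(rect: CGRect(x: 1.5, y: 1.0, width: 1.0, height: 1.0))
wallOutline.append(window)
wallOutline.usesEvenOddFillRule = true   // inner rect becomes a hole

// Extrude the 2D path into a 3D wall; light can pass through the hole.
let wallGeometry = SCNShape(path: wallOutline, extrusionDepth: 0.2)
let wallNode = SCNNode(geometry: wallGeometry)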

Character Animation with Metal

I have just delved into the world of Metal, and I thought I'd got the hang of it! But then it occurred to me that if I wanted to make a game, static objects moving around a screen wouldn't suffice. So my question is: is it possible to create animations for models with Metal?
I have looked at using other APIs, such as SpriteKit and SceneKit, but I found that they do not support shaders and are not as powerful as Metal.
The only way I can think of to go about this is by creating 60 different models and then loading them one after the other to give a 'stop-motion' kind of effect, but I think that would probably be incredibly inefficient, and I was hoping there was an easier answer?
Thanks a lot!
Yes, there are other, more efficient ways to do animation. But before getting into that, a warning: it really looks like you're barking up the wrong tree here.
Metal is a (conceptually) very low-level interface. You use Metal to talk (almost) directly to the GPU, so to work with it you need to think (sort of) like a GPU: in terms of data buffers, vertex transformations, etc. You seem to be working at a much higher conceptual level, so you're probably better served by one of the high-level game engines: SpriteKit for 2D or SceneKit for 3D. (Or a third party engine like Cocos or Unity.) Metal, on the other hand, is better suited for building those game engines.
SpriteKit and SceneKit do support shaders. Look at SKShader and SCNShadable in the docs (and be sure to click the "More" links to read the full overviews). SceneKit also supports character animations (aka skeletal animation aka skinning): typically one designs and rigs a model for animation in an external authoring tool (Maya, Blender, etc), then uses SceneKit to work with the animations at run time.
It is possible to do things like GPU-based skeletal animation in Metal. But I haven't seen any tutorials or similar written about it yet, probably because Metal is such a new technology. Fundamentally, though, it'd be based on the same sorts of techniques you'd use for skeletal animation in OpenGL or Direct3D — and much has been written about animation for those technologies. If you're willing to invest the time and energy to work at a low level, adapting the subject matter from GL/D3D tutorials is relatively easy.
You can do skeletal animation in Metal; SceneKit would be using the GPU to deform the mesh as well. But to do it in Metal you would need to pass skin weights, along with bone matrices for the bind pose and the transformations of the bones as they animate, then calculate the new vertex positions from these. In fact, I think you need the inverse of the bind-pose matrices. Each mesh vertex is then transformed by a weighted sum of the transformations dictated by the skin weights.
I tried it but screwed it up somehow; it didn't deform properly. I don't know whether I'd obtained the wrong matrices from my custom script for grabbing animation data from Blender, had a bug in my shader maths, or had bad weights.
It was probably close, but with all the possible things I may have got wrong in the process, it was difficult to fix, so I abandoned it in the end.
It's probably easier to stick with SceneKit and let Apple take care of the rest, or to use an existing game engine such as Unity.
Then again, if you want a challenge, I'm sure it's possible, just a little tricky. You could try it on the CPU first to make sure the maths is OK, then port it to the GPU to make it faster; a sketch of the CPU version follows.
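A minimal CPU sketch of that weighted-sum (linear blend) skinning, with all names illustrative: each vertex is transformed by the animated bone transform times the inverse bind transform, blended by the skin weights.

import simd

// Skin one vertex: blend up to four bone transforms by their weights.
func skin(position: SIMD3<Float>,
          boneIndices: SIMD4<Int32>,
          weights: SIMD4<Float>,
          inverseBind: [simd_float4x4],   // inverse bind-pose matrices
          bonePose: [simd_float4x4])      // animated bone transforms
          -> SIMD3<Float> {
    let p = SIMD4<Float>(position.x, position.y, position.z, 1)
    var result = SIMD4<Float>(repeating: 0)
    for i in 0..<4 {
        let bone = Int(boneIndices[i])
        // The inverse bind matrix maps the vertex into the bone's local
        // space; the animated pose maps it back out, deformed.
        let m = bonePose[bone] * inverseBind[bone]
        result += weights[i] * (m * p)
    }
    return SIMD3<Float>(result.x, result.y, result.z)
}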
SceneKit does support shaders. And the object that manages the relationship between skeletal animations and the nodes and geometries they animate is SCNSkinner, from SceneKit.
Typically, you need to create a skinned model using, for example, Autodesk Maya, save it along with animations that use the skeleton, in a scene file. You load the model from the scene file and pose or animate it in your app, either by using animation objects also loaded from the scene file or by directly manipulating the nodes in the skeleton. That's it.
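A minimal sketch of that workflow (file and node names are illustrative): when the model and its animations come from the same scene file, SceneKit sets up the SCNSkinner for you.

import SceneKit

// Load a rigged character exported from a modelling tool.
let scene = SCNScene(named: "art.scnassets/character.dae")!
let character = scene.rootNode.childNode(withName: "hero", recursively: true)!

// Animations exported with the model are attached to its nodes
// (SCNAnimatable); list their keys to play or manipulate them.
for key in character.animationKeys {
    print("animation:", key)
}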
Watch this 7-part video about Blender's skeletal system and how to use it in SceneKit.
convenience init(baseGeometry: SCNGeometry?,            // the character's geometry
                 bones: [SCNNode],                      // array of bone nodes
                 boneInverseBindTransforms: [NSValue]?, // inverse bind transforms (SCNMatrix4 values)
                 boneWeights: SCNGeometrySource,        // each bone's influence on the geometry
                 boneIndices: SCNGeometrySource)        // maps vertices to bone indices

Data structures for a 2D vector editor

I'm trying to make a simple 2D editor with the following capabilities:
Create/delete rectangles, polygons, circles, etc
Hierarchical grouping of these shapes
Move, rotate, scale these shapes
Apply a certain texture to them (with UV coordinates for each vertex)
I've had some success, but the code is messy. Are there any simple projects or articles that I can read to get some more info on the kinds of data structures used for such projects?
There is a lot of literature out there to get good ideas from. Some good ones:
IEEE Tutorial: Computer Graphics '79 has all the important graphics algorithms from the 60s and 70s; many are the original articles.
Graphics Gems (I) surveys important techniques from the 80s.
You might also want to look at PHIGS which focuses on Hierarchical Graphics.
Looking at open-source 2D game engines can also give you a good idea.
Not exactly "simple" in any sense, but Inkscape might be worth looking at.
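For what it's worth, the hierarchical grouping and per-shape transforms the question describes are usually modelled as a scene graph; a minimal sketch (all names illustrative):

import CoreGraphics

// A scene-graph node: groups are nodes with children, shapes are leaves.
class SceneNode {
    var transform: CGAffineTransform = .identity   // move/rotate/scale
    var children: [SceneNode] = []
    weak var parent: SceneNode?

    // A node's world transform composes its local transform with its
    // parent's world transform.
    var worldTransform: CGAffineTransform {
        guard let parent = parent else { return transform }
        return transform.concatenating(parent.worldTransform)
    }

    func add(_ child: SceneNode) {
        child.parent = self
        children.append(child)
    }
}

// A leaf shape: vertices carry UV coordinates for texturing.
final class ShapeNode: SceneNode {
    struct Vertex {
        var position: CGPoint
        var uv: CGPoint
    }
    var vertices: [Vertex] = []
    var textureName: String?
}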
