Adding animation to a 3D model via ARKit - iOS

I have a 3D model of a human standing. I implemented it into a project using ARKit and can place it somewhere in the room. So far so good, but I would like to add an animation to the 3D model. For example, when I press the buttonDance button, it should start dancing. I don't mean just moving the model up and down, but playing an actual animation on it.
What are the keywords to make this work, or does anyone have a brief way of doing this? What software should I use, or is it maybe possible within SceneKit?

You can use services such as Mixamo to generate an animation for your character.
I would advise you to use 3D models in Collada (.DAE) format because this format includes all your animations inside. You will have to clean up the .DAE file to collect all the bone animations into one animation; more info here.
You will then need to read the animation from the .DAE file and add it to the node (your 3D model). Esteban Herrera has a great blog post on how to animate 3D models with ARKit.
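As a minimal Swift sketch of that last step, assuming the cleaned-up file is called dancing.dae and the combined animation was exported with the identifier "dance-1" (both names are placeholders for your own asset):

```swift
import SceneKit

// Load the combined animation from a .dae that contains only the dance.
// "dancing.dae" and "dance-1" are placeholders for your own asset and
// the identifier your exporter gave the animation.
func startDancing(on characterNode: SCNNode) {
    guard let url = Bundle.main.url(forResource: "dancing", withExtension: "dae"),
          let source = SCNSceneSource(url: url, options: nil),
          let animation = source.entryWithIdentifier("dance-1", withClass: CAAnimation.self)
    else { return }

    animation.repeatCount = .greatestFiniteMagnitude
    characterNode.addAnimation(animation, forKey: "dance")
}
```

You could call something like this from your buttonDance action, passing the node that holds the placed character.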

Related

Detecting a real world object using ARKit with iOS

I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I already found ARReferenceImage, and that basically works for a very, very simple prototype, but it seems the image needs to be quite complex? Xcode always complains if I try to use something a lot simpler (like a QR-code-style image). With that marker I would know the position of an edge, and then I'd know the physical size of my shelf and how to place stuff onto it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect e.g. planes, but I want to detect the shelf itself. But as my shelf is open, it's not really a plane. Are there other possibilities to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
it seems the image needs to be quite complex? Xcode always complains if I try to use something a lot simpler (like a QR-code-style image)
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
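A rough Swift sketch of the image-anchor route (the "ShelfMarkers" resource group name is an assumption; use whatever your AR Resource Group in the asset catalog is called):

```swift
import ARKit

final class ShelfImageDetector: NSObject, ARSCNViewDelegate {
    // "ShelfMarkers" is an assumed AR Resource Group name in the asset catalog.
    func run(on sceneView: ARSCNView) {
        sceneView.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages =
            ARReferenceImage.referenceImages(inGroupNamed: "ShelfMarkers", bundle: nil) ?? []
        sceneView.session.run(configuration)
    }

    // Called when ARKit recognises one of the reference images.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        // Position your 3D content relative to the image, e.g. 10 cm above its centre.
        let content = SCNNode(geometry: SCNSphere(radius: 0.02))
        content.position = SCNVector3(0, 0.1, 0)
        node.addChildNode(content)
    }
}
```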
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
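A hedged sketch of enabling object detection in Swift, assuming the scanned .arobject lives in an AR Resource Group named "ScannedShelves" (a placeholder name):

```swift
import ARKit

// "ScannedShelves" is an assumed AR Resource Group containing the scanned .arobject.
func runObjectDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects =
        ARReferenceObject.referenceObjects(inGroupNamed: "ScannedShelves", bundle: nil) ?? []
    sceneView.session.run(configuration)
    // When the object is recognised, the session adds an ARObjectAnchor;
    // handle it in renderer(_:didAdd:for:) just like an image anchor.
}
```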
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
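Roughly, the tap-to-place flow could look like this in Swift (a sketch, assuming plane detection is already enabled and the tap location comes from a gesture recogniser; the anchor name is arbitrary):

```swift
import ARKit

// `point` is the tap location from a gesture recogniser, e.g.
// gesture.location(in: sceneView). Plane detection is assumed to be enabled.
func placeAnchor(at point: CGPoint, in sceneView: ARSCNView) {
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }

    // Create an anchor at the hit point; position your content relative
    // to this anchor in the delegate callback that fires for it.
    let anchor = ARAnchor(name: "shelfContent", transform: result.worldTransform)
    sceneView.session.add(anchor: anchor)
}
```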
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors (https://developer.apple.com/documentation/arkit/arobjectanchor); in other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize since their content often changes.

ARCore Augmented Images with 3D object interaction

I want to build a digital catalog application where I detect an image in a catalogue and place a 3D object on it. This can be achieved with ARCore Augmented Images.
What I need is: when I click/touch the 3D object, I want to show some information and videos.
For this particular task I am looking at SDK options. Without Vuforia, can this be achieved using ARCore + Unity, Android OpenCV, or anything else?
This requires a lot of work, from creating animations and layers to defining colliders and controlling them from code.
First you create the animations and animation controllers, then add colliders to the hot spots where you want to click on the object (e.g. touch the door to open), then map each collider click event to fire a specific animation.
Actually, it is better to follow a tutorial that covers the animation basics; it will then be easy to combine with your AR project:
https://unity3d.com/learn/tutorials/s/animation

Animating 3D objects with Sceneform

I have been searching for how to animate 3D objects via Sceneform. I am very new to AR.
Can anybody provide a sample for 3D animation like a moving human?
Sceneform does not support animated renderables (e.g. the animated FBX file format) right now. You can only move or rotate objects, but you can't easily get something like a walking human.
Sceneform SDK for Android v1.7.0 supports animation (15th February 2019).
Sceneform includes an optional animation library, com.google.ar.sceneform:animation, which enables animation playback.
Added ModelAnimator and AnimationData classes. Sceneform now has the ability to play animated models.
Added SkeletonNode class which can be used to bind nodes to bones in a skinned renderable, making it possible to attach objects to bones, access the positions of bones, and manipulate the positions of bones directly.
Added AugmentedFaceNode to the UX library which can be used to render visuals with ARCore's Augmented Faces feature. See the new sample in the /samples/augmentedfaces/ directory.
Added Vector3.equals(Vector3) and Quaternion.equals(Quaternion).
Exposed Quaternion(Vector3 eulerAngles) and Quaternion.eulerAngles() publicly.
Sceneform lets you import models with animation. You can use Sceneform APIs to play back and control the animation, and attach nodes to a model's skeleton.
For instance, the Sceneform Animation sample includes files used to build models of Andy the android and a baseball hat. The Andy model contains animation data, while the baseball hat is a non-animated model. When you run the sample, Andy breakdances and waves his arms while the hat remains fixed to his head using a node.
It's important to differentiate between model animation in Sceneform versus property animation in Android.
Model animations are created ahead of time by artists using modeling and animation software. They contain skeletal animation data. These animations must be exported as .fbx files, then imported into an .sfb file (binary asset) to be used in Sceneform.
Property animation is a fundamental Android concept and is not specific to Sceneform. This kind of animation can change any mutable value on a Java object that has a getter and a setter. The animated values can be set dynamically, but cannot be packaged into an sfb file.
Hope this helps.

Blender + SceneKit (how-to)

A few questions to game developers. I am a complete beginner at this. I want to create a game level, for example a green plane with trees. I have played a little in Blender and SceneKit. I know that I can export a .dae from Blender and import it into Xcode. My questions:
Should I delete the camera and light nodes before export? Why?
Should I design the whole level in one .dae file or make the pieces separately? For example, one .dae for the plane and four different trees in four .dae's. How do I merge them in Xcode?
Can I use one .dae many times to generate, for example, a forest? How?
If creating the pieces separately is the better way, how do I keep the proportions between them so I don't end up with a man bigger than a tree?
I will be very grateful if somebody takes the time to answer these questions. It will cut down my time learning the basics. Thanks in advance. :)
I'll tell you how I do it:
1) Use .dae files only for models (trees, characters, buildings, etc.).
2) Build the game scene (floor, models, lights, camera, obstacles) with the Xcode scene editor, in code, or a mix of both, depending on the scene.
3) Based on the size of the world/level, it can be split into several scenes (visible/invisible to the player). You can then create one blank scene and load/unload these scenes at runtime.
4) For a model you create a reference, and then build the forest out of references to the tree (see the sketch after this list). If you later need to change the colour of the tree, all trees in all scenes will be updated.
5) For each model (SCNNode) loaded from a .dae file you can set the scale attribute (from code or in the Xcode scene editor).
Also, 3D Apple Games by Tutorials is a very good starting point.

How to make a short film with WebGL

Is it possible to make a short film using WebGL? I see tons of examples of animating an object or trigger-based animation, but nothing like a film. I am new to this field.
WebGL is just a graphics library. You'll need an animation engine (or game engine that has animation built in) and you'll need an authoring program to make the animation.
You might try babylon.js
Theoretically you could make an animation in Blender, 3ds Max, or Maya, export it to FBX, and import it through the converters included in the engine. I suspect it's not set up to handle whole 3D scenes as-is, though.
Three.js might do it as well but I suspect it also doesn't handle full scenes directly out of the 3D program.
I suggest you start small. Make a simple animated scene using a few primitives and see if you can export it into one of those libraries.
Inka3D, which is a Maya-to-WebGL exporter, has been used to create so-called demos which are close to short films. These are called "Azathioprine", "Radiotherapy" and "70s". You can simply use Maya as usual, with only some limitations, and make your short film. See www.inka3d.com for links to the demos.
