We are creating an iOS message extension app similar to Apple's Animoji. We have created our own models (some 3D faces) and have .dae files for them, which we convert to .scn files in Xcode using the "Convert to SceneKit scene file format (.scn)" option. When we open the .scn file in the Xcode viewer, the model is shown from the camera's Perspective view, and in this view the model's face looks thinner. Once we tap the Perspective drop-down and choose the "Front" option, the model's face is in the correct posture and its normal shape. We want this view in the app at runtime as well, but every time we show the model, only the thinner face (perspective view) is visible. Is there a way to set the camera to Front for the model programmatically? Is there any specific step we need to take while exporting the .dae file so that the default camera view is Front?
The "Front" camera in the Xcode editor is an orthographic camera (its `usesOrthographicProjection` property is set to `YES`).
Alternatively, you can change the `fieldOfView` or `focalLength` properties to reduce the amount of deformation due to perspective. You can find an illustration in this post on the Photography site.
You are looking for an orthographic projection for the camera. To set it up correctly, you can add a camera object to the scene graph in the Xcode .scn viewer and choose the "camera" option in the drop-down list. The camera projection type is perspective by default, so you should change it to orthographic.
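Programmatically, the same setup is a few lines of SceneKit. A minimal sketch, assuming `sceneView` is your `SCNView` (or `ARSCNView`) and `scene` is the scene loaded from your .scn file — adjust the position and `orthographicScale` to frame your model:

```swift
import SceneKit

// Create a camera node with an orthographic projection,
// matching the editor's "Front" view.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.camera?.usesOrthographicProjection = true
cameraNode.camera?.orthographicScale = 1.0    // half-height of the visible area; tune for your model
cameraNode.position = SCNVector3(0, 0, 10)    // in front of the model, looking down -Z

scene.rootNode.addChildNode(cameraNode)
sceneView.pointOfView = cameraNode            // render the scene through this camera
```

With `usesOrthographicProjection` enabled, the camera's distance no longer affects apparent size; only `orthographicScale` does, so the face keeps its "Front" proportions regardless of depth.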
P.S. can you share some screenshot of your 3d face? Is it better than Memoji?)
Related
I am working on a project that will display objects below the ground using AR Quick Look. However, the AR mode seems to bring everything above the ground based on the bounding box of the objects in the scene.
I have tried using the USDZ directly and composing a simple scene in Reality Composer with the object, or with a simple cube, with the exact same result. AR preview mode in Reality Composer shows the object below the ground or below an image anchor correctly. However, if I export the scene as a .reality file and open it using AR Quick Look, it brings the object above the ground as well.
Is there a way to achieve showing an object below the detected horizontal plane or image (horizontal) using AR Quick Look?
This is still an issue a year later. I have submitted feedback to Apple. I suggest you do too. I have suggested adding a checkbox to keep Y axis persistent. My assumption is this behaves this way to prevent the object from colliding with the ground, but I don't think it's necessary. It's just a limitation right now.
How can I play an animation that is already in my .scn model? When I check it in the model inspector, I can play it, but how do I play it in code? Do I need to have the model in .dae format to play the animation?
I bought a 3D model online in .blend format. Baked the actions and exported it as a .dae file (with animation like yours). In Xcode when you click on the .scn file, it has two "parts" one is the 3D model, the other the animation. I dragged the animation part onto the model and it merged in the hierarchy. After that I projected the 3D model just like a static 3D model and it was animated automatically. Sorry my terminology is bad as I am new too. Hope this helps!
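Once the animation has been merged onto the model node as described above, it ships inside the .scn file and can be found and played in code. A sketch, assuming your asset is named "model.scn" (a placeholder) and the animation is attached to a node in its hierarchy:

```swift
import SceneKit

// Load the .scn file and play every animation already attached
// to nodes in its hierarchy.
let scene = SCNScene(named: "model.scn")!   // placeholder asset name

scene.rootNode.enumerateChildNodes { node, _ in
    for key in node.animationKeys {
        if let player = node.animationPlayer(forKey: key) {
            player.animation.repeatCount = .infinity   // loop forever
            player.play()
        }
    }
}
```

So no, you do not need to keep the model in .dae format: once the animation lives in the .scn hierarchy, `animationKeys` and `animationPlayer(forKey:)` give you access to it.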
I'm starting to learn how to use ARKit, and I would like to add a button like the one in the Pokémon GO application where you can switch between AR ON (with a model in the real world) and AR OFF (without using the camera, having just the 3D model on a fixed background). Is there an easy way to do it?
Another good example of what you're asking about is the AR Quick Look feature in iOS 12 (see WWDC video or this article): when you quick look a USDZ file you get a generic white-background preview where you can spin the object around with touch gestures, and you can seamlessly switch back and forth between that and a real-world AR camera view.
You've asked about ARKit but not said anything about which renderer you're using. Remember, ARKit itself only tells you about the real world and provides live camera imagery, but it's up to you to display that image and whatever 3D overlay content you want — either by using a 3D graphics framework like SceneKit, Unity, or Unreal, or by creating your own renderer with Metal. So the rest of this answer is renderer-agnostic.
There are two main differences between an AR view and a non-AR 3D view of the same content:
An AR view displays the live camera feed in the background; a non-AR view doesn't.
3D graphics frameworks typically involve some notion of a virtual camera that determines your view of the 3D scene — by moving the camera, you change what part of the scene you see and what angle you see it from. In AR, the virtual camera is made to match the movement of the real device.
Hence, to switch between AR and non-AR 3D views of the same content, you just need to manipulate those differences in whatever way your renderer allows:
Hide the live camera feed. If your renderer lets you directly turn it off, do that. Otherwise you can put some foreground content in front of it, like an opaque skybox and/or a plane for your 3D models to rest on.
Directly control the camera yourself and/or provide touch/gesture controls for the user to manipulate the camera. If your renderer supports multiple cameras in the scene and choosing which one is currently used for rendering, you can keep and switch between the ARKit-managed camera and your own.
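For a SceneKit-based app, the two steps above can be sketched as a toggle on an `ARSCNView`. This is a sketch under assumptions, not a definitive implementation — the function name, the white background, and the hard-coded camera position are all illustrative choices:

```swift
import ARKit
import SceneKit

// Toggle between AR mode (live camera feed, ARKit-managed camera)
// and a plain 3D view (fixed background, app-controlled camera).
func setARMode(_ enabled: Bool, in sceneView: ARSCNView) {
    if enabled {
        // Resume tracking; ARSCNView restores the camera-feed background
        // and its ARKit-managed point of view automatically.
        sceneView.scene.background.contents = nil
        sceneView.session.run(ARWorldTrackingConfiguration())
    } else {
        // Pause tracking, cover the camera feed with a fixed background,
        // and switch to a camera node the app controls.
        sceneView.session.pause()
        sceneView.scene.background.contents = UIColor.white

        let fixedCameraNode = SCNNode()
        fixedCameraNode.camera = SCNCamera()
        fixedCameraNode.position = SCNVector3(0, 0, 2)   // tune to frame your model
        sceneView.scene.rootNode.addChildNode(fixedCameraNode)
        sceneView.pointOfView = fixedCameraNode
        sceneView.allowsCameraControl = true             // free touch-orbit controls
    }
}
```

`allowsCameraControl` gives you the Quick Look-style spin-with-touch behavior for free; for finer control you would attach your own gesture recognizers and move `fixedCameraNode` yourself.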
I have a 3D model of a human being standing. I implemented it in a project using ARKit and can place it somewhere in the room. So far so good, but I would like to add an animation to the 3D model — for example, when I press the buttonDance button, it starts dancing. Not to move it up and down, but to add an animation to it.
What are the keywords to make this work, or does anyone have a brief way of doing this? Maybe what software to use, or is it possible within SceneKit?
You can use services such as Mixamo to generate an animation for your character.
I would advise you to use 3D models in Collada (.dae) format, because this format includes all your animations inside. You will have to clean the .dae file to collect all the bone animations into one animation, more info here.
You will then need to read the animation from the .DAE file and add it to the node (your 3D model). Esteban Herrera has a great blog post on how to animate 3D models with ARKit.
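The reading step looks roughly like this with `SCNSceneSource`. A hedged sketch — the file name and animation identifier are placeholders you would replace with your own (you can find the identifier by inspecting the .dae in Xcode):

```swift
import SceneKit

// Load a named CAAnimation out of a .dae file and wrap it in a player.
func loadAnimation(sceneNamed name: String, identifier: String) -> SCNAnimationPlayer? {
    guard let url = Bundle.main.url(forResource: name, withExtension: "dae"),
          let sceneSource = SCNSceneSource(url: url, options: nil),
          let caAnimation = sceneSource.entryWithIdentifier(identifier,
                                                            withClass: CAAnimation.self)
    else { return nil }

    caAnimation.fadeInDuration = 0.3   // blend smoothly in and out
    caAnimation.fadeOutDuration = 0.3
    return SCNAnimationPlayer(animation: SCNAnimation(caAnimation: caAnimation))
}

// Usage, e.g. in the buttonDance handler (names are placeholders):
// if let player = loadAnimation(sceneNamed: "dance", identifier: "dance-1") {
//     modelNode.addAnimationPlayer(player, forKey: "dance")
//     player.play()
// }
```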
I downloaded a 3D model(https://www.cgtrader.com/free-3d-models/industrial/machine/fuel-gas-scrubber-skid) and converted it to .dae using SketchUp.
I am not able to apply a texture to this model in the Xcode 9 Scene Editor. When I select any texture image (using Materials → Diffuse), it turns black!
I did the same for other models before and it was working fine. Not able to figure out what is the issue now.
I even tried changing the multiply, emission, reflective, etc. properties to white, but I am still not able to see the texture.
I found what is wrong with the model. The model I downloaded is not UV mapped, which is why I am not able to apply the texture to it.