I am currently working with ARCore on Android. Our use case is to load a large model in ARCore, roughly the size of a building with more than two floors; this single big model extends across several floors of the building.
When we load the big model it is scaled down to a small size by default, and we cannot scale it up enough to map/match the model to the different floors.
When we tried node.getScaleController().setMaxScale(50f);, the model can be zoomed further but disappears at a certain point.
AnchorNode anchorNode = new AnchorNode(anchor);
TransformableNode node = new TransformableNode(fragment.getTransformationSystem());
node.getScaleController().setMinScale(0.01f);
node.getScaleController().setMaxScale(50f);
node.getScaleController().setSensitivity(0.1f);
node.setParent(anchorNode);
node.setRenderable(renderable);
node.select();
fragment.getArSceneView().getScene().addChild(anchorNode);
We found a YouTube video that is similar to our use case; unfortunately, it is built with ARKit and uses models in .scn format.
https://www.youtube.com/watch?v=FG5SztPF2uY
Can someone please confirm whether this is feasible in ARCore?
Sharing some code snippets for scaling this bigger model would be very helpful.
Note: our models are in .obj format and were converted to .sfa and .sfb for use in ARCore.
I hope that after setting the max and min scale values you are also using:
public void setLocalScale (Vector3 scale)
This sets the scale of this node relative to its parent (local-space). If isTopLevel() is true, then this is the same as setWorldScale(Vector3).
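For example, a minimal sketch (assuming the same node, anchorNode and renderable as in the question's snippet; Vector3 is com.google.ar.sceneform.math.Vector3, and the factor of 10 is an arbitrary starting value to tune):

node.getScaleController().setMinScale(0.01f);
node.getScaleController().setMaxScale(50f);
node.setParent(anchorNode);
node.setRenderable(renderable);
// Scale relative to the parent AnchorNode; since the anchor sits at the
// hit pose, this effectively sets the model's world scale.
node.setLocalScale(new Vector3(10f, 10f, 10f));
node.select();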
I've replaced the tree.glb model in the ThreeJS placeground example (https://github.com/8thwall/web/tree/master/examples/threejs/placeground), but it's not showing. It works fine when using tree.glb.
To debug, I've also tried replacing it with the jellyfish-model.glb available in the examples, but it also doesn't show when tapping on the floor plane.
Is there something wrong with my code, or with the .glb models I'm replacing tree.glb with?
const modelFile = 'tree.glb' // 3D model to spawn at tap
to
const modelFile = 'jellyfish-model.glb' // 3D model to spawn at tap
File structure on github: 8thwall-3js-test-github
Ideally, I'd like to replicate what I've done using Unity+Vuforia in this example (which basically places a .png onto a floor plane): https://www.youtube.com/watch?v=poWvXVB4044
I'd start by looking at the scale of the 3d model. The tree model in the link you provided is quite large, so it's being scaled down in size. See https://github.com/8thwall/web/blob/master/examples/threejs/placeground/index.js#L7-L8
Prove to yourself that the model is being loaded by adding a console.log('model added!') type statement into animateIn() (as that is the model loaded handler)
My guess is that your jellyfish-model.glb is there, just very small. Try adjusting startScale and endScale to larger values and see if that helps.
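For example, a hedged sketch (startScale and endScale mirror the constants on those lines of the placeground example's index.js; the exact animateIn signature may differ from what's shown here):

const startScale = 0.5  // assumed values: increase until the model is visible
const endScale = 5.0

const animateIn = (model, pointX, pointZ, yDegrees) => {
  console.log('model added!')  // proves the load handler actually fired
  model.scale.set(startScale, startScale, startScale)
  // ... existing code that animates the scale up to endScale ...
}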
I have developed an iOS application with AR functionality using ARKit. I have used this project.
The application works with Collada (.dae) files dynamically, which means the client uploads the Collada file and all of the textures somewhere, and the model is shown accordingly with the help of this solution.
The application works mostly fine, but with this one model we experience flashing in the black parts of the model.
As you can see in the pictures, most of the floor should be completely black (apart from one small part which is black and white). When we zoom in on the model (make it bigger), the floor shows fewer white spots (in the first picture almost none, though sometimes a few appear). When we zoom out and make the model smaller, we see many white spots on the floor and in other places that should be black.
Is there any solution for this problem?
The issue you are seeing is known as Z-fighting.
In the given model, the black (bottom/floor) plane and the black/white patterned plane are very close together, so as the user moves further away the renderer cannot reliably determine which plane should be drawn in front.
One solution is to set the readsFromDepthBuffer property of the plane's materials to false.
Once you have a reference to the node/plane, you can set the property to its material.
if let geom = node.geometry {
    for material in geom.materials {
        material.readsFromDepthBuffer = false
    }
}
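For example, if the floor plane's node has a known name in the exported scene (assuming scene is your SCNScene; the name "floor" below is a placeholder for whatever your model actually calls it):

// "floor" is a hypothetical node name; use the plane's real name from the .dae
if let floorNode = scene.rootNode.childNode(withName: "floor", recursively: true),
   let geom = floorNode.geometry {
    for material in geom.materials {
        material.readsFromDepthBuffer = false
    }
}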
I'm currently trying to combine the following sources:
Apple's SceneKit Vehicle Demo (resp. its Swift version), and
ARKit by example (resp. its Swift version).
Each project on its own works like a charm (although I changed the vehicle demo so that the car can be controlled by on-screen buttons).
Now, when I try to combine both projects to create an augmented reality racing game, I run into problems regarding the size of the .dae model of the car: it's too big.
I can scale the model using the (chassis) nodes .scale property, but as soon as I add the SCNPhysicsVehicle properties and behaviour, the car gets reset(?) to its original size. I tried to scale the model in Xcode (open dae file, change scale), but its bounding box remains the same - that tells me that the rescaling didn't work properly.
Any hints?
1) You can scale the .dae model directly in art.scnassets:
art.scnassets -> car.dae -> node inspector -> transforms -> scale the object
2) You can scale the 3D model with an SCNAction:
let scene = SCNScene(named: "art.scnassets/cup.dae")!
let node = scene.rootNode.childNode(withName: "cup", recursively: true)!
// sender here is e.g. a UIPinchGestureRecognizer supplying the scale factor
let action = SCNAction.scale(by: sender.scale, duration: 1.0)
node.runAction(action)
What I like to do is use Blender or some other 3d modeling program to resize your dae model to work in meters. Everything in ARKit is based on meters, so by sticking to the same metric you can get all your models to play well together without having to guess what the scale factor needs to be.
I'm not sure how to fix the model directly in Xcode. However, you can fix it in Blender. Start by importing the object into Blender. Select the object and observe its dimensions. Scale the object to the desired dimensions and apply them by hitting Ctrl + A and selecting Scale. Alternatively, from the Object menu, you can select Apply -> Scale. Now you can export your model with the corrected size.
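As a quick runtime sanity check, you can print the node's bounding box, which SceneKit reports in the geometry's local units (metres, if the export above was done correctly); carNode below stands for your chassis node:

let (minBox, maxBox) = carNode.boundingBox
// For a real car this should come out at roughly 4-5 units (metres) in length.
print("car size: \(maxBox.x - minBox.x) x \(maxBox.y - minBox.y) x \(maxBox.z - minBox.z)")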
I am using SceneKit to show a 3D model on iOS and I would like to use PBR. At the moment I am using the scene editor, not code, to edit my materials. I can set any field of the model's material (metalness, normals, etc.) to a PNG except roughness: when I set it, the whole model turns pink. If I use a float value, there is no problem. Do you have any idea what the problem could be?
Model doesn't display correctly in XNA, ignores some bone deformations
I am very new to 3D modelling, however I am required to do some for a project I have undertaken.
The basic principle is that I need a human model that can be deformed to the user's measurements (measured using Kinect, but that is another story!). For example, I want to stretch the stomach area for larger users, etc.
Using 3Ds Max I have been able to rig a human model using a biped and then add some extra bones to change the stomach:
This all looks well and good; however, when I load it into XNA, the stomach deformation has vanished.
I am somewhat at a loss as to why this has happened and any suggestions would be most welcome, or any links to tutorials on how this kind of thing should be done.
Furthermore, when I view the exported FBX in an FBX viewer plugin for QuickTime, the deformations show absolutely fine.
The code for displaying the model (it's F# code, converted from a C# example; however, I have tried it with the original C# code and get the same results) is:
override this.Draw(gameTime) =
    // Copy any parent transforms.
    let (transforms: Matrix array) = Array.zeroCreate model.Bones.Count
    model.CopyAbsoluteBoneTransformsTo(transforms)

    this.Game.GraphicsDevice.BlendState <- BlendState.Opaque
    this.Game.GraphicsDevice.DepthStencilState <- DepthStencilState.Default

    // Draw the model. A model can have multiple meshes, so loop.
    for mesh in model.Meshes do
        // This is where the mesh orientation is set, as well
        // as our camera and projection.
        for e in mesh.Effects do
            let effect = e :?> BasicEffect
            effect.EnableDefaultLighting()
            effect.World <- mesh.ParentBone.Transform
                            * Matrix.CreateRotationZ(modelRotation)
                            * Matrix.CreateTranslation(modelPosition)
            effect.View <- Matrix.CreateLookAt(cameraPosition, focusPoint, Vector3.Up)
            effect.Projection <- Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f)
        // Draw the mesh, using the effects set above.
        mesh.Draw()

    base.Draw(gameTime)
I wonder if anyone has any ideas as to what has gone wrong, or how to sort this out.
Any suggestions would be much appreciated.
Thanks
If you added the extra bones to the stomach, then told Max to morph some associated vertices in accordance with the new bones by some weighting factor, you would need to modify XNA's default content processor to tell it how to build the model to take that into account. By default, it won't.
Look at the Skinned model sample on the app hub: http://create.msdn.com/en-US/education/catalog/sample/skinned_model
All the joints (elbows, knees, etc.) morph a bit as they flex. Ultimately, you want a single vertex to be influenced by more than one transform.
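For reference, here is a hedged F# sketch of what the draw loop above turns into once the model is built with that sample's SkinnedModelProcessor (animationPlayer is the sample's AnimationPlayer type, not stock XNA):

// GetSkinTransforms() returns one matrix per bone, with the vertex
// weights baked into the mesh by the skinned content processor.
let bones = animationPlayer.GetSkinTransforms()
for mesh in model.Meshes do
    for e in mesh.Effects do
        // SkinnedEffect blends up to four bone influences per vertex on the GPU.
        let effect = e :?> SkinnedEffect
        effect.SetBoneTransforms(bones)
        effect.EnableDefaultLighting()
        effect.View <- Matrix.CreateLookAt(cameraPosition, focusPoint, Vector3.Up)
        effect.Projection <- Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f)
    mesh.Draw()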