8th Wall tap-to-place example not showing model replacement - 8thwall-xr

I've replaced the tree.glb model in the ThreeJS placeground example (https://github.com/8thwall/web/tree/master/examples/threejs/placeground), but it's not showing. Everything works fine when using the original tree.glb.
To debug, I've also tried replacing it with the jellyfish-model.glb available in the examples, but it also doesn't show when tapping on the floor plane.
Is there something wrong with my code, or with the .glb models I'm replacing tree.glb with? The only change I made was:
const modelFile = 'tree.glb' // 3D model to spawn at tap
to
const modelFile = 'jellyfish-model.glb' // 3D model to spawn at tap
File structure on github: 8thwall-3js-test-github
Ideally, I'd like to replicate what I've done using Unity+Vuforia in this example (which basically places a .png onto a floor plane): https://www.youtube.com/watch?v=poWvXVB4044

I'd start by looking at the scale of the 3D model. The tree model in the link you provided is quite large, so it's being scaled down. See https://github.com/8thwall/web/blob/master/examples/threejs/placeground/index.js#L7-L8
Prove to yourself that the model is being loaded by adding a console.log('model added!')-style statement inside animateIn() (as that is the model-loaded handler).
My guess is that your jellyfish-model.glb is there, just very small. Try adjusting startScale and endScale to larger values and see if that helps.
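For example (a sketch only: the constant and function names come from the placeground example, but the values and the animateIn signature shown here are assumptions):

const startScale = 0.0001  // tree.glb is huge, so the example scales it way down
const endScale = 0.004     // final scale after the grow-in animation

// For a small model such as jellyfish-model.glb, try much larger values:
// const startScale = 0.1
// const endScale = 2.0

const animateIn = (model, pointX, pointZ, yDegrees) => {
  console.log('model added!')  // if this never prints, the glb did not load
  // ... the example's existing scale-up animation runs here ...
}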

Getting the currently visible entities in RealityKit

Currently, RealityKit doesn't have any method that provides the currently visible entities. In SceneKit we do have a method for that particular functionality: nodesInsideFrustum(of:).
Our internal solution is to create a big fake bounding box in front of the camera and then check for intersections between that "frustum" box and each entity's bounding box. That is, of course, a bit cumbersome and inaccurate. I wonder if someone can come up with a better solution and is willing to share it.
You could combine two ARView methods:
ARView.project(position) to get the 2D point in screen space
ARView.bounds.contains(point) to know whether that point is visible on screen
But that's not enough; you also have to check whether the object is behind you:
Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) gives the position in camera space
the sign of localPosition.z tells you whether the entity is in front of or behind the camera
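Put together, a minimal sketch of those checks (assuming arView is your ARView and cameraAnchor is an AnchorEntity(.camera) already added to the scene; note it only tests the entity's pivot point, not its full bounds):

import RealityKit
import UIKit

func isVisible(_ entity: Entity, in arView: ARView, cameraAnchor: AnchorEntity) -> Bool {
    // Camera space looks down -Z, so a positive z means the entity is behind the camera
    let localPosition = entity.position(relativeTo: cameraAnchor)
    if localPosition.z > 0 { return false }

    // Project the entity's world position into screen space; nil means it can't be projected
    guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else {
        return false
    }
    return arView.bounds.contains(screenPoint)
}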

The Collada 3D model is flashing

I have developed an iOS application with AR functionality using ARKit. I have used this project.
The application works with Collada (.dae) files dynamically, which means the client uploads the Collada file and all of the textures somewhere and the model is shown accordingly with the help of this solution.
The application works mostly fine, but for this model we experience the black part of the model flashing.
As you can see in the pictures, most of the floor should be completely black (apart from one small part which is black and white). When we zoom in (make the model bigger), the model is shown with fewer white spots on the floor (in the first picture almost none, though sometimes we can see some). When we zoom out and make the model smaller, we see many white spots on the floor and in other places that should be black. [picture: zoomed-in model with fewer white spots]
Is there any solution for this problem?
The issue you are seeing is known as Z-fighting.
In the given model, the black (bottom/floor) plane and the black/white patterned plane are nearly coplanar, so as the user moves further away the renderer can no longer reliably decide which plane should be drawn in front.
One solution is to set the readsFromDepthBuffer property of the plane's materials to false.
Once you have a reference to the node/plane, you can set the property on its materials:
if let geom = node.geometry {
    for material in geom.materials {
        // Disable the depth test for this material so the renderer no longer
        // has to decide which of the two nearly coplanar planes is in front
        material.readsFromDepthBuffer = false
    }
}
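One caveat worth adding (not part of the original answer): a material that ignores the depth buffer is simply drawn on top of whatever was rendered before it, so if other geometry starts showing through incorrectly, you can also control the draw order explicitly. The node names here are hypothetical:

floorNode.renderingOrder = -1     // lower values are rendered earlier
patternedNode.renderingOrder = 0  // drawn after the floor, so it ends up on top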

Isometric Depth Sorting With SpriteKit

I am making a relatively simple isometric map using SpriteKit. I've tried both using the editor and creating it through code, and each time there seems to be some "weighting" between the various tiles, even though they should overlap gracefully given that I'm just setting the styling of a tile.
Here is an example using the tiles from https://kenney.nl. The green is just a standard grass patch, and the road is exactly the same size as it.
Whether I create this map in the Xcode UI or iterate through the tiles in code and paint them, this continues to occur.
However, if I flip the tiles around and paint everything with roads, with grass in the middle, it then seems to sort by whichever tile there are "more of", like in this example:
If I make more of one tile group than another, it seems to overpower it.
So my question is: how can I stop this behavior? I've tried separate tile maps, nesting them inside each other, etc., but at the end of the day I can't get different tiles to exist on the same "plane". I've tried code, the UI, etc. I'd like to use SKTileMapNode if possible, to get the downstream features rather than doing all of the math myself, as in the approach in this article (http://bigspritegames.com/isometric-tile-based-game-part-1/).

Select AR placed objects with a dot in the middle of the screen in Unity

I have seen numerous AR applications behaving like this: there is a dot in the middle of the screen, and we can position that dot over some object and some content is displayed (I attached an image in case I was not clear enough). My question is how this kind of behaviour is obtained in Unity. My guess is that you cast a ray from that point, but I don't think that AR-placed objects, from an ADF for example, can be found by the hit from that ray. [picture: the dot selecting objects placed in AR]
I have made it work with the aid of the Google Tango Area Learning demo scene. I placed some objects in the area and cast a ray from the middle of the camera with the "ViewportPointToRay" method. When that ray collides with a GameObject, you can implement whatever functionality you need.
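A minimal sketch of that setup (Unity C#; it assumes the placed objects have Colliders, and the class name is mine):

using UnityEngine;

public class CenterDotSelector : MonoBehaviour
{
    void Update()
    {
        // Build a ray from the middle of the viewport ((0.5, 0.5) in viewport coordinates)
        Ray ray = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        // If the ray hits a collider, that object is under the center dot
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            Debug.Log("Looking at: " + hit.collider.gameObject.name);
            // ...show your content for hit.collider.gameObject here...
        }
    }
}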

Model doesn't display correctly in XNA

Model doesn't display correctly in XNA; it ignores some bone deformations.
I am very new to 3D modelling; however, I am required to do some for a project I have undertaken.
The basic principle is that I need a human model that can be deformed to the user's measurements (measured using Kinect, but that is another story!). For example, I want to stretch the stomach area for larger users, etc.
Using 3ds Max I have been able to rig a human model using a biped and then add some extra bones to change the stomach:
This all looks well and good; however, when I load it into XNA, the stomach deformation has vanished:
I am somewhat at a loss as to why this has happened and any suggestions would be most welcome, or any links to tutorials on how this kind of thing should be done.
Furthermore, when I view the exported FBX in an FBX viewer plugin for QuickTime, the deformations show absolutely fine.
The code for displaying the model (it's F# code, converted from a C# example; however, I have tried it with the original C# code and get the same results) is:
override this.Draw(gameTime) =
    // Copy any parent transforms.
    let (transforms: Matrix array) = Array.zeroCreate model.Bones.Count
    model.CopyAbsoluteBoneTransformsTo(transforms)
    this.Game.GraphicsDevice.BlendState <- BlendState.Opaque
    this.Game.GraphicsDevice.DepthStencilState <- DepthStencilState.Default
    // Draw the model. A model can have multiple meshes, so loop.
    for mesh in model.Meshes do
        // This is where the mesh orientation is set, as well
        // as our camera and projection.
        for e in mesh.Effects do
            let effect = e :?> BasicEffect
            effect.EnableDefaultLighting()
            effect.World <- mesh.ParentBone.Transform
                            * Matrix.CreateRotationZ(modelRotation)
                            * Matrix.CreateTranslation(modelPosition)
            effect.View <- Matrix.CreateLookAt(cameraPosition, focusPoint, Vector3.Up)
            effect.Projection <- Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f)
        // Draw the mesh, using the effects set above.
        mesh.Draw()
    base.Draw(gameTime)
I wonder if anyone has any ideas as to what has gone wrong, or how to sort it out. Any suggestions would be much appreciated.
Thanks
If you added the extra bones to the stomach, then told Max to morph the associated vertices in accordance with the new bones by some weighting factor, you would need to modify XNA's default content processor to tell it how to build the model to take that into account. By default, it won't.
Look at the Skinned Model sample on the App Hub: http://create.msdn.com/en-US/education/catalog/sample/skinned_model
All the joints (elbows, knees, etc.) morph a bit as they flex. Ultimately, you want a single vertex to be influenced by more than one transform.
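For reference, in XNA 4.0 a model built by the sample's skinned-model processor is drawn with SkinnedEffect rather than BasicEffect, so each vertex can be blended between several bones (a sketch; boneTransforms stands in for the skin matrices the sample's animation player produces):

// Draw a skinned model: vertices are blended between multiple bone transforms
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (SkinnedEffect effect in mesh.Effects)
    {
        effect.SetBoneTransforms(boneTransforms); // skin matrices for the current pose
        effect.EnableDefaultLighting();
        effect.World = world;
        effect.View = view;
        effect.Projection = projection;
    }
    mesh.Draw();
}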
