THREE.js glTF loader (binary) does not display vertex colors?

I have a model with precomputed vertex colors. If I generate a glTF file and load it with THREE.GLTFLoader, I can set scene.overrideMaterial = new THREE.MeshBasicMaterial({vertexColors: THREE.VertexColors}) to replace the default MeshStandardMaterial with a MeshBasicMaterial. The precomputed colors are then displayed correctly.
If, however, I generate a binary glTF (*.glb) file and override the material properties, I have to call scene.add(new THREE.AmbientLight(0xffffff)) to add ambient lighting to the scene; otherwise the display is black.
Is this a deficiency in GLTFLoader, or am I (more likely) missing something?

Your first example (replacing the original materials) appears the way you expect because THREE.MeshBasicMaterial is an unlit/shadeless material type. From the three.js documentation, it "is not affected by lights", and doesn't require lights to appear onscreen.
When you don't replace the material, the default material created by THREE.GLTFLoader is usually THREE.MeshStandardMaterial, which is a physically-based rendering (PBR) material. Because it is physically-based, it requires lighting to appear onscreen. For best results, especially with metallic materials, you may need an environment map for realistic lighting.
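A minimal sketch of both remedies, written in TypeScript against recent three.js releases (the model URL is a placeholder; newer versions take a boolean vertexColors flag where older ones used THREE.VertexColors):

    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new THREE.Scene();

    // Remedy 1: keep the loader's MeshStandardMaterial and light the scene.
    scene.add(new THREE.AmbientLight(0xffffff));

    new GLTFLoader().load('model.glb', (gltf) => {
      // Remedy 2: swap in an unlit material that still reads the
      // COLOR_0 vertex attribute, so no lights are needed at all.
      gltf.scene.traverse((obj) => {
        const mesh = obj as THREE.Mesh;
        if (mesh.isMesh) {
          mesh.material = new THREE.MeshBasicMaterial({ vertexColors: true });
        }
      });
      scene.add(gltf.scene);
    });

Either remedy alone is enough; the second reproduces the overrideMaterial behavior per mesh without requiring any lights.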
It's also possible to create a glTF model that contains unlit materials (or THREE.MeshBasicMaterial) by default, but based on the results you describe it doesn't sound like your glTF model was authored that way.

Related

(2d Texture Array) vs (Texture Atlas) vs (Multiple Binds / Draws)

My question pertains to the best way to handle multiple textures. First some context:
I'm using DirectX-11 in a non-gaming application; the gui uses DirectX exclusively. I'm in the process of making the gui skinnable, so the user can customize the gui to their liking.
I've written the code in such a way that the gui layout and the size of each gui element can change based on a configuration file. The gui currently uses only DirectX primitives via DrawIndexedInstanced, but I'd like to support user-supplied textures. The size of these textures can vary, and there can be as many as two dozen different ones.
I can solve this problem by either:
Dynamically putting together a texture atlas, or...
Forcing all of the textures into a 2d texture array (by making all of the textures the same size via padding as needed), or ...
Splitting up the DrawIndexedInstanced calls so that there's one draw call for each of the different textures (i.e. multiple binds / draws).
I spent the afternoon looking for consensus. I didn't find it. Penny for your thoughts?
The approach that runs fastest is the texture atlas: every quad can then share a single texture bind and a single draw call, which minimizes state changes. This is why 2d games pack their sprites into sprite sheets. Multiple binds / draws is the slowest approach.
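To illustrate the atlas approach, here is a small TypeScript sketch (the names are illustrative, not from the question) of the bookkeeping it requires: each sub-texture's pixel rectangle inside the atlas becomes a UV offset/scale applied to a quad's [0,1] texture coordinates at draw time:

    interface RectPx { x: number; y: number; w: number; h: number; }
    interface UvTransform { offsetU: number; offsetV: number; scaleU: number; scaleV: number; }

    // Convert a sub-texture's pixel rectangle inside the atlas into the
    // offset/scale applied to a quad's [0,1] UVs when it is drawn.
    function atlasUv(rect: RectPx, atlasW: number, atlasH: number): UvTransform {
      return {
        offsetU: rect.x / atlasW,
        offsetV: rect.y / atlasH,
        scaleU: rect.w / atlasW,
        scaleV: rect.h / atlasH,
      };
    }

In the vertex shader (or on the CPU) the quad's UVs become uv' = offset + uv * scale, so all gui elements can be submitted in one DrawIndexedInstanced call with the transform supplied per instance.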

How to overlay the RTDOSE and Image data in the correct position?

I'm currently an MS student in Medical Physics and I have a great need to be able to overlay an isodose distribution from an RTDOSE file onto a CT image from a .dcm file set.
I've managed to extract the image and the dose pixel arrays myself using pydicom and dicom_numpy, but the two arrays are not the same size! So if I simply overlay the two, the dose will not be in the correct position compared with what the Elekta Gamma Plan software shows.
I've played around with dicompyler and 3DSlicer, and they are obviously able to do this even though the arrays are not the same size. However, I don't think I can export the numerical data from those programs; I can only scroll through and view it as an image. How can I overlay the RTDOSE onto a CT image?
Thank you
For what you want, it sounds like you should use SimpleITK (or an equivalent; my experience is with SITK) to do the DICOM handling, not pydicom.
DICOM has a complete built-in system for specifying the 3D position of all pixel data in patient coordinates. It uses a group of attributes in the DICOM files known as the Image Plane Module set of tags. See here for a good overview.
SimpleITK fully understands and uses the 3D Image Plane tags to identify and locate any image in patient coordinates by default, irrespective of things such as the specific pixel spacing, slice thickness, etc.
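For concreteness, here is what those tags encode, as a TypeScript sketch of the standard Image Plane mapping (DICOM PS3.3 C.7.6.2.1.1); the parameter names are mine, but the attributes are ImagePositionPatient, ImageOrientationPatient and PixelSpacing:

    type Vec3 = [number, number, number];

    // Maps a pixel index (row, col) of one slice to a 3D point in
    // patient coordinates, per DICOM PS3.3 C.7.6.2.1.1.
    function pixelToPatient(
      ipp: Vec3,     // ImagePositionPatient (0020,0032): center of the first pixel
      rowDir: Vec3,  // ImageOrientationPatient (0020,0037), first 3 values: direction along a row
      colDir: Vec3,  // ImageOrientationPatient, last 3 values: direction down a column
      pixelSpacing: [number, number], // PixelSpacing (0028,0030): [rowSpacing, colSpacing]
      row: number,
      col: number,
    ): Vec3 {
      const [rowSpacing, colSpacing] = pixelSpacing;
      return [
        ipp[0] + rowDir[0] * colSpacing * col + colDir[0] * rowSpacing * row,
        ipp[1] + rowDir[1] * colSpacing * col + colDir[1] * rowSpacing * row,
        ipp[2] + rowDir[2] * colSpacing * col + colDir[2] * rowSpacing * row,
      ];
    }

Evaluating this for both the CT and the RTDOSE grids expresses both arrays in the same patient coordinate system; resampling one grid onto the other in that shared frame is exactly the work SITK (and 3DSlicer) does for you.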
So, in your case, if you use SITK to open your studies, you should be able to overlay them correctly "out of the box", because SITK will do all the work of parsing the Image Plane Module tags and locating the data in patient coordinates, just like you get with 3DSlicer.
Pydicom, in contrast, doesn't itself try to use any of that information at all. It only gives you the raw pixel arrays (for images).
Note that I use both pydicom and SITK. This isn't a knock on pydicom; it's a question of the right tool for the job. In fact, for many (most?) things I use pydicom, but for any true 3D work, SITK is the easier toolkit to use.

OpenCV with Blender or OpenGL?

I am new to OpenCV, and I did a project with it.
My project tracks an object with a stereo camera, so I know where the object is, and I want to visualize it (in Blender, with OpenGL, or with something else). I have the 3D points in a YML file and I want to display them. I don't know what to use; can anyone help?
It's possible to do this in Blender, but for your simple purpose OpenGL should be enough. To get started with modern OpenGL, check this list of contents: link
In OpenGL, before drawing anything you must send your data (the vertices) to the GPU. Part of this process goes through a Vertex Buffer Object (VBO); it's quite simple once you've programmed it yourself. When you create a VBO, you declare how the data will be used: STATIC or DYNAMIC. Dynamic means that you will change the data over time, e.g. the position of each vertex might change, and that is what you want for a tracked object.
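A minimal sketch of that idea in TypeScript using WebGL2, whose buffer calls mirror the desktop glBufferData/GL_DYNAMIC_DRAW API (the canvas lookup and the tracker callback are placeholders):

    const canvas = document.querySelector('canvas')!;
    const gl = canvas.getContext('webgl2')!;

    // One (x, y, z) position per tracked point, updated every frame.
    let points = new Float32Array([0, 0, 0]);

    const vbo = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    // DYNAMIC_DRAW hints that the contents will change repeatedly.
    gl.bufferData(gl.ARRAY_BUFFER, points, gl.DYNAMIC_DRAW);

    // Call this whenever the stereo tracker produces a new 3D position.
    function onNewTrackerSample(xyz: Float32Array): void {
      points = xyz;
      gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
      // Re-upload only the data; the buffer object itself is reused.
      gl.bufferSubData(gl.ARRAY_BUFFER, 0, points);
    }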

Can a 3dsmax model be exported to include lighting effects?

I created a really basic cylinder, added a material and a glow effect
Can I export the model so that it includes the glow effect, and the model will look like the rendering?
http://imgur.com/VaNJLj4
Clarification:
Can I export the model to .fbx or .x and have it contain the lighting information, so that if I import it into Unity or XNA the model will look like the rendering?
"glowing" is really a post-process kind of effect. Actually a blur. There are quite a few tutorials on how to do this in XNA, but I doubt that you can easily export this from you modeling software (as in not possible at all).
The reason is that doing it usually requires setting up multiple rendertargets, custom shader, etc, which you will have to do yourself.
The reason you need multiple render targets:
When you render a model, only the pixels WITHIN its (visibly) outermost vertices are processed by the pixel shader. Hence, you can't render a smooth fade-out beyond the model itself, as in your picture.
What you usually do is you use a shader that renders your object normally, but also renders a "glow-color" to an other rendertarget.
When all models have finished rendering, you do a blur-effect on this second RT.
Then you blend your main RT with the blurred glow-RT.
This is VERY superficial, and I haven't done it in ages, so please DO check out some tutorials. Also, this bloom sample basically does the same thing, but on the entire scene, I think: http://xbox.create.msdn.com/en-US/education/catalog/sample/bloom
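To give a flavor of the blur pass at the heart of the effect, here is a tiny TypeScript sketch (the function and its parameters are illustrative) of the normalized weights a separable Gaussian blur typically applies to the glow render target:

    // Weights for a separable Gaussian blur covering 2 * radius + 1 texels.
    function gaussianWeights(radius: number, sigma: number): number[] {
      const weights: number[] = [];
      for (let i = -radius; i <= radius; i++) {
        weights.push(Math.exp(-(i * i) / (2 * sigma * sigma)));
      }
      const sum = weights.reduce((a, b) => a + b, 0);
      return weights.map((w) => w / sum); // normalize to preserve brightness
    }

The blur shader samples the glow render target at those offsets along one axis, a second pass repeats it along the other axis, and the result is blended over the main render target.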
Add the glow in 3dsmax with filters; then it will render automatically.
A minor note: 3dsmax is a very big program with lots of possibilities, so take your time finding everything out. Believe me, it will take time.

Problems with some textures in XNA

I have made a model using Sketchup, and have tested rendering it using Blender and it looks great.
However loading it in XNA has two problems.
1. One of the textures becomes see-through; it is not entirely transparent, but items behind it, on the inside of the model, are visible (this is not the case in Blender).
2. I have a rounded part on the model that is divided into smaller parts, and the texture gets out of sync (the positioning is all wrong).
I have tried exporting the model to 3ds and then using Blender to save it as fbx (to eliminate any problems with Sketchup).
I have also tried Autodesk's FBX Converter; same problems =(
I'm using myModel.Draw(World, View, Projection); to render the model.
Any suggestions?
/Jimmy
1) This sounds like a backface culling issue; try this (and also the CullClockwiseFace and CullCounterClockwiseFace variants):
device.RenderState.CullMode = CullMode.None;
Also make sure the depth buffer is enabled.
2) This may be a similar problem to an issue I had with Blender. When you copy the bone transforms, try gModel.CopyBoneTransformsTo(transforms); as well as gModel.CopyAbsoluteBoneTransformsTo(transforms);
