Wrong normals when exporting a Blender .obj file to WebGL

I have an object created in Blender, and I recalculated all the normals with Ctrl+N; I'm sure the normals are set correctly in Blender. However, when I export the object to .obj format with the 'Write Normals' option and load it in WebGL, half of my mesh disappears, as if the normals were exported incorrectly.
The part of the mesh that disappears was generated with the Mirror modifier and LoopTools' Bridge.
When I disable the 'Write Normals' option, the object displays correctly, but then I cannot light it.
What could the problem be? Which step did I miss when exporting the file from Blender?
It should be a sphere-like object, but half of the mesh is missing.

I just found that WebGL's index buffers only support 16-bit indices, so a single draw call can only address 2^16 = 65,536 vertices. Because my object includes normals, the loader has to duplicate shared vertices so that every position/normal pair gets its own index, which pushed my mesh over the limit. I had to downsample my vertex count to about 2^16/3, and the problem was fixed.
Edit: I fixed this problem by reducing my vertex count.
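For context, WebGL 1.0 inherits this limit from OpenGL ES 2.0: element indices are at most GL_UNSIGNED_SHORT unless the OES_element_index_uint extension is available. A minimal OpenGL ES-style C sketch of the constraint (the draw_mesh helper is illustrative, not from the post):

    #include <GLES2/gl2.h>

    /* Draw an indexed mesh with 16-bit indices. Index values above 65535
       cannot be represented, so vertices past that point are unreachable
       and the triangles that use them silently "disappear". */
    void draw_mesh(GLuint index_buffer, GLsizei index_count)
    {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, 0);
    }

Besides decimating the mesh, two other common fixes are splitting the model into several draw calls of at most 65,536 vertices each, or requesting the OES_element_index_uint extension, which enables 32-bit indices.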

Related

Baking a lightmap using custom UV/texture coordinates in Xcode's SceneKit editor?

I'm trying to bake a lightmap texture for my models in Xcode's SceneKit editor, but it isn't generating the texture based on my UV coordinates. Is there a way to specify the texture coordinates for Xcode to use?
I tried it with a simple rounded box. This is the result:
Import Collada model
I import the model with a single UV/texture coordinate source. The preview of it looks correct, as you can see below.
Bake the lightmap
When I then try to bake the lightmap to a texture, it creates new UV/texture coordinates with a corresponding texture, but with a different texture layout and different coordinates (as you can see in the preview below). The quality is not great either: you can see a light gray square on the side. On the real models I'm testing, it looks much worse, and everything appears to be placed incorrectly.
This is what the texture looks like.
So is there any way to have it generate the texture using the provided UV/texture coordinates? Or how am I supposed to use this?

How to load an object with a high vertex and polygon count using SceneKit?

I started building a 3D scene using SceneKit. I converted a .obj file into a .scn file and added a camera, but I couldn't see anything, only solid white. Then I found that the converted object has a high polygon and vertex count: almost 21,000 vertices and 7,220 polygons. I think this is the problem. So what can I do? Is it possible to display objects with this many vertices and polygons?
Edit
I have solved this 'problem': simply give the camera's zFar property a higher value. The model was beyond the camera's far clipping plane, so it was being clipped, not failing to render because of its vertex count.

OpenGL ES: display a human face on iPhone

I need to turn a 2D human face into a 3D face.
I used this link to load an ".obj" file and map the textures. The example only covers a cube and a pyramid; I loaded a human face ".obj" file instead.
This loads the .obj file and displays the human face properly, as shown below.
But my problem is that I need to display different human faces without changing the ".obj" file, just by swapping the texture.
The texture does not map properly, however, because the texture was made for a different model. I tried changing the ".png" file used as the texture; the result is below, where the texture is mapped but not what I expected.
These are my questions:
1) I need to load different textures onto the same model (the same .obj file). Is this possible in OpenGL ES?
2) If the solution to the above problem is "shape matching", how can I do it with OpenGL ES?
3) And finally, a basic question: I need to display the image in a larger area, so how can I make the display area bigger?
mtl2opengl is actually my project, so thanks for using it!
1) The only way you can achieve perfect texture swapping without distortion is if both textures are mapped onto the UV vertices in exactly the same way. Have a look at the images below:
Model A: Blonde Girl
Model B: Ashley Head
As you can see, textures are made to fit the model, so swapping a texture onto a different geometry target will result in distortion. Simplified, human heads/faces have two components: an interior (bone/geometry) and an exterior (skin/texture). The interior obviously defines the exterior, so perfect texture swapping on the same .obj file will not work unless you also change the geometry of the model with each swap.
2) This is possible with a technique called displacement mapping, which can be implemented in OpenGL ES, though expect it to be difficult for multiple heads/faces. It requires your target .obj geometry to start from a fairly generic model, like a mannequin, and then uses each texture to shift the positions of the model's vertices. I think you need to be very comfortable with modeling, graphics, shaders, and math to pull this one off! A sketch of the idea follows below.
Via Wikipedia
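To make the idea concrete, here is a minimal sketch of a displacement-mapping vertex shader (GLSL ES 1.00), stored as a C string for glShaderSource(). The attribute and uniform names are hypothetical, and vertex texture fetch is optional in OpenGL ES 2.0, so check that GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS is greater than zero before relying on it:

    /* Push each vertex outward along its normal by a height read from a
       displacement map; texture2DLod is the vertex-stage sampling
       function in GLSL ES 1.00. */
    static const char *kDisplacementVertexShader =
        "attribute vec3 a_position;\n"
        "attribute vec3 a_normal;\n"
        "attribute vec2 a_texCoord;\n"
        "uniform mat4 u_modelViewProjection;\n"
        "uniform sampler2D u_displacementMap;\n" /* height in red channel */
        "uniform float u_scale;\n"               /* displacement strength */
        "varying vec2 v_texCoord;\n"
        "void main() {\n"
        "    float height = texture2DLod(u_displacementMap, a_texCoord, 0.0).r;\n"
        "    vec3 displaced = a_position + a_normal * height * u_scale;\n"
        "    gl_Position = u_modelViewProjection * vec4(displaced, 1.0);\n"
        "    v_texCoord = a_texCoord;\n"
        "}\n";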
3) I will add more transform options (scale & translate) in the next update. The Xcode project was actually made to show off the Perl script, not as a primer for OpenGL ES on iOS. For now, find the modelViewMatrix and fiddle with this little bit:
GLKMatrix4Scale(_modelViewMatrix, 0.30, 0.33, 0.30);
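Note that GLKMatrix4Scale returns a new matrix rather than modifying its argument, so the result has to be assigned back (or passed on to wherever the matrix is used). A sketch of enlarging the model in the same spot in the code; the values are illustrative, and the GLKMatrix4Translate call is an assumption about how you might recenter it afterwards:

    /* Roughly double the displayed size relative to the 0.30/0.33/0.30
       example above, then nudge the model down slightly in eye space. */
    _modelViewMatrix = GLKMatrix4Scale(_modelViewMatrix, 0.60f, 0.66f, 0.60f);
    _modelViewMatrix = GLKMatrix4Translate(_modelViewMatrix, 0.0f, -0.1f, 0.0f);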
Hope that answers all your questions!

Flipping OpenGL ES Texture Obtained Through CVOpenGLESTexture

In the past day of my foray into OpenGL ES 2.0, while attempting to apply two projective textures -- one sprite animation and one video file texture -- to a skybox, I started simply pounding my hands on the keyboard like stubs, and miraculously it all started working.
However, the texture created from the video file is flipped upside-down. In other words, the texture coordinates for (0,0) seem to be mapping to (0,1), and vice-versa.
The function which creates the video file texture from a CVImageBufferRef, CVOpenGLESTextureCacheCreateTextureFromImage(), includes a parameter "CFDictionaryRef textureAttributes."
CVOpenGLESTextureCache.h helpfully explains: "A CFDictionaryRef containing attributes to be used for creating the CVOpenGLESTexture objects. May be NULL."
I immediately thought of GLKTextureLoader, which allows you to pass in an options dictionary, with one of the available options being used to flip the texture around.
So, I'm a bit confused on two points:
Will passing in a CFDictionaryRef of attributes allow me to easily change things about the texture, like rotation? Or does it somehow mean 'attribute' in the shader sense? (I don't think it means the shader sense, but I also find it odd that they are called attributes rather than options.)
Is there a list somewhere of the key/value pairs that will tell it to do useful things?
I wanted to look into this before finding some other way to flip it around, since if it's possible to do it here, it seems like it would be the most straightforward way, if the procedure is indeed parallel to GLKTextureLoader's options.
After reading through Apple's RosyWriter sample code again, I realized that after creating the texture with CVOpenGLESTextureCacheCreateTextureFromImage(), they flip the texture by modifying the texture coordinates of the vertices.
Since I'm projecting the texture and computing the texture coordinates in the vertex shader, I think the simplest solution for me is to flip the actual movie file asset before I drop it into Xcode, so that's probably what I'll do for each movie. I just realized what a simple solution that is; this way I don't need to fork my vertex shader code into projections that need to be flipped and ones that don't.
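For reference, the coordinate-swapping approach looks roughly like this; a minimal C sketch for a triangle-strip quad, where the array name and vertex order are assumptions, not taken from RosyWriter:

    /* Texture coordinates with the V (t) axis flipped: the bottom of the
       quad samples the top of the texture, so (0,0) and (0,1) trade
       places exactly as described above. */
    static const GLfloat kFlippedTexCoords[] = {
        0.0f, 1.0f, /* bottom-left  vertex -> top-left     of texture */
        1.0f, 1.0f, /* bottom-right vertex -> top-right    of texture */
        0.0f, 0.0f, /* top-left     vertex -> bottom-left  of texture */
        1.0f, 0.0f, /* top-right    vertex -> bottom-right of texture */
    };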
I'd still really appreciate clarification on the attributes argument though, if anybody has info on that.

DirectX 9: Transform texture of one instance of a loaded mesh, but not others

In DirectX 9 I have a .X file which I've loaded, and I draw several copies of it. I need to be able to alter the texture coordinates for each copy (e.g. give each one a different scale). Unfortunately, because they're all the same mesh and use the same materials, transforming the texture for one of them transforms it for all of them. Is there a way to transform the texture of each instance of a loaded mesh individually?
You could use a texture coordinate transform.
You could clone the mesh.
You could use a shader and scale the UVs in the shader.
You'll need to clone the mesh in question and then adjust its information. This will prevent it from affecting the other mesh instances.
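Alternatively, the texture coordinate transform suggested above avoids cloning entirely, because it is device state rather than mesh data: set it before each instance's draw call. A minimal fixed-function C sketch, with the helper name and scale values being illustrative:

    #include <d3d9.h>
    #include <d3dx9.h>

    /* Apply a per-instance UV scale on texture stage 0. Re-setting this
       state before drawing each copy gives every instance of the shared
       mesh its own texture scale. */
    void set_uv_scale(IDirect3DDevice9 *device, float scale_u, float scale_v)
    {
        D3DXMATRIX m;
        D3DXMatrixScaling(&m, scale_u, scale_v, 1.0f);
        IDirect3DDevice9_SetTransform(device, D3DTS_TEXTURE0, &m);
        /* Tell the fixed-function pipeline to apply the 2D part of the
           matrix to this stage's texture coordinates. */
        IDirect3DDevice9_SetTextureStageState(device, 0,
            D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT2);
    }

For example, call set_uv_scale(device, 2.0f, 2.0f) before drawing one copy and set_uv_scale(device, 1.0f, 1.0f) before the next.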
