Threejs Using objects from 3ds Max with textures - webgl

I purchased a 3D human teeth model from TurboSquid in 3ds scene format. All I want to do is extract individual teeth from the file and use them in a three.js script to display them on a web page. What I did was export one tooth from 3ds Max in .obj format and convert it to JSON using the converter provided with three.js. The model appears on the page, but with no textures applied to it.
I am new to 3ds Max and three.js and have no idea what I am missing. Can you please guide me?
Edit:
Here is the JSON metadata:
"metadata" :
{
"formatVersion" : 3.1,
"sourceFile" : "toothz.obj",
"generatedBy" : "OBJConverter",
"vertices" : 1636,
"faces" : 1634,
"normals" : 1636,
"colors" : 0,
"uvs" : 1636,
"materials" : 1
},
"scale" : 1.000000,
"materials": [ {
"DbgColor" : 15658734,
"DbgIndex" : 0,
"DbgName" : "Teeth",
"colorAmbient" : [0.584314, 0.584314, 0.584314],
"colorDiffuse" : [0.584314, 0.584314, 0.584314],
"colorSpecular" : [0.538824, 0.538824, 0.538824],
"illumination" : 2,
"opticalDensity" : 1.5,
"specularCoef" : 70.0,
"transparency" : 1.0
}],
Edit:
Here's the complete code
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 1, 2000 );

loader = new THREE.JSONLoader();
loader.load( "js/JsonModels/toothz.js", function( geometry, materials ) {
    materials[0].shading = THREE.SmoothShading;
    var material = new THREE.MeshFaceMaterial( materials );
    mesh = new THREE.Mesh( geometry, material );
    mesh.scale.set( 3, 3, 3 );
    mesh.position.y = 0;
    mesh.position.x = 0;
    scene.add( mesh );
} );

camera.position.z = 340;

//var ambient = new THREE.AmbientLight( 0x101030 );
//scene.add( ambient );

var directionalLight = new THREE.DirectionalLight( 0xffeedd );
directionalLight.position.set( 0, 0, 1 ).normalize();
scene.add( directionalLight );

var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

function render() {
    requestAnimationFrame( render );
    renderer.render( scene, camera );
}
render();

Texture exporting is tricky in many exporters because there isn't always a clear mapping between the 3D program's materials and three.js materials. Furthermore, materials are not defined in the .obj file at all; a separate .mtl file is required. I'm not sure whether a) the .mtl is exported and b) the obj converter uses it, but in any case your JSON is missing the texture definition in the material. You have a few choices:
Try the MAX exporter, which should allow exporting your model directly to JSON, without the intermediate .obj step.
With the OBJ route, check that all relevant options in the exporter are enabled, that a .mtl file is generated, and that the obj converter finds it.
Alternatively, add the texture manually into the JSON: "mapDiffuse": "my_texture_filename.jpg" (into the material definition in the "materials" section of the file).
You could also add the texture to the material in your model loading callback, as sketched below. However, this is a big hack and not recommended.
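For illustration, here is a minimal sketch of that last option, matching the r5x-era API used in the question. The texture filename is a placeholder, not something from your model:
loader = new THREE.JSONLoader();
loader.load( "js/JsonModels/toothz.js", function( geometry, materials ) {
    // Hack: assign a diffuse texture to the converter-generated material.
    // "tooth_diffuse.jpg" is a placeholder; use the texture that shipped with the model.
    var texture = THREE.ImageUtils.loadTexture( "js/JsonModels/tooth_diffuse.jpg" );
    materials[0].map = texture;
    materials[0].needsUpdate = true;
    var mesh = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( materials ) );
    scene.add( mesh );
} );
This only works if the geometry has UVs (your metadata shows "uvs" : 1636, so it does) and if a light is present, since the converter produces lit materials.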

See the three.js Migration wiki.
Geometry no longer has a materials property, and loader callbacks, which previously had only a geometry parameter, are now also passed a second one, materials.
EDIT: You need to do something like this:
loader = new THREE.JSONLoader();
loader.load( "js/JsonModels/toothz.js", function( geometry, materials ) {
    materials[0].shading = THREE.SmoothShading;
    var material = new THREE.MeshFaceMaterial( materials );
    mesh = new THREE.Mesh( geometry, material );
    mesh.scale.set( 3, 3, 3 );
    mesh.position.y = 0;
    mesh.position.x = 0;
    scene.add( mesh );
} );
three.js r.53

Related

Merging topojson using topomerge messes up winding order

I'm trying to create a custom world map where countries are merged into regions instead of being kept as individual countries. Unfortunately, something seems to get messed up with the winding order somewhere along the way.
As base data I'm using the Natural Earth 10m_admin_0_countries shapefiles available here. As the criteria for merging countries I have a lookup map that looks like this:
const countryGroups = {
    "EUR": ["ALA", "AUT", "BEL"...],
    "AFR": ["AGO", "BDI", "BEN"...],
    ...
}
To merge the shapes I'm using topojson-client. Since I want a higher level of control than the CLI commands offer, I wrote a script. It goes through the lookup map, picks out all the TopoJSON features that belong to a group, merges them into one shape, and places the resulting merged features into a GeoJSON frame:
const topojsonClient = require("topojson-client");
const topojsonServer = require("topojson-server");

const worldTopo = topojsonServer.topology({
    countries: JSON.parse(fs.readFileSync("./world.geojson", "utf-8")),
});

const geoJson = {
    type: "FeatureCollection",
    features: Object.entries(countryGroups).map(([region, ids]) => {
        const relevantCountries = worldTopo.objects.countries.geometries.filter(
            (country, i) =>
                ids.indexOf(country.properties.ISO_A3) >= 0
        );
        return {
            type: "Feature",
            properties: { region, countries: ids },
            geometry: topojsonClient.merge(worldTopo, relevantCountries),
        };
    }),
};
So far everything works well (allegedly). But when I try to visualise the map using a GitHub gist (or any other visualisation tool like Vega-Lite), the shapes all seem to be messed up. I suspect I'm doing something wrong during the merging of the features, but I can't figure out what it is.
When I try to do the same using the CLI it seems to work fine. But since I need more control over the merging, using just the CLI is not really an option.
The last feature, called "World", should contain all remaining countries, but instead, it contains all countries, period. You can see this in the following showcase.
var w = 900,
    h = 300;

var projection = d3.geoMercator().translate([w / 2, h / 2]).scale(100);
var path = d3.geoPath().projection(projection);
var color = d3.scaleOrdinal(d3.schemeCategory10);

var svg = d3.select('svg')
    .attr('width', w)
    .attr('height', h);

var url = "https://gist.githubusercontent.com/Flave/832ebba5726aeca3518b1356d9d726cb/raw/5957dca433cbf50fe4dea0c3fa94bb4f91c754b7/world-regions-wrong.topojson";

d3.json(url)
    .then(data => {
        var geojson = topojson.feature(data, data.objects.regions);
        geojson.features.forEach(f => {
            console.log(f.properties.region, f.properties.countries);
        });
        svg.selectAll('path')
            // Reverse because it's the last feature that is the problem
            .data(geojson.features.reverse())
            .enter()
            .append('path')
            .attr('d', path)
            .attr('fill', d => color(d.properties.region))
            .attr('stroke', d => color(d.properties.region))
            .on('mouseenter', function() {
                d3.select(this).style('fill-opacity', 1);
            })
            .on('mouseleave', function() {
                d3.select(this).style('fill-opacity', null);
            });
    });
path {
    fill-opacity: 0.3;
    stroke-width: 2px;
    stroke-opacity: 0.4;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.7.0/d3.js"></script>
<script src="https://d3js.org/topojson.v3.js"></script>
<svg></svg>
To fix this, I'd make sure to remove countries from the list once they have been assigned to a region. From your data I can't see where "World" is defined, and whether it explicitly lists every country on earth or is a wildcard assignment; either way, as long as every group merges from the full, unmodified set, a catch-all group will swallow everything.
In any case, you should be able to fix it by removing all matches from worldTopo:
const topojsonClient = require("topojson-client");
const topojsonServer = require("topojson-server");

const worldTopo = topojsonServer.topology({
    countries: JSON.parse(fs.readFileSync("./world.geojson", "utf-8")),
});

const geoJson = {
    type: "FeatureCollection",
    features: Object.entries(countryGroups).map(([region, ids]) => {
        const geometries = worldTopo.objects.countries.geometries;
        const relevantCountries = geometries.filter(
            (country) => ids.indexOf(country.properties.ISO_A3) >= 0
        );
        // Remove the matched countries so that later (wildcard) groups cannot merge them again.
        relevantCountries.forEach(c => {
            const index = geometries.indexOf(c);
            if (index === -1) throw Error(`Expected to find country ${c.properties.ISO_A3} in worldTopo`);
            geometries.splice(index, 1);
        });
        return {
            type: "Feature",
            properties: { region, countries: ids },
            geometry: topojsonClient.merge(worldTopo, relevantCountries),
        };
    }),
};
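As a quick sanity check on the fixed version (just a sketch, relying on the structures above): after geoJson has been built, any geometries still left in worldTopo were never claimed by a group, and with the splicing in place no country can end up in two regions.
// Any country not claimed by a group stays behind in worldTopo.
const leftover = worldTopo.objects.countries.geometries.map(g => g.properties.ISO_A3);
if (leftover.length) {
    console.warn("Countries not assigned to any region:", leftover);
}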

Passing / packing attributes to web gl shader

I need help understanding how to pass an additional attribute in an existing system. I am using Nexus to render my mesh. Nexus sort of uses three.js, but down the line it actually makes straight WebGL calls. I want to add a cool-looking wireframe effect, and I found this wireframe shader. Long story short, I added a barycentric attribute to the shader that Nexus loads. I can get the location of the attribute and all that jazz. What I am having trouble with is understanding how to modify Nexus' WebGL calls to pass in additional data for that attribute. The code in question is this:
function readyNode(node) {
    var m = node.mesh;
    var n = node.id;
    var nv = m.nvertices[n];
    var nf = m.nfaces[n];
    var model = node.model;
    var vertices;
    var indices;

    indices = new Uint8Array(node.buffer, nv*m.vsize, nf*m.fsize);

    const konSize = nv * m.vsize + 12 * nv;
    vertices = new Uint8Array(konSize);
    var view = new Uint8Array(node.buffer, 0, nv * m.vsize);
    var v = view.subarray(0, nv*12);
    vertices.set(v);

    var konOff = nv*12;
    var off = nv*12;
    //var barycentric = addBarycentricCoordinates( indices, false );
    //vertices.set(barycentric, konOff);
    //var konOff = nv*12;
    vertices.set(v, konOff);
    konOff += nv*12;

    if(m.vertex.texCoord) {
        var uv = view.subarray(off, off + nv*8);
        vertices.set(uv, konOff);
        konOff += nv*8;
        off += nv*8;
    }
    if(m.vertex.normal && m.vertex.color) {
        var no = view.subarray(off, off + nv*6);
        var co = view.subarray(off + nv*6, off + nv*6 + nv*4);
        vertices.set(co, konOff );
        vertices.set(no, konOff + nv*4);
    }
    else {
        if(m.vertex.normal) {
            var no = view.subarray(off, off + nv*6);
            vertices.set(no, off);
        }
        if(m.vertex.color) {
            var co = view.subarray(off, off + nv*4);
            vertices.set(co, off);
        }
    }

    var gl = node.context.gl;
    var vbo = m.vbo[n] = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

    var ibo = m.ibo[n] = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
}
The original code does not have the barycentric part, and without it everything works. What I don't understand is how to correctly add barycentric data to the "vertices" array to pass it to the shader. The code above doesn't crash, but it does cause the UVs to be out of whack. Barycentric coordinates are just a vec3, similar to position; they are just 1s and 0s, one per vertex as I understand it. I think I should be able to pack them into the "vertices" array with correct offsets. I even modified the above code to pass in positions instead of barycentric data, and it still doesn't work (it causes the UVs to be out of whack). Not sure if this is enough context to reason about it.
Am I doing something wrong with the offsets?
Also, do I need to de-index my data for this to work? The wireframe library does do that.
Let me know if there is more info I can provide.
P.S. Nexus is a chunking / LODing system. It loads patches over time. As far as I understand, it's only one "mesh" object; it just adds more data to the buffers as it downloads them.
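Not Nexus-specific, but one way to isolate the offset problem is to keep the barycentric data out of the interleaved buffer entirely and upload it as its own VBO. The sketch below assumes you already have a per-vertex Float32Array called barycentric (de-indexed if your wireframe shader requires it) and access to the compiled program; the names are placeholders, not Nexus API:
// Upload the barycentric data into a separate buffer (3 floats per vertex).
var baryBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, baryBuffer);
gl.bufferData(gl.ARRAY_BUFFER, barycentric, gl.STATIC_DRAW);

// At draw time, after the shader program is bound:
var baryLoc = gl.getAttribLocation(program, "barycentric");
gl.bindBuffer(gl.ARRAY_BUFFER, baryBuffer);
gl.enableVertexAttribArray(baryLoc);
// 3 components, float, not normalized, tightly packed (stride 0, offset 0).
gl.vertexAttribPointer(baryLoc, 3, gl.FLOAT, false, 0, 0);
If you do pack the data into the existing interleaved buffer instead, every gl.vertexAttribPointer offset for the attributes that follow it (UVs, normals, colors) has to be shifted by the extra nv*12 bytes, which is the most likely reason the UVs end up out of whack.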

GDAL - Warp image to inverted y-axis coordinate system (Apple MapKit) gives image upside down

I have an image of a map that I want to georeference and overlay on the built-in map on iOS. I'm using GDAL programmatically.
Step 1 (OK) - Geo referencing (works fine)
Currently I'm calculating a GeoTransform (the 6 coefficients) from three ground control points to georeference the image, and it gives me the correct 6 coefficients.
Step 2 (PROBLEM) - Warp image + get GeoTransform for new transformed image
The image becomes upside down! This is because the target coordinate system (Apple's own coordinate system in MapKit) has inverted y-axis which increases as you go south.
Question
How can I get GDAL to warp the image correctly (and at the same time give me a correct GeoTransform to go with it)?
What I've tried
I negated the 5th and 6th values of the original GeoTransform before warping. This gives a correct warp of the image, but the new GeoTransform is wrong.
CURRENT CODE
- (WarpResultC*)warpImageWithGeoTransform:(NSArray<NSNumber*>*)geoTransformArray sourceFile:(NSString*)inFilepath destinationFile:(NSString*)outFilepath
{
    GDALAllRegister();

    GDALDriverH hDriver;
    GDALDataType eDT;
    GDALDatasetH hDstDS;
    GDALDatasetH hSrcDS;

    // Open the source file.
    hSrcDS = GDALOpen( inFilepath.UTF8String, GA_ReadOnly );
    CPLAssert( hSrcDS != NULL );

    // Set the GeoTransform on the source image
    // HERE IS WHERE I NEED NEGATIVE VALUES OF 4 & 5 TO GET A PROPER IMAGE
    double geoTransform[] = { geoTransformArray[0].doubleValue, geoTransformArray[1].doubleValue, geoTransformArray[2].doubleValue, geoTransformArray[3].doubleValue, -geoTransformArray[4].doubleValue, -geoTransformArray[5].doubleValue };
    GDALSetGeoTransform(hSrcDS, geoTransform);

    // Create output with same datatype as first input band.
    eDT = GDALGetRasterDataType(GDALGetRasterBand(hSrcDS, 1));

    // Get output driver (GeoTIFF format)
    hDriver = GDALGetDriverByName( "GTiff" );
    CPLAssert( hDriver != NULL );

    // Create a transformer that maps from source pixel/line coordinates
    // to destination georeferenced coordinates (not destination
    // pixel line). We do that by omitting the destination dataset
    // handle (setting it to NULL).
    void *hTransformArg = GDALCreateGenImgProjTransformer( hSrcDS, NULL, NULL, NULL, FALSE, 0, 1 );
    CPLAssert( hTransformArg != NULL );

    // Get approximate output georeferenced bounds and resolution for file.
    double adfDstGeoTransform[6];
    int nPixels=0, nLines=0;
    CPLErr eErr = GDALSuggestedWarpOutput( hSrcDS, GDALGenImgProjTransform, hTransformArg, adfDstGeoTransform, &nPixels, &nLines );
    CPLAssert( eErr == CE_None );
    GDALDestroyGenImgProjTransformer( hTransformArg );

    // Create the output file.
    hDstDS = GDALCreate( hDriver, outFilepath.UTF8String, nPixels, nLines, 4, eDT, NULL );
    CPLAssert( hDstDS != NULL );

    // Write out the projection definition.
    GDALSetGeoTransform( hDstDS, adfDstGeoTransform );

    // Copy the color table, if required.
    GDALColorTableH hCT = GDALGetRasterColorTable( GDALGetRasterBand(hSrcDS, 1) );
    if( hCT != NULL )
        GDALSetRasterColorTable( GDALGetRasterBand(hDstDS, 1), hCT );

    // Setup warp options.
    GDALWarpOptions *psWarpOptions = GDALCreateWarpOptions();
    psWarpOptions->hSrcDS = hSrcDS;
    psWarpOptions->hDstDS = hDstDS;

    /* -------------------------------------------------------------------- */
    /*      Do we have a source alpha band?                                  */
    /* -------------------------------------------------------------------- */
    bool enableSrcAlpha = GDALGetRasterColorInterpretation( GDALGetRasterBand(hSrcDS, GDALGetRasterCount(hSrcDS) )) == GCI_AlphaBand;
    if(enableSrcAlpha) { printf( "Using band %d of source image as alpha.\n", GDALGetRasterCount(hSrcDS) ); }

    /* -------------------------------------------------------------------- */
    /*      Setup band mapping.                                              */
    /* -------------------------------------------------------------------- */
    if(enableSrcAlpha)
        psWarpOptions->nBandCount = GDALGetRasterCount(hSrcDS) - 1;
    else
        psWarpOptions->nBandCount = GDALGetRasterCount(hSrcDS);

    psWarpOptions->panSrcBands = (int *) CPLMalloc(psWarpOptions->nBandCount*sizeof(int));
    psWarpOptions->panDstBands = (int *) CPLMalloc(psWarpOptions->nBandCount*sizeof(int));

    for( int i = 0; i < psWarpOptions->nBandCount; i++ )
    {
        psWarpOptions->panSrcBands[i] = i+1;
        psWarpOptions->panDstBands[i] = i+1;
    }

    /* -------------------------------------------------------------------- */
    /*      Setup alpha bands used if any.                                   */
    /* -------------------------------------------------------------------- */
    if( enableSrcAlpha )
        psWarpOptions->nSrcAlphaBand = GDALGetRasterCount(hSrcDS);

    psWarpOptions->nDstAlphaBand = GDALGetRasterCount(hDstDS);
    psWarpOptions->pfnProgress = GDALTermProgress;

    // Establish reprojection transformer.
    psWarpOptions->pTransformerArg = GDALCreateGenImgProjTransformer( hSrcDS, NULL, hDstDS, NULL, FALSE, 0.0, 1 );
    psWarpOptions->pfnTransformer = GDALGenImgProjTransform;

    // Initialize and execute the warp operation.
    GDALWarpOperation oOperation;
    oOperation.Initialize( psWarpOptions );
    CPLErr warpError = oOperation.ChunkAndWarpImage( 0, 0, GDALGetRasterXSize( hDstDS ), GDALGetRasterYSize( hDstDS ) );
    CPLAssert( warpError == CE_None );
    GDALDestroyGenImgProjTransformer( psWarpOptions->pTransformerArg );
    GDALDestroyWarpOptions( psWarpOptions );

    GDALClose( hDstDS );
    GDALClose( hSrcDS );

    WarpResultC* warpResultC = [WarpResultC new];
    warpResultC.geoTransformValues = @[@(adfDstGeoTransform[0]), @(adfDstGeoTransform[1]), @(adfDstGeoTransform[2]), @(adfDstGeoTransform[3]), @(adfDstGeoTransform[4]), @(adfDstGeoTransform[5])];
    warpResultC.newX = nPixels;
    warpResultC.newY = nLines;
    return warpResultC;
}
You do this by using GDALCreateGenImgProjTransformer. From documentation:
Create a transformer that maps from source pixel/line coordinates to destination georeferenced coordinates (not destination pixel line). We do that by omitting the destination dataset handle (setting it to NULL).
Other related info for your task from gdal_alg.h:
Create image to image transformer.
This function creates a transformation object that maps from pixel/line coordinates on one image to pixel/line coordinates on another image. The images may potentially be georeferenced in different coordinate systems, and may use GCPs to map between their pixel/line coordinates and georeferenced coordinates (as opposed to the default assumption that their geotransform should be used).
This transformer potentially performs three concatenated transformations.
The first stage is from source image pixel/line coordinates to source image georeferenced coordinates, and may be done using the geotransform, or if not defined using a polynomial model derived from GCPs. If GCPs are used this stage is accomplished using GDALGCPTransform().
The second stage is to change projections from the source coordinate system to the destination coordinate system, assuming they differ. This is accomplished internally using GDALReprojectionTransform().
The third stage is converting from destination image georeferenced coordinates to destination image coordinates. This is done using the destination image geotransform, or if not available, using a polynomial model derived from GCPs. If GCPs are used this stage is accomplished using GDALGCPTransform(). This stage is skipped if hDstDS is NULL when the transformation is created.

How to clone an object3d in Three.js?

I want to clone a model loaded with a loader. I found this issue on GitHub, but the solution doesn't work; it seems there has been a structural change in Object3D.
How can I clone an Object3D in the current stable version of Three.js?
In this newer version of three.js you have a clone() method.
For example, I use a queen from chess that I had to duplicate multiple times:
// queen is a mesh
var newQueen = queen.clone();
// make sure to re position to be able to see the new queen!
newQueen.position.set(100,100,100); // or any other coordinates
It will work with any mesh.
I used three.js r61.
Actually mrdoob's answer is your answer...
The loader outputs a geometry that you use to create a mesh.
You need to create a new mesh from the loader-created mesh's geometry and material:
for ( var i = 0; i < 10; i ++ ) {
    var mesh = new THREE.Mesh( loadedMesh.geometry, loadedMesh.material );
    mesh.position.set( i * 100, 0, 0 );
    scene.add( mesh );
}
You want to clone a Mesh and not an Object3D because the output of the loader is a Mesh.
I found one fast solution (not the most efficient).
The GLTFLoader uses THREE.FileLoader internally, which allows you to cache files.
To do so, you need to add this line before you create an instance of the GLTFLoader:
THREE.Cache.enabled = true;
Then you can load the same model multiple times, but only the first load will take longer; for example:
THREE.Cache.enabled = true;

var loader = new GLTFLoader();
var deeplyClonedModels = [];

for( var i = 0; i < 10; i++ ){
    loader.load('foo.gltf', function ( gltf ) {
        deeplyClonedModels.push(gltf.scene);
    });
}
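A related caveat, independent of the caching trick above: Object3D.clone() copies the node hierarchy, but the clones still share geometries and materials with the original. If you want to change a clone's material without affecting the original, give it its own material copies, for example (a sketch, where original stands for whatever mesh or scene you loaded):
var clone = original.clone();

// clone() shares materials, so clone them too before modifying the copy.
clone.traverse( function ( node ) {
    if ( node.isMesh ) {
        node.material = node.material.clone();
    }
} );

scene.add( clone );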

Reading output from geometry shader on CPU

I'm trying to read the output from a geometry shader which is using stream-output to output to a buffer.
The output buffer used by the geometry shader is described like this:
D3D10_BUFFER_DESC vbdesc =
{
    numPoints * sizeof( MESH_VERTEX ),
    D3D10_USAGE_DEFAULT,
    D3D10_BIND_VERTEX_BUFFER | D3D10_BIND_STREAM_OUTPUT,
    0,
    0
};
V_RETURN( pd3dDevice->CreateBuffer( &vbdesc, NULL, &g_pDrawFrom ) );
The geometry shader creates a number of triangles based on a single point (at max 12 triangles per point), and if I understand the SDK correctly I have to create a staging resource in order to read the output from the geometry shader on the CPU.
I have declared another buffer resource (this time setting the STAGING flag) like this:
D3D10_BUFFER_DESC sbdesc =
{
    (numPoints * (12*3)) * sizeof( VERTEX_STREAMOUT ),
    D3D10_USAGE_STAGING,
    NULL,
    D3D10_CPU_ACCESS_READ,
    0
};
V_RETURN( pd3dDevice->CreateBuffer( &sbdesc, NULL, &g_pStaging ) );
After the first draw call of the application the geometry shader is done creating all triangles and can be drawn. However, after this first draw call I would like to be able to read the vertices output by the geometry shader.
Using the buffer staging resource I'm trying to do it like this (right after the first draw call):
pd3dDevice->CopyResource(g_pStaging, g_pDrawFrom);
pd3dDevice->Flush();
void *ptr = 0;
HRESULT hr = g_pStaging->Map( D3D10_MAP_READ, NULL, &ptr );
if( FAILED( hr ) )
return hr;
VERTEX_STREAMOUT *mv = (VERTEX_STREAMOUT*)ptr;
g_pStaging->Unmap();
This compiles and doesn't give any errors at runtime. However, I don't seem to be getting any output.
The geometry shader outputs the following:
struct VSSceneStreamOut
{
    float4 Pos : POS;
    float3 Norm : NORM;
    float2 Tex : TEX;
};
and the VERTEX_STREAMOUT is declared like this:
struct VERTEX_STREAMOUT
{
    D3DXVECTOR4 Pos;
    D3DXVECTOR3 Norm;
    D3DXVECTOR2 Tex;
};
Am I missing something here?
Problem solved by creating the staging resource buffer like this:
D3D10_BUFFER_DESC sbdesc;
ZeroMemory( &sbdesc, sizeof(sbdesc) );
g_pDrawFrom->GetDesc( &sbdesc );
sbdesc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
sbdesc.Usage = D3D10_USAGE_STAGING;
sbdesc.BindFlags = 0;
sbdesc.MiscFlags = 0;
V_RETURN( pd3dDevice->CreateBuffer( &sbdesc, NULL, &g_pStaging ) );
The problem was with the ByteWidth: CopyResource expects the staging buffer to match the source buffer's description, which copying it via GetDesc (and then only adjusting usage, CPU access, and bind flags) guarantees.
