How to export a .3ds file from 3ds Max for use in RenderMonkey?

How do I export a .3ds file from 3ds Max 2010 for use in RenderMonkey?
When I look at the Stream Mapping in RenderMonkey,
there are POSITION, NORMAL, TEXCOORD, TANGENT, BINORMAL, TESSFACTOR, and so on.
I want to know how that information is exported so it can be sent to the vertex shader as stream data.
Thanks in advance.

Which streams do you want or need? A .3ds file usually contains POSITION, NORMAL, and TEXCOORD. If you export from 3DSMax to an OBJ, you have more control over exactly what gets exported. TANGENT and BINORMAL are used for shaders such as bump mapping and are generated by RenderMonkey. I don't know how TESSFACTOR is used. Just make sure that your VS input struct uses the correct semantic (pink text in RenderMonkey) for each input stream, and RM will populate them appropriately.
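I don't know RenderMonkey's exact internals, but those stream names are the standard Direct3D 9 declarator usages, which is what your VS input semantics bind against. A C++ sketch of the kind of vertex declaration they correspond to (the function name and byte offsets here are illustrative, assuming one interleaved stream):

    #include <d3d9.h>

    // Illustrative layout: float3 position, float3 normal, float2 uv,
    // float3 tangent, float3 binormal = 56 bytes per vertex. Each
    // D3DDECLUSAGE value is what a POSITION/NORMAL/TEXCOORD/TANGENT/
    // BINORMAL semantic in the VS input struct binds to.
    IDirect3DVertexDeclaration9* CreateStreamDecl(IDirect3DDevice9* device) {
        static const D3DVERTEXELEMENT9 elements[] = {
            { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
            { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
            { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
            { 0, 32, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TANGENT,  0 },
            { 0, 44, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_BINORMAL, 0 },
            D3DDECL_END()
        };
        IDirect3DVertexDeclaration9* decl = nullptr;
        device->CreateVertexDeclaration(elements, &decl);
        return decl;
    }

TESSFACTOR has its own usage (D3DDECLUSAGE_TESSFACTOR), but you are unlikely to need it for a .3ds import.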

You can use DirectX Exporter for 3ds Max to export your scenes to an .x file containing texture coordinates, normals, and binormals. The latest release can even convert a standard material with multiple UV coordinates to a DirectX material (.fx) and generate shader code such as tangent-space normal mapping.

Related

Performing a transpose operation on a DirectX Surface Buffer

I am using IMFSourceReader with hardware acceleration enabled to decode videos and read them into my application. After the ReadSample call, I get hold of the IDirect3DSurface9 from the IMFSample. At this point, I use the LockRect() call to access the raw bytes and copy them into my application's buffer.
I would like to perform additional operations on the GPU such as transpose and a possible conversion of the image data from row-major order to column-major order.
Is there a Blt operation I can set up to do this?
I came across the ID3DXBaseEffect interface but I am not sure that is applicable in my case.
Would appreciate any inputs.
Dinesh
With IDirect3DSurface9, you can use a shader (ID3DXBaseEffect).
To do it on the GPU directly, before copying the raw bytes to your application, I would try this (sketched in code below):
Call IMFSourceReader::GetServiceForStream to query for MR_VIDEO_ACCELERATION_SERVICE and IDirect3DDeviceManager9.
Use IDirect3DDeviceManager9 to obtain the IDirect3DDevice9 (IDirect3DDeviceManager9::LockDevice).
Use IDirect3DDevice9, IDirect3DSurface9, a new render target, and a shader, as usual with DirectX.
Copy the raw bytes from the final render target (after the shader has been applied).
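A rough C++ sketch of those steps, assuming reader is your IMFSourceReader and sample came from ReadSample (the helper name is hypothetical and error handling is omitted; note that StretchRect cannot reorder pixels, so the transpose itself has to be a quad drawn with a pixel shader that swaps the texture coordinates):

    #include <d3d9.h>
    #include <dxva2api.h>
    #include <mfidl.h>
    #include <mfreadwrite.h>

    void ProcessOnGpu(IMFSourceReader* reader, IMFSample* sample) {
        // Step 1: query the device manager the decoder is using.
        IDirect3DDeviceManager9* manager = nullptr;
        reader->GetServiceForStream(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                    MR_VIDEO_ACCELERATION_SERVICE,
                                    IID_PPV_ARGS(&manager));

        // Step 2: lock the IDirect3DDevice9.
        HANDLE hDevice = nullptr;
        manager->OpenDeviceHandle(&hDevice);
        IDirect3DDevice9* device = nullptr;
        manager->LockDevice(hDevice, &device, TRUE);

        // Pull the decoded IDirect3DSurface9 out of the sample.
        IMFMediaBuffer* buffer = nullptr;
        sample->GetBufferByIndex(0, &buffer);
        IDirect3DSurface9* decoded = nullptr;
        MFGetService(buffer, MR_BUFFER_SERVICE, IID_PPV_ARGS(&decoded));

        // Step 3: create a new lockable render target. Dimensions are
        // swapped because the transpose of a W x H frame is H x W.
        D3DSURFACE_DESC desc = {};
        decoded->GetDesc(&desc);
        IDirect3DSurface9* target = nullptr;
        device->CreateRenderTarget(desc.Height, desc.Width, D3DFMT_X8R8G8B8,
                                   D3DMULTISAMPLE_NONE, 0, TRUE, &target, nullptr);
        // ... SetRenderTarget(0, target), draw a quad with the transpose
        // shader, then LockRect on target to read back the bytes (step 4) ...

        manager->UnlockDevice(hDevice, FALSE);
    }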
EDIT
See here: mofo7777's GitHub.
Under MediaFoundationTransform > MFTDirectxAware > MFTVideoShaderEffect, I show the concept.

What are the best options for tippecanoe to process LineString features?

I have a GeoJSON file with a FeatureCollection (more than 300,000 features) of LineStrings; they are road traffic records. I need to convert it to the MVT format using tippecanoe. I'm trying to convert the GeoJSON with these parameters:
tippecanoe data.geojson -pf -pS -zg --detect-shared-borders -o data.mbtiles -f
Then I upload it to my Mapbox account as a tileset and render it with Mapbox GL JS. And there is a problem: not all the features are visible. Moreover, if I reconvert the GeoJSON file, I get a different result! So, what are the best options to use with tippecanoe to convert all the features (LineStrings) without oversimplification, for use with Mapbox GL JS?
P.S. One more thing I noticed: datasets uploaded with Mapbox Studio and then converted to tilesets show info like "This layer contains mostly LineStrings", but with my own tilesets converted with tippecanoe I see the message "No dominant geometry type".
-ae will auto-increase the maxzoom if features are still being dropped at that zoom level. But when zoomed out, it doesn't always look good, depending on the type of features (e.g. missing cadastre doesn't look good)...
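For example, something along these lines should keep every feature (flag meanings are from the tippecanoe README; tune for your data):

    tippecanoe -o data.mbtiles -f -zg -ae -pf -pk -ps data.geojson

Here -pf and -pk drop the per-tile feature-count and tile-size limits, -ps disables line simplification, and -ae extends the maxzoom while features are still being dropped.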

iOS: Import a .obj file via Model I/O without duplicating vertices

I'm trying to import a .obj file for use in SceneKit via the Model I/O framework. I initially used the simple MDLAsset initWithURL: function, but after transferring the mesh to an SCNGeometry, I realized this function was triangulating the mesh, such that each face had 3 unique vertices, and there were separate vertices at the same location for bordering faces. This was causing some major problems with my other functions, so I tried to fix it by instead using the MDLAsset initWithURL:vertexDescriptor:bufferAllocator:preserveTopology: initializer with preserveTopology set to YES and the descriptor/allocator left as the default nil. Preserving topology fixed my problem of duplicated vertices, so the faces/edges were all good, but in the process I lost the normals data.
By lost the normals, I don't mean multiple indexing; I mean that after setting preserveTopology to YES, the buffer did not contain any normals values at all. Whereas before it was v1/n1/v2/n2... and the stride was 24 bytes (3 dimensions * 4 bytes/float * 2 attributes), now the first half of the buffer is v1/v2/... with a stride of 12, and the entire second half of the buffer is just 0.0 floats.
Also, something weird with this: when you look at the SCNGeometrySources of the geometry, there are 2 sources, one with semantic kGeometrySourceSemanticVertex and one with semantic kGeometrySourceSemanticNormal. You would think that the semantic vertex source would contain the position data and the semantic normal source would contain the normal data. However, that is not the case. No matter what you set preserveTopology to, they are buffers sized to contain both position and normal data, with identical values. So when I said before there was no normal data, I mean both of these buffers, semantic vertex AND semantic normal, went from being v1/n1/v2/n2... to v1/v2/.../(0.0, 0.0, 0.0)/(0.0, 0.0, 0.0)/... I went into the MDLMesh's buffer (before the transfer to SceneKit) and found the same problem, so the problem must be with the initWithURL, not with the Model I/O to SceneKit bridge.
So I figured there must be something wrong with the default vertex descriptor and buffer allocator (since I was using nil) and went about trying to create my own that matched these 2 possible data formats. Alas, after much trying, I was unable to get something that worked.
Any ideas on how I should do this? How do I give MDLAsset the proper vertexDescriptor and bufferAllocator (I feel like nil should be OK here) for importing a .obj file? Thanks
An OBJ file with vertices and normals has vertices, indicated by v lines; normals, indicated by vn lines; and faces, indicated by f lines.
The v and vn lines will just be the floating-point values you expect, and the f lines will be of the form:
f v0//n0 v1//n1 etc
Since OpenGL and Metal don't allow multiple indexing, you'll see the first effect of vertices being duplicated. For example,
f 0//0 1//2 2//0
can't work as a vertex buffer because it would require different indices per vertex. So typical OBJ parsers have to create new vertices that allow the face to become
f 0//0 1//1 2//2
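In code, that re-indexing is typically keyed on the whole index tuple, so a vertex is duplicated only when a position is paired with a new normal. A C++ sketch of the idea (types and the function name are hypothetical; real parsers also handle texcoord indices and OBJ's 1-based indexing):

    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Mesh {
        std::vector<Vec3> positions, normals;
        std::vector<uint32_t> indices;
    };

    // corners holds one (positionIndex, normalIndex) pair per face corner,
    // parsed from "f v//n v//n v//n" lines.
    Mesh SingleIndex(const std::vector<Vec3>& positions,
                     const std::vector<Vec3>& normals,
                     const std::vector<std::pair<uint32_t, uint32_t>>& corners) {
        Mesh out;
        std::map<std::pair<uint32_t, uint32_t>, uint32_t> remap;
        for (const auto& c : corners) {
            auto it = remap.find(c);
            if (it == remap.end()) {
                // First time this (position, normal) combination appears:
                // emit a brand-new vertex, duplicating the position if needed.
                uint32_t fresh = static_cast<uint32_t>(out.positions.size());
                out.positions.push_back(positions[c.first]);
                out.normals.push_back(normals[c.second]);
                it = remap.emplace(c, fresh).first;
            }
            out.indices.push_back(it->second);
        }
        return out;
    }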
The preserve topology option doesn't help you. It preserves the connectivity and shape of the mesh (no triangulation occurs, shared edges remain shared) but it still enforces a single index per vertex component.
One solution would be to make sure that the tool outputting your OBJ files uses single indexing during export, if that is an option.
Another option, and this won't solve the problem immediately, would be to file a request that multiple indexing be supported at the Model I/O level. SceneKit would still have to uniquely index, because it has to be able to render.
Another option would be to use a format like PLY that doesn't have multiple indexing.

How to export from 3DSMAX to FBX with tangents and texture coords

I bought a model from TurboSquid in 3DS format and am trying to load it into an XNA project.
I've exported to FBX and turned on the "Tangents and Binormals" export option.
If I do not set basicEffect.TextureEnabled, it renders but without textures. If I turn on TextureEnabled, though, I have problems:
If I turn off "Generate Tangent Frames" in the content processor, I get "The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing." at runtime.
If I turn on "Generate Tangent Frames" in the content processor, I get "Required Vertex Channel TextureCoordinate0 not found" at build time.
So, the question is how to take a model in 3DS, export it so I can use it as an FBX model in XNA and get all of the UV mapping and normals correct. Even the VS2012 FBX preview can render it properly, so it seems like it should have all it needs, but no.
This can be a number of things. If the model was using a 3rd-party plugin or materials other than standard, this will cause the UVs not to align.
My suggestion is to make sure that the materials are in a standard format.
Ungroup the entire model if necessary.
Lastly, if the model is not rigged, make sure it's an editable poly.
From there you can try to export the model again. Are there any other formats that XNA can import?
If this doesn't help, please go to Support.TurboSquid.com and create a support ticket. We will try our best to help.
Christopher Briere
TurboSquid Product Support

Convert .3DS or .OBJ files to .MD2

I am using Metaio's Creator to create an AR event, using a model the client purchased from TurboSquid.com. Every time I try to convert the .3DS file to an .MD2 file, I get an error that there are too many polygons.
Is there a program that can automatically convert the .3DS or .OBJ to an .MD2 without lowering the polygon count, or one that automatically removes polygons without risking the integrity of the model?
MD2 inherently supports only 4096 polygons. As @0r10n said, you have to reduce the number of polygons to make it work with MD2. For conversion, I had the best experience using the QTip plugin for 3ds Max: http://qtipplugin.com/
Very easy to use and very powerful.
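For reference, that 4096-triangle cap is easy to verify on a converted file, since the triangle count sits right in the MD2 header. A small C++ check (header field layout from the Quake II MD2 spec; the function name is hypothetical):

    #include <cstdint>
    #include <cstdio>

    // MD2 header layout from the Quake II spec.
    struct Md2Header {
        int32_t ident;      // magic: "IDP2" little-endian = 0x32504449
        int32_t version;    // must be 8
        int32_t skinwidth, skinheight, framesize;
        int32_t num_skins, num_xyz, num_st, num_tris, num_glcmds, num_frames;
        int32_t ofs_skins, ofs_st, ofs_tris, ofs_frames, ofs_glcmds, ofs_end;
    };

    bool ValidateMd2(const char* path) {
        FILE* f = std::fopen(path, "rb");
        if (!f) return false;
        Md2Header h{};
        bool ok = std::fread(&h, sizeof h, 1, f) == 1
               && h.ident == 0x32504449 && h.version == 8
               && h.num_tris <= 4096;   // the engine-imposed triangle limit
        std::fclose(f);
        return ok;
    }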
If the model has too many polygons, you can import it into a 3D DCC tool like Max, Maya, or Blender and use their tools to reduce the polygon count.
For example, using Blender 2.49 you can use the PolyReducer script, which preserves UV coordinates.
