THREE JS 3ds Max Gradients not applied - webgl

I exported my 3D model from 3ds Max as JSON using the Three.js exporter (https://github.com/mrdoob/three.js/blob/dev/utils/exporters/max/ThreeJSExporter.ms) and as .obj (which I then converted to JSON using convert_obj_three.py).
I loaded the models in my page using three.js, but neither of them retained the gradient that was applied to the model with Gradient Ramp (3ds Max 2009).
How can I recover the gradient so that the model looks the same in the web page as it does in 3ds Max?

3ds Max's Gradient Ramp is not supported. You would have to either bake it into a bitmap texture or create the bitmap dynamically with the canvas API.

Related

How to add image texture to 3D asset in Xcode using SceneKit

I have a 3D model of a coffee mug in .dae format. Now what I need is to place a logo (a PNG image) on it. How can I achieve this?
This isn't really a SceneKit or iOS question. To apply a texture to a 3D model, the model needs UV coordinates per vertex. The process of mapping a 3D model to a 2D texture is known as UV mapping ( https://en.m.wikipedia.org/wiki/UV_mapping ) and is done in 3D software like Blender, 3ds Max and similar packages before the assets (model and textures) are used in SceneKit.
That said, in this case, because a mug is largely a cylinder, you could perhaps get away with using a SCNCylinder (which automatically comes with UV coords) and using the image with the logo, with a transparent background, as a texture for the cylinder. And then scale and position the cylinder over the mug and add it as a child node of the mug.
If you have your model in a node, you can access the material through the geometry, like this:
node.geometry?.firstMaterial?.diffuse.contents = <put your image here>
This will replace the texture of your geometry; I don't really know if that's what you want.

OpenGL Image warping using lookup table

I am working on an Android application that slims or fattens faces it detects. Currently I achieve that with the thin-plate spline algorithm:
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, it seems a lookup-table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create the lookup-table texture to get the warp effect?
Are you really sure you need a lookup texture?
It seems it'd be better to have a textured rectangular mesh (or a non-rectangular one, of course, since the face-detection algorithm you have most likely returns a face-like mesh) and warp it according to the algorithm.
Not only would you be able to do that in a vertex shader, processing each mesh node in parallel, but it is also fewer values to process compared to generating a texture dynamically.
The most compatible way to do that is to store each mesh node's index in the vertex's X coordinate (with the Y coordinate set to 0), and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, where the R and G channels at that index contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are to be loaded from the texture.
This approach allows for dynamic warping without reloading geometry, as long as the data texture is kept up to date, for example by rendering into it with a pixel shader.
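As a minimal sketch of that idea (not code from this answer), assuming OpenGL ES 3.0 on the Android side: a one-row GL_RG32F texture holds the warped position of every mesh node, and the vertex shader fetches its position by index. Names such as warpTex and aIndex are illustrative only.

    #include <GLES3/gl3.h>
    #include <vector>

    // One texel per mesh node: R = warped X, G = warped Y.
    GLuint createWarpTexture(const std::vector<float>& xy, int nodeCount) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RG32F, nodeCount, 1, 0,
                     GL_RG, GL_FLOAT, xy.data());   // width = node count, height = 1
        return tex;
    }

    // Upload new positions each frame (or render into the texture via an FBO instead).
    void updateWarpTexture(GLuint tex, const std::vector<float>& xy, int nodeCount) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, nodeCount, 1, GL_RG, GL_FLOAT, xy.data());
    }

    // Vertex shader: the vertex attribute carries only the node index.
    static const char* kWarpVertexShader = R"(#version 300 es
    in float aIndex;               // mesh node index, stored in the X attribute
    in vec2  aUV;                  // source-image UV for this node
    out vec2 vUV;
    uniform sampler2D warpTex;     // RG at texel (i, 0) = warped position of node i
    void main() {
        vec2 pos = texelFetch(warpTex, ivec2(int(aIndex), 0), 0).rg;
        vUV = aUV;
        gl_Position = vec4(pos, 0.0, 1.0);
    })";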

Difference between Texture2D and Texture2DMS in DirectX11

I'm using SharpDX and I want to do antialiasing on the depth buffer. I need to store the depth buffer as a texture to use later. Is it a good idea to make this texture a Texture2DMS, or should I take another approach?
What I really want to achieve is:
1) Depth buffer scaling
2) Depth test supersampling
(terms I found in section 3.2 of this paper: http://gfx.cs.princeton.edu/pubs/Cole_2010_TFM/cole_tfm_preprint.pdf )
The paper calls for a depth pre-pass. Since this pass requires no color, you should leave the render target unbound, and use an "empty" pixel shader. For depth, you should create a Texture2D (not MS) at 2x or 4x (or some other 2Nx) the width and height of the final render target that you're going to use. This isn't really "supersampling" (since the pre-pass is an independent phase with no actual pixel output) but it's similar.
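As a rough sketch of that pre-pass setup (the question uses SharpDX, but the structures map one-to-one to the plain C++ Direct3D 11 shown here; device, context and the final target size are assumed to come from your application):

    #include <d3d11.h>

    // Creates the high-resolution, depth-only texture for the pre-pass, typeless so it
    // can be written through a depth-stencil view and later read as a shader resource.
    void createPrePassDepth(ID3D11Device* device, UINT finalWidth, UINT finalHeight,
                            ID3D11Texture2D** tex, ID3D11DepthStencilView** dsv,
                            ID3D11ShaderResourceView** srv)
    {
        D3D11_TEXTURE2D_DESC td = {};
        td.Width            = finalWidth  * 2;              // 2x (or 4x, ...) the final render target
        td.Height           = finalHeight * 2;
        td.MipLevels        = 1;
        td.ArraySize        = 1;
        td.Format           = DXGI_FORMAT_R32_TYPELESS;     // typeless so both views below are legal
        td.SampleDesc.Count = 1;                            // a plain Texture2D, not Texture2DMS
        td.Usage            = D3D11_USAGE_DEFAULT;
        td.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
        device->CreateTexture2D(&td, nullptr, tex);

        D3D11_DEPTH_STENCIL_VIEW_DESC dsvd = {};
        dsvd.Format        = DXGI_FORMAT_D32_FLOAT;          // written as depth in the pre-pass
        dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
        device->CreateDepthStencilView(*tex, &dsvd, dsv);

        D3D11_SHADER_RESOURCE_VIEW_DESC srvd = {};
        srvd.Format              = DXGI_FORMAT_R32_FLOAT;    // read as a float texture afterwards
        srvd.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvd.Texture2D.MipLevels = 1;
        device->CreateShaderResourceView(*tex, &srvd, srv);
    }

    // Pre-pass draw: bind only the DSV (no color target) and a null/empty pixel shader:
    //   context->OMSetRenderTargets(0, nullptr, dsv);
    //   context->PSSetShader(nullptr, nullptr, 0);
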
For the second phase, the paper calls for doing multiple samples of the high-resolution depth buffer from the pre-pass. If you followed the sizing above, every pixel will correspond to some (2N)^2 depth values. You'll need to read these values and average them. Fortunately, there's a hardware-accelerated way to do this (called PCF) using SampleCmp with a COMPARISON sampler type. This samples a 2x2 stamp, compares each value to a specified value (pass in the second-phase calculated depth here, and don't forget to add some epsilon value (e.g. 1e-5)), and returns the averaged result. Do 2x2 stamps to cover the entire area of the first-phase depth buffer associated with this pixel, and average the results. The final result represents how much of the current line's spine corresponds to the foremost depth of the pre-pass. Because of the PCF's smooth filtering behavior, as lines become visible, they will slowly fade in, as opposed to the aliased "dotted" line effect described in the paper.
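A sketch of the comparison-sampler setup described above, again in plain C++ Direct3D 11 plus the HLSL call; all names are illustrative and the exact comparison direction depends on your depth convention.

    #include <d3d11.h>

    // Comparison ("PCF") sampler used by SampleCmp; each lookup filters a 2x2 stamp.
    void createCmpSampler(ID3D11Device* device, ID3D11SamplerState** sampler)
    {
        D3D11_SAMPLER_DESC sd = {};
        sd.Filter         = D3D11_FILTER_COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
        sd.AddressU       = D3D11_TEXTURE_ADDRESS_CLAMP;
        sd.AddressV       = D3D11_TEXTURE_ADDRESS_CLAMP;
        sd.AddressW       = D3D11_TEXTURE_ADDRESS_CLAMP;
        sd.ComparisonFunc = D3D11_COMPARISON_LESS_EQUAL;
        device->CreateSamplerState(&sd, sampler);
    }

    // HLSL used in the second phase, kept here as a string for reference only.
    static const char* kCoverageHlsl = R"(
    Texture2D<float>       PrePassDepth : register(t0);
    SamplerComparisonState CmpSampler   : register(s0);

    // Returns the filtered fraction of the 2x2 stamp that passes the depth comparison.
    float CoverageAt(float2 uv, float refDepth)
    {
        const float eps = 1e-5;   // small bias, as suggested in the answer
        return PrePassDepth.SampleCmpLevelZero(CmpSampler, uv, refDepth + eps);
    }
    )";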

Darkening part of a surface in Direct3D 9

In Direct3D 9, I'm trying to modify a surface thus:
Given a rectangle, for each of the pixels in the given surface within the rectangle's bounds, each of the channels (R, G, B, A) would be multiplied by a certain (float) value to either dim or brighten it.
How would I go about doing this? Preferably I want to avoid using LockRect (especially as it seems to not work with the default pool).
If you want to update a surface's pixels directly, you can use Device.UpdateTexture, which copies a texture created in Pool.SystemMemory to a texture created in Pool.Default.
But this doesn't sound like what you want to be doing here. Use an Effect to hardware-accelerate this; if you would like to know how, I can show you.
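Purely as an illustration of what such an Effect could look like (this is not the answerer's code), here is a minimal D3D9 .fx source, wrapped in a C++ string, that multiplies every channel inside a given rectangle by a per-channel factor. You would compile it with D3DXCreateEffect, set Factor and Rect, and draw a quad covering the surface; all parameter names are made up.

    // A minimal D3D9 effect source: multiply every channel inside a rectangle by a factor.
    const char* kDarkenFx = R"(
    texture  SrcTex;
    sampler  SrcSamp = sampler_state { Texture = <SrcTex>; };

    float4 Factor;   // per-channel multiplier (R, G, B, A)
    float4 Rect;     // rectangle in texture coordinates: left, top, right, bottom

    float4 DarkenPS(float2 uv : TEXCOORD0) : COLOR0
    {
        float4 c = tex2D(SrcSamp, uv);
        bool inside = uv.x >= Rect.x && uv.y >= Rect.y && uv.x <= Rect.z && uv.y <= Rect.w;
        return inside ? c * Factor : c;
    }

    technique Darken
    {
        pass P0 { PixelShader = compile ps_3_0 DarkenPS(); }
    }
    )";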

IDirect3DDevice9, setting how textures scale?

In Photoshop you can control how pictures are scaled up and down as 'image interpolation', it has different options like 'Bicubic', 'Bilinear', 'Nearest Neighbour' and such.
I was wondering if I could do something similar in DirectX? Basically if I slap a texture on a quad and stretch the quad how can I control how the texture on the quad is represented?
Thanks for any help!
If you are using the fixed-function pipeline:
http://msdn.microsoft.com/en-us/library/ee421769(VS.85).aspx
Set the D3DSAMP_MAGFILTER, D3DSAMP_MINFILTER and D3DSAMP_MIPFILTER sampler states.
Otherwise, if you're using HLSL, set the Filter option of the sampler object.
There are four types of filtering: NONE, POINT, LINEAR and ANISOTROPIC.
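For the fixed-function route, a small sketch of what those calls look like; the particular filter choices are just one example, and device is assumed to be your IDirect3DDevice9.

    #include <d3d9.h>

    // Per-stage filter states, set here on texture stage 0 (the stage your quad's texture uses).
    void setQuadTextureFiltering(IDirect3DDevice9* device)
    {
        device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);      // texture stretched up on the quad
        device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC); // texture shrunk down
        device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);      // blend between mip levels (trilinear)
        device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 8);               // only used with ANISOTROPIC
    }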
