DirectX 3D: the farther object covers the nearer object - directx

I'm new to DirectX in C#, and there is a problem that has confused me a lot. Basically, I want to render two cubes on the screen, one near the camera and the other farther away. What I expected is that the nearer one would always appear in front of the farther one, but in fact it depends on the rendering order: whichever cube is rendered last appears in front of the other. I've tried clearing the z-buffer, but that does not work at all. Is there something I'm doing wrong?
Here is my code snippet:
private void Form1_Load(object sender, EventArgs e)
{
PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;
presentParams.EnableAutoDepthStencil = true;
presentParams.AutoDepthStencilFormat = DepthFormat.D16;
device = new Device(0, DeviceType.Hardware, this, CreateFlags.MixedVertexProcessing, presentParams);
device.VertexFormat = CustomVertex.PositionColored.Format;
device.RenderState.CullMode = Cull.CounterClockwise;
device.RenderState.Lighting = false;
Matrix projection = Matrix.PerspectiveFovLH((float)Math.PI / 4, this.Width / this.Height, 0f, 10000.0f);
device.Transform.Projection = projection;
}
protected override void OnPaint(PaintEventArgs e)
{
Cube a = new Cube(new Vector3(0, 0, 0), 5);
Cube b = new Cube(new Vector3(0, 0, 15), 5);
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkGray, 1, 0);
device.BeginScene();
Matrix viewMatrix = Matrix.LookAtLH(cameraPosition, targetPosition, up);
device.Transform.View = viewMatrix;
device.DrawIndexedUserPrimitives(PrimitiveType.TriangleList, 0, 8, 12, a.IndexData, false, a.GetVertices());
device.DrawIndexedUserPrimitives(PrimitiveType.TriangleList, 0, 8, 12, b.IndexData, false, b.GetVertices());
device.EndScene();
device.Present();
}

Alright, I finally fixed the problem by changing
Matrix projection = Matrix.PerspectiveFovLH((float)Math.PI / 4, this.Width / this.Height, 0f, 10000.0f);
to
Matrix projection = Matrix.PerspectiveFovLH((float)Math.PI / 4, this.Width / this.Height, 1f, 10000.0f);
But I don't know the reason why this fixes it. Does anyone know?
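For anyone hitting the same thing: a perspective projection divides by z, and with znear = 0 the standard left-handed D3D projection maps every z to the same normalized depth, so the depth test can no longer order fragments; only draw order decides. A minimal sketch of the depth value PerspectiveFovLH produces (Python rather than C#, using the documented D3D formula):

```python
def d3d_depth(z, n, f):
    # Normalized depth after the perspective divide for a left-handed
    # D3D projection: z' = f/(f-n) - f*n/((f-n)*z), with w = z.
    return f / (f - n) - (f * n) / ((f - n) * z)

# With near = 0 every view-space z collapses to depth 1.0,
# so both cubes write the same depth value.
print(d3d_depth(5.0, 0.0, 10000.0))   # 1.0
print(d3d_depth(15.0, 0.0, 10000.0))  # 1.0

# With near = 1 the depths differ and the z-test can order the cubes.
print(d3d_depth(5.0, 1.0, 10000.0) < d3d_depth(15.0, 1.0, 10000.0))  # True
```

That is why changing the near plane from 0f to 1f fixes it: the near plane must be greater than zero for the depth buffer to carry any information.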

Getting the right position in Unity

In OpenCV, a rectangle is shown when something is detected.
Imgproc.rectangle (rgbaMat, new Point (rects [i].x, rects [i].y), new Point (rects [i].x + rects [i].width, rects [i].y + rects [i].height), new Scalar (0, 0, 255, 255), 2);
I want to put a 3D object, for example a cube, at one corner of that rectangle.
myCube.transform.position = new Vector3(rects[0].x, rects[0].y);
The problem is that the cube's position isn't at the rectangle, even though the Transform shows the correct Vector3(float, float, float).
How can I put my cube at the position of the rectangle's corner?
I was having a similar problem. I wanted to place a 3d object in the center of the detected object. I asked on the OpenCV for unity forum and got a reply from the developer.
They provide a package with a solution that worked for me. Overlay a 3D object
In your case, you can change line 68 in DetectFace2DTo3DExample.cs to the positions of your rect.
Matrix4x4 point3DMatrix4x4 = Get2DTo3DMatrix4x4 (new Vector2 (rects [0].x, rects [0].y), gameObject);
After digging a lot (so I don't blame you), I've found the following on this forum, which I think is what you were looking for:
There is an OpenCv Object detection example.
The most interesting method for you, I guess, would be the bottom-most one:
/// <summary>
/// Results the rects to result game objects.
/// </summary>
/// <param name="rects">Rects.</param>
/// <param name="color">Color.</param>
/// <param name="zPos">Z position.</param>
private void ResultRectsToResultGameObjects (IList<object> rects, Color color, float zPos)
{
Vector3[] resultPoints = new Vector3[rects.Count];
float textureWidth = GetComponent<Renderer> ().material.mainTexture.width;
float textureHeight = GetComponent<Renderer> ().material.mainTexture.height;
Matrix4x4 transCenterM = Matrix4x4.TRS (new Vector3 (-textureWidth / 2, -textureHeight / 2, 0), Quaternion.identity, new Vector3 (1, 1, 1));
Vector3 translation = new Vector3 (gameObject.transform.localPosition.x, gameObject.transform.localPosition.y, gameObject.transform.localPosition.z);
Quaternion rotation = Quaternion.Euler (gameObject.transform.localEulerAngles.x, gameObject.transform.localEulerAngles.y, gameObject.transform.localEulerAngles.z);
Vector3 scale = new Vector3 (gameObject.transform.localScale.x / textureWidth, gameObject.transform.localScale.y / textureHeight, 1);
Matrix4x4 trans2Dto3DM = Matrix4x4.TRS (translation, rotation, scale);
for (int i = 0; i < resultPoints.Length; i++)
{
IDictionary rect = (IDictionary)rects [i];
//get center of rect.
resultPoints [i] = new Vector3 ((long)rect ["x"] + (long)rect ["width"] / 2, (long)rect ["y"] + (long)rect ["height"] / 2, 0);
//translate origin to center.
resultPoints [i] = transCenterM.MultiplyPoint3x4 (resultPoints [i]);
//transform from 2D to 3D
resultPoints [i] = trans2Dto3DM.MultiplyPoint3x4 (resultPoints [i]);
//Add resultGameObject.
GameObject result = Instantiate (resultPrefab, resultPoints [i], Quaternion.identity) as GameObject;
result.transform.parent = gameObject.transform;
result.transform.localPosition = new Vector3 (result.transform.localPosition.x, result.transform.localPosition.y, zPos);
result.transform.localEulerAngles = new Vector3 (0, 0, 0);
result.transform.localScale = new Vector3 ((long)rect ["width"] / textureWidth, (long)rect ["height"] / textureHeight, 20);
result.GetComponent<Renderer> ().material.color = color;
resultGameObjects.Add (result);
}
}
Of course I didn't test it, but it looks like it does what you want: place objects at the rects' positions.
And of course you will have to adapt it to your needs, but I hope it is a good starting point for you.
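If it helps to see what the matrix gymnastics in that method amount to, here is a rough Python sketch of the same idea: take the rect's center, move the origin from the texture corner to the texture center, then scale pixels into the quad's local units (the function name and numbers are made up for illustration):

```python
def rect_center_to_local(rect, tex_w, tex_h, quad_scale):
    # Center of the detection rect in pixel coordinates.
    cx = rect["x"] + rect["width"] / 2
    cy = rect["y"] + rect["height"] / 2
    # Translate the origin from the texture corner to the texture center
    # (the transCenterM step above).
    cx -= tex_w / 2
    cy -= tex_h / 2
    # Scale pixels into the quad's local units (the trans2Dto3DM step,
    # minus the rotation/translation of the quad itself).
    return (cx * quad_scale[0] / tex_w, cy * quad_scale[1] / tex_h)

# A rect centered in a 640x480 texture lands at the quad's origin.
print(rect_center_to_local({"x": 300, "y": 220, "width": 40, "height": 40},
                           640, 480, (1.0, 1.0)))  # (0.0, 0.0)
```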

How to create VHS effect on iOS using GPUImage or another library

I am trying to make a VHS effect for an iOS app, just like in this video:
https://www.youtube.com/watch?v=8ipML-T5yDk
I want to achieve this effect with as few filters as possible, to keep the CPU load low.
Basically, what I need is to shift the color levels to create a "chromatic aberration", change the sharpen parameters, and add some Gaussian blur + some noise.
I am using GPUImage. The sharpen and Gaussian blur are easy to apply.
I am having two problems:
1) For the "chromatic aberration", the way it is usually done is to duplicate the video three times, set red to 0 on one copy, blue to 0 on another, and green to 0 on the last one, and blend them together (just like in the tutorial). But doing this on an iPhone would be too CPU-intensive.
Any idea how to achieve the same effect without having to duplicate the video and blend it?
2) I also want to add some noise, but I don't know which GPUImage effect to use. Any idea on this one too?
Thanks a lot,
Sébastian
(I'm not an iOS developer but I hope this can help someone.)
I wrote a VHS filter on Windows, this is what I did:
Crop the video frame to 4:3 aspect ratio and lower the resolution to 360*270.
Lower color saturation, and apply a color matrix to reduce green color to 93% (so the video will look purple).
Apply a convolve matrix to sharpen the video frame directionally. This is the kernel I used:
0 -0.5 0 0
-0.5 2.9 0 -0.5
0 -0.5 0 0
Blend a real blank VHS footage to your video for the noise (search for "VHS overlay" on YouTube).
Video: Before After
Screenshot: Before After
The CPU and GPU consumption is OK. I apply this filter to a real-time camera preview on my old Windows Phone (with a Snapdragon 808), and it works fine.
Code (C#, using Win2D library for GPU acceleration, implementing Windows.Media.Effects.IBasicVideoEffect interface):
public void ProcessFrame(ProcessVideoFrameContext context) //This method is called each frame
{
int outputWidth = 360; //Output Resolution
int outputHeight = 270;
IDirect3DSurface inputSurface = context.InputFrame.Direct3DSurface;
IDirect3DSurface outputSurface = context.OutputFrame.Direct3DSurface;
using (CanvasBitmap inputFrame = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, inputSurface)) //The video frame to be processed
using (CanvasRenderTarget outputFrame = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, outputSurface)) //The video frame after processing
using (CanvasDrawingSession outputFrameDrawingSession = outputFrame.CreateDrawingSession())
using (CanvasRenderTarget croppedFrame = new CanvasRenderTarget(canvasDevice, outputWidth, outputHeight, outputFrame.Dpi))
using (CanvasDrawingSession croppedFrameDrawingSession = croppedFrame.CreateDrawingSession())
using (CanvasBitmap overlay = Task.Run(async () => { return await CanvasBitmap.LoadAsync(canvasDevice, overlayFrames[new Random().Next(0, overlayFrames.Count - 1)]); }).Result) //"overlayFrames" is a list containing video frames from https://youtu.be/SHhRFU2Jyfs, here we just randomly pick one frame for blend
{
double inputWidth = inputFrame.Size.Width;
double inputHeight = inputFrame.Size.Height;
Rect ractangle;
//Crop the inputFrame to 360*270, save it to "croppedFrame"
if (3 * inputWidth > 4 * inputHeight)
{
double x = (inputWidth - inputHeight / 3 * 4) / 2;
ractangle = new Rect(x, 0, inputWidth - 2 * x, inputHeight);
}
else
{
double y = (inputHeight - inputWidth / 4 * 3) / 2;
ractangle = new Rect(0, y, inputWidth, inputHeight - 2 * y);
}
croppedFrameDrawingSession.DrawImage(inputFrame, new Rect(0, 0, outputWidth, outputHeight), ractangle, 1, CanvasImageInterpolation.HighQualityCubic);
//Apply a bunch of effects (mentioned in step 2,3,4) to "croppedFrame"
BlendEffect vhsEffect = new BlendEffect
{
Background = new ConvolveMatrixEffect
{
Source = new ColorMatrixEffect
{
Source = new SaturationEffect
{
Source = croppedFrame,
Saturation = 0.4f
},
ColorMatrix = new Matrix5x4
{
M11 = 1f,
M22 = 0.93f,
M33 = 1f,
M44 = 1f
}
},
KernelHeight = 3,
KernelWidth = 4,
KernelMatrix = new float[]
{
0, -0.5f, 0, 0,
-0.5f, 2.9f, 0, -0.5f,
0, -0.5f, 0, 0,
}
},
Foreground = overlay,
Mode = BlendEffectMode.Screen
};
//And draw the result to "outputFrame"
outputFrameDrawingSession.DrawImage(vhsEffect, ractangle, new Rect(0, 0, outputWidth, outputHeight));
}
}
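One detail worth noting about step 4: BlendEffectMode.Screen never darkens the base frame, which is why blending a mostly-black "VHS overlay" clip adds only the bright noise specks. A quick, purely illustrative sketch of the formula in Python:

```python
def screen_blend(base, overlay):
    # Screen blend, per channel in [0, 1]:
    # result = 1 - (1 - base) * (1 - overlay)
    return tuple(1 - (1 - b) * (1 - o) for b, o in zip(base, overlay))

frame = (0.2, 0.5, 0.8)
# A black overlay pixel leaves the frame (essentially) unchanged...
print(screen_blend(frame, (0.0, 0.0, 0.0)))
# ...while a bright noise speck lightens every channel, never darkens.
print(screen_blend(frame, (0.9, 0.9, 0.9)))
```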

WebGL; Instanced rendering - setting up divisors

I'm trying to draw a lot of cubes in webgl using instanced rendering (ANGLE_instanced_arrays).
However I can't seem to wrap my head around how to setup the divisors. I have the following buffers;
36 vertices (6 faces made from 2 triangles using 3 vertices each).
6 colors per cube (1 for each face).
1 translate per cube.
To reuse the vertices for each cube, I've set their divisor to 0.
For color, I've set the divisor to 2 (i.e., use the same color for two triangles - a face).
For translate, I've set the divisor to 12 (i.e., the same translate for 6 faces * 2 triangles per face).
For rendering I'm calling
ext_angle.drawArraysInstancedANGLE(gl.TRIANGLES, 0, 36, num_cubes);
This however does not seem to render my cubes.
Using a translate divisor of 1 does, but then the colors are way off, with each cube being a single solid color.
I'm thinking it's because my instances are now the full cube, but if I limit the count (i.e., vertices per instance), I don't seem to get all the way through the vertex buffer; effectively I'm just rendering one triangle per cube then.
How would I go about rendering a lot of cubes like this; with varying colored faces?
Instancing works like this:
Eventually you are going to call
ext.drawArraysInstancedANGLE(mode, first, numVertices, numInstances);
So let's say you're drawing instances of a cube. One cube has 36 vertices (6 per face * 6 faces). So
numVertices = 36
And lets say you want to draw 100 cubes so
numInstances = 100
Let's say you have a vertex shader like this
attribute vec4 position;
uniform mat4 matrix;
void main() {
gl_Position = matrix * position;
}
If you did nothing else and just called
var mode = gl.TRIANGLES;
var first = 0;
var numVertices = 36
var numInstances = 100
ext.drawArraysInstancedANGLE(mode, first, numVertices, numInstances);
It would just draw the same cube in exactly the same place 100 times.
Next up you want to give each cube a different translation so you update your shader to this
attribute vec4 position;
attribute vec3 translation;
uniform mat4 matrix;
void main() {
gl_Position = matrix * (position + vec4(translation, 0));
}
You now make a buffer and put one translation per cube, then you set up the attribute like normal
gl.vertexAttribPointer(translationLocation, 3, gl.FLOAT, false, 0, 0)
But you also set a divisor
ext.vertexAttribDivisorANGLE(translationLocation, 1);
That 1 says 'only advance to the next value in the translation buffer once per instance'
Now you want a different color per face per cube, and you only want one color per face in the data (you don't want to repeat colors). There is no divisor setting that will do that. Since your numVertices = 36, you can only choose to advance every vertex (divisor = 0) or once every 36 vertices, i.e. once per instance (divisor = 1), or some multiple of 36 vertices (larger divisors).
So you say, what if we instance faces instead of cubes? Well, now you've got the opposite problem. Put one color per face: numVertices = 6, numInstances = 600 (100 cubes * 6 faces per cube). You set the color's divisor to 1 to advance the color once per face. You can set the translation's divisor to 6 to advance the translation only once every 6 faces (every 6 instances). But now you no longer have a cube, you only have a single face. In other words, you're going to draw 600 faces all facing the same way, with every 6 of them translated to the same spot.
To get a cube back you'd have to add something to orient the face instances in 6 directions.
Ok, so you fill a buffer with 6 orientations. That won't work either. You can't set the divisor to anything that will advance through those 6 orientations once per face and then reset after 6 faces for the next cube. There is only one divisor setting per attribute: it can advance every so many instances, but you want it to advance per face and reset back per cube. No such option exists.
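To see why no divisor works, note that with a divisor of d an instanced attribute simply reads element floor(instance / d): it only ever advances and never resets. A tiny sketch (Python just for the arithmetic):

```python
def attrib_element(instance, divisor):
    # Element an instanced attribute fetches for a given instance
    # when divisor > 0 (ANGLE_instanced_arrays semantics).
    return instance // divisor

# Face-instancing: 6 face-instances per cube,
# color divisor = 1, translation divisor = 6.
colors = [attrib_element(i, 1) for i in range(12)]  # two cubes' worth
trans = [attrib_element(i, 6) for i in range(12)]
print(colors)  # [0, 1, ..., 11] - never wraps back to 0 for cube 2
print(trans)   # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
```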
What you can do instead is draw the cube with 6 draw calls, one per face direction. In other words, you're going to draw all the left faces, then all the right faces, then all the top faces, etc...
To do that we make just 1 face, 1 translation per cube, 1 color per face per cube. We set the divisor on the translation and the color to 1.
Then we draw 6 times, once for each face direction. The difference between each draw is that we pass in an orientation for the face, change the attribute offset for the color attribute, and set its stride to 6 * 4 floats (6 * 4 * 4 bytes).
var vs = `
attribute vec4 position;
attribute vec3 translation;
attribute vec4 color;
uniform mat4 viewProjectionMatrix;
uniform mat4 localMatrix;
varying vec4 v_color;
void main() {
vec4 localPosition = localMatrix * position + vec4(translation, 0);
gl_Position = viewProjectionMatrix * localPosition;
v_color = color;
}
`;
var fs = `
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
`;
var m4 = twgl.m4;
var gl = document.querySelector("canvas").getContext("webgl");
var ext = gl.getExtension("ANGLE_instanced_arrays");
if (!ext) {
alert("need ANGLE_instanced_arrays");
}
var program = twgl.createProgramFromSources(gl, [vs, fs]);
var positionLocation = gl.getAttribLocation(program, "position");
var translationLocation = gl.getAttribLocation(program, "translation");
var colorLocation = gl.getAttribLocation(program, "color");
var localMatrixLocation = gl.getUniformLocation(program, "localMatrix");
var viewProjectionMatrixLocation = gl.getUniformLocation(
program,
"viewProjectionMatrix");
function r(min, max) {
if (max === undefined) {
max = min;
min = 0;
}
return Math.random() * (max - min) + min;
}
function rp() {
return r(-20, 20);
}
// make translations and colors, colors are separated by face
var numCubes = 1000;
var colors = [];
var translations = [];
for (var cube = 0; cube < numCubes; ++cube) {
translations.push(rp(), rp(), rp());
// pick a random color;
var color = [r(1), r(1), r(1), 1];
// now pick 4 similar colors for the faces of the cube
// that way we can tell if the colors are correctly assigned
// to each cube's faces.
var channel = r(3) | 0; // pick a channel 0 - 2 to randomly modify
for (var face = 0; face < 6; ++face) {
color[channel] = r(.7, 1);
colors.push.apply(colors, color);
}
}
var buffers = twgl.createBuffersFromArrays(gl, {
position: [ // one face
-1, -1, -1,
-1, 1, -1,
1, -1, -1,
1, -1, -1,
-1, 1, -1,
1, 1, -1,
],
color: colors,
translation: translations,
});
var faceMatrices = [
m4.identity(),
m4.rotationX(Math.PI / 2),
m4.rotationX(Math.PI / -2),
m4.rotationY(Math.PI / 2),
m4.rotationY(Math.PI / -2),
m4.rotationY(Math.PI),
];
function render(time) {
time *= 0.001;
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.enable(gl.DEPTH_TEST);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.position);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.translation);
gl.enableVertexAttribArray(translationLocation);
gl.vertexAttribPointer(translationLocation, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.color);
gl.enableVertexAttribArray(colorLocation);
ext.vertexAttribDivisorANGLE(positionLocation, 0);
ext.vertexAttribDivisorANGLE(translationLocation, 1);
ext.vertexAttribDivisorANGLE(colorLocation, 1);
gl.useProgram(program);
var fov = 60;
var aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
var projection = m4.perspective(fov * Math.PI / 180, aspect, 0.5, 100);
var radius = 30;
var eye = [
Math.cos(time) * radius,
Math.sin(time * 0.3) * radius,
Math.sin(time) * radius,
];
var target = [0, 0, 0];
var up = [0, 1, 0];
var camera = m4.lookAt(eye, target, up);
var view = m4.inverse(camera);
var viewProjection = m4.multiply(projection, view);
gl.uniformMatrix4fv(viewProjectionMatrixLocation, false, viewProjection);
// 6 faces * 4 floats per color * 4 bytes per float
var stride = 6 * 4 * 4;
var numVertices = 6;
faceMatrices.forEach(function(faceMatrix, ndx) {
var offset = ndx * 4 * 4; // 4 floats per color * 4 bytes per float
gl.vertexAttribPointer(
colorLocation, 4, gl.FLOAT, false, stride, offset);
gl.uniformMatrix4fv(localMatrixLocation, false, faceMatrix);
ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, numVertices, numCubes);
});
requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/2.x/twgl-full.min.js"></script>
<canvas></canvas>

Computing a projective transformation to texture an arbitrary quad

I would like to compute a projective transformation to texture an arbitrary quad in webgl (with three.js and shaders if possible/necessary).
This is what I want to obtain, taken from this answer.
Everything is well described in the post, so I suppose that with a bit of work I could solve the problem. Here is a pseudo-code of the solution:
precompute A matrix (should be trivial since texture coordinates are in [0,1] interval)
compute B matrix according to the vertex positions (not possible in the vertex shader since we need the four coordinates of the points)
use B in the fragment shader to compute the correct texture coordinate at each pixel
However I am wondering if there is an easier method to do that in webgl.
---- Links to related topics ----
There is a similar way to solve the problem mathematically, described here, but since it is a solution for computing a many-to-many point mapping, it seems overkill to me.
I thought this was a solution in OpenGL, but realized it is a solution for simple perspective-correct interpolation, which is luckily enabled by default.
I found many things on trapezoids, which are a simpler version of the more general problem I want to solve: 1, 2 and 3. I first thought those would help, but instead they led me to a lot of reading and misunderstanding.
Finally, this page describes a solution to the problem, but I was skeptical that it was the simplest and most common one. Now I think it might be correct!
---- Conclusion ----
I have been searching a lot for the solution, not because it is a particularly complex problem, but because I was looking for a simple and typical/common solution. I thought it was an easy problem solved in many cases (every video mapping app) and that there would be trivial answers.
Ok I managed to do it with three.js and coffeescript (I had to implement the missing Matrix3 functions):
class Quad

  constructor: (width, height, canvasKeyboard, scene) ->
    @sceneWidth = scene.width
    @sceneHeight = scene.height
    # --- QuadGeometry --- #
    @geometry = new THREE.Geometry()
    normal = new THREE.Vector3( 0, 0, 1 )
    @positions = []
    @positions.push( x: -width/2, y: height/2 )
    @positions.push( x: width/2, y: height/2 )
    @positions.push( x: -width/2, y: -height/2 )
    @positions.push( x: width/2, y: -height/2 )
    for position in @positions
      @geometry.vertices.push( new THREE.Vector3( position.x, position.y, 0 ) )
    uv0 = new THREE.Vector4(0,1,0,1)
    uv1 = new THREE.Vector4(1,1,0,1)
    uv2 = new THREE.Vector4(0,0,0,1)
    uv3 = new THREE.Vector4(1,0,0,1)
    face = new THREE.Face3( 0, 2, 1)
    face.normal.copy( normal )
    face.vertexNormals.push( normal.clone(), normal.clone(), normal.clone() )
    @geometry.faces.push( face )
    @geometry.faceVertexUvs[ 0 ].push( [ uv0.clone(), uv2.clone(), uv1.clone() ] )
    face = new THREE.Face3( 1, 2, 3)
    face.normal.copy( normal )
    face.vertexNormals.push( normal.clone(), normal.clone(), normal.clone() )
    @geometry.faces.push( face )
    @geometry.faceVertexUvs[ 0 ].push( [ uv1.clone(), uv2.clone(), uv3.clone() ] )
    @geometry.computeCentroids()
    # --- Mesh --- #
    @texture = new THREE.Texture(canvasKeyboard[0])
    @texture.needsUpdate = true
    C = new THREE.Matrix4()
    @uniforms = { "texture": { type: "t", value: @texture }, "resolution": { type: "v2", value: new THREE.Vector2(@sceneWidth, @sceneHeight) }, "matC": { type: "m4", value: C } }
    shaderMaterial = new THREE.ShaderMaterial(
      uniforms: @uniforms,
      vertexShader: $('#vertexshader').text(),
      fragmentShader: $('#fragmentshader').text()
    )
    @mesh = new THREE.Mesh( @geometry, shaderMaterial )
    @mesh.position.set(0,0,1)
    scene.add(@mesh)
    # --- Sprites --- #
    @sprites = []
    for i in [0..3]
      position = @positions[i]
      m = new THREE.SpriteMaterial( {color: new THREE.Color('green') ,useScreenCoordinates: true } )
      s = new THREE.Sprite( m )
      s.scale.set( 32, 32, 1.0 )
      s.position.set(position.x,position.y,1)
      scene.add(s)
      @sprites.push(s)
    # --- Mouse handlers --- #
    # those functions enable dragging the four sprites used to control the corners
    scene.$container.mousedown(@mouseDown)
    scene.$container.mousemove(@mouseMove)
    scene.$container.mouseup(@mouseUp)

  screenToWorld: (mouseX, mouseY) ->
    return new THREE.Vector3(mouseX-@sceneX-@sceneWidth/2, -(mouseY-@sceneY)+@sceneHeight/2, 1)

  worldToScreen: (pos) ->
    return new THREE.Vector2((pos.x / @sceneWidth)+0.5, (pos.y / @sceneHeight)+0.5)

  computeTextureProjection: ()=>
    pos1 = @worldToScreen(@sprites[0].position)
    pos2 = @worldToScreen(@sprites[1].position)
    pos3 = @worldToScreen(@sprites[2].position)
    pos4 = @worldToScreen(@sprites[3].position)
    srcMat = new THREE.Matrix3(pos1.x, pos2.x, pos3.x, pos1.y, pos2.y, pos3.y, 1, 1, 1)
    srcMatInv = @inverseMatrix(srcMat)
    srcVars = @multiplyMatrixVector(srcMatInv, new THREE.Vector3(pos4.x, pos4.y, 1))
    A = new THREE.Matrix3(pos1.x*srcVars.x, pos2.x*srcVars.y, pos3.x*srcVars.z, pos1.y*srcVars.x, pos2.y*srcVars.y, pos3.y*srcVars.z, srcVars.x, srcVars.y, srcVars.z)
    dstMat = new THREE.Matrix3(0, 1, 0, 1, 1, 0, 1, 1, 1)
    dstMatInv = @inverseMatrix(dstMat)
    dstVars = @multiplyMatrixVector(dstMatInv, new THREE.Vector3(1, 0, 1))
    B = new THREE.Matrix3(0, dstVars.y, 0, dstVars.x, dstVars.y, 0, dstVars.x, dstVars.y, dstVars.z)
    Ainv = @inverseMatrix(A)
    C = @multiplyMatrices(B,Ainv)
    ce = C.elements
    # I used a Matrix4 since I don't think Matrix3 works in Three.js shaders
    @uniforms.matC.value = new THREE.Matrix4(ce[0], ce[3], ce[6], 0, ce[1], ce[4], ce[7], 0, ce[2], ce[5], ce[8], 0, 0, 0, 0, 0)
and here is the fragment shader:
#ifdef GL_ES
precision highp float;
#endif
uniform sampler2D texture;
uniform vec2 resolution;
uniform mat4 matC;
void main() {
vec4 fragCoordH = vec4(gl_FragCoord.xy/resolution, 1, 0);
vec4 uvw_t = matC*fragCoordH;
vec2 uv_t = vec2(uvw_t.x/uvw_t.z, uvw_t.y/uvw_t.z);
gl_FragColor = texture2D(texture, uv_t);
}
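For anyone who wants to sanity-check the A and B construction outside a shader, here is a small NumPy sketch of the same projective-basis method (the helper names are mine, not part of the code above):

```python
import numpy as np

def basis_to_points(p1, p2, p3, p4):
    # 3x3 matrix mapping the projective basis
    # (1,0,0), (0,1,0), (0,0,1), (1,1,1) to the four given 2D points.
    m = np.array([[p1[0], p2[0], p3[0]],
                  [p1[1], p2[1], p3[1]],
                  [1.0,   1.0,   1.0]])
    v = np.linalg.solve(m, np.array([p4[0], p4[1], 1.0]))
    return m * v  # scale column i by v[i]

def homography(src, dst):
    # Homography taking the 4 src points to the 4 dst points.
    return basis_to_points(*dst) @ np.linalg.inv(basis_to_points(*src))

# Map the unit square onto a trapezoid and check one corner.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (1.5, 1), (0.5, 1)]
H = homography(src, dst)
x, y, w = H @ np.array([1.0, 0.0, 1.0])
print((x / w, y / w))  # approximately (2.0, 0.0)
```

The matC matrix in the fragment shader plays the same role as H here, just mapping in the other direction (screen coordinates back to texture coordinates).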
Additional note
Maptastic is a Javascript/CSS projection mapping utility.
https://github.com/glowbox/maptasticjs

DirectX 11 models with even indices not rendered

I have a problem with DirectX 11 rendering: if I try to render more than one model, I see only the models with an odd index. All models rendered with an even index are not visible.
My code is based on the RasterTek tutorials:
m_dx->BeginScene(0.0f, 0.0f, 0.0f, 1.0f);
{
m_camera->Render();
XMMATRIX view;
m_camera->GetViewMatrix(view);
XMMATRIX world;
m_dx->GetWorldMatrix(world);
XMMATRIX projection;
m_dx->GetProjectionMatrix(projection);
XMMATRIX ortho;
m_dx->GetOrthoMatrix(ortho);
world = XMMatrixTranslation(-2, 0, -4);
m_model->Render(m_dx->GetDeviceContext());
m_texture_shader->Render(m_dx->GetDeviceContext(), m_model->GetIndicesCount(), world, view, projection,
m_model->GetTexture());
world = XMMatrixTranslation(2, 0, -2);
m_model->Render(m_dx->GetDeviceContext());
m_texture_shader->Render(m_dx->GetDeviceContext(), m_model->GetIndicesCount(), world, view, projection,
m_model->GetTexture());
world = XMMatrixTranslation(0, 0, -3);
m_model->Render(m_dx->GetDeviceContext());
m_texture_shader->Render(m_dx->GetDeviceContext(), m_model->GetIndicesCount(), world, view, projection,
m_model->GetTexture());
}
m_dx->EndScene();
Model render method
UINT stride, offset;
stride = sizeof(VertexPosTextureNormal);
offset = 0;
device_context->IASetVertexBuffers(0, 1, &m_vertex_buffer, &stride, &offset);
device_context->IASetIndexBuffer(m_index_buffer, DXGI_FORMAT_R32_UINT, 0);
device_context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
Shader render method:
world = XMMatrixTranspose(world);
view = XMMatrixTranspose(view);
projection = XMMatrixTranspose(projection);
D3D11_MAPPED_SUBRESOURCE mapped_subres;
RETURN_FALSE_IF_FAILED(context->Map(m_matrix_buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped_subres));
MatrixBuffer* data = (MatrixBuffer*)mapped_subres.pData;
data->world = world;
data->view = view;
data->projection = projection;
context->Unmap(m_matrix_buffer, 0);
context->VSSetConstantBuffers(0, 1, &m_matrix_buffer);
context->PSSetShaderResources(0, 1, &texture);
// render
context->IASetInputLayout(m_layout);
context->VSSetShader(m_vertex_shader, NULL, 0);
context->PSSetShader(m_pixel_shader, NULL, 0);
context->PSSetSamplers(0, 1, &m_sampler_state);
context->DrawIndexed(indices, 0, 0);
What could be the reason for this?
Thank you.
This code -
world = XMMatrixTranspose(world);
view = XMMatrixTranspose(view);
projection = XMMatrixTranspose(projection);
transposes the same matrices each time it is called, so they only have the correct value every other call. The world matrix is reset each time in the calling code, but the view and projection matrices are wrong on alternate calls.
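The underlying gotcha is just that transposing twice is the identity: if the render method overwrites the caller's matrices (e.g. they are passed by reference), every other draw call sends the original, untransposed matrices to the shader. A toy Python illustration (not the D3D11 code):

```python
def transpose(m):
    # Plain matrix transpose of a nested list.
    return [list(row) for row in zip(*m)]

view = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Draw call 1: the shader receives the correctly transposed matrix.
draw1 = transpose(view)
# If that result is written back over the shared matrix, draw call 2
# transposes it again...
draw2 = transpose(draw1)
# ...handing the shader the original, untransposed matrix.
print(draw2 == view)  # True
```

The usual fix is to transpose into local copies (or pass the matrices by value) so the view and projection matrices the caller holds are never modified.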