.net DirectX Flip or Mirror an IGraphBuilder (webcam) - directx

I've been going round in circles. I'm new to DirectX and am using DirectShow in C#. I need to flip and mirror a webcam stream; could anyone provide pointers on how to do this from a graph builder (IGraphBuilder)?
Thanks

You could use a full-screen quad and flip the image by changing the texture coordinates in DirectX: define the four vertices with UV coordinates running from 1.0 to 0.0 (instead of 0.0 to 1.0) to flip the image. Alternatively, you can do it in .NET using Image.RotateFlip() with the appropriate value from System.Drawing.RotateFlipType.
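Whichever route you take, the flip and mirror themselves are just reversals of pixel rows or columns. A minimal, language-agnostic sketch of the two operations (NumPy here purely for illustration; in .NET the corresponding values are RotateFlipType.RotateNoneFlipX and RotateNoneFlipY):

```python
import numpy as np

# A tiny 2x3 "frame" (rows x columns), one value per pixel.
frame = np.array([[1, 2, 3],
                  [4, 5, 6]])

# Mirror = reverse the columns (like RotateFlipType.RotateNoneFlipX).
mirrored = frame[:, ::-1]

# Flip = reverse the rows (like RotateFlipType.RotateNoneFlipY).
flipped = frame[::-1, :]

print(mirrored.tolist())  # [[3, 2, 1], [6, 5, 4]]
print(flipped.tolist())   # [[4, 5, 6], [1, 2, 3]]
```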

Related

Mapping infrared images to color images in the RealSense Library

I currently use an Intel D435 camera.
I want to align the left-infrared camera with the color camera.
The align function provided by the RealSense library can only align depth and color.
I heard that in the RealSense camera the left-infrared camera and the depth camera are already aligned.
However, I cannot map the infrared image to the color image with this information. The depth image is mapped to the color image through the align function. I wonder how I can fit the color image to the left-infrared image, which is aligned with the depth image in its initial state.
----------------------------------------
[Realsense Customer Engineering Team Comment]
#Panepo
The align class used in the librealsense demos maps between depth and some other stream, and vice versa. We do not offer other forms of stream alignment.
But here is one suggestion you could try. Basically, the mapping is a triangulation technique: we trace a pixel through its intersection point in 3D space to find its origin in another frame, and this method works properly when the source data is depth (Z16 format). One possible way to map between two non-depth streams is to play three streams (Depth + IR + RGB), calculate the UV map from Depth to Color, and then use that UV map to remap the IR frame (remember that Depth and left IR are aligned by design).
Hope the suggestion gives you some ideas.
----------------------------------------
This is the method suggested by Intel Corporation.
Can you explain how creating a UV map from the depth and color images solves the problem? And does the RealSense 2.0 library have a UV map function?
I would greatly appreciate an answer.
Yes, Intel RealSense SDK 2.0 provides class PointCloud.
So you:
- configure the sensors
- start streaming
- obtain color and depth frames
- get the UV map as follows (C#):
var pointCloud = new PointCloud();
pointCloud.MapTexture(colorFrame);                 // use the color stream as the texture source
var points = pointCloud.Calculate(depthFrame);     // generate the point cloud from the depth frame

int count = depthFrame.Height * depthFrame.Width;
var vertices = new Points.Vertex[count];           // 3D points
var uvMap = new Points.TextureCoordinate[count];   // per-point texture coordinates
points.CopyTo(vertices);
points.CopyTo(uvMap);
The uvMap you get is a normalized depth-to-color mapping.
NOTE: if depth is aligned to color, the sizes of vertices and uvMap are calculated from the color frame's width and height.
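To connect this back to the original question (mapping IR to color), the remap step Intel describes can be sketched with plain arrays. This is a hypothetical NumPy illustration of the idea, not librealsense API: scatter each IR pixel to the color-image position given by the depth-to-color UV map (valid because depth and left IR are pixel-aligned by design).

```python
import numpy as np

def map_ir_to_color(ir, uv_map, color_shape):
    """Scatter an IR frame into color-image space via a depth->color UV map.

    ir:          (H_d, W_d) infrared frame (pixel-aligned with depth by design)
    uv_map:      (H_d, W_d, 2) normalized (u, v) coordinates into the color image
    color_shape: (H_c, W_c) of the color frame
    """
    h_c, w_c = color_shape
    out = np.zeros((h_c, w_c), dtype=ir.dtype)
    # Convert normalized texture coordinates to integer color-pixel indices.
    u = np.clip((uv_map[..., 0] * w_c).astype(int), 0, w_c - 1)
    v = np.clip((uv_map[..., 1] * h_c).astype(int), 0, h_c - 1)
    out[v, u] = ir  # nearest-neighbour scatter; holes would need filling in practice
    return out

# Sanity check with an identity UV map: the IR frame maps onto itself.
ir = np.array([[1, 2], [3, 4]], dtype=np.uint8)
uv = np.array([[[x / 2, y / 2] for x in range(2)] for y in range(2)])
aligned = map_ir_to_color(ir, uv, (2, 2))
```

With real data, uv_map would come from the points.CopyTo(uvMap) call above, reshaped to (height, width, 2).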

Are there any UV Coordinates (or similar) for UIImageView?

I have a simple UIImageView in my view, but I can't find anything in Apple's documentation for changing the UV coordinates of a UIImageView. To convey my idea, this GIF shows how changing the coordinates of four vertices changes how the image is displayed in the final UIImageView.
I tried to find a solution online too (other than documentation) and found none.
I use Swift.
You can achieve that very animation using UIView.transform or CALayer.transform. You'll need basic geometry to convert UV coordinates to a CGAffineTransform or CATransform3D.
I assumed an affine transform would suffice because the transform in your animation is affine (parallel lines stay parallel). In that case, three vertices are free; the fourth is constrained by the other three.
Given three vertices, you can compute the affine transform matrix using: Affine transformation algorithm
To achieve the infinite repeat, use UIImageResizingMode.Tile.
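The "three free vertices" point can be made concrete: three point correspondences determine an affine transform exactly, by solving one small linear system per output coordinate. A sketch of the math (NumPy, purely for illustration; in Swift you would load the six resulting numbers into a CGAffineTransform):

```python
import numpy as np

def affine_from_3_points(src, dst):
    """Solve for the 2x3 affine matrix that maps three src points to three dst points.

    Three correspondences determine an affine transform exactly; the fourth
    corner of a quad is then implied (parallel lines stay parallel)."""
    A = np.array([[x, y, 1.0] for x, y in src])   # one row per correspondence
    B = np.array(dst, dtype=float)                # target x in column 0, y in column 1
    coeffs = np.linalg.solve(A, B)                # 3x2: rows are (a, c), (b, d), (tx, ty)
    return coeffs.T                               # 2x3: [[a, b, tx], [c, d, ty]]

# Unit triangle mapped to the same triangle scaled by 2 and translated by (2, 3):
M = affine_from_3_points([(0, 0), (1, 0), (0, 1)],
                         [(2, 3), (4, 3), (2, 5)])
print(M.tolist())  # [[2.0, 0.0, 2.0], [0.0, 2.0, 3.0]]
```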

Will WebGL ever render points as circles?

On desktop OpenGL, points will sometimes be rendered as circles (if you have set gl_PointSize in the vertex shader). I am tinkering with WebGL and it seems to consistently render points as squares (when gl_PointSize is set). Is there a way to get them to render as circles?
Yes, there is a solution: you can do it with point sprites. Send a texture to the shader and use alpha blending to cut off the unnecessary part of the sprite.
Normally (in desktop OpenGL) you see points rendered as circles when you have MSAA and the POINT_SMOOTH feature enabled.
Below are links with all the information you need :)
OpenGL ES 2.0 Equivalent for ES 1.0 Circles Using GL_POINT_SMOOTH?
http://klazuka.tumblr.com/post/249698151/point-sprites-and-opengl-es-2-0
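A common shader-side variant of this trick needs no texture at all: discard any fragment whose gl_PointCoord lies outside the inscribed circle of the point's square. The same per-fragment test, sketched outside GLSL for illustration (NumPy; each array cell stands for one fragment of an 8x8 point):

```python
import numpy as np

def circle_mask(size):
    """Emulate the fragment test: keep fragments whose point-local coordinate
    (normalized to [0, 1] across the point square, like gl_PointCoord) lies
    within 0.5 of the square's centre."""
    coords = (np.arange(size) + 0.5) / size       # fragment centres in [0, 1]
    u, v = np.meshgrid(coords, coords)
    dist = np.hypot(u - 0.5, v - 0.5)             # distance from square centre
    return dist <= 0.5                            # True = keep, False = "discard"

mask = circle_mask(8)                             # corners False, centre True
```

In the real fragment shader the equivalent is `if (length(gl_PointCoord - 0.5) > 0.5) discard;`.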

OpenGL ES 2.0 Vertex position for Overlay

Currently I'm drawing an object with pins in it using OpenGL ES 2.0 and displaying it on a CAEAGLLayer. I can identify objects via color picking.
Now I need to calculate the screen coordinates of the pins' world coordinates, in order to draw, for example, a label at the right position (I want to use Cocoa Touch components). What would be a proper way to calculate the screen coordinates (hidden objects should be ignored)?
Running through the whole image and performing color picking on each pixel doesn't sound like the right way to go.
Thanks in advance.
Apple provides its own flavour of GL math functions :
See GLKMathUtils documentation.
As Lukas pointed out, you can use it to project (world -> screen coordinates) or un-project (screen -> world coordinates)
So, if you're already using GLKit for your matrix transformations, you can use this :
GLKVector3 screenPoint = GLKMathProject(modelPoint, modelViewMatrix, projectionMatrix, viewport);
I was able to answer this question myself. Desktop OpenGL ships with GLU, whose gluProject() can be used to calculate the position of a vertex on the screen. Unfortunately this function is not available in OpenGL ES 2.0, so you have to do the calculations yourself, or use a math library like OpenGL Mathematics (GLM), which provides glm::project():
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 pinScreenPosition = glm::project(glm::vec3(0, 0, 0), modelViewMatrix, projectionMatrix, glm::vec4(0, 0, screenDimensions.x * screenScale, screenDimensions.y * screenScale));
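For reference, the math that glm::project() (and the old gluProject()) performs is small enough to do by hand: multiply by the modelview and projection matrices, perspective-divide to normalized device coordinates, then map to the viewport. A sketch, assuming column-vector matrices (NumPy for illustration only):

```python
import numpy as np

def project(p, modelview, projection, viewport):
    """World point -> window coordinates, mimicking gluProject()/glm::project().

    viewport = (x, y, width, height)."""
    v = projection @ modelview @ np.array([*p, 1.0])
    ndc = v[:3] / v[3]                              # perspective divide -> [-1, 1]
    x, y, w, h = viewport
    return np.array([x + (ndc[0] + 1) * w / 2,      # window x
                     y + (ndc[1] + 1) * h / 2,      # window y
                     (ndc[2] + 1) / 2])             # depth in [0, 1]

I4 = np.eye(4)
result = project((0.0, 0.0, 0.0), I4, I4, (0, 0, 640, 480))
print(result)  # the origin lands at the centre of a 640x480 viewport: [320. 240. 0.5]
```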

XNA: transform a 2D texture like Photoshop's transform tool

I want to create the same transforming effect on XNA 4 as Photoshop does:
Transform tool is used to scale, rotate, skew, and just distort the perspective of any graphic you’re working with in general
This tutorial shows all the things I want to do in XNA with any texture: http://www.tutorial9.net/tutorials/photoshop-tutorials/using-transform-in-photoshop/
Skew: Skew transformations slant objects either vertically or horizontally.
Distort: Distort transformations allow you to stretch an image in ANY direction freely.
Perspective: The Perspective transformation allows you to add perspective to an object.
Warping an object (this is what I'm most interested in).
I hope you can help me with a tutorial or something already made :D. I think vertices may be the solution, but I'm not sure.
Thanks.
Probably the easiest way to do this in XNA is to pass a Matrix to SpriteBatch.Begin. This is the overload you want to use: MSDN (the transformMatrix argument).
You can also do this with raw vertices, with an effect like BasicEffect by setting its World matrix. Or by setting vertex positions manually, perhaps transforming them with Vector3.Transform().
Most of the transformation matrices you want are provided by the Matrix.Create*() methods (MSDN). For example, CreateScale and CreateRotationZ.
There is no provided method for creating a skew matrix. It should be something like this:
Matrix skew = Matrix.Identity;
skew.M12 = (float)Math.Tan(MathHelper.ToRadians(36.87f));
(That skews by 36.87 degrees, a value I pulled from an old answer of mine. You should be able to find the full maths for a skew matrix via Google.)
Remember that transformations happen around the origin of world space (0,0). If you want to, for example, scale around the centre of your sprite, you need to translate that sprite's centre to the origin, apply a scale, and then translate it back again. You can combine matrix transforms by multiplying them. This example (untested) will scale a 200x200 image around its centre:
Matrix myMatrix = Matrix.CreateTranslation(-100, -100, 0)
* Matrix.CreateScale(2f, 0.5f, 1f)
* Matrix.CreateTranslation(100, 100, 0);
Note: avoid scaling the Z axis to 0, even in 2D.
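To check that the composition does what we want, here is the same translate-scale-translate chain in column-vector convention (NumPy for illustration; note that XNA uses row vectors, which is why the C# snippet reads left to right while this one composes right to left). The sprite's centre must map to itself:

```python
import numpy as np

def translation(tx, ty):
    m = np.eye(3)
    m[0, 2], m[1, 2] = tx, ty
    return m

def scale(sx, sy):
    m = np.eye(3)
    m[0, 0], m[1, 1] = sx, sy
    return m

# Scale a 200x200 sprite around its centre (100, 100):
M = translation(100, 100) @ scale(2.0, 0.5) @ translation(-100, -100)

centre = M @ np.array([100.0, 100.0, 1.0])   # the centre is a fixed point
corner = M @ np.array([0.0, 0.0, 1.0])       # the top-left corner moves outward
print(centre[:2], corner[:2])                # [100. 100.] [-100.   50.]
```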
For perspective there is CreatePerspective. This creates a projection matrix, which is a specific kind of matrix for projecting a 3D scene onto a 2D display, so it is better used with vertices when setting (for example) BasicEffect.Projection. In this case you're best off doing proper 3D rendering.
For distort, just use vertices and place them manually wherever you need them.
