I'm having trouble generating a decent-looking mesh from an image.
Here is an example of an image:
In my project I convert each pixel to a 3D point, with its height determined by how far away it is from the center of the line.
Here is what it looks like when I have created a 3d mesh from the image:
The problem with the mesh is that there are a lot of triangles (and vertices) and it looks really blocky. I triangulate the points by just walking through the 2D image and joining pixel neighbours into triangles.
Are there any algorithms that could be used to generate something better looking (fewer triangles/vertices, smoother transitions)?
Why don't you just sample both the midline and the boundaries of the white region, and triangulate with the constraint that consecutive vertices of the midline form edges? To preserve the shape, the sampling should include all places where the midline and boundaries "bend", i.e. all curvature changes.
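A minimal sketch of that idea in Python, assuming the third-party `triangle` package (a wrapper around Shewchuk's Triangle); the boundary/midline coordinates are hypothetical stand-ins for samples taken from your image:

```python
import numpy as np
import triangle  # pip install triangle

# Boundary of the white region and its midline, sampled only where the
# curves bend (at curvature changes) instead of at every pixel.
boundary = np.array([[0, 0], [40, 2], [80, 0], [80, 10], [40, 8], [0, 10]], float)
midline = np.array([[2, 5], [40, 5], [78, 5]], float)

vertices = np.vstack([boundary, midline])
nb, nm = len(boundary), len(midline)

# Segments: close the boundary loop, and force every pair of consecutive
# midline samples to be an edge of the triangulation (the constraint).
segments = [[i, (i + 1) % nb] for i in range(nb)]
segments += [[nb + i, nb + i + 1] for i in range(nm - 1)]

# 'p' = triangulate a planar straight-line graph, honoring the segments.
# With non-crossing input and no quality flags, no extra points are added.
mesh = triangle.triangulate({"vertices": vertices, "segments": segments}, "p")

# Lift to 3D: boundary vertices at height 0, midline vertices at the ridge
# height, mirroring the original distance-from-midline heuristic.
heights = np.concatenate([np.zeros(nb), np.full(nm, 5.0)])
points3d = np.column_stack([mesh["vertices"], heights])
faces = mesh["triangles"]  # (T, 3) vertex indices
```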
Example of goal:
I see three.js has this example.
It's simply a 3D Cube with many Spheres on its surface.
How can I do something like this using SceneKit?
You could use an array of points on planes, and place spheres at those locations.
Divide each plane by 10 in both directions (X and Y), then make six of these planes and rotate them into the cube face positions.
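A minimal sketch of generating those positions (framework-agnostic NumPy rather than SceneKit; the grid size and cube size are the assumed knobs):

```python
import numpy as np

def cube_face_points(n=10, size=2.0):
    """Return an n x n grid of points on each of the six faces of an
    axis-aligned cube centered at the origin (6 * n * n points total;
    shared edge/corner positions are duplicated across faces)."""
    h = size / 2.0
    u = np.linspace(-h, h, n)
    uu, vv = np.meshgrid(u, u)
    uu, vv = uu.ravel(), vv.ravel()
    faces = []
    for axis in range(3):            # x, y, z
        for sign in (-1.0, 1.0):     # the two opposite faces per axis
            face = np.zeros((n * n, 3))
            face[:, axis] = sign * h             # pin to the face plane
            face[:, (axis + 1) % 3] = uu         # spread across the face
            face[:, (axis + 2) % 3] = vv
            faces.append(face)
    return np.vstack(faces)

points = cube_face_points()   # shape (600, 3): sphere/placard centers
```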
I think performance is probably going to suck, though. This is a lot of polygons, for each of these spheres. Let's imagine each sphere has 200 tris. That's 100 × 6 × 200 = 120,000 triangles.
Probably better to use circular textures on quads, placed facing the camera, at each of these 600 points. Then it's only 1200 triangles.
A cheat's way to do this:
Create an SCNBox with the desired number of vertices in the x, y and z axes.
Then use it as a particle emitter shape, and assign emittance to each vertex at a rate that makes particles always appear at these locations, using a small circle texture and the "look at camera" mode of placard presentation.
Here is that cheat, done with particles:
I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel along the curve path, but this cuts the frame rate from 4000 to 50 FPS on long curves.
I need to store per-pixel Vector2 + length data anyway, so I can produce constant-speed movement along the curve, and I loop through that data to draw the curve as well (a sketch of that lookup follows below).
The curves I need to be able to draw are Bézier, circular and Catmull-Rom.
Any ideas of how to make it more efficient?
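For reference, a minimal sketch of the length-table lookup described above, assuming NumPy and a quadratic Bézier as the example curve:

```python
import numpy as np

def sample_quadratic_bezier(p0, p1, p2, n=256):
    """Sample a quadratic Bezier curve at n parameter values."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

points = sample_quadratic_bezier(np.array([0.0, 0.0]),
                                 np.array([50.0, 100.0]),
                                 np.array([100.0, 0.0]))

# Cumulative arc length at each sample: the "Vector2 + length" table.
lengths = np.concatenate([[0.0], np.cumsum(
    np.linalg.norm(np.diff(points, axis=0), axis=1))])

def point_at_distance(d):
    """Constant-speed lookup: the point d units along the curve."""
    d = np.clip(d, 0.0, lengths[-1])
    return np.array([np.interp(d, lengths, points[:, 0]),
                     np.interp(d, lengths, points[:, 1])])

# Stepping d by a fixed amount per frame yields constant-speed movement.
pos = point_at_distance(42.0)
```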
Maybe I have misunderstood the question, but I did this once (a sketch follows the steps below):
Create the curve and sample x points on it. (Red dots)
Create a mesh from it by calculating the cross vector at each point. (Green lines)
Build a quad between all of these. So basically 5 of them in my picture.
Set the U coordinate to run along the perpendicular direction, and have the V coordinate follow the curve length, so 0 at the start and 1 at the end of it.
You can of course scale V if you want your texture to repeat.
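A minimal sketch of those steps in Python/NumPy (the sample points and half-width below are hypothetical; a real implementation would feed your engine's vertex buffers):

```python
import numpy as np

def ribbon_mesh(samples, half_width):
    """Build a quad-strip mesh along a sampled curve.

    samples: (N, 2) points on the curve (the red dots above).
    Returns interleaved left/right vertices, per-vertex UVs, and quads.
    """
    samples = np.asarray(samples, dtype=float)
    # Tangents via central differences, then the perpendicular
    # "cross vector" at each sample (the green lines above).
    tangents = np.gradient(samples, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    vertices = np.empty((len(samples) * 2, 2))
    vertices[0::2] = samples + normals * half_width   # left edge, U = 0
    vertices[1::2] = samples - normals * half_width   # right edge, U = 1

    # V follows the accumulated curve length, normalized to [0, 1].
    seg = np.linalg.norm(np.diff(samples, axis=0), axis=1)
    v = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
    uvs = np.empty((len(samples) * 2, 2))
    uvs[0::2] = np.column_stack([np.zeros_like(v), v])
    uvs[1::2] = np.column_stack([np.ones_like(v), v])

    # One quad between each consecutive pair of cross sections.
    quads = [(2 * i, 2 * i + 1, 2 * i + 3, 2 * i + 2)
             for i in range(len(samples) - 1)]
    return vertices, uvs, quads

verts, uvs, quads = ribbon_mesh([[0, 0], [10, 6], [20, 8], [30, 6], [40, 0]], 2.0)
```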
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw the texture on the GPU-side using a shader. Drawing it on the CPU-side is not only slow, but bogs down both the CPU and GPU when you need to send it back to the GPU every frame. Much better to draw it GPU-side.
I need to store pixel by pixel Vector2 + length data anyway
The shader can store additional information in the texture. For example, even though you may allocate an RGBA texture, it doesn't have to store color information, since it is your shaders that will interpret the data.
I need to find the size or coordinates of a rectangle that is displayed as a quadrilateral in a 3D image. The quadrilateral is on a plane that lines up with the 3D world's vanishing points. To clarify, the quadrilateral IS a rectangle in the 3D world, and that's the rectangle I want the size of.
I do not need to get all the textures and make a new image. I also do not know the coordinates of the target rectangle as required by the homography (perspective transformation) solutions I've seen, because I don't know the aspect ratio it's supposed to have.
I've read through this thread: proportions of a perspective-deformed rectangle, and the author seemed to find an algorithm that works. However, I've read other research papers that claim to calculate a homography, yet they don't say how they did it. Also, it seems like such a basic operation that there would be something for it in the existing OpenCV library.
Thanks.
I can't seem to tell whether I should be factoring in the origin of the drawn texture when making a rectangle for collision (intersects) detection. Most of the examples I have seen set the origin to (X/2, Y/2) when drawing, but then they do not do anything special when creating the rectangle of the location for detecting collisions. I am experimenting with it, but have not come to any concrete conclusion, especially for small objects. Thanks for looking!
From my own experience, the origin of the quad factors in when considering linear transformations such as scaling and rotation. This has a direct implication for the bounding square that you generate from the quad, as it will affect the bounding square's transformations as well.
It is important to ensure that the two align, so that a transformation maps correctly from one square to the other. So what I would do is ensure that the origin of the bounding square maps to that of the quad.
Personally, I just use the quad's bounding space, calculated from the center of the quad, and test for AABB collisions within those confines. Obviously you need to derive the confines from how far the object extends from its center.
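A minimal sketch of that approach, assuming the sprite is drawn with its origin at (width/2, height/2):

```python
def center_aabb(center_x, center_y, width, height):
    """Bounding rectangle of a quad positioned by its center, matching a
    draw call whose origin is (width / 2, height / 2)."""
    return (center_x - width / 2.0, center_y - height / 2.0, width, height)

def aabb_intersects(a, b):
    """Standard AABB overlap test; boxes are (x, y, w, h) tuples."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# The draw position and the collision box use the same center,
# so the two stay aligned under movement.
hit = aabb_intersects(center_aabb(100, 100, 32, 32),
                      center_aabb(120, 110, 32, 32))
```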
I have a dataset of images with faces. For each face in the dataset I also have a set of 66 2D points that correspond to the face landmarks (nose, eyes, face outline, mouth).
So basically I have the shape of my face in terms of 2D points from my image.
Do you know any algorithm that I can use to rotate my shape so that the face shape is straight? Let's say the pan angle is 30 degrees; I want the shape rotated by 30 degrees so that it is positioned at 0 degrees on the pan angle. I have illustrated below what I mean.
Basically, you can consider the shapes illustrated above to be outlines from my images, represented in 2D. I want to rotate the points of my first shape so that they look like the second shape. A shape is made out of a set of 66 2D points, which are basically pixel coordinates. All I want to do is find the new location of each of those 66 points so that the new shape is rotated by a certain angle about the pan axis.
From your question, I can assume you either have the rotation parameters (e.g. degrees in x,y) or the point correspondences (since you have a database of matched points). Thus you either need to apply or estimate (and apply) a 2D similarity transformation for image alignment/registration. See also the response on this question: face alignment algorithm on images
From rotation angle to new point locations: define a 2D rotation matrix R and transform your point coordinates with it.
From point correspondences between shape A and shape B to rotation: estimate a 2D similarity transform (image alignment) using 3 or more matching points.
From either rotation or point correspondences to warped image: using the similarity transform, map image values (accounting for interpolation and out-of-range pixels) through the underlying coordinate transformation for the entire image grid. (All three cases are sketched below.)
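A minimal sketch of all three cases using NumPy and OpenCV; the landmark array here is a random stand-in for the 66 points:

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts_a = (rng.random((66, 2)) * 100).astype(np.float32)   # hypothetical shape A
theta = np.deg2rad(30.0)

# 1) Known rotation angle -> new point locations via a 2D rotation matrix R.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=np.float32)
pts_b = pts_a @ R.T                                      # rotated shape B

# 2) Point correspondences -> estimated 2D similarity transform
#    (rotation + uniform scale + translation).
M, inliers = cv2.estimateAffinePartial2D(pts_a, pts_b)

# 3) Apply the estimated transform to the whole image grid;
#    cv2.warpAffine handles the interpolation and out-of-range pixels.
img = np.zeros((200, 200, 3), dtype=np.uint8)            # placeholder image
warped = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
```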
(image courtesy of Denis Simakov, AAM Slides)
Most of these are already implemented in OpenCV and MATLAB. See also the background and relevant methods around Active Shape and Active Appearance Models (Tim Cootes' page includes binaries and background material).