Ideally I’d like to be able to take a stencil, position the stencil somewhere on my face, trace the shape, remove the stencil, and then relocate the stencil in the exact same position as before. So say the stencil was of an “X” and I traced the X in the middle of my forehead. I would like to be able to step completely out of the camera’s frame, step back into frame, and be able to relocate the stencil in the exact same position and orientation as before. So the stencil would be getting its position and orientation from my facial landmarks.
I have no idea what I’m doing so ideally I’m just looking for some guidance as to how this could be achieved.
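One possible approach, sketched very roughly below: detect facial landmarks each frame, build a face-local coordinate frame from two stable landmarks (e.g. the outer eye corners), and store the traced stencil position in that frame so it can be re-projected whenever the face is detected again. The library choice (dlib's 68-point landmark model), the landmark indices, and the helper names are just assumptions for illustration.

```cpp
// Sketch: store the stencil pose relative to a face-local frame built from
// two landmarks, so it survives the person leaving and re-entering the frame.
// Assumes dlib's 68-point model; indices 36/45 are taken to be the outer eye
// corners (an assumption for illustration).
#include <dlib/image_processing.h>
#include <cmath>

struct FaceFrame {
    dlib::dpoint origin;   // outer corner of the right eye (landmark 36)
    double       angle;    // rotation of the inter-eye axis
    double       scale;    // distance between the two eye corners
};

FaceFrame face_frame(const dlib::full_object_detection& shape) {
    dlib::dpoint r(shape.part(36).x(), shape.part(36).y());   // right eye, outer corner
    dlib::dpoint l(shape.part(45).x(), shape.part(45).y());   // left eye, outer corner
    dlib::dpoint d = l - r;
    return { r, std::atan2(d.y(), d.x()), d.length() };
}

// Convert the traced stencil position from image pixels into face coordinates...
dlib::dpoint to_face_coords(const FaceFrame& f, dlib::dpoint p) {
    dlib::dpoint d = p - f.origin;
    double c = std::cos(-f.angle), s = std::sin(-f.angle);
    return dlib::dpoint((c * d.x() - s * d.y()) / f.scale,
                        (s * d.x() + c * d.y()) / f.scale);
}

// ...and back into image pixels for a new video frame, using that frame's landmarks.
dlib::dpoint to_image_coords(const FaceFrame& f, dlib::dpoint q) {
    double c = std::cos(f.angle), s = std::sin(f.angle);
    dlib::dpoint d(q.x() * f.scale, q.y() * f.scale);
    return f.origin + dlib::dpoint(c * d.x() - s * d.y(), s * d.x() + c * d.y());
}
```

The same idea works with any landmark detector: as long as the stencil's position, rotation, and size are stored relative to landmarks rather than to screen pixels, it can be re-placed in the same spot on the face.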
I am trying to make my first steps with the Vulkan ray tracing extension from NVIDIA.
More precisely, I am trying to trace a simple triangle as a first step to get things going. However, while I expect my code to render a triangle, somehow a cube is traced instead, and I can't find my mistake.
I think it must have something to do with the creation of my acceleration structures, as the cube gets bigger when I move the vertices around. Maybe I'm only tracing the top-level acceleration structure (maybe the cube is the bounding volume of the triangle?) and ignoring the content of the bottom-level acceleration structure completely, but I'm not sure about that.
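For reference, with VK_NV_ray_tracing the bottom-level geometry for a single triangle is declared roughly like this (a sketch modeled on the Sascha Willems sample mentioned below; the buffer handles are placeholders). If geometryType, the counts, or the stride don't match the actual vertex data, the traced result can easily be something other than the intended triangle:

```cpp
// Rough sketch of a one-triangle BLAS geometry with VK_NV_ray_tracing.
// vertexBuffer/indexBuffer are placeholders for already-created VkBuffers
// holding 3 vec3 positions and 3 uint32 indices.
#include <vulkan/vulkan.h>

VkGeometryNV makeTriangleGeometry(VkBuffer vertexBuffer, VkBuffer indexBuffer)
{
    VkGeometryNV geometry{};
    geometry.sType = VK_STRUCTURE_TYPE_GEOMETRY_NV;
    geometry.geometryType = VK_GEOMETRY_TYPE_TRIANGLES_NV;   // not AABBS, or you trace boxes
    geometry.geometry.triangles.sType = VK_STRUCTURE_TYPE_GEOMETRY_TRIANGLES_NV;
    geometry.geometry.triangles.vertexData = vertexBuffer;
    geometry.geometry.triangles.vertexOffset = 0;
    geometry.geometry.triangles.vertexCount = 3;
    geometry.geometry.triangles.vertexStride = sizeof(float) * 3;
    geometry.geometry.triangles.vertexFormat = VK_FORMAT_R32G32B32_SFLOAT;
    geometry.geometry.triangles.indexData = indexBuffer;
    geometry.geometry.triangles.indexOffset = 0;
    geometry.geometry.triangles.indexCount = 3;
    geometry.geometry.triangles.indexType = VK_INDEX_TYPE_UINT32;
    geometry.geometry.triangles.transformData = VK_NULL_HANDLE;
    geometry.geometry.triangles.transformOffset = 0;
    geometry.geometry.aabbs.sType = VK_STRUCTURE_TYPE_GEOMETRY_AABB_NV;
    geometry.flags = VK_GEOMETRY_OPAQUE_BIT_NV;
    return geometry;
}

// The BLAS is then built with geometryCount = 1 and instanceCount = 0, and the
// TLAS with geometryCount = 0 and instanceCount = 1 pointing at that BLAS.
```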
I based my code on Sascha Willems' Vulkan "ray tracing basic" example. (https://github.com/SaschaWillems/Vulkan/blob/master/examples/nv_ray_tracing_basic/nv_ray_tracing_basic.cpp)
In other words, I tried to recreate his code in my own framework.
Therefore, I expected to see the same output as his example:
But my own code produces the discussed cube as follows:
Cube front:
Cube after slight rotation:
The red color comes from the closest-hit shader. The plan is to take the "in" value of the shader and interpret it as barycentric coordinates to create the desired shading.
With my own code, that value is apparently always the same, since it produces the same red color everywhere it is evaluated.
Does anybody have an idea about such a problem?
My source code can be found on github, if someone is interested: https://github.com/mharrer97/saiga/blob/master/src/saiga/vulkan/raytracing/Raytracer.cpp
I have a question and hope you can help me with it. I've been busy making a game with XNA and have recently started getting into shaders (HLSL). There's this shader that I like, use, and would like to improve.
The shader creates an outline by drawing the back faces of a model (in black) and translating each vertex along its normal. Now, for a smooth-shaded model, this is fine. I, however, am using flat-shaded models (I'm posting this from my phone, but if anyone is confused about what that means, I can upload a picture later). This means that the vertices are translated along the normal of their corresponding face, resulting in visible gaps between the faces.
Now, the question is: is there a way to calculate (either in the shader or in XNA) how I should translate each vertex, or is the only viable option to make a copy of the 3D model with smooth shading?
Thanks in advance, hope you can educate me!
EDIT: Alternatively, I could load only a smooth-shaded model and try to flat-shade it in the shader. That would, however, mean that I have to be able to find the normals of all vertices of the corresponding face, add their normals, and normalize the result. Is there a way to do this?
EDIT2: So far, I've found a few options that don't seem to work in the end: setting "shademode" in HLSL is now deprecated, and setting the FillMode to Wireframe (while culling front faces) would be neat, if only I were able to set the line thickness.
I'm working on a new idea here. I could maybe iterate through vertices, find their position on the screen, and draw 2d lines between those points using something like the RoundLine library. I'm going to try this, and see if it works.
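For concreteness, computing averaged per-vertex normals (so each vertex can be pushed out along a single smooth direction for the outline, avoiding the gaps) could be done on the CPU while loading the model, roughly like this. It's a plain C++ sketch rather than XNA/C# code, and it assumes an indexed mesh in which vertices that share a position have already been merged (a flat-shaded mesh usually duplicates them per face):

```cpp
// Sketch: average the face normals of every triangle that shares a position,
// producing one "smooth" normal per vertex for the outline extrusion pass.
// The Vec3/containers here are illustrative; in XNA you'd run the same loop
// over the vertex data before creating the VertexBuffer.
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x }; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
}

// positions: shared vertex positions; indices: 3 per triangle.
// Returns one averaged normal per position.
std::vector<Vec3> averagedNormals(const std::vector<Vec3>& positions,
                                  const std::vector<uint32_t>& indices)
{
    std::vector<Vec3> normals(positions.size(), Vec3{0, 0, 0});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        uint32_t ia = indices[i], ib = indices[i + 1], ic = indices[i + 2];
        Vec3 faceNormal = cross(sub(positions[ib], positions[ia]),
                                sub(positions[ic], positions[ia]));
        normals[ia] = add(normals[ia], faceNormal);   // unnormalized: weights by triangle area
        normals[ib] = add(normals[ib], faceNormal);
        normals[ic] = add(normals[ic], faceNormal);
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```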
Ok, I've been working on this problem for a while and found something that works quite nicely.
Instead of doing a lot of complex mathematics to draw 2D lines at the right depth, I simply did the following:
Set a rasterizer state that culls front faces, draws in wireframe, and has a slightly negative depth bias.
Now draw the model in all black (I modified my shader for this).
Set a rasterizer state that culls back faces, draws in FillMode.Solid, and has a depth bias of 0.
Now draw the model normally.
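For reference, the two rasterizer states from the steps above look roughly like this when written out. This is shown with Direct3D 11 in C++ purely as an illustration; XNA's RasterizerState exposes the same CullMode, FillMode, and DepthBias settings:

```cpp
// Illustration only: the two rasterizer states for the outline trick,
// expressed with Direct3D 11 (the question itself uses XNA).
#include <d3d11.h>

void createOutlineStates(ID3D11Device* device,
                         ID3D11RasterizerState** outlinePass,
                         ID3D11RasterizerState** normalPass)
{
    // Pass 1: wireframe, cull front faces, pull slightly toward the camera.
    D3D11_RASTERIZER_DESC outlineDesc = {};
    outlineDesc.FillMode = D3D11_FILL_WIREFRAME;
    outlineDesc.CullMode = D3D11_CULL_FRONT;
    outlineDesc.DepthBias = -100;              // "slightly negative" bias, tune to taste
    outlineDesc.DepthClipEnable = TRUE;
    device->CreateRasterizerState(&outlineDesc, outlinePass);

    // Pass 2: solid fill, cull back faces, no bias -- the regular draw.
    D3D11_RASTERIZER_DESC normalDesc = {};
    normalDesc.FillMode = D3D11_FILL_SOLID;
    normalDesc.CullMode = D3D11_CULL_BACK;
    normalDesc.DepthBias = 0;
    normalDesc.DepthClipEnable = TRUE;
    device->CreateRasterizerState(&normalDesc, normalPass);
}
```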
Since we can't change the thickness of wireframe lines, we're left with a very slim outline. For my purposes, this was actually not too bad.
I hope this information is useful to somebody later on.
I want to be able to tell when 2 images collide (not just their frames). But here is the catch: the images are rotating.
So I know how to find whether a pixel in an image is transparent or not, but that won't help in this scenario because it will only find the location in the frame relative to a non-rotated image.
I have also gone as far as trying hit boxes, but even those won't work because I can't find a way to detect the collision of UIViews that are contained in different subviews.
Is what I am trying to do even possible?
Thanks in advance
I don't know how you would go about checking for pixel collision on a rotated image. That would be hard. I think you would have to render the rotated image into a context, then fetch pixels from the context to check for transparency. That would be dreadfully slow.
I would suggest a different approach. Come up with a path that maps the bounds of your irregular image. You could then use CGPathContainsPoint to check to see if a set of points is contained in the path (That method takes a transform, which you would use to describe the rotation of your image's path.)
Even then, though, you're going to have performance problems, since you would have to call that method for a large number of points from the other image to determine whether they intersect.
I propose a simple strategy to solve this, based on looking for rectangle intersections.
The key is to create a simplified representation of your images with a set of rectangles laid out as bounding boxes of the different parts of your image (as if you were building your image out of Lego bricks). For better performance use a small set of rectangles (a few big bricks); for better precision use a larger number of rectangles that closely follow the image outline.
Your problem then becomes equivalent to finding an intersection between rectangles, or, more precisely, finding whether at least one vertex of the rectangles of object A is inside at least one rectangle of object B (CGRectContainsPoint), or whether any of the rects intersect (CGRectIntersectsRect).
If you prefer the point lookup, define your rectangles by their 4 vertices; then, when you rotate your image, it is easy to apply the same affine transform to the rectangle vertices (use CGPointApplyAffineTransform) to get their coordinates after rotation. Of course you can instead look for frame intersections and represent your rectangles using the standard CGRect structure.
You could also use a CGPath (as explained in another answer) instead of a set of rectangles and look for any vertex inside the other path using CGPathContainsPoint. That would give the same result, but the rectangle approach is probably faster in many cases.
The only trick is to take one of the objects as the reference axis. Imagine you are standing on object A and you only see B moving around you. If you have to rotate A, you need to change the frame of reference so that B is always transformed relative to A and not to the screen or any other reference. If your transforms are only rotations around the objects' centres, then rotating A by n radians is equivalent to rotating B by -n radians.
Then just loop through the vertices defining object A and check whether one of them falls inside a rectangle of object B.
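As a language-neutral sketch of the math (plain C++ rather than Core Graphics, just to show the per-vertex work that CGPointApplyAffineTransform and CGRectContainsPoint would do for you), checking B's corners against A's rectangles in A's local frame looks roughly like this; by symmetry you can run it the other way round as well:

```cpp
// Sketch of "rotate the other object into my frame, then do axis-aligned tests".
// Each object is approximated by axis-aligned rects in its own local space,
// plus a centre position and a rotation angle (radians).
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
struct Rect { float x, y, w, h; };              // axis-aligned, in local space

struct Sprite {
    Vec2 centre;                                // position on screen
    float angle;                                // rotation around the centre
    std::vector<Rect> boxes;                    // the "Lego brick" decomposition
};

// Map a local-space point of `s` into the local space of `ref`
// (the "take object A as the reference axis" trick).
static Vec2 toLocalOf(const Sprite& ref, const Sprite& s, Vec2 p)
{
    // local space of s -> screen space
    float cs = std::cos(s.angle), sn = std::sin(s.angle);
    Vec2 world{ s.centre.x + cs * p.x - sn * p.y,
                s.centre.y + sn * p.x + cs * p.y };
    // screen space -> local space of ref (inverse rotation)
    float cr = std::cos(-ref.angle), sr = std::sin(-ref.angle);
    Vec2 d{ world.x - ref.centre.x, world.y - ref.centre.y };
    return { cr * d.x - sr * d.y, sr * d.x + cr * d.y };
}

static bool rectContains(const Rect& r, Vec2 p)
{
    return p.x >= r.x && p.x <= r.x + r.w && p.y >= r.y && p.y <= r.y + r.h;
}

// True if any corner of any rect of B falls inside any rect of A.
// Note: a pure corner test can miss the "cross" overlap case where neither
// rect contains a corner of the other; an edge-intersection (or separating
// axis) check covers that if you need full robustness.
bool collides(const Sprite& a, const Sprite& b)
{
    for (const Rect& rb : b.boxes) {
        Vec2 corners[4] = { { rb.x, rb.y },          { rb.x + rb.w, rb.y },
                            { rb.x, rb.y + rb.h },   { rb.x + rb.w, rb.y + rb.h } };
        for (Vec2 c : corners) {
            Vec2 local = toLocalOf(a, b, c);
            for (const Rect& ra : a.boxes)
                if (rectContains(ra, local)) return true;
        }
    }
    return false;
}
```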
You will probably need to investigate a bit to get this working, but at least you have some clues on how to approach it.
I'm creating a drawing iOS application, and in need of smoothing the lines being drawn by user.
I'm using multisampling as normal.
Each time the user moves their finger, the code does the following:
Create points to make a line and then draw these points to a sampling buffer.
Resolve the sampling buffer.
The result buffer is drawn to the canvas.
The problem is that when the user has a big canvas (e.g. 2048x2048), the resolve process takes long enough to make the drawing lag and feel choppy. The resolve processes all the pixels in the buffer, regardless of whether a given pixel actually needs to be resolved.
I saw a drawing app like Procreate, and it draws smoothly with no lag even for a big canvas.
So, it is possible, I just don't know how to do that.
Does anybody have an idea for solution?
Thanks.
Just in case someone has the same problem as me, I found a decent solution:
Create a smaller sampling FBO just for drawing the lines from the last point to the current point. I use a 256x256 buffer.
When drawing from the last point to the current point, use this sampling buffer and then resolve.
Draw this sampling buffer to the current layer.
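Roughly, the per-segment work looks like this. It's a sketch assuming OpenGL ES 3.0, where the resolve can be done with glBlitFramebuffer; on ES 2.0 you would use glResolveMultisampleFramebufferAPPLE instead. All the buffer IDs and the drawing helpers are placeholders:

```cpp
// Sketch: draw the latest stroke segment into a small 256x256 MSAA buffer,
// resolve only that tile, then composite it into the big canvas texture.
#include <OpenGLES/ES3/gl.h>

const int kTileSize = 256;

// Placeholders for the app's own rendering helpers (not real API):
void drawStrokeSegment();                 // draws the geometry from the last point to the current point
void drawTexturedQuad(GLuint texture);    // draws a full-viewport quad with the given texture

void drawSegmentTile(GLuint msaaFbo, GLuint resolveFbo, GLuint resolveTex,
                     GLuint canvasFbo, int tileOriginX, int tileOriginY)
{
    // 1. Render just the new segment into the small multisampled buffer,
    //    with a projection that maps the touched 256x256 canvas region onto the tile.
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glViewport(0, 0, kTileSize, kTileSize);
    glClear(GL_COLOR_BUFFER_BIT);
    drawStrokeSegment();

    // 2. Resolve only this tile, not the whole 2048x2048 canvas.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);   // backed by resolveTex
    glBlitFramebuffer(0, 0, kTileSize, kTileSize,
                      0, 0, kTileSize, kTileSize,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    // 3. Composite the resolved tile into the big canvas at the right offset.
    glBindFramebuffer(GL_FRAMEBUFFER, canvasFbo);
    glViewport(tileOriginX, tileOriginY, kTileSize, kTileSize);
    drawTexturedQuad(resolveTex);
}
```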
The result is not bad: no more lag. The only problem is that setting up the appropriate transforms, matrices, etc. is quite hard.
I am just trying to better understand the DirectX pipeline. I'm curious whether depth buffers are mandatory in order to get things to work, or whether a depth buffer is just something you need if you want objects to appear behind one another.
The depth buffer is not mandatory. In a 2D game, for example, there is usually no need for it.
You need a depth buffer if you want objects to appear behind each other, but still want to be able to draw them in arbitrary order.
If you draw all triangles from back to front, and none of them intersect, then you could do without the depth buffer. However, it's generally easier to skip the depth sorting and just use the depth buffer anyway.
Depth buffers are not mandatory. They simply solve the following problem: suppose you have an object near the camera which is drawn first. Then, after that is already drawn, you want to draw an object which is far away but at the same on-screen position as the nearby object. Without a depth buffer, it gets drawn on top, which looks wrong. With a depth buffer, it is obscured, because the GPU figures out it's behind something else that has already been drawn.
You can turn them off and deal with that yourself, e.g. by drawing back-to-front (which is easy in 2D games, but has other problems that depth buffering solves). Alternatively, for some reason you might want that overdraw as some kind of effect. But a depth buffer is by no means necessary for basic rendering.
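For example, with Direct3D 11 (the question doesn't say which DirectX version, so this is just an illustration), the depth test is a piece of pipeline state you can enable or leave off:

```cpp
// Illustration with Direct3D 11: the depth test is just a state object.
#include <d3d11.h>

ID3D11DepthStencilState* makeDepthState(ID3D11Device* device, bool useDepth)
{
    D3D11_DEPTH_STENCIL_DESC desc = {};
    desc.DepthEnable    = useDepth ? TRUE : FALSE;           // off: draw order decides what's visible
    desc.DepthWriteMask = useDepth ? D3D11_DEPTH_WRITE_MASK_ALL
                                   : D3D11_DEPTH_WRITE_MASK_ZERO;
    desc.DepthFunc      = D3D11_COMPARISON_LESS;             // keep the nearer fragment
    ID3D11DepthStencilState* state = nullptr;
    device->CreateDepthStencilState(&desc, &state);
    return state;
}

// Usage: context->OMSetDepthStencilState(state, 0);
// With useDepth == false you must draw back-to-front yourself (fine for 2D);
// with useDepth == true an arbitrary draw order still gives correct occlusion.
```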