How to get all ray hits instead of only the closest hit - nvidia

I found this example (https://developer.nvidia.com/rtx/raytracing/vkray) on how to use the ray-tracing extension.
But I need to get all hits of a ray against the model, not just the closest hit (the coordinates of the first intersection).
Is there any solution to this? Thanks!

Yes, simply use the any-hit shader type instead of the closest-hit shader.
Details on the different Vulkan NV ray tracing shader types can be found at https://devblogs.nvidia.com/vulkan-raytracing/.
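For context, here is a rough sketch (not taken from the linked sample) of how a hit group with an any-hit stage might be wired up when creating a VK_NV_ray_tracing pipeline. The shader module, the stage vector and the index bookkeeping are placeholders, and the any-hit GLSL itself plus the rest of the pipeline setup are omitted:

#include <vulkan/vulkan.h>
#include <vector>

// Hypothetical helper: append an any-hit stage to 'stages' and return a triangle hit
// group that references it. 'anyHitModule' must be a VkShaderModule built from the
// any-hit SPIR-V; the rest of the ray tracing pipeline setup stays as in the tutorial.
VkRayTracingShaderGroupCreateInfoNV makeAnyHitGroup(
    VkShaderModule anyHitModule,
    std::vector<VkPipelineShaderStageCreateInfo>& stages)
{
    VkPipelineShaderStageCreateInfo anyHitStage = {};
    anyHitStage.sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    anyHitStage.stage  = VK_SHADER_STAGE_ANY_HIT_BIT_NV;
    anyHitStage.module = anyHitModule;
    anyHitStage.pName  = "main";
    const uint32_t anyHitIndex = static_cast<uint32_t>(stages.size());
    stages.push_back(anyHitStage);

    VkRayTracingShaderGroupCreateInfoNV hitGroup = {};
    hitGroup.sType              = VK_STRUCTURE_TYPE_RAY_TRACING_SHADER_GROUP_CREATE_INFO_NV;
    hitGroup.type               = VK_RAY_TRACING_SHADER_GROUP_TYPE_TRIANGLES_HIT_GROUP_NV;
    hitGroup.generalShader      = VK_SHADER_UNUSED_NV;
    hitGroup.closestHitShader   = VK_SHADER_UNUSED_NV;  // not needed if you only record hits
    hitGroup.anyHitShader       = anyHitIndex;          // invoked for intersections along the ray
    hitGroup.intersectionShader = VK_SHADER_UNUSED_NV;
    return hitGroup;
}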

Related

Creating a sub-texture, from an existing texture, using D3D9

I'm working on an older project that uses D3D9 for rendering 3D environments.
I have a texture file loaded into memory that I'm applying to a simple 3D model for rendering. I'm loading this file using the D3DXCreateTextureFromFileInMemory function (MS Docs function link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemory), and everything works okay.
However, instead of reading and loading the entire texture file, I want to read and load only a square portion of it (a sub-texture of sorts). I have a pair of UV coordinates describing that square portion (one UV coordinate for the top-left corner of the square, one for the bottom-right), relative to the main texture file, but I can't find a D3D9 function that does such a thing (I believe the correct term for this would be a "texture atlas", but I've only heard it a couple of times and I'm not sure).
Here is an example diagram, to make sure my question is clear:
Looking over the MS Docs for the D3D9 texture functions, there is also D3DXCreateTextureFromFileInMemoryEx (MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemoryex), which appears to be an extended version of the D3DXCreateTextureFromFileInMemory function above; however, it only accepts "Width" and "Height" parameters, not any sort of positional parameter pair. There are also alternative functions that use "Resources" instead of files in memory, but they do not appear to accept any sort of positional parameters either (such as D3DXCreateTextureFromResourceEx, MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromresourceex).
There are also several functions for a "UV Atlas" present in the MS Docs archives (https://learn.microsoft.com/en-us/windows/win32/direct3d9/dx9-graphics-reference-d3dx-functions-uvatlas), however I do not think those would be helpful to me.
Is what I'm trying to achieve here even possible using D3D9? Are there any functions that I may be missing that could help me achieve this goal?
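To illustrate the kind of thing I am imagining, here is a rough, untested sketch of my guess at an approach: load the full texture as before, then copy just the UV-defined rectangle into a new texture with D3DXLoadSurfaceFromSurface. The helper below and the way I convert UVs to a RECT are purely my own invention, not something I found in the docs:

#include <d3d9.h>
#include <d3dx9.h>

// Hypothetical sketch: load the whole texture, then copy only the square region
// described by two UV corners into a new, smaller texture.
// 'device', 'fileData' and 'fileSize' come from the existing loading code.
IDirect3DTexture9* LoadSubTexture(IDirect3DDevice9* device,
                                  const void* fileData, UINT fileSize,
                                  float u0, float v0, float u1, float v1)
{
    IDirect3DTexture9* fullTexture = NULL;
    if (FAILED(D3DXCreateTextureFromFileInMemory(device, fileData, fileSize, &fullTexture)))
        return NULL;

    D3DSURFACE_DESC desc;
    fullTexture->GetLevelDesc(0, &desc);

    // Convert the UV corners into a pixel rectangle on the source surface.
    RECT srcRect;
    srcRect.left   = static_cast<LONG>(u0 * desc.Width);
    srcRect.top    = static_cast<LONG>(v0 * desc.Height);
    srcRect.right  = static_cast<LONG>(u1 * desc.Width);
    srcRect.bottom = static_cast<LONG>(v1 * desc.Height);

    IDirect3DTexture9* subTexture = NULL;
    if (FAILED(device->CreateTexture(srcRect.right - srcRect.left,
                                     srcRect.bottom - srcRect.top,
                                     1, 0, desc.Format, D3DPOOL_MANAGED,
                                     &subTexture, NULL)))
    {
        fullTexture->Release();
        return NULL;
    }

    // Copy the rectangular region from the full texture into the new texture.
    IDirect3DSurface9* srcSurface = NULL;
    IDirect3DSurface9* dstSurface = NULL;
    fullTexture->GetSurfaceLevel(0, &srcSurface);
    subTexture->GetSurfaceLevel(0, &dstSurface);
    D3DXLoadSurfaceFromSurface(dstSurface, NULL, NULL,
                               srcSurface, NULL, &srcRect,
                               D3DX_FILTER_NONE, 0);

    dstSurface->Release();
    srcSurface->Release();
    fullTexture->Release();
    return subTexture;
}

If something along these lines is viable, or if there is a more direct way to do it, I would appreciate pointers.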

Metal and Model I/O - Add Texture Coordinates to Mesh

I'm working on a student project for which I want to texture a mesh that I scanned using an iPad equipped with the new LiDAR sensor.
To texture a mesh, however, I need to add texture coordinates. My current plan is to convert the scanned mesh to an MDLMesh and add all submeshes to an MDLAsset container. Afterwards, I iterate over the MDLMeshes using a for-loop and in each iteration call "MDLMesh.addUnwrappedTextureCoordinates" on the current mesh. Unfortunately, it always results in a crash. Sometimes I can loop through two meshes before I get an error; sometimes it does not even add UVs to a single mesh.
I'm not an expert at Swift or Model I/O, but it seems strange to me that this operation crashes while I can add normals just fine.
The error I'm getting looks like this:
Can't choose for edge creation
libc++abi.dylib: terminating with uncaught exception of type std::out_of_range: unordered_map::at: key not found
The code I'm using looks like this:
private func unwrapTextureCoordinates(asset: MDLAsset) -> MDLAsset {
    let objects = asset.childObjects(of: MDLMesh.self)
    for object in objects {
        if let mesh = object as? MDLMesh {
            // Adding normals works fine...
            mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)
            // ...but adding and unwrapping texture coordinates crashes.
            mesh.addAttribute(withName: MDLVertexAttributeTextureCoordinate, format: .float2)
            mesh.addUnwrappedTextureCoordinates(forAttributeNamed: MDLVertexAttributeTextureCoordinate)
        }
    }
    return asset
}
Hopefully someone can tell me what's wrong or point me in the right direction.
After I could not figure out what was causing the issue, I resorted to Unity and its ARFoundation wrapper to see whether I was able to calculate any UVs there. I found that Unity's equivalent to Model I/O's "addUnwrappedTextureCoordinates", namely Unwrapping.GeneratePerTriangleUV, calculates 3 UVs for each triangle.
Now, when I run this function in Unity, I also get an out-of-range exception for my mesh, just like in Swift. The error description says that the number of UV coordinates cannot exceed the number of vertices in the mesh, which makes sense since I get three times as many UV coordinates as I have vertices. Therefore, I highly suspect that the out-of-range exception in Swift using Model I/O has the same cause.
Surely there are many workarounds for this, but I resorted to a different solution, since the "Unwrapping" class is part of the "UnityEngine.Editor" namespace anyway and therefore could not be used in a finished build (which is what I want).
Instead, I came across the function in this thread to calculate a single set of UVs for my mesh. I utilized it, and it worked exactly as I wanted. The code is written in C#, which is why I decided to continue my project in the Unity Engine. However, I don't think it would be a lot of trouble to translate the function into Swift.

How to save intensity value in sensor_msgs/Image from PointCloud?

I am using ROS Kinetic. I have a point cloud of type pcl::PointCloud<pcl::PointXYZI>, which I have projected onto a plane. I would like to convert the planar point cloud to an image of type sensor_msgs/Image.
toROSMsg(cloud, image);
is throwing the following error:
error: ‘const struct pcl::PointXYZI’ has no member named ‘rgb’
memcpy (pixel, &cloud (x, y).rgb, 3 * sizeof(uint8_t));
Kindly enlighten me in this regard, if possible with a code snippet. Thanks in advance.
If toROSMsg() is complaining that your input cloud does not have an 'rgb' member, try to input a cloud of type pcl::PointXYZRGB. This is another type of point cloud handled by PCL. You can look at the documentation of PCL point types.
Convert to type pcl::PointXYZRGB with these lines:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudrgb (new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::copyPointCloud(*cloud, *cloudrgb);
Then call your function as:
toROSMsg(*cloudrgb, image);
What you are trying to achieve is a kind of 2D voxelization, and I assume you want to implement an "inverse sensor model" (ISM) as explained by Thrun, right?
This approach is usually implemented directly inside a mapping algorithm to circumvent the exhaustive calculation of the plain ISM.
Therefore, you'll hardly find an out-of-the-box solution.
Anyway, you could do it in several ways, for example:
Use pointcloud_to_laserscan for the 2D projection (but you have that anyway)
Use the ISM algorithm explained in the book
or
Transform the point cloud to an octree
Downsample it to a quadtree and convert that to an image
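If you just need the intensity values written into a sensor_msgs/Image, a plain orthographic projection onto a pixel grid may already be enough. Here is a rough, untested sketch; the resolution, the normalization range and the function name are arbitrary choices of mine:

#include <stdint.h>
#include <algorithm>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/common.h>
#include <sensor_msgs/Image.h>

// Project a pcl::PointCloud<pcl::PointXYZI> onto the XY plane and store the
// (normalized) intensity of each point in a mono8 image.
sensor_msgs::Image intensityImageFromCloud(const pcl::PointCloud<pcl::PointXYZI>& cloud,
                                           float resolution = 0.05f,     // metres per pixel (assumed)
                                           float max_intensity = 255.0f) // value mapped to white (assumed)
{
  pcl::PointXYZI min_pt, max_pt;
  pcl::getMinMax3D(cloud, min_pt, max_pt);

  const int width  = std::max(1, static_cast<int>((max_pt.x - min_pt.x) / resolution) + 1);
  const int height = std::max(1, static_cast<int>((max_pt.y - min_pt.y) / resolution) + 1);

  sensor_msgs::Image image;
  image.width    = width;
  image.height   = height;
  image.encoding = "mono8";
  image.step     = width;                  // 1 byte per pixel
  image.data.assign(width * height, 0);    // black background

  for (size_t i = 0; i < cloud.points.size(); ++i)
  {
    const pcl::PointXYZI& p = cloud.points[i];
    const int u = static_cast<int>((p.x - min_pt.x) / resolution);
    const int v = static_cast<int>((p.y - min_pt.y) / resolution);
    if (u < 0 || u >= width || v < 0 || v >= height)
      continue;

    // Clamp the intensity into [0, 255] and keep the brightest point per pixel.
    float scaled = p.intensity / max_intensity * 255.0f;
    if (scaled < 0.0f)   scaled = 0.0f;
    if (scaled > 255.0f) scaled = 255.0f;
    uint8_t& pixel = image.data[v * image.step + u];
    pixel = std::max(pixel, static_cast<uint8_t>(scaled));
  }
  return image;
}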

What should I do for multiple histograms?

I'm working with OpenCV and I'm a newbie in this field. I'm researching CamShift and I want to extend the method by using multiple histograms. That is, when the tracked object has more than one appearance (e.g., a Rubik's cube with six different appearances), CamShift will most likely fail if we use only one histogram.
I know the calcHist function in OpenCV (http://docs.opencv.org/modules/imgproc/doc/histograms.html#calchist) has an "accumulate" parameter, but I don't know how or when to use it (applied to camshiftdemo.cpp in the OpenCV samples folder). Can this function help me solve this problem, or do I have to use a different approach?
My idea is to create an array of histograms for the object: for every appearance that varies strongly in color, we pre-compute a histogram and store it in this array. But when should a new histogram be computed? In other words, what is the precondition for starting to compute a new one?
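To make the idea concrete, here is a rough, untested sketch of what I mean; the hue-only histogram, the bin count and the correlation comparison are placeholders I picked, not something taken from camshiftdemo.cpp:

#include <opencv2/opencv.hpp>
#include <vector>

// Compute a normalized hue histogram for one appearance of the object.
cv::Mat hueHistogram(const cv::Mat& bgrRoi, int bins = 30)
{
    cv::Mat hsv;
    cv::cvtColor(bgrRoi, hsv, CV_BGR2HSV);
    int channels[]        = {0};            // hue channel only
    int histSize[]        = {bins};
    float hueRange[]      = {0.f, 180.f};
    const float* ranges[] = {hueRange};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::normalize(hist, hist, 0, 1, cv::NORM_MINMAX);
    return hist;
}

// Pick the stored appearance whose histogram best matches the current ROI,
// so the back-projection step of CamShift can use that histogram.
int bestAppearance(const std::vector<cv::Mat>& storedHists, const cv::Mat& currentRoi)
{
    cv::Mat current = hueHistogram(currentRoi);
    int best = 0;
    double bestScore = -1.0;
    for (size_t i = 0; i < storedHists.size(); ++i)
    {
        double score = cv::compareHist(storedHists[i], current, CV_COMP_CORREL);
        if (score > bestScore)
        {
            bestScore = score;
            best = static_cast<int>(i);
        }
    }
    return best;
}

Whether re-selecting the histogram every frame like this is robust enough is exactly the part I'm unsure about.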
And what happens if I have to track multiple objects with the same color?
Everybody please help me. Thank you so much!

Matrix Concatenation using Actionscript Matrix3D

I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
My problem arises from my ignorance of proper matrix operations. I believe my failed attempt to merge the two environments arises precisely because the semantics of prepending one matrix to another are not, and were never intended to be, equivalent to appending that matrix to the other. My request, then, can be summarized this way: because I have no control over the calling sequence that the framework will issue (i.e., I must live with an append operation), I can only try to fix things on the side where I prepare the matrix which is to be appended. That code is not black-boxed, but it is too complex for me to know how to change it so that it would meet the interface requirements posed by the framework.
Is there some sequence of inversions, transformations or other maneuvers that would let me modify a viewProjection matrix that was designed to be prepended, so that it turns out right when it is, instead, appended to the Object's World Space coordinates?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Without being able to understand how to enter text involving superscripts, I am not sure if I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalency noted by Dunn and Parberry would be something like:
transpose (AB) = transpose (B) x transpose (A)
That comes close to solving my problem, which problem, to restate, is really just a problem arising out of the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU Vertex Shader.
I have not completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transform' does yield exactly the same combined raw data as the example from the author of the camera with the desired lens properties whose implementation involves prepending rather than appending.
BA = transpose (transpose (A) x transpose (B))
(Applying the same rule once more: transpose (transpose (A) x transpose (B)) = transpose (transpose (B)) x transpose (transpose (A)) = B x A, which is why the final 'un-transform' recovers the appended-order product.)
I have also not yet tested to see if these extra calculations are so processing intensive as to reduce my application frame rate beyond what is acceptable, but am pleased at least to be able to confirm that the computations yield the same result.
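For anyone who wants to convince themselves of the identity numerically outside of AS3, here is a throwaway sketch in plain C++ with 4x4 row-major arrays and made-up values; it is only meant to show that B x A and transpose (transpose (A) x transpose (B)) agree:

#include <algorithm>
#include <cmath>
#include <cstdio>

typedef double Mat4[4][4];

// out = a x b (row-major 4x4 multiply)
void multiply(Mat4 a, Mat4 b, Mat4 out)
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
        {
            out[r][c] = 0.0;
            for (int k = 0; k < 4; ++k)
                out[r][c] += a[r][k] * b[k][c];
        }
}

// out = transpose(m)
void transpose(Mat4 m, Mat4 out)
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r][c] = m[c][r];
}

int main()
{
    Mat4 A = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 1, 2, 3}, {4, 5, 6, 7}};
    Mat4 B = {{2, 0, 1, 3}, {1, 4, 0, 2}, {3, 1, 5, 0}, {0, 2, 1, 4}};

    Mat4 BA, At, Bt, AtBt, check;
    multiply(B, A, BA);        // left-hand side: B x A
    transpose(A, At);
    transpose(B, Bt);
    multiply(At, Bt, AtBt);    // transpose(A) x transpose(B)
    transpose(AtBt, check);    // right-hand side: the final 'un-transform'

    double maxDiff = 0.0;
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            maxDiff = std::max(maxDiff, std::fabs(BA[r][c] - check[r][c]));
    std::printf("max difference: %f\n", maxDiff);   // expected: 0.000000
    return 0;
}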
