I am new to OpenCV, and I did a project with it.
My project tracks an object with a stereo camera, so I find where the object is and I want to visualize it (in Blender, with OpenGL, or something else). My situation is that I have 3D points in a YML file and I want to display them. I don't know what I should use; can anyone help me?
It's possible to do in Blender, but for your simple purpose OpenGL should be enough. To get started with modern OpenGL, check this list of contents: link
In OpenGL, before drawing anything you must "send" the data (vertices) to your GPU. One part of this process uses a Vertex Buffer Object (VBO); it's quite simple once you have programmed it yourself. When you create a VBO, you can specify what kind of data you have: STATIC or DYNAMIC. Dynamic means that you will change the data over time (the position of each vertex might change), and that is what you want.
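For illustration, here is a minimal sketch of that dynamic-VBO idea. It assumes an OpenGL 3.3+ context and loaded function pointers already exist (e.g. via GLFW and GLEW), that a basic shader program reading attribute location 0 is bound, and that the tracked points from your YML file are already in a std::vector; the names are placeholders:

    // Upload tracked 3D points into a dynamic VBO and refresh them each frame.
    #include <GL/glew.h>
    #include <vector>

    struct Point3 { float x, y, z; };

    GLuint vao = 0, vbo = 0;

    void createPointBuffer(const std::vector<Point3>& trackedPoints)
    {
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // GL_DYNAMIC_DRAW: we intend to rewrite this data often.
        glBufferData(GL_ARRAY_BUFFER,
                     trackedPoints.size() * sizeof(Point3),
                     trackedPoints.data(), GL_DYNAMIC_DRAW);

        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Point3), nullptr);
    }

    void updateAndDraw(const std::vector<Point3>& trackedPoints)
    {
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Overwrite the vertex data with the latest tracked positions.
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        trackedPoints.size() * sizeof(Point3),
                        trackedPoints.data());
        glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(trackedPoints.size()));
    }

Each frame you overwrite the buffer with glBufferSubData and redraw, which is exactly the usage pattern the GL_DYNAMIC_DRAW hint is meant for.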
My question pertains to the best way to handle multiple textures. First some context:
I'm using DirectX-11 in a non-gaming application; the gui uses DirectX exclusively. I'm in the process of making the gui skinnable, so the user can customize the gui to their liking.
I've written the code in such a way that the gui layout and the size of each gui element can change based on a configuration file. The gui currently uses only DirectX primitives via DrawIndexedInstanced, but I'd like to support user-supplied textures. The size of these textures can vary, and there can be as many as two dozen different textures.
I can solve this problem by either:
Dynamically putting together a texture atlas, or...
Forcing all of the textures into a 2d texture array (by making all of the textures the same size via padding as needed), or ...
Splitting up the DrawIndexedInstanced calls so that there's one draw call for each of the different textures (i.e. multiple binds / draws).
I spent the afternoon looking for consensus. I didn't find it. Penny for your thoughts?
The approach that runs fastest is the texture atlas; this is why 2D games use sprite sheets. Multiple binds/draws is the slowest approach.
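As a rough sketch of what the atlas approach involves (all names here are hypothetical): pack each user texture into a rectangle of one large atlas texture, record where it landed, and remap each GUI element's local UVs into atlas space so that every element can be drawn with a single texture bind:

    // Hypothetical helper: remap a quad's local UVs into atlas UVs.
    struct AtlasRegion {            // where a user texture was packed
        float x, y, width, height;  // in pixels, inside the atlas
    };

    struct Float2 { float u, v; };

    // localUV is in [0,1] over the original texture; the result addresses
    // the same texel inside an atlas of atlasW x atlasH pixels.
    Float2 ToAtlasUV(Float2 localUV, const AtlasRegion& r,
                     float atlasW, float atlasH)
    {
        return { (r.x + localUV.u * r.width)  / atlasW,
                 (r.y + localUV.v * r.height) / atlasH };
    }

In practice you would also leave a texel or two of padding around each packed region so that linear filtering does not bleed between neighbouring textures.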
I'm currently an MS student in Medical Physics and I have a great need to be able to overlay an isodose distribution from an RTDOSE file onto a CT image from a .dcm file set.
I've managed to extract the image and the dose pixel arrays myself using pydicom and dicom_numpy, but the two arrays are not the same size! So, if I overlay the two together, the dose will not be in the correct position based on what the Elekta Gamma Plan software exported it as.
I've played around with dicompyler and 3DSlicer, and they obviously are able to do this even though the arrays are not the same size. However, I don't think I can export the numerical data when using these programs; I can only scroll through and view it as an image. How can I overlay the RTDOSE onto a CT image?
Thank you
For what you want, it sounds like you should use SimpleITK (or an equivalent; my experience is with SITK) to do the DICOM handling, not pydicom.
DICOM has a complete built-in system for specifying the 3D position of all pixel data in patient coordinates. It uses a set of attributes in the DICOM files known as the Image Plane Module tags. See here for a good overview.
The SimpleITK library fully understands and uses the 3D Image Plane tags to identify and locate any image in patient coordinates by default, irrespective of things such as the specific pixel spacing, slice thickness, etc.
So - in your case - if you use SITK to open your studies, then you should be able to overlay them correctly "out of the box", because SITK will do all the work to parse the Image Plane Module tags and locate the data in patient coordinates - just like you get with 3DSlicer.
Pydicom, in contrast, doesn't itself try to use any of that information at all. It only gives you the raw pixel arrays (for images).
Note that I use both pydicom and SITK. This isn't anything bad about pydicom; it's more a question of the right tool for the job. In fact, for many (most?) things I use pydicom, but for any true 3D work, SITK is the easier toolkit to use.
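To make that concrete, here is a rough sketch of the overlay step, written against SimpleITK's C++ API (the Python API mirrors it almost call for call). The file and directory names are placeholders; the key idea is that resampling the dose onto the CT grid uses the patient-coordinate geometry SITK has already read from the Image Plane Module tags:

    // Sketch: read a CT series and an RTDOSE, then resample the dose onto
    // the CT grid using the patient-coordinate geometry from the DICOM tags.
    #include <SimpleITK.h>
    namespace sitk = itk::simple;

    int main()
    {
        // Read the CT slices in "ct_dir" as one 3D volume.
        sitk::ImageSeriesReader reader;
        reader.SetFileNames(
            sitk::ImageSeriesReader::GetGDCMSeriesFileNames("ct_dir"));
        sitk::Image ct = reader.Execute();

        // Read the RTDOSE (a single multi-frame DICOM file).
        sitk::Image dose = sitk::ReadImage("rtdose.dcm");

        // Resample the dose grid onto the CT grid. The identity transform is
        // enough because both images are already located in patient coordinates.
        sitk::Image doseOnCt = sitk::Resample(
            dose, ct, sitk::Transform(), sitk::sitkLinear,
            0.0, sitk::sitkFloat64);

        // doseOnCt now has the same size, origin and spacing as the CT and can
        // be overlaid voxel for voxel. (The RTDOSE DoseGridScaling factor may
        // still need to be applied to get absolute dose values.)
        return 0;
    }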
I want to create the feature shown in the picture below. The number indicates the touch order on the screen and the dot indicates the touch position. I want to create the same effect.
We can do this using the normal indexed-primitive drawing method, but I want to know whether it is possible to create this effect using MTKMesh. Please suggest some ideas on how to do this in a better way.
You probably shouldn't use a MTKMesh in this case. After all, if you have all of the vertex and index data, you can just place it directly in one or more MTLBuffer objects and use those to draw. Using MetalKit means you'll need to create all kinds of intermediate objects (a MDLVertexDescriptor, a MTKMeshBufferAllocator, one or more mesh buffers, a submesh, and an MDLMesh) only to turn around and iterate all of those superfluous objects to get back to the underlying Metal buffers. MTKMesh exists to make it easy to import 3D content from model files via Model I/O.
I'm trying to render bitmap fonts in DirectX 10 at the moment, and I want to do this as efficiently as possible. I'm having a hard time getting started on my design because of this question, though.
So should I reuse a single VertexBuffer or make multiple VertexBuffer objects?
Currently I allocate one dynamic VertexBuffer per Quad object in my program. This way I wouldn't have to map/unmap a VertexBuffer if nothing moves on my screen. For fonts I could implement a similar method where I allocate one buffer per text box, or something similar.
After searching I read about reusing a single VertexBuffer for all objects. Vertex caching came up also. What is the advantage/disadvantage of this, and is it faster than my previous method?
Finally, is there any other method I should look into for rendering many 2D quads on the screen?
Thank you in advance.
Using a single dynamic vertex buffer with the proper combination of DISCARD and NO_OVERWRITE is the best way to handle this kind of dynamic submission. The driver will perform buffer renaming with DISCARD to minimize GPU stalls.
This is the mechanism used by SpriteBatch/SpriteFont and PrimitiveBatch in the DirectX Tool Kit. You can check that source for details, and if really needed you could adapt it to Direct3D 10.x. Of course, moving to Direct3D 11 is probably the better choice.
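For illustration, here is a rough sketch of that DISCARD/NO_OVERWRITE pattern against Direct3D 11 (the Direct3D 10.x version is analogous via ID3D10Buffer::Map). The buffer size, vertex layout, and cursor bookkeeping are simplified placeholders, and the buffer is assumed to have been created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE:

    // Append quads to one dynamic vertex buffer over the course of a frame.
    #include <d3d11.h>
    #include <cstring>

    struct Vertex { float x, y, u, v; };

    UINT g_cursor = 0;                    // bytes already written this frame
    const UINT kBufferBytes = 65536;      // hypothetical buffer size

    void AppendQuads(ID3D11DeviceContext* ctx, ID3D11Buffer* vb,
                     const Vertex* verts, UINT count)
    {
        const UINT bytes = count * sizeof(Vertex);

        // NO_OVERWRITE while there is room; DISCARD when we wrap, which lets
        // the driver hand us a freshly renamed buffer instead of stalling.
        D3D11_MAP mapType = D3D11_MAP_WRITE_NO_OVERWRITE;
        if (g_cursor + bytes > kBufferBytes)
        {
            mapType = D3D11_MAP_WRITE_DISCARD;
            g_cursor = 0;
        }

        D3D11_MAPPED_SUBRESOURCE mapped = {};
        ctx->Map(vb, 0, mapType, 0, &mapped);
        std::memcpy(static_cast<unsigned char*>(mapped.pData) + g_cursor,
                    verts, bytes);
        ctx->Unmap(vb, 0);

        // Draw using g_cursor / sizeof(Vertex) as the start vertex, then advance.
        g_cursor += bytes;
    }

Each batch is appended behind the previous one with NO_OVERWRITE, so the GPU can keep reading earlier data; only when the buffer is full do you DISCARD and start over.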
I need a way to test out some heavy mathematical functionality in my code, and I have come to the point where I need to verify that this code is working properly. I would like to be able to create a path based on an array of points and use this path for testing without a graphics context.
As an example, Java has classes such as Path2D that are completely independent of any kind of context or view unless you need to display the information in some kind of graphics context.
It looks like Apple doesn't provide any classes that let you create, manipulate and change arbitrary geometric shapes in this way, but I wanted to come here and make sure.
CGPath and UIBezierPath can both be created without a current context. But it depends on what you want to do as to how much use they will be, because their purpose is really drawing. As such, it isn't very easy to get the points back out of the path once they have been added.
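As a small sketch of what does work without any context (using the C-level Core Graphics API that UIBezierPath wraps): you can build a path from an array of points, run geometric queries such as CGPathContainsPoint, and walk the elements back out with CGPathApply, although, as noted above, that is clumsier than Java's Path2D:

    // Build and query a CGPath with no graphics context (Apple platforms only).
    #include <CoreGraphics/CoreGraphics.h>
    #include <stddef.h>
    #include <stdio.h>

    static void printElement(void *info, const CGPathElement *element)
    {
        // Each element carries its type and the points that define it.
        if (element->type == kCGPathElementMoveToPoint ||
            element->type == kCGPathElementAddLineToPoint)
            printf("(%g, %g)\n", element->points[0].x, element->points[0].y);
    }

    int main(void)
    {
        CGPoint pts[] = { {0, 0}, {10, 0}, {10, 10}, {0, 10} };

        CGMutablePathRef path = CGPathCreateMutable();
        CGPathMoveToPoint(path, NULL, pts[0].x, pts[0].y);
        for (size_t i = 1; i < sizeof(pts) / sizeof(pts[0]); ++i)
            CGPathAddLineToPoint(path, NULL, pts[i].x, pts[i].y);
        CGPathCloseSubpath(path);

        // Geometric queries work without ever drawing the path.
        bool inside = CGPathContainsPoint(path, NULL, CGPointMake(5, 5), false);
        printf("contains (5,5): %d\n", inside);

        // Walk the elements back out.
        CGPathApply(path, NULL, printElement);

        CGPathRelease(path);
        return 0;
    }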