DirectX render 2D image

I want to simply render a 2D image to the window. After hours of digging into DirectX, I still can't find a way to do it. Can I simply load the image into a buffer and then have the swap chain display that buffer?

The SimpleTexture sample on GitHub demonstrates exactly this scenario for C++.
DX11 Win32
DX11 UWP
DX12 Win32
DX12 UWP
As you are new to DirectX, you may also want to look at the DirectX Tool Kit for DX11 / DX12, which provides a very simple-to-use SpriteBatch class.
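For orientation, here is a minimal sketch of that approach for DX11, assuming the usual device / immediate context / swap-chain present loop already exists and that COM has been initialized (WICTextureLoader needs it). The function names are illustrative, not taken from any particular sample:

#include <memory>
#include <wrl/client.h>
#include <SpriteBatch.h>
#include <WICTextureLoader.h>

using Microsoft::WRL::ComPtr;

std::unique_ptr<DirectX::SpriteBatch> g_spriteBatch;
ComPtr<ID3D11ShaderResourceView>      g_imageSRV;

void CreateResources(ID3D11Device* device, ID3D11DeviceContext* context)
{
    g_spriteBatch = std::make_unique<DirectX::SpriteBatch>(context);

    // Loads a PNG/JPEG/BMP into a texture and creates a shader resource view.
    HRESULT hr = DirectX::CreateWICTextureFromFile(
        device, L"image.png", nullptr, g_imageSRV.GetAddressOf());
    // handle FAILED(hr) as appropriate for your app
}

void Render(ID3D11DeviceContext* context)
{
    // ... clear the render target as usual ...

    g_spriteBatch->Begin();
    g_spriteBatch->Draw(g_imageSRV.Get(), DirectX::XMFLOAT2(0.f, 0.f));
    g_spriteBatch->End();

    // ... then Present() the swap chain ...
}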

Related

How to use the Stencil and Cover method to draw SVGs with Metal on iOS?

I did some research on how to render vector graphics with Metal, but unfortunately I couldn't find any supporting resources. There is, however, an approach for OpenGL published by NVIDIA that renders SVGs using the stencil-and-cover method; read more here: http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/opengl/gpupathrender.pdf. I am wondering if someone could help me figure out the possibilities of rendering vector graphics with this approach in Metal.
There is no easy way to implement something like this, because Metal has no built-in support for vector graphics. You might have to compute everything on the CPU and rasterize the result.

Image processing using OpenCV on OpenGL graphics

As the title says, I want to know if there is a way to use OpenCV to process graphics created by OpenGL.
I am displaying thousands of points in real-time using OpenGL. Now I want to create clusters for those points and later point-tracking.
I have found this but couldn't understand it well.
Apart from that, on this page someone mentioned that "OpenCV generally operates on real image data, and wouldn't operate on graphics generated by OpenGL."
Is that true?
Below is one of the screenshots of the real-time output.
Use glReadPixels to copy the rendered result into main memory; then you can create a cv::Mat from that buffer.
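A minimal sketch of that readback, assuming a current GL context and a framebuffer of known size (8-bit BGR is used here simply because it matches OpenCV's default channel order):

// Include your usual GL header/loader; GL_BGR needs OpenGL 1.2+
// (or GL_BGR_EXT with the stock Windows header).
#include <opencv2/opencv.hpp>

cv::Mat ReadFramebuffer(int width, int height)
{
    cv::Mat img(height, width, CV_8UC3);

    // Match OpenCV's row stride so GL packs rows exactly the way cv::Mat expects.
    glPixelStorei(GL_PACK_ALIGNMENT, (img.step & 3) ? 1 : 4);
    glPixelStorei(GL_PACK_ROW_LENGTH, static_cast<GLint>(img.step / img.elemSize()));

    glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, img.data);

    glPixelStorei(GL_PACK_ROW_LENGTH, 0);   // restore default

    // OpenGL's origin is bottom-left; OpenCV expects top-left.
    cv::flip(img, img, 0);
    return img;
}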

DirectShow camera capturing to DirectX texture

I am using DirectShow to capture images from a USB-connected camera. My goal is to get the captured camera image into a DirectX 11 Texture2D so I can use it for rendering, and I would like this to happen automatically within the DirectShow graph, without the buffers ever being copied to user space.
I have looked at many examples and threads but could not see how to do exactly that. I find many recommendations to use Media Foundation instead, but that is not an option for this project at this point.
There seem to be examples of playback onto a DirectX 9 texture; maybe there is a way to get a "Dx9" texture out of "Dx11" and use it in the rendering later?
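On that last point, one commonly used bridge is to create the video render target as a shared D3D9Ex surface and open the same surface as an ID3D11Texture2D on the same adapter, so a D3D9-based renderer can write into a texture the D3D11 side samples from. The sketch below only shows the interop step; the DirectShow wiring and error handling are omitted, and the function name is illustrative:

#include <d3d9.h>
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Texture2D> OpenSharedFromD3D9(IDirect3DDevice9Ex* dev9,
                                           ID3D11Device* dev11,
                                           UINT width, UINT height)
{
    HANDLE sharedHandle = nullptr;
    ComPtr<IDirect3DTexture9> tex9;

    // D3DFMT_A8R8G8B8 corresponds to DXGI_FORMAT_B8G8R8A8_UNORM on the D3D11 side.
    // Shared handles require D3D9Ex and D3DPOOL_DEFAULT.
    if (FAILED(dev9->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                                   D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                                   tex9.GetAddressOf(), &sharedHandle)))
        return nullptr;

    // Open the same surface as a D3D11 texture.
    ComPtr<ID3D11Texture2D> tex11;
    if (FAILED(dev11->OpenSharedResource(sharedHandle,
                                         __uuidof(ID3D11Texture2D),
                                         reinterpret_cast<void**>(tex11.GetAddressOf()))))
        return nullptr;

    return tex11;
}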

Displaying a video stream to the oculus Rift

I'm trying to mod the Oculus World Demo to show a video stream from a camera rather than a pre-set graphic; however, I'm finding it difficult to work out the proper way to render a cv::IplImage or cv::Mat onto the Oculus screen. If anyone knows how to display an image on the Oculus, I would be very grateful. This is for the DK2.
Pure OpenCV isn't really well suited to rendering to the Rift, because you would need to manually implement the distortion mechanisms that are normally provided by the Oculus Rift SDK.
The best way to render an image from OpenCV onto the screen is to load the image into an OpenGL or Direct3D texture and use the 3D rendering API (GL or D3D) to place it into a rendered scene. There is an example of this in the GitHub repository for my book on Rift development.
In summary, it sets up the video capture using the OpenCV API and then launches a thread which is responsible for capturing images from the camera device. In the main thread, the draw call renders a simple 3D scene that includes the captured image. Most of the interesting Rift-related code is in the parent class, RiftApp.
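As a rough illustration of the "load the image into an OpenGL texture" step (names are illustrative; the capture thread, distortion, and Rift-specific scene setup are omitted):

// Include your usual GL header/loader alongside OpenCV.
#include <opencv2/opencv.hpp>

GLuint CreateCameraTexture()
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

void UploadFrame(GLuint tex, const cv::Mat& frame)   // frame is 8-bit BGR
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 frame.cols, frame.rows, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, frame.data);
    // The scene renderer then draws this texture on a quad inside the 3D scene.
}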

Photo booth in iOS. Using OpenCV or OpenGL ES?

I want to make an application that filters video, like Apple's Photo Booth app.
How can I do that?
Should I use OpenCV, OpenGL ES, or something else?
OpenCV and OpenGL have very different purposes:
OpenCV is a cross-platform computer vision library. It lets you easily work with image and video files and provides many tools and methods for handling them, applying filters, and performing various other image-processing techniques (plus some more cool stuff with images).
OpenGL is a cross-platform API to produce 2D/3D computer graphics. It is used to draw complex three-dimensional scenes from simple primitives.
If you want to apply cool effects to images, OpenCV is the way to go, since it provides tools and effects that can easily be combined to achieve the result you are looking for. This approach doesn't stop you from processing the image with OpenCV and then rendering the result in an OpenGL window (if you have to). Remember, they have different purposes, and every now and then somebody uses them together.
The point is that the effects you want to apply to the image should be done with OpenCV or any other image-processing library.
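As a small, hedged illustration of that workflow on the desktop (on iOS you would feed AVFoundation camera frames into a cv::Mat rather than using cv::VideoCapture; the filtering step is the same), here is a Photo Booth-style "thermal camera" effect written with OpenCV:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                 // default camera
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, thermal;
    while (cap.read(frame))
    {
        // "Thermal camera" look: grayscale + false-color mapping.
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::applyColorMap(gray, thermal, cv::COLORMAP_JET);

        cv::imshow("photo booth", thermal);
        if (cv::waitKey(1) == 27) break;     // Esc to quit
    }
    return 0;
}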
Actually karlphillip, although what you have said is correct, OpenGL can also be used to perform hardware-accelerated image processing.
Apple even has an OpenGL sample project called GLImageProcessing that includes hardware-accelerated brightness, contrast, saturation, hue, and sharpness adjustments.
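For a flavor of what that shader-based approach looks like, here is a hedged sketch of a brightness/contrast fragment shader of the kind GLImageProcessing demonstrates (GLSL ES, embedded as a C++ raw string; the uniform names are illustrative, not taken from Apple's sample):

// Applied while drawing the image as a textured quad; the GPU does the math.
static const char* kBrightnessContrastFS = R"(
    precision mediump float;
    varying vec2 vTexCoord;
    uniform sampler2D uImage;
    uniform float uBrightness;   // -1.0 .. 1.0, 0.0 = unchanged
    uniform float uContrast;     //  0.0 .. 2.0, 1.0 = unchanged
    void main()
    {
        vec3 color = texture2D(uImage, vTexCoord).rgb;
        color = (color - 0.5) * uContrast + 0.5 + uBrightness;
        gl_FragColor = vec4(clamp(color, 0.0, 1.0), 1.0);
    }
)";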
