I want to implement NVIDIA Reflex in my Direct2D application. I have an ID2D1Device, but NvAPI_D3D_SetSleepMode requires a Direct3D device.
I know that Direct2D is built on top of Direct3D, so I think I should be able to acquire the Direct3D device from the Direct2D device, but I can't find any solution.
How do I get the Direct3D device from a Direct2D device? If I have misunderstood the concepts, please point me to the right ones. Thanks.
When you created the ID2D1Device, you had to start with a Direct3D device. Use that one.
// Obtain the underlying DXGI device of the Direct3D 11.1 device.
DX::ThrowIfFailed(
    m_d3dDevice.As(&dxgiDevice)
    );

// Obtain the Direct2D device for 2-D rendering.
DX::ThrowIfFailed(
    m_d2dFactory->CreateDevice(dxgiDevice.Get(), &m_d2dDevice)
    );

// And get its corresponding device context object.
DX::ThrowIfFailed(
    m_d2dDevice->CreateDeviceContext(
        D2D1_DEVICE_CONTEXT_OPTIONS_NONE,
        &m_d2dContext
        )
    );
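NvAPI_D3D_SetSleepMode then takes that same ID3D11Device directly. A minimal sketch of the Reflex call, assuming the public NVAPI/Reflex headers (the NV_SET_SLEEP_MODE_PARAMS fields below are as I recall them from those headers; verify against your SDK version):

#include <nvapi.h>

// Enable NVIDIA Reflex low-latency mode on the ID3D11Device the
// ID2D1Device was created from (error handling reduced to a bool).
bool EnableReflex(ID3D11Device* d3dDevice)
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;

    NV_SET_SLEEP_MODE_PARAMS params = {};
    params.version = NV_SET_SLEEP_MODE_PARAMS_VER;  // struct version constant from the headers
    params.bLowLatencyMode = true;                  // turn Reflex Low Latency on
    params.bLowLatencyBoost = false;
    params.minimumIntervalUs = 0;                   // no frame-rate cap

    return NvAPI_D3D_SetSleepMode(d3dDevice, &params) == NVAPI_OK;
}

As far as I remember, the SDK also expects NvAPI_D3D_Sleep(d3dDevice) to be called once per frame afterwards so the driver can actually apply the latency reduction.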
When calling IDXGIFactory1::CreateSwapChain with DXGI_FORMAT_B5G6R5_UNORM, I get an error that this format isn't supported, specifically E_INVALIDARG ("One or more arguments are invalid"). However, this works fine with a more standard format like DXGI_FORMAT_B8G8R8A8_UNORM.
I'm trying to understand how I can know which swap chain formats are supported. From digging around in the documentation, I can find lists of supported formats for "render targets", but this doesn't appear to be the same set of formats supported for swap chains. B5G6R5 does need 11.1 to have required support for most uses, but it's working as a render target.
https://learn.microsoft.com/en-us/previous-versions//ff471325(v=vs.85)
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-0-feature-level-hardware
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-1-feature-level-hardware
As a test, I looped through all formats and attempted to create swap chains with each. Of the 118 formats, only 8 appear to be supported on my machine (RTX 2070):
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_NV12
DXGI_FORMAT_YUY2
What is the proper way to know which swap chain formats are supported?
For additional context, I'm doing off-screen rendering to a 16-bit (565) format. I have an optional "preview window" that I open occasionally to quickly see the rendering results. When I create the window I create a swap chain and do a copy from the real render target into the swap chain back buffer. I'm targeting DirectX 11 or 11.1. I'm able to render to the B5G6R5 format just fine; it's only the swap chain that complains. I'm running Windows 10 1909.
Here's a Gist with resource creation snippets and a full code sample.
https://gist.github.com/akbyrd/c9d312048b49c5bd607ceba084d95bd0
For the swap chain, the format must be supported for "Display Scan-Out". If you need to check format support at runtime, you can use:
UINT formatSupport = 0;
if (FAILED(device->CheckFormatSupport(backBufferFormat, &formatSupport)))
    formatSupport = 0;

UINT32 required = D3D11_FORMAT_SUPPORT_RENDER_TARGET | D3D11_FORMAT_SUPPORT_DISPLAY;
if ((formatSupport & required) != required)
{
    // Not supported
}
For all Direct3D Hardware Feature Level devices, you can always count on DXGI_FORMAT_R8G8B8A8_UNORM working. Unless you are on Windows Vista or ancient WDDM 1.0 legacy drivers, you can also count on DXGI_FORMAT_B8G8R8A8_UNORM.
For Direct3D Hardware Feature Level 10.0 or better devices, you can also count on DXGI_FORMAT_R16G16B16A16_FLOAT and DXGI_FORMAT_R10G10B10A2_UNORM being supported.
You can also count on all Direct3D Hardware Feature Level devices supporting DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and DXGI_FORMAT_B8G8R8A8_UNORM_SRGB for swap-chains if you are using the 'legacy' swap effects. For modern swap effects which are required for DirectX 12 and recommended on Windows 10 for DirectX 11 (see this blog post), the swapchain buffer is not created with _SRGB but instead you create just the render target view with it.
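To make the last point concrete, here is a rough sketch of that pattern (error checks omitted; dxgiFactory, hwnd, width, and height are assumed to exist in your code, and device is the same ID3D11Device used above):

DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Width = width;
desc.Height = height;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;         // the buffer itself is not _SRGB
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;  // modern (flip-model) swap effect

Microsoft::WRL::ComPtr<IDXGISwapChain1> swapChain;
dxgiFactory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain);

// Do the sRGB conversion through the render-target view, not the buffer format.
Microsoft::WRL::ComPtr<ID3D11Texture2D> backBuffer;
swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));

CD3D11_RENDER_TARGET_VIEW_DESC rtvDesc(D3D11_RTV_DIMENSION_TEXTURE2D,
    DXGI_FORMAT_B8G8R8A8_UNORM_SRGB);
Microsoft::WRL::ComPtr<ID3D11RenderTargetView> renderTargetView;
device->CreateRenderTargetView(backBuffer.Get(), &rtvDesc, &renderTargetView);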
See Anatomy of Direct3D 11 Create Device
I found the gist below, which is simple and efficient: it uses func capturer(_ capturer: RTCVideoCapturer, didCapture frame: RTCVideoFrame) of RTCVideoCapturerDelegate. You get an RTCVideoFrame and then convert it to a CVPixelBuffer to modify.
https://gist.github.com/lyokato/d041f16b94c84753b5e877211874c6fc
However, I found that Chromium says nativeHandle for getting the pixel buffer is no longer available (link below). I tried frame.buffer.pixelBuffer..., but looking at framework > Headers > RTCVideoFrameBuffer.h, I found that CVPixelBuffer is also gone from there!
https://codereview.webrtc.org/2990253002
Is there any good way to convert an RTCVideoFrame to a CVPixelBuffer?
Or do we have a better way to modify the captured video from RTCCameraVideoCapturer?
The link below suggests modifying the SDK directly, but hopefully we can achieve this in Xcode.
How to modify (add filters to) the camera stream that WebRTC is sending to other peers/server
Can you specify what your expectation is? You can get the pixel buffer from an RTCVideoFrame easily, but I feel there can be a better solution: if you want to filter the video buffer that is sent to WebRTC, you should work with RTCVideoSource.
You can get the buffer as follows:
RTCCVPixelBuffer *buffer = (RTCCVPixelBuffer *)frame.buffer;
CVPixelBufferRef imageBuffer = buffer.pixelBuffer;
(with the latest SDK, and only for the local camera's video buffer)
But in the sample I can see that the filter will not work for remote streams.
I have attached the screenshot; this is how you can check the preview as well.
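To illustrate the RTCVideoSource approach, here is a rough Swift sketch (hedged: FilteringCapturerDelegate is my own name, the initializers follow the GoogleWebRTC iOS SDK I have used, and the actual filtering is left as a comment):

import WebRTC

final class FilteringCapturerDelegate: NSObject, RTCVideoCapturerDelegate {
    private let videoSource: RTCVideoSource

    init(videoSource: RTCVideoSource) {
        self.videoSource = videoSource
        super.init()
    }

    func capturer(_ capturer: RTCVideoCapturer, didCapture frame: RTCVideoFrame) {
        // Local camera frames are backed by RTCCVPixelBuffer; remote frames are not.
        guard let cvBuffer = frame.buffer as? RTCCVPixelBuffer else {
            videoSource.capturer(capturer, didCapture: frame)
            return
        }

        let pixelBuffer = cvBuffer.pixelBuffer
        // ... modify pixelBuffer here (e.g. with Core Image) ...

        let filteredFrame = RTCVideoFrame(buffer: RTCCVPixelBuffer(pixelBuffer: pixelBuffer),
                                          rotation: frame.rotation,
                                          timeStampNs: frame.timeStampNs)
        // RTCVideoSource itself conforms to RTCVideoCapturerDelegate, so it can
        // consume the modified frame and feed it to the local video track.
        videoSource.capturer(capturer, didCapture: filteredFrame)
    }
}

The capturer is then created with this delegate instead of the source itself, e.g. RTCCameraVideoCapturer(delegate: filteringDelegate), while the local video track keeps using the same RTCVideoSource.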
I'm trying to use the OSVR IR camera in OpenCV 3.1.
Initialization works OK.
Green LED is lit on camera.
When I call VideoCapture.read(mat) it returns false and mat is empty.
Other cameras work fine with the same code and VLC can grab the stream from the OSVR camera.
Some further testing reveals: grab() returns true, whereas retrieve(mat) again returns false.
Getting width and height from the camera yields expected results, but MODE and FORMAT get me 0.
Is this a config issue? Can it be solved by a combination of VideoCapture.set calls?
Alternative: an official answer received from the developers (after my own solution below):
The reason my camera didn't work out of the box with OpenCV might be that it has old firmware (pre-v7).
Workaround (or just update the firmware):
I found the answer here while browsing anything remotely linked to the issue:
Fastest way to get frames from webcam
You need to specify that it should use DirectShow.
VideoCapture capture( CV_CAP_DSHOW + id_of_camera );
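For reference, a minimal sketch of how I open and read it with that flag (assumes OpenCV 3.1 and camera index 0; adjust the index for your camera):

#include <opencv2/opencv.hpp>

int main()
{
    // Force the DirectShow backend by adding it to the camera index.
    cv::VideoCapture capture(cv::CAP_DSHOW + 0); // CV_CAP_DSHOW with the old C-style constants
    if (!capture.isOpened())
        return 1;

    cv::Mat frame;
    while (capture.read(frame))
    {
        cv::imshow("OSVR IR camera", frame);
        if (cv::waitKey(1) == 27) // Esc quits
            break;
    }
    return 0;
}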
I have been using ffmpeg to decode every single frame that I receive from my IP cam. The brief code looks like this:
-(void) decodeFrame:(unsigned char *)frameData frameSize:(int)frameSize{
    AVFrame frame;
    AVPicture picture;
    AVPacket pkt;
    int got_picture = 0;

    av_init_packet(&pkt);
    pkt.data = frameData;
    pkt.size = frameSize;

    avcodec_get_frame_defaults(&frame);
    avpicture_alloc(&picture, PIX_FMT_RGB24, targetWidth, targetHeight);

    // context is an AVCodecContext* that was opened with avcodec_open2 elsewhere
    avcodec_decode_video2(context, &frame, &got_picture, &pkt);
}
The code works fine, but it's software decoding. I want to improve decoding performance with hardware decoding. After a lot of research, I learned that it may be achievable with the AVFoundation framework.
The AVAssetReader class may help, but I can't figure out what comes next. Could anyone point out the following steps for me? Any help would be appreciated.
iOS does not provide any public access directly to the hardware decode engine, because hardware is always used to decode H.264 video on iOS.
Therefore, session 513 gives you all the information you need to allow frame-by-frame decoding on iOS. In short, per that session:
Generate individual network abstraction layer units (NALUs) from your H.264 elementary stream. There is much information on how this is done online. VCL NALUs (IDR and non-IDR) contain your video data and are to be fed into the decoder.
Re-package those NALUs according to the "AVCC" format, removing NALU start codes and replacing them with a 4-byte NALU length header.
Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs via CMVideoFormatDescriptionCreateFromH264ParameterSets()
Package NALU frames as CMSampleBuffers per session 513.
Create a VTDecompressionSessionRef, and feed VTDecompressionSessionDecodeFrame() with the sample buffers
Alternatively, use AVSampleBufferDisplayLayer, whose -enqueueSampleBuffer: method obviates the need to create your own decoder.
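To make the format-description and decompression-session steps above a bit more concrete, here is a rough sketch in C (hedged: the function and variable names below are my own, sps/pps and their sizes are assumed to come from your NALU parsing, and all error handling is omitted):

#include <CoreMedia/CoreMedia.h>
#include <VideoToolbox/VideoToolbox.h>

// Output callback: called once per decoded frame with a CVPixelBuffer.
static void DidDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon,
                          OSStatus status, VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer, CMTime presentationTimeStamp,
                          CMTime presentationDuration)
{
    // Use imageBuffer here when status == noErr.
}

static VTDecompressionSessionRef CreateSession(const uint8_t *sps, size_t spsSize,
                                               const uint8_t *pps, size_t ppsSize)
{
    // Build the format description from the SPS/PPS parameter set NALUs.
    CMVideoFormatDescriptionRef formatDesc = NULL;
    const uint8_t *const paramSets[2] = { sps, pps };
    const size_t paramSizes[2] = { spsSize, ppsSize };
    CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault,
        2, paramSets, paramSizes, 4 /* AVCC length-header size */, &formatDesc);

    // Create the decompression session with that description.
    VTDecompressionOutputCallbackRecord callback = { DidDecompress, NULL };
    VTDecompressionSessionRef session = NULL;
    VTDecompressionSessionCreate(kCFAllocatorDefault, formatDesc,
        NULL /* decoder spec */, NULL /* pixel buffer attrs */, &callback, &session);
    return session;
}

// Then, for each AVCC-framed access unit wrapped in a CMSampleBuffer:
//     VTDecompressionSessionDecodeFrame(session, sampleBuffer, 0, NULL, NULL);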
Edit:
This link provides a more detailed explanation of how to decode H.264 step by step: stackoverflow.com/a/29525001/3156169
Original answer:
I watched session 513, "Direct Access to Video Encoding and Decoding", from WWDC 2014 yesterday, and got the answer to my own question.
The speaker says:
We have Video Toolbox (in iOS 8). Video Toolbox has been there on OS X for a while, but now it's finally populated with headers on iOS. This provides direct access to encoders and decoders.
So, there is no way to do hardware decoding frame by frame in iOS 7, but it can be done in iOS 8.
Has anyone figured out how to directly access video encoding and decoding frame by frame in iOS 8?
My following code returns null:
byte[] image1 = _videoControl.getSnapshot(null);
Any suggestions, please?
A few important points about the VideoControl.getSnapshot method:
some manufacturers may not implement the getSnapshot() method;
the viewfinder must actually be visible on the screen prior to calling getSnapShot();
if you attempt to take pictures too quickly, however, getSnapShot() may return null. The camera requires time to clear out its buffer and prepare for the next shot;
you may check the MMAPI system property "video.snapshot.encodings" before capturing:
if (System.getProperty("video.snapshot.encodings") == null) {
// getSnapshot() is not supported
}
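Building on that check, a small hedged sketch of taking the snapshot with an explicit encoding instead of null (picking the first encoding from the property string is just an example choice):

String encodings = System.getProperty("video.snapshot.encodings");
if (encodings != null) {
    // e.g. "encoding=jpeg&width=640&height=480 encoding=jpeg&width=1024&height=768 ..."
    int space = encodings.indexOf(' ');
    String encoding = (space > 0) ? encodings.substring(0, space) : encodings;
    byte[] image1 = _videoControl.getSnapshot(encoding);
}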
You may read this chapter from the book "Advanced BlackBerry Development":
http://books.google.com/books?id=F4Qu-lpoVncC&pg=PA53&lpg=PA53#v=onepage&q&f=false
Since the VideoControl.getSnapshot method is not supported by all devices, I'd recommend using another approach. You can start the native BB Camera app with this line of code:
Invoke.invokeApplication(Invoke.APP_TYPE_CAMERA, new CameraArguments());
and then catch the taken image using a FileSystemJournalListener.
The BB SDK on your PC contains samples; search for the 'fileexplorerdemo' sample to see the rest of the details.
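As an illustration of that approach, here is a hedged sketch (class and method names follow the RIM API as I remember it; check them against your BB SDK, and remember to register the listener with your Application instance via addFileSystemJournalListener):

import net.rim.blackberry.api.invoke.CameraArguments;
import net.rim.blackberry.api.invoke.Invoke;
import net.rim.device.api.io.file.FileSystemJournal;
import net.rim.device.api.io.file.FileSystemJournalEntry;
import net.rim.device.api.io.file.FileSystemJournalListener;

public class CameraCaptureHelper implements FileSystemJournalListener {
    private long lastUSN = FileSystemJournal.getNextUSN();

    public void startCapture() {
        // Launch the native BB Camera app.
        Invoke.invokeApplication(Invoke.APP_TYPE_CAMERA, new CameraArguments());
    }

    public void fileJournalChanged() {
        long usn = FileSystemJournal.getNextUSN();
        for (long i = usn - 1; i >= lastUSN; i--) {
            FileSystemJournalEntry entry = FileSystemJournal.getEntry(i);
            if (entry != null
                    && entry.getEvent() == FileSystemJournalEntry.FILE_ADDED
                    && entry.getPath().endsWith(".jpg")) {
                String imagePath = entry.getPath(); // the photo the user just took
                // ... load or copy the image here ...
                break;
            }
        }
        lastUSN = usn;
    }
}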