I am aware that there are already a few questions asking this or something similar, and I dug into several of them, but without success.
I am trying to capture a "screenshot" of my display using the Desktop Duplication API and process its pixel data. Later I would like to do that at least 30 times per second, but that is a separate concern.
For now, I tried the Microsoft sample: https://github.com/microsoftarchive/msdn-code-gallery-microsoft/tree/master/Official%20Windows%20Platform%20Sample/DXGI%20desktop%20duplication%20sample
Using that sample, I successfully saved a picture of the screen and accessed the pixel data with this code:
DirectX::ScratchImage image;
hr = DirectX::CaptureTexture(m_Device, m_DeviceContext, m_AcquiredDesktopImage, image);
hr = DirectX::SaveToDDSFile(image.GetImages(), image.GetImageCount(), image.GetMetadata(), DirectX::DDS_FLAGS_NONE, L"test.dds");
uint8_t* pixels;
pixels = image.GetPixels();
Now I wanted to strip the sample code down to the essentials I need. As I am not familiar with DirectX, I am having a hard time doing that.
I came up with the following code, which runs without error but produces an empty picture. (I only check hr in the debugger; I am aware that this is bad, dirty practice!)
#include <d3d11.h>
#include <dxgi1_2.h>
#include <DirectXTex.h>

int main()
{
    HRESULT hr = S_OK;
    ID3D11Device* m_Device = nullptr;
    ID3D11DeviceContext* m_DeviceContext = nullptr;

    // Driver types supported
    D3D_DRIVER_TYPE DriverTypes[] =
    {
        D3D_DRIVER_TYPE_HARDWARE,
        D3D_DRIVER_TYPE_WARP,
        D3D_DRIVER_TYPE_REFERENCE,
    };
    UINT NumDriverTypes = ARRAYSIZE(DriverTypes);

    // Feature levels supported
    D3D_FEATURE_LEVEL FeatureLevels[] =
    {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_1
    };
    UINT NumFeatureLevels = ARRAYSIZE(FeatureLevels);
    D3D_FEATURE_LEVEL FeatureLevel;

    // Create device
    for (UINT DriverTypeIndex = 0; DriverTypeIndex < NumDriverTypes; ++DriverTypeIndex)
    {
        hr = D3D11CreateDevice(nullptr, DriverTypes[DriverTypeIndex], nullptr, 0, FeatureLevels, NumFeatureLevels,
                               D3D11_SDK_VERSION, &m_Device, &FeatureLevel, &m_DeviceContext);
        if (SUCCEEDED(hr))
        {
            // Device creation success, no need to loop anymore
            break;
        }
    }

    IDXGIOutputDuplication* m_DeskDupl = nullptr;
    IDXGIOutput1* DxgiOutput1 = nullptr;
    IDXGIOutput* DxgiOutput = nullptr;
    IDXGIAdapter* DxgiAdapter = nullptr;
    IDXGIDevice* DxgiDevice = nullptr;
    UINT Output = 0;

    hr = m_Device->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&DxgiDevice));
    hr = DxgiDevice->GetParent(__uuidof(IDXGIAdapter), reinterpret_cast<void**>(&DxgiAdapter));
    DxgiDevice->Release();
    DxgiDevice = nullptr;
    hr = DxgiAdapter->EnumOutputs(Output, &DxgiOutput);
    DxgiAdapter->Release();
    DxgiAdapter = nullptr;
    hr = DxgiOutput->QueryInterface(__uuidof(DxgiOutput1), reinterpret_cast<void**>(&DxgiOutput1));
    DxgiOutput->Release();
    DxgiOutput = nullptr;
    hr = DxgiOutput1->DuplicateOutput(m_Device, &m_DeskDupl);

    IDXGIResource* DesktopResource = nullptr;
    DXGI_OUTDUPL_FRAME_INFO FrameInfo;
    hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &DesktopResource);

    ID3D11Texture2D* m_AcquiredDesktopImage = nullptr;
    hr = DesktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&m_AcquiredDesktopImage));
    DesktopResource->Release();
    DesktopResource = nullptr;

    DirectX::ScratchImage image;
    hr = DirectX::CaptureTexture(m_Device, m_DeviceContext, m_AcquiredDesktopImage, image);
    hr = DirectX::SaveToDDSFile(image.GetImages(), image.GetImageCount(), image.GetMetadata(), DirectX::DDS_FLAGS_NONE, L"test.dds");

    uint8_t* pixels;
    pixels = image.GetPixels();

    hr = m_DeskDupl->ReleaseFrame();
}
Could anyone give me a hint what is wrong with this code?
EDIT:
I just found the code snippet below and integrated it into my code.
Now it works!
Lessons learnt:
-) actually output/process hr!
-) AcquireNextFrame might not succeed on the first try (?)
I might update this post again with better code and a functioning capture loop.
int lTryCount = 4;

do
{
    Sleep(100);
    hr = m_DeskDupl->AcquireNextFrame(250, &FrameInfo, &DesktopResource);
    if (SUCCEEDED(hr))
        break;
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
    {
        continue;
    }
    else if (FAILED(hr))
        break;
} while (--lTryCount > 0);
AcquireNextFrame is allowed to return a null resource (texture) because it returns on either a change in the desktop image or a change related to the mouse pointer.
AcquireNextFrame acquires a new desktop frame when the operating system either updates the desktop bitmap image or changes the shape or position of a hardware pointer.
When you start frame acquisition you will typically get the first desktop image soon, but you may also receive a few pointer-only notifications before it.
You should not limit yourself to 4 attempts, and you do not need to sleep within the loop; just keep polling for the image. To avoid an infinite loop it makes more sense to track the total time spent in the loop and cap it at, for example, one second.
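For example, a minimal sketch of such a time-bounded polling loop (reusing the variable names from the question; the 100 ms per-call timeout and the one-second budget are only illustrative) could look like this:

// keep polling until a real desktop image arrives, but no longer than ~1 second in total
const ULONGLONG timeBudgetMs = 1000;
const ULONGLONG start = GetTickCount64();
do
{
    hr = m_DeskDupl->AcquireNextFrame(100, &FrameInfo, &DesktopResource);
    if (FAILED(hr) && hr != DXGI_ERROR_WAIT_TIMEOUT)
        break;                                        // real error, give up
    if (SUCCEEDED(hr))
    {
        if (DesktopResource && FrameInfo.LastPresentTime.QuadPart != 0)
            break;                                    // a new desktop image was accumulated
        // pointer-only notification: release it and keep polling
        if (DesktopResource) { DesktopResource->Release(); DesktopResource = nullptr; }
        m_DeskDupl->ReleaseFrame();
    }
} while (GetTickCount64() - start < timeBudgetMs);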
See also:
AcquireNextFrame() never grabs an updated image, always blank
Related
I am working on a project in which I have to store the data from an ADC stream on a µSD card. However, even though I use a 16-bit buffer, I lose data from the ADC stream. The ADC runs with DMA, and I use FatFs (WITHOUT DMA) and the SDMMC1 peripheral to fill a .bin file with the data.
Do you have an idea how to avoid this loss?
Here is my project: https://github.com/mathieuchene/STM32H743ZI
I use a NUCLEO-H743ZI2 board and the latest versions of CubeIDE and CubeMX.
EDIT 1
I tried to implement Colin's solution. It is better, but I get strange artifacts in the middle of my acquisition. Moreover, when I increase the maximum count value or try to debug, HardFault_Handler is triggered. I modified the main.c file by creating 2 blocks (uint16_t blockX[BUFFERLENGTH/2]) and 2 flags for when adcBuffer is half filled or completely filled.
I also changed the while(1) part of the main function like this:
if (flagHlfCplt){
    //flagCplt=0;
    res = f_write(&SDFile, block1, strlen((char*)block1), (void *)&byteswritten);
    memcpy(block2, adcBuffer, BUFFERLENGTH/2);
    flagHlfCplt = 0;
    count++;
}
if (flagCplt){
    //flagHlfCplt=0;
    res = f_write(&SDFile, block2, strlen((char*)block2), (void *)&byteswritten);
    memcpy(block1, adcBuffer[(BUFFERLENGTH/2)-1], BUFFERLENGTH/2);
    flagCplt = 0;
    count++;
}
if (count == 10){
    f_close(&SDFile);
    HAL_ADC_Stop_DMA(&hadc1);
    while(1){
        HAL_GPIO_TogglePin(LD1_GPIO_Port, LD1_Pin);
        HAL_Delay(1000);
    }
}
}
EDIT 2
I modified my program. I set block1 and block2 to a length of BUFFERLENGTH and added a pointer (*idx) to switch which buffer is being filled. I no longer get HardFault_Handler, but I still lose some data from my ADC stream.
Here are the modifications I made:
// my pointer and buffers
uint16_t block1[BUFFERLENGTH], block2[BUFFERLENGTH], *idx;

// init of pointer and adc start
idx = block1;
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)idx, BUFFERLENGTH);

// while(1) part
while (1)
{
    if (flagCplt){
        if (flagToChangeBuffer) {
            idx = block1;
            res = f_write(&SDFile, block2, strlen((char*)block2), (void *)&byteswritten);
            flagCplt = 0;
            flagToChangeBuffer = 0;
            count++;
        }
        else {
            idx = block2;
            res = f_write(&SDFile, block1, strlen((char*)block1), (void *)&byteswritten);
            flagCplt = 0;
            flagToChangeBuffer = 1;
            count++;
        }
    }
    if (count == 150){
        f_close(&SDFile);
        HAL_ADC_Stop_DMA(&hadc1);
        while(1){
            HAL_GPIO_TogglePin(LD1_GPIO_Port, LD1_Pin);
            HAL_Delay(1000);
        }
    }
}
Does anyone know how to solve this data loss?
Best Regards
Mathieu
I'm building a dialog-based MFC application.
When I click the button, I want to choose an image from the file explorer, load it with cv::imread(), and show it in the Picture Control.
I can load and show an image in the Picture Control with the following code:
void CMFCApplication3Dlg::OnBnClickedButton1()
{
    cv::Mat src = cv::imread("D:/source/repos/Testing_Photos/Large/cavalls.png");
    Display(src);
}
But not with the following code.
void CMFCApplication3Dlg::OnBnClickedButton1()
{
    TCHAR szFilter[] = _T("PNG (*.png)|*.png|JPEG (*.jpg)|*.jpg|Bitmap (*.bmp)|*.bmp||");
    CFileDialog dlg(TRUE, NULL, NULL, OFN_HIDEREADONLY | OFN_OVERWRITEPROMPT, szFilter, AfxGetMainWnd());
    if (dlg.DoModal() == IDC_BUTTON1)
    {
        CString cstrImgPath = dlg.GetPathName();
        CT2CA pszConvertedAnsiString(cstrImgPath);
        std::string strStd(pszConvertedAnsiString);
        cv::Mat src = cv::imread(strStd);
        Display(src);
    }
}
And the following is the "Display" function.
void CMFCApplication3Dlg::Display(cv::Mat& mat) {
    CStatic* PictureB = (CStatic*)GetDlgItem(IDC_STATIC);
    CWnd* cwn = (CWnd*)GetDlgItem(IDC_STATIC);
    CDC* wcdc = cwn->GetDC();
    HDC whdc = wcdc->GetSafeHdc();
    RECT rec;
    cwn->GetClientRect(&rec);
    cv::Size matSize;
    matSize = cv::Size(rec.right, rec.bottom);

    BITMAPINFO bitmapinfo;
    bitmapinfo.bmiHeader.biBitCount = 24;
    bitmapinfo.bmiHeader.biWidth = mat.cols;
    bitmapinfo.bmiHeader.biHeight = -mat.rows;
    bitmapinfo.bmiHeader.biPlanes = 1;
    bitmapinfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bitmapinfo.bmiHeader.biCompression = BI_RGB;
    bitmapinfo.bmiHeader.biClrImportant = 0;
    bitmapinfo.bmiHeader.biClrUsed = 0;
    bitmapinfo.bmiHeader.biSizeImage = 0;
    bitmapinfo.bmiHeader.biXPelsPerMeter = 0;
    bitmapinfo.bmiHeader.biYPelsPerMeter = 0;

    StretchDIBits(
        whdc,
        0, 0,
        matSize.width, matSize.height,
        0, 0,
        mat.cols, mat.rows,
        mat.data,
        &bitmapinfo,
        DIB_RGB_COLORS,
        SRCCOPY
    );
}
I am very new to MFC and I really have no idea where I went wrong.
Please help me!
Thank you.
This is not the correct method to paint. The picture will be erased every time there is a paint request.
That's what happens with CFileDialog, which forces another repaint immediately after you paint the image. You have to respond to OnPaint or OnDrawItem, or use CStatic::SetBitmap as noted in the comments.
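For instance, a minimal sketch of the OnPaint route, assuming a wizard-generated CDialogEx dialog with ON_WM_PAINT() in its message map and a hypothetical cv::Mat member (here called m_src) that holds the loaded image, could reuse the question's Display helper:

void CMFCApplication3Dlg::OnPaint()
{
    CDialogEx::OnPaint();   // default dialog painting first
    if (!m_src.empty())     // m_src: hypothetical member filled in OnBnClickedButton1
        Display(m_src);     // redraw the image on every paint request
}

OnBnClickedButton1 then only stores the image in m_src and calls Invalidate() instead of drawing directly.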
You can do this with the CImage class; there is no need for OpenCV:
CImage img;
if (S_OK == img.Load(L"unicode.jpg"))
{
    CStatic* control = (CStatic*)GetDlgItem(IDC_STATIC);
    control->ModifyStyle(0, SS_BITMAP);
    auto oldbmp = control->SetBitmap(img.Detach());
    if (oldbmp)
        DeleteObject(oldbmp);
}
Or you can use OpenCV and create the HBITMAP handle yourself.
Note that OpenCV does not handle Unicode filenames. CW2A converts Unicode to ANSI character encoding; this fails if one or more code points cannot be represented in the currently active code page. To work around it, we can open the file with CFile or std::ifstream, read it as binary, then decode it with cv::imdecode instead. Example:
//open the file from unicode path:
const wchar_t* filename = L"unicode.jpg";
std::ifstream fin(filename, std::ios::binary);
if (!fin.good())
    return;

//read from memory
std::vector<char> vec(std::istreambuf_iterator<char>(fin), {});
cv::Mat src = cv::imdecode(vec, cv::IMREAD_COLOR);
if (!src.data)
    return;

//create hbitmap
BITMAPINFOHEADER bi = { sizeof(bi), src.cols, -src.rows, 1, 24 };
CClientDC dc(this);
auto hbmp = CreateDIBitmap(dc, &bi, CBM_INIT, src.data, (BITMAPINFO*)&bi, 0);

//send hbitmap to control
auto control = (CStatic*)GetDlgItem(IDC_STATIC);
auto oldbmp = control->SetBitmap(hbmp);
if (oldbmp)
    DeleteObject(oldbmp);
I'm using Direct3D and DXGI to write a VNC program, and I want to minimize resource transfers between video RAM and system RAM. But I'm confused by part of the following code. Since the texture acquired by AcquireNextFrame does not have staging usage and cannot be read by the CPU, I have to copy it to a staging texture. What exactly does CopyResource do when it copies from a default-usage resource to a staging-usage resource? Does it allocate some memory in video RAM for the staging resource, copy to that area, and then, when I call Map, copy the texture from there to system RAM, or does it copy the resource directly to system RAM?
DUPL_RETURN DUPLICATIONMANAGER::GetFrame(D3D11_MAPPED_SUBRESOURCE* rawBits, bool* Timeout) {
    IDXGIResource* DesktopResource = nullptr;
    DXGI_OUTDUPL_FRAME_INFO FrameInfo;

    // Get new frame
    HRESULT hr = m_DeskDupl->AcquireNextFrame(0, &FrameInfo, &DesktopResource);
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
    {
        *Timeout = true;
        return DUPL_RETURN_SUCCESS;
    }
    *Timeout = false;

    if (FAILED(hr))
    {
        return ProcessFailure(m_Device, L"Failed to acquire next frame in DUPLICATIONMANAGER", L"Error", hr, FrameInfoExpectedErrors);
    }

    // If still holding old frame, destroy it
    if (m_AcquiredDesktopImage)
    {
        m_AcquiredDesktopImage->Release();
        m_AcquiredDesktopImage = nullptr;
    }

    // QI for IDXGIResource
    hr = DesktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void **>(&m_AcquiredDesktopImage));
    DesktopResource->Release();
    DesktopResource = nullptr;
    if (FAILED(hr))
    {
        return ProcessFailure(nullptr, L"Failed to QI for ID3D11Texture2D from acquired IDXGIResource in DUPLICATIONMANAGER", L"Error", hr);
    }

    ID3D11Texture2D* StagingTex = nullptr;
    D3D11_TEXTURE2D_DESC desc;
    m_AcquiredDesktopImage->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.BindFlags = 0;
    desc.MiscFlags = 0;
    hr = m_Device->CreateTexture2D(&desc, nullptr, &StagingTex);
    if (FAILED(hr)) {
        return ProcessFailure(m_Device, L"Failed to create texture for extracting raw data", L"Error", hr, SystemTransitionsExpectedErrors);
    }

    m_DeviceContext->CopyResource(StagingTex, m_AcquiredDesktopImage);

    UINT subresource = D3D11CalcSubresource(0, 0, 0);
    hr = m_DeviceContext->Map(StagingTex, subresource, D3D11_MAP_READ, 0, rawBits);
    if (FAILED(hr)) {
    }
    m_DeviceContext->Unmap(StagingTex, subresource);
    StagingTex->Release();

    return DUPL_RETURN_SUCCESS;
}
I use GetDIBits() to retrieve the bits of a bitmap and copy them into a buffer. The function doesn't fail (the return value is nonzero), but I get the wrong bitmap height and an erroneous buffer.
This is part of my code:
HDC hdcMemDC = CreateCompatibleDC(hDC); // hDC = BeginPaint(hwnd, &ps); I get it in WM_PAINT
int l_uiWidth = 400;
int l_uiHeight = 120;
HBITMAP hbmp = CreateCompatibleBitmap(hDC, l_uiWidth, l_uiHeight);
HGDIOBJ oldhbmp = SelectObject(hdcMemDC,hbmp);
BITMAPINFO bi;
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bi.bmiHeader.biWidth = l_uiWidth;
bi.bmiHeader.biHeight = l_uiHeight;
bi.bmiHeader.biPlanes = 1;
bi.bmiHeader.biBitCount = 8;
bi.bmiHeader.biCompression = BI_RGB;
bi.bmiHeader.biSizeImage = 0;
bi.bmiHeader.biXPelsPerMeter = 0;
bi.bmiHeader.biYPelsPerMeter = 0;
bi.bmiHeader.biClrUsed = 256;
bi.bmiHeader.biClrImportant = 0;
BYTE *l_ImagePDM = new BYTE[l_uiWidth * l_uiHeight];
GetDIBits(hdcMemDC, hbmp, 0, l_uiHeight, l_ImagePDM, &bi, DIB_RGB_COLORS);
Please help me!
What's wrong with my code?
You have not drawn anything on the bitmap that you are querying the bits for. CreateCompatibleBitmap() does not make a copy of the pixels of the source HDC. It merely allocates an HBITMAP that is compatible with the specified HDC, which then allows you to select the HBITMAP into that HDC. But you still have to draw something onto the bitmap to make its content meaningful before you can then query its bits.
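Building on that, a minimal sketch that reuses the question's variables could first blit the window contents into the memory bitmap and only then read the bits. Note that for an 8-bpp DIB, GetDIBits also writes a 256-entry colour table, so the BITMAPINFO passed to it needs room for one; the bi8 struct below is illustrative:

// 1) Draw something into the bitmap first, e.g. copy the window contents:
BitBlt(hdcMemDC, 0, 0, l_uiWidth, l_uiHeight, hDC, 0, 0, SRCCOPY);

// 2) Give GetDIBits room for the 256-entry colour table of an 8-bpp DIB:
struct { BITMAPINFOHEADER bmiHeader; RGBQUAD bmiColors[256]; } bi8 = {};
bi8.bmiHeader = bi.bmiHeader;                 // reuse the header filled in above

// 3) Deselect the bitmap before reading its bits (recommended by the GetDIBits docs):
SelectObject(hdcMemDC, oldhbmp);
int lines = GetDIBits(hdcMemDC, hbmp, 0, l_uiHeight, l_ImagePDM, (BITMAPINFO*)&bi8, DIB_RGB_COLORS);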
When you use C++Builder you can use the Graphics unit for processing graphics.
Example of copying something from a window:
void __fastcall TForm1::Button1Click(TObject *Sender)
{
    // Create a bitmap
    Graphics::TBitmap* bit = new Graphics::TBitmap();
    bit->PixelFormat = pf24bit;
    bit->HandleType = bmDIB;
    bit->Width = 200;
    bit->Height = 200; // or bit->SetSize(200, 200) in newer versions of C++Builder

    // Copy something from this Form (from the window)
    bit->Canvas->CopyRect(TRect(0, 0, 200, 200), this->Canvas, TRect(0, 0, 200, 200));

    // Do something with the bitmap data
    RGBTRIPLE* line;
    for (int i = 0; i < bit->Height; i++)
    {
        // Get the memory address of line i
        line = (RGBTRIPLE*) bit->ScanLine[i];
        // Change the 5th pixel
        line[5].rgbtRed = 255;
    }

    // Get the whole bitmap memory. Bitmap data are stored upside down and the size of each row
    // is rounded up to a multiple of 4 bytes.
    unsigned char* bitmem = (unsigned char*)(bit->ScanLine[ bit->Height-1 ]);
    //...

    // Draw the bitmap on the Form
    Canvas->Draw(0, 200, bit);
    delete bit;
}
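As a side note on the row padding mentioned in the comment above, the usual DIB stride computation is (a small illustration, not part of the original example):

// Stride (bytes per row) of a 24-bit DIB: each row is padded up to a multiple of 4 bytes.
int stride = ((bit->Width * 3) + 3) & ~3;   // e.g. Width = 200 -> 600 bytes (already aligned), Width = 201 -> 604
// Pixel (x, y) can then be reached as: (unsigned char*)bit->ScanLine[y] + 3 * x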
When you have a device context you can use TCanvas like this:
void __fastcall TForm1::Button2Click(TObject *Sender)
{
    // Get a device context
    HDC hdc = GetDC(this->Handle); // or GetWindowDC - whole window including the frame

    // Create a canvas and associate the device context with it
    TCanvas* canv = new TCanvas();
    canv->Handle = hdc;

    // Create a bitmap
    Graphics::TBitmap* bit = new Graphics::TBitmap();
    bit->PixelFormat = pf24bit;
    bit->HandleType = bmDIB;
    bit->Width = this->ClientWidth;
    bit->Height = this->ClientHeight; // or bit->SetSize(w, h) in newer versions of C++Builder

    // Copy the window content
    TRect r(0, 0, this->ClientWidth, this->ClientHeight);
    bit->Canvas->CopyRect(r, canv, r); // USING TCanvas

    // Release the context and delete the canvas
    ReleaseDC(this->Handle, hdc);
    delete canv;

    // Do something with the bitmap
    bit->SaveToFile("screenshot.bmp");
    delete bit;
}
You can also use TCanvas with a device context obtained from BeginPaint:
canv->Handle = BeginPaint(hwnd, &ps);
I'm using cvSetData to get the RGB frame into an IplImage I can use with OpenCV.
I modified the SkeletalViewer sample slightly to produce the RGB stream.
void CSkeletalViewerApp::Nui_GotVideoAlert( )
{
    const NUI_IMAGE_FRAME * pImageFrame = NULL;
    IplImage* kinectColorImage = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 4);

    HRESULT hr = NuiImageStreamGetNextFrame(
        m_pVideoStreamHandle,
        0,
        &pImageFrame );
    if( FAILED( hr ) )
    {
        return;
    }

    NuiImageBuffer * pTexture = pImageFrame->pFrameTexture;
    KINECT_LOCKED_RECT LockedRect;
    pTexture->LockRect( 0, &LockedRect, NULL, 0 );
    if( LockedRect.Pitch != 0 )
    {
        BYTE * pBuffer = (BYTE*) LockedRect.pBits;
        m_DrawVideo.DrawFrame( (BYTE*) pBuffer );
        cvSetData(kinectColorImage, (BYTE*) pBuffer, kinectColorImage->widthStep);
        cvShowImage("Color Image", kinectColorImage);
        //cvReleaseImage( &kinectColorImage );
        cvWaitKey(10);
    }
    else
    {
        OutputDebugString( L"Buffer length of received texture is bogus\r\n" );
    }

    cvReleaseImage(&kinectColorImage);
    NuiImageStreamReleaseFrame( m_pVideoStreamHandle, pImageFrame );
}
With cvReleaseImage in place, I get a cv::Exception error; I'm not sure exactly which one, as it wasn't specified. Without cvReleaseImage, the RGB video runs in an OpenCV window but eventually crashes because it runs out of memory.
How should I release the image properly?
Just solved this problem.
After a bunch of sleuthing using breakpoints and debugging, it appears as though the problem has to do with the pointers used in cvSetData. My best guess is that Nui_GotVideoAlert() updates the address pointed to by pBuffer before cvReleaseImage is called. In addition, cvSetData never appears to copy the bytes from this address.
What happens then is that cvReleaseImage is called on an address that is no longer valid.
I fixed this by declaring kinectColorImage at the top of NuiImpl.cpp, calling cvSetData in ::Nui_GotVideoAlert(), and only calling cvReleaseImage in the Nui_Uninit() method. This way, kinectColorImage will just update instead of creating a new IplImage in each call of Nui_GotVideoAlert().
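A rough sketch of that arrangement (function names follow the SkeletalViewer sample as used in the question; exactly where the image is created is an assumption, here Nui_Init()):

// NuiImpl.cpp -- one file-scope image, reused for every frame
static IplImage* kinectColorImage = NULL;

// in Nui_Init() (assumed creation point):
//     kinectColorImage = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 4);

void CSkeletalViewerApp::Nui_GotVideoAlert( )
{
    // ... acquire the frame and LockedRect exactly as before ...
    cvSetData(kinectColorImage, (BYTE*)LockedRect.pBits, kinectColorImage->widthStep);
    cvShowImage("Color Image", kinectColorImage);
    cvWaitKey(10);
    // no cvReleaseImage here -- the same header is reused on the next frame
    NuiImageStreamReleaseFrame( m_pVideoStreamHandle, pImageFrame );
}

// in Nui_Uninit():
//     cvReleaseImage(&kinectColorImage);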
That's strange. As far as I know, cvReleaseImage releases both the image header and the image data. I wrote the piece of code below, and in this particular example cvReleaseImage does not free the buffer that contains the data. There I didn't use cvSetData; I just updated the pointer to the image data. If you uncomment the commented lines and comment out the ones just below each of them, the program still runs, but you'll get some memory leaks. I used OpenCV 2.2 (this is the legacy interface).
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdlib.h>

#define NLOOPS 1000

int main(void){
    int i, j;
    char *buff = (char *) malloc( sizeof(char) * 3 * 640 * 480 );
    for( i = 0; i < 640 * 480 * 3; i++ ) buff[i] = 128;
    j = 0;
    while( j++ < NLOOPS ){
        IplImage *im = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 3);
        //cvSetData(im, buff, im->widthStep); ---> If you use this version you'll get memory leaks. Comment out the line below.
        im->imageData = buff;
        cvWaitKey(4);
        cvShowImage("kk", im);
        //cvReleaseImageHeader(&im); ---> If you use this version you'll get memory leaks. Comment out the line below.
        cvReleaseImage(&im);
        free(im);
    }
    free(buff);
    return 0;
}