Desktop capture with DirectX does not work

Because processing with D3DPOOL_SCRATCH was slow, I wrote this desktop capture program based on a reference I found on the Internet. However, the result is a pitch-black picture. Is this because it is a console program, or is there some other cause?
#include <stdio.h>
#include <Windows.h>
#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

int main()
{
    CoInitialize(NULL);
    LPDIRECT3D9 d3d9;
    LPDIRECT3DDEVICE9 d3ddev;
    d3d9 = Direct3DCreate9(D3D_SDK_VERSION);
    int ww = GetSystemMetrics(SM_CXSCREEN);
    int wh = GetSystemMetrics(SM_CYSCREEN);
    HWND hwnd = GetDesktopWindow();
    D3DPRESENT_PARAMETERS d3dpp;
    ZeroMemory(&d3dpp, sizeof(d3dpp));
    d3dpp.Windowed = TRUE;
    d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    d3dpp.BackBufferFormat = D3DFMT_UNKNOWN;
    d3dpp.BackBufferCount = 1;
    d3dpp.BackBufferWidth = ww;
    d3dpp.BackBufferHeight = wh;
    d3dpp.MultiSampleType = D3DMULTISAMPLE_NONE;
    d3dpp.MultiSampleQuality = 0;
    d3dpp.EnableAutoDepthStencil = TRUE;
    d3dpp.AutoDepthStencilFormat = D3DFMT_D16;
    d3dpp.hDeviceWindow = hwnd;
    d3dpp.Flags = D3DPRESENTFLAG_LOCKABLE_BACKBUFFER;
    d3dpp.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;
    d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_DEFAULT;
    d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd, D3DCREATE_HARDWARE_VERTEXPROCESSING, &d3dpp, &d3ddev);
    IDirect3DSurface9* render;
    IDirect3DSurface9* dest;
    d3ddev->CreateOffscreenPlainSurface(ww, wh, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &dest, NULL);
    d3ddev->GetRenderTarget(0, &render);
    d3ddev->GetRenderTargetData(render, dest);
    D3DLOCKED_RECT bits;
    dest->LockRect(&bits, NULL, D3DLOCK_READONLY);
    // If the capture succeeds, bytes other than black (0x00) should appear here.
    for (int i = 0; i < 100; i++) {
        printf("%02X ", *((BYTE*)bits.pBits + i));
    }
    dest->UnlockRect();
    render->Release();
    dest->Release();
    d3ddev->Release();
    d3d9->Release();
    CoUninitialize();
    return 0;
}

This has nothing to do with the application type. If you want to get the desktop image data, you should use the following function:
GetFrontBufferData
So instead of calling
d3ddev->GetRenderTarget(0, &render);
d3ddev->GetRenderTargetData(render, dest);
you should call
d3ddev->GetFrontBufferData(0, dest);
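For completeness, a minimal sketch of that part of the capture path with the change applied, reusing ww, wh, d3ddev and dest from the code above and checking the results this time:
// dest was created above as D3DFMT_A8R8G8B8 in D3DPOOL_SYSTEMMEM, which is
// exactly what GetFrontBufferData requires (sized to the whole screen).
HRESULT hr = d3ddev->GetFrontBufferData(0, dest); // 0 = first swap chain
if (SUCCEEDED(hr)) {
    D3DLOCKED_RECT bits;
    if (SUCCEEDED(dest->LockRect(&bits, NULL, D3DLOCK_READONLY))) {
        // bits.pBits now points at BGRA pixels; bits.Pitch is the row stride.
        dest->UnlockRect();
    }
}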

If you are running on a version of Windows before Windows 8, the GetFrontBufferData API works fine, as given by another answer. However, starting with Windows 8, there is a new desktop duplication API that can be used to capture the desktop in video memory, including mouse cursor changes and which parts of the screen actually changed or moved. This is far more performant than the D3D9 approach and is really well-suited to doing things like encoding the desktop to a video stream, since you never have to pull the texture out of GPU memory. The new API is available by enumerating DXGI outputs and calling DuplicateOutput on the screen you want to capture. Then you can enter a loop that waits for the screen to update and acquires each frame in turn.
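As a rough sketch of how the duplication setup fits together (an outline only, with error handling omitted; the 500 ms timeout and the choice of the first adapter and output are arbitrary placeholders):
#include <d3d11.h>
#include <dxgi1_2.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")

// Sketch: duplicate the primary output and acquire one frame in video memory.
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                  nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

IDXGIDevice* dxgiDevice = nullptr;
device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
IDXGIAdapter* adapter = nullptr;
dxgiDevice->GetAdapter(&adapter);
IDXGIOutput* output = nullptr;
adapter->EnumOutputs(0, &output);                 // primary monitor
IDXGIOutput1* output1 = nullptr;
output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);

IDXGIOutputDuplication* duplication = nullptr;
output1->DuplicateOutput(device, &duplication);

// Wait up to 500 ms for the next desktop update, then grab it.
DXGI_OUTDUPL_FRAME_INFO frameInfo;
IDXGIResource* resource = nullptr;
if (SUCCEEDED(duplication->AcquireNextFrame(500, &frameInfo, &resource))) {
    ID3D11Texture2D* desktopTexture = nullptr;
    resource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&desktopTexture);
    // desktopTexture stays in video memory; copy it or feed it to an encoder here.
    desktopTexture->Release();
    resource->Release();
    duplication->ReleaseFrame();                  // release before acquiring the next frame
}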
If you want to encode the frames to a video, I'd recommend taking a look at Media Foundation rather than DirectShow, since Media Foundation has a lot better performance and can use D3D11 rather than D3D9 to do the encoding. It's also a bit easier to use, depending on your scenario. Take a look at Media Foundation's Sink Writer for the simplest method of encoding the video frames.
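To give an idea of the shape of the Sink Writer setup, here is a sketch under assumptions: the "output.mp4" file name, the 1920x1080 frame size, the 60 fps rate and the 8 Mbit/s bitrate are placeholder values, and the per-frame IMFSample creation is omitted.
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

// Sketch: create a Sink Writer that encodes uncompressed RGB32 frames to H.264.
MFStartup(MF_VERSION);

IMFSinkWriter* writer = nullptr;
MFCreateSinkWriterFromURL(L"output.mp4", nullptr, nullptr, &writer);

// Output type: H.264 at the capture resolution and frame rate.
IMFMediaType* outType = nullptr;
MFCreateMediaType(&outType);
outType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
outType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
outType->SetUINT32(MF_MT_AVG_BITRATE, 8000000);
outType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
MFSetAttributeSize(outType, MF_MT_FRAME_SIZE, 1920, 1080);
MFSetAttributeRatio(outType, MF_MT_FRAME_RATE, 60, 1);
DWORD streamIndex = 0;
writer->AddStream(outType, &streamIndex);

// Input type: the uncompressed frames you feed in from the desktop capture.
IMFMediaType* inType = nullptr;
MFCreateMediaType(&inType);
inType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
inType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
inType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
MFSetAttributeSize(inType, MF_MT_FRAME_SIZE, 1920, 1080);
MFSetAttributeRatio(inType, MF_MT_FRAME_RATE, 60, 1);
writer->SetInputMediaType(streamIndex, inType, nullptr);

writer->BeginWriting();
// For each captured frame: wrap it in an IMFSample and call
// writer->WriteSample(streamIndex, sample);
writer->Finalize();
MFShutdown();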
For full disclosure, I work on the team that owns the desktop duplication API at Microsoft, and I've personally written apps that capture the desktop (and video, games, etc.) to a video file at 60fps using this technique, as well as a lot of other scenarios.

Related

GpuMat to FFMPEG Encoder

I'm doing some image processing with OpenCV CUDA, so what I end up with is a cv::cuda::GpuMat. I now want to encode it using FFmpeg (so I can choose whether the encoder is hardware accelerated or not). I wonder if I can somehow keep the data on the GPU for the encoder without downloading it, because that seems to be the bottleneck in my application, which runs multiple threads.
I'm resizing the images with OpenCV CUDA so I have less to download (resizing with sws_scale makes no difference).
cv::cuda::GpuMat currentFrame;
...
cv::cuda::GpuMat resized;
cv::cuda::resize(currentFrame, resized, cv::Size(width * 0.75, height * 0.75), 0, 0, cv::INTER_NEAREST);
cv::Mat frameEnc = cv::Mat(resized);
const int stride[] = { static_cast<int>(frameEnc.step[0]) };
sws_scale(swsctx, &frameEnc.data, stride, 0, frameEnc.rows, avframe->data, avframe->linesize);
ret = avcodec_send_frame(codec, avframe);
if (!ret) {
    /* rescale packet timestamp */
    pkt->duration = 1;
    av_packet_rescale_ts(pkt, codec->time_base, vstrm->time_base);
    /* write packet */
    av_write_frame(outctx, pkt);
}
Now this does work and performs OK, but I really wish I could do something like:
cv::cuda::GpuMat currentFrame;
...
GpuMatToAvFrame(currentFrame, avframe);
ret = avcodec_send_frame(codec, avframe);
if (!ret) {
    /* rescale packet timestamp */
    pkt->duration = 1;
    av_packet_rescale_ts(pkt, codec->time_base, vstrm->time_base);
    /* write packet */
    av_write_frame(outctx, pkt);
}
where the avframe data is also on the GPU, so that I don't need any GPU-CPU/CPU-GPU transfer at all.
I think the class cv::cudacodec::VideoWriter could help, once an issue with OpenCV gets fixed. The class allows you to write a GpuMat directly. However, I believe that due to a bug in OpenCV you currently can't build OpenCV with support for this class, which means this isn't a great solution right now, but it might be in the future.
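If that build issue gets resolved, usage would look roughly like the sketch below (assuming an OpenCV build with the cudacodec module and NVENC support; the exact createVideoWriter signature differs between OpenCV versions, so treat this as an outline):
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudacodec.hpp>

// Sketch: encode GpuMat frames directly, without downloading them to the CPU.
cv::Ptr<cv::cudacodec::VideoWriter> gpuWriter =
    cv::cudacodec::createVideoWriter("output.avi", cv::Size(width, height), 25.0);

// 'resized' is the GpuMat produced by cv::cuda::resize in the question.
gpuWriter->write(resized); // the frame stays in GPU memory through encoding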

DirectX 11, exception thrown when updating constant buffer with UpdateSubresource

So I am very new to DirectX and am trying to learn the basics, but I'm running into a problem with my constant buffer. I'm trying to send a struct with three matrices to the vertex shader, but when I try to update the buffer with UpdateSubresource I get "Exception thrown at 0x710B5DF3 (d3d11.dll) in Demo.exe: 0xC0000005: Access violation reading location 0x0000003C".
My struct:
struct Matracies
{
    DirectX::XMMATRIX projection;
    DirectX::XMMATRIX world;
    DirectX::XMMATRIX view;
};
Matracies matracies;
Buffer creation:
ID3D11Buffer* ConstantBuffer = nullptr;
D3D11_BUFFER_DESC Buffer;
memset(&Buffer, 0, sizeof(Buffer));
Buffer.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
Buffer.Usage = D3D11_USAGE_DEFAULT;
Buffer.ByteWidth = sizeof(Matracies);
Buffer.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
D3D11_SUBRESOURCE_DATA data;
data.pSysMem = &matracies;
data.SysMemPitch = 0;
data.SysMemSlicePitch = 0;
Device->CreateBuffer(&Buffer, &data, &ConstantBuffer);
DeviceContext->VSSetConstantBuffers(0, 1, &ConstantBuffer);
Updating buffer:
DeviceContext->UpdateSubresource(ConstantBuffer, 0, 0, &matracies, 0, 0);
I am not sure what information is relevant to solve this so let me know if anything is missing.
Welcome to the wooly world of DirectX!
The first two steps in debugging any DirectX program are:
(1) Enable the Debug device. See this blog post. This will generate additional debug output at runtime which gives hints about problems like the one you have above.
(2) If a function returns an HRESULT, you must check that for success or failure at runtime. If it was safe to ignore the return value, it would return void. See this page.
If you had done either or both of the above, you would have caught the error returned from CreateBuffer above which resulted in ConstantBuffer still being a nullptr when you called UpdateSubresource.
The reason it failed is that you can't in general create a constant buffer that is both D3D11_USAGE_DEFAULT and D3D11_CPU_ACCESS_WRITE. DEFAULT usage memory is often in video memory that is not accessible to the CPU. Since you are using UpdateSubresource as opposed to Map, you should just use:
Buffer.CPUAccessFlags = 0;
You should take a look at DirectX Tool Kit and its associated tutorials.
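For reference, a sketch of the corrected creation path with the return value actually checked, using the same names as in the question:
ID3D11Buffer* ConstantBuffer = nullptr;

D3D11_BUFFER_DESC desc = {};
desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
desc.Usage = D3D11_USAGE_DEFAULT;   // updated via UpdateSubresource
desc.ByteWidth = sizeof(Matracies); // must be a multiple of 16 bytes (192 here, so fine)
desc.CPUAccessFlags = 0;            // no CPU access needed for DEFAULT + UpdateSubresource

D3D11_SUBRESOURCE_DATA data = {};
data.pSysMem = &matracies;

HRESULT hr = Device->CreateBuffer(&desc, &data, &ConstantBuffer);
if (FAILED(hr))
{
    // Handle the error; with the debug device enabled the output window
    // tells you exactly which field was invalid.
}

DeviceContext->VSSetConstantBuffers(0, 1, &ConstantBuffer);
DeviceContext->UpdateSubresource(ConstantBuffer, 0, nullptr, &matracies, 0, 0);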

Nokia 5110 LCD initialization issue

I am trying to connect Nokia 5110 LCD to BeagleBone Black Rev-C over SPI protocol.
The connections are exactly as shown on the page 6 of:
Nokia5110-BeagleBone Black Connections
I wrote a C equivalent of Arduino's driver code for the Philips PCD8544 (Nokia 3310), in which I export the required GPIO ports and send commands and data over the SPI interface.
I successfully installed and ran Adafruit's Python library:
Adafruit Nokia LCD
My problem is a strange one: when I run the Python code first and then my C code, my code works perfectly!
But if I run my C code before the Python code, I get no output. Logic says that the Python code must be initializing something that I am missing in my code.
Here's how I initialize the LCD:
fd_spi_dev = open(device, O_RDWR);

// set mode
mode = SPI_MODE_0;
ioctl(fd_spi_dev, SPI_IOC_WR_MODE, &mode);
ioctl(fd_spi_dev, SPI_IOC_RD_MODE, &mode);

// set max bitrate
speed = 4000000;
ioctl(fd_spi_dev, SPI_IOC_RD_MAX_SPEED_HZ, &speed);
ioctl(fd_spi_dev, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

// set MSB first
lsbsetting = 0;
ioctl(fd_spi_dev, SPI_IOC_WR_LSB_FIRST, &lsbsetting);

// set bits per word
bits = 8;
ioctl(fd_spi_dev, SPI_IOC_WR_BITS_PER_WORD, &bits);
ioctl(fd_spi_dev, SPI_IOC_RD_BITS_PER_WORD, &bits);

lcd_write_cmd(0x21); // LCD extended commands
lcd_write_cmd(0xB8); // set LCD Vop (contrast)
lcd_write_cmd(0x04); // set temp coefficient
lcd_write_cmd(0x14); // set bias mode 1:40
lcd_write_cmd(0x20); // LCD basic commands
lcd_write_cmd(0x09); // LCD all segments on
/* I am expecting to see all segments lit here */
sleep(5);
lcd_write_cmd(0x0C); // LCD normal video

void lcd_write_cmd(uint8_t cmd) {
    uint8_t *tx = &cmd;
    uint8_t rx;
    uint32_t len = 1;
    struct spi_ioc_transfer tr = {
        .tx_buf = (unsigned long)tx,
        .rx_buf = (unsigned long)&rx,
        .len = len,
        .delay_usecs = delay,
        .speed_hz = speed,
        .bits_per_word = bits,
        .cs_change = 1,
    };
    size = write(fd_dc_val, "0", 1); // D/C low: command byte
    size = write(fd_cs_val, "0", 1); // assert chip select
    ioctl(fd_spi_dev, SPI_IOC_MESSAGE(1), &tr);
    write(fd_cs_val, "1", 1);        // release chip select
}
I am a novice in embedded programming. I would greatly appreciate any help. Thank you.
If you're not missing an initialization step (and I haven't checked you against the 5110 datasheet), it must be either something wrong with your ioctls or a timing issue.
You could try using a library that abstracts away the ioctl calls to rule that out (I'm partial to my own: https://github.com/graycatlabs/serbus ;).
If it still doesn't work with that, then I'd say it's probably a timing issue - Python is a lot slower than C when it comes to file I/O, so your C code might not be giving the LCD driver enough time to settle after some of the commands. Check the datasheet to see whether it requires a delay after any of the commands.
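As a quick way to test that timing theory, you could wrap the command writes with a short delay (a sketch only; the 10 ms value is a guess to experiment with, not a figure from the PCD8544 datasheet):
#include <unistd.h>

// Sketch: same init sequence, but give the controller time after each command.
static void lcd_write_cmd_delayed(uint8_t cmd) {
    lcd_write_cmd(cmd); // the existing function from the question
    usleep(10000);      // 10 ms guess; tune against the datasheet timings
}
If the display comes up with the delays in place, you can shrink them until you find the real minimum.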

OpenCV VideoCapture reading issue

This will probably be a dumb question, but I really can't figure it out.
First of all: sorry for the vague title, I'm not really sure how to describe my problem in a couple of words.
I'm using OpenCV 2.4.3 in MS Visual Studio, C++. I'm using the VideoCapture interface for capturing frames from my laptop webcam.
What my program should do is:
Loop over different poses of the user; for each pose:
wait until the user is in position (a getchar() waits for an input that says "I'm in position" by simply hitting Enter)
read the current frame
extract a region of interest from that frame
save the image in the ROI and then label it
Here is the code:
int main() {
    Mat img, face_img, img_start;
    Rect *face;
    VideoCapture cam(0);
    ofstream fout("dataset/dataset.txt");
    if (!fout) {
        cout << "Cannot open dataset file! Aborting" << endl;
        return 1;
    }
    int count = 0; // Number of the (last + 1) image in the dataset
    // Orientations are: 0°, +/- 30°, +/- 60°, +/- 90°
    // Distances are just two, for now
    // So it is 7x2 images
    cam.read(img_start);
    IplImage image = img_start;
    face = face_detector(image);
    if (!face) {
        cout << "No face detected..? Aborting." << endl;
        return 2;
    }
    // Double ROI dimensions
    face->x = face->x - face->width / 2;
    face->y = face->y - face->height / 2;
    face->width *= 2;
    face->height *= 2;
    for (unsigned i = 0; i < 14; ++i) {
        // Wait for the user to get in position
        getchar();
        // Get the face ROI
        cam.read(img);
        face_img = Mat(img, *face);
        // Save it
        stringstream sstm;
        string fname;
        sstm << "dataset/image" << (count + i) << ".jpeg";
        fname = sstm.str();
        imwrite(fname, face_img);
        // do some other things..
What I expect from it:
I stand in front of the camera when the program starts and it gets the ROI rectangle using the face_detector() function
when I'm ready, say in pose0, I hit Enter and a picture is taken
from that picture a subimage is extracted and it is saved as image0.jpeg
loop this 7 times
What it does:
I stand in front of the camera when the program starts, nothing special here
I hit Enter
the ROI is extracted not from the picture taken at that moment, but from the first one
At first I used img in every cam.read(), then I changed the first one to cam.read(img_start), but that didn't help.
The second iteration of my code saves the image that should have been saved in the 1st, the 3rd iteration the one that should have been saved in the 2nd, and so on.
I'm probably missing something important about VideoCapture, but I really can't figure it out, so here I am.
Thanks for any help, I really appreciate it.
The problem with your implementation is that the camera is not running freely and capturing images in real time. When you start up the camera, the videocapture buffer is filled up while waiting for you to read in the frames. Once the buffer is full, it doesn't drop old frames for new ones until you read and free up space in it.
The solution would be to have a separate capture thread, in addition to your "process" thread. The capture thread keeps reading in frames from the buffer whenever a new frame comes in and stores it in a "recent frame" image object. When the process thread needs the most recent frame (i.e. when you hit Enter), it locks a mutex for thread safety, copies the most recent frame into another object and frees the mutex so that the capture thread continues reading in new frames.
#include <iostream>
#include <thread>
#include <mutex>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace std;
using namespace cv;

mutex mtx; // protects the shared "most recent frame"

void camCapture(VideoCapture cap, Mat* frame, bool* Capture) {
    while (*Capture == true) {
        Mat tmp;
        cap >> tmp;        // keep draining the camera buffer
        mtx.lock();
        *frame = tmp;      // store the most recent frame
        mtx.unlock();
    }
    cout << "camCapture finished\n";
    return;
}

int main() {
    VideoCapture cap(0);   // open the default camera
    if (!cap.isOpened())   // check if we succeeded
        return -1;
    Mat* frame = new Mat;
    Mat current_frame;
    bool* Capture = new bool;
    *Capture = true;
    // your capture thread has started
    thread captureThread(camCapture, cap, frame, Capture);
    // whenever you need the most recent frame (e.g. after getchar()):
    mtx.lock();
    current_frame = frame->clone();
    mtx.unlock();
    if (!current_frame.empty()) {
        imshow("current frame", current_frame);
        waitKey(0);
    }
    // terminate the thread
    mtx.lock();
    *Capture = false;
    mtx.unlock();
    captureThread.join();
    return 0;
}
This is the code that I wrote from the above advice. I hope someone can get help from this.
When you are capturing images continuously, no captured frame sits in the OpenCV buffer, so there is no lag in the stream.
If you capture an image only now and then, with some time gap in between, the captured frames are first stored in the OpenCV buffer and then retrieved from it.
When the buffer is full and you call captureObject >> matObject, an old frame from the buffer is returned, not the current frame from the capture card/webcam.
That is why you see a lag in your code. The issue can be resolved by timing each grab against the webcam's frames-per-second (fps) value: reading a frame from the buffer takes much less time than capturing a fresh one, so measure how long the grab takes. If it is shorter than one frame period, assume the frame was read from the buffer; otherwise it was captured from the webcam.
Sample code for capturing a recent frame from the webcam:
#include <opencv2/opencv.hpp>
#include <time.h>

using namespace std;
using namespace cv;

int main()
{
    struct timespec start, end;
    VideoCapture cap(-1); // first available webcam
    Mat screenshot;
    double diff = 1000;
    // One frame period in seconds: if a grab takes longer than this,
    // the frame was freshly captured rather than read from the buffer.
    double fps = (double)cap.get(CV_CAP_PROP_FPS);
    double framePeriod = (fps > 0) ? 1.0 / fps : 1.0 / 30.0; // fall back to 30 fps if unknown
    while (true)
    {
        clock_gettime(CLOCK_MONOTONIC, &start);
        cap.grab(); // can also use cap >> screenshot;
        clock_gettime(CLOCK_MONOTONIC, &end);
        diff = (end.tv_sec - start.tv_sec) * 1e9;
        diff = (diff + (end.tv_nsec - start.tv_nsec)) * 1e-9;
        std::cout << "\n diff time " << diff << '\n';
        if (diff > framePeriod)
        {
            break;
        }
    }
    cap >> screenshot; // gets a recent frame, can also use cap.retrieve(screenshot);
    // process(screenshot)
    cap.release();
    screenshot.release();
    return 0;
}

View GPU Memory / View Texture2D memory space for debugging

I've got a question about a pixel shader I am trying to implement. This is what I currently do (just for debugging and trying to figure things out):
int3 loc;
loc.x = (int)(In.TextureUV.x * resolution_XY.x);
loc.y = (int)(In.TextureUV.x * resolution_XY.x);
loc.z = 0;
float4 r = g_txDiffuse.Load(loc);
return float4(r.x, r.y, r.z, 1);
The point is, this is always 0,0,0,1
The texture buffer is created:
D3D11_TEXTURE2D_DESC tDesc;
tDesc.Height = 480;
tDesc.Width = 640;
tDesc.Usage = D3D11_USAGE_DYNAMIC;
tDesc.MipLevels = 1;
tDesc.ArraySize = 1;
tDesc.SampleDesc.Count = 1;
tDesc.SampleDesc.Quality = 0;
tDesc.Format = DXGI_FORMAT_R8_UINT;
tDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
tDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
tDesc.MiscFlags = 0;
V_RETURN(pd3dDevice->CreateTexture2D(&tDesc, NULL, &g_pCurrentImage));
I upload the texture (which should be a live display at the end) via:
D3D11_MAPPED_SUBRESOURCE resource;
pd3dImmediateContext->Map(g_pCurrentImage, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
memcpy( resource.pData, g_Images.GetData(), g_Images.GetDataSize() );
pd3dImmediateContext->Unmap( g_pCurrentImage, 0 );
I've checked the resource.pData, the data in there is a valid 8bit monochrome image. I made sure the data coming from the camera is 8bit monochrome 640x480.
There are a few things I don't fully understand:
if I run the Map / memcpy / Unmap routine every frame, the driver ultimately crashes and the system becomes unresponsive. Is there a different way to update a complete texture every frame that should be used instead?
the texture I uploaded is 8-bit, so why does Texture2D.Load() return a float4? Do I have to use a different method to access the texture data? I tried to Sample it, but that didn't work either. Would I have to use an int buffer or something instead?
is there a way to debug the GPU memory, to check whether the memcpy worked in the first place?
The Map, memcpy, Unmap really ought not to crash unless you are trying to copy too much data into the texture. It would be interesting to know what "GetDataSize()" returns. Does it equal 307,200? If it's more than that, then there lies your problem.
Texture2D returns a float4 because that's what you've asked for; you could write float r = g_txDiffuse.Load( ... ) instead. The 8 bits get extended to a normalised float as part of the load process. Are you sure, by the way, that your calculation of "loc" is correct? As you have it now, loc.x and loc.y will always be the same.
You can debug what's going on with DirectX using PIX. It's a great tool and I highly recommend you familiarise yourself with it.
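On the upload side, one thing worth checking (a general D3D11 pattern rather than something specific to your camera code) is that the copy honours the RowPitch returned by Map instead of assuming the rows are tightly packed; a sketch using the names from the question:
// Sketch: copy one row at a time so the destination pitch is respected.
D3D11_MAPPED_SUBRESOURCE resource;
if (SUCCEEDED(pd3dImmediateContext->Map(g_pCurrentImage, 0,
                                        D3D11_MAP_WRITE_DISCARD, 0, &resource)))
{
    const BYTE* src = (const BYTE*)g_Images.GetData();
    BYTE* dst = (BYTE*)resource.pData;
    const UINT rowBytes = 640; // one byte per pixel for DXGI_FORMAT_R8_UINT
    for (UINT y = 0; y < 480; ++y)
    {
        memcpy(dst + y * resource.RowPitch, src + y * rowBytes, rowBytes);
    }
    pd3dImmediateContext->Unmap(g_pCurrentImage, 0);
}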
