I currently have textures loading via CreateWICTextureFromFile; however, I'd like a little more control over the process, and I'd like to store images in byte form in a resource loader. Below are two sets of test code that return two different results, and I'm looking for any insight into a possible solution.
ID3D11ShaderResourceView* srv;
std::basic_ifstream<unsigned char> file("image.png", std::ios::binary);
file.seekg(0,std::ios::end);
int length = file.tellg();
file.seekg(0,std::ios::beg);
unsigned char* buffer = new unsigned char[length];
file.read(&buffer[0],length);
file.close();
HRESULT hr;
hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), &buffer[0], sizeof(buffer), nullptr, &srv, NULL);
The above code fails with "Component not found".
std::ifstream file;
ID3D11ShaderResourceView* srv;
file.open("../Assets/Textures/osg.png", std::ios::binary);
file.seekg(0,std::ios::end);
int length = file.tellg();
file.seekg(0,std::ios::beg);
std::vector<char> buffer(length);
file.read(&buffer[0],length);
file.close();
HRESULT hr;
hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), (const uint8_t*)&buffer[0], sizeof(buffer), nullptr, &srv, NULL);
The above code instead complains that the image format is unknown.
I'm clearly doing something wrong here; any help is greatly appreciated. I've tried finding anything even similar on Stack Overflow and Google, to no avail.
Hopefully someone trying to do the same thing will find this solution.
Below is the code I used to solve this problem.
std::basic_ifstream<unsigned char> file("image.png", std::ios::binary);
if (file.is_open())
{
file.seekg(0,std::ios::end);
int length = file.tellg();
file.seekg(0,std::ios::beg);
unsigned char* buffer = new unsigned char[length];
file.read(&buffer[0],length);
file.close();
HRESULT hr;
hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), &buffer[0], (size_t)length, nullptr, &srv, NULL);
}
The important change is passing (size_t)length to CreateWICTextureFromMemory. In both failing attempts, sizeof(buffer) evaluated to the size of the pointer (or of the std::vector object itself), not the number of bytes read, so WIC was only handed a few bytes of the image. It was indeed a stupid error.
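For completeness, here is a slightly tidier variant of the same idea. This is only a sketch, assuming the same DirectXTK CreateWICTextureFromMemory overload used above; a std::vector keeps the byte count and the buffer from drifting apart, and nothing leaks on the error paths.
#include <cstdint>
#include <fstream>
#include <vector>
#include <WICTextureLoader.h>

HRESULT LoadTextureFromFile(ID3D11Device* device, ID3D11DeviceContext* context,
                            const wchar_t* path, ID3D11ShaderResourceView** srv)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        return E_FAIL;

    const std::streamsize length = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<uint8_t> buffer(static_cast<size_t>(length));
    if (!file.read(reinterpret_cast<char*>(buffer.data()), length))
        return E_FAIL;

    // buffer.size() is the real byte count, so the sizeof mistake can't recur.
    return DirectX::CreateWICTextureFromMemory(device, context,
                                               buffer.data(), buffer.size(),
                                               nullptr, srv);
}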
The mmap man pages indicate that closing a file does not unmap its pages. Still, I wonder whether the following sequence is valid, given that by the time the read occurs the pages have likely not yet been faulted into memory. In other words, is the file effectively still open after the close()? And is this behavior expected on both Android and iOS?
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

void func()
{
    auto fd = open("test.txt", O_RDONLY);
    void *ptr = mmap(nullptr, 16384, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  // fd is closed before any page has been touched
    uint8_t *p = (uint8_t *)ptr;
    // Read from *p
    munmap(ptr, 16384);  // the mapping, not the fd, is what holds the pages
}
I am aware that there are already a few questions asking this or something similar, and I dived into a few of them, but without any success.
I am trying to capture a "screenshot" of my display using the Desktop Duplication API and process its pixel data. Later I would like to do that at least 30 times per second, but that's a separate matter.
For now, I tried Microsoft's sample: https://github.com/microsoftarchive/msdn-code-gallery-microsoft/tree/master/Official%20Windows%20Platform%20Sample/DXGI%20desktop%20duplication%20sample
I successfully saved a picture of the screen and accessed the pixel data with this code:
DirectX::ScratchImage image;
hr = DirectX::CaptureTexture(m_Device, m_DeviceContext, m_AcquiredDesktopImage, image);
hr = DirectX::SaveToDDSFile(image.GetImages(), image.GetImageCount(), image.GetMetadata(), DirectX::DDS_FLAGS_NONE, L"test.dds");
uint8_t* pixels;
pixels = image.GetPixels();
Now I wanted to break the sample code down to the basics I need. As I am not familiar with DirectX, I am having a hard time doing that.
I came up with the following code, which runs without error but produces an empty picture. I check hr in debug mode; I am aware that this is bad practice and dirty!
int main()
{
HRESULT hr = S_OK;
ID3D11Device* m_Device;
ID3D11DeviceContext* m_DeviceContext;
// Driver types supported
D3D_DRIVER_TYPE DriverTypes[] =
{
D3D_DRIVER_TYPE_HARDWARE,
D3D_DRIVER_TYPE_WARP,
D3D_DRIVER_TYPE_REFERENCE,
};
UINT NumDriverTypes = ARRAYSIZE(DriverTypes);
// Feature levels supported
D3D_FEATURE_LEVEL FeatureLevels[] =
{
D3D_FEATURE_LEVEL_11_0,
D3D_FEATURE_LEVEL_10_1,
D3D_FEATURE_LEVEL_10_0,
D3D_FEATURE_LEVEL_9_1
};
UINT NumFeatureLevels = ARRAYSIZE(FeatureLevels);
D3D_FEATURE_LEVEL FeatureLevel;
// Create device
for (UINT DriverTypeIndex = 0; DriverTypeIndex < NumDriverTypes; ++DriverTypeIndex)
{
hr = D3D11CreateDevice(nullptr, DriverTypes[DriverTypeIndex], nullptr, 0, FeatureLevels, NumFeatureLevels,
D3D11_SDK_VERSION, &m_Device, &FeatureLevel, &m_DeviceContext);
if (SUCCEEDED(hr))
{
// Device creation success, no need to loop anymore
break;
}
}
IDXGIOutputDuplication* m_DeskDupl;
IDXGIOutput1* DxgiOutput1 = nullptr;
IDXGIOutput* DxgiOutput = nullptr;
IDXGIAdapter* DxgiAdapter = nullptr;
IDXGIDevice* DxgiDevice = nullptr;
UINT Output = 0;
hr = m_Device->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&DxgiDevice));
hr = DxgiDevice->GetParent(__uuidof(IDXGIAdapter), reinterpret_cast<void**>(&DxgiAdapter));
DxgiDevice->Release();
DxgiDevice = nullptr;
hr = DxgiAdapter->EnumOutputs(Output, &DxgiOutput);
DxgiAdapter->Release();
DxgiAdapter = nullptr;
hr = DxgiOutput->QueryInterface(__uuidof(DxgiOutput1), reinterpret_cast<void**>(&DxgiOutput1));
DxgiOutput->Release();
DxgiOutput = nullptr;
hr = DxgiOutput1->DuplicateOutput(m_Device, &m_DeskDupl);
IDXGIResource* DesktopResource = nullptr;
DXGI_OUTDUPL_FRAME_INFO FrameInfo;
hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &DesktopResource);
ID3D11Texture2D* m_AcquiredDesktopImage;
hr = DesktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&m_AcquiredDesktopImage));
DesktopResource->Release();
DesktopResource = nullptr;
DirectX::ScratchImage image;
hr = DirectX::CaptureTexture(m_Device, m_DeviceContext, m_AcquiredDesktopImage, image);
hr = DirectX::SaveToDDSFile(image.GetImages(), image.GetImageCount(), image.GetMetadata(), DirectX::DDS_FLAGS_NONE, L"test.dds");
uint8_t* pixels;
pixels = image.GetPixels();
hr = m_DeskDupl->ReleaseFrame();
}
Could anyone give me a hint as to what is wrong with this code?
EDIT:
I just found the code snippet below and integrated it into my code.
Now it works!
Lessons learnt:
- actually output/process hr!
- AcquireNextFrame might not work on the first try (?)
I might update this post again with better code and a functioning loop.
int lTryCount = 4;
do
{
Sleep(100);
hr = m_DeskDupl->AcquireNextFrame(250, &FrameInfo, &DesktopResource);
if (SUCCEEDED(hr))
break;
if (hr == DXGI_ERROR_WAIT_TIMEOUT)
{
continue;
}
else if (FAILED(hr))
break;
} while (--lTryCount > 0);
AcquireNextFrame is allowed to return a null resource (texture) because it returns on either a change in the desktop image or a change related to the pointer:
AcquireNextFrame acquires a new desktop frame when the operating system either updates the desktop bitmap image or changes the shape or position of a hardware pointer.
When you start frame acquisition you apparently get the first desktop image soon, but you can also receive a few pointer-only notifications before the image arrives.
You should not limit yourself to 4 attempts, and you don't need to sleep within the loop; just keep polling for the image. To avoid an endless loop, it makes more sense to track the total time spent in the loop and cap it at, for example, one second.
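In code form, that time-bounded polling might look roughly like this. It is a sketch reusing the variable names from the snippets above; the one-second budget and the 100 ms timeout are arbitrary choices, and a frame whose FrameInfo.LastPresentTime is zero carries only a pointer update, not a new desktop image.
const ULONGLONG deadline = GetTickCount64() + 1000;  // ~1 second budget
for (;;)
{
    hr = m_DeskDupl->AcquireNextFrame(100, &FrameInfo, &DesktopResource);
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
    {
        if (GetTickCount64() >= deadline)
            break;                              // give up: no image within budget
        continue;                               // nothing yet, poll again
    }
    if (FAILED(hr))
        break;                                  // real failure
    if (FrameInfo.LastPresentTime.QuadPart != 0)
        break;                                  // an actual desktop image arrived
    // Pointer-only notification: release it and keep waiting.
    DesktopResource->Release();
    DesktopResource = nullptr;
    m_DeskDupl->ReleaseFrame();
    if (GetTickCount64() >= deadline)
        break;
}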
See also:
AcquireNextFrame() never grabs an updated image, always blank
I'm new to OpenCV and C++; I'm forced to learn C++ to use OpenCV with my Delphi app. So I'm exporting this function from a DLL to convert a pointer to a Mat back to bytes after image processing. This is the function I'm using:
DllExport unsigned char* MatToBytes(cv::Mat *src, int &outLen)
{
cv::Mat &matCvrt = *src;
std::vector<unsigned char> *poutVet = new std::vector<unsigned char>();
std::vector<unsigned char> &outVet = *poutVet;
imencode(".png", matCvrt, outVet);
outLen = outVet.size();
unsigned char* outBytes;
outBytes = new unsigned char[outVet.size()];
std::copy(outVet.begin(), outVet.end(), outBytes);
vector<unsigned char>().swap(outVet);
return outBytes;
}
I've already researched this for a whole day now but couldn't find an answer. If I remove the line
vector<unsigned char>().swap(outVet);
it works fine but leaks memory. If I put it in, I get "Debug Assertion Failed!". Hope someone can help me out, thanks a lot.
Do not use pointers to std::vector. This class already manages its own memory allocation and deallocation:
DllExport unsigned char* MatToBytes(cv::Mat *src, int &outLen)
{
std::vector<unsigned char> outVet;
imencode(".png", *src, outVet);
outLen = outVet.size();
unsigned char* outBytes;
outBytes = new unsigned char[outVet.size()];
std::copy(outVet.begin(), outVet.end(), outBytes);
return outBytes;
}
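One follow-up on that design: the returned array comes from the DLL's own heap, so the Delphi side should hand it back rather than free it itself. A hypothetical companion export (the name FreeBytes is mine) keeps allocation and deallocation in the same CRT:
DllExport void FreeBytes(unsigned char* bytes)
{
    // delete[] in the same module/CRT that called new[] in MatToBytes.
    delete[] bytes;
}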
I fixed it myself. I tried it on Visual Studio 2010 and it's working OK; I had been using the VC11 libraries of OpenCV, so now I'll just have to build OpenCV for Visual Studio 2012. Mixing compiler/CRT versions like that means memory allocated by one runtime can be freed by another, which is exactly the sort of thing that triggers that debug assertion.
I want to save (pipe/copy) a BIO into a char array.
When I know the size it works, but otherwise it doesn't.
For example, I can store the content of my char* in a BIO like this:
const unsigned char* data = ...
myBio = BIO_new_mem_buf((void*)data, strlen(data));
But when I try to use SMIME_write_CMS, which takes a BIO (the one I created before) for its output, it doesn't work.
const int SIZE = 50000;
unsigned char *temp = malloc(SIZE);
memset(temp, 0, SIZE);
out = BIO_new_mem_buf((void*)temp, SIZE);
if (!out) {
NSLog(#"Couldn't create new file!");
assert(false);
}
int finished = SMIME_write_CMS(out, cms, in, flags);
if (!finished) {
NSLog(#"SMIME write CMS didn't succeed!");
assert(false);
}
printf("cms encrypted: %s\n", temp);
NSLog(#"All succeeded!");
The OpenSSL reference uses direct file output with a BIO.
That works, but I can't use BIO_new_file() in Objective-C... :-/
out = BIO_new_file("smencr.txt", "w");
if (!out)
goto err;
/* Write out S/MIME message */
if (!SMIME_write_CMS(out, cms, in, flags))
goto err;
Do you guys have any suggestions?
I would suggest trying SIZE-1; that way you are guaranteed that the buffer is NUL-terminated. Otherwise, it is possible that it is just overrunning the buffer.
out = BIO_new_mem_buf((void*)temp, SIZE-1);
Let me know if that helps.
Edit:
A BIO created with BIO_new_mem_buf() is a read-only buffer, so you cannot write to it. If you want to write to memory, use:
BIO *bio = BIO_new(BIO_s_mem());
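From there, here is a rough sketch of the full round trip, assuming cms, in, and flags are set up as in the question. BIO_get_mem_data hands back a pointer into the BIO's own storage plus its length, so no size needs to be guessed up front:
BIO *out = BIO_new(BIO_s_mem());             // writable memory BIO
if (!out || !SMIME_write_CMS(out, cms, in, flags))
    assert(false);                           // handle errors as in the question

char *data = NULL;
long len = BIO_get_mem_data(out, &data);     // borrowed pointer + length
unsigned char *copy = (unsigned char *)malloc(len + 1);
memcpy(copy, data, len);                     // copy out before freeing the BIO
copy[len] = '\0';                            // so it prints like temp above
BIO_free(out);                               // data is dangling past this point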
I'm using cvSetData to get the RGB frame into an image I can use with OpenCV.
I modified the SkeletalViewer slightly to produce the RGB stream.
void CSkeletalViewerApp::Nui_GotVideoAlert( )
{
const NUI_IMAGE_FRAME * pImageFrame = NULL;
IplImage* kinectColorImage = cvCreateImage(cvSize(640,480),IPL_DEPTH_8U, 4);
HRESULT hr = NuiImageStreamGetNextFrame(
m_pVideoStreamHandle,
0,
&pImageFrame );
if( FAILED( hr ) )
{
return;
}
NuiImageBuffer * pTexture = pImageFrame->pFrameTexture;
KINECT_LOCKED_RECT LockedRect;
pTexture->LockRect( 0, &LockedRect, NULL, 0 );
if( LockedRect.Pitch != 0 )
{
BYTE * pBuffer = (BYTE*) LockedRect.pBits;
m_DrawVideo.DrawFrame( (BYTE*) pBuffer );
cvSetData(kinectColorImage, (BYTE*) pBuffer,kinectColorImage->widthStep);
cvShowImage("Color Image", kinectColorImage);
//cvReleaseImage( &kinectColorImage );
cvWaitKey(10);
}
else
{
OutputDebugString( L"Buffer length of received texture is bogus\r\n" );
}
cvReleaseImage(&kinectColorImage);
NuiImageStreamReleaseFrame( m_pVideoStreamHandle, pImageFrame );
}
With the cvReleaseImage call, I would get a cvException error (not exactly sure which one, as it didn't say). Without cvReleaseImage, the RGB video runs in an OpenCV window but eventually crashes because it runs out of memory.
How should I release the image properly?
Just solved this problem.
After a bunch of sleuthing with breakpoints and the debugger, it appears the problem has to do with the pointers used in cvSetData. My best guess is that Nui_GotVideoAlert() updates the address pointed to by pBuffer before cvReleaseImage is called; in addition, cvSetData never appears to copy the bytes from that address.
What happens then is that cvReleaseImage is called on an address that no longer exists.
I fixed this by declaring kinectColorImage at the top of NuiImpl.cpp, calling cvSetData in Nui_GotVideoAlert(), and only calling cvReleaseImage in the Nui_Uninit() method. This way kinectColorImage is simply updated, instead of a new IplImage being created on every call to Nui_GotVideoAlert().
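In code form, that arrangement looks roughly like this. It is a sketch following the description above, not a verified patch; the names mirror the SkeletalViewer snippet, and the frame-acquisition details are elided:
// At the top of NuiImpl.cpp: one image, created once instead of per frame.
static IplImage* kinectColorImage = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 4);

void CSkeletalViewerApp::Nui_GotVideoAlert()
{
    // ... acquire pBuffer from the locked Kinect frame as before ...
    cvSetData(kinectColorImage, (BYTE*)pBuffer, kinectColorImage->widthStep);
    cvShowImage("Color Image", kinectColorImage);
    cvWaitKey(10);
    // No cvReleaseImage here: the same image is reused every frame.
}

void CSkeletalViewerApp::Nui_Uninit()
{
    // ... existing shutdown code ...
    cvReleaseImage(&kinectColorImage);  // released exactly once, at teardown
}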
That's strange. As far as I know, cvReleaseImage releases both the image header and the image data. I wrote the piece of code below, and in this particular example cvReleaseImage does not free the buffer that contains the data. Here I didn't use cvSetData; I just updated the pointer to the image data. If you uncomment the commented lines and comment out the ones just below each, the program still runs, but you'll get some memory leaks. I used OpenCV 2.2 (this is the legacy interface).
#include <opencv/cv.h>
#include <opencv/highgui.h>  // cvShowImage / cvWaitKey
#include <stdlib.h>
#define NLOOPS 1000
int main(void){
int i, j;
char *buff = (char *) malloc( sizeof(char) * 3 * 640 * 480 );
for( i = 0; i < 640 * 480 * 3; i++ ) buff[i] = 128;
j = 0;
while( j++< NLOOPS ){
IplImage *im = cvCreateImage(cvSize(640,480),IPL_DEPTH_8U, 3);
//cvSetData(im, buff, im->widthStep); ---> If you use that version you'll get memory leaks. Comment line below.
im->imageData = buff;
cvWaitKey(4);
cvShowImage("kk", im);
//cvReleaseImageHeader(&im); ---> If you use that version you'll get memory leaks. Comment line below.
cvReleaseImage(&im);
free(im); // im is NULL here (cvReleaseImage nulls the pointer), so this is a no-op
}
free(buff);
return 0;
}
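As a side note, when the pixel buffer is owned by outside code (like buff here, or the Kinect buffer in the question), the usual leak-free legacy pattern is a header-only image: cvCreateImageHeader allocates no pixel storage, cvSetData points it at the external buffer, and cvReleaseImageHeader frees just the header, leaving the buffer for its real owner. A minimal sketch:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdlib.h>

int main(void)
{
    char *buff = (char *)malloc(3 * 640 * 480);  // buffer owned by us, not OpenCV
    for (int i = 0; i < 640 * 480 * 3; i++) buff[i] = 128;

    // Header only: OpenCV never allocates pixel data, so nothing inside it can leak.
    IplImage *im = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_8U, 3);
    cvSetData(im, buff, im->widthStep);

    cvShowImage("kk", im);
    cvWaitKey(0);

    cvReleaseImageHeader(&im);  // frees the header only, not buff
    free(buff);                 // the owner frees its own buffer
    return 0;
}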