Memory Issues with cvShowImage and Kinect SDK: Skeletal Viewer

I modified the SkeletalViewer sample slightly to produce the RGB stream, and I'm using cvSetData to get the RGB frame into an IplImage I can use with OpenCV.
void CSkeletalViewerApp::Nui_GotVideoAlert( )
{
    const NUI_IMAGE_FRAME * pImageFrame = NULL;
    IplImage* kinectColorImage = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 4);

    HRESULT hr = NuiImageStreamGetNextFrame(
        m_pVideoStreamHandle,
        0,
        &pImageFrame );
    if( FAILED( hr ) )
    {
        return;
    }

    NuiImageBuffer * pTexture = pImageFrame->pFrameTexture;
    KINECT_LOCKED_RECT LockedRect;
    pTexture->LockRect( 0, &LockedRect, NULL, 0 );
    if( LockedRect.Pitch != 0 )
    {
        BYTE * pBuffer = (BYTE*) LockedRect.pBits;
        m_DrawVideo.DrawFrame( (BYTE*) pBuffer );
        cvSetData(kinectColorImage, (BYTE*) pBuffer, kinectColorImage->widthStep);
        cvShowImage("Color Image", kinectColorImage);
        //cvReleaseImage( &kinectColorImage );
        cvWaitKey(10);
    }
    else
    {
        OutputDebugString( L"Buffer length of received texture is bogus\r\n" );
    }

    cvReleaseImage(&kinectColorImage);
    NuiImageStreamReleaseFrame( m_pVideoStreamHandle, pImageFrame );
}
With cvReleaseImage in place, I would get a cvException error; I'm not exactly sure which one, as it didn't specify. Without cvReleaseImage, I would get the RGB video running in an OpenCV window, but it would eventually crash because it ran out of memory.
How should I release the image properly?

Just solved this problem.
After a bunch of sleuthing using breakpoints and debugging, it appears as though the problem has to do with the pointers used in cvSetData. My best guess is that Nui_GotVideoAlert() updates the address pointed to by pBuffer before cvReleaseImage is called. In addition, cvSetData never appears to copy the bytes from this address.
What happens then is that cvReleaseImage is called on an address that is no longer valid.
I fixed this by declaring kinectColorImage at the top of NuiImpl.cpp, calling cvSetData in ::Nui_GotVideoAlert(), and only calling cvReleaseImage in the Nui_Uninit() method. This way, kinectColorImage will just update instead of creating a new IplImage in each call of Nui_GotVideoAlert().
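In code, that layout looks roughly like the minimal sketch below. The Nui_Init hook and the exact placement of the global are assumptions; only the create-once / reuse / release-once split follows the description above.

IplImage* kinectColorImage = NULL;   // declared once at the top of NuiImpl.cpp

void CSkeletalViewerApp::Nui_Init( )
{
    // ... existing initialization ...
    kinectColorImage = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 4);   // created once
}

void CSkeletalViewerApp::Nui_GotVideoAlert( )
{
    // ... lock the frame and obtain pBuffer exactly as before ...
    cvSetData(kinectColorImage, (BYTE*) pBuffer, kinectColorImage->widthStep);   // just repoint, no copy
    cvShowImage("Color Image", kinectColorImage);
    cvWaitKey(10);
    // no cvReleaseImage here
}

void CSkeletalViewerApp::Nui_Uninit( )
{
    cvReleaseImage(&kinectColorImage);   // released exactly once, at shutdown
}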

That's strange. As far as I know, cvReleaseImage releases both the image header and the image data. I ran the piece of code below, and in this particular example cvReleaseImage does not free the buffer that contains the data. Here I didn't use cvSetData; I just updated the pointer to the image data. If you uncomment the commented lines and comment out the ones just below each of them, the program still runs, but you'll get some memory leaks. I used OpenCV 2.2 (this is the legacy interface).
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdlib.h>

#define NLOOPS 1000

int main(void){
    int i, j;
    char *buff = (char *) malloc( sizeof(char) * 3 * 640 * 480 );
    for( i = 0; i < 640 * 480 * 3; i++ ) buff[i] = 128;
    j = 0;
    while( j++ < NLOOPS ){
        IplImage *im = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 3);
        //cvSetData(im, buff, im->widthStep); ---> If you use this version you'll get memory leaks. Comment the line below.
        im->imageData = buff;
        cvWaitKey(4);
        cvShowImage("kk", im);
        //cvReleaseImageHeader(&im); ---> If you use this version you'll get memory leaks. Comment the line below.
        cvReleaseImage(&im);
        free(im);
    }
    free(buff);
    return 0;
}
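For completeness, the legacy C API also lets you wrap a buffer you own with an image header only, so that releasing the image never touches your data. A minimal sketch of that pattern (my addition, not part of the snippet above):

IplImage *hdr = cvCreateImageHeader(cvSize(640,480), IPL_DEPTH_8U, 3);
cvSetData(hdr, buff, hdr->widthStep);   /* header points at buff; nothing is copied */
cvShowImage("kk", hdr);
cvWaitKey(4);
cvReleaseImageHeader(&hdr);             /* frees only the header, never buff */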

Related

Desktop Duplication API returns empty frame

I am aware that there are already a few questions asking this or similar things, and I dug into a few of them, but without any success.
I'm trying to capture a "screenshot" of my display using the Desktop Duplication API and process its pixel data. Later I would like to do that at least 30 times per second, but that's a separate concern.
For now, I tried Microsoft's sample: https://github.com/microsoftarchive/msdn-code-gallery-microsoft/tree/master/Official%20Windows%20Platform%20Sample/DXGI%20desktop%20duplication%20sample
I successfully saved a picture of the screen and accessed the pixel data with this code:
DirectX::ScratchImage image;
hr = DirectX::CaptureTexture(m_Device, m_DeviceContext, m_AcquiredDesktopImage, image);
hr = DirectX::SaveToDDSFile(image.GetImages(), image.GetImageCount(), image.GetMetadata(), DirectX::DDS_FLAGS_NONE, L"test.dds");
uint8_t* pixels;
pixels = image.GetPixels();
Now I wanted to break the example code down to the basic parts I need. As I am not familiar with DirectX, I am having a hard time doing that.
I came up with the following code, which runs without error but produces an empty picture. I only check hr in debug mode; I am aware that this is bad practice and dirty!
int main()
{
    HRESULT hr = S_OK;

    ID3D11Device* m_Device;
    ID3D11DeviceContext* m_DeviceContext;

    // Driver types supported
    D3D_DRIVER_TYPE DriverTypes[] =
    {
        D3D_DRIVER_TYPE_HARDWARE,
        D3D_DRIVER_TYPE_WARP,
        D3D_DRIVER_TYPE_REFERENCE,
    };
    UINT NumDriverTypes = ARRAYSIZE(DriverTypes);

    // Feature levels supported
    D3D_FEATURE_LEVEL FeatureLevels[] =
    {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_1
    };
    UINT NumFeatureLevels = ARRAYSIZE(FeatureLevels);
    D3D_FEATURE_LEVEL FeatureLevel;

    // Create device
    for (UINT DriverTypeIndex = 0; DriverTypeIndex < NumDriverTypes; ++DriverTypeIndex)
    {
        hr = D3D11CreateDevice(nullptr, DriverTypes[DriverTypeIndex], nullptr, 0, FeatureLevels, NumFeatureLevels,
                               D3D11_SDK_VERSION, &m_Device, &FeatureLevel, &m_DeviceContext);
        if (SUCCEEDED(hr))
        {
            // Device creation success, no need to loop anymore
            break;
        }
    }

    IDXGIOutputDuplication* m_DeskDupl;
    IDXGIOutput1* DxgiOutput1 = nullptr;
    IDXGIOutput* DxgiOutput = nullptr;
    IDXGIAdapter* DxgiAdapter = nullptr;
    IDXGIDevice* DxgiDevice = nullptr;
    UINT Output = 0;

    hr = m_Device->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&DxgiDevice));
    hr = DxgiDevice->GetParent(__uuidof(IDXGIAdapter), reinterpret_cast<void**>(&DxgiAdapter));
    DxgiDevice->Release();
    DxgiDevice = nullptr;

    hr = DxgiAdapter->EnumOutputs(Output, &DxgiOutput);
    DxgiAdapter->Release();
    DxgiAdapter = nullptr;

    hr = DxgiOutput->QueryInterface(__uuidof(DxgiOutput1), reinterpret_cast<void**>(&DxgiOutput1));
    DxgiOutput->Release();
    DxgiOutput = nullptr;

    hr = DxgiOutput1->DuplicateOutput(m_Device, &m_DeskDupl);

    IDXGIResource* DesktopResource = nullptr;
    DXGI_OUTDUPL_FRAME_INFO FrameInfo;
    hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &DesktopResource);

    ID3D11Texture2D* m_AcquiredDesktopImage;
    hr = DesktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&m_AcquiredDesktopImage));
    DesktopResource->Release();
    DesktopResource = nullptr;

    DirectX::ScratchImage image;
    hr = DirectX::CaptureTexture(m_Device, m_DeviceContext, m_AcquiredDesktopImage, image);
    hr = DirectX::SaveToDDSFile(image.GetImages(), image.GetImageCount(), image.GetMetadata(), DirectX::DDS_FLAGS_NONE, L"test.dds");
    uint8_t* pixels;
    pixels = image.GetPixels();

    hr = m_DeskDupl->ReleaseFrame();
}
Could anyone give me a hint as to what is wrong with this code?
EDIT:
Just found the code snippet below and integrated it into my code.
Now it works!
Lessons learnt:
-) actually output/process hr!
-) AcquireNextFrame might not work on the first try (?)
I might update this post again with better code, with a functioning loop.
int lTryCount = 4;
do
{
    Sleep(100);
    hr = m_DeskDupl->AcquireNextFrame(250, &FrameInfo, &DesktopResource);
    if (SUCCEEDED(hr))
        break;
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
    {
        continue;
    }
    else if (FAILED(hr))
        break;
} while (--lTryCount > 0);
AcquireNextFrame is allowed to return a null resource (texture) because it returns on either a change in the desktop image or a change related to the mouse pointer.
AcquireNextFrame acquires a new desktop frame when the operating system either updates the desktop bitmap image or changes the shape or position of a hardware pointer.
When you start frame acquisition you can apparently expect the first desktop image soon, but you may also receive a few pointer-only notifications before the image arrives.
You should not limit yourself to 4 attempts, and you don't need to sleep within the loop. Just keep polling for the image. To avoid an endless loop, it makes more sense to track the total time spent in the loop and limit it to, for example, one second.
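A minimal sketch of such a time-bounded loop (not from the original post; the one-second budget and the use of FrameInfo.LastPresentTime to detect pointer-only updates are illustrative):

IDXGIResource* DesktopResource = nullptr;
DXGI_OUTDUPL_FRAME_INFO FrameInfo;
const ULONGLONG start = GetTickCount64();
do
{
    hr = m_DeskDupl->AcquireNextFrame(100, &FrameInfo, &DesktopResource);
    if (SUCCEEDED(hr) && FrameInfo.LastPresentTime.QuadPart != 0)
        break;                                  // a real desktop image arrived
    if (SUCCEEDED(hr))
    {
        // pointer-only update: release it and keep polling
        if (DesktopResource) { DesktopResource->Release(); DesktopResource = nullptr; }
        m_DeskDupl->ReleaseFrame();
        continue;
    }
    if (hr != DXGI_ERROR_WAIT_TIMEOUT)
        break;                                  // real failure, give up
} while (GetTickCount64() - start < 1000);      // ~1 second budget instead of a retry count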
See also:
AcquireNextFrame() never grabs an updated image, always blank

Does cvQueryFrame advance the current frame?

I'm trying to learn OpenCV using an O'Reilly book and am finding that the sample programs raise as many questions as they answer. In a very basic program to show a video:
#include <highgui.h>
#include <string>

int main( int argc, char** argv){
    std::string name = "Example 2";
    cvNamedWindow(name.c_str(), CV_WINDOW_AUTOSIZE);
    CvCapture* capture = cvCreateFileCapture( argv[1] );
    IplImage* frame;
    while(1) {
        frame = cvQueryFrame( capture );
        if ( !frame ) break;
        cvShowImage(name.c_str(), frame);
        char c = cvWaitKey(33);
        if (c == 27) break; // User hits ESC key - ASCII value 27
    }
    cvReleaseCapture( &capture );
    cvDestroyWindow(name.c_str());
}
I find myself wondering why I ever see more than a single frame. Everything I read online about cvQueryFrame says it retrieves the current frame. Nowhere have I seen anything about how/when/where the "current" frame advances.
Does cvQueryFrame act more like reading from a file or stream, in that it reads data and then prepares to read the next piece of data, or does the current frame advance in some other way?

Changing DCT coefficients

I decided to use libjpeg as the main library for working with JPEG files.
I've read the libjpeg.txt file, and I was pleased that the library allows reading and writing DCT coefficients in a convenient way, since writing my own decoder would take a long time.
My work is related to lossless embedding. Currently I need to read the DCT coefficients from a file, modify some of them, and write the changed coefficients back to the same file.
Well, I found the jpeg_write_coefficients() function, and I naively thought that I could apply it to a decompression object (struct jpeg_decompress_struct). But it does not work; it requires a compression object.
I can't believe that such a powerful library is not able to do this.
Most likely I'm missing something, although I tried to be attentive.
Perhaps writing the coefficients can be done in a more sophisticated way,
but I don't know how.
I would be very glad if you could propose your ideas.
You can use jpeg_write_coefficients to write your changed DCT coefficients.
The following information is available in libjpeg.txt:
To write the contents of a JPEG file as DCT coefficients, you must provide
the DCT coefficients stored in virtual block arrays. You can either pass
block arrays read from an input JPEG file by jpeg_read_coefficients(), or
allocate virtual arrays from the JPEG compression object and fill them
yourself. In either case, jpeg_write_coefficients() is substituted for
jpeg_start_compress() and jpeg_write_scanlines(). Thus the sequence is
* Create compression object
* Set all compression parameters as necessary
* Request virtual arrays if needed
* jpeg_write_coefficients()
* jpeg_finish_compress()
* Destroy or re-use compression object
jpeg_write_coefficients() is passed a pointer to an array of virtual block
array descriptors; the number of arrays is equal to cinfo.num_components.
The virtual arrays need only have been requested, not realized, before
jpeg_write_coefficients() is called. A side-effect of
jpeg_write_coefficients() is to realize any virtual arrays that have been
requested from the compression object's memory manager. Thus, when obtaining
the virtual arrays from the compression object, you should fill the arrays
after calling jpeg_write_coefficients(). The data is actually written out
when you call jpeg_finish_compress(); jpeg_write_coefficients() only writes
the file header.
When writing raw DCT coefficients, it is crucial that the JPEG quantization
tables and sampling factors match the way the data was encoded, or the
resulting file will be invalid. For transcoding from an existing JPEG file,
we recommend using jpeg_copy_critical_parameters(). This routine initializes
all the compression parameters to default values (like jpeg_set_defaults()),
then copies the critical information from a source decompression object.
The decompression object should have just been used to read the entire
JPEG input file --- that is, it should be awaiting jpeg_finish_decompress().
jpeg_write_coefficients() marks all tables stored in the compression object
as needing to be written to the output file (thus, it acts like
jpeg_start_compress(cinfo, TRUE)). This is for safety's sake, to avoid
emitting abbreviated JPEG files by accident. If you really want to emit an
abbreviated JPEG file, call jpeg_suppress_tables(), or set the tables'
individual sent_table flags, between calling jpeg_write_coefficients() and
jpeg_finish_compress().
So to change a single DCT coefficient, you can use the following simple code.
To access any DCT coefficient, you need to change four indices: ci, bx, by, and bi.
In my code, I used blockptr_one[bi]++; to increment one DCT coefficient.
#include <stdio.h>
#include <jpeglib.h>
#include <stdlib.h>
#include <iostream>
#include <string>

int write_jpeg_file( std::string outname, jpeg_decompress_struct in_cinfo, jvirt_barray_ptr *coeffs_array ){
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE * infile;

    if ((infile = fopen(outname.c_str(), "wb")) == NULL) {
        fprintf(stderr, "can't open %s\n", outname.c_str());
        return 0;
    }

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, infile);

    j_compress_ptr cinfo_ptr = &cinfo;
    jpeg_copy_critical_parameters((j_decompress_ptr)&in_cinfo, cinfo_ptr);
    jpeg_write_coefficients(cinfo_ptr, coeffs_array);

    jpeg_finish_compress( &cinfo );
    jpeg_destroy_compress( &cinfo );
    fclose( infile );
    return 1;
}

int read_jpeg_file( std::string filename, std::string outname )
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE * infile;

    if ((infile = fopen(filename.c_str(), "rb")) == NULL) {
        fprintf(stderr, "can't open %s\n", filename.c_str());
        return 0;
    }

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    (void) jpeg_read_header(&cinfo, TRUE);

    jvirt_barray_ptr *coeffs_array = jpeg_read_coefficients(&cinfo);

    // change one dct:
    int ci = 0;    // between 0 and the number of image components
    int by = 0;    // between 0 and compptr_one->height_in_blocks
    int bx = 0;    // between 0 and compptr_one->width_in_blocks
    int bi = 0;    // between 0 and 64 (8x8)

    JBLOCKARRAY buffer_one;
    JCOEFPTR blockptr_one;
    jpeg_component_info* compptr_one;
    compptr_one = cinfo.comp_info + ci;
    buffer_one = (cinfo.mem->access_virt_barray)((j_common_ptr)&cinfo, coeffs_array[ci], by, (JDIMENSION)1, FALSE);
    blockptr_one = buffer_one[0][bx];
    blockptr_one[bi]++;

    write_jpeg_file(outname, cinfo, coeffs_array);

    jpeg_finish_decompress( &cinfo );
    jpeg_destroy_decompress( &cinfo );
    fclose( infile );
    return 1;
}

int main()
{
    std::string infilename = "you_image.jpg", outfilename = "out_image.jpg";

    /* Try opening a jpeg */
    if( read_jpeg_file( infilename, outfilename ) > 0 )
    {
        std::cout << "It's Okay..." << std::endl;
    }
    else return -1;
    return 0;
}
You should really take a look at transupp.h and the sources for jpegtran that come with the library.
Anyway, here is my dirty code with comments, assembled partially from jpegtran. It lets you manipulate DCT coefficients one by one.
#include "jpeglib.h" /* Common decls for cjpeg/djpeg applications */
#include "transupp.h" /* Support routines for jpegtran */
struct jpeg_decompress_struct srcinfo;
struct jpeg_compress_struct dstinfo;
struct jpeg_error_mgr jsrcerr, jdsterr;
static jpeg_transform_info transformoption; /* image transformation options */
transformoption.transform = JXFORM_NONE;
transformoption.trim = FALSE;
transformoption.force_grayscale = FALSE;
jvirt_barray_ptr * src_coef_arrays;
jvirt_barray_ptr * dst_coef_arrays;
/* Initialize the JPEG decompression object with default error handling. */
srcinfo.err = jpeg_std_error(&jsrcerr);
jpeg_create_decompress(&srcinfo);
/* Initialize the JPEG compression object with default error handling. */
dstinfo.err = jpeg_std_error(&jdsterr);
jpeg_create_compress(&dstinfo);
FILE *fp;
if((fp = fopen(filePath], "rb")) == NULL) {
//Throw an error
} else {
//Continue
}
/* Specify data source for decompression */
jpeg_stdio_src(&srcinfo, fp);
/* Enable saving of extra markers that we want to copy */
jcopy_markers_setup(&srcinfo, JCOPYOPT_ALL);
/* Read file header */
(void) jpeg_read_header(&srcinfo, TRUE);
jtransform_request_workspace(&srcinfo, &transformoption);
src_coef_arrays = jpeg_read_coefficients(&srcinfo);
jpeg_copy_critical_parameters(&srcinfo, &dstinfo);
/* Do your DCT shenanigans here on src_coef_arrays like this (I've moved it into a separate function): */
moveDCTAround(&srcinfo, &dstinfo, 0, src_coef_arrays);
/* ..when done with DCT, do this: */
dst_coef_arrays = jtransform_adjust_parameters(&srcinfo, &dstinfo, src_coef_arrays, &transformoption);
fclose(fp);
//And write everything back
fp = fopen(filePath, "wb");
/* Specify data destination for compression */
jpeg_stdio_dest(&dstinfo, fp);
/* Start compressor (note no image data is actually written here) */
jpeg_write_coefficients(&dstinfo, dst_coef_arrays);
/* Copy to the output file any extra markers that we want to preserve */
jcopy_markers_execute(&srcinfo, &dstinfo, JCOPYOPT_ALL);
jpeg_finish_compress(&dstinfo);
jpeg_destroy_compress(&dstinfo);
(void) jpeg_finish_decompress(&srcinfo);
jpeg_destroy_decompress(&srcinfo);
fclose(fp);
And the function itself:
void moveDCTAround (j_decompress_ptr srcinfo, j_compress_ptr dstinfo, JDIMENSION x_crop_offset, jvirt_barray_ptr *src_coef_arrays)
{
    size_t block_row_size;
    JBLOCKARRAY coef_buffers[MAX_COMPONENTS];
    JBLOCKARRAY row_ptrs[MAX_COMPONENTS];

    //Allocate DCT array buffers
    for (JDIMENSION compnum=0; compnum<srcinfo->num_components; compnum++)
    {
        coef_buffers[compnum] = (dstinfo->mem->alloc_barray)((j_common_ptr) dstinfo, JPOOL_IMAGE,
                                                             srcinfo->comp_info[compnum].width_in_blocks,
                                                             srcinfo->comp_info[compnum].height_in_blocks);
    }

    //For each component,
    for (JDIMENSION compnum=0; compnum<srcinfo->num_components; compnum++)
    {
        block_row_size = (size_t) sizeof(JCOEF)*DCTSIZE2*srcinfo->comp_info[compnum].width_in_blocks;
        //...iterate over rows,
        for (JDIMENSION rownum=0; rownum<srcinfo->comp_info[compnum].height_in_blocks; rownum++)
        {
            row_ptrs[compnum] = (dstinfo->mem->access_virt_barray)((j_common_ptr) dstinfo, src_coef_arrays[compnum], rownum, (JDIMENSION) 1, FALSE);
            //...and for each block in a row,
            for (JDIMENSION blocknum=0; blocknum<srcinfo->comp_info[compnum].width_in_blocks; blocknum++)
                //...iterate over DCT coefficients
                for (JDIMENSION i=0; i<DCTSIZE2; i++)
                {
                    //Manipulate your DCT coefficients here. For instance, the code here inverts the image.
                    coef_buffers[compnum][rownum][blocknum][i] = -row_ptrs[compnum][0][blocknum][i];
                }
        }
    }

    //Save the changes
    //For each component,
    for (JDIMENSION compnum=0; compnum<srcinfo->num_components; compnum++)
    {
        block_row_size = (size_t) sizeof(JCOEF)*DCTSIZE2 * srcinfo->comp_info[compnum].width_in_blocks;
        //...iterate over rows
        for (JDIMENSION rownum=0; rownum < srcinfo->comp_info[compnum].height_in_blocks; rownum++)
        {
            //Copy the whole rows
            row_ptrs[compnum] = (dstinfo->mem->access_virt_barray)((j_common_ptr) dstinfo, src_coef_arrays[compnum], rownum, (JDIMENSION) 1, TRUE);
            memcpy(row_ptrs[compnum][0][0], coef_buffers[compnum][rownum][0], block_row_size);
        }
    }
}

DirectX11 CreateWICTextureFromMemory Using PNG

I've currently got textures loading using CreateWICTextureFromFile; however, I'd like a little more control over it, and I'd like to store images in their byte form in a resource loader. Below are two sets of test code that return two separate results, and I'm looking for any insight into a possible solution.
ID3D11ShaderResourceView* srv;
std::basic_ifstream<unsigned char> file("image.png", std::ios::binary);
file.seekg(0,std::ios::end);
int length = file.tellg();
file.seekg(0,std::ios::beg);
unsigned char* buffer = new unsigned char[length];
file.read(&buffer[0],length);
file.close();
HRESULT hr;
hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), &buffer[0], sizeof(buffer), nullptr, &srv, NULL);
The above code returns "Component not found."
std::ifstream file;
ID3D11ShaderResourceView* srv;
file.open("../Assets/Textures/osg.png", std::ios::binary);
file.seekg(0,std::ios::end);
int length = file.tellg();
file.seekg(0,std::ios::beg);
std::vector<char> buffer(length);
file.read(&buffer[0],length);
file.close();
HRESULT hr;
hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), (const uint8_t*)&buffer[0], sizeof(buffer), nullptr, &srv, NULL);
The above code returns that the image format is unknown.
I'm clearly doing something wrong here; any help is greatly appreciated. I tried finding anything even similar on Stack Overflow and Google, to no avail.
Hopefully someone trying to do the same thing will find this solution.
Below is the code I used to solve this problem.
std::basic_ifstream<unsigned char> file("image.png", std::ios::binary);
if (file.is_open())
{
    file.seekg(0, std::ios::end);
    int length = file.tellg();
    file.seekg(0, std::ios::beg);
    unsigned char* buffer = new unsigned char[length];
    file.read(&buffer[0], length);
    file.close();
    HRESULT hr;
    hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), &buffer[0], (size_t)length, nullptr, &srv, NULL);
}
The important change is passing (size_t)length to CreateWICTextureFromMemory instead of sizeof(buffer): sizeof gives the size of the pointer (or of the vector object), not the number of bytes that were read.
It was indeed a stupid error.
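For what it's worth, a std::vector also avoids the manual new[] and makes the byte count explicit. A minimal sketch (assuming <fstream>, <vector>, and <cstdint> are included; variable names are illustrative, and the DirectXTK call is the same one used above):

std::ifstream file("image.png", std::ios::binary | std::ios::ate);
std::vector<uint8_t> bytes(static_cast<size_t>(file.tellg()));   // size known up front
file.seekg(0, std::ios::beg);
file.read(reinterpret_cast<char*>(bytes.data()), bytes.size());

HRESULT hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(),
                                                 bytes.data(), bytes.size(), nullptr, &srv);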

OpenCV VideoWriter won't write anything, although cvWriteToAVI does

I've been trying to capture video from a cam and write it into an AVI file. I'm using Qt 4.8.2 with MSVC 2010 (x86) on Windows 7. I have 2 versions of the code: one using cv::Mat and the other using IplImage*. However, only the IplImage* version is working. Here's my code using cv::Mat:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main() {
    VideoCapture* capture2 = new VideoCapture( CV_CAP_DSHOW );
    Size size2 = Size(640,480);
    int codec = CV_FOURCC('M', 'J', 'P', 'G');
    VideoWriter* writer2 = new VideoWriter("video.avi", codec, 15, size2);
    int a = 100;
    Mat frame2;
    while ( a > 0 ) {
        capture2->read(frame2);
        writer2->write(frame2);
        a--;
    }
    writer2->release();
    capture2->release();
    return 0;
}
And here's the code using IplImage*:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
    CvCapture* capture = cvCaptureFromCAM( CV_CAP_DSHOW );
    CvSize size = cvSize(640,480);
    int codec = CV_FOURCC('M', 'J', 'P', 'G');
    CvVideoWriter* writer = cvCreateVideoWriter("video.avi", codec, 15, size);
    int a = 100;
    while ( a > 0 ) {
        IplImage* frame = cvQueryFrame( capture );
        cvWriteToAVI(writer, frame);
        a--;
    }
    cvReleaseVideoWriter(&writer);
    cvReleaseCapture( &capture );
    return 0;
}
It's basically the same, or at least it looks like the same thing to me. It reads 100 frames and should write them into "video.avi". It compiles and runs without errors, but the cv::Mat version doesn't write anything, and the IplImage* version works perfectly.
Does someone have any idea on what's going on?
The syntax in the OpenCV C++ reference is a bit different; here is working code in C++.
I just added imshow and waitKey for checking; you can remove them if you want.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture* capture2 = new VideoCapture(CV_CAP_DSHOW);
    Size size2 = Size(640, 480);
    int codec = CV_FOURCC('M', 'J', 'P', 'G');

    // Unlike in C, here we use an object of the class VideoWriter
    VideoWriter writer2("video_.avi", codec, 15.0, size2, true);
    writer2.open("video_.avi", codec, 15.0, size2, true);

    if (writer2.isOpened())
    {
        int a = 100;
        Mat frame2;
        while (a > 0)
        {
            capture2->read(frame2);
            imshow("live", frame2);
            waitKey(100);
            writer2.write(frame2);
            a--;
        }
    }
    else
    {
        cout << "ERROR while opening" << endl;
    }

    // No need to release the writer, as the destructor will be called automatically
    capture2->release();
    return 0;
}
I had the same problem over and over again, and none of the solutions I found online helped.
Strangely enough, the problem (identified purely by trial and error) was with write permissions. Everything worked after I ran sudo chmod u+rwx on the Python script.
I had the same problem, and after a while I realized that the input frames weren't the same size as the output. Resizing the input frames may help you:
capture2->read(frame2);
cv::resize(frame2, frame2, cv::Size(640, 480));
writer2->write(frame2);
