Good afternoon everybody,
In OpenCV I'm having trouble getting VideoCapture's set function to change the frame rate. Here is my test program, using the "768x576.avi" test video file from the "C:\OpenCV-3.1.0\opencv\sources\samples\data" directory:
// VideoCaptureSetTest.cpp
#include<opencv2/core/core.hpp>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<iostream>
#include<conio.h> // it may be necessary to change or remove this line if not using Windows
///////////////////////////////////////////////////////////////////////////////////////////////////
int main() {
    cv::VideoCapture capVideo;
    cv::Mat imgFrame;

    capVideo.open("768x576.avi");

    if (!capVideo.isOpened()) {                                 // if unable to open video file
        std::cout << "error reading video file" << std::endl << std::endl;     // show error message
        _getch();       // it may be necessary to change or remove this line if not using Windows
        return(0);      // and exit program
    }

    char chCheckForEscKey = 0;

    double dblFPS = capVideo.get(CV_CAP_PROP_FPS);
    std::cout << "1st time - dblFPS = " << dblFPS << "\n";      // this prints "10" to std out as expected

    bool blnSetReturnValue = capVideo.set(CV_CAP_PROP_FPS, 5);

    dblFPS = capVideo.get(CV_CAP_PROP_FPS);                     // re-read the property after the set call
    std::cout << "2nd time - dblFPS = " << dblFPS << "\n";      // this also prints "10", why not "5" as expected ??!!
    std::cout << "blnSetReturnValue = " << blnSetReturnValue << std::endl;      // this prints "0" (i.e. false)

    while (chCheckForEscKey != 27) {
        capVideo.read(imgFrame);
        if (imgFrame.empty()) break;
        cv::imshow("imgFrame", imgFrame);
        chCheckForEscKey = cv::waitKey(1);
    }

    return(0);
}
I'm using a very standard setup here, OpenCV 3.1.0, Visual Studio 2015, Windows 10, and the .avi file I'm testing with is the one that ships with OpenCV.
No matter what I attempt to set the FPS to, it stays at 10 and the set function always returns false.
Yes, I'm aware of the hack fix of passing a larger value to cv::waitKey() to achieve a certain delay, but that delay would be computer-dependent, and since I may need to run the video on various computers in the future, this is not an option.
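For clarity, the waitKey() workaround I mean is something like the following sketch, with the per-frame delay derived from the file's reported FPS (dblFPS from the code above); the actual display rate would still vary with how long read() and imshow() take on each machine:

int delayMs = cvRound(1000.0 / dblFPS);        // e.g. a 10 FPS file -> 100 ms per frame
chCheckForEscKey = cv::waitKey(delayMs);       // instead of cv::waitKey(1)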
Am I doing something wrong? Is VideoCapture::set known not to work in some cases? Has anybody else seen the same results? I checked the OpenCV issue tracker and did not find anything to this effect. Is this a bug I should raise a ticket for? If anybody has experience with this, please advise.
I am trying to print the contents of a String in a console application. I am doing a test and would like to visualize the content for debugging purposes.
Here is my code:
bool Tests::test001() {
    std::string temp;
    CDecoder decoder;   // Create an instance of the CDecoder class
    String input = "60000000190210703800000EC00000164593560001791662000000000000080000000002104302040235313531353135313531353153414C4535313030313233343536373831323334353637383930313233";
    String expected_output = "6000000019";
    String output = decoder.getTPDU(input);   // Call the getTPDU method

    std::cout << "Expected :" << expected_output.t_str() << std::endl;
    std::cout << "Obtained :" << output.t_str() << std::endl;

    return output == expected_output;   // Return true if the output is as expected, false otherwise
}
This is what I get:
Running test: 0
Expected :024B8874
Obtained :00527226
Test Fail
Press any key to continue...
This is what I want to get:
Running test: 0
Expected :6000000019
Obtained :0000001902
Test Fail
Press any key to continue...
Here the Obtained value is just a substring of the input that I chose arbitrarily (a shift to the left by two characters), to stand in for a real result.
Whether I use t_str() or c_str() the result is the same.
In C++Builder 2009 and later, String (aka System::String) is an alias (i.e., a typedef) for System::UnicodeString, which is a UTF-16 string type based on wchar_t on Windows and char16_t on other platforms.
Also, the UnicodeString::t_str() method has been deprecated since around C++Builder 2010. In modern versions, it just returns the same pointer as the UnicodeString::c_str() method.
You can't print a UnicodeString's characters using std::cout. You are getting memory addresses printed out instead of characters, because std::cout does not have an operator<< defined for wchar_t*/char16_t* pointers, but it does have one for void* pointers.
You need to use std::wcout instead, e.g.:
std::wcout << L"Expected :" << expected_output.c_str() << std::endl;
std::wcout << L"Obtained :" << output.c_str() << std::endl;
If you want to use std::cout, you will have to convert the String values to either System::UTF8String (and put the console into UTF-8 mode) or System::AnsiString instead, e.g.:
std::cout << "Expected :" << AnsiString(expected_output).c_str() << std::endl;
std::cout << "Obtained :" << AnsiString(output).c_str() << std::endl;
This seems to do the trick:
std::wcout
Here is the working code:
// Member function to run test001
bool Tests::test001() {
    std::string temp;
    CDecoder decoder;   // Create an instance of the CDecoder class
    String input = "60000000190210703800000EC00000164593560001791662000000000000080000000002104302040235313531353135313531353153414C4535313030313233343536373831323334353637383930313233";
    String expected_output = "6000000019";
    String output = decoder.getTPDU(input);   // Call the getTPDU method

    std::wcout << "Expected: " << expected_output << std::endl;
    std::wcout << "Obtained: " << output << std::endl;

    return output == expected_output;   // Return true if the output is as expected, false otherwise
}
I am using OpenCV 4.2.0 (contrib). OpenCV will not show a label image created with connectedComponentsWithStats.
The relevant code is:
// Connected Components
std::cout << "Calculating Connected Components..." << std::endl;
Mat background_mask_labels(background_mask.size(), CV_32S);
Mat cc_stats, cc_centroids;
int nLabels = cv::connectedComponentsWithStats(background_mask, background_mask_labels, cc_stats, cc_centroids, 4);

// show output
imshow("Background Image", background_mask);
imshow("Background Labels", background_mask_labels);
waitKey(0);
When running the program in x64 Debug mode it throws an error:
OpenCV(4.2.0) Error: Assertion failed (src_depth != CV_16F && src_depth != CV_32S) in convertToShow, file c:\build\master_winpack-build-win64-vc15\opencv\modules\highgui\src\precomp.hpp, line 137
at
imshow("Background Labels", background_mask_labels);
But I don't understand the error, since I declared background_mask_labels to be CV_32S in the first place.
Even adding background_mask_labels.convertTo(background_mask_labels, CV_32S); after the connectedComponentsWithStats call does not help; the output is the same.
Thanks for helping!
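Edit: for reference, scaling the labels down to 8 bits before displaying does put something on screen; a minimal sketch, assuming nLabels > 1 so the scale factor is valid (the scaling is only for visualization):

Mat labels_display;
background_mask_labels.convertTo(labels_display, CV_8U, 255.0 / (nLabels - 1));   // map label ids into 0..255
imshow("Background Labels", labels_display);   // imshow accepts CV_8U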
I am trying to render a 2D texture from a cv::Mat to a texture in DX11, and also to a shader resource afterwards. The problem is that the program crashes on Device::CreateTexture2D() and I can't figure out why. I have researched the whole day and I just don't see what's wrong here.
Furthermore, the problem does not seem to be the cv::Mat as the resource, because I have also tried this example: D3D11 CreateTexture2D in 32 bit format, with the chess example as the resource for the texture... and the function still crashes when called with the 3rd parameter.
I found others who had problems with that function; sometimes the cause was that SysMemPitch was not set for 2D textures, but unfortunately that's not the case here.
Error Output: First-chance exception at 0x692EF11E (igd10iumd32.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x03438000.
Unhandled exception at 0x692EF11E (igd10iumd32.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x03438000.
Here is the relevant code:
bool Texture::InitCameraStream(ID3D11Device* device, ARiftControl* arift_control)
{
    D3D11_TEXTURE2D_DESC td;
    ZeroMemory(&td, sizeof(td));
    td.ArraySize = 1;
    td.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    td.Usage = D3D11_USAGE_DYNAMIC;
    td.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    td.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    td.Height = arift_control->picture_1_.size().height;
    td.Width = arift_control->picture_1_.size().width;
    td.MipLevels = 1;
    td.MiscFlags = 0;
    td.SampleDesc.Count = 1;
    td.SampleDesc.Quality = 0;

    D3D11_SUBRESOURCE_DATA srInitData;
    srInitData.pSysMem = arift_control->picture_1_.ptr();
    srInitData.SysMemPitch = arift_control->picture_1_.size().width * 4;

    ID3D11Texture2D* tex = 0;
    if (device->CreateTexture2D(&td, &srInitData, NULL) == S_FALSE)
    {
        std::cerr << "Texture Description: OK " << std::endl << "Subresource: OK" << std::endl;
    }
    if (FAILED(device->CreateTexture2D(&td, &srInitData, &tex)))
    {
        std::cerr << "Error: Texture could not be created!" << std::endl;
        return false;
    }

    // Create the shader-resource view
    D3D10_SHADER_RESOURCE_VIEW_DESC srDesc;
    srDesc.Format = td.Format;
    srDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
    srDesc.Texture2D.MostDetailedMip = 0;
    srDesc.Texture2D.MipLevels = 1;

    if (FAILED(device->CreateShaderResourceView(tex, NULL, &texture_)))
    {
        std::cerr << "Can't create Shader Resource View" << std::endl;
        return false;
    }

    return true;
}
CreateTexture2D returns S_FALSE when the first 2 parameters are valid and 0 is passed as the 3rd parameter. So in my case it also returns S_FALSE the first time, which is why the debug output appears. When calling CreateTexture2D with the 3rd parameter (the texture COM object), it crashes. I have absolutely no idea anymore.
Furthermore, I tried to set up debugging with DirectX and followed this tutorial: http://blog.rthand.com/post/2010/10/25/Capture-DirectX-1011-debug-output-to-Visual-Studio.aspx - but I can't see a "Debug" window in my project properties in Visual Studio 2013. So I still get the "igd10iumd32.pdb not loaded" window after the program crashes.
Edit: at least I could fix the issue with the additional D3D debug outputs for now. In my Visual Studio 2013 I had to set the following: Project Properties -> Debugging -> Debug Type -> Mixed to get the additional D3D logs :)
Can anyone help here? It's really frustrating getting stuck on that single function the whole day.
Many thanks!
Max
Your input texture data passed in D3D11_SUBRESOURCE_DATA is not sufficiently sized. In your comment, you said that the input image data is 900x1600, and the link is a JPEG. However, you are telling D3D that the data format is DXGI_FORMAT_B8G8R8A8_UNORM. JPEG is a compressed format, so the data stream will be smaller than it would be in BGRA format. When your driver (igd10iumd32.dll) attempts to read this input stream, it crashes because the buffer is not as large as you told D3D it was.
You can use D3DX11CreateTextureFromFile to load JPEG data. There are also some free image conversion libraries you can use to convert the JPEG into a format D3D natively supports.
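Since you already have OpenCV in the project, another option is to decode the JPEG into a 4-channel cv::Mat first and upload that, so the buffer size and row pitch really match what the texture description promises. A sketch (the file name is just a placeholder for wherever your frame comes from):

// decode the JPEG, then expand BGR -> BGRA to match DXGI_FORMAT_B8G8R8A8_UNORM
cv::Mat bgr = cv::imread("frame.jpg");                  // decodes to 8-bit, 3-channel BGR
cv::Mat bgra;
cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);            // 4 bytes per pixel

D3D11_SUBRESOURCE_DATA srInitData;
srInitData.pSysMem = bgra.data;                         // now really width * height * 4 bytes
srInitData.SysMemPitch = static_cast<UINT>(bgra.step);  // bytes per row
srInitData.SysMemSlicePitch = 0;                        // unused for 2D textures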
Please tell me what steps I should follow to install IPP 7.1 on Windows and use it with OpenCV 2.4.2. I downloaded the IPP 7.1 evaluation version, used CMake 2.8, configured a static OpenCV build in CMake, and built all projects in VS2008 without any problem.
For the static project I appended the following list for OpenCV & 3rdparty:
opencv_calib3d241.lib opencv_contrib241.lib opencv_core241.lib
opencv_features2d241.lib opencv_flann241.lib opencv_gpu241.lib
opencv_highgui241.lib opencv_imgproc241.lib opencv_legacy241.lib
opencv_ml241.lib opencv_nonfree241.lib opencv_objdetect241.lib
opencv_photo241.lib opencv_stitching241.lib opencv_ts241.lib
opencv_video241.lib opencv_videostab241.lib
libjasper.lib libjasperd.lib libjpeg.lib libjpegd.lib libpng.lib
libpngd.lib libtiff.lib libtiffd.lib zlib.lib zlibd.lib user32.lib
And then appended the following list for the IPP static libraries:
ippac_l.lib ippcc_l.lib ippch_l.lib ippcore_l.lib ippcv_l.lib
ippdc_l.lib ippdi_l.lib ippi_l.lib ippj_l.lib ippm_l.lib ippr_l.lib
ippsc_l.lib ipps_l.lib ippvc_l.lib ippvm_l.lib
My project compiled without any problem.
I used the following code as a way to make sure that IPP is installed and working correctly. cvGetModuleInfo fills two output arguments: the first one, "opencv_lib", gets filled with the OpenCV version. But my problem is with the second parameter: "add_modules" is always empty.
const char* opencv_lib = 0;
const char* add_modules = 0;
cvGetModuleInfo(0, &opencv_lib,&add_modules);
printf("\t opencv_lib = %s,\n\t add_modules = %s\n\n", opencv_lib,add_modules);
There is another problem too, which I believe is related to the previous one. In the following code I've called cvUseOptimized(1) and cvUseOptimized(0) before two identical timing loops. But the odd thing is that the processing time is actually equal for both!
double t1, t2, timeCalc;
IplImage *template_image = cvLoadImage("c:\\box.png", 0);
IplImage *converted_image = cvLoadImage("c:\\box_in_scene.png", 0);
CvSize cvsrcSize = cvGetSize(converted_image);
cout << " image match template using OpenCV cvMatchTemplate() " << endl;

IplImage *image_ncc, *result_ncc;
image_ncc = cvCreateImage(cvsrcSize, 8, 1);
memcpy(image_ncc->imageData, converted_image->imageData, converted_image->imageSize);
result_ncc = cvCreateImage(cvSize(converted_image->width - template_image->width + 1,
                                  converted_image->height - template_image->height + 1), IPL_DEPTH_32F, 1);

int NumUploadedFunction = cvUseOptimized(1);
t1 = (double)cvGetTickCount();
for (int j = 0; j < LOOP; j++)
    cvMatchTemplate(image_ncc, template_image, result_ncc, CV_TM_CCORR_NORMED);
t2 = (double)cvGetTickCount();
timeCalc = (t2 - t1) / ((double)cvGetTickFrequency() * 1000.0 * 1000.0);
cout << " OpenCV matchtemplate using cross-correlation Valid: " << timeCalc << endl;

NumUploadedFunction = cvUseOptimized(0);
t1 = (double)cvGetTickCount();
for (int j = 0; j < LOOP; j++)
    cvMatchTemplate(image_ncc, template_image, result_ncc, CV_TM_CCORR_NORMED);
t2 = (double)cvGetTickCount();
timeCalc = (t2 - t1) / ((double)cvGetTickFrequency() * 1000.0 * 1000.0);
cout << " OpenCV matchtemplate using cross-correlation Valid: " << timeCalc << endl;
I have a fairly basic program that is intended to sort a list of numbers via a linked list.
Where I am getting hung up is when the element needs to be inserted at the beginning of the list. Here is the chunk of code in question.
Assume that root->x = 15 and assume that the user inputs 12 when prompted:
void addNode(node *root)
{
    int check = 0;          // To break the loop
    node *current = root;   // Starts at the head of the linked list
    node *temp = new node;

    cout << "Enter a value for x" << endl;
    cin >> temp->x;
    cin.ignore(100, '\n');

    if (temp->x < root->x)
    {
        cout << "first" << endl;
        temp->next = root;
        root = temp;
        cout << root->x << " " << root->next->x;   // Displays 12 15, the correct response
    }
But if, after running this function, I try
cout << root->x;
Back in main(), it displays 15 again. So the code
root=temp;
is being lost once I leave the function. Meanwhile, other changes through *root, such as adding another element to the linked list and pointing root->next to it, are carried over.
Suggestions?
This is because you are assigning to the local node *root variable: you are not modifying the caller's root pointer, only the copy of it that was passed on the stack.
To fix it you need to take the pointer by reference, e.g.:
void addNode(node*& root)
or a pointer to pointer:
void addNode(node **root)
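With either signature, the assignment to root inside the function updates the caller's pointer. A minimal sketch of the head-insertion part of addNode using the reference-to-pointer version (the rest of your function stays the same):

void addNode(node*& root)       // root now refers to the caller's pointer
{
    node *temp = new node;
    cout << "Enter a value for x" << endl;
    cin >> temp->x;
    cin.ignore(100, '\n');

    if (temp->x < root->x)      // new value belongs at the head
    {
        temp->next = root;
        root = temp;            // this assignment is now visible back in main()
    }
}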