I have a problem; it might be trivial to many of you...
I'm reading different images, extracting SIFT features, and saving the features to a YAML file, which gives me a file like this:
descriptors1: !!opencv-matrix
rows: 342
cols: 128
dt: f
data: [ 0.,.........
....................]
descriptors1: !!opencv-matrix
rows: 393
cols: 128
dt: f
data: [ 0., 0., 3., 62.....
......]
and so on... The first part is the first image's information and the second part is the second image's information.
So far this mostly satisfies my needs...
but when I read the file back I get only the first part, i.e. only the information of the first image (the rest of the file is ignored) :(
This is how I read it in my code:
FileStorage fs;
fs.open("cola.yaml", FileStorage::READ);
if (!fs.isOpened())
{
    cout << "failed to open " << "cola.yaml" << endl;
    return 1;
}
Mat descriptors1;
fs["descriptors1"] >> descriptors1;
fs.release();
What I want is to read all the information contained in this file, so that I end up with one matrix holding the information of all the images. At the moment I get a 342x128 matrix, but I want a 735x128 matrix.
What should I do?
Your file stores both matrices under the same key, descriptors1, so the reader only ever finds the first one. Save the two descriptor matrices under different names in the YAML file,
for example descriptors1 and descriptors2.
I manually added a custom matrix to a YAML file of OpenCV parameters, but OpenCV cannot read the matrix and returns a none-type. I do not know what is happening; I tried editing the file in both Notepad and Visual Studio Code.
%YAML:1.0
---
test_matrix: !!opencv-matrix
rows: 2
cols: 2
dt: i
data: [ 1, 1, 1, 1 ]
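One likely cause of the none-type result is indentation: FileStorage writes the fields of an !!opencv-matrix indented under its key, and its reader expects the same layout. With rows, cols, dt and data flush-left as above, they become separate top-level keys and test_matrix itself carries no value. The same matrix, indented the way OpenCV itself writes it:

```yaml
%YAML:1.0
---
test_matrix: !!opencv-matrix
   rows: 2
   cols: 2
   dt: i
   data: [ 1, 1, 1, 1 ]
```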
I want to extract iVector features from a .txt file containing Mel-Frequency Cepstral Coefficients (MFCCs). That is, I already have the MFCC extraction part; I just want to apply a universal background model / Gaussian mixture model (UBM-GMM) to this file (floats) and then extract the iVector using the bob tool.
The file contains 151,000 rows and 39 columns (floats). I tried to use bob, but I noticed that bob follows a complete pipeline: MFCC extraction, then UBM-GMM, then iVector extraction. In my case I already have the MFCCs, and I just want to apply the UBM-GMM to this file and then extract the iVector.
This is the content of my input file (floats):
"[ 20.412132 12.660626 -0.364722 ... 0.66215 -2.386265 -0.483279] [ 20.387691 28.921465 -7.665021 ... 4.967792 -0.837653 0.086281] [ 20.592092 26.138031 -6.041919 ... 4.59261 -1.237065 -0.047726] ... [ 17.551338 -26.566893 -8.112051 ... -1.840103 5.563478 0.753488] [ 17.538497 -25.868483 -6.84744 ... -0.072941 3.455679 -2.500857] [ 17.157589 -22.458966 -4.125641 ... 1.418648 3.925556 -3.072655]"
I want to apply a UBM-GMM to this data and then extract the iVector.
As output I want another matrix, of lower dimension than the input, which represents the iVector.
More on iVector: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41528.pdf
I am implementing the Bag of Words model using SURF and SIFT features and an SVM classifier. I want to train it on 80% and test it on 20% of 2,876 images. I have set dictionarySize to 1000. My machine is an Intel Xeon (2 processors) with 32 GB RAM and a 500 GB HDD. Images are read whenever necessary instead of being stored, like this:
std::ifstream file("C:\\testFiles\\caltech4\\train0.csv", ifstream::in);
if (!file)
{
    string error_message = "No valid input file was given, please check the given filename.";
    CV_Error(CV_StsBadArg, error_message);
}
string line, path, classlabel;
printf("\nReading Training images................\n");
while (getline(file, line))
{
    stringstream liness(line);
    getline(liness, path, separator);
    getline(liness, classlabel);
    if (!path.empty())
    {
        Mat image = imread(path, 0);
        cout << " " << path << "\n";
        detector.detect(image, keypoints1);
        detector.compute(image, keypoints1, descriptor1);
        featuresUnclustered.push_back(descriptor1);
    }
}
Here, train0.csv contains the paths to the images along with their labels. It stops inside this loop while reading the images, computing the descriptors, and adding them to the features to be clustered; an error appears on the console.
When I resized the images being read to 256x256, the memory requirement was reduced and the error disappeared:
Mat image = imread(path, 0);
resize(image,image,Size(256,256));
cout << " " << path << "\n";
detector.detect(image, keypoints1);
detector.compute(image, keypoints1,descriptor1);
featuresUnclustered.push_back(descriptor1);
But it might reappear with a bigger dataset.
I have an image in a CSV file and I want to load it in my program. I found that I can load an image from CSV like this:
CvMLData mlData;
mlData.read_csv(argv[1]);
const CvMat* tmp = mlData.get_values();
cv::Mat img(tmp, true),img1;
img.convertTo(img, CV_8UC3);
cv::namedWindow("img");
cv::imshow("img", img);
I have an RGB picture in that file, but I get a grey picture... Can somebody explain how to load the color image, or how I can modify this code to get a color image?
Thanks!
Updated
OK, I don't know how to read your file into OpenCV for the moment, but I can offer you a work-around to get you started. The following will create a header for a PNM-format file to match your CSV file and then append your data onto the end, and you should end up with a file that you can load.
printf "P3\n284 177\n255\n" > a.pnm # Create PNM header
tr -d ',][' < izlaz.csv >> a.pnm # Append CSV data, after removing commas and []
If I do the above, I can see your bench, tree and river.
If you cannot read that PNM file directly into OpenCV, you can make it into a JPEG with ImageMagick like this:
convert a.pnm a.jpg
I also had a look at the University of Wisconsin ML data archive, that is read with those OpenCV functions that you are using, and the format of their data is different from yours... theirs is like this:
1000025,5,1,1,1,2,1,3,1,1,2
1002945,5,4,4,5,7,10,3,2,1,2
1015425,3,1,1,1,2,2,3,1,1,2
1016277,6,8,8,1,3,4,3,7,1,2
yours looks like this:
[201, 191, 157, 201 ... ]
So maybe this tr command is enough to convert your data:
tr -d '][' < izlaz.csv > TryMe.csv
Original Answer
If you run the following on your CSV file, it translates commas into newlines and then counts the lines:
tr "," "\n" < izlaz.csv | wc -l
And that gives 150,804 lines, which means 150,804 commas in your file and therefore 150,804 integers in your file (+/- 1 or 2). If your greyscale image is 177 rows by 852 columns, you would need 150,804 RGB triplets (i.e. about 450,000 integers) to represent a colour image; as it is, you only have a single greyscale value for each pixel.
The fault is in the way you write the file, not the way you read it.
To see the color image, I must set the number of channels. So this code works for me:
CvMLData mlData;
mlData.read_csv(argv[1]);
const CvMat* tmp = mlData.get_values();
cv::Mat img(tmp, true),img1;
img.convertTo(img, CV_8UC3);
img= img.reshape(3); //set number of channels
I am trying to convert a 1-channel image (16-bit) to a 3-channel image in OpenCV 2.3.1. I am having trouble using the merge function and get the error shown after the code:
Mat temp, tmp2;
Mat hud;
tmp2 = cv_ptr->image;
tmp2.convertTo(temp, CV_16UC1);
temp = temp.t();
cv::flip(temp, temp, 1);
resize(temp, temp, Size(320, 240));
merge(temp, 3, hud);
error: no matching function for call to ‘merge(cv::Mat&, int, cv::Mat&)’
Can anyone help me with this? Thanks in advance!
If temp is the 1 channel matrix that you want to convert to 3 channels, then the following will work:
cv::Mat out;
cv::Mat in[] = {temp, temp, temp};
cv::merge(in, 3, out);
Check the documentation for more info.
Here is a solution that does not require replicating the single-channel image before creating a 3-channel image from it. Its memory footprint is a third of that of the solution using merge (by volting above).
See the OpenCV documentation for cv::mixChannels if you want to understand why this works:
// copy channel 0 from the first image to all channels of the second image
int from_to[] = { 0,0, 0,1, 0,2};
Mat threeChannelImage(singleChannelImage.size(), CV_8UC3);
mixChannels(&singleChannelImage, 1, & threeChannelImage, 1, from_to, 3);
It looks like you aren't quite using merge correctly: you need to specify all of the channels that are to be merged. I think you want a three-channel frame with all the channels identical; in Python I would write this:
cv.Merge(temp, temp, temp, None, hud)
From the opencv documentation:
cvMerge: Composes a multi-channel array from several single-channel arrays or inserts a single channel into the array.