OpenCV: passing an argument from a C main function to the OpenCV C++ API

I have an application written in C, and I would like to use a few OpenCV functions for some image processing. I see that the C API has been deprecated, so I am currently trying to interface with the C++ API from C, and I am running into some compilation issues.
Just to reproduce the example, let's say that I have this C main function in a file grouping_test_main.c:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdarg.h>
#include <malloc.h>
extern "C"
{
#include "opencv_grouping.h"
}
uint8_t **allocate_2D_array(uint32_t SizeY, uint32_t SizeX);
int free_2D_array(uint8_t **image_array, uint32_t SizeY);
int main()
{
    uint32_t SizeY = 19;
    uint32_t SizeX = 20;
    uint8_t **image_array;
    FILE *fimage_array = fopen("./test_array.txt", "r");
    double threshold_value = 128;
    ConnectedComponentsResults results;
    int i;
    int j;
    image_array = allocate_2D_array(SizeY, SizeX);
    for (i = 0; i < SizeY; i++)
    {
        for (j = 0; j < SizeX; j++)
        {
            fscanf(fimage_array, "%hhu", &image_array[i][j]);
        }
    }
    fclose(fimage_array);
    threshold_then_group(image_array, SizeY, SizeX, threshold_value, results);
    printf("Number of connected components: %d\n", results.num_components);
    free_2D_array(image_array, SizeY);
    return 0;
}
uint8_t **allocate_2D_array(uint32_t SizeY, uint32_t SizeX)
/*
 * Allocates memory for a 2D image array.
 */
{
    int i;
    uint8_t **image_array;
    image_array = malloc(SizeY * sizeof(uint8_t *));
    if (image_array == NULL)
    {
        fprintf(stderr, "Memory allocation for image_array failed.\n");
        exit(EXIT_FAILURE);
    }
    for (i = 0; i < SizeY; i++)
    {
        image_array[i] = malloc(SizeX * sizeof(uint8_t));
        if (image_array[i] == NULL)
        {
            fprintf(stderr, "Memory allocation for image_array[%d] failed.\n", i);
            exit(EXIT_FAILURE);
        }
    }
    return image_array;
}
int free_2D_array(uint8_t **image_array, uint32_t SizeY)
/*
 * Frees memory allocated for a 2D image array.
 */
{
    int i;
    for (i = 0; i < SizeY; i++)
    {
        free(image_array[i]);
    }
    free(image_array);
    return(1);
}
threshold_then_group() is the C++ function. test_array.txt is a text file containing a 19×20 array:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 255 255 255 0 0 0 0 0 0 255 255 255 255 255 0 0 0
0 0 0 255 255 255 0 0 0 0 0 0 0 255 255 255 0 0 0 0
0 0 255 255 255 255 0 0 0 0 0 0 0 0 255 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 255 255 0 0 0 0 0 0 0 0 255 0 0 0 0 0
0 0 0 255 255 255 255 255 0 0 0 0 255 255 255 255 0 0 0 0
0 0 0 0 255 255 255 255 255 0 0 255 255 255 0 0 0 0 0 0
0 0 0 0 0 0 255 255 255 255 255 255 255 0 0 0 0 0 0 0
0 0 0 0 0 0 0 255 255 255 255 255 255 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 255 255 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
opencv_grouping.h is the header file where I declare the C++ functions.
/* Header file for opencv_grouping.cpp.
*/
#ifndef CONNECTED_COMPONENTS_H
#define CONNECTED_COMPONENTS_H
#include <stdint.h>
#include <opencv2/opencv.hpp>
struct ConnectedComponentsResults
{
int num_components;
cv::Mat labels;
cv::Mat stats;
cv::Mat centroids;
};
void threshold_then_group(uint8_t **image_array, uint32_t SizeY, uint32_t SizeX, double threshold_value, ConnectedComponentsResults &results);
#endif
Then, the C++ functions in opencv_grouping.cpp are:
/* Grouping and centroiding implementation using OpenCV's
* connectedComponentsWithStats().
*/
#include <stdint.h>
#include <iostream>
#include <fstream>
#include <cstdio>
#include <cstdlib>
#include <opencv2/opencv.hpp>
#include "opencv_grouping.h"
void threshold_then_group(uint8_t **image_array, uint32_t SizeY, uint32_t SizeX, double threshold_value, ConnectedComponentsResults &results)
{
    uint8_t *image_data = new uint8_t[SizeY * SizeX];
    double max_value = 255.0;
    int i;
    int j;
    /* Flatten the 2D C array into contiguous memory that cv::Mat can wrap. */
    for (i = 0; i < SizeY; i++)
    {
        for (j = 0; j < SizeX; j++)
        {
            image_data[i * SizeX + j] = image_array[i][j];
        }
    }
    cv::Mat image(SizeY, SizeX, CV_8U, image_data);
    cv::Mat binary;
    cv::threshold(image, binary, threshold_value, max_value, cv::THRESH_BINARY);
    int num_components = cv::connectedComponentsWithStats(binary, results.labels, results.stats, results.centroids, 8, CV_32S);
    /* Subtract one to exclude the background label. */
    results.num_components = num_components - 1;
    delete[] image_data;
}
I am attempting to compile the code like this:
g++ `pkg-config --cflags opencv4` -c opencv_grouping.cpp
gcc `pkg-config --cflags opencv4` -c grouping_test_main.c
gcc grouping_test_main.o opencv_grouping.o -lstdc++ `pkg-config --libs opencv4` -o opencv_test2
opencv_grouping.cpp compiles just fine, but grouping_test_main.c fails. This is because opencv_grouping.h includes opencv2/opencv.hpp, and the error messages indicate that the files included by opencv.hpp must be compiled with a C++ compiler. I haven't tried to compile the main function with g++, but even if that somehow worked, it isn't really a solution: my actual main function is a lot more complicated than what I've shown here, and it uses a lot of C libraries that I don't think would compile under g++.
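For what it's worth, the usual pattern in this situation is to keep every C++ type out of the header that the C code sees: expose a small plain-C wrapper API (primitive types or opaque pointers only), implement it in the .cpp file, and let the cv::Mat results live entirely on the C++ side. A minimal sketch, assuming a hypothetical wrapper header opencv_grouping_c.h and wrapper function threshold_then_group_c() (neither exists in the code above):
/* opencv_grouping_c.h -- C-compatible header: no OpenCV includes, no C++ types. */
#ifndef OPENCV_GROUPING_C_H
#define OPENCV_GROUPING_C_H
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
#endif
/* Returns the number of connected components, excluding the background. */
int threshold_then_group_c(uint8_t **image_array, uint32_t SizeY, uint32_t SizeX, double threshold_value);
#ifdef __cplusplus
}
#endif
#endif
/* Added at the bottom of opencv_grouping.cpp: a C entry point built on the C++ function. */
extern "C" int threshold_then_group_c(uint8_t **image_array, uint32_t SizeY, uint32_t SizeX, double threshold_value)
{
    ConnectedComponentsResults results;
    threshold_then_group(image_array, SizeY, SizeX, threshold_value, results);
    return results.num_components;
}
With that, grouping_test_main.c includes opencv_grouping_c.h instead (no extern "C" block needed, since the header carries its own guards), compiles cleanly with gcc, and the link step with -lstdc++ and `pkg-config --libs opencv4` stays as shown.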

Related

R-Package vegan Decorana

I'm new to R and I was trying to run a detrended correspondence analysis (DCA), which is a multivariate statistical method for ordination of species; I have four sites. I keep getting the error message:
> Error in rowSums(x) : 'x' must be numeric
Species | Haasfontein Mini Pit | Vlaklaagte Mini Pit | Vlaklaagte Block 3 Mini Pit | Block 10 Mini Pit
Agrostis lachnantha 1 0 0 0
Aristida congesta subsp. Congesta 0 0 0 0
Brachiaria nigropedata 0 0 0 0
Cynodon dactylon 0 12 2 3
Cyperus esculentus  0 5 0 0
Digitaria eriantha 0 1 6 20
Elionurus muticus 0 0 0 0
Eragrostis acraea De Winter 0 0 1 0
Eragrostis chloromelas 35 0 12 4
Eragrostis curvula 6 0 0 0
Eragrostis lehmanniana 5 0 0 0
Eragrostis rigidior 3 0 1 0
Eragrostis rotifer 3 0 0 0
Eragrostis trichophora 10 1 2 2
Hyparrhenia hirta 0 0 9 1
Melinis repens 0 0 2 0
Panicum coloratum 0 4 0 0
Panicum deustum  3 0 0 0
Paspalum dilatatum 0 0 0 0
Setaria sphacelata var. sphacelata 0 1 0 0
Sporobolus africanus 0 0 2 0
Sporobolus centrifuges 1 0 1 0
Sporobolus fimbriatus 0 0 0 0
Sporobolus ioclados 2 0 5 1
Themeda triandra 0 0 0 0
Trachypogon spicatus 0 0 0 0
Tragus berteronianus 0 0 0 1
Verbena bonariensis 16 0 2 0
Cirsium vulgare 0 0 0 0
Eucalyptus cameldulensis 1 0 0 0
Xanthium strumarium 0 0 0 0
Argemone ochroleuca 0 0 0 0
Solanum sisymbriifolium 0 0 0 0
Campuloclinium macrocephalum  7 0 0 0
Paspalum dilatatum 0 0 0 0
Senecio ilicifolius 0 0 0 0
Pseudognaphalium luteoalbum (L.) 8 0 0 0
 Cyperus esculentus  0 0 0 0
Foeniculum vulgare  0 0 0 0
Conyza canadensis 0 0 0 1
Tagetes minuta 0 0 0 0
Hypochaeris radicata 0 0 0 0
Solanum incanum 0 0 0 0
Asclepias fruticosa 11 0 0 0
Hypochaeris radicata 0 0 0 0
My data is organised as shown above, and I'm not sure whether it is organised correctly or there is some other error. Can someone please assist me?
You're still fighting to get your data into R. That is your first problem. After you tackle this and manage to read in your data, you have the following problems:
You should not have empty (all-zero) rows in your data, as they will give an error (empty columns are removed and only give a warning).
DCA treats rows and columns non-symmetrically: you should have species as columns and sampling units as rows. You should transpose your data (function t(); see the sketch below).
You really should not use DCA with only four sampling units. It will be meaningless.
I think the last point is most important.
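As a rough illustration of the first two points (a sketch only: it assumes your counts are in a data frame called veg with the species names stored as row names rather than as a text column; veg and ord are hypothetical names):
library(vegan)
veg <- veg[rowSums(veg) > 0, ]  # drop empty (all-zero) species rows
veg <- t(veg)  # transpose: sampling units as rows, species as columns
ord <- decorana(veg)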

Logistic Regression prediction faults

I have been trying to solve the Titanic survival problem, where I split the data so that x is the passengers and y is the Survived column. But the problem is that I can't get sensible prediction results: y_pred is 0 for all values. It would be helpful if anyone could solve it, as this is my first classifier problem as a beginner.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('C:/Users/Umer/train.csv')
x = df['PassengerId'].values.reshape(-1,1)
y = df['Survived']
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)
x_test = sc_x.transform(x_test)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
#predicting the test set results
y_pred = classifier.predict(x_test)
I couldn't reproduce the same result; in fact, I copy-pasted your code and did not get all zeros as you described. Instead I got:
[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0]
Nevertheless, there are a few things I noticed in your approach that you may want to know about:
The default separator in Pandas read_csv is a comma (,), so if your dataset's variables are separated by a tab (like the one I have), you should specify the separator like this:
df = pd.read_csv('titanic.csv', sep='\t')
PassengerId has no useful information that your model could learn from in order to predict who Survived; it's just a continuous number that increments with each new passenger. Generally speaking, in classification you want to make use of all the features your model can learn from (unless, of course, there are redundant features that add no information to the model), and that matters especially here, since yours is a multivariate dataset.
There is no point in scaling PassengerId, because feature scaling is usually used when features vary highly in magnitude, units and range (e.g. 5 kg and 5000 g); in your case, as I mentioned, it's just an incremental integer that carries no real information for the model.
One last thing: you should get your data as type float for StandardScaler, to avoid warnings like the following:
DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.
So convert it like this from the beginning:
x = df['PassengerId'].values.astype(float).reshape(-1,1)
Finally, if you're still getting the same result, then please add a link to your dataset.
Update
After the dataset was provided, it turns out that the result you're getting is correct; that's again because of reason number 2 I mentioned above (PassengerId provides no useful information to the model, so it cannot predict correctly!)
You can test it yourself by comparing the log loss before and after adding more features from the dataset:
from sklearn.metrics import log_loss
df = pd.read_csv('train.csv', sep=',')
x = df['PassengerId'].values.reshape(-1,1)
y = df['Survived']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
y_pred_train = classifier.predict(x_train)
# calculate and print the loss function using only the PassengerId
print(log_loss(y_train, y_pred_train))
#predicting the test set results
y_pred = classifier.predict(x_test)
print(y_pred)
Output
13.33982681120802
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0]
Now, using some "supposedly useful" features:
from sklearn.metrics import log_loss
df = pd.read_csv('train.csv', sep=',')
# denote the words female and male as 0 and 1
df['Sex'].replace(['female','male'], [0,1], inplace=True)
# try three features that you think they are informative to the model
# so it can learn from them
x = df[['Fare', 'Pclass', 'Sex']].values.reshape(-1,3)
y = df['Survived']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
y_pred_train = classifier.predict(x_train)
# calculate and print the loss function with the above 3 features
print(log_loss(y_train, y_pred_train))
#predicting the test set results
y_pred = classifier.predict(x_test)
print(y_pred)
Output
7.238735137632405
[0 0 0 1 1 0 1 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 0 0 1 1 0 1 1 1 0 1 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 0 0 0
0 1 1 0 0 0 0 0 1 0 0 1 1 1 1 0 0 0 0 1 1 0 1 0 0 0 0 0 0 0 1 1 1 1 0 1 0
1 0 1 0 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 1 1 0 1
1 0 0 1 1 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 1 0 0 0 0 0
0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 1 0 1
1]
In conclusion:
As you can see, the loss now has a better (lower) value than before, and the predictions are much more reasonable!

Feature Reduction

How do I reduce feature dimensionality? My feature vector looks like this:
1(Class Number) 10_10_1(File name) 0 0 0 0 0 0 0 0 0.564971751 23.16384181 25.98870056 19.20903955 16.10169492 13.27683616 1.694915254 0 0 0 0 0 0 0 3.95480226 11.5819209 20.33898305 60.4519774 3.672316384 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3.107344633 62.99435028 33.89830508 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.412429379 66.66666667 31.92090395 0 0 0 0 0 0 0 0 0 0 0 0 0 0.564971751 22.59887006 26.83615819 46.89265537 3.107344633 0 0 0 0 0 0 0 0 0 0 0 0 0 0.564971751 16.38418079 28.53107345 50.84745763 3.672316384 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 90.6779661 9.322033898 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.847457627 90.11299435 9.039548023 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 17.79661017 81.3559322 0.847457627 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 27.11864407 72.88135593 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.564971751 37.85310734 61.29943503 0.282485876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.412429379 50.84745763 47.45762712 0.282485876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 24.57627119 75.42372881 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 17.23163842 82.20338983 0.564971751 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 29.37853107 70.62146893 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 55.64971751 44.35028249 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 64.40677966 35.59322034 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 67.79661017 32.20338983 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 66.66666667 33.33333333 0 0 0 0 0 0 0 0 0 0 0 0 1 3 2 6 7 5 4 8 9 10 11 12 13 14 15 16 17 18 14.81834463 3.818489078 3.292123621 2.219541777 2.740791003 1.160544518 2.820053602 1.006906813 0.090413195 2.246638594 0.269778302 2.183126126 2.239168249 0.781498607 2.229795302 0.743329919 1.293839141 0.783068011 1.104421291 0.770312707 0.697659061 1.082266169 0.408339745 1.073922207 0.999148017 0.602195061 1.247286588 0.712143548 0.867327913 0.603063537 0.474115683 0.596387106 0.370847522 0.54900076 0.35930586 0.580272233 0.397060362 0.535337691
After the file name, the feature values are given.
If your setting is unsupervised, you can use PCA:
import numpy as np
from sklearn.decomposition import PCA
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)
If it is supervised, you can use LDA:
import numpy as np
# sklearn.lda was removed in newer scikit-learn versions; LDA now lives here:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])
clf = LDA()
clf.fit(X, y)
For reducing the number of features handed to models, besides others, feature reduction or feature deletion/selection can be used. Some frequently used feature reduction approaches have already been pointed out here, like principal component analysis (PCA), linear discriminant analysis (LDA), or partial least squares regression (PLSR). Those essentially project the original data into a subspace that aims to represent the information in fewer features. In particular, PCA tries to maximize the preserved variance in the original data (unsupervised), LDA tries to minimize intra-class variance and maximize inter-class distance (supervised, for classification), and PLSR tries to maximize the preserved variance in the original data while maximizing the correlation with the target variable (supervised, for regression).
Additionally, classic feature selection can be employed for reducing the number of features. Those approaches don't project data into a subspace but select "useful" features straight from the existing set. Usually they are divided into feature filters and feature wrappers: filters decide which features to use by looking only at the features and the target variable (e.g. trying to minimize inter-feature correlation while maximizing the feature-target correlation), whereas wrappers additionally consider the model that uses the selected features, so they directly optimize model performance instead. Usually, filters are computationally cheaper than wrappers; but, similar to using PCA, feature filters won't necessarily improve subsequent model performance, as they don't know what to optimize for.
Edit: as you are working with image data, feature filters and wrappers might not be optimal if used alone; they likely require image preprocessing and/or downsizing before being employed.
If you are using R, I'd recommend the caret package, which provides all of the above already embedded into the model training and evaluation process, which is quite important (cf. here for some details on their filters/wrappers). Here's a small snippet showing usage of the approaches above:
library(caret)
# PCA with preserving 95% variance in original data
modelPca <- train(x = iris[,1:4], iris[,5], preProcess=c('center', 'scale', 'pca'), trControl=trainControl(preProcOptions=list(thresh=0.95)), method='svmLinear', tuneGrid=expand.grid(C=3**(-3:3)))
# LDA with selection of dimensions
modelLda2 <- train(x = iris[,1:4], y = iris[,5], method='lda2', tuneGrid=expand.grid(dimen=1:4))
# PLSR with selection of dimensions
modelPls <- train(x = iris[,1:3], y = iris[,4], method='pls', tuneGrid=expand.grid(ncomp=1:3))
# feature wrapper: (backwards) recursive feature elimination (there exist more...)
modelRfe <- rfe(x = iris[,1:4], y = iris[,5], sizes = 1:4, rfeControl = rfeControl())
# feature filter: univariate filtering
modelSbf <- sbf(x = iris[,1:4], y = iris[,5], sbfControl = sbfControl())

display array inside array in a table (view)

How can I display an array of arrays in a table (view)?
matrix = Array.new(rows){Array.new(cols){0}}
like this:
0 0 0 0
0 0 0 0
0 0 0 0
# build a 3x4 matrix of zeros, then print each row as a space-separated line
matrix = Array.new(3) { Array.new(4) { 0 } }
puts matrix.map { |x| x.join(' ') }
0 0 0 0
0 0 0 0
0 0 0 0

Read xml file with type_id opencv-image using opencv

Hey, I have searched a lot about reading an XML file with type_id "opencv-image"; all I can find is about "opencv-matrix", and the help available is useless for me.
Please help me out with reading an image matrix from an XML file.
I am pasting the upper portion of my XML file here to give some idea.
<?xml version="1.0"?>
<opencv_storage>
<depthImg190 type_id="opencv-image">
<width>320</width>
<height>240</height>
<origin>top-left</origin>
<layout>interleaved</layout>
<dt>w</dt>
<data>
0 0 0 0 27120 27384 27120 27120 27384 27120 27120 27120 27120 27384
27384 27664 27664 27944 27944 27664 27664 27944 27944 27944 28224
27944 27944 28224 28224 28224 28224 28520 28816 29120 29120 29120
29120 29120 29120 29120 29432 29744 30072 30072 29744 29744 30072
30072 30072 30400 30400 30736 30736 31080 31080 31080 31440 31440
31440 31440 31800 31800 31800 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 27120 27120 27120 27120 27384 27384 27384 27384 27384 27384
</data>
</depthImg190>
</opencv_storage>
I have used the following code:
FileStorage f;
Mat m;
f.open("temp.xml", FileStorage::READ);
f["depthImg190"] >> m;
f.release();
but I am facing an exception: "OpenCV Error: Bad argument <Unknown array type> in cv::read, file ........\opencv\modules\core\persistence.cpp, line 5535".
Any help would be appreciated.
Actually, there is some documentation that you could use, like: http://docs.opencv.org/modules/core/doc/xml_yaml_persistence.html
In any case, the answer to your problem is easy:
FileStorage fs("file.xml", FileStorage::READ);  // open the storage for reading
Mat image;
fs["depthImg190"] >> image;  // read the named node into a Mat
(...)
fs.release();
It should work!
