deeplearning4j with SVHN dataset - deeplearning4j

I'm trying to build a CNN with deeplearning4j using the SVHN dataset (http://ufldl.stanford.edu/housenumbers/); in particular I'm using
Format 2: Cropped Digits
These are MATLAB files, and each one contains a struct with a 4-D tensor and an array of labels. I would like to load them into my deeplearning4j code, so I searched around and found the class MatlabRecordReader.java in deeplearning4j/DataVec (https://github.com/deeplearning4j/DataVec/blob/master/datavec-api/src/main/java/org/datavec/api/records/reader/impl/misc/MatlabRecordReader.java), but I can't understand how to use it. Does anybody have experience with this?
Thanks in advance

Here is a reference for "datavec":
http://deeplearning4j.org/DataVec
So if you look at:
http://nd4j.org/tensor
All of deeplearning4j's neural nets are written using ND4J (roughly, MATLAB-style n-dimensional arrays for Java), so this should be pretty easy to map.
You'll see it more or less maps to MATLAB.
What might be easier is to just write the values out as a CSV
and reshape them to the proper shape instead. If you use C ordering it should work fine.
If you do that you can just use the CSVRecordReader.
That Matlab record reader hasn't been used by a lot of people, and I think it may only work with matrices (it's been a while).
I would try the CSV one first.
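For the export side, this is roughly what that CSV route could look like in Python (a sketch, assuming the standard Format 2 file name train_32x32.mat and scipy for the one-off conversion; the output file name is just a placeholder). The resulting file is what you would then point the CSVRecordReader at:
import numpy as np
from scipy.io import loadmat

# Load the SVHN "Cropped Digits" file; X is (32, 32, 3, N), y is (N, 1) with digit 0 stored as 10.
mat = loadmat("train_32x32.mat")
images = mat["X"]
labels = mat["y"].ravel()

# Put the sample axis first, then flatten each image in C (row-major) order.
n = images.shape[-1]
flat = np.transpose(images, (3, 0, 1, 2)).reshape(n, -1)

# One row per example: the label followed by the 32*32*3 pixel values.
rows = np.column_stack([labels, flat])
np.savetxt("svhn_train.csv", rows, fmt="%d", delimiter=",")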

Related

tfx.components.StatisticsGen display train and eval in two different figures, is it possible to have them in a single figure as tfdv does?

a superimposed display for train/val splits using StatisticsGen
Hi,
I'm currently using a TFX pipeline inside Kubeflow. I'm struggling to get StatisticsGen to show a single graph with the train and validation split curves superimposed, which would allow a better comparison of the distributions. This is exactly how tfdv.visualize_statistics(lhs_statistics=train_stats, rhs_statistics=eval_stats, lhs_name='train', rhs_name='eval') behaves (see illustration 1), and I would like StatisticsGen to also provide such a superimposed-splits graph.
Thanks for any reference or help so that I can move forward.
Regards
You can use something like
# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
                          lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
From the TensorFlow Data Validation tutorial.

machine learning algorithm with list of arrays as data set

I'm new to data science and I'm searching for a machine learning algorithm that takes a data set as a list of arrays, where each array holds a sequence of floats.
A little bit of context: we have some angles captured from user motion,
and from these angles we determine whether the user made the correct motion or not.
A motion is represented in our system as a list of arrays, each array holding a sequence of angles.
Any help please? I have searched for a long time but found nothing!
Check out neupy. It is a great library for new machine learning users. I would suggest just the standard back-propagation algorithm with momentum. There is evidence that the newer adaptive learning techniques don't necessarily do better than simple gradient descent with momentum.
It is easy to implement. It could be implemented, for example, using the following code.
A: Create data set
import numpy as np
from neupy import algorithms, layers

# "data" is your list of arrays; stack it into one matrix, one row per sample.
x = np.zeros((len(data), len(data[0])))
for i in np.arange(len(data)):
    for j in np.arange(len(data[0])):
        x[i][j] = data[i][j]
This would be the input. Then you create the architecture.
B: Create Architecture
network = layers.Input(len(data[0])) > layers.Sigmoid(int(len(data[0]) / 2)) > layers.Sigmoid(2)
C: Use Gradient Descent With Momentum
gdnet = algorithms.Momentum(network, momentum=0.1)
gdnet.train(x, y, epochs=1000)
Where y is the movement of interest (the target for each sample).
D: Predict Motion
y_predicted = gdnet.predict(x)
In general, most libraries take in numpy arrays as inputs.
There are a number of ways to wrangle your data into that format. I find pandas (https://pandas.pydata.org/pandas-docs/stable/) to be the most convenient way. If you have the data in a .csv file, an Excel sheet or some other common, structured format, pandas has functions for loading it in with no pain at all.
If you give some more details (are you using a machine learning library such as scikit-learn, and what format is the data in?) I can be of more help.
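For example, a minimal sketch with pandas (the file name and the "label" column are placeholders for however the angle sequences are actually stored):
import pandas as pd

# Hypothetical layout: one row per recorded motion, one column per angle,
# plus a "label" column marking whether the motion was correct.
df = pd.read_csv("motions.csv")
X = df.drop(columns=["label"]).to_numpy(dtype=float)  # feature matrix for the model
y = df["label"].to_numpy()                            # targets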

OpenCV - training new LatentSVMDetector Models

I haven't found any method to train new latent SVM detector models using OpenCV. I'm currently using the existing models given in the xml files, but I would like to train my own.
Is there any method for doing so?
Thank you,
Gil.
As of now only DPM-detection is implemented in OpenCV, not training.
If you want to train your own models, the most reliable approach is to use Felzenszwalb's and Girshick's MATLAB code (most of the heavy lifting is implemented in C): http://www.cs.berkeley.edu/~rbg/latent/ and http://www.rossgirshick.info/latent/. It is reliable and works reasonably fast.
If you want to do it in C-only, there is an implementation here (http://libccv.org/doc/doc-dpm/) that I haven't tried myself.
I think there is a function in the octave version of the author's code here
(Octave Version of DPM). It is in step #5,
mat2opencvxml('./INRIA/inriaperson_final.mat', 'inriaperson_cascade_cv.xml');
I will try it and let you know about the result.
EDIT
I tried to convert the .mat file from the Octave version I mentioned before to an .xml file and compared the result with the built-in OpenCV .xml model; the structure of the two xmls was different (tags, #components, ...). It seems that this version of Octave DPM generates xml files for a later OpenCV version (I am using 2.4).
VOC-release3.1 is the one that matches OpenCV 2.4.14. I tried to convert an already trained model from this version using the mat2xml function available in OpenCV, and the resulting xml file loads and works with OpenCV. Here are some helpful links:
mat2xml code
VOC-release-3.1
How To Train DPM on a New Object

OpenCV Multilevel B-Spline Approximation

Hi (sorry for my English). I'm working on a project for university; in this project I need to use the MBA (Multilevel B-Spline Approximation) algorithm to get some points (control points) of an image, to be used in other operations.
I've been reading a lot of papers about this algorithm, and I think I understand it, but I can't manage to write the code.
The idea is: read an image, process the image (OpenCV), then get the control points of the image, and use those points.
So the problem here is:
The algorithm uses a set of points {(x,y,z)}; this set of points is approximated by a surface generated from the control points obtained with MBA. The set of points {(x,y,z)} represents the data we need to approximate (the image).
So, the image is in cv::Mat format; how can I transform this format into an ordinary array so I can simply access and manipulate the data?
Here are some papers with an explanation of the method, plus a MATLAB implementation:
(Paper) REGULARIZED MULTILEVEL B-SPLINE REGISTRATION
(Paper)Scattered Data Interpolation with Multilevel B-splines
(Matlab)MBA
If someone can help, maybe with a guideline, an idea or anything else, it will be appreciated.
Thanks in advance.
EDIT: In the end I wrote the algorithm in C++ using Armadillo and OpenCV.
I'm using Armadillo, a C++ linear algebra library, to handle the matrices needed by the algorithm.
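As a side note on the cv::Mat-to-array question above: conceptually it is just flattening the pixel grid into (x, y, intensity) triples. A quick sketch of that idea in Python/numpy (not the Armadillo/C++ code from the edit; the file name is a placeholder):
import cv2
import numpy as np

# cv2 already hands the image back as a plain numpy array.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Build the scattered data set {(x, y, z)}: pixel coordinates plus intensity.
ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
points = np.column_stack([xs.ravel(), ys.ravel(), img.ravel().astype(float)])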

How to compute SVD using Cimg (or maybe openCV or eigen library)?

Can anyone give me a quick guide on how to use CImg to compute the SVD of a 3-dimensional array?
I just want to get the decomposition of the array in order to compress it and speed up further processing.
What value should I input where, and how do I get the output?
I've searched around and still can't understand how it works, and I don't really understand how SVD works either; I only know that it can be used to compress a matrix.
At the same time I found that the OpenCV and Eigen libraries can also do the job, so do let me know their steps if they are easier.
(An alternative for me instead of SVD is PCA, for which I found source code/a library, but I also don't know how to use it.)
Thanks!
See http://cimg.sourceforge.net/reference/structcimg__library_1_1CImg.html#a9a79f3a0849388b3ec13bd140b67a12e
CImg<float> A(3,3);                 // matrix to decompose; A = U*S*V'
A.rand(0,1);                        // fill with random values in [0,1]
CImgList<float> USV = A.get_SVD();  // USV[0] = U, USV[1] = S (singular values), USV[2] = V
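Whatever library you end up using, the compression step itself is just truncating the decomposition to the largest singular values. A small numpy sketch of the idea (placeholder sizes, not CImg/OpenCV specific):
import numpy as np

# SVD is defined for matrices, so work on one 2-D slice of the 3-D array at a time.
A = np.random.rand(64, 64)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the k largest singular values to get a compressed, rank-k approximation.
k = 8
A_approx = (U[:, :k] * s[:k]) @ Vt[:k, :]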
