Machine learning algorithm with a list of arrays as a data set

I'm new to data science and I'm searching for a machine learning algorithm that takes a data set given as a list of arrays, where each array holds a sequence of floats.
A little bit of context: we have some angles captured from user motion,
and from these angles we determine whether the user made the correct motion or not.
A motion is represented in our system as a list of arrays, each array holding a sequence of angles.
Any help please? I've searched for a long time with no results!

Check out neupy. It is a great library for new machine learning users. I would suggest just the standard backpropagation algorithm with momentum; in practice, newer adaptive learning techniques don't always do better than simple gradient descent with momentum.
It is easy to implement. For example, you could build the data set using the following code.
A: Create data set
import numpy as np

# data is your list of arrays, each holding one sequence of angles
x = np.zeros((len(data), len(data[0])))
for i in np.arange(len(data)):
    for j in np.arange(len(data[0])):
        x[i][j] = data[i][j]
This would be the input. Then you create the architecture
B: Create Architecture
from neupy import layers

network = layers.Input(len(data[0])) > layers.Sigmoid(int(len(data[0]) / 2)) > layers.Sigmoid(2)
C: Use Gradient Descent With Momentum
from neupy import algorithms

gdnet = algorithms.Momentum(network, momentum=0.1)
gdnet.train(x, y, epochs=1000)
Where y is the movement of interest (with the two-unit output layer above, y should have one column per class).
D: Predict Motion
y_predicted = gdnet.predict(x)

In general, most libraries take in numpy arrays as inputs.
There are a number of ways to wrangle your data into that format. I find pandas (https://pandas.pydata.org/pandas-docs/stable/) to be the most convenient way. If you have the data in a .csv file, an Excel sheet, or some other common, structured format, pandas has functions for loading it in with no pain at all.
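A minimal sketch, assuming the angle sequences live in a hypothetical motions.csv with one motion per row:
import pandas as pd

# Hypothetical file name; replace with your actual data file
df = pd.read_csv("motions.csv")
x = df.to_numpy()  # most ML libraries accept this numpy array directly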
If you give some more details (are you using a machine learning library like scikit-learn? what format is the data in?), I can be of more help.


PETSc vectorize operations with neighboring vector values

I'm implementing a finite difference algorithm from the uFDTD book. Many FDM equations involve operations on adjacent vector elements.
For example, an update equation for electric field
ez[m] = ez[m] + (hy[m] - hy[m-1]) * imp0
uses the adjacent vector values hy[m] and hy[m-1].
How can I implement these operations efficiently in PETSc? Is there something beyond local vector loops and scatters?
If my goal were efficiency, I would call a stencil engine. There are many papers, and sometimes even open source code, for example Devito. The idea is that PETSc manages the data structure and parallelism; then you can feed the local data brick to your favorite stencil engine.
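For the baseline local-loop approach the question mentions, here is a minimal petsc4py sketch, assuming the standard DMDA ghost-point workflow (the grid size and imp0 value are placeholders):
from petsc4py import PETSc

n, imp0 = 200, 377.0
da = PETSc.DMDA().create([n], dof=1, stencil_width=1)
ez = da.createGlobalVec()
hy = da.createGlobalVec()
hy_loc = da.createLocalVec()

# Fill the ghost cells of hy so hy[m-1] is available at process boundaries
da.globalToLocal(hy, hy_loc)

ez_a = da.getVecArray(ez)
hy_a = da.getVecArray(hy_loc)
(xs, xe), = da.getRanges()
for m in range(max(xs, 1), xe):
    ez_a[m] = ez_a[m] + (hy_a[m] - hy_a[m - 1]) * imp0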

Time series prediction using GP - training data

I am trying to implement time series forecasting using genetic programming. I am creating random trees (ramped half-and-half) with s-expressions and evaluating each expression using RMSE to calculate the fitness. My problem is the training process. If I want to predict gold prices and the training data looks like this:
date open high low close
28/01/2008 90.959999 91.889999 90.75 91.75
29/01/2008 91.360001 91.720001 90.809998 91.150002
30/01/2008 90.709999 92.580002 90.449997 92.059998
31/01/2008 90.919998 91.660004 90.739998 91.400002
01/02/2008 91.75 91.870003 89.220001 89.349998
04/02/2008 88.510002 89.519997 88.050003 89.099998
05/02/2008 87.900002 88.690002 87.300003 87.68
06/02/2008 89 89.650002 88.75 88.949997
07/02/2008 88.949997 89.940002 88.809998 89.849998
08/02/2008 90 91 89.989998 91
As I understand it, this data is nonlinear, so my questions are:
1- Do I need to make any changes to this data, like exponential smoothing? And why?
2- When looping over the current population and evaluating the fitness of each expression on the training data, should I calculate the RMSE on just part of this data or all of it?
3- When the algorithm finishes and I get the expression with the best (lowest) fitness, does this mean that when I apply any row from the training data, the output should be the price of the next day?
I've read some research papers about this and noticed that some of them mention dividing the training data when calculating the fitness, and some apply exponential smoothing. However, I found them a bit difficult to read and understand, and most implementations I've found are in Python or R, which I am not familiar with.
I appreciate any help on this.
Thank you.
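For reference, the RMSE fitness described above amounts to something like the following sketch (the evaluate_tree callable is hypothetical, standing in for evaluating one evolved s-expression on one row of data):
import numpy as np

def rmse_fitness(evaluate_tree, rows, targets):
    # rows: 2D array of training rows (open, high, low, close)
    # targets: next-day prices aligned with rows
    predictions = np.array([evaluate_tree(row) for row in rows])
    return np.sqrt(np.mean((predictions - targets) ** 2))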

deeplearning4j with SVHN dataset

I'm trying to model a CNN with deeplearning4j using the SVHN dataset (http://ufldl.stanford.edu/housenumbers/); in particular I'm using
Format 2: Cropped Digits
These are MATLAB files, and each one contains a struct with a 4-D tensor and an array of labels. I would like to load these into my deeplearning4j code, so I looked around and found the class MatlabRecordReader.java in deeplearning4j/DataVec (https://github.com/deeplearning4j/DataVec/blob/master/datavec-api/src/main/java/org/datavec/api/records/reader/impl/misc/MatlabRecordReader.java), but I can't understand how to use it. Does anybody have experience with this?
Thanks in advance
Here is a reference for DataVec:
http://deeplearning4j.org/DataVec
And if you look at:
http://nd4j.org/tensor
All of deeplearning4j's neural nets are written using nd4j (MATLAB for Java), so this should be pretty easy to map; you'll see it more or less maps to MATLAB.
What might be easier is to write the values out as a CSV and reshape them to the proper shape instead; if you use C ordering it should work fine. If you do that you can just use the CSV record reader.
The MATLAB record reader hasn't been used by many people, and I think it may only work with matrices (it's been a while). I would try the CSV one first.
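A hedged sketch of that CSV route in Python (file and variable names follow the SVHN Format 2 download; each output row is one label followed by the flattened pixels):
import numpy as np
from scipy.io import loadmat

mat = loadmat("train_32x32.mat")   # Format 2: Cropped Digits
images = mat["X"]                  # shape (32, 32, 3, N)
labels = mat["y"].ravel()          # shape (N,)

# Put the sample axis first and flatten each image in C order,
# so rows can later be reshaped back to (32, 32, 3)
n = images.shape[3]
flat = images.transpose(3, 0, 1, 2).reshape(n, -1)
np.savetxt("svhn_train.csv", np.column_stack([labels, flat]),
           delimiter=",", fmt="%d")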

Feed a complex-valued image into Neural network (tensorflow)

I'm working on a project which tries to "learn" a relationship between a set of around 10k complex-valued input images (amplitude/phase; real/imag) and a real-valued output vector with 48 entries. This output vector is not a set of labels, but a set of numbers which represents the best parameters to optimize the visual impression of the given complex-valued image. These parameters are generated by an algorithm. It's possible that there is some noise in the data (coming from the images and from the algorithm which generates the parameter vector).
These parameters more or less depend on the FFT (fast Fourier transform) of the input image. Therefore I was thinking of feeding the network (5 hidden layers, but the architecture shouldn't matter right now) with a 1D-reshaped version of FFT(complexImage); some pseudocode:
// discretize spectrum
obj_ft = fftshift(fft2(object));
obj_real_2d = real(obj_ft);
obj_imag_2d = imag(obj_ft);
// convert 2D in 1D rows
obj_real_1d = reshape(obj_real_2d, 1, []);
obj_imag_1d = reshape(obj_imag_2d, 1, []);
// create complex variable for 1d object and concat
obj_complx_1d(index, :) = [obj_real_1d obj_imag_1d];
opt_param_1D(index, :) = get_opt_param(object);
I was wondering if there is a better approach for feeding complex-valued images into a deep network. I'd like to avoid the use of complex gradients, because it shouldn't really be necessary; I "just" try to find a "black box" which outputs the optimized parameters after inserting a new image.
TensorFlow gets the input obj_complx_1d and the output vector opt_param_1D for training.
There are several ways you can treat complex signals as input.
Use a transform to make them into 'images'. Short-time Fourier transforms are used to make spectrograms, which are 2D: the x-axis being time, the y-axis frequency. If you have complex input data, you may choose to simply look at the magnitude spectrum, or the power spectral density, of your transformed data.
Something else that I've seen in practice is to treat the in-phase and quadrature (real/imaginary) channels separately in early layers of the network, and operate across both in higher layers. In the early layers, your network will learn the characteristics of each channel; in higher layers it will learn the relationship between the I/Q channels.
These guys do a lot of work with complex signals and neural nets. In particular, check out 'Convolutional Radio Modulation Recognition Networks':
https://radioml.com/research/
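A minimal Keras sketch of that second approach, assuming 64x64 images with the real/imaginary parts stacked as two channels and the 48-entry output vector from the question:
import tensorflow as tf

inp = tf.keras.Input(shape=(64, 64, 2))               # channels: [real, imag]
real = tf.keras.layers.Lambda(lambda t: t[..., :1])(inp)
imag = tf.keras.layers.Lambda(lambda t: t[..., 1:])(inp)

# Early layers: each channel learns its own characteristics
real = tf.keras.layers.Conv2D(16, 3, activation="relu")(real)
imag = tf.keras.layers.Conv2D(16, 3, activation="relu")(imag)

# Higher layers: learn the relationship between the I/Q channels
merged = tf.keras.layers.Concatenate()([real, imag])
merged = tf.keras.layers.Conv2D(32, 3, activation="relu")(merged)
out = tf.keras.layers.Dense(48)(tf.keras.layers.Flatten()(merged))

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")           # regression, not classification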
The simplest way to feed complex-valued numbers without using complex gradients in your model is to represent the complex values in a different form. The two main ways are:
Magnitude/angle components
Real/imaginary components
I'll show this idea using magnitude/angle components. Assume you have a 2D numpy array representing an image with shape = (WIDTH, HEIGHT):
import numpy as np
kSpace = np.fft.ifftshift(np.fft.fft2(img))
This would give you a 2D complex array. You can then transform the array into a magnitude/angle representation:
data = np.dstack((np.abs(kSpace), np.angle(kSpace)))
This array will be a numpy array with shape = (WIDTH, HEIGHT, 2) and represents one complex-valued image. For a set of images, make sure to concatenate them together to get an array of shape = (NUM_IMAGES, WIDTH, HEIGHT, 2).
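A small sketch of that batching step (the image size is a placeholder):
import numpy as np

def to_mag_angle(img):
    kSpace = np.fft.ifftshift(np.fft.fft2(img))
    return np.dstack((np.abs(kSpace), np.angle(kSpace)))

images = [np.random.rand(64, 64) for _ in range(10)]   # placeholder image set
batch = np.stack([to_mag_angle(img) for img in images])
print(batch.shape)                                      # (10, 64, 64, 2)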
I made a simple example of using TensorFlow to learn a Fourier transform with a simple neural network. You can find this example at https://github.com/michaelmendoza/learning-tensorflow

How to decode from scikit-learn embedding?

If I have a data matrix X for which I want to learn a manifold embedding:
from sklearn.manifold import MDS
mds = MDS()
embedding = mds.fit_transform(X)
I can get back a 2D embedding/encoding of the original data X in the variable embedding.
Is there a way to "decode"/de-embed a given 2D point back to the original data dimension?
99% of embeddings used in ML are not injective, so there is no such thing as an inverse transformation (it is not even about being hard; it literally cannot exist, as the embedding maps huge chunks of the space to a single point). In particular, MDS is not injective, thus there is no way to go back.
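Since an exact inverse cannot exist, one common approximation (my suggestion, not something sklearn provides) is to "decode" a 2D point as the training sample whose embedding lies nearest to it:
import numpy as np
from sklearn.manifold import MDS

X = np.random.rand(100, 10)            # placeholder data matrix
embedding = MDS().fit_transform(X)

def approx_decode(point_2d):
    # Nearest embedded training sample; an approximation, not a true inverse
    idx = np.argmin(np.linalg.norm(embedding - point_2d, axis=1))
    return X[idx]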
