Wavelet Neural Networks: why no Inverse Transform? - machine-learning

I'm wondering why, in a Wavelet Neural Network, there is no Inverse Transform to recompose the signal.
How come the wavelet coefficients alone are enough to find the desired signal?
Wouldn't it be better to have the IWT?
Jeff

The use of wavelets in a neural network is just for feature extraction, and from the wavelet decomposition you can always restore the original signal.
Another point about your question: are you using the WNN for regression or classification?
For classification there is no need for the IWT.
For regression, I don't know how to fit a WNN.
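To make this concrete, here is a minimal sketch (assuming a 1D signal and the PyWavelets library, neither of which is specified in the question): the coefficient arrays returned by wavedec are the features a WNN would consume, and waverec shows that the inverse transform can still recover the signal whenever you need it.

    import numpy as np
    import pywt

    # A toy 1D signal.
    t = np.linspace(0, 1, 256)
    signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(256)

    # Forward DWT: the coefficient arrays are the features a WNN would use.
    coeffs = pywt.wavedec(signal, 'db4', level=3)
    features = np.concatenate(coeffs)  # one flat feature vector

    # Inverse DWT: reconstruction is possible, but a classifier never needs it.
    reconstructed = pywt.waverec(coeffs, 'db4')
    print(np.allclose(signal, reconstructed[:len(signal)]))  # True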
Suggestion: Wavelet Neural Networks

Related

What are the common loss functions for edge detection in deep learning?

This is one ground-truth example: [image of a ground-truth edge map]
I have the feature map of the corresponding image from a CNN, and I wonder how to get the final prediction result, as well as the loss for training.
Thanks!
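For reference, one widely used choice for edge maps is per-pixel binary cross-entropy with class balancing (used, for example, in HED), since edge pixels are far rarer than background pixels. Below is a minimal PyTorch sketch under that assumption; the function name and tensor shapes are illustrative, and the CNN is assumed to emit one logit per pixel.

    import torch
    import torch.nn.functional as F

    def balanced_bce_loss(logits, target):
        # logits and target are float tensors of shape (N, 1, H, W);
        # target holds 0./1. ground-truth edge labels.
        pos = target.sum()
        neg = target.numel() - pos
        # Weight edge pixels by the fraction of background and vice versa,
        # so the rare edge pixels are not swamped by the background.
        weight = torch.where(target > 0.5,
                             neg / target.numel(),
                             pos / target.numel())
        return F.binary_cross_entropy_with_logits(logits, target, weight=weight)

    # The final prediction is just a thresholded sigmoid of the logits:
    # edges = torch.sigmoid(logits) > 0.5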

Decision boundary is not a property of training data in classification

In Andrew Ng's ML videos on Coursera on classification (in the third video), he says that the "decision boundary is not a property of the training set". What does this statement mean? Does it also imply that the straight line or other curves we fit to data in linear regression are not a property of the training set? He claims that those curves (obtained through linear regression) are not properties of the corresponding training data. I am a bit confused about this and would appreciate any clarification. Thanks in advance.
The decision boundary is a property of your classifier: different classifiers lead to different decision boundaries.
A decision boundary has nothing to do with linear regression; it only makes sense for classification problems. The decision boundary is the curve (or surface, in more than two dimensions) that separates the elements of the two classes in your classification problem. In logistic regression the decision boundary is a straight line, while in nonlinear classification methods, such as neural networks, it is a curve.
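A quick way to see that the boundary belongs to the classifier rather than to the data is to fit two different classifiers on the same training set; here is a sketch using scikit-learn, where the dataset and the two models are arbitrary illustrative choices.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    # The same training data for both models.
    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

    linear = LogisticRegression().fit(X, y)  # straight-line boundary
    nonlinear = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0).fit(X, y)  # curved boundary

    # Identical data, different boundaries: the two models disagree exactly
    # where a straight line cannot separate the interleaved moons.
    rng = np.random.default_rng(0)
    grid = np.c_[rng.uniform(-1.5, 2.5, 1000), rng.uniform(-1.0, 1.5, 1000)]
    print((linear.predict(grid) != nonlinear.predict(grid)).mean())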

PCA on wavelet subbands

I was under the impression that PCA cannot be applied to single vectors; however, I found some papers that apply PCA to each of the wavelet subbands, as in this paper and this one. Since wavelet subbands are vectors, how is it possible to apply PCA to them?
Thank you in advance
The papers you mention are about EEG and ECG signals, which are also (1D) vectors. Multiple signals together, for one subject or for a group, form a matrix; that is how PCA runs on the input EEG signals.
You can do the same with a wavelet transform. A wavelet subband of a 1D signal is still a 1D signal, but you can group the subbands of several signals together into matrices. Then you can run PCA in the same way as on the input data.
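Here is a minimal sketch of that grouping, assuming PyWavelets and scikit-learn; the data shapes and the choice of subband are illustrative.

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    # Toy data: 50 signals (e.g., EEG trials), 256 samples each.
    signals = np.random.randn(50, 256)

    # Decompose every signal; wavedec returns [cA3, cD3, cD2, cD1] at level 3.
    decomps = [pywt.wavedec(s, 'db4', level=3) for s in signals]

    # Stack one subband (here the level-1 detail cD1) across all signals
    # into an (n_signals, subband_length) matrix -- a valid PCA input.
    subband = np.vstack([d[-1] for d in decomps])

    pca = PCA(n_components=5)
    reduced = pca.fit_transform(subband)  # 5 PCA features per signal
    print(reduced.shape)                  # (50, 5)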

Perceptron and shape recognition

I recently implemented a simple perceptron. This type of perceptron (composed of a single neuron with a binary output) can only solve problems whose classes are linearly separable.
I would like to implement simple shape recognition in 8-by-8 pixel images. For example, I would like my neural network to be able to tell me whether what I have drawn is a circle or not.
How can I know whether this problem's classes are linearly separable? Since there are 64 inputs, can it still be linearly separable? Can a simple perceptron solve this kind of problem? If not, what kind of perceptron can? I am a bit confused about that.
Thank you!
This problem, in a general sense, cannot be solved by a single-layer perceptron. Other network structures, such as convolutional neural networks, are generally best for image classification problems; however, given the small size of your images, a multilayer perceptron may be sufficient.
Many problems that are not linearly separable in the original input space become linearly separable after a nonlinear transformation into a higher-dimensional space. Adding extra layers to a network allows it to learn such transformations.
Look into multilayer perceptrons or convolutional neural networks. Examples of classification on the MNIST dataset might be helpful as well.
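As a concrete starting point, scikit-learn's digits dataset happens to consist of 8x8 images, so a small multilayer perceptron over the 64 pixel inputs can be tried directly; the binary "circle-like" task below (digit 0 vs. everything else) is contrived for illustration.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # 8x8 grayscale images, flattened to 64 inputs -- the size in the question.
    X, y = load_digits(return_X_y=True)
    y = (y == 0).astype(int)  # contrived binary task: roughly-circular digit 0, or not

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One hidden layer is enough to give the model a nonlinear decision
    # boundary over the 64 pixels.
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    mlp.fit(X_train, y_train)
    print(mlp.score(X_test, y_test))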

What is learned in convolutional network

In a convolutional net (CNN), someone told me that the filters are initialized randomly.
I am fine with that, but when gradient descent runs, what is being learned: the feature maps or the filters?
My intuition is that the filters are learned, because they need to recognize complex things.
But I would like to be sure about this.
In the context of convolutional neural networks, kernel = filter = feature detector.
Here is a great illustration from Stanford's deep learning tutorial (also nicely explained by Denny Britz).
The filter is the yellow sliding window; its values are shown in the illustration.
The feature map is the pink matrix. Its value depends on both the filter and the image: as a result, it doesn't make sense to learn the feature map. Only the filter is learned when the network is trained. The network may have other weights to be trained as well.
As aleju said, the filter weights are what is learned. Feature maps are the outputs of the convolutional layers. Besides the convolutional filter weights, there are also the weights of the fully connected (and other types of) layers.
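A small PyTorch sketch (with illustrative shapes) makes the distinction concrete: the filter weights are trainable parameters, while the feature map is just the layer's output.

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)

    # The filters are parameters: initialized randomly, updated by gradient descent.
    print(conv.weight.shape)          # torch.Size([8, 1, 3, 3])
    print(conv.weight.requires_grad)  # True

    # A feature map is merely the output of applying the filters to an input;
    # it is recomputed on every forward pass and is not itself a parameter.
    image = torch.randn(1, 1, 28, 28)
    feature_maps = conv(image)
    print(feature_maps.shape)         # torch.Size([1, 8, 26, 26])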
