CNN architecture keras - machine-learning

I have to design a CNN architecture for facial expression recognition. I found a lot of examples of this on the Internet, which differ in their CNN architecture. I'm trying to understand which features determine the differences between the architectures.

CNN + RNN is the basic idea, since an expression is a sequence of face images.

I know it has been a long time since this question was asked, but I'm leaving this note for other people who are interested in this topic:
There are some architectures that use a pure CNN (without an RNN, etc.) and actually do a pretty good job! One of them is shown in the image below.
Although this architecture (shown in the image) is pretty simple, there are some state-of-the-art CNN architectures whose goal is real-time facial expression classification. Below, I've put some links to related papers on this topic and on how to train an efficient model for this task:
1- Real Time Emotion Recognition from Facial Expressions Using CNN Architecture - IEEE
2- Real-Time Facial Expression Recognition Based on CNN - ResearchGate
3- Facial Expressions using CNNs - ResearchGate
4- Emotion-Net 2 - GitHub
And every day there are new models and architectures for this task that outperform each other!
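To make the pure-CNN point concrete, here is a minimal Keras sketch. It assumes 48×48 grayscale inputs and 7 emotion classes (FER-2013 style); the layer sizes are illustrative and not taken from any of the papers above:

```python
# Minimal pure-CNN sketch for facial expression recognition.
# Assumes 48x48 grayscale inputs and 7 emotion classes (FER-2013 style);
# adjust input_shape / num_classes for your dataset.
from tensorflow.keras import layers, models

def build_expression_cnn(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # regularization; FER datasets are small
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_expression_cnn()
model.summary()
```

Train it with `model.fit(x_train, y_train, ...)` on one-hot-encoded labels; for real-time use, the papers above mostly focus on shrinking models like this one.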

Related

Image Classification with Support Vector Machine

I have worked with Support Vector Machines for classification using the scikit-learn library several times before, but only with data containing text and numbers in ".csv" format. Now I want to use a Support Vector Machine for image classification. Can you help me convert images into something like ".csv" format for classification?
I would appreciate any help. Thank you.
Sure. In general, one would define a so-called feature vector: a vector containing numeric representations of certain, usually hand-crafted, features. In the case of image classification, this heavily depends on what you want to classify. Usually, the features in image classification systems are extracted by image processing algorithms such as HOG and SIFT.
But honestly, I wouldn't use SVMs for image classification tasks, because it's usually a lot of work to define and combine features to get a good classifier. Try convolutional neural networks instead: they learn the necessary features by themselves. If you spend months of feature engineering on a good SVM classifier, a CNN could easily outperform your work after its first training run.
There are two ways to implement an SVM for image classification:
1. Extract hand-crafted features such as SIFT or HOG for each image and store them in a CSV file. Then apply the SVM to them.
2. Use deep learning: extract the features just before the softmax classifier, store those features in a CSV file, and apply the SVM to them.
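A sketch of the first option, using HOG features from scikit-image and an SVM from scikit-learn. The small digits dataset stands in for your own images, and the HOG parameters are illustrative:

```python
# Option 1 sketch: hand-crafted features + SVM.
# HOG descriptors replace the raw pixels as the feature vector;
# swap in your own image loading and feature parameters.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from skimage.feature import hog

digits = load_digits()  # 8x8 grayscale images
features = np.array([
    hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    for img in digits.images
])
X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

If you really want CSV files in between, `np.savetxt("features.csv", features, delimiter=",")` writes the same feature matrix to disk.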

why using support vector machine?

I have some questions about SVMs:
1. Why use an SVM? In other words, what led to its development?
2. What is the state of the art (2017)?
3. What improvements have been made?
SVMs work very well. In many applications, they are still among the best-performing algorithms.
We've seen progress in particular on linear SVMs, which can be trained much faster than kernel SVMs.
Read more literature; don't expect an exhaustive answer in this Q&A format, and show more effort on your part.
SVMs are most commonly used for classification problems where labeled data is available (supervised learning) and are useful for modeling with limited data. For problems with unlabeled data (unsupervised learning), support vector clustering is a commonly employed algorithm. SVMs tend to perform better on binary classification problems, since the decision boundaries will not overlap. Your 2nd and 3rd questions are very ambiguous (and need a lot of work!), but suffice it to say that SVMs have found wide applicability in medical data science. Here's a link to explore more about this: Applications of Support Vector Machine (SVM) Learning in Cancer Genomics
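To illustrate the linear-vs-kernel point above, a small scikit-learn timing sketch on synthetic data (the dataset sizes are arbitrary):

```python
# Linear vs. kernel SVM: LinearSVC uses a specialized linear solver
# and typically trains much faster than SVC with an RBF kernel as
# the dataset grows, at the cost of a linear decision boundary.
import time
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC, SVC

X, y = make_classification(n_samples=5000, n_features=50, random_state=0)

for name, clf in [("LinearSVC", LinearSVC(max_iter=10000)),
                  ("SVC (RBF kernel)", SVC(kernel="rbf"))]:
    start = time.perf_counter()
    clf.fit(X, y)
    print(f"{name}: trained in {time.perf_counter() - start:.2f}s, "
          f"train accuracy {clf.score(X, y):.3f}")
```

The gap widens quickly with more samples, which is one reason linear SVMs remain popular for large-scale problems.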

Are there any references to Tensorflow MNIST example

Looking for scientific article references for the network architecture presented in the Deep MNIST for Experts tutorial (https://www.tensorflow.org/versions/r0.9/tutorials/mnist/pros/index.html).
I have similar image processing data and I'm looking for a good vanilla architecture; any recommendations?
Currently, the best solutions for this problem are wavelet-transform-based.
You probably don't want to look at Deep MNIST for Experts as an example of a good architecture for MNIST or as a scientific baseline. It's more an example of basic Tensorflow building blocks and a nice introduction to convolutional models.
That is, you should be able to get equal or better results with a model that has 5% of the free parameters and fewer layers.
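As a rough illustration, a compact Keras convnet of the kind suggested above; the layer sizes are an assumption, not a tuned baseline:

```python
# Compact MNIST convnet sketch: global average pooling instead of a
# large fully connected layer keeps the parameter count tiny compared
# to the Deep MNIST tutorial network.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),  # replaces the big dense layer
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print("trainable parameters:", model.count_params())
```

Most of the tutorial network's parameters live in its fully connected layer, so trimming there shrinks the model dramatically with little accuracy cost.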

Using Caffe to classify "hand-crafted" image features

Does it make any sense to perform feature extraction on images using, e.g., OpenCV, then use Caffe for classification of those features?
I am asking this as opposed to the traditional way of passing the images directly to Caffe, and letting Caffe do the extraction and classification procedures.
Yes, it does make sense, but it may not be the first thing you want to try:
If you have already extracted hand-crafted features that are suitable for your domain, there is a good chance you'll get satisfactory results by using an easier-to-use machine learning tool (e.g. libsvm).
Caffe can be used in many different ways with your features. If they are low-level features (e.g. histograms of gradients), then several convolutional layers may be able to extract the appropriate mid-level features for your problem. You may also use Caffe as an alternative non-linear classifier (instead of an SVM). You have the freedom to try (too) many things, but my advice is to first try a machine learning method with a smaller meta-parameter space, especially if you're new to neural nets and Caffe.
Caffe is a tool for training and evaluating deep neural networks. It is quite a versatile tool allowing for both deep convolutional nets as well as other architectures.
Of course it can be used to process pre-computed image features.
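A tool-agnostic sketch of the pipeline both answers describe: precompute hand-crafted features (HOG here, via scikit-image), then train a small neural network on them with scikit-learn. Caffe could consume the same feature vectors, e.g. through an HDF5 data layer:

```python
# Hand-crafted features -> neural classifier, the pipeline from the
# question. HOG descriptors are precomputed per image, then a small
# fully connected network classifies the feature vectors.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from skimage.feature import hog

digits = load_digits()  # stand-in for your own images
features = np.array([
    hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    for img in digits.images
])
X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```

The same feature matrix, written to an HDF5 file, is what you would point a Caffe `HDF5Data` layer at if you want to stay within Caffe.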

OpenCV Haar classifier - is it an SVM

I'm using an OpenCV Haar classifier in my work, but I keep reading conflicting reports on whether the OpenCV Haar classifier is an SVM or not. Can anyone clarify whether it uses an SVM? And if it does not, what advantages does the Haar method offer over an SVM approach?
SVMs and boosting (AdaBoost, GentleBoost, etc.) are feature classification strategies/algorithms. Support vector machines solve a complex optimization problem, often using kernel functions, which lets us separate samples by working in a much higher-dimensional feature space. Boosting, on the other hand, is a strategy based on combining lots of "cheap" classifiers in a smart way, which leads to very fast classification. Those weak classifiers can even be SVMs.
Haar-like features are a kind of feature based on integral images and very well suited to computer vision problems.
That is, you can combine Haar features with either of the two classification schemes.
It isn't an SVM. Here is the documentation:
http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#haar-feature-based-cascade-classifier-for-object-detection
It uses boosting (supporting AdaBoost and a variety of other similar methods -- all based on boosting).
The important difference relates to speed of evaluation: cascade classifiers and their stage-based boosting algorithms allow very fast evaluation with high accuracy (in particular, they support training with many negatives), at a better speed/accuracy balance point than an SVM for this particular application.
