I want to build a classification model for sequences of CT images with Keras. My dataset comes from 50 patients, and each patient has 1,000 images. For a given patient, each image has a meaningful relationship with the previous image. I want to exploit these relationships, but I don't know how to build a model for such a problem. Can you please give me an idea or examples?
Your problem is in the context of sequence classification. You need to classify sequences of images. In this case, a model is needed to learn two aspects:
Features of the images
Features of the sequence (temporal or time-related features)
This might sound similar to video classification in which a video is a sequence of several frames. See here.
For extracting features from images:
Most real-world cases use convolutional neural networks (CNNs). They use layers like convolution and max pooling and are excellent at extracting features from a 3D input like an image (height × width × channels). You can learn more here.
For handling temporal data:
Here's where you will require an RNN (recurrent neural network). LSTM (Long Short-Term Memory) cells are a popular kind of RNN cell, as they can hold a longer memory than traditional RNNs.
RNNs preserve their hidden-state activations and use them while processing each element of a sequence. Hence, while processing the 2nd image in a sequence, the RNN has knowledge (activations) of the 1st image in that same sequence.
You can learn more here.
Finally, we require a fusion of both the above networks:
A CNN-LSTM network uses both convolutional and LSTM layers to classify image sequences.
You can refer here and here.
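As a minimal sketch (assuming fixed-length sub-sequences of, say, 10 grayscale slices of 128×128 pixels and a hypothetical binary label per sequence), a Keras CNN-LSTM could look like this:

```python
from tensorflow.keras import layers, models

# Assumed/hypothetical shapes: sequences of 10 grayscale CT slices, 128x128 each.
SEQ_LEN, H, W, C = 10, 128, 128, 1

# Small CNN that extracts a feature vector from a single image.
cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    # Apply the same CNN to every image in the sequence.
    layers.TimeDistributed(cnn, input_shape=(SEQ_LEN, H, W, C)),
    # The LSTM aggregates the per-image features over the sequence.
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # binary label per sequence (assumption)
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

With 1,000 slices per patient, you would likely split each scan into shorter (possibly overlapping) windows rather than feed the whole sequence at once.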
Hope that helps you. :-)
I'm slowly beginning to work on a Twitter recommender system as part of a project, which requires me to use some form of deep learning. My goal is to recommend other tweets based on the topical content of a tweet, using unlabelled data.
I have pre-processed my data and trained a few variations of models in doc2vec to get both word embeddings and document embeddings. But my issue is that I feel a little lost with where to go from here. I've read that doc2vec can be used as an input to a deeper neural network for training such as an LSTM or even a CNN.
Could anyone help me understand how these document embeddings (and word embeddings; I trained the model in DM mode) are used as input, and what the purpose of the neural net would be in this case? Is it for clustering? I understand the question is a little open-ended, but I'm quite new to all this, and any help would be appreciated.
If you have trained a d-dimensional doc2vec vector for each document, that vector becomes the input for that particular tweet. If you have n documents, they form an n×d matrix. This matrix can then be given to a neural network. LSTM and CNN models are used for supervised learning problems (where you have labelled data).
If you don't have labelled data, then go for unsupervised learning; clustering comes under this! You can run different clustering algorithms and recommend based on the resulting clusters.
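As a rough sketch (assuming a gensim 4.x Doc2Vec model; the file name, cluster count, and helper are placeholders), you could cluster the document vectors and recommend tweets from the same cluster:

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec
from sklearn.cluster import KMeans

# Hypothetical file name; assumes a Doc2Vec model trained on your tweets.
model = Doc2Vec.load("tweets_doc2vec.model")
doc_vectors = model.dv.vectors  # n x d matrix of document embeddings

# Unsupervised: group tweets with similar topical content.
kmeans = KMeans(n_clusters=20, random_state=0).fit(doc_vectors)

def recommend(tweet_index, k=5):
    """Recommend up to k other tweets from the same cluster (illustrative helper)."""
    cluster = kmeans.labels_[tweet_index]
    candidates = np.where(kmeans.labels_ == cluster)[0]
    return [int(i) for i in candidates if i != tweet_index][:k]
```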
I'm quite new to neural networks, and I recently built a neural network for number classification on vehicle license plates. It has 3 layers: 1 input layer for 16×24 number images (384 neurons) at 150 dpi, 1 hidden layer (199 neurons) with a sigmoid activation function, and 1 softmax output layer (10 neurons), one for each number 0 to 9.
I'm trying to expand my neural network to also classify letters on license plates. But I'm worried that if I simply add more classes to the output, for example adding 10 letters so that there are 20 classes in total, it would be hard for the neural network to separate the features of each class. I also think it might cause a problem when the input is a number but the network wrongly classifies it as the letter with the highest single probability, even though the sum of the probabilities over all number outputs exceeds it.
So I wonder if it is possible to build a hierarchical neural network in the following manner:
There are 3 neural networks: 'Item', 'Number', and 'Letter'.
The 'Item' network classifies whether the input is a number or a letter.
If the 'Item' network classifies the input as a number (letter), the input goes through the 'Number' ('Letter') network.
Return the final output from the 'Number' ('Letter') network.
And the learning scheme for each network is as follows:
The 'Item' network learns from all images of numbers and letters, so it has 2 outputs.
The 'Number' ('Letter') network learns from images of numbers (letters) only.
Which method should I pick to get better classification: simply add 10 more classes, or build hierarchical neural networks as described above?
I'd strongly recommend training only a single neural network with outputs for all the kinds of images you want to be able to detect (so one output node per letter you want to be able to recognize, and one output node for every digit you want to be able to recognize).
The main reason for this is that recognizing digits and recognizing letters is essentially the same task. Intuitively, you can understand a trained neural network with multiple layers as performing the recognition in multiple steps. In the hidden layer it may learn to detect various kinds of simple, primitive shapes (e.g. vertical lines, horizontal lines, diagonal lines, certain kinds of simple curved shapes, etc.). Then, in the weights between the hidden and output layers, it may learn how to recognize combinations of these primitive shapes as a specific output class (e.g. a vertical and a horizontal line in roughly the right locations may be recognized as a capital letter L).
Those "things" it learns in the hidden layer will be perfectly relevant for digits as well as letters (that vertical line which may indicate an L may also indicate a 1 when combined with other shapes). So, there are useful things to learn that are relevant for both ''tasks'', and it will probably be able to learn these things more easily if it can learn them all in the same network.
See also this answer I gave to a related question in the past.
I'm trying to expand my neural network to also classify letters on license plates. But I'm worried that if I simply add more classes to the output, for example adding 10 letters so that there are 20 classes in total, it would be hard for the neural network to separate the features of each class.
You're far from the point where this becomes problematic. ImageNet has 1000 classes, and it is commonly handled by a single network; see the AlexNet paper. If you want to learn more about CNNs, have a look at chapter 2 of "Analysis and Optimization of Convolutional Neural Network Architectures", and while you're at it, see chapter 4 for hierarchical classification. You can read the summary for ... well, a summary of it.
I am interested in training a range of image classification and object detection models, and I am wondering what the general rule is for when to use the pretrained weights of a network like VGG16.
For example, it seems obvious that fine-tuning pre-trained VGG16 ImageNet weights is helpful if you are classifying a subset of its classes, e.g. cats and dogs.
However, it seems less clear to me whether using these pretrained weights is a good idea if you are training an image classifier with 300 classes, only some of which overlap with the classes in the pretrained model.
What is the intuition around this?
Lower layers learn features that are not necessarily specific to your application/dataset: corners, edges, simple shapes, etc. So it does not matter whether your data is strictly a subset of the categories that the original network can predict.
Depending on how much data you have available for training, and how similar it is to the data used to train the pretrained network, you can decide to freeze the lower layers and train only the higher ones, or simply train a classifier on top of your pretrained network.
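As a hedged sketch in Keras (the input size, classifier head, and choice of which layers to freeze are assumptions you would tune to your data):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 300  # from the question; adjust to your dataset

# Load ImageNet weights without the original classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the pretrained layers: they encode generic features (edges, corners, simple shapes).
for layer in base.layers:
    layer.trainable = False

# Train only a new classifier head on top of the frozen feature extractor.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

If you have enough data, you can later unfreeze the top convolutional blocks and fine-tune them with a small learning rate.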
Check here for a more detailed answer.
Recently I started to play with TensorFlow, and while trying to learn the popular algorithms I ran into a situation where I need to find the similarity between images.
Image A is supplied to the system by me, and a user supplies an image B; the system should return image A to that user if image B is similar (in color and class).
Now I have a few questions:
1. Do we consider this scenario to be supervised learning? I am asking because I don't see it as a classification problem (confused!).
2. What algorithms should I use for training, etc.?
3. Re-training should be done quite often; how should I tackle this so I don't have to train from scratch every time (fine-tuning?)?
Do we consider this scenario to be supervised learning?
It is supervised learning when you have labels to optimize your model. So for most neural networks, it is supervised.
However, you might also look at the complete task. I guess you don't have any ground truth for image pairs and the "desired" similarity value your model should output?
One way to approach this problem, which sounds inherently unsupervised, is to take a CNN (convolutional neural network) trained (in a supervised way) on the 1000 classes of ImageNet. To get the similarity of two images, you could then simply take the Euclidean distance of the output probability distributions. This will not lead to excellent results, but it is probably a good starting point.
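As a rough starter along those lines (using MobileNetV2 purely as an example of an ImageNet-pretrained network; the file names are placeholders):

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from tensorflow.keras.preprocessing import image

# Pretrained ImageNet classifier; its 1000-way output distribution acts as a crude descriptor.
model = MobileNetV2(weights="imagenet")

def class_distribution(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x)[0]

# Euclidean distance between the two probability distributions: smaller means more similar.
dist = np.linalg.norm(class_distribution("image_a.jpg") - class_distribution("image_b.jpg"))
print(dist)
```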
What algorithms should I use for training, etc.?
First, you should define what "similar" means for you. Are two images similar when they contain the same object (classes)? Are they similar if the general color of the image is the same?
For example, how similar are the following 3 pairs of images?
Have a look at FaceNet and search for "Content based image retrieval" (CBIR):
Wikipedia
Google Scholar
This can be treated as supervised learning. You can classify the images into categories; if two images are in the same category (or close within a category), you can consider them similar.
You can use deep convolutional neural networks trained on ImageNet, such as the Inception model. The Inception model outputs a probability map over 1000 classes (a vector whose values sum to 1). You can calculate the distance between the vectors of two images to get their similarity.
On the same page as the Inception model, you will also find instructions to retrain the model: https://github.com/tensorflow/models/tree/master/inception#how-to-fine-tune-a-pre-trained-model-on-a-new-task
I understand all the computational steps of training a neural network with gradient descent using forwardprop and backprop, but I'm trying to wrap my head around why they work so much better than logistic regression.
For now all I can think of is:
A) the neural network can learn its own parameters
B) there are many more weights than in simple logistic regression, thus allowing for more complex hypotheses
Can someone explain why a neural network works so well in general? I am a relative beginner.
Neural networks can have a large number of free parameters (the weights and biases between interconnected units), and this gives them the flexibility to fit highly complex data (when trained correctly) that other models are too simple to fit. This model complexity brings with it the problems of training such a complex network and ensuring the resultant model generalises beyond the examples it's trained on (typically neural networks require large volumes of training data that other models don't).
Classically, logistic regression has been limited to binary classification with a linear classifier (although multi-class classification can easily be achieved with one-vs-all, one-vs-one approaches, etc., and there are kernelised variants of logistic regression that allow for non-linear classification tasks). In general, therefore, logistic regression is typically applied to simpler, linearly-separable classification tasks, where small amounts of training data are available.
Models such as logistic regression and linear regression can be thought of as simple multi-layer perceptrons (check out this site for one explanation of how).
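To make that concrete, a single sigmoid unit in Keras is exactly a logistic regression (a minimal sketch, assuming some input dimension d):

```python
from tensorflow.keras import layers, models

d = 10  # assumed number of input features

# One dense unit with a sigmoid activation computes sigmoid(w . x + b);
# trained with binary cross-entropy, this is logistic regression.
logreg = models.Sequential([
    layers.Input(shape=(d,)),
    layers.Dense(1, activation="sigmoid"),
])
logreg.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
```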
To conclude, it’s the model complexity that allows neural nets to solve more complex classification tasks, and to have a broader application (particularly when applied to raw data such as image pixel intensities etc.), but their complexity means that large volumes of training data are required and training them can be a difficult task.
Recently, Dr. Naftali Tishby's idea of the Information Bottleneck as an explanation for the effectiveness of deep neural networks has been making the rounds in academic circles.
His video explaining the idea (link below) can be rather dense, so I'll try to give a distilled/general form of the core idea to help build intuition.
https://www.youtube.com/watch?v=XL07WEc2TRI
To ground your thinking, visualize the MNIST task of classifying the digit in an image. For this, I am only talking about simple fully-connected neural networks (not convolutional NNs, as are typically used for MNIST).
The input to a NN contains information about the output hidden inside of it. Some function is needed to transform the input to the output form. Pretty obvious.
The key difference in thinking needed to build better intuition is to treat the input as a signal with "information" in it (I won't go into information theory here). Some of this information is relevant for the task at hand (predicting the output). Think of the output as also being a signal with a certain amount of "information". The neural network tries to "successively refine" and compress the input signal's information to match the desired output signal. Think of each layer as cutting away the unnecessary parts of the input information, and keeping and/or transforming the output-relevant information along the way through the network.
The fully-connected neural network will transform the input information into a form in the final hidden layer, such that it is linearly separable by the output layer.
This is a very high-level and fundamental interpretation of the NN, and I hope it will help you see it clearer. If there are parts you'd like me to clarify, let me know.
There are other essential pieces in Dr. Tishby's work, such as how minibatch noise helps training, and how the weights of a neural network layer can be seen as doing a random walk within the constraints of the problem.
These parts are a little more detailed, and I'd recommend first toying with neural networks and taking a course on Information Theory to help build your understanding.
Suppose you have a large dataset and you want to build a binary classification model for it. You now have the two options you pointed out:
Logistic Regression
Neural networks (consider a feed-forward network for now)
Each node in a neural network is associated with an activation function; for example, let's choose the sigmoid, since logistic regression also uses the sigmoid internally to make its decision.
Let's see how the decision boundary of logistic regression looks when applied to the data:
See some of the green points inside the red boundary?
Now let's see the decision boundary of the neural network (forgive me for using a different color):
Why does this happen? Why is the decision boundary of the neural network so flexible, giving more accurate results than logistic regression?
The answer to the question you asked, "Why do neural networks work so well?", lies in the hidden units (hidden layers) and their representational power.
Let me put it this way.
You have a logistic regression model and a neural network with, say, 100 neurons, each with a sigmoid activation. Each of those neurons is equivalent to one logistic regression.
Now imagine a hundred logistic units trained together to solve one problem, versus a single logistic regression model. Because of these hidden layers, the decision boundary becomes far more expressive and yields better results.
While you are experimenting, you can add more neurons and see how the decision boundary changes. A logistic regression is the same as a neural network with a single neuron.
The above is just an example; neural networks can be trained to produce very complex decision boundaries.
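If you want to try this yourself, here is a small sketch using scikit-learn's toy two-moons data (the dataset and hidden-layer size are purely illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# A non-linearly-separable toy dataset.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

logreg = LogisticRegression().fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000, random_state=0).fit(X_train, y_train)

# The linear boundary of logistic regression cannot follow the curved class boundary,
# while the hidden layer lets the network bend its decision boundary around it.
print("logistic regression:", logreg.score(X_test, y_test))
print("neural network:     ", mlp.score(X_test, y_test))
```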
Neural networks allow the person training them to algorithmically discover features, as you pointed out. However, they also allow for very general nonlinearity. If you wish, you can use polynomial terms in logistic regression to achieve some degree of nonlinearity; however, you must decide which terms to use. That is, you must decide a priori which model will work. Neural networks can discover the nonlinear model that is needed.
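For example, here's a sketch of manually adding polynomial terms with scikit-learn; the degree is something you have to pick by hand, which is exactly the a priori choice mentioned above:

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)

# The degree (i.e. which nonlinear terms exist) must be chosen up front;
# a neural network would instead learn suitable nonlinear features itself.
model = make_pipeline(PolynomialFeatures(degree=3), LogisticRegression(max_iter=1000))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```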
'Work so well' depends on the concrete scenario. Both of them do essentially the same thing: predicting.
The main difference is that a neural network can have hidden nodes for concepts, if it is properly set up (not easy), and it uses these to make the final decision.
Linear regression, by contrast, is based only on the explicit input features, without such intermediate concepts. A neural network should be able to make more accurate predictions than linear regression.
Neural networks excel at a variety of tasks, but to get an understanding of exactly why, it may be easier to take a particular task like classification and dive deeper.
In simple terms, machine learning techniques learn a function to predict which class a particular input belongs to, based on past examples. What sets neural nets apart is their ability to construct functions that can explain even complex patterns in the data. The heart of a neural network is an activation function like the ReLU, which allows it to draw basic classification boundaries like:
[Image: example classification boundaries of ReLUs]
By composing hundreds of such ReLUs together, neural networks can create arbitrarily complex classification boundaries, for example:
[Image: composing classification boundaries]
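As a tiny numerical illustration of that composition (plain NumPy, not a trained network), two ReLUs already combine into a piecewise-linear ramp that flattens out, the kind of building block a network stacks to shape complex boundaries:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

x = np.linspace(-2, 3, 11)
# relu(x) - relu(x - 1): zero for x < 0, rises linearly on [0, 1], then stays at 1.
ramp = relu(x) - relu(x - 1.0)
print(np.round(ramp, 2))
```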
The following article tries to explain the intuition behind how neural networks work: https://medium.com/machine-intelligence-report/how-do-neural-networks-work-57d1ab5337ce
Before you step into neural networks, check whether you have assessed all aspects of ordinary regression.
Use this as a guide.
And even before you discard ordinary regression, for curved kinds of dependencies you should strongly consider kernels with an SVM.
Neural networks are defined by an objective and a loss function. The main process that happens within a neural net during training is optimising that objective by reducing the loss (error). Backpropagation provides the gradients used for this optimisation, which is how the network's weights are adjusted to produce the desired output.