I partially implemented the Stroke Width Transform algorithm.
My implementation is ugly, but it mostly works.
My implementation gives me many candidates (I use some rules to filter them), but I still have many non-character candidates.
I want to use a neural network (or another ML algorithm) to filter them.
What features should I use for my classifier?
I can extract the mean / std of a component's stroke-width values, and its width / height.
Example:
Red rectangles are character candidates.
(The implementation doesn't detect light-on-dark characters, so the bad detection of "Land Rover" is expected.)
SWT image after component filtering
Neural networks and other techniques such as SVMs are not used to filter inputs; instead, they are used to classify inputs. The difference is that filtering discards an input based on whether or not it matches the imposed rules, so it doesn't actually require any training (more likely just a handful of good thresholds). A trained classifier, on the other hand, assigns a class to a given input, which means you need to adequately train the classifier with the expected classes as well as with negative samples. So the approach varies depending on whether you want to do the former or the latter, but the features you use for the former might be useful for the latter too.
Some basic pre-processing for whatever path you take involves first getting a cleaner component; by this I mean removing the extraneous white dots inside the components present in the example given. After that, a lot of options are available. The basic width and height measurements can be used to filter out the components that you are sure do not match what you expect, so there is no need to classify them either. By considering the skeleton of the connected components, you obtain the end points and branch points, which form two features. The Euler number is another one, and, in fact, there are far too many possible features to list them all here. The characteristic of the features mentioned is that they are all scale, rotation, and translation invariant. This also means you need other features to distinguish, for example, a 9 from a 6; the centroids of the holes in the skeleton would be one such example (just take care with it, because the direct extraction of this feature isn't invariant to anything).
Note that even simple features can help in separating the entire character set. For instance, with Euler number = 0 you will get only 'A', 'D', 'O', 'P', 'Q', 'R', '0', '4', '6', or '9', assuming ASCII alphanumerics, a well-behaved font, and good pre-processing of the input.
Lastly, there are quite a few papers to look at for more info and different approaches beyond SWT. For instance, T-HOG is a recent one which, according to the published results, is marginally better than SWT.
EDIT: Summarizing and extending:
If you want to use machine learning, you will need a good amount of labeled data which you can split into training and test sets. If your objective is only distinguishing "this is a character" from "this is not a character", and the latter class is not adequately described (i.e., you have few examples of what is not a character, or you cannot characterize it for every kind of input you might receive), a One-Class SVM is an option (see the sketch below).
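As a rough illustration, here is a minimal One-Class SVM sketch using scikit-learn; the feature matrices (X_chars, X_candidates) and the parameter values are placeholders of mine, not something from the original question.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# X_chars: (n_samples, n_features) features extracted from known characters
# X_candidates: features of new candidates to accept or reject
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
# model.fit(X_chars)
# keep = model.predict(X_candidates) == 1   # +1 means "looks like a character"
```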
For the features to be extracted from the individual characters, as mentioned before, there are too many of them and many approaches to extracting them. The paper "Feature Extraction Methods for Character Recognition -- A Survey" (1995, not recent at all) discusses some of them (it also mentions the expected minimum size of the training data, so be sure to read it), and I'm including part of its contents here.
Probably good features to extract from the character (both for grayscale and binary image):
Hu, Reiss, Flusser, Suk, Bamieh, de Figueiredo moments (all geometric moment invariants based on improvements of the initial work by Hu in "Visual Pattern Recognition by Moment Invariants");
Zernike Moments
Good features to extract from skeletonized characters:
Number of T-joints;
Number of X-joints;
Number of bend points;
Number of endpoints;
Number of crossings with the axes when the origin is placed at the shape's centroid;
Number of semi-circles
Fourier descriptors can also be applied in either the skeleton, the binary representation, or a graph representation of the character as discussed in the mentioned paper.
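As a concrete illustration of the skeleton-based counts listed above, here is a small sketch using scikit-image and SciPy; it assumes the candidate is already a binary component image, and the function name is my own.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def skeleton_features(binary_component):
    """Count endpoints and branch points of a skeletonized component."""
    skel = skeletonize(binary_component > 0)
    # Number of 8-connected neighbours of each skeleton pixel.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skel.astype(int), kernel, mode="constant")
    endpoints = int(np.sum(skel & (neighbours == 1)))       # exactly one neighbour
    branch_points = int(np.sum(skel & (neighbours >= 3)))   # three or more neighbours
    return endpoints, branch_points
```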
One approach that's used in practice is to simply scale all the candidates to the same dimensions (width x height) and then feed each of these pixels into the neural network.
Then you have an output for each character (returning a value between 0 and 1 for how close a match it is), and possibly a final output to indicate no match (though this could also be concluded from not having a clear candidate character).
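A minimal sketch of that "scale and feed pixels" idea, assuming the candidates are grayscale numpy arrays; OpenCV and scikit-learn are used here purely for illustration, and the target size is arbitrary.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

TARGET_SIZE = (16, 16)  # width x height, chosen arbitrarily for the example

def to_feature_vector(candidate):
    """Resize a candidate region to a fixed size and flatten it to a vector."""
    resized = cv2.resize(candidate, TARGET_SIZE, interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32).ravel() / 255.0

# candidates: list of 2-D arrays; labels: 1 = character, 0 = non-character
# X = np.vstack([to_feature_vector(c) for c in candidates])
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)
```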
With a neural network, you will need quite a bit of training data; that's simply the way it is. Options to avoid manually collecting the required training data:
Look for training data online
Generate training data algorithmically (create an algorithm to draw characters and backgrounds and feed them into the NN)
Perform transformations on already obtained training data (rotate, resize, change colours); see the sketch below. This can make a reasonably small training set quite a bit bigger. Make sure not to generate too large a percentage of the data this way, otherwise your network probably won't perform well.
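For instance, a tiny augmentation sketch that rotates and scales an existing sample to enlarge a small training set; OpenCV is assumed, and the angles and scales are arbitrary choices of mine.

```python
import cv2

def augment(image, angle_deg, scale):
    """Return a rotated and scaled copy of the image, same output size."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)
    return cv2.warpAffine(image, M, (w, h), borderMode=cv2.BORDER_REPLICATE)

# augmented = [augment(img, a, s) for img in samples
#              for a in (-10, 0, 10) for s in (0.9, 1.0, 1.1)]
```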
I am trying to classify pairs of document vectors (Doc2Vec, 300 features per document) as similar / not similar. I tried distance measures (cosine, etc.) with additional features like document size, but did not achieve perfect results, especially because I suspect that only some of the features are meaningful for my problem.
What is a simple but effective way to feed two vectors to a classifier (LogisticRegression, SVM, etc.)?
I already tested subtracting one vector from the other and using the absolute result as the feature vector: abs(vec1 - vec2), but this was worse than the distance measures.
I also tried concatenating both vectors, also with worse results. I suspect the doubling of dimensions increases the need for training samples, at least for some classifiers?
Is there a state-of-the-art way to classify similarities or relationships between feature vectors? Or, if there are competing methods, which one is preferable for which problem/classifier?
Generally, you'd aim for your vectorization of the documents (eg via Doc2Vec) to give vectors where the similarities between vectors are a useful continuous similarity measure. (Most often this is cosine-similarity, but in some cases euclidean-distance may be worth trying as well.)
If the vectors coming out of the Doc2Vec stage don't already exhibit that, the first thing to do would be to debug and optimize that process. That could involve:
double-checking everything, including logged output of the process, for errors
tweaking document preprocessing, to perhaps ensure salient document features are retained and noise discarded
tuning Doc2Vec meta-parameters and modes, to ensure the resulting vectors are sensitive to the kinds of similarity that are important in your end-goals.
It'd be hard to say more about improving that step without more details about your data size and character, Doc2Vec choices/code so far, and end-goals.
How are you deciding whether two documents are "similar enough" or not? How much of such evaluative data do you have to help score different Doc2Vec models in a repeatable, quantitative way? (Being able to do such automated scoring will let you test far more Doc2Vec permutations.) Are there examples of doc pairs where simple doc-vector cosine-similarity is working well, or not working well?
I see two red flags in the wording you've chosen so far:
"did not achieve perfect results" - getting "perfect" results is an unrealistic goal. You want to find something close to the state of the art, given the resources & tolerance-for-complexity of your project.
"300 Features per Document" - Doc2Vec doesn't really find "300 Features" that are independent. It's a single 300-dimensional "dense" "embedded" vector. Every direction – not just the 300 axes – may be meaningful. So even if certain "directions" are more significant for your needs, they're unlikely to be fully correlated with exact dimension axes.
It's possible a classifier on the (v1 - v2) difference, or the (v1 || v2) concatenation, could help refine a "similar enough or not" decision, but you'd need a lot of training data, and perhaps a very sophisticated classifier.
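For concreteness, here is a minimal sketch of feeding such pair features to a classifier; it assumes you already have 300-dimensional Doc2Vec vectors as numpy arrays, and the function and variable names (pair_features, vector_pairs, labels) are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(v1, v2):
    """Combine a pair of document vectors into a single feature vector."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    return np.concatenate([np.abs(v1 - v2), [cos]])  # abs-difference + cosine

# X = np.array([pair_features(a, b) for a, b in vector_pairs])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)  # labels: 1 = similar
```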
I've been trying to understand self-attention, but nothing I've found explains the concept very well at a high level.
Let's say we use self-attention in an NLP task, so our input is a sentence.
Then self-attention can be used to measure how "important" each word in the sentence is for every other word.
The problem is that I do not understand how that "importance" is measured. Important for what?
What exactly is the goal vector the weights in the self-attention algorithm are trained against?
Connecting language with its underlying meaning is called grounding. A sentence like “The ball is on the table” results in an image which can be reproduced with multimodal learning. Multimodal means that different kinds of words are available, for example events, action words, subjects, and so on. A self-attention mechanism works by mapping input vectors to output vectors, with a neural network between them. The output vector of the neural network references the grounded situation.
Let us take a short example. We need a 300x200 pixel image, we need a sentence in natural language, and we need a parser. The parser works in both directions: it can convert text to an image, meaning the sentence “The ball is on the table” gets converted into the 300x200 image, but it is also possible to parse a given image and extract the natural-language sentence back. Self-attention learning is a bootstrapping technique to learn and use this grounded relationship, that is, to verify existing language models, to learn new ones, and to predict future system states.
This question is old now but I came across it so I figured I should update others as my own understanding has increased.
Attention simply refers to some operation that takes the output and combines it with some other information. Typically this just happens by taking the dot product of the output with some other vector so it can "attend" to it in some way.
Self-attention combines the output with other parts of the input (hence self part). Again the combination usually occurs via the dot-product between the vectors.
Finally, how is attention (or self-attention) trained?
Let's call Z our output, W our weight matrix, and X our input (we'll use # as the matrix multiplication symbol).
Z = X^T # W^T # X
In NLP we compare Z to whatever we want the resulting output to be; in machine translation, for example, it is the sentence in the other language. We can compare the two with the average cross-entropy loss over each predicted word. Finally, we can update W with backpropagation.
How do we see what is important? We can look at the magnitudes of Z to see after the attention what words were most "attended" to.
This is a slightly simplified example as it only has one weight matrix and typically the inputs are embedded but I think it still highlights some of the necessary details concerning attention.
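For readers who want something runnable, here is a small numpy sketch of scaled dot-product self-attention, the standard Q/K/V formulation (slightly more elaborate than the single-weight-matrix simplification above); the shapes and names are illustrative.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V, weights                       # output and attention map

# Row i of the attention map shows how much word i "attends" to every other word.
```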
Here is a useful resource with visualizations for more information about attention.
Here is another resource with visualizations for more about attention in transformers specifically self-attention.
I want to find the opinion of a sentence: either positive or negative. For example, take just one sentence.
The play was awesome
If I change it to vector form
[0,0,0,0]
After searching through the bag of words
bad
naughty
awesome
The vector form becomes
[0,0,0,1]
Same for other sentences. Now I want to pass them to a machine learning algorithm for training (to find the opinion of unseen sentences). Can I simply feed in sentences of varying length? Obviously not, because the input size of a neural network is fixed. Is there any way? The above procedure is just my thinking; kindly correct me if I am wrong. Thanks in advance.
Your intuitive input format is a sentence, which is, indeed, a string of tokens of arbitrary length. Representing sentences as token sequences is not a good choice here, because many existing algorithms only work on inputs of a fixed format.
Hence, I suggest running a tokenizer over your entire training set. This will give you vectors whose length equals the size of the dictionary, which is fixed for a given training set.
Even when sentence lengths vary drastically, the size of the dictionary stays stable.
Then you can apply neural networks (or other algorithms) to the tokenized vectors.
However, vectors generated by the tokenizer are extremely sparse, because you only work on sentences rather than articles.
You can try LDA (which is supervised, unlike PCA) to reduce the dimensionality as well as amplify the differences.
That will keep the essential information of your training data while expressing your data at a fixed size, and this "size" is not too large.
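Putting the suggestions above together, here is a rough sketch of the pipeline (tokenize to fixed-length vectors, reduce dimension with supervised LDA, then classify); the sentences and labels are toy placeholders, and the final classifier could just as well be a neural network.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

sentences = ["The play was awesome", "The play was bad", "What a naughty plot"]
labels = [1, 0, 0]   # 1 = positive, 0 = negative (toy placeholder data)

# Fixed-length bag-of-words vectors (length = dictionary size).
X = CountVectorizer().fit_transform(sentences).toarray()

# Supervised dimensionality reduction (LDA, not PCA); with k classes it
# yields at most k - 1 components.
X_reduced = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, labels)

clf = LogisticRegression().fit(X_reduced, labels)   # or a neural network
```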
By the way, you may not have to label each word by its attitude, since the opinion of a sentence also depends on other kinds of words.
Simple arithmetic on the number of opinion-expressing words may leave your model highly biased. It is better to label the sentences and leave the rest of the job to the classifier.
To clear up the confusion:
PCA and LDA are both dimensionality reduction techniques.
The difference:
Let's assume each sample is denoted as x (a 1-by-p vector). p is too large, and we don't like that. Let's find a matrix A (p-by-k) in which k is pretty small. So we get reduced_x = x*A, and, most importantly, reduced_x must be able to represent x's characteristics.
Given labeled data, LDA can provide a proper A that maximizes the distance between the reduced_x of different classes and also minimizes the distance within identical classes.
In simple words: compress the data, keep the information.
Once you've got reduced_x, you can define your training data as (reduced_x | y), where y is 0 or 1.
I am trying to pre-process biological data to train a neural network, and despite an extensive search and repeated presentations of the various normalization methods, I am none the wiser as to which method should be used when. In particular, I have a number of input variables which are positively skewed and have been trying to establish whether there is a normalization method that is most appropriate.
I was also worried about whether the nature of these inputs would affect the performance of the network, and as such have experimented with data transformations (log transformation in particular). However, some inputs have many zeros but may also contain small decimal values, and they seem to be highly affected by log(x + 1) (or any offset from 1 down to 0.0000001, for that matter), with the resulting distribution failing to approach normal (it either remains skewed or becomes bimodal with a sharp peak at the minimum value).
Is any of this relevant to neural networks? I.e., should I be using specific feature transformation / normalization methods to account for the skewed data, or should I just ignore it, pick a normalization method, and push ahead?
Any advice on the matter would be greatly appreciated!
Thanks!
As the features in your input vector are of a different nature, you should use different normalization algorithms for each feature. The network should be fed uniformly scaled data on every input for better performance.
As you wrote that some data is skewed, I suppose you can run some algorithm to "normalize" it. If applying a logarithm does not work, perhaps other functions and methods, such as rank transforms, can be tried out.
If the small decimal values occur entirely in a specific feature, then just normalize that feature separately, so that the values are transformed into your working range: either [0, 1] or [-1, +1], I suppose.
If some inputs have many zeros, consider removing them from the main neural network and creating an additional neural network which operates on the vectors with non-zeroed features. Alternatively, you may try running Principal Component Analysis (for example, via an autoassociative memory network with structure N-M-N, M < N) to reduce the input space dimension and thus eliminate the zeroed components (they will actually still be taken into account, somehow, in the new combined inputs). By the way, the new M inputs will be automatically normalized. Then you can pass the new vectors to your actual worker neural network.
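As a small illustration of per-feature normalization, here is a sketch assuming X is a (samples x features) numpy array; the choice of transform per column (log, rank, or plain min-max) is up to you and purely illustrative here.

```python
import numpy as np
from scipy.stats import rankdata

def normalize_feature(col, method="minmax"):
    """Optionally transform a skewed column, then rescale it to [0, 1]."""
    if method == "log":            # for positively skewed, non-negative data
        col = np.log1p(col)
    elif method == "rank":         # rank transform, robust to heavy skew
        col = rankdata(col).astype(float)
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col, dtype=float)

# X_norm = np.column_stack([normalize_feature(X[:, j], m)
#                           for j, m in enumerate(per_feature_methods)])
```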
This is an interesting question. Normalization is meant to keep features' values in one scale to facilitate the optimization process.
I would suggest the following:
1- Check whether you need to normalize your data at all. If, for example, the means of the variables or features are within the same scale of values, you may proceed with no normalization. MSVMpack, for instance, uses a normalization check condition in its SVM implementation. If you do need to normalize, you are still advised to also run the models on the data without normalization for comparison.
2- If you know the actual maximum and minimum values of a feature, use them to normalize the feature. I think this kind of normalization would preserve the skewness of the values.
3- Try decimal value normalization with other features if applicable.
Finally, you are still advised to apply different normalization techniques and compare the MSE for every technique, including z-score, which may alter the skewness of your data.
I hope that I have answered your question and given some support.
I am using LibSVM to classify some documents. The documents seem to be a bit difficult to classify, as the final results show. However, I have noticed something while training my models: if my training set has, for example, 1000 samples, around 800 of them are selected as support vectors.
I have looked everywhere to find out whether this is a good thing or a bad one. I mean, is there a relation between the number of support vectors and the classifier's performance?
I have read this previous post but I am performing a parameter selection and also I am sure that the attributes in the feature vectors are all ordered.
I just need to know the relation.
Thanks.
p.s: I use a linear kernel.
Support Vector Machines are an optimization problem. They are attempting to find a hyperplane that divides the two classes with the largest margin. The support vectors are the points which fall within this margin. It's easiest to understand if you build it up from simple to more complex.
Hard Margin Linear SVM
In a training set where the data is linearly separable, and you are using a hard margin (no slack allowed), the support vectors are the points which lie along the supporting hyperplanes (the hyperplanes parallel to the dividing hyperplane, at the edges of the margin).
All of the support vectors lie exactly on the margin. Regardless of the number of dimensions or the size of the data set, the number of support vectors could be as few as 2.
Soft-Margin Linear SVM
But what if our dataset isn't linearly separable? We introduce the soft-margin SVM. We no longer require that our data points lie outside the margin; we allow some of them to stray over the line into the margin. We use the slack parameter C to control this (nu in nu-SVM). This gives us a wider margin and a greater error on the training dataset, but improves generalization and/or allows us to find a linear separation of data that is not linearly separable.
Now, the number of support vectors depends on how much slack we allow and on the distribution of the data. If we allow a large amount of slack, we will have a large number of support vectors. If we allow very little slack, we will have very few support vectors. The accuracy depends on finding the right level of slack for the data being analyzed. For some data it will not be possible to get a high level of accuracy; we must simply find the best fit we can.
Non-Linear SVM
This brings us to the non-linear SVM. We are still trying to linearly divide the data, but we are now trying to do it in a higher-dimensional space. This is done via a kernel function, which of course has its own set of parameters. When we translate this back to the original feature space, the result is non-linear.
Now, the number of support vectors still depends on how much slack we allow, but it also depends on the complexity of our model. Each twist and turn in the final model in our input space requires one or more support vectors to define. Ultimately, the output of an SVM is the support vectors and an alpha, which in essence is defining how much influence that specific support vector has on the final decision.
Here, accuracy depends on the trade-off between a high-complexity model which may over-fit the data and a large-margin which will incorrectly classify some of the training data in the interest of better generalization. The number of support vectors can range from very few to every single data point if you completely over-fit your data. This tradeoff is controlled via C and through the choice of kernel and kernel parameters.
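To see this trade-off in practice, here is a quick sketch showing how the slack parameter C affects the number of support vectors, using scikit-learn's SVC on synthetic data (purely illustrative, not the asker's documents).

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for C in (0.01, 1.0, 100.0):
    model = SVC(kernel="linear", C=C).fit(X, y)
    # n_support_ holds the number of support vectors per class.
    print(f"C={C}: {model.n_support_.sum()} support vectors")
```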
I assume when you said performance you were referring to accuracy, but I thought I would also speak to performance in terms of computational complexity. In order to test a data point using an SVM model, you need to compute the dot product of each support vector with the test point. Therefore the computational complexity of the model is linear in the number of support vectors. Fewer support vectors means faster classification of test points.
A good resource:
A Tutorial on Support Vector Machines for Pattern Recognition
800 out of 1000 basically tells you that the SVM needs to use almost every single training sample to encode the training set. That basically tells you that there isn't much regularity in your data.
Sounds like you have major issues with not enough training data. Also, maybe think about some specific features that separate this data better.
Both the number of samples and the number of attributes may influence the number of support vectors, making the model more complex. I believe you use words or even n-grams as attributes, so there are quite a lot of them, and natural language models are very complex themselves. So, 800 support vectors out of 1000 samples seems to be OK. (Also pay attention to @karenu's comments about the C/nu parameters, which also have a large effect on the number of SVs.)
To get some intuition about this, recall the main idea behind SVM. SVM works in a multidimensional feature space and tries to find a hyperplane that separates all the given samples. If you have a lot of samples and only 2 features (2 dimensions), the data and hyperplane may look like this:
Here there are only 3 support vectors; all the others are behind them and thus don't play any role. Note that these support vectors are defined by only 2 coordinates.
Now imagine that you have a 3-dimensional space, and thus the support vectors are defined by 3 coordinates.
This means that there's one more parameter (coordinate) to be adjusted, and this adjustment may need more samples to find the optimal hyperplane. In other words, in the worst case SVM finds only 1 hyperplane coordinate per sample.
When the data is well-structured (i.e. holds patterns quite well), only a few support vectors may be needed - all the others will stay behind them. But text is very, very badly structured data. The SVM does its best, trying to fit the samples as well as possible, and thus takes even more samples as support vectors than it drops. With an increasing number of samples this "anomaly" is reduced (more insignificant samples appear), but the absolute number of support vectors stays very high.
SVM classification is linear in the number of support vectors (SVs). The number of SVs is in the worst case equal to the number of training samples, so 800/1000 is not yet the worst case, but it's still pretty bad.
Then again, 1000 training documents is a small training set. You should check what happens when you scale up to 10000s or more documents. If things don't improve, consider using linear SVMs, trained with LibLinear, for document classification; those scale up much better (model size and classification time are linear in the number of features and independent of the number of training samples).
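Since the answer above mentions LibLinear, here is a brief sketch of that route via scikit-learn's LinearSVC (which wraps liblinear); the documents and labels variables are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# documents: list of strings; labels: list of class ids
# X = TfidfVectorizer().fit_transform(documents)
# clf = LinearSVC(C=1.0).fit(X, labels)
# Model size and classification time scale with the number of features,
# not with the number of training samples.
```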
There is some confusion between sources. In the textbook ISLR 6th Ed., for instance, C is described as a "boundary violation budget", from which it follows that a higher C will allow more boundary violations and more support vectors.
But in the SVM implementations in R and Python, the parameter C is implemented as a "violation penalty", which is the opposite, and there you will observe that for higher values of C there are fewer support vectors.