I need a collection of sample images to train a Haar-based classifier for face detection. I read that a ratio of 2 negative examples for each positive example is acceptable. I searched around the web and found many databases containing positive examples to train my classifier (that is, images that contain faces); however, I can't find any database with negative examples. Right now, I have over 18000 positive examples. Where can I find 2000 or more negative examples?
Use
http://tutorial-haartraining.googlecode.com/svn/trunk/data/negatives/
or any other image set that contains none of the objects you need to recognize.
NOTE: the number of samples you mention is larger than necessary; you don't need that many to obtain high accuracy.
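For reference, here is a minimal sketch of how such negatives are typically fed to OpenCV's cascade trainer, assuming the standard opencv_traincascade workflow; the directory name, file names, and sample counts are placeholders:

```python
# Sketch: build a background file listing negative images for OpenCV cascade training.
# Assumes negatives/ holds plain images that contain no faces (paths are placeholders).
import os

neg_dir = "negatives"
with open("bg.txt", "w") as bg:
    for name in sorted(os.listdir(neg_dir)):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            # One relative image path per line, which is the format opencv_traincascade expects.
            bg.write(os.path.join(neg_dir, name) + "\n")

# The background file is then passed via -bg, e.g.:
#   opencv_traincascade -data cascade/ -vec faces.vec -bg bg.txt \
#       -numPos 4000 -numNeg 8000 -numStages 20 -w 24 -h 24
```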
I am hoping that TensorFlow can turn this input into this output.
Input: A floorplan PNG, and 1 - 5 images of a symbol
Output: The same floorplan, but with all matching symbols highlighted
I can do the hard work of figuring out HOW to do it, but I don't want to waste two weeks only to find out it isn't possible. I know I'd need to train it with multiple images, but I won't have more than 5 examples of a given symbol.
Does TensorFlow have these capabilities?
Thanks!
Yes, it is possible to use TensorFlow to create a machine learning model that does this for you, but I would bet that is not how you want to do it. First off, to do this in TensorFlow you would need to manually create a large number of training samples and spend a significant amount of time figuring out how to define the network and train it. Sure, you could do it, but I definitely wouldn't advise it.
If you have a specific set of symbols that you want to highlight, it would probably be better to use OpenCV to find and highlight them. For example, in OpenCV you could use template matching to find a specific symbol in the floor plan and then highlight the matches by modifying pixel colors.
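For illustration, here is a minimal template-matching sketch along those lines, assuming OpenCV's Python bindings; the file names and the 0.8 match threshold are placeholders you would need to tune:

```python
# Sketch: find and highlight all occurrences of a symbol template in a floor plan
# via OpenCV template matching. File names and the 0.8 threshold are placeholders.
import cv2
import numpy as np

plan = cv2.imread("floorplan.png")
gray = cv2.cvtColor(plan, cv2.COLOR_BGR2GRAY)
template = cv2.imread("symbol.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Normalized cross-correlation; values close to 1.0 are strong matches.
result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)

for x, y in zip(xs.tolist(), ys.tolist()):
    # Draw a red rectangle around each match to "highlight" it.
    cv2.rectangle(plan, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("floorplan_highlighted.png", plan)
```

Note that plain template matching is not scale- or rotation-invariant, so you may need one template per symbol size and orientation.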
I have a dataset of words and texts, and I want to build clusters (with k-means or any other unsupervised/supervised learning method) that distinguish the words by type: for example, the word 'John' would be classified as a name (and clustered with other person names), 'Brazil' as a place, and so on.
Is there a model that I can use to solve this problem?
I have heard of n-grams, but I don't know how to plot the n-gram probabilities on an x,y plot or anything like that.
P.S. If you have any examples, that would be wonderful.
How about word2vec and embeddings?
https://deeplearning4j.org/word2vec
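As a rough sketch of that idea in Python (rather than the deeplearning4j example linked above), assuming gensim 4.x and scikit-learn are available; the toy corpus and cluster count are placeholders:

```python
# Sketch: train word2vec embeddings on the texts and cluster the word vectors
# with k-means, so that similar words (names, places, ...) land in the same cluster.
# Assumes gensim 4.x and scikit-learn; the corpus here is a tiny placeholder.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

sentences = [
    ["john", "went", "to", "brazil", "last", "year"],
    ["maria", "lives", "in", "portugal"],
]

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=50)

words = list(model.wv.index_to_key)
vectors = [model.wv[w] for w in words]

labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
for word, label in zip(words, labels):
    print(word, "-> cluster", label)
```

With only a handful of texts, pretrained embeddings will cluster far better than vectors trained from scratch on your own corpus.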
Does anyone know of or have a good tool for labeling image data to be used in training a DNN?
Specifically, labeling 2 points in an image, like upperLeftCorner and lowerRightCorner, from which a bounding box around the specified object is calculated. That's just an example, but I would like to be able to follow the MS COCO data format.
Thanks!
You might try LabelMe, http://labelme.csail.mit.edu/Release3.0/
It's usually used for outlining objects for segmentation, but I'm pretty sure it works fine for bounding boxes too.
I had a similar issue finding a tool that supported bounding boxes for labeling image data, so I started a new project called LabelD (https://github.com/sweppner/labeld). It's built on NodeJS and focuses on bounding-box annotation. It's still very much in alpha, but it's easy to use and functional for labeling images!
Let me know if you have any questions!
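For reference, here is a small sketch of the MS COCO convention mentioned in the question, where bbox is stored as [x, y, width, height]; the function name and the ids are made up for illustration:

```python
# Sketch: convert two clicked points (upper-left and lower-right corners) into a
# COCO-style annotation dict. The ids and category are placeholders.
def corners_to_coco_annotation(upper_left, lower_right, image_id, category_id, ann_id):
    x1, y1 = upper_left
    x2, y2 = lower_right
    w, h = x2 - x1, y2 - y1
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,
        "bbox": [x1, y1, w, h],   # COCO stores boxes as [x, y, width, height]
        "area": w * h,
        "iscrowd": 0,
    }

print(corners_to_coco_annotation((40, 60), (120, 180), image_id=1, category_id=1, ann_id=1))
```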
I have a stream of data on which I want to estimate the kernel density (or a histogram). The input parameter is the window length, for example 60 seconds, 5 minutes, or a month. Any data older than the window size should affect the estimate only minimally. However, I cannot store the actual data points: once a new sample arrives, I want to add it to the estimator and discard it. Since this is a resource-constrained environment (both CPU and memory), the method should preferably be O(1) in space and time complexity.
Are there any ready-made libraries that already do this job?
I would prefer something in Go, but I would take an implementation in any language.
If there are no existing implementations, is there an algorithm that I can refer to and implement?
(Searching on Google did not give me any direct answer; I am new to machine learning and statistics.)
A geometrically decaying histogram would do the job:
$$h(i) \leftarrow e^{-1/\tau}\, h(i) + \left(1 - e^{-1/\tau}\right)\, \delta(i-j)$$
where $j$ is the bin of the new observation, $h(i)$ is the running histogram, and $\tau$ is the time constant.
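A minimal sketch of that update rule (bin edges and time constant are placeholders); each update touches every bin, so the cost is O(number of bins), which is constant with respect to the stream length, and no raw samples are stored:

```python
# Sketch of the geometrically decaying histogram above: every update decays all
# bins by exp(-dt/tau) and adds the remaining weight to the new sample's bin.
# Bin edges and tau are placeholders; no raw samples are kept.
import math

class DecayingHistogram:
    def __init__(self, edges, tau):
        self.edges = edges          # sorted bin edges, len(edges) - 1 bins
        self.tau = tau              # time constant, in the same units as dt
        self.bins = [0.0] * (len(edges) - 1)

    def update(self, x, dt=1.0):
        decay = math.exp(-dt / self.tau)
        self.bins = [b * decay for b in self.bins]
        # Find the bin containing x (linear scan; use bisect for many bins).
        for i in range(len(self.bins)):
            if self.edges[i] <= x < self.edges[i + 1]:
                self.bins[i] += 1.0 - decay
                break

    def density(self):
        total = sum(self.bins) or 1.0
        return [b / total for b in self.bins]

hist = DecayingHistogram(edges=[0, 1, 2, 3, 4], tau=60.0)
for sample in [0.5, 1.2, 3.7, 0.9]:
    hist.update(sample, dt=1.0)
print(hist.density())
```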
I am working on a project wherein I am supposed to detect human beings in a live video stream from a UAV's camera module. I do not need to draw any rectangles or boxes around detected subjects; I just need to reply with a yes or no. I am fairly new to OpenCV and have no prior experience.
What I have tried:
I started by training an SVM on HOG features. My team gathered a few images from a UAV we had, with people in them, and I trained the SVM on crops of those people. We got unsatisfactory results when we used the trained detector on an aerial video containing people. Moreover, processing each frame turned out to be very slow, so the system became unusable (it did work on still images to some extent).
My question:
I wanted to know if there is some other technique, library, etc. I could try to achieve good results. Please point me to the next step.
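For reference, here is a minimal sketch of the per-frame HOG + SVM loop described above, using OpenCV's stock pedestrian detector instead of a custom-trained one (an assumption) and a downscale step as one common way to speed up per-frame processing; the video path and resize factor are placeholders:

```python
# Sketch: per-frame pedestrian detection with OpenCV's built-in HOG + linear SVM.
# The video path and the resize factor are placeholders; it answers "are there
# people in this frame?" rather than drawing boxes.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("uav_footage.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Downscaling the frame is a simple way to trade accuracy for speed.
    small = cv2.resize(frame, None, fx=0.5, fy=0.5)
    rects, weights = hog.detectMultiScale(small, winStride=(8, 8))
    print("people detected" if len(rects) > 0 else "no people")
cap.release()
```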