I'm reading Kernelized Locality-Sensitive Hashing, which is obviously based on the concept of kernels applied to spaces, functions, and data.
I'm not confident with this concept, in math or in image processing (it's not my domain, so sorry if I'm being naive), so can someone please help me understand it?
I found an exhaustive and simple explanation of what kernel functions are in this article (which I strongly recommend).
I usually set the 'unk' value to a randomly distributed vector or a zero vector.
That performs reasonably, but in most situations it's not the best choice for many tasks, I think.
I'm curious about the best method for handling the 'unk' word vector; thanks for any helpful advice.
If you're training word-vectors, the most-common strategy is to discard low-frequency terms entirely. (That's what the min_count setting does, in Google's original word2vec.c, Python gensim Word2Vec, etc.)
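For illustration, here's a minimal sketch of that setting in gensim (the corpus and parameter values are placeholders; the vector_size keyword assumes gensim 4.x, older versions call it size):

```python
from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"],
             ["jumps", "over", "the", "lazy", "dog"]]   # placeholder corpus

# Words appearing fewer than min_count times are discarded before training,
# so rare terms never get a vector at all.
model = Word2Vec(sentences, vector_size=50, min_count=2, window=5)

print(model.wv.index_to_key)  # only tokens that met the min_count threshold
```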
Whether you'd need to remember that something was at a particular place would be more common in sequence-learning scenarios, rather than plain word2vec. (If that's your concern, you could make your question more specific about why & how you're using word-vectors.)
I found this nice paper about color reduction
http://asp.eurasipjournals.com/content/2013/1/95
The approach sounds really interesting and I'd like to evaluate the algorithm.
Does anyone know:
- Is there any implementation publicly available?
- Or is it "easy" to implement, for instance with OpenCV? (I don't have much experience with OpenCV, but I'm willing to learn it if necessary.)
Regards
You can find the SWT part of this here: https://github.com/aperrau/DetectText. It detects text regions with SWT, but it works rather slowly, often taking several seconds per image.
Paper about this implementation is here: http://www.cs.cornell.edu/courses/cs4670/2010fa/projects/final/results/group_of_arp86_sk2357/Writeup.pdf
Just wondering whether anyone has recommendations for optimisation utilities for Delphi.
e.g. Simplex, genetic algorithms, etc.
Basically I need to optimise my overarching model as a complete black-box function, with input variables like tilt angle or array size, within pre-determined boundaries. The output is usually a smooth curve, usually with no false summits.
The old NR Pascal stuff is looking a bit dated (no functions as variables, etc.).
Many thanks, Brian
I found a program, written in Pascal, that simulates the Simplex method. It's a little old, but you may convert it into Delphi. You can find it here.
I hope it's of some use to you.
PS: If you have some cash to spend, try here.
TSimplex class:
https://iie.fing.edu.uy/svn/SimSEE/src/rchlib/mat/usimplex.pas
For mixed-integer Simplex, TMIPSimplex class:
https://iie.fing.edu.uy/svn/SimSEE/src/rchlib/mat/umipsimplex.pas
User: simsee_svn
Password: publico
I have to train a Support Vector Machine model and I'd like to use a custom kernel matrix instead of the preset ones (like RBF, Poly, etc.).
How can I do that (if it is possible) with OpenCV's machine learning library?
Thank you!
AFAICT, custom kernels for SVM aren't supported directly in OpenCV. It looks like LIBSVM, which is the underlying library that OpenCV uses for this, doesn't provide a particularly easy means of defining custom kernels. So, many of the wrappers that use LIBSVM don't provide this either. There seem to be a few, e.g. scikit for python: scikit example of SVM with custom kernel
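For reference, a custom kernel in scikit-learn can be as simple as passing a callable that returns the Gram matrix between two sets of samples (a minimal sketch; the plain dot product below just stands in for whatever kernel you actually want, and the data is random placeholder data):

```python
import numpy as np
from sklearn.svm import SVC

def my_kernel(A, B):
    # Must return the Gram matrix K with K[i, j] = k(A[i], B[j]).
    # A plain dot product stands in for your custom kernel here.
    return A @ B.T

X = np.random.rand(20, 5)          # placeholder training data
y = np.random.randint(0, 2, 20)    # placeholder labels

clf = SVC(kernel=my_kernel)
clf.fit(X, y)
print(clf.predict(X[:3]))
```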
You could also take a look at a completely different library, like SVMlight. It supports custom kernels directly. Also take a look at this SO question. The answers there include a handful of SVM libraries, along with brief reviews.
If you have compelling reasons to stay within OpenCV, you might be able to accomplish it by using kernel type CvSVM::LINEAR and applying your custom kernel to the data before training the SVM. I'm a little fuzzy on whether this direction would be fruitful, so I hope someone with more experience with SVM can chime in and comment. If it is possible to use a "precomputed kernel" by choosing "linear" as your kernel, then take a look at this answer for more ideas on how to proceed.
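That isn't OpenCV, but scikit-learn's 'precomputed' option shows what a precomputed-Gram-matrix workflow looks like, in case it helps clarify the idea (again just a sketch with placeholder data and a linear kernel standing in for your custom one):

```python
import numpy as np
from sklearn.svm import SVC

X_train = np.random.rand(20, 5)        # placeholder data
y_train = np.random.randint(0, 2, 20)
X_test = np.random.rand(5, 5)

# Precompute Gram matrices with your custom kernel (linear here for brevity).
K_train = X_train @ X_train.T          # shape (n_train, n_train)
K_test = X_test @ X_train.T            # shape (n_test, n_train)

clf = SVC(kernel='precomputed')
clf.fit(K_train, y_train)
print(clf.predict(K_test))
```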
You might also consider including LIBSVM and calling it directly, without using OpenCV. See FAQ #418 for LIBSVM, which briefly touches on how to do custom kernels:
Q: I would like to use my own kernel. Any example? In svm.cpp, there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?
An example is "LIBSVM for string data" in LIBSVM Tools.
The reason why we have two functions is as follows. For the RBF kernel exp(-g |xi - xj|^2), if we calculate xi - xj first and then the norm square, there are 3n operations. Thus we consider exp(-g (|xi|^2 - 2dot(xi,xj) +|xj|^2)) and by calculating all |xi|^2 in the beginning, the number of operations is reduced to 2n. This is for the training. For prediction we cannot do this so a regular subroutine using that 3n operations is needed. The easiest way to have your own kernel is to put the same code in these two subroutines by replacing any kernel.
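The expansion the FAQ describes is easy to see in NumPy; this is only a sketch of the |xi|^2 - 2 xi.xj + |xj|^2 trick, not LIBSVM's actual code:

```python
import numpy as np

def rbf_gram(X, Y, gamma):
    # ||xi - xj||^2 = ||xi||^2 - 2 * xi.xj + ||xj||^2, with the squared
    # norms computed once up front instead of once per pair.
    sq_X = np.sum(X ** 2, axis=1)[:, None]   # column of ||xi||^2
    sq_Y = np.sum(Y ** 2, axis=1)[None, :]   # row of ||xj||^2
    sq_dist = sq_X - 2 * (X @ Y.T) + sq_Y
    return np.exp(-gamma * sq_dist)

X = np.random.rand(4, 3)
print(rbf_gram(X, X, gamma=0.5))
```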
That last option sounds like a bit of a pain, though. I'd recommend scikit or SVMlight. Best of luck to you!
If you're not married to OpenCV for the SVM stuff, have a look at the shogun toolbox ... lots of SVM voodoo in there.
I have a basic understanding of image processing and am now studying the "Digital Image Processing" book by Gonzalez in depth.
Given an image, and knowing the approximate form of the object of interest (e.g. circle, triangle),
what is the best algorithm / method to find this object in the image?
The object can be slightly deformed, so a brute-force approach will not help.
You may try using Histograms of Oriented Gradients (also called Edge Orientation Histograms). We have used them for detecting road signs. http://en.wikipedia.org/wiki/Histogram_of_oriented_gradients and the papers by Bill Triggs should get you started.
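If you go that route, OpenCV has a HOG descriptor built in; a minimal sketch (the file name is a placeholder, and the default window is the 64x128 pedestrian-detection setup, so the image is resized to match):

```python
import cv2

img = cv2.imread("sign.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
img = cv2.resize(img, (64, 128))   # match the default HOG window size

hog = cv2.HOGDescriptor()          # default: 64x128 window, 9 orientation bins
descriptor = hog.compute(img)      # flat vector of gradient-orientation histograms
print(descriptor.shape)
```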
I recommend you use the Hough transform, which allows you to find any pattern described by an equation. What's more, the Hough transform also works well for deformed objects.
The algorithm and implementation itself is quite simple.
More details can be found here: http://en.wikipedia.org/wiki/Hough_transform ; source code for this algorithm is even included on a referenced page (http://www.rob.cs.tu-bs.de/content/04-teaching/06-interactive/HNF.html).
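For example, OpenCV's implementation of the circle Hough transform can be called like this (just a sketch; the file name and parameter values are placeholders you'd tune for your images):

```python
import cv2
import numpy as np

img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
img = cv2.medianBlur(img, 5)                           # smooth to reduce false circles

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=0, maxRadius=0)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print("circle at", (x, y), "with radius", r)
```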
I hope that helps you.
I would look at your problem in two steps:
first finding your object's outer boundary:
I'm assuming your image has enough contrast that you can easily threshold it to get a binary image of your object. You then need to extract the object's boundary chain code.
then analyzing the boundary's shape to deduce the form (circle, polygon,...):
You can calculate the curvature at each point of the boundary chain and thus determine how many sharp angles (i.e. high curvature values) there are in your shape. Several sharp angles mean you have a polygon; none means you have a circle (constant curvature).
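If you're using OpenCV, a related (though cruder) way to count the sharp corners is to extract the contour and approximate it as a polygon; the number of vertices in the approximation plays roughly the role of the curvature peaks described above. This is only a sketch with a placeholder file name, not the chain-code curvature method itself:

```python
import cv2

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# OpenCV 4.x returns (contours, hierarchy); 3.x prepends the modified image.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    perimeter = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * perimeter, True)
    # Few vertices -> polygon with sharp corners; many -> smooth, circle-like boundary.
    print("approximate vertex count:", len(approx))
```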
You can find a description on how to get your object's boundary from the binary image and ways of analysing it in Gonzalez's Digital Image Processing, chapter 11.
I also found this insightful presentation on binary image analysis (PPT) and a MATLAB script that implements some of the techniques that Gonzalez talks about in DIP.
I strongly recommend you use OpenCV; it's a great computer vision library that helps greatly with anything related to computer vision. Their website isn't really attractive, nor helpful, but the API is really powerful.
A book that helped me a lot since there isn't a load of documentation on the web is Learning OpenCV. The documentation that comes with the API is good, but not great for learning how to use it.
Related to your problem, you could use a Canny edge detector to find the border of your item and then analyse it, or you could proceed with a Hough transform to search for lines and/or circles.
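As a starting point, the Canny + Hough combination looks roughly like this in OpenCV's Python API (a sketch only; the file name and thresholds are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("item.png", cv2.IMREAD_GRAYSCALE)    # placeholder input image
edges = cv2.Canny(img, 50, 150)                       # edge map of the item's border

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print("line segment:", (x1, y1), "->", (x2, y2))
```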
You could specifically look at 'face recognition', since that is a specific, well-studied topic; the same goes for 'face detection', etc. EmguCV can be useful for you: it is a .NET wrapper for the Intel OpenCV image processing library.
It looks like professor Jean Rouat from the University of Sherbrooke has found a way to find objects in images by processing them with a spiking neural network. His technology, named RN-SPIKES, seems to be available for licensing.