Where can I find geometry algorithms that can answer "simple" questions such as whether two line segments intersect, whether a point is inside a polygon, and so on?
I used to be good at math, but this topic is a little rusty for me, and to my surprise I cannot find a suitable package of routines that solves these problems.
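For concreteness, the two checks I have in mind are the standard ones; here is a rough sketch of what I mean (written in Python just to illustrate the math, since the library I'm after doesn't have to be in any particular language; the function names are my own):

```python
# Plain-Python sketches of the two tests mentioned above; no libraries needed.
# Polygons are lists of (x, y) tuples; names and conventions are illustrative.

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4
    (collinear/touching edge cases are ignored for brevity)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def point_in_polygon(pt, polygon):
    """Ray-casting test: cast a ray to the right of pt and count edge crossings."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line through pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```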
Does FastGEO suit your needs?
The abstract for the project is:
FastGEO is a library written in Delphi that contains a wide range of
highly optimized vector based geometrical algorithms and routines for
many different types of geometrical operations such as geometrical
primitives and predicates, hull construction and triangulation,
clipping, rotations and projections.
The SDL suite has this kind of stuff. http://www.lohninger.com/sdlindex.html. We use a lot of their library and have been quite happy with it (and their support).
I believe they have a free version.
T.
I suggest you visit the following sites:
efg's Computer Lab.
Freeware Delphi Components & Utilities by Angus Johnson.
I hope these will help you.
I am implementing ASM/AAM (Active Shape/Appearance Models) using OpenCV for segmentation of face images (to be further used in face recognition). I am pretty much done with the canonical implementation of ASM (as per T. Cootes's papers), but the result I get is not ideal: it does not always converge, and when it does, some boundaries are not captured, which I believe is a problem in the modelling of the local structure, i.e. the gradient profile matching.
Now I am a bit unsure what to do next. ASM is a simpler and computationally less intensive algorithm compared to AAM. Should I continue improving ASM (say, by using 2D profiles rather than 1D profiles, or a different profile structure for different types of landmarks), or get my hands straight onto AAM?
Edit: Also, what papers would you recommend that improve on the original work by T. Cootes? I know there are many of them, but maybe there are techniques that are considered canonical today?
You can find clarifications and an implementation of AAM with 2D profiles in the book "Mastering OpenCV with Practical Computer Vision Projects" by Packt Publishing, 2012. A lot of the projects described in this book are open source and can be downloaded here: GitHub. They are more advanced than T. Cootes's implementation.
I can say that AAM (as an existing implementation you can also look at VOSM) has good convergence (better than ASM) only if you train it on the same person (very good results, for example, on the FRANCK (Talking Face Video) sequence); in other cases ASM works better.
Can the Hough Transform be used in commercial software?
I mean, it is one of those things that seems research-only and unstable. You would not put it in commercial compositing software, for example, and have the user rely on it at all times.
Any opinions?
Thanks
The Hough transform has been in use in commercial and industrial applications all over the world for years, decades even. From the Wikipedia page you can see that it was first developed in 1972, based on earlier ideas from 1962. That means it is older than the CCD that you use to capture the images you feed into your compositing software.
Given that it "seems research only and unstable" to you, I would suggest you spend some time learning various computer vision and image analysis algorithms and techniques, and get a good mathematical grounding in the field in general, before you implement the Hough transform in commercial compositing software.
And when you are done studying I'd suggest you use a well tested open source implementation.
Yes. In fact, I've written Hough transform code for a piece of commercial software that wasn't meant to be a research tool like MATLAB. Although I had to put a lot of time into making it robust for that specific application, it worked great.
The Hough transform by itself can sometimes be unreliable in applications where you have some level of noise, such as with webcams, or where there are distortions in the shape you need to extract. This may be what you are seeing. In that case you may need to do a little more tuning for your application, or try some basic image preprocessing.
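As a rough illustration of what that tuning looks like with OpenCV (the file name and parameter values here are placeholders, not recommendations):

```python
# Sketch: denoise first, then run the circle Hough transform.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(img, 5)          # basic preprocessing against sensor noise

circles = cv2.HoughCircles(
    blurred,
    cv2.HOUGH_GRADIENT,   # gradient-based voting
    dp=1,                 # accumulator resolution relative to the image
    minDist=40,           # minimum distance between detected centres
    param1=100,           # upper Canny threshold used internally
    param2=30,            # accumulator threshold: lower = more (spurious) circles
    minRadius=10,
    maxRadius=80,
)
if circles is not None:
    print("found", circles.shape[1], "circles")
```

In practice most of the work goes into choosing the preprocessing and the two threshold parameters for your specific images.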
I'm a bit annoyed by the condescending tone of both the comment on the question (by High Performance Mark) and the accepted answer here.
Firstly, the fact that programming libraries/frameworks provide an implementation of an algorithm does not mean it is used in, or rather suited for, commercial applications (i.e. getting the job done robustly under less-than-pristine conditions). The Hough transform is a well-defined algorithm (with possible uses and limitations) which is simple enough to understand and is very commonly taught in introductory image processing courses. Not surprisingly, it has been implemented in general-purpose libraries such as Matlab's, Octave's and OpenCV. I don't believe the question was intended to discuss the robustness of an implementation and the possibility of inclusion in commercial image processing frameworks, but rather whether the algorithm itself is well suited for end-user software (an application that counts circles, or what not).
The accepted answer, as it stands, is "The algorithm is very old. Here is a book on image processing, here is a link to an image processing library that has implemented it". The other answer with zero score seems to be on topic (i.e. discussing possible applications), though it isn't very specific ("worked for me").
So, why do some people get the impression that the Hough transform is unreliable for shape detection? Here is a good example: Unreliable results with cv2.HoughCircles
The input seems to be very well-defined circles. However, the more robust, suggested working solution doesn't use the Hough transform. I've had similar experiences with my own projects. Usually, the more robust way is some kind of object segmentation, distance transform, watershed and peak localization. Have I ever used the Hough transform with good results? No. I think it could be useful in some cases, in particular if the shapes of the imaged objects are perfectly defined but partially occluded.
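For reference, the kind of segmentation-based pipeline I mean looks roughly like this (a sketch only; the thresholds and the assumption of bright blobs on a dark background are mine):

```python
# Sketch: binarise, distance transform, then take peaks as object centres.
import cv2
import numpy as np

img = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)

# Otsu binarisation, then distance of every object pixel to the background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# Peaks of the distance map approximate object centres (and radii),
# and keep working even when the objects touch each other.
_, peaks = cv2.threshold(dist, 0.7 * dist.max(), 255, cv2.THRESH_BINARY)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(peaks.astype(np.uint8))

for cx, cy in centroids[1:]:   # label 0 is the background
    print("object near (%.0f, %.0f)" % (cx, cy))
```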
In other words, I'm also curious about commercial applications that ended up benefiting from the Hough transform. That's how I came across this question, and I was subsequently disappointed by the "you wouldn't ask that question if you understood the subject better" responses.
I'm looking for good GIS solutions that work at a relatively small, detailed scale. Specifically, I was wondering what APIs or toolkits are available for mapping out spaces in a building (like rooms, hallways, shelves). This need not be a 3D solution, like one might envisage for architectural CAD-type drawings. Something relatively lightweight would be ideal.
I feel like ArcGIS is a bit of overkill for that, though I may very well be wrong.
Since it's mapping, but not quite GPS/routing/distance/Earth/road-type mapping, I'm in a bit of a quandary.
Thanks in advance for your input!
The boundary between GIS and CAD software depends on the scale and level of detail of your data. GIS is generally considered more suitable for topographic scales (smaller than 1:10,000) with simplified 2D representations, and CAD software for larger scales (greater than 1:10,000) with very detailed representations, usually in 3D. Nevertheless, there is a convergence between the two: GIS supports more and more 3D representations, and CAD software provides some spatial analysis features.
If your purpose is to build a 3D model, I would suggest CAD-type software like Blender. You can also have a look at SketchUp or at the tools used to build CityGML data.
PostGIS is used in many solutions; just check Wikipedia.
I'm new to image processing and I want to do a project in object detection. Please help me by suggesting a step-by-step procedure for this project. Thanks.
Object detection is a very complex problem that includes some real hardcore math and long tuning of parameters to the computation methods involved. Your best bet is to use some freely available library for that - Google will help.
There are lots of algorithms on this subject, and no single one is the best at everything. It's usually a mixture of them that makes the best solution to the problem.
For example, for object movement detection you could look at frame differencing and mixture of Gaussians.
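A very rough sketch of both ideas with OpenCV, in case it helps (the camera index and thresholds are just placeholders, not a complete detector):

```python
# Sketch: frame differencing vs. a mixture-of-Gaussians background model.
import cv2

cap = cv2.VideoCapture(0)                     # any video source
mog = cv2.createBackgroundSubtractorMOG2()    # mixture-of-Gaussians model

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1) Frame differencing: pixels that changed since the previous frame.
    diff = cv2.absdiff(gray, prev_gray)
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    prev_gray = gray

    # 2) Mixture of Gaussians: pixels that don't fit the learned background.
    fg_mask = mog.apply(frame)

    cv2.imshow("frame difference", moving)
    cv2.imshow("MoG foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:          # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```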
Also, it's very dependent on your application, the environment (i.e. noise, signal quality), the processing capacity you have available, the allowable error margin...
Besides, for it to work, most of the time it's first necessary to do some kind of image processing on the input data, such as a median filter, a Sobel filter, contrast enhancement and so on.
I think you should start by reading all you can: books, Google and, very importantly, a lot of papers on the subjects you are interested in (there are many freely available on the internet).
And first of all, I think it's fundamental (at least it has been for me) to have a good library for testing. The one I have used/use is OpenCV. It's very complete, implements many of the current more advanced algorithms, is very active, has a big community and it's free.
Open Computer Vision Library (OpenCV)
Good luck ;)
Take a look at AForge.NET. It's nowhere near Project Natal's levels of accuracy or usefulness, but it does give you the tools to learn the algorithms easily. It's an image processing and AI library and there are several tutorials on colored object tracking and motion detection.
Another one to look at is OpenCV from Intel. I believe it's a bit more advanced, but it's written in C.
Take a look at this. It might get you started in this complex field. The algorithm pages that it links to are interesting reading.
http://sun-valley.stanford.edu/projects/helicopters/final.html
This lecture by Jeff Hawkins will give you an idea of the state of the art in this super-difficult field.
It seems that video has disappeared... but this video should cover similar ground.
Does anybody here do computer vision work in Mathematica? I would like to know what external libraries are available for doing that. The built-in image processing functions are not enough. I am looking for things like SURF, stereo, camera calibration, multi-view geometry, etc.
How difficult would it be to wrap OpenCV for use in Mathematica?
Apart from the extensive set of image processing tools that are now (version 8) natively present in Mathematica, which include a number of CV algorithms like finding morphological objects, image segmentation and feature detection, there's the new LibraryLink functionality, which makes working with DLLs very easy. You wouldn't have to change OpenCV much to be able to call it from Mathematica. Just write some wrappers for the functions to be called and you're basically done.
I don't think such a thing exists, but I'm getting started.
It has the advantage that you can apply analytic methods... For example, rather than hacking endlessly in OpenCV or even Matlab, you can compute a quantity analytically and see that the method leading to this matrix is numerically unstable as a function of the input variables. Thus you do not need to hack, as it would be pointless.
As for wrapping OpenCV, that doesn't seem to make sense. The correct procedure would be to fix bad implementations in OpenCV based on your analysis in Mathematica and on paper.
Agreeing with Peter, I don't believe that forcing Mathematica to use OpenCV is a great thing.
All of the computer vision people that I've talked to, read about, or seen examples from are using Matlab and its imaging toolkit. It's either that, or go with an OpenCV-compatible language + OpenCV.
Mathematica has a rich set of tools for image processing, but I'm uncertain about the computer vision capabilities.