Machine learning applied to geometry

First, I have to say that I don't know machine learning very well.
I want to ask whether machine learning can help fill a specified area with irregular geometries so as to use as much of the area as possible. This is called "nesting".

Machine learning works by comparing the current configuration to others previously shown during training, together with their solutions, and tries to "interpolate" between them.
In your case, that means you would need sufficiently many solved cases to reasonably cover the configuration space. This can be a serious obstacle, because it requires that you already have a packing algorithm.
Also, due to the discontinuous nature of the solution function, I doubt that an ML approach would be efficient and able to construct collision-free solutions in all cases.

Related

Non Convex Optimizations

I have a gradient descent (GD) algorithm and I am trying to come up with a non-convex univariate optimization problem. I want to plot the function in Python and then show two runs of GD: one where it gets caught in a local minimum and one where it manages to reach the global minimum. I am thinking of using different starting points to accomplish this.
That being said, I am somewhat clueless about coming up with such a function or choosing the two starting points; any help is appreciated.
Your question is really broad and hard to answer, because non-convex optimization is rather complicated, and so is any iterative algorithm that solves such problems. As a quick hint, you can use the Mexican Hat function (or a simple polynomial that gives you what you want) for your test case. These papers can also give you some context: Paper1 Paper2
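For instance, here is a minimal sketch of the kind of experiment described above: a quartic polynomial with one local and one global minimum, and two gradient-descent runs from different starting points (the function, step size, and starting points are just illustrative choices).

```python
# Sketch: a non-convex univariate test function and two gradient-descent runs.
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return x**4 - 4*x**2 + x          # global minimum near x ~ -1.45, local minimum near x ~ 1.33

def grad(x):
    return 4*x**3 - 8*x + 1

def gradient_descent(x0, lr=0.01, steps=200):
    path = [x0]
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
        path.append(x)
    return np.array(path)

xs = np.linspace(-2.5, 2.5, 400)
run_local = gradient_descent(x0=2.0)    # gets trapped in the local minimum
run_global = gradient_descent(x0=-0.5)  # reaches the global minimum

plt.plot(xs, f(xs), label="f(x)")
plt.plot(run_local, f(run_local), "o-", label="start at x=2.0 (local min)")
plt.plot(run_global, f(run_global), "o-", label="start at x=-0.5 (global min)")
plt.legend()
plt.show()
```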
Good luck.

What software can learn the best structure of a neural network?

Is there software out there that finds the best combination of learning rate, weight ranges, and hidden layer structure for a given task, presumably by trying and evaluating different combinations? What is this called? As far as I can tell, we just do it manually at the moment...
I know this is not directly code related, but I am sure it will help many others too. Cheers.
This is a multivariate optimization problem: use an optimization algorithm and check the results. Particle Swarm Optimization would do it (there are, however, considerations when using this algorithm), as long as you have a cost function to optimize, for example the error rate of the network output.
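As a rough illustration of framing this as an optimization over hyperparameters, here is a sketch using plain random search (a simpler stand-in for PSO); `train_and_evaluate` is a hypothetical placeholder for training the network and returning its validation error.

```python
# Sketch: random search over a few hyperparameters, keeping the best configuration.
import random

def train_and_evaluate(params):
    # Hypothetical placeholder: in practice, train the network with these
    # hyperparameters and return its validation error rate. The formula below
    # is purely synthetic so this sketch runs as-is.
    return abs(params["learning_rate"] - 0.01) + 1.0 / params["hidden_units"]

best_params, best_error = None, float("inf")
for _ in range(50):
    params = {
        "learning_rate": 10 ** random.uniform(-4, -1),   # log-uniform between 1e-4 and 1e-1
        "hidden_units": random.choice([8, 16, 32, 64, 128]),
        "weight_range": random.uniform(0.01, 1.0),
    }
    error = train_and_evaluate(params)
    if error < best_error:
        best_params, best_error = params, error

print(best_params, best_error)
```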

Online machine learning for obstacle crossing or bypassing

I want to program a robot which will sense obstacles and learn whether to cross over them or bypass around them.
Since my project must be completed within a week and a half, I must use an online learning algorithm (a GA or similar would take a lot of time to test, because the robot needs to try to cross over the obstacle in order to determine whether crossing is possible).
I'm really new to online learning so I don't really know which online learning algorithm to use.
It would be a great help if someone could recommend a few algorithms best suited to my problem; a link with examples wouldn't hurt either.
Thanks!
I think you could start with A* (A-Star)
It's simple and robust, and widely used.
There are some nice tutorials on the web, like this one: http://www.raywenderlich.com/4946/introduction-to-a-pathfinding
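For a flavour of the algorithm, here is a minimal A* sketch on a small occupancy grid, assuming 4-connected moves and a Manhattan-distance heuristic (not tied to any particular robot platform).

```python
# Sketch: A* on a grid where 0 = free cell and 1 = obstacle.
import heapq

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance
    open_set = [(heuristic(start, goal), 0, start, [start])]      # (priority, cost, node, path)
    visited = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in visited:
                new_cost = cost + 1
                heapq.heappush(open_set, (new_cost + heuristic((nr, nc), goal),
                                          new_cost, (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(a_star(grid, (0, 0), (3, 3)))
```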
An online algorithm is simply one that can collect new data and update a model incrementally without re-training on the full dataset (i.e. it may be used in an online service that runs all the time). What you are probably looking for is reinforcement learning.
RL itself is not a single method, but rather a general approach to the problem; many concrete methods may be used with it. Neural networks have been shown to do well in this field (useful course). See, for example, this paper.
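To give an idea of how the cross-vs-bypass decision could be framed as reinforcement learning, here is a hedged tabular Q-learning sketch; the sensor discretisation and reward values are made up for illustration and would have to come from the real robot.

```python
# Sketch: cross-vs-bypass decision learned with a simple tabular, epsilon-greedy update.
import random
from collections import defaultdict

ACTIONS = ["cross", "bypass"]
Q = defaultdict(float)                 # Q[(state, action)] -> estimated value
alpha, epsilon = 0.1, 0.2              # learning rate and exploration rate

def sense_obstacle():
    # Placeholder for a real sensor reading, discretised into height buckets 0..4.
    return random.randint(0, 4)

def try_action(state, action):
    # Placeholder for the real robot trial: here, crossing succeeds only for low obstacles.
    if action == "bypass":
        return -0.2                    # bypassing always works but costs time
    return 1.0 if state <= 1 else -1.0 # crossing tall obstacles fails

for episode in range(500):
    state = sense_obstacle()
    if random.random() < epsilon:      # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = try_action(state, action)
    # One-step (bandit-style) update; a longer task would also bootstrap from the next state.
    Q[(state, action)] += alpha * (reward - Q[(state, action)])

for s in range(5):
    print(s, {a: round(Q[(s, a)], 2) for a in ACTIONS})
```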
However, to create a real robot able to bypass obstacles, you will need much more than just knowledge of neural networks. You will need to set up sensors carefully, preprocess their data, work out your model, and collect a dataset. I'm not sure it's possible to even learn all of this in a week and a half.

Feature combination/joint features in supervised learning

While trying to come up with appropriate features for a supervised learning problem, I had the following idea and wondered whether it makes sense and, if so, how to formulate it algorithmically.
In an image I want to classify two regions, i.e. two "types" of pixels. Say I have some bounded structure, let's take a circle, and I know I can limit my search space to this circle. Within that circle I want to find a segmenting contour, i.e. a contour that separates my pixels into an inner class A and an outer class B.
I want to implement the following model:
I know that pixels close to the bounding circle are more likely to be in the outer class B.
Of course, I can use the distance from the bounding circle as a feature; the algorithm would then learn the average distance of the inner contour from the bounding circle.
But I wonder if I can exploit my model assumption in a smarter way. One heuristic idea would be to weight other features by this distance, so to speak: if a pixel further away from the bounding circle wants to belong to the outer class B, it has to have strongly convincing other features.
This leads to a general question:
How can one exploit joint information of features that were previously learned individually by the algorithm?
And to a specific question:
In my outlined setup, does my heuristic idea make sense? At what point of the algorithm should this information be used? What would be recommended literature or what would be buzzwords if I wanted to search for similar ideas in the literature?
This leads to a general question:
How can one exploit joint information of features that were previously learned individually by the algorithm?
It is not really clear what you are asking here. What do you mean by "individually learned by the algorithm", and what would the "joint information" be? First of all, the problem is too broad; there is no such thing as a "generic supervised learning model", each of them works in at least a slightly different way, with most falling into three classes:
Building a regression model of some kind to map input data to the output, and then aggregating the results for classification (linear regression, artificial neural networks)
Building a geometrical separation of the data (like support vector machines, classification SOMs, etc.)
Directly (more or less) estimating the probability of the given classes (like Naive Bayes, classification restricted Boltzmann machines, etc.)
In each of them there is somehow encoded "joint information" regarding the features: the classification function is their joint information. In some cases it is easy to interpret (linear regression) and in some it is almost impossible (deep Boltzmann machines, and generally all deep architectures).
And to a specific question:
In my outlined setup, does my heuristic idea make sense? At what point of the algorithm should this information be used? What would be recommended literature or what would be buzzwords if I wanted to search for similar ideas in the literature?
To the best of my knowledge, this concept is quite doubtful. Many models tend to learn and work better if your data is uncorrelated, while you are trying to do the opposite: correlate everything with one particular feature. This leads to one main concern: why are you doing this? To force the model to rely mainly on this feature?
If it is so important, maybe supervised learning is not a good idea; maybe you can model your problem directly by applying a set of simple rules based on this particular feature?
If you know the feature is important, but you are aware that in some cases other things matter and you cannot model them, then your problem becomes how much to weight your feature. Should it be just distance*other_feature? Why not sqrt(distance)*feature? What about log(distance)*feature? There are countless possibilities, and the search for the best weighting scheme may be much more costly than finding a better machine learning model that can learn your data from its raw features.
If you only suspect the importance of the feature, the best possible option would be to... not trust this belief. Numerous studies have shown that machine learning models are better at selecting features than humans. In fact, this is the whole point of non-linear models.
In the literature, the problem you are trying to solve is generally referred to as incorporating expert knowledge into the learning process. There are thousands of examples where there is some kind of knowledge that cannot be directly encoded in the data representation, yet is too valuable to omit. You should research terms like "machine learning expert knowledge" and its possible synonyms.
There's a fair amount of work treating the kind of problem you're looking at (which is called segmentation) as an optimisation performed on a Markov Random Field, which can be solved by graph-theoretic methods like GraphCut. Some examples are the work of Pushmeet Kohli at Microsoft Research (try this paper).
What you describe is, in that framework, a prior on node membership, where p(B) is inversely proportional to the distance from the edge (in addition to any other connectivity constraints you want to impose: there is normally a connectedness constraint, and there will certainly be a likelihood term for the pixel's intensity). The advantage of doing this is that if you can express everything as a probability model, you don't need to rely on heuristics, and you can use standard mechanisms for performing inference.
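As a very rough sketch of the distance-based prior idea (per-pixel only, not a full MRF/GraphCut solution, and with a synthetic image and made-up likelihoods):

```python
# Sketch: a per-pixel prior where p(outer class B) is high near the bounding circle and
# decays with distance from it, combined with a hypothetical intensity likelihood.
# A real model would add pairwise smoothness terms and solve with graph cuts.
import numpy as np

H, W, radius = 128, 128, 60
yy, xx = np.mgrid[0:H, 0:W]
dist_to_centre = np.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
dist_to_boundary = np.clip(radius - dist_to_centre, 0, None)   # 0 on/outside the circle

# Prior: p(B) decays with distance from the bounding circle.
p_B_prior = np.exp(-dist_to_boundary / 15.0)
p_A_prior = 1.0 - p_B_prior

# Hypothetical intensity likelihoods (class A darker than class B), on a synthetic image.
image = np.random.rand(H, W)
p_I_given_A = np.exp(-((image - 0.3) ** 2) / 0.05)
p_I_given_B = np.exp(-((image - 0.7) ** 2) / 0.05)

# Pixel-wise MAP decision from prior * likelihood (everything outside the circle is B).
posterior_A = p_A_prior * p_I_given_A
posterior_B = p_B_prior * p_I_given_B
labels = np.where((posterior_B > posterior_A) | (dist_to_centre > radius), "B", "A")
print((labels == "A").sum(), "pixels assigned to the inner class A")
```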
The downside is that you need a fairly strong mathematical background to attempt this. I don't know the scale of the project you're proposing, but if you want results quickly or you're lacking the necessary background, this is going to be pretty daunting.

What's the best approach to recognize patterns in data, and what's the best way to learn more on the topic?

A developer I am working with is developing a program that analyzes images of pavement to find cracks in the pavement. For every crack his program finds, it produces an entry in a file that tells me which pixels make up that particular crack. There are two problems with his software though:
1) It produces several false positives
2) When it finds a crack, it only finds small sections of it and denotes those sections as separate cracks.
My job is to write software that will read this data, analyze it, and tell the difference between false-positives and actual cracks. I also need to determine how to group together all the small sections of a crack as one.
I have tried various ways of filtering the data to eliminate false positives, and have been using neural networks with limited success to group cracks together. I understand there will be error, but as of now there is just too much. Does anyone have any insight, for a non-AI expert, as to the best way to accomplish my task or learn more about it? What kinds of books should I read, or what kinds of classes should I take?
EDIT: My question is more about how to notice patterns in my coworker's data and identify those patterns as actual cracks. It's the higher-level logic I'm concerned with, not so much the low-level logic.
EDIT In all actuality, it would take AT LEAST 20 sample images to give an accurate representation of the data I'm working with. It varies a lot. But I do have a sample here, here, and here. These images have already been processed by my coworker's process. The red, blue, and green data is what I have to classify (red stands for dark crack, blue stands for light crack, and green stands for a wide/sealed crack).
In addition to the useful comments about image processing, it also sounds like you're dealing with a clustering problem.
Clustering algorithms come from the machine learning literature, specifically unsupervised learning. As the name implies, the basic idea is to try to identify natural clusters of data points within some large set of data.
For example, a clustering algorithm might group a bunch of points into 7 clusters, indicated by circles and color (image source: natekohl.net).
In your case, a clustering algorithm would attempt to repeatedly merge small cracks to form larger ones, until some stopping criterion is met. The end result would be a smaller set of joined cracks. Of course, cracks are a little different from two-dimensional points; part of the trick in getting a clustering algorithm to work here will be defining a useful distance metric between two cracks.
Popular clustering algorithms include k-means clustering (demo) and hierarchical clustering. That second link also has a nice step-by-step explanation of how k-means works.
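As a hedged sketch of what such merging could look like, assuming each detected crack is just a list of (x, y) pixel coordinates and using minimum endpoint distance as an illustrative metric:

```python
# Sketch: agglomerative merging of crack segments whose endpoints are close together.
import math

def crack_distance(c1, c2):
    # Distance between two cracks = smallest gap between their endpoints (illustrative choice).
    ends1 = [c1[0], c1[-1]]
    ends2 = [c2[0], c2[-1]]
    return min(math.dist(p, q) for p in ends1 for q in ends2)

def merge_cracks(cracks, threshold=10.0):
    cracks = [list(c) for c in cracks]
    merged = True
    while merged and len(cracks) > 1:
        merged = False
        for i in range(len(cracks)):
            for j in range(i + 1, len(cracks)):
                if crack_distance(cracks[i], cracks[j]) < threshold:
                    cracks[i] = cracks[i] + cracks[j]   # join the two segments
                    del cracks[j]
                    merged = True
                    break
            if merged:
                break
    return cracks

segments = [[(0, 0), (5, 1)], [(7, 1), (12, 2)], [(50, 50), (55, 52)]]
print(len(merge_cracks(segments)))   # the first two segments merge; the third stays separate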
EDIT: This paper by some engineers at Phillips looks relevant to what you're trying to do:
Chenn-Jung Huang, Chua-Chin Wang, Chi-Feng Wu, "Image Processing Techniques for Wafer Defect Cluster Identification," IEEE Design and Test of Computers, vol. 19, no. 2, pp. 44-48, March/April, 2002.
They're doing a visual inspection for defects on silicon wafers, and use a median filter to remove noise before using a nearest-neighbor clustering algorithm to detect the defects.
Here are some related papers/books that they cite that might be useful:
M. Taubenlatt and J. Batchelder, “Patterned Wafer Inspection Using Spatial Filtering for Cluster Environment,” Applied Optics, vol. 31, no. 17, June 1992, pp. 3354-3362.
F.-L. Chen and S.-F. Liu, “A Neural-Network Approach to Recognize Defect Spatial Pattern in Semiconductor Fabrication.” IEEE Trans. Semiconductor Manufacturing, vol. 13, no. 3, Aug. 2000, pp. 366-373.
G. Earl, R. Johnsonbaugh, and S. Jost, Pattern Recognition and Image Analysis, Prentice Hall, Upper Saddle River, N.J., 1996.
Your problem falls in the very broad field of image classification. These types of problems can be notoriously difficult, and at the end of the day, solving them is an art. You must exploit every piece of knowledge you have about the problem domain to make it tractable.
One fundamental issue is normalization. You want similarly classified objects to be as similar as possible in their data representation. For example, if you have images of the cracks, do all images have the same orientation? If not, then rotating the images may help your classification. The same goes for scaling and translation (refer to this).
You also want to remove as much irrelevant data as possible from your training sets. Rather than directly working on the image, perhaps you could use edge extraction (for example Canny edge detection). This will remove all the 'noise' from the image, leaving only the edges. The exercise is then reduced to identifying which edges are the cracks and which are the natural pavement.
If you want to fast-track to a solution, then I suggest you first try your luck with a Convolutional Neural Network, which can perform quite good image classification with a minimum of preprocessing and normalization. It's pretty well known in handwriting recognition and might be just right for what you're doing.
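If you go that route, a small network along these lines might be a starting point (a hedged sketch assuming TensorFlow/Keras is available; patch size and layer sizes are illustrative, not tuned).

```python
# Sketch: a small CNN classifying fixed-size image patches as "crack" vs "no crack".
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),              # 64x64 grayscale patches
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # probability that the patch contains a crack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_patches, train_labels, epochs=10, validation_split=0.2)
```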
I'm a bit confused by the way you've chosen to break down the problem. If your coworker isn't identifying complete cracks, and that's the spec, then that makes it your problem. But if you manage to stitch all the cracks together, and avoid his false positives, then haven't you just done his job?
That aside, I think this is an edge detection problem rather than a classification problem. If the edge detector is good, then your issues go away.
If you are still set on classification, then you are going to need a training set with known answers, since you need a way to quantify what differentiates a false positive from a real crack. However I still think it is unlikely that your classifier will be able to connect the cracks, since these are specific to each individual paving slab.
I have to agree with ire_and_curses: once you dive into the realm of edge detection to patch your co-developer's crack detection and remove his false positives, it seems as if you would be doing his job. If you can patch what his software did not detect and remove the false positives around what he has given you, it seems you would be able to do this for the full image.
If the spec is for him to detect the cracks and for you to classify them, then it's his job to do the edge detection and remove false positives, and your job to take what he has given you and classify what type of crack it is. If you have to do edge detection to do that, then it sounds like you are not far from putting your co-developer out of work.
There are some very good answers here. But if you are unable to solve the problem, you may consider Mechanical Turk. In some cases it can be very cost-effective for stubborn problems. I know people who use it for all kinds of things like this (verification that a human can do easily but proves hard to code).
https://www.mturk.com/mturk/welcome
I am no expert by any means, but try looking at Haar Cascades. You may also wish to experiment with the OpenCV toolkit. These two things together do face detection and other object-detection tasks.
You may have to do "training" to develop a Haar Cascade for cracks in pavement.
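A hedged sketch of what running such a cascade with OpenCV could look like; note that "crack_cascade.xml" and "pavement.jpg" are hypothetical file names, and the cascade would have to be trained first (e.g. with OpenCV's cascade training tools).

```python
# Sketch: load a trained cascade and run detection over a pavement image.
import cv2

cascade = cv2.CascadeClassifier("crack_cascade.xml")        # hypothetical trained cascade
image = cv2.imread("pavement.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical input image
detections = cascade.detectMultiScale(image, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in detections:
    cv2.rectangle(image, (x, y), (x + w, y + h), 255, 2)    # draw a box around each detection
cv2.imwrite("detections.png", image)
```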
What’s the best approach to recognize patterns in data, and what’s the best way to learn more on the topic?
The best approach is to study pattern recognition and machine learning. I would start with Duda's Pattern Classification and use Bishop's Pattern Recognition and Machine Learning as a reference. It would take a good while for the material to sink in, but getting a basic sense of pattern recognition and the major approaches to the classification problem should give you direction. I can sit here and make assumptions about your data, but honestly you probably have the best idea about the data set, since you've been dealing with it more than anyone. Some useful techniques, for instance, could be support vector machines and boosting.
Edit: An interesting application of boosting is real-time face detection. See Viola and Jones's Rapid Object Detection using a Boosted Cascade of Simple Features (pdf). Also, looking at the sample images, I'd say you should try improving the edge detection a bit. Maybe smoothing the image with a Gaussian and running more aggressive edge detection would increase detection of smaller cracks.
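A minimal sketch of that suggestion with OpenCV (the file name and thresholds are illustrative and would need tuning):

```python
# Sketch: Gaussian smoothing followed by Canny edge detection.
import cv2

image = cv2.imread("pavement.jpg", cv2.IMREAD_GRAYSCALE)     # hypothetical input image
smoothed = cv2.GaussianBlur(image, (5, 5), sigmaX=1.5)       # suppress texture noise
edges = cv2.Canny(smoothed, threshold1=30, threshold2=90)    # lower thresholds pick up fainter cracks
cv2.imwrite("edges.png", edges)
```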
I suggest you pick up any image processing textbook and read up on the subject.
In particular, you might be interested in morphological operations like dilation and erosion, which complement the job of an edge detector. There is plenty of material on the net...
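A minimal sketch of such post-processing with OpenCV; kernel size and iteration counts are illustrative:

```python
# Sketch: dilation closes small gaps in a binary edge/crack map, erosion then removes speckles.
import cv2
import numpy as np

edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)   # binary edge map from a previous step
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(edges, kernel, iterations=2)       # bridge small breaks between crack pieces
cleaned = cv2.erode(dilated, kernel, iterations=1)      # shrink back, dropping tiny blobs
cv2.imwrite("cleaned.png", cleaned)
```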
This is an image processing problem. There are lots of books written on the subject, and much of the material in them goes beyond a line-detection problem like this one. Here is the outline of one technique that would work for the problem.
When you find a crack, you find some pixels that make up the crack. Edge detection filters or other edge detection methods can be used for this.
Start with one (any) pixel in a crack, then "follow" it to make a multipoint line out of the crack, saving the points that make up the line. You can remove intermediate points if they lie close to a straight line. Do this with all the crack pixels. If you have a star-shaped crack, don't worry about it: just follow the pixels in one (or two) directions to make up a line, then remove those pixels from the set of crack pixels. The other legs of the star will be recognized as separate lines (for now).
You might perform some thinning on the crack pixels before step 1. In other words, check the neighbors of each pixel, and if there are too many, then ignore that pixel. (This is a simplification; you can find several algorithms for this.) Another preprocessing step might be to remove all the lines that are too thin or too faint. This might help with the false positives.
Now you have a lot of short, multipoint lines. For the endpoints of each line, find the nearest line. If the lines are within a tolerance, then "connect" them: link them or add them to the same structure or array. This way, you can connect the close cracks, which are likely to be the same crack in the concrete.
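A hedged sketch of that endpoint-linking step, assuming each line is a list of (x, y) points and using an illustrative tolerance:

```python
# Sketch: group lines whose endpoints are closer than a tolerance into the same crack.
import math

def endpoint_gap(line_a, line_b):
    return min(math.dist(p, q) for p in (line_a[0], line_a[-1]) for q in (line_b[0], line_b[-1]))

def group_lines(lines, tolerance=8.0):
    groups = []
    for line in lines:
        for group in groups:
            if any(endpoint_gap(line, member) < tolerance for member in group):
                group.append(line)
                break
        else:
            groups.append([line])       # no nearby group found: start a new crack
    return groups

lines = [[(0, 0), (10, 2)], [(13, 2), (25, 5)], [(100, 40), (110, 41)]]
print(len(group_lines(lines)))          # first two lines join into one crack; third stands alone
```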
It seems like no matter the algorithm, some parameter adjustment will be necessary for good performance. Write it so it's easy to make minor changes in things like intensity thresholds, minimum and maximum thickness, etc.
Depending on the usage environment, you might want to let user judgement determine the questionable cases, and/or allow a user to review all the cracks and click to combine, split, or remove detected cracks.
You've got some very good answers, especially Nate's, and all the links and books suggested are worthwhile. However, I'm surprised nobody suggested the one book that would have been my top pick: O'Reilly's Programming Collective Intelligence. The title may not seem germane to your question, but, believe me, the contents are: one of the most practical, programmer-oriented treatments of data mining and "machine learning" I've ever seen. Give it a spin!-)
It sounds a little like a problem we have in rock mechanics, where there are joints in a rock mass and these joints have to be grouped into "sets" by orientation, length, and other properties. In this instance, one method that works well is clustering, although classical k-means does seem to have a few problems, which I have addressed in the past using a genetic algorithm to run the iterative solution.
In this instance I suspect it might not work quite the same way. In this case I suspect that you need to create your groups to start with, i.e. longitudinal, transverse, etc., and define exactly what the behaviour of each group is, i.e. can a single longitudinal crack branch part way along its length, and if it does, what does that do to its classification?
Once you have that, then for each crack I would generate a random crack or pattern of cracks based on the classification you have created. You can then use something like a least-squares approach to see how closely the crack you are checking fits the random crack(s) you have generated. You can repeat this analysis many times, in the manner of a Monte Carlo analysis, to identify which of the randomly generated cracks best fits the one you are checking.
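A rough sketch of that Monte Carlo matching idea; the crack generators, class names, and sample counts below are made up purely for illustration:

```python
# Sketch: generate random template cracks per class, score the observed crack against them
# with a least-squares distance, and keep the class of the best-fitting template.
import random

def random_crack(kind, n_points=20):
    # Illustrative generators: longitudinal cracks run mostly in y; transverse mostly in x.
    pts, x, y = [], 0.0, 0.0
    for _ in range(n_points):
        if kind == "longitudinal":
            x += random.uniform(-0.5, 0.5); y += 1.0
        else:
            x += 1.0; y += random.uniform(-0.5, 0.5)
        pts.append((x, y))
    return pts

def least_squares_score(crack, template):
    # Compare point-by-point (both sampled with the same number of points).
    return sum((cx - tx) ** 2 + (cy - ty) ** 2 for (cx, cy), (tx, ty) in zip(crack, template))

def classify(crack, n_samples=200):
    best_kind, best_score = None, float("inf")
    for kind in ("longitudinal", "transverse"):
        for _ in range(n_samples):                      # Monte Carlo: many random templates per class
            score = least_squares_score(crack, random_crack(kind, len(crack)))
            if score < best_score:
                best_kind, best_score = kind, score
    return best_kind

observed = [(0.1 * i, float(i)) for i in range(20)]     # a mostly vertical crack
print(classify(observed))                               # expected: "longitudinal"
```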
To deal with the false positives, you will need to create a pattern for each of the different types of false positives, e.g. the edge of a kerb is a straight line. You will then be able to run the analysis, picking out the most likely group for each crack you analyse.
Finally, you will need to "tweak" the definitions of the different crack types to try and get a better result. I guess this could use either an automated or a manual approach, depending on how you define your different crack types.
One other modification that sometimes helps when I'm doing problems like this is to have a random group. By tweaking the sensitivity of the random group, i.e. how much more or less likely a crack is to be included in it, you can sometimes adjust the sensitivity of the model to complex patterns that don't really fit anywhere.
Good luck, looks to me like you have a real challenge.
You should read about data mining, especially pattern mining.
Data mining is the process of extracting patterns from data. As more data are gathered, with the amount of data doubling every three years, data mining is becoming an increasingly important tool to transform these data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection and scientific discovery.
A good book on the subject is Data Mining: Practical Machine Learning Tools and Techniques (http://www.amazon.com/Data-Mining-Ian-H-Witten/dp/3446215336).
Basically, what you have to do is apply statistical tools and methodologies to your datasets. The most commonly used comparison methodologies are Student's t-test and the chi-squared test, which check whether two variables are related with some confidence.
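A small sketch of those two tests with SciPy, using made-up example data:

```python
# Sketch: a t-test comparing crack lengths between two groups, and a chi-squared test of
# association between two categorical labels. All numbers here are invented for illustration.
from scipy import stats

# t-test: do cracks flagged "dark" and "light" differ in mean length?
dark_lengths = [12.1, 15.3, 9.8, 14.0, 11.5]
light_lengths = [7.2, 8.9, 10.1, 6.5, 9.0]
t_stat, p_value = stats.ttest_ind(dark_lengths, light_lengths)
print("t-test p-value:", p_value)

# chi-squared: is the detected crack type associated with being a true crack vs a false positive?
#                 true crack   false positive
contingency = [[30,           5],     # dark
               [22,           14]]    # light
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print("chi-squared p-value:", p)
```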

Resources