"Terrain" Implementation for procedural generation - procedural-generation

So I'm looking to implement the diamond-square algorithm with the ability to step through the generation, rather than having it all generated at once. Something like in this video:
https://www.youtube.com/watch?v=9HJKrctqIJI
What's the best method for implementing the terrain such that I can control the subdivisions? I was looking at quadtrees, but I wasn't sure if they would be the best approach.

By definition, the diamond-square algorithm is iterative, so there is nothing preventing you from pausing it between two steps.
What do you mean by controlling the subdivision? Only subdividing specific areas?
In that case, yes, a quadtree can be used to control subdivision, but try to be more specific with your question.
What do you intend to do, and at which specific point are you stuck?
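For what it's worth, here is a minimal sketch of a steppable diamond-square in Python. None of it comes from the video; the (2^n)+1 grid size, the halving of the random range each pass, and the generator-based stepping are just the common conventions plus one way to pause between steps:

```python
import random

def diamond_square_steps(n, roughness=1.0, seed=None):
    """Yield the height grid after every diamond pass and every square pass.

    The grid is (2**n + 1) x (2**n + 1). Because this is a generator, the
    caller advances the generation one step at a time instead of getting
    the finished terrain all at once.
    """
    rng = random.Random(seed)
    size = 2 ** n + 1
    grid = [[0.0] * size for _ in range(size)]
    for y in (0, size - 1):                      # seed the four corners
        for x in (0, size - 1):
            grid[y][x] = rng.uniform(-1.0, 1.0)

    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond pass: centre of each square gets the average of its
        # four corners, plus noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (grid[y - half][x - half] + grid[y - half][x + half] +
                       grid[y + half][x - half] + grid[y + half][x + half]) / 4.0
                grid[y][x] = avg + rng.uniform(-scale, scale)
        yield grid
        # Square pass: each edge midpoint gets the average of its
        # (up to four) diamond neighbours, plus noise.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                nbrs = [grid[ny][nx]
                        for ny, nx in ((y - half, x), (y + half, x),
                                       (y, x - half), (y, x + half))
                        if 0 <= ny < size and 0 <= nx < size]
                grid[y][x] = sum(nbrs) / len(nbrs) + rng.uniform(-scale, scale)
        yield grid
        step, scale = half, scale / 2.0
```

Stepping is then just iteration: `for g in diamond_square_steps(5, seed=1): ...` redraws the terrain once per pass. A quadtree only becomes necessary if you later want to subdivide some regions deeper than others.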

Related

What is the best solution for setting an 'unknown' or 'unk' word vector in word2vec?

I usually set the 'unk' vector to a random vector or the zero vector.
That performs reasonably, but in most situations it's not the best choice for many tasks, I think.
I'm curious about the best method for handling the 'unk' word vector; thanks for any helpful advice.
If you're training word-vectors, the most common strategy is to discard low-frequency terms entirely. (That's what the min_count setting does, in Google's original word2vec.c, Python gensim Word2Vec, etc.)
Needing to remember that something was at a particular place is more common in sequence-learning scenarios than in plain word2vec. (If that's your concern, you could make your question more specific about why and how you're using word-vectors.)
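To make the min_count strategy concrete, a minimal gensim sketch (gensim 4.x API; the toy corpus and the membership check are illustrative, not from the answer):

```python
from gensim.models import Word2Vec

# Toy corpus; real corpora would be far larger.
sentences = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]

# min_count=2 drops every word seen fewer than 2 times before training,
# so rare words never receive a vector at all.
model = Word2Vec(sentences, vector_size=50, min_count=2, epochs=10, seed=1)

# At lookup time, test membership instead of substituting an 'unk' vector.
word = "dog"                      # seen only once, so it was discarded
vec = model.wv[word] if word in model.wv else None
print(word in model.wv, sorted(model.wv.key_to_index))
```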

RANSAC camera-calibration implementation

I looked into several libraries like OpenCV, etc., but could not find any implementation of camera calibration done in a RANSAC fashion. I mean, I want to calibrate by providing point correspondences (P, p) (i.e. 3D -> 2D) which can contain outliers, and finally find both the intrinsic and extrinsic matrices from the inliers.
Before I go on and implement my own using some library like scikit (I did not find a good general RANSAC implementation in C++ either), I wanted to know if something like that is readily available.
Did you have a look at OpenCV's calibrateCamera? If you are unsure about the quality of your point correspondences, I think it would be very easy to write your own RANSAC-based calibration based on this, as the function conveniently returns the reprojection error.
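In that spirit, here is a hedged sketch of what such a wrapper could look like in Python. It assumes a single view of a planar target (z = 0 for all 3D points, since cv2.calibrateCamera cannot initialise the intrinsics from a non-planar rig without an initial guess), and the sample size, threshold, and iteration count are placeholders, not recommendations:

```python
import numpy as np
import cv2

def ransac_calibrate(obj_pts, img_pts, image_size,
                     iters=200, sample_size=10, err_thresh=2.0, seed=0):
    """Hypothetical RANSAC loop around cv2.calibrateCamera.

    obj_pts: (N, 3) float32 planar 3D points; img_pts: (N, 2) float32
    projections, both from one view. Returns the calibration refit on
    the best inlier set.
    """
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(obj_pts), sample_size, replace=False)
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            [obj_pts[idx]], [img_pts[idx]], image_size, None, None)
        # Score the model by reprojecting *all* points, not just the sample.
        proj, _ = cv2.projectPoints(obj_pts, rvecs[0], tvecs[0], K, dist)
        err = np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1)
        inliers = err < err_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final fit on every inlier of the best hypothesis.
    return cv2.calibrateCamera([obj_pts[best_inliers]],
                               [img_pts[best_inliers]], image_size, None, None)
```

The caution in the next answer applies here too: with a single view of a planar target the intrinsics are only weakly constrained, so in practice you would run something like this across several views.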
The first question to ask is: why do you want to do this?
The reason you won't find a shrink-wrapped implementation of a RANSAC loop around a whole camera-calibration package is that, on the surface, it sounds like a bad idea.
Camera-calibration use cases normally are (or should be) highly repeatable, low-noise affairs, with outlier fractions small enough to be dealt with by a robustifier in the bundle adjustment. If your use case is the opposite of all that, it calls the entire approach into question.
It would help if you described your use case in more detail.

SWT voting-based color reduction

I found this nice paper about color reduction
http://asp.eurasipjournals.com/content/2013/1/95
The approach sounds really interesting, and I'd like to evaluate the algorithm.
Does anyone know:
- Is there any implementation publicly available?
- Or is it "easy" to implement, for instance with OpenCV? (I don't have much experience with OpenCV, but I'm willing to learn it if necessary.)
Regards
You can find the SWT part of this here: https://github.com/aperrau/DetectText. It detects text regions with SWT, but it runs rather slowly, often taking several seconds per image.
The paper about this implementation is here: http://www.cs.cornell.edu/courses/cs4670/2010fa/projects/final/results/group_of_arp86_sk2357/Writeup.pdf

What algorithm would you use for clustering based on people attributes?

I'm pretty new to the field of machine learning (even if I find it extremely interesting), and I wanted to start a small project where I'd be able to apply some of it.
Let's say I have a dataset of persons, where each person has N different attributes (only discrete values, each attribute can be pretty much anything).
I want to find clusters of people who exhibit the same behavior, i.e. who have a similar pattern in their attributes ("look-alikes").
How would you go about this? Any thoughts to get me started?
I was thinking about using PCA, since we can have an arbitrary number of dimensions and it could be useful to reduce them. K-means? I'm not sure it fits this case. Any ideas on what would be best adapted to this situation?
I know how to code all those algorithms, but I'm truly missing the real-world experience to know what to apply in which case.
K-means using the n-dimensional attribute vectors is a reasonable way to get started. You may want to play with your distance metric to see how it affects the results.
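As an illustration of that advice, a minimal scikit-learn sketch (the toy attributes and the choice of one-hot encoding are assumptions on my part; one-hot is one common way to make Euclidean distance meaningful on discrete values, and sparse_output is the scikit-learn >= 1.2 spelling):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.cluster import KMeans

# Toy dataset: one row per person, one column per discrete attribute.
people = np.array([
    ["student", "urban", "cat"],
    ["student", "urban", "dog"],
    ["retired", "rural", "dog"],
    ["retired", "rural", "none"],
])

# One-hot encode so that Euclidean distance counts attribute mismatches.
X = OneHotEncoder(sparse_output=False).fit_transform(people)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: students cluster apart from retirees
```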
The first step in pretty much any clustering algorithm is to find a suitable distance function. Many algorithms, such as DBSCAN, can then be parameterized with this distance function (at least in a decent implementation; some, of course, only support Euclidean distance...). There is a sketch along these lines after the next answer.
So start by considering how to measure object similarity!
In my opinion, you should also try the expectation-maximization algorithm (also called EM). On the other hand, you must be careful when using PCA, because it may throw away dimensions that are relevant to clustering.
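Picking up the distance-function advice above, a hedged sketch with DBSCAN and the Hamming distance, which suits purely discrete attributes (the integer codes, eps, and min_samples are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each discrete attribute value encoded as an integer code; Hamming
# distance is then the fraction of attributes on which two people differ.
codes = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 1, 1],
    [1, 1, 2],
])

# eps=0.4 means "neighbours differ on at most one of the three attributes".
labels = DBSCAN(eps=0.4, min_samples=2, metric="hamming").fit_predict(codes)
print(labels)  # -1 would mark noise points belonging to no cluster
```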

SIFT features for "similar" objects

I have found that SIFT features are only good for finding the same object in a scene; they seem unsuitable for "similar" objects.
Maybe I'm doing something wrong?
Maybe I should use some other descriptors?
Images showing the SIFT/ASIFT algorithms at work:
link
Same problem, no matches:
link
Matching the same object is exactly what SIFT features are designed for (and not only SIFT; the task is called "wide-baseline matching"):
1) For each feature, find the most similar one; this is called a "tentative" or "putative" correspondence.
2) Use RANSAC or another similar method to find a geometric transformation between the sets of correspondences.
So if you need to find "similar" objects, you have to use another method, such as Viola-Jones: http://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
Or (though it will give you a lot of false positives) you can compare the big image to the small one and skip step 2.
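For concreteness, a hedged OpenCV sketch of that two-step pipeline (the file names are placeholders; the 0.75 ratio and the 5.0 px RANSAC threshold are the usual values from Lowe's paper and the OpenCV examples, not values from this thread):

```python
import numpy as np
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Step 1: tentative correspondences, nearest neighbour + Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
tentative = [m for m, n in matcher.knnMatch(des1, des2, k=2)
             if m.distance < 0.75 * n.distance]

# Step 2: RANSAC keeps only matches consistent with a single homography
# (needs at least 4 tentative matches).
src = np.float32([kp1[m.queryIdx].pt for m in tentative]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in tentative]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(mask.sum())} of {len(tentative)} tentative matches survived")
```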
The basic SIFT algorithm using VLFeat gives me this as a result, which, given the small and not-so-unique target image, is a pretty good result, I would say.
