Procedural Generation: dividing a map into multiple areas

I am creating a world generator (like in Minecraft). I am using Perlin noise to generate the elevation map. But I want to divide my map into areas (or biomes).
I can divide the map into equal polygons, but I am looking for a more random way.
My map is pixelated (composed of blocks). Every pixel has x and y coordinates.

One way of doing it is to generate additional noise maps for things like temperature, humidity, and maybe more (the more different biomes you have, the more important it becomes to have additional parameters to differentiate them).
Then you assign each biome a value for each of those parameters, and you also add a height limit (so your engine won't create a forest under water or similar nonsense).
Then, for each point on the map, you choose the closest biome in terms of temperature, humidity, and so on.
That's the basic concept. Depending on your noise maps, this will generate a random-looking pattern while also keeping some realism (biomes of similar temperature and humidity end up close to each other).
Here are some further tips for the actual implementation:
Make sure the temperature and humidity maps have much lower frequencies than the height map, so biomes don't get too small.
I suggest also adding a high-frequency component, so the transitions between biomes are not too smooth.
If you want a more natural transition between biomes, you can choose randomly between two biomes when they score similarly well on the parameters.
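The closest-biome selection described above can be sketched as follows; the biome table, its parameter values, and the height ranges are made-up placeholders, not values from any real generator:

```python
import math

# Hypothetical biome table: name -> (temperature, humidity, min_height, max_height).
# Heights below 0.3 are under water here, so land biomes start at 0.3.
BIOMES = {
    "desert": (0.9, 0.1, 0.3, 0.6),
    "forest": (0.5, 0.7, 0.3, 0.7),
    "tundra": (0.1, 0.3, 0.3, 0.8),
    "ocean":  (0.5, 1.0, 0.0, 0.3),
}

def pick_biome(temperature, humidity, height):
    """Choose the biome whose (temperature, humidity) point is closest,
    restricted to biomes whose height range contains this height."""
    candidates = [
        (name, t, h) for name, (t, h, lo, hi) in BIOMES.items()
        if lo <= height <= hi
    ]
    # Euclidean distance in (temperature, humidity) space.
    return min(
        candidates,
        key=lambda c: math.hypot(c[1] - temperature, c[2] - humidity),
    )[0]
```

The height filter runs before the distance comparison, which is what prevents a forest from being placed under water: at `height=0.1` only "ocean" qualifies, no matter how forest-like the temperature and humidity are.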

Related

Terrain Generation with Realistic Height Map

I am having trouble finding the best way to add realism to a terrain generator. At this point I have a flood fill that works perfectly, but if I want to add any sort of realism I will need to add height variables. I have seen the following methods used to make heightmaps:
Tectonic Plates https://experilous.com/1/blog/post/procedural-planet-generation
Simplex/Perlin Noise
Diamond-Square Algorithm
Right now I am generating plates through my flood fill, but I am not sure where to go from there.
I am not sure about using a noise function, simply because I would need to generate biomes within a continent to make it look realistic (a continent with nothing but mountains would be unrealistic). The diamond-square algorithm probably won't work for my needs because I would like flexibility in sizing.
What is my best option for generating a height map that gives some realism on square tiles, is not very resource-intensive, and lets me keep the code I have?
Here is an image of the generation, and the generation code is in the Github project:
https://github.com/Hunterb9101/TileWorkspace/blob/59fe1f28f019d7128c970772d1ef6bd30d63072c/Generation.png
tl;dr: I would use Perlin noise generation with some biomes tacked on.
This article/tutorial goes over code snippets and their implementation methods; the best algorithm for your task depends entirely on your skill and your end goals.
However, here is a brief description of Perlin noise and of using it with realistic aims in mind...
As with most terrain generation, noise functions are your friend -
Perlin and/or simplex noise in particular. I've implemented some
planetary terrain generation algorithms and although they are in 2d,
the resulting height / "texture" map could be projected to a sphere
rather easily. I assume conversion to hex format is not an issue
either.
My technique has been creating multiple noise layers, e.g. temperature
and humidity. Temperature is fused with a latitude coordinate, in
order to make the equator more hot and poles cold, while the noise
makes sure it's not a simple gradient. The final terrain type is
selected by rules like "if hot and not humid then pick desert". You
can see my JavaScript implementation of this here:
https://github.com/tapio/infiniverse/blob/master/js/universe/planet-aerial.js
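The latitude-fused temperature idea can be sketched like this; the hash function below is only a cheap stand-in for real Perlin/simplex noise, and all thresholds and biome names are illustrative, not taken from the linked implementation:

```python
import math

def noise2(x, y, seed=0):
    # Cheap deterministic hash noise in [0, 1); a stand-in for Perlin/simplex.
    n = math.sin(x * 12.9898 + y * 78.233 + seed * 37.719) * 43758.5453
    return n - math.floor(n)

def temperature(x, y, lat):
    # lat in [-1, 1], with 0 at the equator. The gradient term makes the
    # equator hot and the poles cold; the noise term keeps it from being
    # a plain gradient.
    return 0.7 * (1.0 - abs(lat)) + 0.3 * noise2(x * 0.05, y * 0.05)

def terrain_type(x, y, lat):
    # Rule-based selection: "if hot and not humid then pick desert".
    t = temperature(x, y, lat)
    h = noise2(x * 0.05, y * 0.05, seed=1)  # humidity layer
    if t > 0.6 and h < 0.3:
        return "desert"
    if t > 0.6:
        return "jungle"
    if t < 0.2:
        return "ice"
    return "grassland"
```

Because the gradient term dominates (0.7 vs. 0.3), a point at the pole can never come out hotter than a point at the equator, but the noise still moves the biome boundaries around.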
As for the water percentage, you can just adjust the water level
height as noise functions tend to have a constant average. Another
option is to apply an exponent filter (useful also when generating
clouds, see my implementation here).
Another way to generate spherical terrain that comes to mind (I haven't
tested it) is to use 3d noise and sample it from the surface of a sphere,
using the resulting value as the ground height at that point. You can
then weight that according to amount of water on planet and the
latitude coordinate.
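That sphere-sampling idea could be sketched as below. The hash-based `noise3` is only a placeholder for real smooth 3-D noise, so this demonstrates the sampling geometry (seamless wrapping, poles collapsing to single points) rather than producing usable terrain; the water level adjustment follows the earlier remark about shifting the sea level:

```python
import math

def noise3(x, y, z):
    # Cheap deterministic hash noise in [0, 1); a stand-in for real
    # smooth 3-D Perlin/simplex noise.
    n = math.sin(x * 12.9898 + y * 78.233 + z * 144.713) * 43758.5453
    return n - math.floor(n)

def sphere_height(lat_deg, lon_deg, water_level=0.4):
    """Map latitude/longitude to a point on the unit sphere and sample
    the 3-D noise there. With smooth noise, nearby surface points give
    similar heights and the map wraps seamlessly around the globe."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    # Negative values are below sea level; raise water_level for a
    # wetter planet.
    return noise3(x, y, z) - water_level
```

One nice property of this parameterisation: at the poles the longitude becomes irrelevant, since all longitudes map to the same 3-D point, so there is no seam or pinching artifact in the sampled values themselves.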
I'll end with a link to one practical implementation of 3d planetary
terrain generation:
http://libnoise.sourceforge.net/tutorials/tutorial8.html
To generate any random style of realistic terrain, you are going to have to use noise of some kind. In past projects I have myself used the diamond-square algorithm; however, that was simply to generate heightmaps.
For some more light reading I would check out this article about realistic terrain techniques.

Calculating heat map weights based on clustering of points

I have an array of MKLocationCoordinate2D in iOS and I'd like to create a heat map of those points based on the clustering of them.
i.e. the more points there are in a certain area, the higher the weight.
I've found a load of different frameworks for generating the heat maps and they all require the weights to be calculated yourself (which makes sense).
I'm just not sure where to start with the calculation.
I could do something like calculating the mean distance between each point and every other point but I'm not sure if that's a good idea.
Could someone point me in the direction of how to weight each point based on its closeness to other points?
Thanks
I solved this by implementing a quad tree and using it to quickly get the number of neighbours within a certain radius.
I can then change the radius to tweak the result, and it very quickly returns weights based on how many neighbours each point has.
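A minimal sketch of the same neighbour-counting idea, using a uniform grid of radius-sized cells in place of a quad tree (for a fixed radius it has the same effect: each point is only compared against the points in its own and the eight surrounding cells):

```python
from collections import defaultdict

def neighbour_weights(points, radius):
    """Weight each (x, y) point by how many other points lie within
    `radius` of it. Points are bucketed into a grid of radius-sized
    cells so each query only scans 9 cells instead of every point."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // radius), int(p[1] // radius))].append(p)
    weights = []
    for x, y in points:
        cx, cy = int(x // radius), int(y // radius)
        count = 0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for qx, qy in grid[(cx + dx, cy + dy)]:
                    if (qx - x) ** 2 + (qy - y) ** 2 <= radius ** 2:
                        count += 1
        weights.append(count - 1)  # exclude the point itself
    return weights
```

As with the quad-tree version, `radius` is the tuning knob: a larger radius smooths the heat map, a smaller one makes dense clusters stand out more sharply.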

License Plate Image Matching

I would like to match two license plate images; sample images are given below.
Here the two license plates belong to the same vehicle, hence they should match.
There may be zoom and slight rotation between the images, and only a part of the original plate may be visible, as in the example.
If the license plates belong to different vehicles, the algorithm should say they are different.
Which is the best algorithm for doing this?
I would suggest you use OpenCV functions from the Features2D framework, plus a homography, to handle the scaling and rotation problem. Specifically, Features2D contains classes that may help you detect the letters, extract them, and match your two templates after extraction.
Frankly this is a non-trivial question.
Just to list some obvious options:
Implement one of the numerous character recognition packages to
get the string of characters, then search for one string as a
substring of the other.
For images with almost no difference in zoom level, use an edge
detection filter, like Canny edge detection, to enhance the image,
then use ICP (Iterative Closest Point), letting each edge pixel
provide a vector to the closest edge pixel with a similar value in
the other image. This typically aligns the images if they are
similar enough, and the final score tells you how similar they are.
For very large zoom differences, use multiple rotation and zoom
hypotheses; for each one, scale the images and compute the cross
correlation of the two. Select the hypothesis that yields the best
correlation, and use the point of correlation as the x and y offset.
The value of the correlation tells you how good a fit you have.
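The cross-correlation step of that last option can be sketched with NumPy's FFT, assuming a single zoom/rotation hypothesis has already been applied to the images:

```python
import numpy as np

def best_offset(img1, img2):
    """Find the (row, col) circular shift of img2 relative to img1 that
    maximises their cross-correlation, computed via the FFT. Returns the
    shift and the peak correlation value, which indicates fit quality."""
    a = img1 - img1.mean()  # remove the DC component so the peak is sharp
    b = img2 - img2.mean()
    # Cross-correlation theorem: corr = IFFT(conj(FFT(a)) * FFT(b)).
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr[peak]
```

Running this once per rotation/zoom hypothesis and keeping the hypothesis with the largest peak value implements the scheme described above; the peak location is the x/y offset.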
Many other, smarter algorithms have been produced for image fitting. However, you have much larger problems.
The two example images you provide do not show the entire license plate, so you will not be able to say anything better than "the probability of a match is larger than zero"; as the number of visible characters increases, so does the probability of a match.
You could argue that small damage to a license plate also increases the probability; in that case cross correlation or a similar method is needed to evaluate the probability of a match.

How to calculate/represent rate of change of a pixel in ImageJ

The following is in reference to dynamic 16-bit images in ImageJ64.
I am aiming to "plot" a rate of change for each pixel over the whole sequence of images (60 per set) and use the gradient values of that plot to represent the change in each pixel over time, thus displaying dynamic data as a still image. Any ideas on where to start, and any tools that may be of use?
There are many possible "rates of change"; everything depends on the particular application. Some possible solutions include (assuming that pix is the set of a particular pixel's values across your images):
value amplitude: max(pix) - min(pix)
value variance (or standard deviation): var(pix) (or std(pix))
More complex functions can be used if you are interested in the actual "visual change" rather than a simple per-value statistic, for example the variance of directional partial derivatives. As stated before, everything depends on your application and on what kind of change you are interested in.
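Both of the simple statistics above are easy to compute outside ImageJ as well; for instance, a NumPy sketch over a stack of frames:

```python
import numpy as np

def change_maps(stack):
    """stack: array of shape (n_frames, height, width), e.g. 60 frames.
    Returns two still images summarising change over time per pixel:
    the amplitude max(pix) - min(pix) and the variance var(pix)."""
    amplitude = stack.max(axis=0) - stack.min(axis=0)
    variance = stack.var(axis=0)
    return amplitude, variance
```

Either map can then be displayed with a colour lookup table, so pixels that changed a lot over the sequence stand out in the single summary image.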

How to match texture similarity in images?

What are the ways in which to quantify the texture of a portion of an image? I'm trying to detect areas that are similar in texture in an image, sort of a measure of "how closely similar are they?"
So the question is what information about the image (edge, pixel value, gradient etc.) can be taken as containing its texture information.
Please note that this is not based on template matching.
Wikipedia didn't give much detail on actually implementing any of the texture analyses.
Do you want to find two distinct areas in the same image that look the same (same texture), or match a texture in one image to a texture in another? The second is harder due to different radiometry.
Here is a basic scheme for measuring the similarity of areas:
1. Write a function which takes an area of the image as input and calculates a scalar value, like average brightness. This scalar is called a feature.
2. Write more such functions to obtain about 8-30 features, which together form a vector that encodes information about the area of the image.
3. Calculate such a vector for both areas that you want to compare.
4. Define a similarity function which takes two vectors and outputs how much they are alike.
You need to focus on steps 2 and 4.
Step 2: Use the following features: std() of brightness, some kind of corner detector, an entropy filter, a histogram of edge orientations, a histogram of FFT frequencies (x and y directions). Use color information if available.
Step 4: You can use cosine similarity, min-max, or weighted cosine.
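A minimal sketch of steps 2 and 4, with a hypothetical small feature set (mean/std of brightness, gradient magnitude, an edge-orientation histogram, coarse FFT energy) and cosine similarity; the feature choices are illustrative, not tuned:

```python
import numpy as np

def features(area):
    """Step 2 (small version): compute a feature vector for a grayscale
    image area from brightness statistics, gradients, an 8-bin
    edge-orientation histogram, and mean FFT magnitude."""
    gy, gx = np.gradient(area.astype(float))
    angles = np.arctan2(gy, gx)
    orient_hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    mag = np.hypot(gx, gy)
    fft_energy = np.abs(np.fft.fft2(area)).mean()
    return np.concatenate((
        [area.mean(), area.std(), mag.mean(), fft_energy],
        orient_hist / max(orient_hist.sum(), 1),  # normalised histogram
    ))

def cosine_similarity(v1, v2):
    """Step 4: cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(v1, v2) /
                 (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))
```

In practice the raw features would also be scaled to comparable ranges before the cosine, otherwise one large-magnitude feature (like FFT energy here) dominates the similarity.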
After you implement about 4-6 such features and a similarity function, start running tests. Look at the results and try to understand why or where it doesn't work, then add a specific feature to cover that case.
For example, if you see that a texture with big blobs is regarded as similar to a texture with tiny blobs, add a morphological filter that calculates the density of objects larger than 20 sq. pixels.
Iterate this identify-problem / design-feature process about 5 times and you will start to get very good results.
I'd suggest using wavelet analysis. Wavelets are localized in both time and frequency, and multiresolution analysis gives a better signal representation than the Fourier transform does.
There is a paper explaining a wavelet approach to texture description, and also a comparison method.
You might need to slightly modify the algorithm to process images of arbitrary shape.
An interesting approach for this is to use Local Binary Patterns.
Here is a basic example with some explanations: http://hanzratech.in/2015/05/30/local-binary-patterns.html
See this method as one of many different ways to get features from your pictures; it corresponds to step 2 of DanielHsH's scheme.
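The basic 8-neighbour LBP can be sketched in NumPy as below (this is the plain variant, not the rotation-invariant or uniform variants discussed in tutorials like the one linked above):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Patterns: each interior pixel is
    encoded by which of its 8 neighbours are >= itself (one bit per
    neighbour), and the normalised 256-bin histogram of those codes
    serves as a texture descriptor for the area."""
    img = img.astype(np.int32)
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center)
    # Neighbour offsets, clockwise from the top-left; each contributes
    # one bit of the 8-bit pattern.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= center).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()
```

Two areas can then be compared by any histogram distance (or fed into the cosine-similarity scheme above as extra features), which is exactly the "feature function" role step 2 describes.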
