Terrain Generation with Realistic Height Map - procedural-generation

I am having trouble figuring out the best way to add realism to a terrain generator. At this point I have a flood fill that works perfectly; however, if I want to add any sort of realism I will need to add height variables. I have seen the following methods used to make heightmaps:
Tectonic Plates https://experilous.com/1/blog/post/procedural-planet-generation
Simplex/Perlin Noise
Diamond-Square Algorithm
Right now I am generating plates through my flood fill, but I am not sure where to go from there.
I am not sure about using a noise function, simply because I would need to generate biomes within a continent to make it look realistic (a continent with nothing but mountains would be unrealistic). The diamond-square algorithm probably isn't going to work for my needs because I would like to be flexible with sizing.
What is my best option for generating a height map that adds some realism, works with my square tiles, isn't very resource intensive, and lets me keep the code I already have?
Here is an image of the generation, and the generation code is in the GitHub project:
https://github.com/Hunterb9101/TileWorkspace/blob/59fe1f28f019d7128c970772d1ef6bd30d63072c/Generation.png

tl;dr: I would use Perlin noise generation with some biomes layered on top.
This article/tutorial goes over code snippets and their implementation methods. The best algorithm for your task depends entirely on your skill and your end goals.
However, here is a brief description of Perlin noise and how to use it with realism in mind...
As with most terrain generation, noise functions are your friend -
Perlin and/or simplex noise in particular. I've implemented some
planetary terrain generation algorithms and although they are in 2d,
the resulting height / "texture" map could be projected to a sphere
rather easily. I assume conversion to hex format is not an issue
either.
My technique has been to create multiple noise layers, e.g. temperature
and humidity. Temperature is fused with a latitude coordinate to make
the equator hot and the poles cold, while the noise ensures it's not a
simple gradient. The final terrain type is
selected by rules like "if hot and not humid then pick desert". You
can see my JavaScript implementation of this here:
https://github.com/tapio/infiniverse/blob/master/js/universe/planet-aerial.js
As for the water percentage, you can just adjust the water level
height as noise functions tend to have a constant average. Another
option is to apply an exponent filter (useful also when generating
clouds, see my implementation here).
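To make the layering concrete, here is a minimal Python sketch of that idea (not the linked JavaScript code). It assumes the noise package for simplex noise, and the scales, thresholds and biome rules are placeholders you would tune:

    import numpy as np
    from noise import snoise2  # pip install noise; any 2D simplex/Perlin function works

    WIDTH, HEIGHT = 256, 128
    SEA_LEVEL = 0.0  # raise or lower to change the water percentage

    def noise_layer(seed, scale=64.0):
        """Sample one octaved simplex-noise layer into a 2D array, roughly in [-1, 1]."""
        layer = np.empty((HEIGHT, WIDTH))
        for y in range(HEIGHT):
            for x in range(WIDTH):
                layer[y, x] = snoise2(x / scale, y / scale, octaves=4, base=seed)
        return layer

    elevation = noise_layer(seed=0)
    humidity = noise_layer(seed=1)
    temperature = noise_layer(seed=2)

    # Fuse temperature with latitude: hot at the equator (middle row), cold at the poles.
    latitude = np.abs(np.linspace(-1.0, 1.0, HEIGHT))[:, None]  # 0 at equator, 1 at poles
    temperature = 0.6 * (1.0 - 2.0 * latitude) + 0.4 * temperature

    def biome(e, t, h):
        """Pick a terrain type with simple rules; thresholds are arbitrary placeholders."""
        if e < SEA_LEVEL:
            return "ocean"
        if e > 0.6:
            return "mountain"
        if t > 0.3 and h < -0.1:
            return "desert"
        if t < -0.3:
            return "tundra"
        return "grassland"

    tiles = [[biome(elevation[y, x], temperature[y, x], humidity[y, x])
              for x in range(WIDTH)] for y in range(HEIGHT)]

Because the flood-filled plates already exist, one option is to use them only to mask which tiles are land and let the layers above decide the biome within each plate.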
Another way to generate spherical terrain that comes to mind (I haven't
tested it) is to use 3D noise and sample it from the surface of a sphere,
using the resulting value as the ground height at that point. You can
then weight that according to the amount of water on the planet and the
latitude coordinate.
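A rough sketch of that untested idea, again assuming a 3D simplex noise function such as the noise package's snoise3 (the scale and water bias are arbitrary placeholders):

    import numpy as np
    from noise import snoise3  # 3D simplex noise; any 3D noise function works

    def sphere_height(lat, lon, radius=1.0, scale=1.5, water_bias=0.2):
        """Sample 3D noise on a sphere; the sampled value becomes the ground height.

        lat, lon are in radians. water_bias shifts the average down so more of the
        planet ends up below sea level; weighting by latitude is left to taste.
        """
        x = radius * np.cos(lat) * np.cos(lon)
        y = radius * np.cos(lat) * np.sin(lon)
        z = radius * np.sin(lat)
        return snoise3(x * scale, y * scale, z * scale, octaves=5) - water_bias

    # Example: height at 45 degrees north, 10 degrees east
    h = sphere_height(np.radians(45), np.radians(10))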
I'll end with a link to one practical implementation of 3d planetary
terrain generation:
http://libnoise.sourceforge.net/tutorials/tutorial8.html
To generate any random style of realistic terrain you are going to have to use noise of some kind. In past projects I have used the diamond-square algorithm myself; however, that was simply to generate heightmaps.
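For reference, here is a minimal diamond-square sketch in Python. Note that it only produces grids of size 2^n + 1, which is exactly the sizing restriction the question mentions; the roughness value is a placeholder to tune:

    import random

    def diamond_square(n, roughness=0.7, seed=None):
        """Generate a (2**n + 1) x (2**n + 1) heightmap with the diamond-square algorithm."""
        rng = random.Random(seed)
        size = 2 ** n + 1
        h = [[0.0] * size for _ in range(size)]
        for y in (0, size - 1):              # seed the four corners
            for x in (0, size - 1):
                h[y][x] = rng.uniform(-1.0, 1.0)

        step, scale = size - 1, 1.0
        while step > 1:
            half = step // 2
            # Diamond step: centre of each square = average of its four corners + jitter.
            for y in range(half, size, step):
                for x in range(half, size, step):
                    avg = (h[y - half][x - half] + h[y - half][x + half] +
                           h[y + half][x - half] + h[y + half][x + half]) / 4.0
                    h[y][x] = avg + rng.uniform(-scale, scale)
            # Square step: each edge midpoint = average of its in-bounds neighbours + jitter.
            for y in range(0, size, half):
                for x in range((y + half) % step, size, step):
                    nbrs = [h[ny][nx]
                            for ny, nx in ((y - half, x), (y + half, x),
                                           (y, x - half), (y, x + half))
                            if 0 <= ny < size and 0 <= nx < size]
                    h[y][x] = sum(nbrs) / len(nbrs) + rng.uniform(-scale, scale)
            step //= 2
            scale *= roughness
        return h

    heights = diamond_square(7)   # 129 x 129 grid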
For some more light reading I would check out this article about realistic terrain techniques.

Related

Simulation of "unfocused" image

I am programming automatic focusing on a microscope with a programmable axis controller.
For testing, I've implemented a simulation which returns an image depending on exposure, axis position, etc. The simulation takes a good image and distorts it - e.g. makes it brighter or darker.
The primary indicator of good focus is sharp edges (this works well for my type of images). Basically, I sum the intensity differences between neighbouring pixels: the higher the sum, the better the focus.
My question is, how do I simulate an unfocused image? Has anyone implemented this already?
A sequence of filters would be great.
I've tried cvSmooth, but it didn't give realistic results.
PS: My current workaround is changing the ROI size inversely proportional to the distance from the focus position. It works well for testing my algorithms, but not for demonstrations, as the image doesn't change during the simulation.
(I'm not sure if Stack Overflow is the right site for your problem, since it is heavily domain-specific.)
View your microscope in terms of a Point Spread Function (PSF). The PSF is the function that describes the image of a point-like light source in the microscope. If you take a single plane from the 3D point spread function, you have the defocussing behaviour for that axial distance. Fold (convolve) your image with that plane of the point spread function, and you have the defocussed image. This operation is usually called "folding", "convolution" or "smoothing with a kernel".
Of course, you'll need a point spread function for your microscope, and there are plenty of details to consider - most importantly, the type of optics involved, the numerical aperture, etc. Consult the relevant optics literature; Sibarita's Deconvolution Microscopy seems like a good start.
Be aware that, in general, 3D defocussing involves integrating a Bessel function through the phases. You might be able to approximate the behaviour with a Gaussian mask if the axial defocus is within roughly 2 times the lateral resolution of your microscope. This is something like one or two microns on normal microscopes. For larger axial defocusings, you need to compute the integral. I doubt that this is within OpenCV's scope, so you'd need to pre-compute a defocus function.
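As a crude illustration of that Gaussian approximation (not a measured PSF - the scaling constants and file name below are placeholders you would calibrate against your actual optics):

    import cv2

    def defocus(image, z_um, lateral_res_um=0.25, px_per_um=4.0):
        """Crude defocus simulation: blur with a Gaussian whose width grows with |z|.

        Only a rough stand-in for the true PSF, and only plausible while the axial
        defocus stays within ~2x the lateral resolution; the scaling here is made up.
        """
        sigma_px = abs(z_um) / lateral_res_um * px_per_um * 0.5
        if sigma_px < 1e-3:
            return image.copy()
        ksize = int(6 * sigma_px) | 1  # odd kernel size covering ~3 sigma each side
        return cv2.GaussianBlur(image, (ksize, ksize), sigma_px)

    sharp = cv2.imread("good_image.png", cv2.IMREAD_GRAYSCALE)  # your in-focus test image
    blurred = defocus(sharp, z_um=1.0)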
My dissertation project rapidSTORM has an implementation for oil objectives, but the code (pixelatedBessel.cpp) is not in opencv.
Especially for high-aperture microscopes, the computations become pretty involved, however - at my lab, we usually preferred to just put a point-like light source (e.g. a quantum dot) on the microscope and let the Z stage do the job.

How to verify the correctness of calibration of a webcam?

I am totally new to camera calibration techniques... I am using OpenCV chessboard technique... I am using a webcam from Quantum...
Here are my observations and steps..
I have kept each chess square side = 3.5 cm. It is a 7 x 5 chessboard with 6 x 4 internal corners. I am taking total of 10 images in different views/poses at a distance of 1 to 1.5 m from the webcam.
I am following the C code in Learning OpenCV by Bradski for the calibration.
my code for calibration is
cvCalibrateCamera2(object_points,image_points,point_counts,cvSize(640,480),intrinsic_matrix,distortion_coeffs,NULL,NULL,CV_CALIB_FIX_ASPECT_RATIO);
Before calling this function I am setting the first and second elements along the diagonal of the intrinsic matrix to one, to keep the ratio of focal lengths constant, and using CV_CALIB_FIX_ASPECT_RATIO.
As the distance to the chessboard changes, fx and fy change, with fx:fy staying almost equal to 1. The cx and cy values are in the range of 200 to 400, and fx and fy are in the range of 300 to 700 when I change the distance.
Presently I have set all the distortion coefficients to zero because I did not get good results when including them. My original image looked more handsome than the undistorted one!!
Am I doing the calibration correctly? Should I use any option other than CV_CALIB_FIX_ASPECT_RATIO? If so, which one?
Hmm, are you looking for "handsome" or "accurate"?
Camera calibration is one of the very few subjects in computer vision where accuracy can be directly quantified in physical terms, and verified by a physical experiment. And the usual lesson is that (a) your numbers are just as good as the effort (and money) you put into them, and (b) real accuracy (as opposed to imagined) is expensive, so you should figure out in advance what your application really requires in the way of precision.
If you look up the geometrical specs of even very cheap lens/sensor combinations (in the megapixel range and above), it becomes readily apparent that sub-sub-mm calibration accuracy is theoretically achievable within a table-top volume of space. Just work out (from the spec sheet of your camera's sensor) the solid angle spanned by one pixel - you'll be dazzled by the spatial resolution you have within reach of your wallet. However, actually achieving REPEATABLY something near that theoretical accuracy takes work.
Here are some recommendations (from personal experience) for getting a good calibration experience with home-grown equipment.
If your method uses a flat target ("checkerboard" or similar), manufacture a good one. Choose a very flat backing (for the size you mention window glass 5 mm thick or more is excellent, though obviously fragile). Verify its flatness against another edge (or, better, a laser beam). Print the pattern on thick-stock paper that won't stretch too easily. Lay it after printing on the backing before gluing and verify that the square sides are indeed very nearly orthogonal. Cheap ink-jet or laser printers are not designed for rigorous geometrical accuracy, do not trust them blindly. Best practice is to use a professional print shop (even a Kinko's will do a much better job than most home printers). Then attach the pattern very carefully to the backing, using spray-on glue and slowly wiping with soft cloth to avoid bubbles and stretching. Wait for a day or longer for the glue to cure and the glue-paper stress to reach its long-term steady state. Finally measure the corner positions with a good caliper and a magnifier. You may get away with one single number for the "average" square size, but it must be an average of actual measurements, not of hopes-n-prayers. Best practice is to actually use a table of measured positions.
Watch your temperature and humidity changes: paper adsorbs water from the air, the backing dilates and contracts. It is amazing how many articles you can find that report sub-millimeter calibration accuracies without quoting the environment conditions (or the target response to them). Needless to say, they are mostly crap. The lower temperature dilation coefficient of glass compared to common sheet metal is another reason for preferring the former as a backing.
Needless to say, you must disable the auto-focus feature of your camera, if it has one: focusing physically moves one or more pieces of glass inside your lens, thus changing (slightly) the field of view and (usually by a lot) the lens distortion and the principal point.
Place the camera on a stable mount that won't vibrate easily. Focus (and f-stop the lens, if it has an iris) as is needed for the application (not the calibration - the calibration procedure and target must be designed for the app's needs, not the other way around). Do not even think of touching camera or lens afterwards. If at all possible, avoid "complex" lenses - e.g. zoom lenses or very wide angle ones. For example, anamorphic lenses require models much more complex than stock OpenCV makes available.
Take lots of measurements and pictures. You want hundreds of measurements (corners) per image, and tens of images. Where data is concerned, the more the merrier. A 10x10 checkerboard is the absolute minimum I would consider. I normally worked at 20x20.
Span the calibration volume when taking pictures. Ideally you want your measurements to be uniformly distributed in the volume of space you will be working with. Most importantly, make sure to angle the target significantly with respect to the focal axis in some of the pictures - to calibrate the focal length you need to "see" some real perspective foreshortening. For best results use a repeatable mechanical jig to move the target. A good one is a one-axis turntable, which will give you an excellent prior model for the motion of the target.
Minimize vibrations and associated motion blur when taking photos.
Use good lighting. Really. It's amazing how often I see people realize late in the game that you need a generous supply of photons to calibrate a camera :-) Use diffuse ambient lighting, and bounce it off white cards on both sides of the field of view.
Watch what your corner extraction code is doing. Draw the detected corner positions on top of the images (in Matlab or Octave, for example), and judge their quality. Removing outliers early using tight thresholds is better than trusting the robustifier in your bundle adjustment code.
Constrain your model if you can. For example, don't try to estimate the principal point if you don't have a good reason to believe that your lens is significantly off-center w.r.t. the image, just fix it at the image center on your first attempt. The principal point location is usually poorly observed, because it is inherently confused with the center of the nonlinear distortion and by the component parallel to the image plane of the target-to-camera's translation. Getting it right requires a carefully designed procedure that yields three or more independent vanishing points of the scene and a very good bracketing of the nonlinear distortion. Similarly, unless you have reason to suspect that the lens focal axis is really tilted w.r.t. the sensor plane, fix at zero the (1,2) component of the camera matrix. Generally speaking, use the simplest model that satisfies your measurements and your application needs (that's Ockham's razor for you).
When you have a calibration solution from your optimizer with low enough RMS error (a few tenths of a pixel, typically, see also Josh's answer below), plot the XY pattern of the residual errors (predicted_xy - measured_xy for each corner in all images) and see if it's a round-ish cloud centered at (0, 0). "Clumps" of outliers or non-roundness of the cloud of residuals are screaming alarm bells that something is very wrong - likely outliers due to bad corner detection or matching, or an inappropriate lens distortion model.
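A quick way to produce that residual plot with OpenCV and matplotlib - assuming objpoints, imgpoints, rvecs, tvecs, camera_matrix and dist_coeffs are the inputs and outputs of your cv2.calibrateCamera run:

    import numpy as np
    import cv2
    import matplotlib.pyplot as plt

    residuals = []
    for obj, img, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
        projected, _ = cv2.projectPoints(obj, rvec, tvec, camera_matrix, dist_coeffs)
        residuals.append((projected - img).reshape(-1, 2))  # predicted_xy - measured_xy
    residuals = np.vstack(residuals)

    plt.scatter(residuals[:, 0], residuals[:, 1], s=2)
    plt.axhline(0)
    plt.axvline(0)
    plt.xlabel("x residual [px]")
    plt.ylabel("y residual [px]")
    plt.title("Reprojection residuals (want a round cloud centred on the origin)")
    plt.show()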
Take extra images to verify the accuracy of the solution - use them to verify that the lens distortion is actually removed, and that the planar homography predicted by the calibrated model actually matches the one recovered from the measured corners.
This is a rather late answer, but for people coming to this from Google:
The correct way to check calibration accuracy is to use the reprojection error provided by OpenCV. I'm not sure why this wasn't mentioned anywhere in the answer or comments; you don't need to calculate it by hand - it's the return value of calibrateCamera. In Python it's the first return value (followed by the camera matrix, etc.).
The reprojection error is the RMS error between where the points would be projected using the intrinsic coefficients and where they are in the real image. Typically you should expect an RMS error of less than 0.5px - I can routinely get around 0.1px with machine vision cameras. The reprojection error is used in many computer vision papers, there isn't a significantly easier or more accurate way to determine how good your calibration is.
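For reference, a minimal sketch of that call - objpoints, imgpoints and image_size are assumed to be the chessboard data you already collected:

    import cv2

    # objpoints: list of (N, 3) float32 arrays of chessboard corner positions in board units
    # imgpoints: list of (N, 1, 2) float32 arrays from cv2.findChessboardCorners
    # image_size: (width, height) of the calibration images
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, image_size, None, None)

    print("RMS reprojection error: %.3f px" % rms)  # aim for well under 0.5 px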
Unless you have a stereo system, you can only work out where something is in 3D space up to a ray, rather than a point. However, as one can work out the pose of each planar calibration image, it's possible to work out where each chessboard corner should fall on the image sensor. The calibration process (more or less) attempts to work out where these rays fall and minimises the error over all the different calibration images. In Zhang's original paper, and subsequent evaluations, around 10-15 images seems to be sufficient; at this point the error doesn't decrease significantly with the addition of more images.
Other software packages like Matlab will give you error estimates for each individual intrinsic, e.g. focal length, centre of projection. I've been unable to make OpenCV spit out that information, but maybe it's in there somewhere. Camera calibration is now native in Matlab 2014a, but you can still get hold of the camera calibration toolbox which is extremely popular with computer vision users.
http://www.vision.caltech.edu/bouguetj/calib_doc/
Visual inspection is necessary, but not sufficient when dealing with your results. The simplest thing to look for is that straight lines in the world become straight in your undistorted images. Beyond that, it's impossible to really be sure if your cameras are calibrated well just by looking at the output images.
The routine provided by Francesco is good, follow that. I use a shelf board as my plane, with the pattern printed on poster paper. Make sure the images are well exposed - avoid specular reflection! I use a standard 8x6 pattern, I've tried denser patterns but I haven't seen such an improvement in accuracy that it makes a difference.
I think this answer should be sufficient for most people wanting to calibrate a camera - realistically unless you're trying to calibrate something exotic like a Fisheye or you're doing it for educational reasons, OpenCV/Matlab is all you need. Zhang's method is considered good enough that virtually everyone in computer vision research uses it, and most of them either use Bouguet's toolbox or OpenCV.

3D reconstruction -- How to create 3D model from 2D image?

If I take a picture with a camera and know the distance from the camera to the object (say, a scale model of a house), I would like to turn this into a 3D model that I can maneuver around so I can comment on different parts of the house.
If I sit down and think about taking more than one picture and labeling direction and distance, I should be able to figure out how to do this, but I thought I would ask if someone has a paper that might help explain more.
What language you explain in doesn't matter, as I am looking for the best approach.
Right now I am considering showing the house and letting the user provide some assistance for height, such as the distance from the camera to the top of that part of the model. Given enough of this it would be possible to start calculating heights for the rest, especially if there is a top-down image plus pictures from angles on the four sides to calculate relative heights.
Then I expect that parts will also need to differ in color to help separate out the various parts of the model.
As mentioned, the problem is very hard and is often also referred to as multi-view object reconstruction. It is usually approached by solving the stereo-view reconstruction problem for each pair of consecutive images.
Performing stereo reconstruction requires that pairs of images are taken that have a good amount of visible overlap of physical points. You need to find corresponding points such that you can then use triangulation to find the 3D co-ordinates of the points.
Epipolar geometry
Stereo reconstruction is usually done by first calibrating your camera setup so you can rectify your images using the theory of epipolar geometry. This simplifies finding corresponding points as well as the final triangulation calculations.
If you have:
the intrinsic camera parameters (requiring camera calibration),
the camera's position and rotation (its extrinsic parameters), and
8 or more physical points with matching known positions in two photos (when using the eight-point algorithm)
you can calculate the fundamental and essential matrices using only matrix theory and use these to rectify your images. This requires some theory about co-ordinate projections with homogeneous co-ordinates and also knowledge of the pinhole camera model and camera matrix.
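As a concrete sketch of that calculation with OpenCV - pts1/pts2 are assumed Nx2 arrays of matched pixel coordinates, K an intrinsic matrix from a prior calibration, and img1/img2 the two photos:

    import cv2
    import numpy as np

    # Eight-point algorithm for the fundamental matrix
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    E = K.T @ F @ K  # essential matrix from the fundamental matrix and intrinsics

    # Rectify the pair so corresponding points lie on the same image row
    h, w = 480, 640  # assumed image size
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    img1_rect = cv2.warpPerspective(img1, H1, (w, h))
    img2_rect = cv2.warpPerspective(img2, H2, (w, h))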
If you want a method that doesn't need the camera parameters and works for unknown camera set-ups you should probably look into methods for uncalibrated stereo reconstruction.
Correspondence problem
Finding corresponding points is the tricky part that requires you to look for points of the same brightness or colour, or to use texture patterns or some other features to identify the same points in pairs of images. Techniques for this either work locally by looking for a best match in a small region around each point, or globally by considering the image as a whole.
If you already have the fundamental matrix, it will allow you to rectify the images such that corresponding points in two images will be constrained to a line (in theory). This helps you to use faster local techniques.
There is currently still no ideal technique to solve the correspondence problem, but possible approaches could fall in these categories:
Manual selection: have a person hand-select matching points.
Custom markers: place markers or use specific patterns/colours that you can easily identify.
Sum of squared differences: take a region around a point and find the closest whole matching region in the other image.
Graph cuts: a global optimisation technique based on optimisation using graph theory.
For specific implementations you can use Google Scholar to search through the current literature. Here is one highly cited paper comparing various techniques:
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms.
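As a minimal example of the local, SSD-style end of that list, OpenCV's semi-global block matcher produces a dense disparity map in a few lines once the pair has been rectified (left and right are assumed to be rectified 8-bit greyscale images):

    import cv2
    import numpy as np

    stereo = cv2.StereoSGBM_create(minDisparity=0,
                                   numDisparities=64,  # must be divisible by 16
                                   blockSize=7)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point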
Multi-view reconstruction
Once you have the corresponding points, you can then use epipolar geometry theory for the triangulation calculations to find the 3D co-ordinates of the points.
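If you already have the two projection matrices, the triangulation itself is only a couple of lines in OpenCV (P1, P2, pts1 and pts2 below are assumed inputs):

    import cv2
    import numpy as np

    # P1, P2: 3x4 projection matrices K @ [R | t] for the two views
    # pts1, pts2: 2xN float arrays of matching pixel coordinates
    points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous coordinates
    points_3d = (points_h[:3] / points_h[3]).T            # Nx3 Euclidean points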
This whole stereo reconstruction would then be repeated for each pair of consecutive images (implying that you need an order to the images or at least knowledge of which images have many overlapping points). For each pair you would calculate a different fundamental matrix.
Of course, due to noise or inaccuracies at each of these steps you might want to consider how to solve the problem in a more global manner. For instance, if you have a series of images that are taken around an object and form a loop, this provides extra constraints that can be used to improve the accuracy of earlier steps using something like bundle adjustment.
As you can see, both stereo and multi-view reconstruction are far from solved problems and are still actively researched. The less you want to do in an automated manner the more well-defined the problem becomes, but even in these cases quite a bit of theory is required to get started.
Alternatives
If it's within the constraints of what you want to do, I would recommend considering dedicated hardware sensors (such as the XBox's Kinect) instead of only using normal cameras. These sensors use structured light, time-of-flight or some other range imaging technique to generate a depth image which they can also combine with colour data from their own cameras. They practically solve the single-view reconstruction problem for you and often include libraries and tools for stitching/combining multiple views.
Epipolar geometry references
My knowledge is actually quite thin on most of the theory, so the best I can do is to further provide you with some references that are hopefully useful (in order of relevance):
I found a PDF chapter on Multiple View Geometry that contains most of the critical theory. In fact the textbook Multiple View Geometry in Computer Vision should also be quite useful (sample chapters available here).
Here's a page describing a project on uncalibrated stereo reconstruction that seems to include some source code that could be useful. They find matching points in an automated manner using one of many feature detection techniques. If you want this part of the process to be automated as well, then SIFT feature detection is commonly considered to be an excellent non-real-time technique (since it's quite slow).
A paper about Scene Reconstruction from Multiple Uncalibrated Views.
A slideshow on Methods for 3D Reconstruction from Multiple Images (it has some more references below its slides towards the end).
A paper comparing different multi-view stereo reconstruction algorithms can be found here. It limits itself to algorithms that "reconstruct dense object models from calibrated views".
Here's a paper that goes into lots of detail for the case that you have stereo cameras that take multiple images: Towards robust metric reconstruction
via a dynamic uncalibrated stereo head. They then find methods to self-calibrate the cameras.
I'm not sure how helpful all of this is, but hopefully it includes enough useful terminology and references to find further resources.
Research has made significant progress and these days it is possible to obtain pretty good-looking 3D shapes from 2D images. For instance, our recent research work, titled "Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks", took a big step toward solving the problem of obtaining 3D shapes from 2D images. In that work, we show that you can not only go from 2D to 3D directly and get a good, approximate 3D reconstruction, but also learn a distribution of 3D shapes in an efficient manner and generate/synthesize 3D shapes. Below is an image from our work showing that we are able to do 3D reconstruction even from a single silhouette or depth map (on the left); the ground-truth 3D shapes are shown on the right.
The approach we took has some contributions related to cognitive science or the way the brain works: the model we built shares parameters for all shape categories instead of being specific to only one category. Also, it obtains consistent representations and takes the uncertainty of the input view into account when producing a 3D shape as output. Therefore, it is able to naturally give meaningful results even for very ambiguous inputs. If you look at the citation to our paper you can see even more progress just in terms of going from 2D images to 3D shapes.
This problem is known as Photogrammetry.
Google will supply you with endless references, just be aware that if you want to roll your own, it's a very hard problem.
Check out The Deadalus Project. Although that website does not contain a gallery with illustrative information about the solution, it posts several papers and info about the working method.
I watched a lecture from one of the main researchers of the project (Roger Hubbold), and the image results are quite amazing, although it is a complex and lengthy problem. There are a lot of tricky details to take into account to get an approximation of the 3D data. Take, for example, the 3D information from wall surfaces, for which the heuristic works as follows: take a photo of the scene with normal illumination, then retake the picture from the same position with the flash fully active; subtract the two images and divide the result by a pre-taken flash calibration image; apply a box filter to this new result; and then post-process it to estimate depth values. The whole process is explained in detail in this paper (which is also posted/referenced on the project website).
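Purely as an illustration of that description - not the paper's actual code; the variable names, normalisation and filter size are my assumptions - the heuristic might be sketched like this with OpenCV:

    import cv2

    def flash_ratio_depth_cue(no_flash, flash, flash_calibration, box=15):
        """Rough sketch of the flash/no-flash heuristic described above.

        All three inputs are float32 greyscale images taken from the same position;
        the calibration image is a pre-taken shot of the flash on a reference surface.
        The output is only a relative depth cue that still needs post-processing.
        """
        diff = cv2.subtract(flash, no_flash)        # isolate the flash contribution
        ratio = diff / (flash_calibration + 1e-6)   # normalise by the flash falloff
        smoothed = cv2.boxFilter(ratio, -1, (box, box))
        return smoothed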
Google Sketchup (free) has a photo matching tool that allows you to take a photograph and match its perspective for easy modeling.
EDIT: It appears that you're interested in developing your own solution. I thought you were trying to obtain a 3D model of an image in a single instance. If this answer isn't helpful, I apologize.
Hope this helps if you are trying to construct a 3D volume from a 2D stack of images! You can use an open-source tool such as ImageJ/Fiji, which comes with a 3D viewer plugin.
https://quppler.com/creating-a-classifier-using-image-j-fiji-for-3d-volume-data-preparation-from-stack-of-images/

Perlin noise for motion?

I'm successfully using Perlin noise to generate terrain, clouds and a few other nifty things. However, I'm now trying to animate a group of flying insects (specifically fireflies), and it was suggested to me to use Perlin noise for this, as well. However, I'm not really sure how to go about this.
The first thing that occurred to me was, given a noise map like so:
Assign each firefly a random initial location, velocity and angular acceleration.
On frame, advance the fly's position following its direction vector.
Read the noise map at the new location, and use it to adjust the angular acceleration, causing
the fly to "turn" towards lighter pixels.
Adjust angular acceleration again based on the proximity of other flies, to avoid having them cluster around local maxima.
However, this doesn't cover cases where flies reach the edge of the map, or cases where they might wind up just orbiting a single point. The second case might not be a big deal, but I'm unsure of a reliable way to have them turn to avoid collisions with the map edge.
Suggestions? Tutorials or papers (in English, please)?
Here is a very good source for 2D Perlin noise. You can follow the exact same principles, but instead of creating a 2D grid of gradients, you can create a 1D array of gradients. You can use this to create your noise for a particular axis.
Simply follow this recipe, and you can create similar Perlin noise functions for each of your other axes too! Combine these motions, and you should have some good-looking noise on your hands. (You could also use these noise functions as random accelerations or velocities. Since Perlin noise stays within a bounded range, your flies won't rocket off to crazy distances.)
http://webstaff.itn.liu.se/~stegu/TNM022-2005/perlinnoiselinks/perlin-noise-math-faq.html
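A minimal sketch of that per-axis idea in Python, assuming the noise package's pnoise1 (the speed, scale and seed offsets are arbitrary placeholders):

    import numpy as np
    from noise import pnoise1  # 1D Perlin noise; any smooth 1D noise function works

    class Firefly:
        def __init__(self, pos, seed):
            self.pos = np.array(pos, dtype=float)
            self.seed = seed  # offset so each fly samples a different part of the noise
            self.t = 0.0

        def step(self, dt=0.016, speed=40.0, scale=0.3):
            # One independent noise channel per axis, combined into a velocity.
            vx = pnoise1(self.t * scale + self.seed)
            vy = pnoise1(self.t * scale + self.seed + 1000.0)
            self.pos += np.array([vx, vy]) * speed * dt
            self.t += dt

    flies = [Firefly((100.0, 100.0), seed=i * 37.0) for i in range(20)]
    for _ in range(600):        # ~10 seconds at 60 fps
        for f in flies:
            f.step()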
If you're curious about other types of motion, I would suggest Brownian motion. That is the same sort of motion that dust particles exhibit when they are floating around your room. This article gets into some more interesting math at the end, but if you're at all familiar with Matlab, the first few sets of instructions should be pretty easy to understand. If not, just google the functions and find their native equivalents for your environment (or create them yourself!). This will be a little more realistic, and much quicker to calculate, than Perlin noise.
http://lben.epfl.ch/files/content/sites/lben/files/users/179705/Simulating%20Brownian%20Motion.pdf
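A minimal Brownian-motion sketch in NumPy rather than Matlab (the diffusion coefficient and step count are arbitrary placeholders):

    import numpy as np

    def brownian_path(n_steps, dt=0.01, diffusion=1.0, dims=2, seed=None):
        """Brownian motion: positions are the cumulative sum of Gaussian steps
        with standard deviation sqrt(2 * D * dt) per axis."""
        rng = np.random.default_rng(seed)
        steps = rng.normal(0.0, np.sqrt(2.0 * diffusion * dt), size=(n_steps, dims))
        return np.cumsum(steps, axis=0)

    path = brownian_path(1000)   # 1000 x 2 array of positions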
Happy flying!
Maybe you're looking for boids?
Wikipedia page
The original concept doesn't feature Perlin noise, but maybe you could use the noise to generate attractors or repulsors, as you're trying to do with the 'fly to lighter pixels' behavior.
PS: the page linked above features a related link to Firefly algorithm, maybe you'll be interested in that?

Algorithm for measuring the Euclidean distance between pixels in an image

I have a number of images where I know the focal length, pixel count, dimensions and position (from GPS). They are all in a high oblique manner, taken on the ground with commercially available cameras.
(Example image: http://desmond.yfrog.com/Himg411/scaled.php?tn=0&server=411&filename=mjbm.jpg&xsize=640&ysize=640)
What would be the best method for calculating the euclidean distances between certain pixels within an image? If it is indeed possible.
Assuming you're not looking for full landscape modelling but a simple approximation, this shouldn't be too hard. Basically, a first approximation of your image reduces to a camera with known focal length looking along a plane. So we can create a model of the system in 3D very easily - it's not too far from the classic observer-looking-over-a-checkerboard demo.
Normally our graphics problem would be to project the 3D model into 2D so we could render the image. Although most programs nowadays use an API (such as OpenGL) to do this the equations are not particularly complex or difficult to understand. I wrote my first code using the examples from 3D Graphics In Pascal which is a nice clear treatise, but there will be lots of other similar source (although probably less nowadays as a hardware API is invariably used).
What's useful about this is that the projection equations are reversible: if you have a point on the image and the model, you can run the data back through the projection to retrieve the original 3D coordinates - which is what you wish to do.
So a couple of approaches suggest themselves: either write the code to do the above yourself directly, or, probably more simply, use OpenGL (I'd recommend the GLUT toolkit for this). If your math is good and manipulating matrices causes you no issues, then I'd recommend the former, as the solution will be tighter and it's interesting stuff - otherwise take the OpenGL approach. You'd probably want to turn the camera/plane approximation into camera/sphere fairly early, too.
If this isn't sufficient for your needs then in theory going to actual landscape modelling would be feasible. The SRTM data is freely available (albeit not in the friendliest of forms), so combined with your GPS position it should be possible to create a mesh model with which you can apply the same algorithms as above.
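For the simpler camera/plane approximation described above, the back-projection can be sketched directly. Everything below - the pitch angle, camera height and focal length in pixels - is an assumed input, and this ignores lens distortion and terrain relief:

    import numpy as np

    def pixel_to_ground(u, v, cx, cy, focal_px, cam_height_m, pitch_rad):
        """Intersect the ray through pixel (u, v) with a flat ground plane.

        Pinhole camera at height cam_height_m above the plane, pitched down by
        pitch_rad, principal point (cx, cy), focal length in pixels. Returns the
        (forward, lateral) ground position in metres, or None if the ray points
        above the horizon.
        """
        # Ray direction in camera coordinates (x right, y down, z forward)
        ray = np.array([(u - cx) / focal_px, (v - cy) / focal_px, 1.0])
        # Rotate by the downward pitch about the x axis into world coordinates
        c, s = np.cos(pitch_rad), np.sin(pitch_rad)
        Rx = np.array([[1.0, 0.0, 0.0],
                       [0.0,   c,   s],
                       [0.0,  -s,   c]])
        d = Rx @ ray
        if d[1] <= 0:            # world y points down; the ray never reaches the ground
            return None
        t = cam_height_m / d[1]  # scale so the ray drops exactly cam_height_m
        return t * d[2], t * d[0]   # forward and lateral distance on the plane

    def ground_distance(p1, p2):
        """Euclidean distance between two back-projected pixels on the plane."""
        return float(np.hypot(p1[0] - p2[0], p1[1] - p2[1]))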
