I am generating buffer zones around specific points on a reprojected layer and get errors when the buffer zone is larger than 3000 km. Does anyone know why this happens and how to solve it? Thanks for your help! :)
Process:
A CSV with a series of coordinates is added as a delimited text layer. This layer is reprojected from decimal degrees to a standard UTM zone. A series of buffer zones is created for the reprojected layer. A strange error occurs when the buffer zone size is greater than 3000 km (see image).
The buffer zone error does not occur for all points on the layer, only for some. The extent of the glitch also changes slightly depending on which standard UTM zone is chosen when reprojecting the layer.
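For reference, the same workflow can be reproduced outside QGIS in a few lines. This is only a sketch of the process described above; the file name, column names, and UTM zone (EPSG:32633) are placeholder assumptions, and pyproj and shapely are assumed to be installed:

```python
# Minimal sketch of the workflow described above, outside QGIS:
# load lon/lat points from a CSV, reproject them to a UTM zone, and buffer them.
# "points.csv", the column names, and EPSG:32633 (UTM zone 33N) are placeholders.
import csv

from pyproj import Transformer
from shapely.geometry import Point

transformer = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)

buffers = []
with open("points.csv", newline="") as f:
    for row in csv.DictReader(f):
        lon, lat = float(row["lon"]), float(row["lat"])
        x, y = transformer.transform(lon, lat)          # degrees -> metres
        buffers.append(Point(x, y).buffer(3_000_000))   # 3000 km in metres
```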
Related
I am querying raw vector tiles at zoom level 8 from tiles created up to zoom level 11, and then converting the point features in these tiles to GeoJSON. The converted feature geometries do not match the tile geometry at zoom level 11, or tiles from zoom level 11 converted to GeoJSON. I have created the tiles with two different programs (tegola and geojson-vt), and I am converting the tiles with vt2geojson. I am trying to determine at what stage in the process the geometry is being manipulated: tile creation, tile conversion back to GeoJSON, or in Mapbox GL JS. As far as I can tell the coordinates are not being trimmed in the creation or conversion process, but I am not 100% sure of that. I understand the reason for simplifying lines and polygons at lower zoom levels, but I do not see any reason to manipulate point geometry.
As can be seen in the image, the points start to drift apart beyond the max zoom level of the original tiles. One workaround is to simply filter the vector tiles to show only the features present in the resulting GeoJSON (the properties are still intact), or to store the coordinates in the properties, but neither is ideal.
Bottom line - if I want to view points as close to the original data as possible, what max tile zoom level should I use (e.g., 11, 12, 13, 14), and at what stage does the geometry get manipulated?
It's a bit hard to tell what your exact question is, but if I'm understanding correctly it is essentially: "Why do I lose spatial accuracy when I overzoom my vector tiles?", and the answer is "because you're overzooming them". It's inherent in the way they work.
Your original data probably had 10 significant figures of precision. Each offset for a point in a vector tile is usually encoded as an integer between 1 and 4096.
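To make the precision loss concrete, here is a rough sketch of the quantization involved. The tile math below is the standard Web Mercator tiling scheme; the longitude and zoom levels are arbitrary, and real encoders handle buffers and edge cases this ignores:

```python
import math

EXTENT = 4096  # default vector-tile grid per axis

def lon_to_tile_units(lon, zoom):
    """Quantize a longitude to an integer offset inside its tile at this zoom."""
    world = (lon + 180.0) / 360.0 * (2 ** zoom)   # position measured in tile widths
    tile = math.floor(world)
    return tile, round((world - tile) * EXTENT)   # integer offset 0..4096

def tile_units_to_lon(tile, units, zoom):
    return (tile + units / EXTENT) / (2 ** zoom) * 360.0 - 180.0

lon = -122.4194183
for zoom in (8, 11, 14):
    tile, units = lon_to_tile_units(lon, zoom)
    err = abs(tile_units_to_lon(tile, units, zoom) - lon)
    print(f"z{zoom}: round-trip error ~ {err:.7f} degrees")
# The quantization error halves with every extra zoom level, which is why
# points drift once a z8 tile is stretched past the detail it was built with.
```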
Your options are basically:
increase the spatial accuracy of the generated tiles (e.g., tippecanoe's -d flag)
generate tiles to a higher zoom level
don't overzoom them so much
I am working on a task where the image I have contains missing data, and I wish to obtain the gradient without internal boundary issues.
The idea is to build a height map out of point cloud data (done) and then evaluate the slopes using a gradient function; however, the points are sparse, so the image has missing data.
The first approach I tried was to use dilation to grow the area by some pixels, then apply the gradient filter, and finally mask the boundaries to remove fabricated data, but it seems to erode slopes as well:
In this picture, a height map is generated from a point cloud which in turn comes from a stereo camera system; the camera is facing a steep wall. On the left is the height map and on the right is the dilated map. On the right side it seems the wall has been "pushed back".
What would be the best approach to eliminate the internal border conditions? I thought about dilating the values with a special function that ignores the "non-available pixels" (perhaps represented by 0 or -1) and takes the average of the surrounding available pixel values (if available). Is there such a function in OpenCV?
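I'm not aware of a single OpenCV call that does exactly this, but the averaging idea described above can be approximated with normalized convolution, i.e. two box blurs and a division. A rough sketch, assuming missing pixels are marked with 0:

```python
import cv2
import numpy as np

def fill_missing(height, ksize=5, missing_value=0):
    """Grow values into the gaps by averaging only the available neighbours."""
    valid = (height != missing_value).astype(np.float32)
    data = height.astype(np.float32) * valid

    summed = cv2.blur(data, (ksize, ksize))    # local mean of (value * validity)
    weight = cv2.blur(valid, (ksize, ksize))   # local fraction of valid pixels

    filled = np.where(weight > 0, summed / np.maximum(weight, 1e-6), missing_value)
    # Keep original values where they exist; only the gaps get the averaged value.
    return np.where(valid > 0, height.astype(np.float32), filled)

# Slopes can then be taken on the filled map (e.g. with cv2.Sobel) and the
# originally-missing region masked out again so no fabricated gradients remain.
```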
The application of Konolige's block matching algorithm is not sufficiently explained in the OpenCV documentation. The parameters of CvStereoBMState influence the accuracy of the disparities calculated by cv::StereoBM. However, those parameters are not documented. I will list those parameters below and describe what I understand. Maybe someone can add a description of the parameters which are unclear.
preFilterType: Determines which filter is applied to the image before the disparities are calculated. Can be CV_STEREO_BM_XSOBEL (Sobel filter) or CV_STEREO_BM_NORMALIZED_RESPONSE (maybe differences to mean intensity???)
preFilterSize: Window size of the prefilter (width = height of the window, negative value)
preFilterCap: Clips the output to [-preFilterCap, preFilterCap]. What happens to the values outside the interval?
SADWindowSize: Size of the compared windows in the left and in the right image, where the sums of absolute differences are calculated to find corresponding pixels.
minDisparity: The smallest disparity that is taken into account. Default is zero; it should be set to a negative value if negative disparities are possible (depends on the angle between the cameras' views and the distance of the measured object from the cameras).
numberOfDisparities: The disparity search range [minDisparity, minDisparity+numberOfDisparities].
textureThreshold: Calculate the disparity only at locations where the texture is larger than (or at least equal to?) this threshold. How is texture defined??? Variance in the surrounding window???
uniquenessRatio: Cited from calib3d.hpp: "accept the computed disparity d* only if SAD(d) >= SAD(d*)*(1 + uniquenessRatio/100.) for any d != d* +/- 1 within the search range."
speckleRange: Unsure.
trySmallerWindows: ???
roi1, roi2: Calculate the disparities only in these regions??? Unsure.
speckleWindowSize: Unsure.
disp12MaxDiff: Unsure, but a comment in calib3d.hpp says that a left-right check is performed. Guess: pixels are matched from the left image to the right image and from the right image back to the left image. The disparities are only valid if the distance between the original left pixel and the back-matched pixel is smaller than disp12MaxDiff.
speckleWindowSize and speckleRange are parameters for the function cv::filterSpeckles. Take a look at OpenCV's documentation.
cv::filterSpeckles is used to post-process the disparity map. It replaces blobs of similar disparities (where the difference between two adjacent values does not exceed speckleRange) whose size is less than or equal to speckleWindowSize (the number of pixels forming the blob) with the invalid disparity value (either short -16 or float -1.f).
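As a rough illustration in the Python bindings (the image paths and all numbers are arbitrary placeholders, not recommendations), the post-processing call looks like this:

```python
import cv2

# Illustrative post-processing of a StereoBM disparity map with filterSpeckles.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = bm.compute(left, right)   # CV_16S disparities, scaled by 16

# Blobs of at most 200 pixels whose internal disparity variation stays within
# 32 (i.e. 2 pixels * 16) are overwritten with the invalid value -16.
cv2.filterSpeckles(disp, -16, 200, 32)
```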
The parameters are better described in the Python tutorial on depth map from stereo images. The parameters seem to be the same.
texture_threshold: filters out areas that don't have enough texture for reliable matching.
Speckle range and size: Block-based matchers often produce "speckles" near the boundaries of objects, where the matching window catches the foreground on one side and the background on the other. In this scene it appears that the matcher is also finding small spurious matches in the projected texture on the table. To get rid of these artifacts we post-process the disparity image with a speckle filter controlled by the speckle_size and speckle_range parameters. speckle_size is the number of pixels below which a disparity blob is dismissed as "speckle." speckle_range controls how close in value disparities must be to be considered part of the same blob.
Number of disparities: How many pixels to slide the window over. The larger it is, the larger the range of visible depths, but more computation is required.
min_disparity: the offset from the x-position of the left pixel at which to begin searching.
uniqueness_ratio: Another post-filtering step. If the best matching disparity is not sufficiently better than every other disparity in the search range, the pixel is filtered out. You can try tweaking this if texture_threshold and the speckle filtering are still letting through spurious matches.
prefilter_size and prefilter_cap: The pre-filtering phase, which normalizes image brightness and enhances texture in preparation for block matching. Normally you should not need to adjust these.
Also check out this ROS tutorial on choosing stereo parameters.
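For reference, here is a hedged sketch of how the parameters discussed above map onto the current Python bindings; every number is a placeholder, and the image paths are assumptions:

```python
import cv2

# Sketch mapping the parameters above onto OpenCV's Python API (OpenCV >= 3).
# All values are placeholders to show which setter controls which behaviour.
left_gray = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified pair,
right_gray = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # single channel

bm = cv2.StereoBM_create(numDisparities=96, blockSize=15)  # blockSize = SADWindowSize
bm.setMinDisparity(0)           # start of the disparity search range
bm.setPreFilterType(cv2.StereoBM_PREFILTER_XSOBEL)
bm.setPreFilterSize(9)
bm.setPreFilterCap(31)          # clip value of the prefiltered image
bm.setTextureThreshold(10)      # skip low-texture areas
bm.setUniquenessRatio(15)       # reject ambiguous matches
bm.setSpeckleWindowSize(100)    # max blob size still treated as speckle
bm.setSpeckleRange(32)          # max disparity variation inside a blob
bm.setDisp12MaxDiff(1)          # left-right consistency check
disparity = bm.compute(left_gray, right_gray)
```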
I'm using SharpDX and I want to do antialiasing on the depth buffer. I need to store the depth buffer as a texture to use it later. So is it a good idea if this texture is a Texture2DMS? Or should I take another approach?
What I really want to achieve is:
1) Depth buffer scaling
2) Depth test supersampling
(terms I found in section 3.2 of this paper: http://gfx.cs.princeton.edu/pubs/Cole_2010_TFM/cole_tfm_preprint.pdf)
The paper calls for a depth pre-pass. Since this pass requires no color, you should leave the render target unbound, and use an "empty" pixel shader. For depth, you should create a Texture2D (not MS) at 2x or 4x (or some other 2Nx) the width and height of the final render target that you're going to use. This isn't really "supersampling" (since the pre-pass is an independent phase with no actual pixel output) but it's similar.
For the second phase, the paper calls for doing multiple samples of the high-resolution depth buffer from the pre-pass. If you followed the sizing above, every pixel will correspond to some (2N)^2 depth values. You'll need to read these values and average them. Fortunately, there's a hardware-accelerated way to do this (called PCF) using SampleCmp with a COMPARISON sampler type. This samples a 2x2 stamp, compares each value to a specified value (pass in the second-phase calculated depth here, and don't forget to add some epsilon value (e.g. 1e-5)), and returns the averaged result. Do 2x2 stamps to cover the entire area of the first-phase depth buffer associated with this pixel, and average the results. The final result represents how much of the current line's spine corresponds to the foremost depth of the pre-pass. Because of the PCF's smooth filtering behavior, as lines become visible, they will slowly fade in, as opposed to the aliased "dotted" line effect described in the paper.
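To illustrate just the averaging arithmetic (not the SharpDX/HLSL side), here is a small CPU sketch; the comparison direction and epsilon handling are assumptions based on the description above, with smaller depth meaning closer:

```python
import numpy as np

# CPU sketch of the averaging step: for one output pixel, take its (2N)^2 block
# of high-resolution pre-pass depths, compare each against the current fragment
# depth (with a small epsilon bias), and average the pass/fail results.
# On the GPU, SampleCmp with a comparison sampler does this per 2x2 stamp.
def coverage(prepass_depth, px, py, fragment_depth, n=2, eps=1e-5):
    block = prepass_depth[py * n:(py + 1) * n, px * n:(px + 1) * n]
    passes = block + eps >= fragment_depth     # fragment is at least as close
    return passes.mean()                       # fraction in [0, 1], like PCF

# Example: a 2x-resolution pre-pass depth buffer for a 2x2 render target.
prepass = np.array([[0.30, 0.31, 0.90, 0.90],
                    [0.30, 0.32, 0.90, 0.90],
                    [0.90, 0.90, 0.90, 0.90],
                    [0.90, 0.90, 0.90, 0.90]], dtype=np.float32)
print(coverage(prepass, px=0, py=0, fragment_depth=0.30))   # fully visible  -> 1.0
print(coverage(prepass, px=0, py=0, fragment_depth=0.315))  # partly hidden  -> 0.25
print(coverage(prepass, px=0, py=0, fragment_depth=0.95))   # fully occluded -> 0.0
```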
A single-channel image is my input (default IPL_DEPTH_8U).
I am multiplying each pixel of my input image by scalar floating-point numbers like 2.8085 (as part of my algorithm).
So I need to increase the depth and change the image type to IPL_DEPTH_64F.
But whenever I try to change my image datatype to IPL_DEPTH_64F and use a double* to access each pixel, my program execution stops abruptly, complaining that
"file.exe has stopped working. A problem caused the program to stop working."
Does this mean my processor is not able to handle the double pointer arithmetic???
You have to create a new image - I'd recommend making a new image of depth IPL_DEPTH_64F and setting each pixel to the appropriate value (2.8085*value).
Also, can you post the code you used?
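As an illustration of that advice, here is the same idea in OpenCV's NumPy-based Python interface (the question uses the old C API, so this only shows the concept of allocating the deeper type before scaling, not a drop-in fix; the file name is a placeholder):

```python
import cv2
import numpy as np

# Allocate the deeper type first, then scale,
# instead of writing doubles into an 8-bit buffer.
img_u8 = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # 8-bit, single channel
img_f64 = img_u8.astype(np.float64) * 2.8085              # new 64-bit float image
```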