I am currently looking at some data that lies on a coastline. Unfortunately, when calculating the aspect from a DEM I get very noisy aspect values, especially over the sea:
I tried to smooth my DEM with the "Filter" tool, but the result still looks very ugly over the water. Is there an easy fix for this? The sea and the riverbed should be flat (-1).
The reason for this looks like sea waves.
The best fix is to clip out the sea and the river belt, divide that raster by itself and multiply by -1 (in the Raster Calculator), and then mosaic it with your aspect raster. Here are the detailed steps (a scripted sketch follows):
Draw polygons for the sea and the river belt (flat.shp).
Draw polygons for the full study area (stdarea.shp).
Clip stdarea.shp with flat.shp (out = Aspect.shp).
Clip the aspect raster to the area outside the river and sea (raster clip with Aspect.shp; out = Aspect.tif).
Clip the sea and river areas out of the aspect raster (raster clip with flat.shp; out = flats.tif).
Open the Raster Calculator and do this:
("flats.tif" / "flats.tif") * -1 (out=flcalculated.tif)
Mosaic To New Raster with Aspect.tif and flcalculated.tif (out = FixedAspect.tif).
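For reference, here is a rough Python sketch of the raster-calculator and mosaic steps (assuming ArcGIS with arcpy and the Spatial Analyst extension; paths and pixel type are placeholders):

import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")

# flats / flats = 1 wherever there is data; multiplying by -1 marks those cells as flat (-1).
flats = Raster("flats.tif")
flcalculated = (flats / flats) * -1
flcalculated.save("flcalculated.tif")

# Mosaic the land aspect and the flat (-1) raster into a single output.
arcpy.MosaicToNewRaster_management(
    input_rasters=["Aspect.tif", "flcalculated.tif"],
    output_location=".",
    raster_dataset_name_with_extension="FixedAspect.tif",
    pixel_type="32_BIT_FLOAT",
    number_of_bands=1,
)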
Best regards.
Point space:
Dilated:
I used dilation (a morphological operation), but this is not the result I expected. I want smoother regions, not squares. How can I define contours and fill them in? Maybe I could use a k-means-like algorithm to get regions from the points. Help, please.
I need to find the intrinsic parameters of a CCTV camera using a set of historic footage images (that is all I have; I have no control over the environment, so no chessboard calibration).
The good news is that I have access to some ground-truth real-world coordinates, visible in most of the images.
I am just wondering whether there is any solid approach for coming up with the camera's intrinsic parameters.
P.S. I already found the homography matrix using cv2.findHomography in Python.
P.S. I have already tested QTcalib on two machines, but it is unable to visualize the images in the first place. Not sure what is wrong with it.
Thanks in advance.
The intrinsic parameters contain fx, fy, cx, cy and skew, plus additional distortion parameters k1-k5 and r1-r2.
Assume you have no distortion and that cx and cy are exactly at the image center, with the image origin at the top left as usual. As you say, you know some ground-truth 3D points, and these 3D measurements are given with respect to the camera's optical axis. Such a 3D point P projects onto the image plane at a point p. The points P, p and O (the camera's optical center), together with the optical axis, form similar triangles, so:
fx / (p_x-cx) = P_z / P_x
fx = (p_x-cx) * P_z / P_x
The same goes for fy, and usually fx and fy are the same.
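As a minimal sketch of that calculation (my own illustration, assuming zero distortion, the principal point at the image center, and the 3D point already expressed in the camera frame):

def estimate_focal(p_px, P_cam, image_size):
    """Estimate (fx, fy) from one pixel observation of a known 3D point."""
    u, v = p_px                   # observed pixel coordinates
    X, Y, Z = P_cam               # 3D point in camera coordinates (Z along the optical axis)
    w, h = image_size
    cx, cy = w / 2.0, h / 2.0     # assumed principal point at the image center
    fx = (u - cx) * Z / X         # from fx / (p_x - cx) = P_z / P_x
    fy = (v - cy) * Z / Y
    return fx, fy

# Hypothetical example values, for illustration only.
print(estimate_focal(p_px=(1110.0, 602.5), P_cam=(1.2, 0.5, 8.0), image_size=(1920, 1080)))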
This is all under the ideal assumption that the camera has no distortion. If there is distortion, you need to find enough sample points spread across the whole image to model it, as shown below; one or two points won't give you the whole picture.
Some papers use tricks such as sea-surface vanishing lines (see the reference below; it is part of a series of works) or the vanishing points of well-aligned 3D buildings to detect the distortion. They start from the extrinsics and work towards the intrinsics, and after some trial and error they can produce a good guess, but this is still very much research and cannot be applied to general cases.
Ref: Han Wang, Wei Mou, Xiaozheng Mou, Shenghai Yuan, Soner Ulun, Shuai Yang and Bok-Suk Shin, "An Automatic Self-Calibration Approach for Wide Baseline Stereo Cameras Using Sea Surface Images", Unmanned Systems.
If all you have is a video and a few 3D points, your best bet is probably to matchmove it, that is, do a manually assisted bundle adjustment in a 3D computer graphics environment such as Blender. There are a lot of tutorials online on how to do it (example). To add the 3D points as constraints, build some shapes representing them in the virtual world (e.g. some small spheres), place them so that their relative positions match the ground truth you have, and then add them to the tracker solution.
I'm having trouble generating a decent-looking mesh from an image.
Here is an example of an image:
In my project I convert each pixel to a 3D point, with its height determined by how far it is from the center of the line.
Here is what it looks like when I have created a 3d mesh from the image:
The problem with the mesh is that there are a lot of triangles (and vertices) and it looks really blocky. I triangulate the points by simply walking through the 2D image and joining neighbouring pixels into triangles.
Are there any algorithms that could be used to generate something better looking (fewer triangles/vertices, smoother transitions)?
Why don't you just sample both the midline and the boundaries of the white region, and triangulate with a constraint that consecutive vertices of the midline form edges? To preserve the shape, the sampling should include all places where the midline and the boundaries "bend", i.e. all curvature changes.
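A rough sketch of that idea (my own illustration, using the Python bindings for Shewchuk's Triangle library; the sample points below are made up and would come from your midline/boundary extraction):

import numpy as np
import triangle  # pip install triangle

# Sampled midline and boundary points of the white region (placeholders).
midline = np.array([[0.0, 5.0], [4.0, 5.5], [8.0, 5.0], [12.0, 4.5]])
boundary = np.array([[0.0, 3.0], [12.0, 2.5], [12.0, 7.0], [0.0, 7.5]])
vertices = np.vstack([midline, boundary])

# Force consecutive midline vertices to be edges of the triangulation.
mid_edges = [[i, i + 1] for i in range(len(midline) - 1)]
# Close the boundary as a ring so the outline of the region is preserved.
off = len(midline)
bnd_edges = [[off + i, off + (i + 1) % len(boundary)] for i in range(len(boundary))]

mesh = triangle.triangulate(
    {"vertices": vertices, "segments": mid_edges + bnd_edges},
    "p",  # constrained triangulation of the given planar straight-line graph
)
print(mesh["triangles"])  # triangle indices into mesh["vertices"]

Heights can then be assigned per output vertex from its distance to the midline, so the mesh stays far coarser than one triangle per pixel.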
I have a question about achieving an effect like a lunar eclipse. The effect should look like the first few seconds of this GIF: just a black shadow that moves over the circle. Ideally there would be a function to which I can pass a percentage parameter to get that amount of shadow on the circle:
The problem I am facing is that my background is a gradient, so it is not possible to simply move a black circle over the moon to get the effect.
I tried something with CCClippingNode, but it does not look nice; furthermore, the clipping at the edges was always a bit pixelated.
I thought about using a GLSL shader to achieve the effect, but I am not very familiar with GLSL and I cannot find an example.
The effect is for an iPhone game. I use the cocos2d framework in version 3 (the current one).
Does somebody have an idea how to get this effect, or where I could start searching?
Thank you in advance
The physics behind it is simple: you change the light shining on the moon. So:
I would create a 1D gradient texture representing the lighting conditions
and compute the lighting for each rendered pixel of the moon.
You obviously have the 2D texture of the moon, so you need to obtain the position of each of its pixels inside the 1D lighting texture. If the moon is fully visible, you are in sunlight; when it is partially eclipsed, you are in the penumbra region; and during a total eclipse you are in the umbra region. So just compute the position of the moon's middle point, and for the rest use the relative position along the moon's direction of motion.
Now just multiply the moon's surface texture by the lighting texture and render the output; a small sketch of this is below.
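As a minimal CPU-side sketch of this idea in Python/NumPy (in the game it would live in a GLSL fragment shader; the gradient shape, the names, and a left-to-right motion direction are my assumptions):

import numpy as np

def apply_eclipse(moon_rgb, phase, ramp=0.15):
    """moon_rgb: HxWx3 float array in [0, 1]; phase in [0, 1] is the eclipse amount."""
    h, w, _ = moon_rgb.shape
    # Relative position of every column along the moon's motion direction.
    t = np.linspace(0.0, 1.0, w)
    # 1D lighting texture: 0 inside the shadow, 1 in sunlight, a smooth ramp in between.
    lighting = np.clip((t - phase) / ramp, 0.0, 1.0)
    # Multiply the moon surface by the lighting value and return the shaded image.
    return moon_rgb * lighting[np.newaxis, :, np.newaxis]

A fragment shader would do the same per fragment: look up the lighting value from the relative position along the motion direction and multiply it with the moon texture sample.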
Once that works, you can add a curvature correction.
Now you have linearly cut moon phases, but the real phases are curved, as the lighting conditions also vary with the radial distance from the motion direction and the moon's center. To fix this you can either
convert the lighting to a 2D texture,
or shift the texture coordinate by some amount dependent on the radial distance.
Even with high-resolution textures (4k*2k) of the Earth, the mapping at the poles is distorted. Is it possible with THREE.js to place a square texture centered directly on the poles of a sphere and rotate it accordingly?
Example map: http://rapidfire.sci.gsfc.nasa.gov/imagery/subsets/?mosaic=Arctic.2012154.terra.4km.jpg
Sorry, no code, looking for a starting point.
Maybe you should 'correct' your rectangular texture to avoid the distortion at the poles; a sketch of one way to do this is below.
This link might be of help for that: http://local.wasp.uwa.edu.au/~pbourke/texture_colour/texturemap/
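One possible way to do that correction, sketched in Python/NumPy (my own illustration, not from the linked page): resample the polar cap of the equirectangular image into a square azimuthal texture centered on the pole, which can then be placed over the pole and rotated as needed.

import numpy as np
from PIL import Image

def polar_cap_texture(equirect_path, out_size=1024, cap_deg=30.0):
    src = np.asarray(Image.open(equirect_path))     # equirectangular lat/lon image
    h, w = src.shape[:2]
    # Pixel grid of the square output texture, centered at the north pole.
    col, row = np.meshgrid(np.arange(out_size), np.arange(out_size))
    x = (col - out_size / 2.0) / (out_size / 2.0)   # -1 .. 1
    y = (row - out_size / 2.0) / (out_size / 2.0)
    r = np.sqrt(x * x + y * y)                      # 0 at the pole, 1 at the cap edge
    lat = 90.0 - r * cap_deg                        # degrees
    lon = np.degrees(np.arctan2(y, x))              # -180 .. 180
    # Map lat/lon back to equirectangular pixel coordinates and sample.
    src_row = np.clip(((90.0 - lat) / 180.0) * (h - 1), 0, h - 1).astype(int)
    src_col = (((lon + 180.0) / 360.0) * (w - 1)).astype(int) % w
    return Image.fromarray(src[src_row, src_col])

# polar_cap_texture("earth_equirect.jpg").save("north_pole_cap.png")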
In Cesium, a WebGL globe & map engine, we fixed the poles by creating a screen-space quad over where each pole will be, and using a shader to render the pole correctly. The code is in this pull request. We don't yet offer high-res inlay images, but that's coming.