Gaps when using geopandas buffer on closed polyline geometries

I am working on a problem where I need to search coastal areas using Sentinel-2 data. I want to use the OSM coastlines shapefile (https://osmdata.openstreetmap.de/data/coastlines.html) to select coastal areas of the raster using a 5 km buffer.
Everything seems to work, except I get gaps where I apply the buffer to smaller islands. I wonder if this is because the polyline is closed, or because the area is too small. I do not have this issue with the main coastline.
The code I am using to create the buffer is this:
import rasterio
import geopandas as gpd
from pyproj import Transformer

def get_osm_for_rstr(rstr, path_to_osm, target_crs=4326):
    # Get the raster bounding box (in the form (left, bottom, right, top))
    bbox = rstr.bounds
    # Transform from the raster CRS to the OSM coastline CRS (EPSG:4326);
    # always_xy=True keeps the axis order as (x, y) = (lon, lat)
    transformer = Transformer.from_crs(rstr.crs, target_crs, always_xy=True)
    x_min, y_min = transformer.transform(bbox.left, bbox.bottom)
    x_max, y_max = transformer.transform(bbox.right, bbox.top)
    # Load the OSM data intersecting the raster bounding box
    coast_wgs = gpd.read_file(path_to_osm, bbox=(x_min, y_min, x_max, y_max))
    # Convert the coastline back to the raster projection system
    coast_utm = coast_wgs.to_crs(rstr.crs)
    return coast_utm

rstr = rasterio.open(rstr_file)
coast_gdf = get_osm_for_rstr(rstr, path_to_osm)
coast_buffer = coast_gdf.buffer(1000)
The image below shows the issue I am having. The red lines show the OSM coastline shapefile (the features are LINESTRINGs). The light green shows a 1 km buffer added. I have a gap in the buffer at the centre of the small island, which means I would lose this area if I crop the raster using the coastline buffer.
Does anyone have any suggestions why this is happening? How can I include these interior areas in the coastline buffer? Thanks in advance for any advice.
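One hedged way to include the interiors (a sketch, not something from the original post): buffering a LINESTRING only sweeps a band around the line itself, so a closed island ring narrower than twice the buffer distance leaves an unbuffered hole at its centre. If the island rings are closed, you can polygonize them and union the filled polygons with the line buffer. The 1000 m distance is carried over from the question; everything else here is an assumption:

from shapely.ops import polygonize, unary_union

# Buffer the coastline LINESTRINGs as before (1 km in the raster's CRS)
line_buffer = coast_gdf.buffer(1000).unary_union

# Closed rings (small islands) become filled polygons; open linework
# such as the clipped mainland coastline is ignored by polygonize()
island_polys = unary_union(list(polygonize(coast_gdf.geometry)))

# Union the filled islands with the line buffer so interiors are kept
coast_buffer = line_buffer.union(island_polys)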

Related

Remove the spikes/triangles on an image

I have an image with spikes/small triangles on the outline border, like this:
I would like to remove the unwanted spikes/small triangles:
And output the image like this:
I have searched many posts on the web using OpenCV/Emgu CV, but no luck.
The problem is that the contour points are not equally spaced, so I cannot use any peak-finding functions to locate and remove the spikes.
I have also used a cubic spline to smooth the image, but it either destroyed the original image shape (too smooth) or had little effect on the spikes.
Could anyone who has ideas help me with this issue?
As suggested by Cris, a morphological closing is a good starting point.
In the picture below, I performed a closing with a 49×49 octagonal kernel (circular would be better) and took the difference with the original.
If you filter out the blobs by size (and possibly by shape), you get the true spikes, which you can then subtract. The rest of the shape remains unchanged.
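A minimal sketch of that idea in Python/OpenCV (the file name, kernel size, and area threshold are assumptions, not values from the answer):

import cv2
import numpy as np

# Binarize the input shape
mask = cv2.imread('shape.png', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Morphological closing with a large elliptical kernel (circular preferred)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (49, 49))
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Pixels changed by the closing: spike regions plus larger concavities
diff = cv2.absdiff(closed, mask)

# Keep only the small blobs (the true spikes); large blobs are real detail
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(diff)
spikes = np.zeros_like(diff)
for i in range(1, n_labels):
    if stats[i, cv2.CC_STAT_AREA] < 500:  # assumed size threshold
        spikes[labels == i] = 255

# Toggle only the spike pixels; the rest of the shape is left unchanged
result = cv2.bitwise_xor(mask, spikes)
cv2.imwrite('despiked.png', result)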
Something like this will also help:
// contours is your list of contours after findContours()
// idx is the index of your contour
// eps regulates how much the contour is approximated
cv::Mat approx;
double eps = cv::arcLength(contours[idx], true) * 0.05;
cv::approxPolyDP(contours[idx], approx, eps, true);
approx.copyTo(contours[idx]);
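A rough Python equivalent of the same approximation step, for reference (the file name, contour index, and the 0.05 epsilon factor simply mirror the C++ snippet; they are assumptions, not tuned values):

import cv2

mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
idx = 0
# Epsilon proportional to the contour perimeter, as in the C++ version
eps = 0.05 * cv2.arcLength(contours[idx], True)
approx = cv2.approxPolyDP(contours[idx], eps, True)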
Maybe this is what you want (it's not accurate at all).
OpenCV + Python
# Import preprocessors
import os
import cv2
import numpy as np

# Read the image
base_dir = os.path.abspath(os.path.dirname(__file__))
im = cv2.imread(os.path.join(base_dir, 'im.png'))

# Remove the triangles: heavy dilation followed by the same erosion
# fills the spike notches in
kernel = np.ones((5, 5), np.uint8)
factor = 11
im = cv2.dilate(im, kernel, iterations=factor)
im = cv2.erode(im, kernel, iterations=factor)

# Save the processed image
cv2.imwrite(os.path.join(base_dir, 'spike_res.png'), im)
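Note that dilation followed by erosion with the same kernel and iteration count is exactly a morphological closing, so this is the same idea as the closing-based answer above, just applied aggressively.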
Update:
Maybe not related to the OpenCV tag, but with .NET you can also use the Erosion and Dilation filters from AForge.

How to filter image to throw away stray pixels?

I have image data that comprises mostly roundish images surrounded by boring black background. I am handling this by grabbing the bounding box using PIL's getbbox(), and then cropping. This gives me some satisfaction, but tiny specks of grey within the sea of boring black cause getbbox() to return bounding boxes that are too large.
A deliberately generated problematic image is attached; note the single dark-grey pixel in the lower right. I have also included a more typical "real world" image.
Generated problematic image
Real-world image
I have done some faffing around with UnsharpMask and SHARP and BLUR filters in the PIL ImageFilter module with no success.
I want to throw out those stray gray pixels and get a nice bounding box, but without hosing my image data.
You want to run a median filter on a copy of your image to get the bounding box, then apply that bounding box to your original, unblurred image. So:
copy your original image
apply a median blur filter to the copy - probably 5x5 depending on the size of the speck
get bounding box
apply bounding box to your original image.
Here is some code to get you started:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image, ImageFilter
# Load image
im = Image.open('eye.png').convert('L')
orig = im.copy() # Save original
# Threshold to make black and white
thr = im.point(lambda p: p > 128 and 255)
# Following line is just for debug
thr.save('result-1.png')
# Median filter to remove noise
fil = thr.filter(ImageFilter.MedianFilter(3))
# Following line is just for debug
fil.save('result-2.png')
# Get bounding box from filtered image
bbox = fil.getbbox()
# Apply bounding box to original image and save
result = orig.crop(bbox)
result.save('result.png')

How to get back the co-ordinate points corresponding to the intensity points obtained from a faster r-cnn object detection process?

As a result of the Faster R-CNN method of object detection, I have obtained a set of boxes of intensity values corresponding to the region containing the object (each bounding box can be thought of as a 3D matrix, with a depth of 3 for the RGB intensities plus a width and a height, which can then be converted into a 2D matrix by taking the grayscale). What I want to do is obtain the corresponding coordinate points in the original image for each cell of intensity inside the bounding box. Any ideas how to do so?
From what I understand, you have an R-CNN model that outputs cropped pieces of the input image, and you now want to trace those output crops back to their coordinates in the original image.
What you can do is simply use a patch-similarity measure to find the original position.
Since the output crop should look exactly like itself in the original image, just use a pixel-based distance:
Find the place in the image with the smallest distance (it should be zero), and from that you can read off your desired coordinates.
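Concretely, for a crop C of size h×w, a sum-of-absolute-differences distance at offset (x, y) is d(x, y) = Σᵢⱼ |I(x+i, y+j) − C(i, j)|, which is minimized (at zero) where the crop was taken from.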
In python:
import numpy as np

# Slide the crop over the original image and keep the offset with the
# smallest sum of absolute differences (it should be zero at the match)
d_min = 10**6
crop_size = crop.shape
for x in range(org_image.shape[0] - crop_size[0]):
    for y in range(org_image.shape[1] - crop_size[1]):
        window = org_image[x:x + crop_size[0], y:y + crop_size[1]]
        d = np.sum(np.abs(window.astype(np.int64) - crop.astype(np.int64)))
        if d <= d_min:
            d_min = d
            coord = [x, y]
However, your model should already have that information available (after all, it crops the output based on some coordinates). Maybe add some more information about your implementation.

Visualizing OpenCV KeyPoints

I am learning OpenCV and at the moment I am trying to understand the underlying data stored in a KeyPoint so that I can better utilize that data for an application I'm working on.
So far I have been going through these two pages:
http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html?highlight=featuredetector#FeatureDetector
http://docs.opencv.org/doc/tutorials/features2d/feature_detection/feature_detection.html
When I follow the tutorial, however, using drawKeypoints(), the points are all the same size and shape, and are drawn with a seemingly arbitrary color.
I guess I could iterate through the attributes for each key point: draw a circle, draw an arrow (for the angle), give it a color based on the response, etc. But I figured there had to be a better way.
Is there a built-in method or other approach similar to drawKeypoints() that will help me more efficiently visualize the KeyPoints of an image?
Yes, there is a method to perform your task. As the documentation says:
For each keypoint the circle around keypoint with keypoint size and orientation will be drawn
If you are using Java, you can simply specify the type of keypoints:
Features2d.drawKeypoints(image1, keypoints1, imageOut2,new Scalar(2,254,255),Features2d.DRAW_RICH_KEYPOINTS);
In C++:
drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS );
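For completeness, a hedged Python equivalent (the image path and the ORB detector are assumptions; any detector that fills in keypoint size and angle works):

import cv2

img = cv2.imread('img.png')
kps = cv2.ORB_create().detect(img, None)
# DRAW_RICH_KEYPOINTS draws each keypoint's size and orientation
out = cv2.drawKeypoints(img, kps, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('keypoints.png', out)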
I had a similar problem and decided to share my solution, because I wanted to customize the shape of the points that are drawn.
You can replace the cv2.circle line with whatever you want. im is the input image you want the points to be drawn on, keyp are the keypoints you want to draw, col is the line color, and th is the thickness of the circle edge.
import cv2
import numpy as np
import matplotlib.pyplot as plt

def drawKeyPts(im, keyp, col, th):
    for curKey in keyp:
        # Keypoint centre and size come back as floats
        x = int(curKey.pt[0])
        y = int(curKey.pt[1])
        size = int(curKey.size)
        cv2.circle(im, (x, y), size, col, thickness=th, lineType=8, shift=0)
    plt.imshow(im)
    return im

imWithCircles = drawKeyPts(origIm.copy(), keypoints, (0, 255, 0), 5)
You can iterate through the vector of keypoints that you detect and draw (for example) a circle on every KeyPoint.pt, with a radius proportional to KeyPoint.size and a color based on KeyPoint.response. This is of course just an example; you could write more complicated drawing functions based on the octave and angle of the KeyPoint (if your detector gives that output).
Hope this helps.
Hello, here is my code. @Alex
import cv2
import numpy as np

def drawKeyPts(im, keyp, col, th):
    # OpenCV draws rich keypoints with 4 sub-pixel shift bits
    draw_shift_bits = 4
    draw_multiplier = 1 << draw_shift_bits
    LINE_AA = 16
    im = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)
    for curKey in keyp:
        center = (int(np.round(curKey.pt[0] * draw_multiplier)),
                  int(np.round(curKey.pt[1] * draw_multiplier)))
        radius = int(np.round(curKey.size / 2 * draw_multiplier))
        cv2.circle(im, center, radius, col, thickness=th,
                   lineType=LINE_AA, shift=draw_shift_bits)
        # Draw the orientation line if the keypoint has an angle
        if curKey.angle != -1:
            srcAngleRad = curKey.angle * np.pi / 180.0
            orient = (int(np.round(np.cos(srcAngleRad) * radius)),
                      int(np.round(np.sin(srcAngleRad) * radius)))
            cv2.line(im, center,
                     (center[0] + orient[0], center[1] + orient[1]),
                     col, 1, LINE_AA, draw_shift_bits)
    cv2.imshow('name1', im)
    cv2.waitKey()
    return im

Image in Image Algorithm

I need an algorithm written in any language to find an image inside of an image, including at different scales. Does anyone know a starting point for solving a problem like this?
For example:
I have an 800x600 image, and in that image is a yellow ball measuring 180 pixels in circumference. I need to be able to find this ball with a search pattern of a yellow ball having a circumference of 15 pixels.
Thanks
Here's an algorithm:
Split the image into RGB and take the blue channel. You will notice that areas that were yellow in the color image are now dark in the blue channel. This is because blue and yellow are complementary colors.
Invert the blue channel
Create a greyscale search pattern with a circle that's the same size as what's in the image (180 pixels in circumference). Make it a white circle on a black background.
Calculate the cross-correlation of the search pattern with the inverted blue channel.
The cross-correlation peak will correspond to the location of the ball.
Here's the algorithm in action (the original post showed images of the RGB image and its R channel, the G and B channels, and the inverted B channel alongside the search pattern):
Python + OpenCV code:
# Legacy OpenCV 1.x 'cv' API, Python 2
import cv

if __name__ == '__main__':
    image = cv.LoadImage('ball-b-inv.png')
    template = cv.LoadImage('ball-pattern-inv.png')

    image_size = cv.GetSize(image)
    template_size = cv.GetSize(template)
    result_size = [s[0] - s[1] + 1 for s in zip(image_size, template_size)]

    result = cv.CreateImage(result_size, cv.IPL_DEPTH_32F, 1)
    cv.MatchTemplate(image, template, result, cv.CV_TM_CCORR)

    min_val, max_val, min_loc, max_loc = cv.MinMaxLoc(result)
    print max_loc
Result:
misha@misha-desktop:~/Desktop$ python cross-correlation.py
(72, 28)
This gives you the top-left coordinate of the first occurrence of the pattern in the image. Add the radius of the circle to both the x and y coordinates if you want to find the center of the circle.
You should take a look at OpenCV, an open source computer vision library - this would be a good starting point. Specifically check out object detection and the cvMatchTemplate method.
A version of one of the previous posts, made with OpenCV 3 and Python 3:
import cv2
import sys

image = cv2.imread(sys.argv[1])
template = cv2.imread(sys.argv[2])
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
print(max_loc)
save as file.py and run as:
python file.py image pattern
A simple starting point would be the Hough transform, if you want to find circles.
However, there is a whole research area around this subject, called object detection and recognition. The state of the art has advanced significantly over the past decade.
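As a rough illustration of the Hough-circle route (the file name and every parameter value here are assumptions chosen for the sketch, not tuned values):

import cv2
import numpy as np

img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)  # Hough circles are sensitive to noise
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)
if circles is not None:
    # circles has shape (1, N, 3): x, y, radius for each detection
    for x, y, r in np.round(circles[0]).astype(int):
        print('circle at', (x, y), 'radius', r)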
