GeoJSON: how to detect long tunnels between two or more areas in a single polygon

The whole question is in the title; I will just give an example of "bad" data. In the image I marked these long tunnels with a red marker. How can I check whether a polygon contains such tunnels?
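One common way to flag such narrow corridors (not the only one) is morphological erosion: shrink the polygon inward by half of the minimum acceptable width and check whether it falls apart into several pieces. A minimal sketch with shapely, assuming the geometry is loaded from the GeoJSON and that the threshold is in the same units as the coordinates (reproject first if they are lon/lat degrees):

```python
from shapely.geometry import shape, MultiPolygon

def has_narrow_tunnel(geojson_geometry, threshold):
    """Return True if eroding the polygon by `threshold` splits it apart."""
    polygon = shape(geojson_geometry)      # GeoJSON dict -> shapely geometry
    eroded = polygon.buffer(-threshold)    # negative buffer shrinks every edge inward
    if eroded.is_empty:
        return False                       # whole polygon is thinner than 2 * threshold
    return isinstance(eroded, MultiPolygon) and len(eroded.geoms) > 1
```

If a corridor is narrower than twice the threshold, the erosion removes it and the remaining areas become separate parts of a MultiPolygon, which is the signal you are looking for.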

Related

In ARKit, can I select any real-world object and bind virtual objects to it, using a depth map or other techniques?

I am wondering how to implement this in ARKit: I have a bottle on the table and I want to add a "small hat" on top of it, by circling the target object as I did with the red pen in the picture.
Also, the bottle should remain selectable even when it is partly hidden. Can I implement this binding between any real object and virtual objects in ARKit, perhaps using a depth map or something similar?
Thanks a lot!

UIButton with a custom asymmetric clickable region?

I have an image with various asymmetric regions; is it possible to place a button above each region?
The image will be something similar to this: https://cdn.dribbble.com/users/1557638/screenshots/4367307/proactive_d.png
After way too much research, I've decided to go with the following solution for an image with multiple asymmetric clickable regions:
1. Duplicate the image file, color each region with a different color, and store it as an asset.
2. Place a simple tap gesture recognizer over the displayed image; when tapped, get the coordinate relative to the displayed image.
3. Get the color from the map image at the tapped coordinate.
4. Compare it to a predefined enum.
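The heart of this approach is step 3: sampling the color of the duplicated "map" image at the tapped point and translating it into a region. A language-agnostic sketch of that lookup, shown in Python with Pillow purely for illustration (on iOS you would read the pixel from the CGImage backing the map asset); the file name, colors, and region names below are made up:

```python
from PIL import Image

# Hypothetical colors painted onto the duplicated "hit map" image and the
# regions they stand for (names and colors are illustrative only).
REGION_BY_COLOR = {
    (255, 0, 0): "region_a",
    (0, 255, 0): "region_b",
    (0, 0, 255): "region_c",
}

def region_at(tap_x, tap_y, hit_map_path="hit_map.png"):
    """Return the region under the tapped pixel, or None for the background.

    tap_x / tap_y must already be converted from view coordinates to the
    hit map's pixel coordinates if the displayed image is scaled.
    """
    hit_map = Image.open(hit_map_path).convert("RGB")
    color = hit_map.getpixel((int(tap_x), int(tap_y)))
    return REGION_BY_COLOR.get(color)
```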

Registration mark to support image registration

I have a paper document that will be scanned, and then I'll want to perform image registration (image alignment) on different scans of different copies of the document.
I've noticed that paper forms often have a "registration mark" printed in the four corners of the piece of paper (a cross-hairs: a circle with a plus intersecting it). It looks something like this:
In my case, I have the freedom to choose the exact shape of the registration mark, to make it as easy as possible for the image processing code to detect the location of the four registration marks. My goal is for the code to detect registration marks as efficiently and robustly as possible, given that the image might be slightly rotated/skewed/translated. Is the "cross-hairs" shape shown above optimal? Are there better marks that are easier to algorithmically locate?
Black-and-white only; I can't print or scan in color, unfortunately.
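For what it's worth, one common way to locate such a mark, assuming rotation and skew stay small as stated above, is normalized cross-correlation template matching against an image of the mark itself. A hedged OpenCV sketch in Python; the file names are placeholders:

```python
import cv2

# Find the best match of a registration-mark template in a scanned page
# using normalized cross-correlation. File names are placeholders.
page = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
mark = cv2.imread("registration_mark.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(page, mark, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# max_loc is the top-left corner of the best match; run this on each corner
# region of the page (or take the four strongest peaks) to find all four marks.
print("best match at", max_loc, "score", max_val)
```

This only tolerates slight rotation; larger skews would call for a rotation-tolerant detector.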

Detect text in a scanned page

I'm trying to detect the text in a scanned page and get the coordinates of it.
See the attached image for an example of a scanned page.
I need the vertical coordinates for splitting the page content from the useless parts, and then to detect the text's coordinates.
What kind of tools could I use to split the page and detect the text's coordinates?
Take a look at the Stroke Width Transform.
See also this SO answer.
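If all you need first are the vertical coordinates that separate text bands from the rest of the page, a much simpler alternative to the Stroke Width Transform is a horizontal projection profile: binarize the page and measure how much ink each row contains. A minimal OpenCV sketch, assuming a reasonably clean, upright scan; the file name and the `min_ink` threshold are values you would have to tune:

```python
import cv2
import numpy as np

# Find the vertical extent of text bands via a horizontal projection profile.
page = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

ink_per_row = binary.sum(axis=1) / 255   # number of dark (text) pixels per row
min_ink = 5                              # rows with fewer text pixels count as blank

rows_with_text = np.where(ink_per_row > min_ink)[0]
bands = []                               # (top_row, bottom_row) of each text band
if rows_with_text.size:
    start = prev = rows_with_text[0]
    for r in rows_with_text[1:]:
        if r > prev + 1:                 # a blank gap closes the current band
            bands.append((start, prev))
            start = r
        prev = r
    bands.append((start, prev))

print("text bands:", bands)
```

Within each band you can apply the same idea column-wise, or fall back to the Stroke Width Transform for per-word boxes.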

How to separate the query and the train image from the Mat object returned by the DrawMatches() method

I am trying to detect an object in a video. I am using SURF as the feature detector and descriptor extractor, and a brute-force matcher. I tested my work with faces: I captured a picture of myself, and when I run the camera and point it toward me, my face is detected and a rectangle is drawn around it. I tried another test: I captured an image of my mouse and resized it, but when I run the camera, the mouse is not detected.
The problems I am facing are:
1 - Does the size of the query/object image matter in such cases? I am asking because the image I captured of myself is bigger than the one of the mouse, and the face is detected while the mouse is not.
2 - Regardless of which image I am using as the query/object image, how can I display a camera preview of only the train/scene image, without the query/object image? I am asking because what I am getting is something like the images posted below, while what I want is something like what is shown here. I checked the code at that link; it is in C++, but I followed the same steps. The tutorial also uses the drawMatches method, whose Java counterpart is Features2D.DrawMatches(), and both return a Mat object with the query/object image on the left side and the train/scene image on the right side, as also shown in the image posted below.
What I want is to display the camera output without the query/object image; the area designated for the camera output should show only the train/scene image captured from the camera.
Please let me know how to solve these issues; I want to do something like what is shown in the tutorial I linked.
1 - Size matters, but in your case I think the most crucial problem is "textureness". SURF detects interest points where the "texture gradient" is strong. In the case of your mouse, the gradient is mostly smooth, except around the logo (Fujitsu), the buttons, and at the border of the image. Notice that the tutorial you point to uses a very textured object to demonstrate the effect.
2 - To the best of my knowledge, there is no fully automatic method to do what you want, but it can be done in a few steps. Basically, you must determine the surrounding box of your object and then draw it. For drawing, the easiest way is to use cv::rectangle, but you can be more precise with four (or more...) cv::line calls. To determine the surrounding box, you can estimate the extreme points among the filtered matches.
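A rough sketch of that idea in Python/OpenCV (not a drop-in for the Java code, and the variable names are assumptions): take the scene-side keypoints of the already-filtered matches, compute their extreme coordinates, and draw the box directly on the camera frame, so the query image never appears in the preview:

```python
import cv2
import numpy as np

def draw_object_box(scene_frame, scene_keypoints, good_matches):
    """Draw a box around the matched object directly on the camera frame.

    scene_keypoints are the keypoints detected in the camera frame;
    good_matches are the filtered DMatch objects, whose trainIdx field
    indexes into scene_keypoints when the scene is the "train" image.
    """
    points = np.array(
        [scene_keypoints[m.trainIdx].pt for m in good_matches], dtype=np.float32
    )
    if len(points) == 0:
        return scene_frame

    # Extreme points among the matches give the surrounding box.
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    cv2.rectangle(scene_frame,
                  (int(x_min), int(y_min)),
                  (int(x_max), int(y_max)),
                  (0, 255, 0), 2)          # green box, 2 px thick
    return scene_frame
```

The same logic translates to the Java bindings (Imgproc.rectangle and DMatch.trainIdx), which avoids drawMatches and its side-by-side composite entirely.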
Good luck!
