OpenCV: Extract significant weather over an area from a weather map

I would like to extract significant weather over an area from a weather map.
This is what the map looks like:
The black "clouds like" lines define several areas in which different types of weather phenomena will occur. The blue dashed area labeled "1" is the same type of area but refers to the legend of the map, on the right top side.
I already managed to extract all the contours (in green). That's pretty much all I can do for now because I just discovered OpenCV. (Please be kind if I missed anything.)
I assume the next step is to determine which of these areas contains the place I am interested in, and then try to retrieve all the significant weather symbols located inside it. But I definitely do not know how to do that for the moment.
The constants in this project are:
The map will always be centered and zoomed in the exact same way.
The region I am interested in will always be the same (the red circle that I drew with paint).
The variables of this project are:
Between two different maps, the shapes of the significant weather areas will completely change.
Significant weather symbols in the areas will, of course, also change.
Those maps are named TEMSI. If someone knows a different way to use computer vision to do the job I want to do, please let me know.
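A minimal sketch of the "which area contains my point of interest?" step, assuming the extracted green contours are available as arrays of points in image coordinates. OpenCV's pointPolygonTest does the same job; the logic is shown here in Swift for illustration, and the names (weatherAreas, regionOfInterest) are hypothetical:

```swift
import CoreGraphics

/// Standard ray-casting (even-odd) point-in-polygon test: count how many
/// polygon edges a horizontal ray from `point` crosses; an odd count means "inside".
func contains(point: CGPoint, polygon: [CGPoint]) -> Bool {
    guard polygon.count >= 3 else { return false }
    var inside = false
    var j = polygon.count - 1
    for i in 0..<polygon.count {
        let a = polygon[i]
        let b = polygon[j]
        if (a.y > point.y) != (b.y > point.y),
           point.x < (b.x - a.x) * (point.y - a.y) / (b.y - a.y) + a.x {
            inside.toggle()
        }
        j = i
    }
    return inside
}

// Hypothetical usage: `weatherAreas` is the list of extracted contours (each
// one an array of points) and `regionOfInterest` is the fixed pixel position
// of the red circle, which never changes between maps.
// let areasOverMyRegion = weatherAreas.filter { contains(point: regionOfInterest, polygon: $0) }
```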

Related

filter irregular blobs with opencv for OCR improvement

I'm trying to extract handwritten text from an image to enable OCR. My forms contain text boxes, so it is not too complex to get the right regions of interest, but the problem is that most people have trouble staying within the boundaries of the boxes. While I can enlarge the region to compensate for this, the result is that I get my string plus parts of the box above and below it.
Like the image below.
Depending on how much of this pollution ends up at the top or bottom of the picture, the OCR software either happily ignores it or adds random nonsense. So to be safe I need to get rid of as much of it as possible, while keeping my letters intact so there is enough quality left for the OCR step.
The expected output should just show ITEGEM (which is a small place in Belgium, nothing fancy here)
like this:
I've been trying a few things, but standard blob detection is too harsh: it also removes part of the first T, because there is a gap of a few pixels between the top bar of the T and its stem, so I am left with an I instead of a T.
Any suggestions to get me back on track (preferably python)?

How to align SCNScene to a physical table using ARKit?

I'm trying to find the best strategy to align an SCNScene to a physical table, just like the ARKit app WWF Free Rivers does.
Currently I'm just testing with a simple plane model that has the same dimensions as the table. If I draw the plane that ARKit detects, I can see that it does not match the table's edges very accurately; it always extends beyond them (image below).
So I can't really rely on that plane and just place the model at its center. The model isn't rotated correctly either (image below).
I had another idea: use the ARReferenceImage technique, take a picture of the table-top texture, and let ARKit find and match this "image" of the table. But even the wood-grain texture didn't give ARKit enough data to recognize it, and ARKit simply fails in that case; it doesn't even attempt a rough match.
How can I go about doing this?
Ideas I've had so far:
Take an image of the table and use the ARReferenceImage feature to match it. This didn't work. Maybe it would if I added some more distinctive feature points to the table, like QR codes in the corners.
Detect the plane, and then tap the four corners on the table to map out a square, and use this.
Do as the WWF app does: just place the object anywhere on the plane, and then let the user scale, move and rotate the model into the correct placement.
Any more ideas? What do you think will be the best approach to this?
There are two options I can think of that you could use.
You could create an ARWorldMap (iOS 12+ only) and use it instead of the ARReferenceImage: walk around the area while creating a map that subsequent ARKit sessions will remember. You can experiment a little with how to fit your models within the four corners of the table (this is slightly tedious without much help from the SceneView editor). However, when you load the saved ARWorldMap and localize against it (just like with an ARReferenceImage), your model should fit within the four corners of the table every time.
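A minimal sketch of that save/restore round trip, assuming you already have a running ARSession; `mapURL` and the bare-bones error handling are placeholders:

```swift
import ARKit

// Capture the current world map (iOS 12+) and persist it to disk.
func saveWorldMap(from session: ARSession, to mapURL: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)
    }
}

// On a later run, relocalize against the saved map; anchors (and models placed
// relative to them) come back in the same physical spot on the table.
func restoreWorldMap(into session: ARSession, from mapURL: URL) {
    guard let data = try? Data(contentsOf: mapURL),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```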
If you use something like Unity (and its ARKit plugin), you get much more powerful editor tools (a 3D viewer/designer). There are tools that can save a map just like ARWorldMap and then bring the map's details into the editor, so you can line things up really easily. Placenote's Spatial Capture toolkit can help here. Placenote (iOS 11+) creates its own "world map", but it exposes the visual details in the Unity editor, making it easier to line things up and then localize against (Example). The map is also stored in a managed cloud from the start, which makes sharing across phones much easier.
P.S.: Both of these options require you to keep the environment generally static (no large lighting changes, etc.), though a similar constraint applies when using ARReferenceImage.

Achieving equal size of square/pixel on Mapbox anywhere on the world map?

The problem I'm facing is similar and closely related to this issue on GitHub, but that one is about the Unity SDK; my question is about the iOS SDK.
I want to achieve the same thing. Let me explain: basically I have a pixel grid in which each pixel should have equal size. A pixel is set to be 10 m x 10 m in the real world. What I experienced is that when a pixel is located towards the northern or southern part of the world, it is stretched, like the following.
But when such a pixel is located along the equator, or generally around the middle of the world map, it looks OK, like the following.
There's no problem with rendering or positioning things on Mapbox. The thing is, I want every pixel to be visually square.
I've read through the issue I linked above. It relates to the Mercator projection: the world is not flat, which is what causes this visual effect, and things look stretched towards the northern and southern parts of the world map. I also found that the iOS SDK does not offer the equivalent functionality that the Unity SDK has for this particular problem, so I'm not sure which approach to take.
How can I achieve an equal pixel size on the grid anywhere on the map using the Mapbox iOS SDK? Are there solutions already provided in the SDK?
FYI.
My requirement also involves real distances as shown on the map. I'm not sure whether that affects the solution presented in the issue I linked above.
I use Mapbox iOS SDK 3.7.6
My initial approach is straightforward: I fix the pixel size at 10 m x 10 m, calculate the corresponding latitude and longitude values, and use those to position the pixels on Mapbox, treating the entire world map as a tilemap. However, I didn't take the Mercator projection into account in that calculation, so that might be the cause; if so, how do I do that? The only relevant thing I found in the iOS SDK is MGLMapView's metersPerPoint(atLatitude:). There is no tile ID system or Conversions.cs as in the Unity SDK, so I'm not sure how to go about solving this.
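For context, converting a 10 m span to degrees depends on latitude (a degree of longitude shrinks towards the poles). A rough sketch of that conversion, under a spherical-earth assumption with approximate constants; this only covers the physical size, not the projection-related appearance discussed above:

```swift
import CoreLocation

/// Rough span, in degrees, of a 10 m x 10 m pixel at a given latitude.
/// A degree of latitude is roughly 111,320 m everywhere; a degree of
/// longitude shrinks by cos(latitude) as you move away from the equator.
func pixelSpanInDegrees(meters: Double = 10,
                        at latitude: CLLocationDegrees) -> (latSpan: Double, lonSpan: Double) {
    let metersPerDegree = 111_320.0
    let latSpan = meters / metersPerDegree
    let lonSpan = meters / (metersPerDegree * cos(latitude * .pi / 180))
    return (latSpan, lonSpan)
}
```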
Update
I managed to solve it and made it work!
I'll come back and post the solution.
My solution was to port sphericalmercator.js to Swift and use it in my code. I use a fixed zoom level of 22, as its visual appearance is closest to what I need (and to what I had before). I went with the approach of at least making the pixels look visually equal, not necessarily keeping their physical size equal.
Thanks to a hint in this answer on how to use sphericalmercator.js.
Anyway, from my testing, the tile size you set when creating a SphericalMercator instance seems to have no effect, no matter what value I pass; only the zoom level determines the number of tiles across the world map. Note that the upper-left corner is the origin, tile index (0, 0). A lower zoom level produces larger tiles, and a higher one produces smaller tiles.
You can take a look at SphericalMercator-swift, the code I ported from the original JS implementation linked above, along with examples of how to use it to get a tile index or a longitude/latitude bounding box in Swift in order to render things on top of Mapbox.
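For reference, the lon/lat-to-tile-index conversion at the heart of that port is the standard Web Mercator tiling math. A self-contained sketch (not the ported library's actual code):

```swift
import Foundation

/// Standard Web Mercator tile index for a longitude/latitude at a given zoom
/// level (origin at the upper-left corner, tile (0, 0)).
func tileIndex(longitude: Double, latitude: Double, zoom: Int) -> (x: Int, y: Int) {
    let n = pow(2.0, Double(zoom))
    let latRad = latitude * .pi / 180
    let x = Int(floor((longitude + 180) / 360 * n))
    let y = Int(floor((1 - asinh(tan(latRad)) / .pi) / 2 * n))
    return (x, y)
}

// Example: tileIndex(longitude: 0, latitude: 0, zoom: 22) returns the tile
// just south-east of the map's centre at the zoom level used above.
```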

Best way to create a custom building map for iOS

I see some ways to do it:
1) Draw using OpenGL programmatically.
2) Draw using QuartzCore and CoreAnimation programmatically.
3) Draw map in AutoCad and then somehow connect it to iOS.
4) Draw map using SVG.
Requirements are support for pathfinding and GPS navigation.
For the first two options, I think redrawing all elements on scaling is expensive in terms of performance, and I don't see how that approach would support GPS navigation.
With AutoCAD-drawn maps, it's hard for me to see how to connect them with graphs/paths for pathfinding.
My colleagues will develop the web version of this app using SVG. I found https://github.com/SVGKit/SVGKit, but I still have no idea how it would support pathfinding and navigation.
I would appreciate any help.
Generally there are two types of map application:
A) Applications that display a map (with or without a user position) without needing to calculate a path the way a navigation system does (see point B).
B) Applications that use the vectors of a map and calculate something with them, e.g. finding the best path or the shortest connection, as a navigation system does.
Applications of type A) are usually less complex than those of type B), because the vectors can be somewhat inaccurate, have no connections, have small gaps, have no logic between the edges, etc.
1) To only display a building map, you only need a list of edges (an edge is a pair of coordinates (x1,y1)-(x2,y2)), however you obtain them, e.g. from the MapInfo Professional mif/mid format.
You could even display a PDF that contains the map of the building, right in the built-in PDF view (this also works with SVG, but it is more difficult).
Things get much more complicated if it is not just a relative map, but one positioned in a reference coordinate system such as latitude/longitude (WGS84).
In that case you would use a tool (e.g. MapInfo Professional) to import AutoCAD DXF files, apply three GPS-measured reference points at the corners of the building, and convert everything to the lat/long WGS84 coordinate system.
With iOS you cannot measure those three points reliably, because you cannot average a position: iOS stops delivering location updates when you are standing still at a corner of the building.
You could try to extract the positions from a Google Earth satellite photo if you live in a region where the satellite imagery has high resolution (but this might violate the license conditions of the imagery provider; see "derived data").
At this point you have a list of edges in the lat/long coordinate system.
For displaying, I would personally go with either 1) OpenGL or 2) Quartz 2D.
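A minimal Quartz 2D sketch of option 2), assuming the edges have already been projected from lat/long into view coordinates; the types and names here are illustrative:

```swift
import UIKit

/// One wall/edge of the floor plan, already projected into view coordinates.
struct Edge {
    let start: CGPoint
    let end: CGPoint
}

/// Draws a list of edges. Scaling and panning would be applied via the view's
/// transform, or by re-projecting the lat/long edges before drawing.
final class FloorPlanView: UIView {
    var edges: [Edge] = [] {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setStrokeColor(UIColor.black.cgColor)
        context.setLineWidth(1)
        for edge in edges {
            context.move(to: edge.start)
            context.addLine(to: edge.end)
        }
        context.strokePath()
    }
}
```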
Now for the pathfinding part.
You probably need a second "map" that defines the possible paths inside the building.
This structure must be a connected graph (points with connected neighbours).
Computer games do it that way. (Some even let you display that path in developer mode.)
The path can be drawn in a different layer of the floor plan, but this path has stricter requirements: no gaps are allowed, and everything must be perfectly connected.
Call that layer "Path" and export it as its own plan.
Now use only this path layer: import it and create a graph of nodes with connected neighbours.
Use Dijkstra's algorithm to search for the shortest path.
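A compact sketch of that last step, assuming the imported path layer has been turned into an adjacency list of node indices with edge lengths (names are illustrative):

```swift
/// Dijkstra's shortest-path search over an adjacency list; `graph[u]` lists
/// (neighbour, distance) pairs. Returns the distance from `source` to every
/// node (.infinity where unreachable). A simple O(V^2) scan is fine for a
/// building-sized graph; use a priority queue for larger ones.
func dijkstra(graph: [[(node: Int, distance: Double)]], source: Int) -> [Double] {
    var dist = Array(repeating: Double.infinity, count: graph.count)
    var visited = Array(repeating: false, count: graph.count)
    dist[source] = 0

    for _ in 0..<graph.count {
        // Pick the unvisited node with the smallest tentative distance.
        var u = -1
        for v in 0..<graph.count where !visited[v] && (u == -1 || dist[v] < dist[u]) {
            u = v
        }
        if u == -1 || dist[u] == .infinity { break }
        visited[u] = true

        // Relax all edges leaving u.
        for edge in graph[u] where dist[u] + edge.distance < dist[edge.node] {
            dist[edge.node] = dist[u] + edge.distance
        }
    }
    return dist
}
```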

iOS MapKit - How to offset a user's location by a specified amount

I am developing an app that displays the user's location on a map together with other users.
I want to ensure that all users have a bit of privacy when it comes to their location being displayed openly to other users, so I am hoping to offset each location by a specified amount (let's say 1 mile) and display the "edited" location to all other users, while still showing the "exact" location to the current user.
Example: if I am looking at the map, I want my "user location" (the blue dot) to be fairly exact, while all other users will see my location slightly offset from the real location.
What is the best way to achieve this?
I think the question you actually want the answer to is this:
How do I convert the user's location into an "approximate location" in a way that preserves the user's privacy?
It's not an easy problem:
Offsetting by a specific distance doesn't work:
There's a trivial attack if the direction is fixed.
If the direction does not change often enough, then the attacker only needs to wait to identify what looks like a road.
If the direction changes too often, then the reported points will tend to form a 1-mile circle around the target's house/work.
Offsetting by a random distance/direction doesn't work; the attacker just needs to collect enough samples; the clusters will likely be centered on the target's home/work.
Quantizing to a grid naively (e.g. "X is within this grid square") will tell you when the target crosses a grid boundary. This is especially bad if the target lives on a grid boundary.
Here's something that works a little better, but will still (eventually) give away the user's location:
Pick an (approximately) 1-mile grid. For a "square" grid, you could use the Peirce quincuncial projection (there are four points of infinite distortion, but you can put those all at sea; it looks like you can limit distortion on land to a factor of 2). There are also projections onto a cube and, for a triangular grid, onto an icosahedron.
When you first need to report the user's location, give the nearest point on the grid. Also pick a threshold distance between 1 and 2 grid "squares", or so.
While the user is within the threshold distance of the center of the grid square, continue to report the same grid square. Otherwise, repeat.
It'll still eventually be obvious if the user happens to live on a grid boundary. There are various ways to attempt to fix this problem (e.g. a bias to reporting grid squares you've reported before), but these will eventually fail.
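A simplified sketch of that reporting rule, using a plain local metre grid rather than the Peirce quincuncial grid described above; the grid size and threshold values are illustrative:

```swift
import CoreLocation

/// Reports a quantized location: snap to the nearest grid point the first
/// time, then keep reporting it until the user moves more than `threshold`
/// metres away from it.
final class ApproximateLocationReporter {
    private let gridSize: CLLocationDistance = 1_609   // ~1 mile
    private let threshold: CLLocationDistance = 2_400  // between 1 and 2 "squares"
    private var reported: CLLocation?

    func approximateLocation(for actual: CLLocation) -> CLLocation {
        if let current = reported, actual.distance(from: current) < threshold {
            return current // still inside the hysteresis band: keep the same report
        }
        let snapped = snapToGrid(actual)
        reported = snapped
        return snapped
    }

    private func snapToGrid(_ location: CLLocation) -> CLLocation {
        // Metres per degree (approximate; good enough for a coarse grid).
        let metersPerDegreeLat = 111_320.0
        let metersPerDegreeLon = 111_320.0 * cos(location.coordinate.latitude * .pi / 180)
        let latStep = gridSize / metersPerDegreeLat
        let lonStep = gridSize / metersPerDegreeLon
        let lat = (location.coordinate.latitude / latStep).rounded() * latStep
        let lon = (location.coordinate.longitude / lonStep).rounded() * lonStep
        return CLLocation(latitude: lat, longitude: lon)
    }
}
```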
This seems a lot like trying to remove a digital watermark (the user's actual location) by using lossy compression (the approximation process) while producing an output image/audio (approximate location) that sounds/looks like the original. (The analogy works a little better if you treat the "watermark" as the user's daily habits, which will be visible in the output unless you know exactly what those habits are and can remove them.)
Or in signal processing terms: A low SNR simply means you have to listen for longer to extract the signal.
Are you showing everyone else as a pin? It might be strange to show a pin at an exact location when the other user isn't actually there, for example if someone was a mile north of you but their pin showed at your location. Maybe you should display the other users with an MKOverlay circle, and then use some calculation based on a userID to shift it slightly off centre, so that people can't work out that it is always shifted, say, 500 m east and thereby figure out where people really are.
Whether or not you change the display, the code you seek is here: Get the GPS coordinate given the current location, bearing and distance
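In Swift terms, that calculation is the standard spherical destination-point formula. A sketch along those lines (not the linked answer's exact code; the numeric userID and the 1-mile distance are illustrative):

```swift
import CoreLocation

/// Returns the coordinate `distanceMeters` metres from `start` along `bearingDegrees`
/// (degrees clockwise from north), using the standard spherical-earth formula.
func coordinate(from start: CLLocationCoordinate2D,
                bearingDegrees: Double,
                distanceMeters: Double) -> CLLocationCoordinate2D {
    let earthRadius = 6_371_000.0
    let angular = distanceMeters / earthRadius
    let bearing = bearingDegrees * .pi / 180
    let lat1 = start.latitude * .pi / 180
    let lon1 = start.longitude * .pi / 180

    let lat2 = asin(sin(lat1) * cos(angular) + cos(lat1) * sin(angular) * cos(bearing))
    let lon2 = lon1 + atan2(sin(bearing) * sin(angular) * cos(lat1),
                            cos(angular) - sin(lat1) * sin(lat2))
    return CLLocationCoordinate2D(latitude: lat2 * 180 / .pi,
                                  longitude: lon2 * 180 / .pi)
}

// Illustrative use, following the userID idea above (assuming a numeric userID):
// derive a stable bearing per user so the offset doesn't jump around, then
// shift the displayed position by about a mile.
// let bearing = Double(userID % 360)
// let shown = coordinate(from: actual, bearingDegrees: bearing, distanceMeters: 1_609)
```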
