Achieving equal size of square/pixel on Mapbox anywhere on the world map? (iOS)

The problem I'm facing is closely related to this issue on GitHub, but that one is about the Unity SDK; my question is about the iOS SDK.
I want to achieve the same thing. Let me explain: basically I have a pixel grid in which each pixel should have equal size. A pixel is set to be 10 m x 10 m in the real world. What I experienced is that if a pixel is located towards the northern or southern part of the world, it is stretched, like the following.
[Screenshot: pixels stretched at high latitudes]
But when such a pixel is located along the equator, or simply around the middle part of the world, it looks fine, like the following.
[Screenshot: pixels appear square near the equator]
There's no problem with rendering or positioning on Mapbox. The thing is, I want every pixel to be visually square.
I've read through the issue I linked above. It relates to the Mercator projection: the world is not flat, which is what makes this visual stretching happen along the northern and southern parts of the world map. I also found that the iOS SDK has no equivalent of the functionality the Unity SDK provides for this particular problem, so I'm not sure which approach I should take to solve it.
How can I achieve equal pixel sizes in the grid on Mapbox using the Mapbox iOS SDK? Are there solutions already provided in the SDK?
FYI.
My requirement also involves real distances as shown on the map. I'm not sure whether that affects the solution presented in the issue I linked above.
I use Mapbox iOS SDK 3.7.6
My initial approach is straightforward: I fix the size of a pixel at 10 m x 10 m, then calculate its corresponding latitude and longitude values and use those to position it on Mapbox, treating the entire world map as a tilemap. However, I didn't take the Mercator projection into account in the calculation, so that might be the cause; if so, how do I do that? The only relevant thing I found in the iOS SDK is MGLMapView's metersPerPoint(atLatitude:); there is no tile ID system or Conversions.cs as seen in the Unity SDK, so I'm not sure how to go about solving this.
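For reference, the stretching comes from converting meters to degrees without a latitude correction: a degree of longitude covers less ground the farther you are from the equator, so a fixed 10 m span needs a wider longitude span there. A minimal sketch of the conversion (Python for illustration; the meters-per-degree constant is approximate):

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # approximate; nearly constant at all latitudes

def degrees_for_meters(lat_deg: float, meters: float) -> tuple[float, float]:
    """Degree spans (d_lat, d_lon) covering `meters` of ground at a latitude."""
    d_lat = meters / METERS_PER_DEG_LAT
    # A degree of longitude shrinks by cos(latitude) away from the equator,
    # so the longitude span must grow to keep the same ground distance.
    d_lon = meters / (METERS_PER_DEG_LAT * math.cos(math.radians(lat_deg)))
    return d_lat, d_lon

# A 10 m cell at the equator vs. at 60 degrees north:
print(degrees_for_meters(0.0, 10.0))   # (~8.98e-05, ~8.98e-05): square in degrees
print(degrees_for_meters(60.0, 10.0))  # (~8.98e-05, ~1.80e-04): twice as wide
```

If a grid uses equal degree spans for both axes instead, the cells render stretched on a Mercator map at high latitudes, which matches the screenshots above.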
Update
I managed to solve it and made it work!
I'll come back and post the solution.

My solution was to port sphericalmercator.js to Swift, then use it in my code. I use a fixed zoom level of 22, as its visual appearance is closest to what I need and to what I had before. I went with the approach of making the pixels at least look visually equal, not necessarily equal in physical size.
Thanks to a hint in this answer on how to use sphericalmercator.js.
Anyway, from my testing, the tile size set when creating a SphericalMercator instance seems to have no effect, no matter what value I use. Only the zoom level determines the number of tiles across the world map. Note that the upper-left corner is the origin, with tile index (0, 0). A lower zoom level produces larger tiles; a higher value produces smaller tiles.
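For reference, the core of the port is the standard spherical Mercator tile math; a minimal sketch (Python for illustration, assuming the usual slippy-map scheme with the origin tile at the upper left):

```python
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int) -> tuple[int, int]:
    """Tile index containing a lon/lat at a zoom; (0, 0) is the upper-left tile."""
    n = 2 ** zoom  # number of tiles across one axis of the world
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def tile_bbox(x: int, y: int, zoom: int) -> tuple[float, float, float, float]:
    """(west, south, east, north) bounds of a tile in degrees."""
    n = 2 ** zoom
    lon = lambda tx: tx / n * 360.0 - 180.0
    lat = lambda ty: math.degrees(math.atan(math.sinh(math.pi * (1.0 - 2.0 * ty / n))))
    return lon(x), lat(y + 1), lon(x + 1), lat(y)

# Only the zoom level changes the tile count (2^zoom per axis), which is why
# the tile-size argument has no visible effect on the grid.
print(lonlat_to_tile(0.0, 0.0, 22))  # (2097152, 2097152): middle of the map
```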
You can take a look at SphericalMercator-swift, the code I ported from the original JS implementation linked above, along with examples of how to use it to get a tile index or a longitude/latitude bounding box in Swift in order to render on top of Mapbox.

Related

OpenCV: Extract significant weather over an area from a weather map

I would like to extract significant weather over an area from a weather map.
This is what the map looks like:
The black "cloud-like" lines define several areas in which different types of weather phenomena will occur. The blue dashed area labeled "1" is the same type of area, but it refers to the legend of the map, at the top right.
I already managed to extract all the contours (in green). That's pretty much all I can do for now because I just discovered OpenCV. (Please be kind if I missed anything.)
I assume the next step is to determine which areas contain the place I am interested in, and then try to retrieve all the significant weather symbols located inside. But I definitely do not know how to do that at the moment.
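A possible sketch of that step (Python/OpenCV; the file path, threshold, and point of interest are placeholders): after extracting the contours, cv2.pointPolygonTest tells you which of them contain your region.

```python
import cv2

img = cv2.imread("temsi_map.png")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Invert-threshold so the dark "cloud-like" boundary lines become foreground.
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

point_of_interest = (420.0, 300.0)  # placeholder: pixel coords of the red circle
containing = [c for c in contours
              if cv2.pointPolygonTest(c, point_of_interest, False) >= 0]

# Every contour enclosing the point; the smallest is usually the innermost
# weather area. Crop to its bounding box to isolate the symbols inside.
area = min(containing, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(area)
symbols_roi = img[y:y + h, x:x + w]
```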
The constants in this project are:
The map will always be centered and zoomed in the exact same way.
The region I am interested in will always be the same (the red circle that I drew with paint).
The variables of this project are:
Between two different maps, the shapes of the significant weather areas will completely change.
Significant weather symbols in the areas will, of course, also change.
Those maps are named TEMSI. If someone knows a different way to use computer vision to do the job I want to do, please let me know.

How to detect text in a photo

I am researching the best way to detect text in a photo using open source libraries.
I think the standard way is as follows (note: steps 1-3 use OpenCV, step 4 uses Tesseract; a sketch of the whole pipeline follows the list):
1) Detect the outline of the document
2) Transform the document so it's flat and cropped, using said outline
3) Make the background of the document white, using a filter
4) Feed the resulting image to Tesseract
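A minimal sketch of steps 1-4 (Python, using cv2 and pytesseract for illustration; the file name, sizes, and thresholds are placeholders):

```python
import cv2
import numpy as np
import pytesseract

img = cv2.imread("photo.jpg")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 75, 200)

# Step 1: the document outline, taken as the largest 4-point contour.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = None
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        outline = approx.reshape(4, 2).astype(np.float32)
        break
assert outline is not None, "no 4-point outline found"

# Step 2: warp the document flat. NOTE: this assumes the corners come back in
# top-left, top-right, bottom-right, bottom-left order; real code should sort them.
w, h = 800, 1100  # arbitrary target size
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
warped = cv2.warpPerspective(gray, cv2.getPerspectiveTransform(outline, dst), (w, h))

# Step 3: whiten the background with adaptive thresholding.
clean = cv2.adaptiveThreshold(warped, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 21, 10)

# Step 4: hand the result to Tesseract.
print(pytesseract.image_to_string(clean))
```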
Is this the optimum process, or is there a better way, or better tools?
Also, what happens in the case where the photo doesn't have a document outline? (It's possible that steps 1 & 2 are then redundant.)
Is there any way to automatically detect document orientation (i.e. portrait/landscape)?
I think your process is fine. I've used a similar process for an Android project.
I think that the only way you can discover whether a document is portrait or landscape is to reason about the lengths of the sides of the bounding box of your outline.
I don't think there's an automatic way to do this; maybe you can find the most external contour approximable with a 4-segment polyline (all doable in OpenCV). In order to get this you'll have to work with the contour hierarchy and contour approximation (see cv2.approxPolyDP).
This is how I would go for automatic outline detection. As I said, the rest of your algorithm seems just fine to me.
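A small sketch of that orientation check (Python/OpenCV; `outline` is assumed to be the 4-point contour found via cv2.approxPolyDP as described above):

```python
import cv2

def is_landscape(outline) -> bool:
    """Compare the sides of the outline's bounding box."""
    _, _, w, h = cv2.boundingRect(outline)
    return w > h
```

For documents photographed at an angle, cv2.minAreaRect gives a rotated bounding box whose side lengths are a more robust basis for the same comparison.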
PS: I'll leave my Android project's GitHub link. I don't know if it will be useful to you, but in it I specify the outline by dragging some handles, then transform the image and feed it to Tesseract, using Java and OpenCV. Yes, it's a very bad idea to do that on the main thread of an Android app, and yes, the app is not finished. I just wanted to experiment with OCR, so I didn't care much about performance and usability; it was not intended for real use, just for studying.
Look up the stroke width transform.
What it does is detect strokes whose edges are more or less a constant distance from the opposite edge. That picks up things like drainpipes (which can be eliminated in a later pass), but also the majority of text. While conceptually it's similar to a distance transform, the published method uses rather ad hoc normal-projection methods and Canny edge detection.

ARToolkit Multiple Mandatory Markers

I studied the multimarker documentation of ARToolKit for iOS and I'm having some trouble achieving something like a QR code.
I want, for example:
A set of 6 markers positioned differently on a picture, and when, and only when, ALL of them are present, some sort of video is displayed at their origin (I want to use corner markers, like in a QR code system).
How can I do this? From what I've seen with multimarkers, if for example 1 out of 6 is present, the object is displayed.
From looking into the ARToolKit code you can see that a MultiMarker is internally handled as one single Marker consisting of several patterns:
https://github.com/artoolkit/artoolkit5/blob/master/lib/SRC/ARWrapper/ARMarker.cpp#L344
https://github.com/artoolkit/artoolkit5/blob/master/lib/SRC/ARWrapper/ARMarkerMulti.cpp#L75
That is why ARToolKit will always return true whenever one of the markers configured in the multi-marker configuration is visible.
Taking that into account, 'multi-markers' are not the way to go for the goal you would like to reach.
What you can do, however, is to configure each marker separately and add them as ‘Single-Marker’. Then you can query if all of these ‘Single-Markers’ are visible.
If so you can calculate the origin of all these ‘Single-Markers’ and render your object there.
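A sketch of that logic (Python pseudocode; `marker_visible` and `marker_transform` are hypothetical stand-ins for whatever visibility and pose queries your ARToolKit wrapper exposes, not real API):

```python
import numpy as np

def common_origin(marker_ids, marker_visible, marker_transform):
    """Centroid of all single markers, or None unless ALL are visible.

    marker_visible(id) -> bool and marker_transform(id) -> 4x4 pose matrix
    are hypothetical stand-ins for the wrapper's queries.
    """
    if not all(marker_visible(m) for m in marker_ids):
        return None  # require every corner marker, QR-code style
    # Each marker's position is the translation column of its 4x4 pose matrix.
    positions = [np.asarray(marker_transform(m))[:3, 3] for m in marker_ids]
    return np.mean(positions, axis=0)  # render the video at this point
```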
You can get an idea on how to configure several ‘Single-Markers’ if you take a look here:
http://augmentmy.world/moving-cars-augmented-reality
Also, take this example on how to set two markers into the same coordinate system (and calculate the distance between them); you can use it as a starting point for calculating the origin between several markers:
https://github.com/artoolkit/artoolkit5/tree/master/AndroidStudioProjects/ARMarkerDistanceProj
I know that these are not iOS examples, but I have only done Android so far. The ARWrapper interface should be the same on Android and iOS, though, meaning there should not be much difference between the two.
I hope that helps

Xtion Vertical position usage ( calibration ) with Skeleton

I am currently working with the "Xtion Pro Live" using the "OpenNI" library.
The constraint is that the Xtion must be placed vertically (along a wall). The problem is that in this position the user calibration always fails, so it is impossible to get the skeleton info.
So, I would like to know how to fix this issue. I suppose there is something I didn't understand about "GetSkeletonCap().RequestCalibration()" or about the "SampleConfig.xml" file, but after a lot of research I am still stuck.
Try moving the user, and then the camera, in a 360-degree circumference around the subject, keeping the vertical positioning of the camera the same all the way through. It may let the depth sensor find an optimal angle. We did this twice with the Kinect and it worked.
Also make sure the room is well lit.

Best way to create custom building's map for iOS

I see some ways to do it:
1) Draw using OpenGL programmatically.
2) Draw using QuartzCore and CoreAnimation programmatically.
3) Draw map in AutoCad and then somehow connect it to iOS.
4) Draw map using SVG.
Requirements are support for pathfinding and GPS navigation.
For the first two options, I think it would be expensive in terms of performance to redraw all elements on scaling, and I don't think this approach would support GPS navigation.
With AutoCAD-drawn maps, it's hard for me to understand how to connect them with graphs/paths for pathfinding.
My colleagues will develop this app for the web using SVG. I found https://github.com/SVGKit/SVGKit, but I still have no idea how it would support pathfinding and navigation.
I would appreciate any help.
Generally there are two types of map application:
A) Applications that display a map (with or without a user position), without needing to calculate a path the way a navigation system does (see point B).
B) Applications that use the vectors of a map and calculate something with them: e.g. finding the best path, the shortest connection, a navigation system, etc.
Applications of type A) are usually less complex than those of type B), because the vectors can be somewhat inaccurate, have no connections, have small gaps, have no logic between the edges, etc.
1) To only display a building map, you only need a list of edges (an edge is a pair of coordinates (x1,y1)-(x2,y2)), however you obtain them, e.g. from the MapInfo Professional mif/mid format.
You could even display a PDF that contains the map of the building, right in the built-in PDF view (also possible with SVG, but more difficult).
Things get much more complicated if it is not a relative map but a map positioned in a reference coordinate system, like latitude/longitude (WGS84).
In that case you would use a tool (e.g. MapInfo Professional) to import the AutoCAD DXF files, apply 3 GPS-measured reference points at the corners of the house, and convert everything to the lat/long WGS84 coordinate system.
With iOS you cannot measure those 3 points yourself, because you cannot average a position; iOS stops delivering location updates when you are standing still at a corner of the house.
You could try to extract the positions from a Google Earth satellite photo if you live in a region where the satellite photos have high resolution. (But this might violate the license conditions of the satellite photo provider; topic: derived data.)
Finally, you now have a list of edges in the lat/lon coordinate system.
For displaying, I personally would go with either 1) OpenGL or 2) Quartz 2D.
Now the pathfinding part.
You probably need a second "map" that defines the possible paths inside the building.
This structure must be a connected graph (points with connected neighbours).
Computer games do it that way. (Some even allow you to display that path in developer mode.)
The path can be drawn in a different layer of the floor plan, but this layer has stricter requirements: no gaps are allowed, and everything must be perfectly connected.
Call that layer "Path" and export it as its own plan.
Now use only this path layer: import it and create a graph of nodes with connected neighbours.
Use Dijkstra's algorithm to search for the shortest path.
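A minimal sketch of that last step (Python; the node names and distances are placeholders for whatever the imported path layer produces):

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbour, distance), ...]} built from the path layer."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    path, node = [], goal
    while node in prev or node == start:  # walk predecessors back to the start
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return list(reversed(path)), dist.get(goal)

# Placeholder graph: corridor junctions with distances in meters.
floor = {
    "entrance": [("hall", 5.0)],
    "hall": [("entrance", 5.0), ("room_101", 3.0), ("stairs", 7.0)],
    "room_101": [("hall", 3.0)],
    "stairs": [("hall", 7.0)],
}
print(dijkstra(floor, "entrance", "room_101"))  # (['entrance', 'hall', 'room_101'], 8.0)
```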
