Metabase integration with GeoJSON and attributes

I am trying to use a custom GeoJSON map. It is deployed correctly, but Metabase does not recognize the styling attributes
"fill": "#580858", "fill-opacity": 0.5
Is there a way to make a map where states are painted in different colors?
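Two things worth noting: the original value "# 580858" contains a stray space, so it is not a valid hex color ("#580858" is), and "fill" / "fill-opacity" are simplestyle-spec properties that renderers such as geojson.io and Mapbox honor, while Metabase's region maps color regions from query results and may simply ignore them. As a minimal sketch of stamping per-feature colors into the GeoJSON itself (the state names and palette below are hypothetical), assuming each feature has a "name" property:

```python
import json

# Hypothetical palette: one simplestyle fill color per state name.
PALETTE = {"Jalisco": "#580858", "Sonora": "#1f77b4", "Yucatan": "#2ca02c"}

def colorize(geojson_text, palette, default="#cccccc"):
    """Set simplestyle 'fill' / 'fill-opacity' on every feature.

    Renderers that implement the simplestyle spec (e.g. geojson.io)
    honor these keys; Metabase's region maps instead color regions from
    the query result, so there they may be ignored.
    """
    fc = json.loads(geojson_text)
    for feature in fc["features"]:
        props = feature.setdefault("properties", {})
        name = props.get("name")
        props["fill"] = palette.get(name, default)  # note: "#580858", not "# 580858"
        props["fill-opacity"] = 0.5
    return fc
```

The returned dict can be re-serialized with `json.dumps` and fed to any simplestyle-aware viewer to check the colors before uploading.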

Related

Google Cloud Vision API Object Localization - number of recognized objects

I'm using the following Python code to detect license plates with the Cloud Vision API:

from google.cloud import vision

client = vision.ImageAnnotatorClient()

response = client.annotate_image({
    'image': {'source': {'image_uri': uri}},
    'features': [
        {'max_results': 1000, 'type_': vision.Feature.Type.TEXT_DETECTION},
        {'max_results': 1000, 'type_': vision.Feature.Type.OBJECT_LOCALIZATION},
    ],
})

lo_annotations = response.localized_object_annotations
for obj in lo_annotations:
    print('\n{} (confidence: {})'.format(obj.name, obj.score))
    print('Normalized bounding polygon vertices: ')
    for vertex in obj.bounding_poly.normalized_vertices:
        print(' - ({}, {})'.format(vertex.x, vertex.y))
If I use an image showing several cars, buildings, people, etc., I get about 4-7 recognized objects. The recognized objects are the bigger ones in the scene, like "Car", "Car", "Building", "Building", "Person".
If I snip out just one car from this image and run Object Localization on the new image, I get objects like "Car", "Tire", "Tire", "License plate", which is perfect, because the plate gets recognized and listed.
So it seems the Object Localization algorithm picks out some prominent objects from the image and ignores smaller or less prominent objects.
But in my case I need to localize all license plates in the image. Is there a way to get the underlying model to list all license plates in the image, or at least more objects than just the most prominent ones?
Otherwise, what would be the right approach to get all plates out of an image - do I have to train a custom model?
The Vision API is a pre-trained image detection service provided by Google that performs basic image detection, such as detecting text and objects; hence the behavior you observed, where the API usually detects only the prominent objects in an image.
What I can suggest is: if the objects in your images usually appear in a specific area of the image (e.g. objects appear in the lower half), you can pre-process the image by cropping it with Python libraries like PIL or OpenCV before using the Vision API to detect license plates. Alternatively, detect the objects first, get the coordinates of each object, use those coordinates to crop out the individual objects, and then run the Vision API on each crop to detect license plates.
Also, as you mentioned, you can always create a custom model to detect license plates if you are not satisfied with the results of the Vision API. With a custom model you have more freedom to tweak the model and increase its accuracy on license plates.
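The crop-and-redetect approach suggested above needs pixel coordinates, while Object Localization returns normalized vertices in [0, 1]. A sketch of the conversion (the function name and padding factor are my own; it takes plain (x, y) tuples, whereas the real API returns vertex objects with .x/.y attributes):

```python
def crop_box(normalized_vertices, img_width, img_height, pad=0.05):
    """Convert normalized vertices (x, y in [0, 1]) into a
    (left, upper, right, lower) pixel box, padded by `pad` on each side
    and clamped to the image bounds."""
    xs = [v[0] for v in normalized_vertices]
    ys = [v[1] for v in normalized_vertices]
    left = max(0.0, min(xs) - pad) * img_width
    upper = max(0.0, min(ys) - pad) * img_height
    right = min(1.0, max(xs) + pad) * img_width
    lower = min(1.0, max(ys) + pad) * img_height
    return (int(left), int(upper), int(right), int(lower))
```

The resulting tuple has the shape expected by PIL's `Image.crop`, so each detected "Car" could be cropped out and sent back through `annotate_image`.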

What are the best options for tippecanoe to process LineString features?

I have a GeoJSON file with a FeatureCollection (more than 300,000 features) of LineStrings; it is a set of road traffic records. I need to convert it to the MVT format using tippecanoe. I'm trying to convert the GeoJSON with these parameters:
tippecanoe data.geojson -pf -pS -zg --detect-shared-borders -o data.mbtiles -f
Then I upload it to my Mapbox account as a tileset and render it with Mapbox GL JS. And there is a problem: not all the features are visible. Moreover, if I reconvert the GeoJSON file, I get a different result! So, what are the best options to use with tippecanoe to convert all the features (LineStrings) without oversimplification, for use with Mapbox GL JS?
P.S. One more thing I noticed is that datasets uploaded with Mapbox Studio and then converted to tilesets show info like "This layer contains mostly LineStrings", but with my own tilesets converted with tippecanoe I see the message "No dominant geometry type".
-ae will auto-increase the maxzoom if features are still being dropped at that zoom level. But when zoomed out it doesn't always look good, depending on the type of features (e.g. missing cadastre parcels don't look good)...
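Building on that, one possible invocation (flags as documented in tippecanoe's README; whether the resulting tiles still render acceptably in Mapbox GL JS depends on the data) that asks tippecanoe not to drop or oversimplify features. --detect-shared-borders is omitted here, since it only affects polygon simplification and this data is LineStrings:

```shell
# -zg: guess an appropriate maxzoom from the data
# -ae: keep raising maxzoom while features are still being dropped
# -pf: no per-tile feature limit; -pk: no per-tile size limit
# -pS: simplify only at low zooms, keeping full detail at maxzoom
tippecanoe -o data.mbtiles -f -zg -ae -pf -pk -pS data.geojson
```

Note that -pf and -pk can produce very large tiles, which is itself a common cause of slow rendering, so this is a trade-off rather than a universal fix.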

Change map projection in Cesium.js to match results from turf.js inside query

I am displaying a point and a polygon in Cesium.js, and using turf.js to check whether the point is inside the polygon.
When displayed in Cesium (or geojson.io), the point is clearly outside the polygon, as can be seen here:
http://geojson.io/#id=gist:kujosHeist/1030e392bd751daf5d9af57aa412a49c&map=3/46.80/-22.76
However, when I asked about this on the turf.js issues page:
https://github.com/Turfjs/turf-inside/issues/15
I was told it was because geojson.io (and therefore Cesium) is "misrepresenting the point since it uses World Web Mercator projection (EPSG:3857)" and that "viewing the same point/polygon using WGS84 with QGIS" shows the point inside the polygon.
So my question is: How can I change the map projection used in Cesium (and also in geojson.io if possible), so that the point is correctly displayed inside the polygon?
I am not sure how well geojson.io or Cesium handle different coordinate systems, but you can specify a CRS element in your GeoJSON that indicates the coordinate system used by the points of your features. It is added as a member of your FeatureCollection. For example:
{
    "type": "FeatureCollection",
    "crs": {
        "type": "name",
        "properties": {
            "name": "urn:ogc:def:crs:OGC:1.3:CRS84"
        }
    },
    ...the rest of your structure...
}
But like I said, it's up to the map display software to pay attention to your specified CRS and use it to project the coordinates. If it doesn't support the coordinate system you have, you'll need to pick other map display software or convert the coordinates to a supported coordinate system.
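The underlying reason the two tools can disagree, as the linked issue suggests, is that turf's point-in-polygon test treats longitude/latitude as planar x/y coordinates, while a Mercator map draws the same long polygon edges along different paths on screen. A minimal planar even-odd ray-casting check (the same general idea turf uses; this standalone version is my own sketch, not turf's code):

```python
def point_in_polygon(point, ring):
    """Even-odd ray casting on planar (x, y) coordinates.
    `ring` is a list of (x, y) vertices of one polygon ring."""
    x, y = point
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Count crossings of a horizontal ray from (x, y) going right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Run on raw lon/lat values, a test like this can report "inside" even when the Mercator rendering of the polygon visually excludes the point, because the rendered edges are not the straight lines in lon/lat space that the algorithm assumes.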

OpenLayers 3 - Source with multiple available projections

Is there any way to specify a (Tile)WMS source that supports multiple projections?
For example, my WMS server supports requests in both EPSG:4326 and EPSG:3395. If the map's projection is supported server-side, the tiles should be requested in that projection; otherwise OpenLayers should reproject from one of the supported projections.
You can use client-side reprojection to do that; see this example.
Just add a projection property to your ol.source.TileWMS.

OpenLayers 3 - Load geometric data into a vector layer

UPDATE: The basic question is whether the GeoJSON delivered by my REST interface (the JSON data is shown at the end of the question) is valid GeoJSON for the vector layer, because as soon as I add it as the source of the vector layer, the layer is broken.
Currently there is no REST interface to upload shapes, so I just took some valid coordinates from existing shapes and created a static JSON on the server side.
I am trying to make it possible for users to draw shapes (polygons, lines) on a vector layer; I then store the geometric data on a server, and the next time the map is loaded the shapes should be displayed again. But it seems that when I define my REST interface as the source of the vector layer, something goes wrong: painting and adding objects to the vector layer no longer work.
Here is the code I put together from the OpenLayers examples. First, an image of how it should look:
I extracted the coordinates on the map with the drawend event and built a REST interface from which I can load the geometric data. This is the response I get for the vector layer source:
{
    "type": "FeatureCollection",
    "crs": {
        "type": "name",
        "properties": {
            "name": "EPSG:2000"
        }
    },
    "features": [
        {
            "type": "Feature",
            "id": "1",
            "properties": {
                "name": "TEST1"
            },
            "geometry": {
                "type": "LineString",
                "coordinates": [
                    [-5920506.46285661, 1533632.5355137766],
                    [-1882185.384494179, 234814.55089206155]
                ]
            }
        }
    ]
}
But if I load this, nothing is displayed, and it is no longer possible to draw on the layer (if I remove the "source" attribute from the vector layer, it works again).
Here is the complete code on pastebin: Example Code
I fixed the problem in the meantime. I was trying to load GeoJSON into my vector layer, but the vector layer was always broken afterwards. As I was pretty new to OpenLayers, I didn't notice my mistakes: I had syntax errors in my GeoJSON, and the coordinates I supplied were also wrong.
After correcting the syntax and the coordinates, everything works as intended, and as mentioned above, EPSG:3857 was also the right projection to use. Sorry for my first messy experiences with OpenLayers, but thanks for the friendly help ;)
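Syntax and coordinate errors like these can be caught before handing the file to OpenLayers. A small validation sketch using only the standard library (the function name and checks are my own; it verifies the text parses as JSON and that LineString coordinates fall inside the EPSG:3857 projected extent):

```python
import json

# EPSG:3857 (Web Mercator) coordinates are meters, bounded by the
# projected extent of roughly +/- 20037508.34 on both axes.
MERC = 20037508.342789244

def check_feature_collection(text):
    """Return a list of problems found in a GeoJSON FeatureCollection string."""
    try:
        fc = json.loads(text)  # catches pure syntax errors
    except ValueError as e:
        return ["not valid JSON: {}".format(e)]
    problems = []
    if fc.get("type") != "FeatureCollection":
        problems.append("type is not FeatureCollection")
    for i, feat in enumerate(fc.get("features", [])):
        geom = feat.get("geometry") or {}
        if geom.get("type") == "LineString":
            for x, y in geom.get("coordinates", []):
                if abs(x) > MERC or abs(y) > MERC:
                    problems.append(
                        "feature {}: coordinate outside EPSG:3857 bounds".format(i))
    return problems
```

An empty list means the response is at least structurally plausible; a crs of "EPSG:2000" paired with Web Mercator meter values, as in the response above, is exactly the kind of mismatch this sort of sanity check helps surface.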
