I am working with Bing Maps but am fairly new to spatial data types. I have managed to get the GeoJSON data for a shape from Bing Maps, for example:
{"type":"MultiPolygon","coordinates":[[[[30.86202,-17.85882],[30.93311,-17.89084],[30.90701,-17.92073],[30.87112,-17.90048],[30.86202,-17.85882],[30.86202,-17.85882],[30.86202,-17.85882]]]]}
However, I need to save this as DbGeometry in SQL. How can I convert GeoJSON to DbGeometry?
Option 1.
Convert the GeoJSON to WKT and then use STGeomFromText to create the Db object.
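For the sample MultiPolygon above, a minimal T-SQL sketch (in WKT, coordinate pairs are space-separated and the repeated closing points collapse into a single ring closure):
declare @wkt nvarchar(max) = 'MULTIPOLYGON (((30.86202 -17.85882, 30.93311 -17.89084, 30.90701 -17.92073, 30.87112 -17.90048, 30.86202 -17.85882)))';
-- geometry is orientation-agnostic, so the ring direction from the GeoJSON does not matter here
select geometry::STGeomFromText(@wkt, 4326);
-- on the .NET side, DbGeometry.FromText(wkt, 4326) accepts the same WKT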
Option 2.
Deserialize the GeoJSON using GeoJSON.Net and then use the NuGet package GeoJSON.Net.Contrib.MsSqlSpatial to convert to a Db object, e.g.:
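// 'point' is a GeoJSON.Net Point object deserialized from the GeoJSON payload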
DbGeography dbGeographyPoint = point.ToDbGeography();
Option 3.
For some types of GeoJSON data, a variation of the following approach can be used:
drop table if exists BikeShare
create table BikeShare(
id int identity primary key,
position Geography,
ObjectId int,
Address nvarchar(200),
Bikes int,
Docks int )
declare @bikeShares nvarchar(max) =
'{"type":"FeatureCollection",
"features":[{"type":"Feature",
"id":"56679924",
"geometry":{"type":"Point",
"coordinates":[-77.0592213018017,38.90222845310455]},
"properties":{"OBJECTID":56679924,"ID":72,
"ADDRESS":"Georgetown Harbor / 30th St NW",
"TERMINAL_NUMBER":"31215",
"LATITUDE":38.902221,"LONGITUDE":-77.059219,
"INSTALLED":"YES","LOCKED":"NO",
"INSTALL_DATE":"2010-10-05T13:43:00.000Z",
"REMOVAL_DATE":null,
"TEMPORARY_INSTALL":"NO",
"NUMBER_OF_BIKES":15,
"NUMBER_OF_EMPTY_DOCKS":4,
"X":394863.27537199,"Y":137153.4794371,
"SE_ANNO_CAD_DATA":null}
},
......'
-- NOTE: This GeoJSON is truncated.
-- Copy full example from https://github.com/Esri/geojson-layer-js/blob/master/data/dc-bike-share.json
INSERT INTO BikeShare(position, ObjectId, Address, Bikes, Docks)
SELECT geography::STGeomFromText('POINT ('+long + ' ' + lat + ')', 4326),
ObjectId, Address, Bikes, Docks
from OPENJSON(@bikeShares, '$.features')
WITH (
long varchar(100) '$.geometry.coordinates[0]',
lat varchar(100) '$.geometry.coordinates[1]',
ObjectId int '$.properties.OBJECTID',
Address nvarchar(200) '$.properties.ADDRESS',
Bikes int '$.properties.NUMBER_OF_BIKES',
Docks int '$.properties.NUMBER_OF_EMPTY_DOCKS' )
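Once loaded, the geography column supports spatial predicates. For example, to find stations within 500 meters of a point (a sketch; geography::Point takes latitude before longitude, and STDistance returns meters for geography):
SELECT Address, Bikes, Docks
FROM BikeShare
WHERE position.STDistance(geography::Point(38.90222, -77.05922, 4326)) <= 500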
I suggest trying Option 2 first.
Note: Consider Geography instead of Geometry if you are using the GCS_WGS_1984 coordinate system, as Bing Maps does.
Instead of retrieving GeoJSON from Bing Maps, retrieve Well Known Text (WKT): https://www.bing.com/api/maps/sdk/mapcontrol/isdk/wktwritetowkt
Send this to your backend and then use geometry::STGeomFromText: https://learn.microsoft.com/en-us/sql/t-sql/spatial-geometry/stgeomfromtext-geometry-data-type?view=sql-server-ver15
Note that DbGeometry does not support spatially accurate calculations. Consider storing the data as a DbGeography instead, using geography::STGeomFromText https://learn.microsoft.com/en-us/sql/t-sql/spatial-geography/stgeomfromtext-geography-data-type?view=sql-server-ver15 and passing 4326 as the SRID.
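A minimal sketch, assuming the WKT string arrives from the map control as a parameter (SQL Server's geography WKT expects longitude before latitude):
declare @wkt nvarchar(max) = 'POINT (30.86202 -17.85882)';
declare @g geography = geography::STGeomFromText(@wkt, 4326);
select @g.STAsText();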
Related
Is there any way to delete columns of an InfluxDB time series? We accidentally injected data using the wrong data type (int instead of float).
Alternatively, is there a way to change the data type instead?
Unfortunately, there is no way to delete a "column" (i.e. a tag or a field) from an InfluxDB measurement so far. There is an open feature request for this, but no ETA yet.
Three workarounds:
use SELECT INTO to copy the desirable data into a different measurement, excluding the undesirable "columns". e.g.:
SELECT desirableTag1, desirableTag2, desirableField1, desirableField2 INTO new_measurement FROM measurement
use CAST operations to "change the data type" from float to int while copying (only fields can be cast; tags are always strings). e.g.:
SELECT desirableTag1, desirableTag2, desirableField1, desirableField2, undesirableField3::integer INTO new_measurement FROM measurement
"Update" the data with insert statement, which will overwrite the data with the same timestamp, same tags, same field keys. Keep all other things equal, except that the "columns" that you would like to update. To make the data in integer data type, remember to put a trailing i on the number. Example: 42i. e.g.:
insert measurement,desirableTag1=v1 desirableField1=fv1,desirableField2=fv2,undesirableField1=someValueA-i 1505799797664800000
insert measurement,desirableTag1=v21 desirableField1=fv21,desirableField2=fv22,undesirableField1=someValueB-i 1505799797664800000
I'm trying to convert a STRING column to a GEOGRAPHY type in order to join two tables in BigQuery. I'm not very familiar with BigQuery, so any help will be greatly appreciated.
My 1st table is known as DealerInfo with the following data types:
DealerId - STRING
Geographies - GEOGRAPHY
My 2nd table is known as Cities:
City_name- STRING
Geometry - STRING
The Geometry string looks something like this:
POLYGON ((-28.855180740356388 -20.470634460449162,-28.855382919311467
-20.470634460449162,-28.855499267578125 -20.470554351806527,
-28.855585098266545 -20.470428466796818,-28.855623245239258
-20.470342636108398,-28.855686187744141 -20.470207214355469,
-28.855756759643498 -20.470096588134766,-28.855855941772404
-20.470016479492188,-28.856058120727482 -20.469953536987305,
-28.856224060058594 -20.469875335693303,-28.856462478637695
-20.469699859619141,-28.856561660766602 -20.46949577331543,
-28.856527328491154 -20.469383239746037,-28.856527328491154
-20.469097137451115,-28.856479644775334 -20.468908309936523,
-28.856412887573185 -20.468816757202148,-28.856327056884766
-20.468719482421818,-28.856176376342773 -20.468656539916935,
-28.855871200561467 -20.46859169006342,-28.855756759643498
-20.46859169006342,-28.855636596679574 -20.468561172485295,
-28.855499267578125 -20.468547821044808,-28.855382919311467
-20.468450546264648,-28.855401992797852 -20.468360900878793,
-28.855417251586914 -20.468278884887638,-28.8554668426513…
The Geographies column looks exactly the same, but it's a GEOGRAPHY data type, so I'm unable to join the tables.
Any advice/code will be greatly appreciated!
Here is the code I used to try and convert the STRING to GEOGRAPHY in order to join the two tables:
SELECT
ST_MAKEPOLYGON(geometry) AS NEW_GEO
FROM `DWH.CASE1.BrazilCitiesGeographies`
LIMIT 1000
To convert a string containing a POLYGON to a BigQuery GEOGRAPHY type, use the ST_GEOGFROMTEXT() function, as in the example below:
SELECT
ST_GEOGFROMTEXT(geometry) AS NEW_GEO
FROM `DWH.CASE1.BrazilCitiesGeographies`
LIMIT 1000
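The converted column can then be used in the join. A sketch, assuming the shapes really are identical so ST_EQUALS can match them (the fully qualified table paths here are placeholders):
SELECT d.DealerId, c.City_name
FROM `DWH.CASE1.DealerInfo` d
JOIN `DWH.CASE1.Cities` c
  ON ST_EQUALS(d.Geographies, ST_GEOGFROMTEXT(c.Geometry))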
I have the following Entity model (name and data type) in Core Data:
Item - String
Timestamp - Date
Value - Double
Is there any way to fetch the equivalent of "select date(timestamp) as date_value, sum(value) from Entity group by date_value, value"?
Googling on groupings doesn't explain very clearly how to do this with the Date type. I tried a workaround of adding a new column called date_string holding a formatted String version of the Timestamp column, like '2018-03-19', but it doesn't seem very elegant and doesn't really work.
With Core Data you don't do this while fetching; rather, you fetch all relevant objects and then do the grouping and summation in your code.
I would use a dictionary of dictionaries for this: the item property as the key of the outer dictionary, and an inner dictionary with the date (as a string) as key and the summed double as value (in Swift terms, [String: [String: Double]]). Alternatively, create a summation struct and use it in a single dictionary.
I am using InfluxDB and its line protocol to insert a large set of data into the database. The data I am getting is in the form of key-value pairs, where the key is a long string containing hierarchical data and the value is a simple integer.
Sample key-value data:
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/allocations
value = 500
/path/units/unit/subunits/subunit[name\='NAME2']/memory/chip/application/filter/allocations
value = 100
(Note: the name is NAME2)
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/free
value = 700
(Note: the leaf is 'free' instead of 'allocations')
/path/units/unit/subunits/subunit[name\='NAME2']/memory/graphics/application/filter/swap
value = 600
(Note: 'graphics' instead of 'chip' in the path)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/size
value = 400
(Note: a different path, but identical up to the subunit)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free
value = 100
(Note: the same path, but the last element is different)
Below is the line protocol I am using to insert data:
interface,Key=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free valueData=500
I am using one measurement, namely interface, with one tag and one field set. But this DB design is causing issues when querying the data.
How can I design the database so that I can run queries like "get all records for the subunit where name = NAME1" or "get all size data for every hard disk"?
Thanks in advance.
The schema I'd recommend is the following:
interface,filename=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free value=500
Where filename is a tag and value is the field.
Given that the cardinality of filename is in the thousands, this schema should work well.
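With filename as a tag, both example queries from the question become tag filters. A sketch using InfluxQL regex matching on the tag (the patterns are illustrative):
SELECT * FROM interface WHERE filename =~ /NAME1/
SELECT * FROM interface WHERE filename =~ /harddisk\/data\/size/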
I was trying out some Hive optimization features and ran into the following problem:
I cannot get bucket map join to work in Hive 0.12. With all the settings I tried below, only one hashtable file is generated and the join turns out to be just a map join.
I have two tables, both in RCFile format and both bucketed into 10 buckets. They are created and populated as follows (the original data was generated from TPC-H):
hive> create table lsm (l_orderkey int, l_partkey int, l_suppkey int, l_linenumber int, l_quantity double, l_extendedprice double, l_discount double, l_tax double, l_returnflag string, l_linestatus string, l_shipdate string, l_commitdate string, l_receiptdate string, l_shipstruct string, l_shipmode string, l_comment string) clustered by (l_orderkey) into 10 buckets stored as rcfile;
hive> create table osm (o_orderkey int, o_custkey int) clustered by (o_orderkey) into 10 buckets stored as rcfile;
hive> set hive.enforce.bucketing=true;
hive> insert overwrite table lsm select * from orili;
hive> insert overwrite table osm select o_orderkey, o_custkey from orior;
I can query both tables' data normally; lsm is 790 MB and osm is 11 MB, each in 10 bucket files. Then I want to try a bucket map join:
hive> set hive.auto.convert.join=true;
hive> set hive.optimize.bucketmapjoin=true;
hive> set hive.enforce.bucketmapjoin=true;
hive> set hive.auto.convert.join.noconditionaltask=true;
hive> set hive.auto.convert.join.noconditionaltask.size=1000000000000000;
hive> set hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
and my query is as follows:
hive> select /*+ Mapjoin(osm) */ osm.o_orderkey, lsm.* from osm join lsm on osm.o_orderkey = lsm.l_orderkey;
This query generates only one hashtable for osm and falls back to a map join, which really confuses me. Do I have all the hints and settings needed to enable the bucket map join feature, or is there a problem in my query?
Short version:
Setting
hive> set hive.ignore.mapjoin.hint=false;
will make bucket map join work as expected: each of the small table's 10 bucket files is built into a hashtable and hash-joined against the corresponding bucket of the big table.
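Putting it together, a minimal session that should trigger the bucket map join (a sketch combining the setting above with the relevant originals):
hive> set hive.ignore.mapjoin.hint=false;
hive> set hive.optimize.bucketmapjoin=true;
hive> set hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
hive> select /*+ MAPJOIN(osm) */ osm.o_orderkey, lsm.* from osm join lsm on osm.o_orderkey = lsm.l_orderkey;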
Longer version:
I dove into the Hive 0.12 code and found hive.ignore.mapjoin.hint in HiveConf.java: it defaults to true, which means the /*+ MAPJOIN */ hint is deliberately ignored. Hive has two optimization phases, logical and physical, and both are rule-based.
Logical optimization: here the map join optimization runs before the bucket map join optimization; the latter takes a MapJoin operator tree as input and converts it into a BucketMapJoin. A hinted query is therefore first transformed into a map join and then into a bucket map join. With the hint ignored, the logical optimizer performs no join optimization on the join tree at all.
Physical optimization: here hive.auto.convert.join is tested and MapJoinResolver is used, which merely converts a reduce-side join into a MapJoin; there are no further BucketMapJoin optimization rules in this phase. That's why I got only a map join in my question.