Is it true that CKAN DataStore can deal with GeoJSON? I've not seen any reference to it in the documentation, except for this link about the DataStore Map visualization, which says:
Shows data stored on the DataStore in an interactive map. It supports plotting markers from a pair of latitude / longitude fields or from a field containing a GeoJSON representation of the geometries.
Thus, I'm supposing GeoJSON is accepted in DataStore columns. Anyway, I've not found any GeoJSON CKAN type, so, again, I'm guessing the plain JSON type must be used for this purpose.
Can anybody confirm this? Thanks!
EDIT 1
I've created a resource, a DataStore, and a "recline_map_view" associated with the resource. Then I've upserted a value, which is shown by this datastore_search operation:
$ curl -X POST "https://host:port/api/3/action/datastore_search" -d '{"resource_id":"14418d40-de42-4fdd-84f7-3c51244c7469"}' -H "Authorization: xxx" -k
{"help": "https://host:port/api/3/action/help_show?name=datastore_search", "success": true, "result": {"resource_id": "14418d40-de42-4fdd-84f7-3c51244c7469", "fields": [{"type": "int4", "id": "_id"}, {"type": "text", "id": "label"}, {"type": "json", "id": "geojson"}], "records": [{"_id": 1, "geojson": {"type": "Point", "coordinates": [48.856699999999996, 2.3508]}, "label": "Paris"}], "_links": {"start": "/api/3/action/datastore_search", "next": "/api/3/action/datastore_search?offset=100"}, "total": 1}}
Nevertheless, nothing is shown in CKAN :(
EDIT 2
It was a problem with my CKAN. I've tested Ifurini's solution at demo.ckan.org and it works.
GeoJSON is just a (particular kind of) JSON, so it does not get any special treatment as a database field.
So, you can create a resource with a GeoJSON field from a simple CSV file like this:
Name,Position
"Paris","{""type"":""Point"",""coordinates"":[2.3508,48.8567]}"
(note the double double quotes "" instead of just a single double quote ")
If you call the column "GeoJSON" (or "geojson", "gEoJsOn", etc., as capitalization is not important) the Map View will automatically use that field to mark the data in the map, instead of just letting you manually select which field to use.
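For completeness, here is a minimal Python sketch of uploading such a CSV through the CKAN API. The dataset name "my-dataset", the resource name, and the API key are placeholders, not values from the question:

# A minimal sketch, assuming a dataset "my-dataset" and API key "xxx"
# (both placeholders). The CSV has a "GeoJSON" column so the Map View
# can pick it up automatically.
import requests

csv_body = (
    'Name,GeoJSON\n'
    '"Paris","{""type"":""Point"",""coordinates"":[2.3508,48.8567]}"\n'
)

resp = requests.post(
    "https://demo.ckan.org/api/3/action/resource_create",
    headers={"Authorization": "xxx"},
    data={"package_id": "my-dataset", "name": "cities", "format": "CSV"},
    files={"upload": ("cities.csv", csv_body)},
)
print(resp.json()["success"])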
Related
I have a Rails app and I'm trying to configure logging to Graylog. The pipeline consists of the following steps:
1) Logs are written to a file in JSON format by the SemanticLogger gem. Each log message consists of header info (first-level tags) and a payload with several levels of hierarchy:
{
  "tag": "mortgage",
  "app": "sneakers",
  "pid": 3448,
  "env": "production",
  "host": "thesaurus-mortgage",
  "thread": "91090300",
  "level": "info",
  "name": "Sneakers",
  "payload": {
    "class": "EgrnListenerWorker",
    "method": "work",
    "json": {
      "resource": "kontur",
      "action": "request_egrn_done",
      "system_code": "thesaurus",
      "id": 35883717,
      "project_id": "mortgage",
      "bank_id": "ab",
      "params": {
        "egrn": {
          "zip": "rosreestr/kontur/kontur_4288_2018-10-11_021848.zip",
          "pdf": "rosreestr/kontur/kontur_4288_2018-10-11_021848.pdf",
          "xml": "rosreestr/kontur/kontur_4288_2018-10-11_021848.xml"
        },
        "code": "SUCCESS"
      }
    },
    "valid_json": true
  },
  "created_at": "2018-10-11T17:44:58.262+00:00"
}
2) The file is read by the Filebeat service and sent to Graylog.
Graylog does not parse the payload contents correctly:
the keys are concatenated into one string in the manner key1=value1:key2=value2. This is not what I expected. It would be perfect if I could get Graylog to parse the contents of payload into separate fields named payload.key1, payload.key2, and so on (so I could search on those fields).
P.S.: my log data is heterogeneous, i.e. the payload contents depend on the functionality that produced them, so I expect there will be a huge number of different fields of the kind payload.xxxxx. Is that OK?
This isn't exactly a Filebeat question, since Filebeat only ships the logs in their original JSON format (compressed, if wanted).
From the Graylog Website: http://docs.graylog.org/en/2.4/pages/extractors.html
Using the JSON extractor
Since version 1.2, Graylog also supports extracting data from messages sent in JSON format.
Using the JSON extractor is easy: once a Graylog input receives messages in JSON format, you can create an extractor by going to System -> Inputs and clicking on the Manage extractors button for that input. Next, you need to load a message to extract data from, and select the field containing the JSON document. The following page lets you add some extra information to tell Graylog how it should extract the information.
This should get you going.
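To illustrate the target result rather than Graylog internals, here is a minimal Python sketch of the kind of flattening the question asks for, turning nested keys into dotted field names such as payload.json.resource (the log file path is hypothetical):

import json

def flatten(obj, prefix=""):
    """Recursively turn nested dicts into dotted top-level keys."""
    fields = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            fields.update(flatten(value, f"{name}."))
        else:
            fields[name] = value
    return fields

# hypothetical path to the SemanticLogger output file
with open("/var/log/app/production.json") as f:
    record = json.loads(f.readline())

fields = flatten(record)
print(fields["payload.json.resource"])  # -> "kontur"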
I'm stuck on an issue that is probably my own fault, but I can't work out the solution.
I'm using a standalone installation of the Confluent Platform (4.0.0 open source version) to demonstrate how to adopt the platform for a specific use case.
While trying to demonstrate the value of the Schema Registry, I'm facing the following issue when posting a new schema with Postman.
The request is:
URL: http://host:8081/subjects/test/versions
Method: POST
Headers: Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
Content-Type: application/json
Body:
{"schema":"{{\"namespace\":\"com.testlab\",\"name\":\"test\",\"type\":\"record\",\"fields\":[{\"name\":\"resourcepath\",\"type\":\"string\"},{\"name\":\"resource\",\"type\":\"string\"}]}}" }
The response is: {"error_code":42201,"message":"Input schema is an invalid Avro schema"}
I've looked at the docs and googled a lot, but I'm out of options.
Any suggestions?
Thanks for your time
R.
You have an extra {} around the schema field.
One way to test this is with jq.
Before
$ echo '{"schema":"{{\"namespace\":\"com.testlab\",\"name\":\"test\",\"type\":\"record\",\"fields\":[{\"name\":\"resourcepath\",\"type\":\"string\"},{\"name\":\"resource\",\"type\":\"string\"}]}}" }' | jq '.schema|fromjson'
jq: error (at <stdin>:1): Objects must consist of key:value pairs at line 1, column 146 (while parsing '{{"namespace":"com.testlab","name":"test","type":"record","fields":[{"name":"resourcepath","type":"string"},{"name":"resource","type":"string"}]}}')
After
$ echo '{"schema":"{\"namespace\":\"com.testlab\",\"name\":\"test\",\"type\":\"record\",\"fields\":[{\"name\":\"resourcepath\",\"type\":\"string\"},{\"name\":\"resource\",\"type\":\"string\"}]}" }' | jq '.schema|fromjson'
{
"namespace": "com.testlab",
"name": "test",
"type": "record",
"fields": [
{
"name": "resourcepath",
"type": "string"
},
{
"name": "resource",
"type": "string"
}
]
}
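To avoid hand-escaping the quotes at all, you can let a script serialize the schema for you. A minimal sketch with Python's requests library, using the host, port, and subject from the question:

# A minimal sketch: json.dumps() produces the escaped "schema" string,
# so no manual quoting (and no stray braces) is needed.
import json
import requests

schema = {
    "namespace": "com.testlab",
    "name": "test",
    "type": "record",
    "fields": [
        {"name": "resourcepath", "type": "string"},
        {"name": "resource", "type": "string"},
    ],
}

resp = requests.post(
    "http://host:8081/subjects/test/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    data=json.dumps({"schema": json.dumps(schema)}),
)
print(resp.status_code, resp.json())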
See my comment here about importing AVSC files so that you don't need to type out the JSON on the CLI.
How do I connect my database to API.AI
Making every sentence into an intent and creating entities for each doesn't seem like a good idea. So what is the best way to go about this?
As far as I know it is not possible yet, but you can switch to raw mode and paste your entities in CSV or JSON format, or import a JSON/CSV file containing all your entities.
The file should look like this (JSON format):
[
  {
    "value": "val1",
    "synonyms": [
      "syn1",
      "syn2"
    ]
  },
  {
    "value": "val2",
    "synonyms": [
      "syn21",
      "syn22"
    ]
  }
]
So you can imagine writing a small job that reads the entities from your DB and produces a JSON/CSV file in the expected format, as sketched below.
Once the job is done, this process may dramatically facilitate the creation of your entities on api.ai.
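A minimal sketch of such an export job; the SQLite file, the "entities" table, and its "value" and comma-separated "synonyms" columns are made up for illustration:

import json
import sqlite3

# hypothetical database and schema: a table "entities" with a "value"
# column and a comma-separated "synonyms" column
conn = sqlite3.connect("app.db")
rows = conn.execute("SELECT value, synonyms FROM entities").fetchall()

entries = [
    {"value": value, "synonyms": synonyms.split(",")}
    for value, synonyms in rows
]

with open("entities.json", "w") as f:
    json.dump(entries, f, indent=2)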
If you use a webhook for an intent, you can pass params through to your endpoint, where you can run all the queries against your DB.
I did a demo where I was querying news (cheating, as I was getting them from the web, but I could have plugged in a DB).
It was getting requests such as:
"What are the latest news about France"
latest and France would be params that I send through to the webhook endpoint.
You would get the following JSON sent to your endpoint by API.AI:
"result": {
"source": "agent",
"resolvedQuery": "latest news about France",
"action": "show.news",
"actionIncomplete": false,
"parameters": {
"adjective": "latest",
"subject": "France"
}
Then you can query all the news for France and order them by latest, along the lines of the sketch below.
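A hypothetical webhook sketch in Python with Flask; the route name and the query_news helper are placeholders, and the "speech"/"displayText" fields follow API.AI's v1 webhook response format:

from flask import Flask, jsonify, request

app = Flask(__name__)

def query_news(subject, adjective):
    # placeholder for a real DB query, e.g.
    # SELECT headline FROM news WHERE subject = ? ORDER BY published_at DESC
    return f"Here is the {adjective} news about {subject}..."

@app.route("/webhook", methods=["POST"])
def webhook():
    # API.AI POSTs the JSON shown above; pull the params out of "result"
    params = request.get_json()["result"]["parameters"]
    adjective = params.get("adjective")  # e.g. "latest"
    subject = params.get("subject")      # e.g. "France"
    text = query_news(subject, adjective)
    return jsonify({"speech": text, "displayText": text})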
As I understand it, the idea is to create entities that are "placeholders" for the values you need to query.
Then you teach the AI with a few examples by tagging, in the request, what the person asked for. Let's say someone asks:
"what is the oldest news about France?"
The AI may not know what oldest is, so you tell it that it is an adjective, and from then on you can get oldest as a param.
I need to get the name of the highway on which the user is currently navigating.
That can be done in navigation mode, by getting it from
-(void)routingService:(SKRoutingService *)routingService didChangeCurrentStreetName:(NSString *)currentStreetName streetType:(SKStreetType)streetType countryCode:(NSString *)countryCode
So, when I was testing my app yesterday, I was on a highway, and yes, Skobbler did recognise that I was on one, and yes, I got the highway name back.
It was "Brooklyn-Queens Expressway".
But Brooklyn-Queens Expressway is actually the name of the I-278 Interstate highway, and all the functions I will have to use later need the highway name in the I-nnn format.
So, is there a way to get the streetName in that I-nnn format when the streetType is recognised as an interstate highway?
Or is there any OpenStreetMap database we could consult? I wasn't able to find anything on the OSM wiki.
I don't know about the Skobbler SDK, but if an online query is an option and you have the approximate geographical area and the name of the motorway, you can use the Overpass API (http://wiki.openstreetmap.org/wiki/Overpass_API) to query the OpenStreetMap database for the highway reference.
For example, the following query (for a particular bbox which contains a small section of the highway):
[out:json][timeout:25];
(
  way
    ["highway"="motorway"]
    ["name"="Brooklyn-Queens Expressway"]
    (40.73483602685421,-73.91463160514832,40.73785205632046,-73.9096748828888);
);
out body qt;
returns (with some key-value pairs omitted for simplicity):
{
  "version": 0.6,
  "generator": "Overpass API",
  "osm3s": {
    "timestamp_osm_base": "2015-09-18T20:21:02Z",
    "copyright": "The data included in this document is from www.openstreetmap.org. The data is made available under ODbL."
  },
  "elements": [
    {
      "type": "way",
      "id": 46723482,
      "nodes": [
        488264429,
        488264444,
        488264461,
        488264512,
        488264530,
        488264541,
        597315979
      ],
      "tags": {
        "bicycle": "no",
        "bridge": "yes",
        "foot": "no",
        "hgv": "designated",
        "highway": "motorway",
        "horse": "no",
        "lanes": "3",
        "layer": "1",
        "name": "Brooklyn-Queens Expressway",
        "oneway": "yes",
        "ref": "I 278",
        "sidewalk": "none"
      }
    },
    {
      "type": "way",
      "id": 46724225,
      "nodes": [
        597315978,
        488242888,
        488248526,
        488248544,
        488248607
      ],
      "tags": {
        "bicycle": "no",
        "bridge": "yes",
        "foot": "no",
        "hgv": "designated",
        "highway": "motorway",
        "horse": "no",
        "lanes": "3",
        "layer": "1",
        "name": "Brooklyn-Queens Expressway",
        "oneway": "yes",
        "ref": "I 278",
        "sidewalk": "none"
      }
    }
  ]
}
These are two sections of the road in the OSM database. In the US, the "ref" tag for interstates is in the form "I XXX" (see http://wiki.openstreetmap.org/wiki/Interstate_Highways and note the format for co-location). You can retrieve the interstate name accordingly.
You can try the above query in overpass turbo (a UI for the service) at http://overpass-turbo.eu/s/bxi (press RUN and then the DATA tab for the returned data, and pan the map to query another bbox).
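If you want to automate this instead of using the UI, here is a minimal Python sketch running the same query over HTTP; the public endpoint URL is an assumption (any Overpass mirror should work):

import requests

query = """
[out:json][timeout:25];
(
  way["highway"="motorway"]["name"="Brooklyn-Queens Expressway"]
    (40.73483602685421,-73.91463160514832,40.73785205632046,-73.9096748828888);
);
out body qt;
"""

resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
refs = {
    way["tags"]["ref"]
    for way in resp.json()["elements"]
    if "ref" in way.get("tags", {})
}
print(refs)  # expected: {"I 278"}, which maps to the I-278 interstate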
The "ref" information is not exposed in the SDK (will put this on the TODO list).
A workaround would be to look at the text advices (when using TTS), as this information is there (the $ref parameter contains the information you are looking for).
For more details regarding the text advices structure, see this blog article.
I am trying to create a course in a semester through the API in Valence D2L. I keep getting a 404 Not Found error, both in my program and in the "Getting Started" application. The call I am making is a POST to /d2l/api/lp/1.0/courses/. I pass the following JSON object along with it:
{
  "Name": "COMM291 - Test A",
  "Code": "C-COMM291",
  "Path": "/enforced/C-COMM291/",
  "CourseTemplateId": 20992,
  "SemesterId": 20993,
  "StartDate": "2013-08-22T19:41:14.0983532Z",
  "EndDate": "2013-08-27T19:41:14.0993532Z",
  "LocaleId": 4105,
  "ForceLocale": false,
  "ShowAddressBook": false
}
I have also tried passing null for the fields that say they accept null values, but no luck. The course template and semester IDs are correct: I have triple-checked that they exist, that I am enrolled in them, and that I am using the correct ID numbers.
Try reducing the precision of your start and end dates to three decimals after the decimal point (e.g., "2013-08-22T19:41:14.0983532Z" becomes "2013-08-22T19:41:14.098Z").
If your org is configured to automatically enforce and generate paths for course offerings, then you should not provide one in your CreateCourseOffering block at all. The following structure works on our test instance; notice the empty string for Path (it shouldn't be null but an empty string, I believe):
{ "Name": "Extensibility 104",
"Code": "EXT-104",
"Path": "",
"CourseTemplateId": 8082,
"SemesterId": 6984,
"StartDate": "2013-09-01T19:41:14.098Z",
"EndDate": "2013-12-27T19:41:14.098Z",
"LocaleId": 1,
"ForceLocale": false,
"ShowAddressBook": false }
The other thing to note is that if your CreateCourse form doesn't have a form element to provide a Semester ID, then your API call should pass null for that property.
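For what it's worth, here is a minimal Python sketch of that POST. Valence requires signed URLs produced by its auth SDK, so the signed_url helper below is a stand-in for whatever your client library provides, and the host and query string are placeholders:

import requests

def signed_url(route):
    # stand-in for the Valence auth SDK's URL-signing step; the host
    # and the x_a/x_b query parameters here are placeholders
    return "https://myinstance.desire2learn.com" + route + "?x_a=...&x_b=..."

course = {
    "Name": "Extensibility 104",
    "Code": "EXT-104",
    "Path": "",
    "CourseTemplateId": 8082,
    "SemesterId": 6984,
    "StartDate": "2013-09-01T19:41:14.098Z",
    "EndDate": "2013-12-27T19:41:14.098Z",
    "LocaleId": 1,
    "ForceLocale": False,
    "ShowAddressBook": False,
}

resp = requests.post(signed_url("/d2l/api/lp/1.3/courses/"), json=course)
print(resp.status_code, resp.json())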
I found that part of my problem was with the call: if I change it to /d2l/api/lp/1.3/courses/ instead of 1.0, it works (1.0 will work, but it seems you can only pass null for the semester).
The dates were also picky: they prefer milliseconds limited to three decimal places.
Passing null for LocaleId also helped.