I am using InfluxDB and I want to enforce some kind of schema validation.
I had a problem where InfluxDB had learned a field with the wrong type due to a developer mistake. As a result, once we sent the right type, InfluxDB wouldn't persist it because it had already registered the field as another type.
Can I force field types such as String, Integer and Double?
I use Java
Regards,
Ido
Unfortunately we had to wait a long time, but this feature has finally arrived in a newer release.
Starting from InfluxDB v2.4, you can create a bucket (the new name for a database in InfluxDB 2.x) with an explicit schema. That is:
Create a bucket with an explicit schema (see more details here)
influx bucket create \
--name my_schema_bucket \
--schema-type explicit
Add measurement schemas to your bucket (see more details here)
influx bucket-schema create \
--bucket my_schema_bucket \
--name temperature \
--columns-file columns.csv
where that columns.csv plays the role of a DDL:
{"name": "time", "type": "timestamp"}
{"name": "alert", "type": "field", "dataType": "string"}
{"name": "cdi", "type": "field", "dataType": "float"}
You could refer to this blog as well.
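As a quick sanity check from the CLI (a minimal sketch, assuming the bucket and schema above and an influx CLI already configured with your org and token): a write that matches the declared types is accepted, while one that sends alert as an integer should now be rejected up front instead of silently locking in the wrong type.
influx write --bucket my_schema_bucket 'temperature alert="ok",cdi=1.5'
influx write --bucket my_schema_bucket 'temperature alert=1i,cdi=1.5'
The first line conforms to the schema; the second should fail with a schema validation error because alert is declared as a string.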
Related
I know I only come to you with problems, but I'm stuck on an issue that is probably my own fault, and I can't work out what the solution is.
I'm using a standalone installation of the Confluent platform (4.0.0 open source version) in order to demonstrate how to adopt the platform for a specific use case.
While trying to demonstrate the value of the Schema Registry, I'm facing the following issue when posting a new schema with Postman.
The request is:
POST http://host:8081/subjects/test/versions
Headers:
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
Content-Type: application/json
Body:
{"schema":"{{\"namespace\":\"com.testlab\",\"name\":\"test\",\"type\":\"record\",\"fields\":[{\"name\":\"resourcepath\",\"type\":\"string\"},{\"name\":\"resource\",\"type\":\"string\"}]}}" }
The response is: {"error_code":42201,"message":"Input schema is an invalid Avro schema"}
I've looked at the docs and googled a lot, but I'm out of options.
Any suggestions?
Thanks for your time
R.
You have an extra pair of braces ({}) around the value of the schema field.
One way to test this is with jq:
Before
$ echo '{"schema":"{{\"namespace\":\"com.testlab\",\"name\":\"test\",\"type\":\"record\",\"fields\":[{\"name\":\"resourcepath\",\"type\":\"string\"},{\"name\":\"resource\",\"type\":\"string\"}]}}" }' | jq '.schema|fromjson'
jq: error (at <stdin>:1): Objects must consist of key:value pairs at line 1, column 146 (while parsing '{{"namespace":"com.testlab","name":"test","type":"record","fields":[{"name":"resourcepath","type":"string"},{"name":"resource","type":"string"}]}}')
After
$ echo '{"schema":"{\"namespace\":\"com.testlab\",\"name\":\"test\",\"type\":\"record\",\"fields\":[{\"name\":\"resourcepath\",\"type\":\"string\"},{\"name\":\"resource\",\"type\":\"string\"}]}" }' | jq '.schema|fromjson'
{
"namespace": "com.testlab",
"name": "test",
"type": "record",
"fields": [
{
"name": "resourcepath",
"type": "string"
},
{
"name": "resource",
"type": "string"
}
]
}
See my comment here about importing AVSC files so that you don't need to type out the JSON on the CLI.
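For example, a small sketch of that approach (assuming the Avro schema sits in a local file I'm calling test.avsc): let jq do the escaping and pipe the result straight into curl, so the extra braces can never sneak in by hand.
jq -n --slurpfile s test.avsc '{schema: ($s[0] | tojson)}' \
  | curl -X POST http://host:8081/subjects/test/versions \
      -H "Content-Type: application/vnd.schemaregistry.v1+json" \
      -d @-
Here tojson serializes the schema into a correctly escaped JSON string with a single pair of braces.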
I've searched in the documentation and on the message boards, but was unable to find an answer to the following.
I'm trying to import data from MQTT into ThingsBoard, using the IoT Gateway.
The documentation outlines how to configure the IoT Gateway to import JSON-formatted data:
{
"topicFilter": "sensors",
"converter": {
"type": "json",
"filterExpression": "",
"deviceNameJsonExpression": "${$.serialNumber}",
"attributes": [
{
"type": "string",
"key": "model",
"value": "${$.model}"
}
],
"timeseries": [
{
"type": "double",
"key": "temperature",
"value": "${$.temperature}"
}
]
}
}
(from https://thingsboard.io/docs/iot-gateway/getting-started/#step-81-basic-mapping-example)
That mapping then works to import data published like this:
mosquitto_pub -h localhost -p 1883 -t "sensors" -m '{"serialNumber":"SN-001", "model":"T1000", "temperature":36.6}'
I am hoping that it is also possible to import raw data, i.e. without JSON formatting, because I already have many topics with raw data payloads: just raw ASCII-encoded values, like this:
mosquitto_pub -h localhost -p 1883 -t "sensors/livingroom/temperature" -m '36.6'
Is that possible with the IoT Gateway, and if so, what would the configuration look like?
It is possible, but you will need to implement a new converter type. The one we have uses JSON. You can implement your own converter that accepts binary data. Your configuration would then look similar to this:
{
"topicFilter": "sensors",
"converter": {
"type": "binary",
/* whatever configuration structure is applicable to your use case */
}
}
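For what it's worth, here is an illustrative shell sketch of the mapping such a raw-payload converter would need to perform; this is not the gateway's actual converter interface, and the sensors/<device>/<key> topic layout is an assumption taken from the publish example in the question.
topic='sensors/livingroom/temperature'
payload='36.6'
device=$(echo "$topic" | cut -d/ -f2)   # second path segment -> livingroom
key=$(echo "$topic" | cut -d/ -f3)      # third path segment -> temperature
echo "device=$device, telemetry={\"$key\": $payload}"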
Is it true that the CKAN DataStore is able to deal with GeoJSON? I've not seen any reference in the documentation except for this link about the DataStore Map visualization, which says:
Shows data stored on the DataStore in an interactive map. It supports plotting markers from a pair of latitude / longitude fields or from a field containing a GeoJSON representation of the geometries.
Thus, I'm supposing GeoJSON is accepted in DataStore columns. However, I've not found any GeoJSON CKAN type, so, again, I'm guessing the simple JSON type must be used for this purpose.
Can anybody confirm this? Thanks!
EDIT 1
I've created a resource, a datastore, and a "recline_map_view" associated with the resource. Then, I've upserted a value, which is shown by this datastore_search operation:
$ curl -X POST "https://host:port/api/3/action/datastore_search" -d '{"resource_id":"14418d40-de42-4fdd-84f7-3c51244c7469"}' -H "Authorization: xxx" -k
{"help": "https://host:port/api/3/action/help_show?name=datastore_search", "success": true, "result": {"resource_id": "14418d40-de42-4fdd-84f7-3c51244c7469", "fields": [{"type": "int4", "id": "_id"}, {"type": "text", "id": "label"}, {"type": "json", "id": "geojson"}], "records": [{"_id": 1, "geojson": {"type": "Point", "coordinates": [48.856699999999996, 2.3508]}, "label": "Paris"}], "_links": {"start": "/api/3/action/datastore_search", "next": "/api/3/action/datastore_search?offset=100"}, "total": 1}}
Nevertheless, nothing is shown in CKAN :(
EDIT 2
It was a problem with my CKAN. I've tested Ifurini's solution at demo.ckan.org and it works.
GeoJSON is just a (particular kind of) JSON, so it does not get any special treatment as a database field.
So, you can create a resource with a GeoJSON field from a simple CSV file like this:
Name,Position
"Paris","{""type"":""Point"",""coordinates"":[2.3508,48.8567]}"
(note the double double quotes "" instead of just a single double quote ")
If you call the column "GeoJSON" (or "geojson", "gEoJsOn", etc., as capitalization is not important) the Map View will automatically use that field to mark the data in the map, instead of just letting you manually select which field to use.
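If you create the DataStore table through the API rather than via CSV upload, a hedged sketch (reusing the host, token placeholder, and resource id from the question) would declare the column with the plain json type, since there is no dedicated GeoJSON type:
curl -X POST "https://host:port/api/3/action/datastore_create" -H "Authorization: xxx" -k -d '{"resource_id": "14418d40-de42-4fdd-84f7-3c51244c7469", "fields": [{"id": "label", "type": "text"}, {"id": "geojson", "type": "json"}]}'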
I'm migrating a number of projects from one JIRA instance to another using the JSON importer. Although the importer can assign issues to existing sprints, the sprints themselves must already exist -- a limitation of the current version of the JIRA Importer.
We've been creating sprints by hand until now, but some of our projects have a large number of them, which makes the manual process both tedious and error-prone.
It does not appear that the JIRA REST API can create new sprints either -- although people talk about the greenhopper/1.0/sprint/create endpoint, it does not exist.
Is there, perhaps, some other way to create sprints programmatically? I have no problem obtaining the full list of them from the source JIRA instance; it is creating them in the target instance that does not seem possible...
Any hope? Can I INSERT new records into the AO_60DB71_SPRINT table with a SQL client? Thanks!
This can be done using the JIRA Agile API. See the JIRA Agile REST API Reference.
So, for example using curl:
## Request JIRA Sprint POST Create
curl -X "POST" "https://jira.foobar.com/rest/agile/1.0/sprint" \
-H 'Content-Type: application/json' \
-u 'myusername:mypassword' \
-d $'{
"startDate": "2018-04-23T00:00:00.000+01:00",
"name": "Cool Sprint",
"endDate": "2018-05-03T13:00:00.000+01:00",
"originBoardId": 1072
}'
The response would be:
{
"id": 1130,
"self": "https://jira.foobar.com/rest/agile/1.0/sprint/1130",
"state": "future",
"name": ""Cool Sprint",
"startDate": "2018-04-23T01:00:00.000+02:00",
"endDate": "2018-05-03T14:00:00.000+02:00",
"originBoardId": 1072
}
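Since you can already export the full list of sprints from the source instance, bulk creation is then just a loop over that endpoint. A minimal sketch, assuming a hypothetical file sprints.txt with one sprint name per line and the same placeholder credentials and board id as above:
while IFS= read -r name; do
  curl -X "POST" "https://jira.foobar.com/rest/agile/1.0/sprint" \
    -H 'Content-Type: application/json' \
    -u 'myusername:mypassword' \
    -d "{\"name\": \"$name\", \"originBoardId\": 1072}"
done < sprints.txt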
I'm writing a mapreduce query in Erlang for Riak, and I want to pass parameters into Riak using the HTTP API through curl on an Ubuntu terminal. The input to the query is a 2i query, but I want a parameter to allow further filtering. I thought options was the keyword, since that's the keyword that's always used on my team with the Python client, which is what I'll be using in production but which is inconvenient for proofing my Erlang.
This is what I'm trying:
curl -X POST http://riakhost:port/mapred -H 'Content-Type: application/json' -d '{
"inputs": {
"bucket":"mybucket",
"index":"field1_bin",
"key":"val3"
},
"options": "test",
"query": [
{"map": {"language": "erlang",
"module": "mapreduce",
"function":"map"
}}
]}'
On a three-record set I am seeing:
["none", "none", "none"]
But I want:
["test", "test", "test"]
What is the format for arguments?
I developed a set of configurable utility functions for Riak mapreduce in Erlang. As I wanted to be able to specify sets of criteria, I decided to allow the user to pass configuration in as a JSON document, as this works well for all client types, although other text representations should also work. Examples of how these functions are used from curl are available in the README.markdown file.
You can pass an argument to each individual map or reduce phase function through the 'arg' parameter. Whatever you specify here will be passed on as the final parameter to the map or reduce phase; see the example below:
"query":[{"map":{"language":"erlang","module":"riak_mapreduce_utils",
"function":"map_link","keep":false,
"arg":"{\"bucket\":\"master\"}"}},