Parsing Quoted Strings and DateTime Offset - GROK and Logstash - docker

With Grok Debugger I am trying to parse some custom data:
1 1 "Device 1" 1 "Input 1" 0 "On" "Off" "2020-01-01T00:00:00.1124303+00:00"
So far I have:
%{INT:id} %{INT:device} %{QUOTEDSTRING:device_name} %{INT:input}
%{QUOTEDSTRING:input_name} %{INT:state} %{QUOTEDSTRING:on_phrase}
%{QUOTEDSTRING:off_phrase} \"%{TIMESTAMP_ISO8601:when}\"
However, I am getting things like double quotes around strings (%{QUOTEDSTRING}), and two lots of hours and minutes with the time and date (%{TIMESTAMP_ISO8601:when}):
{
"id": [
[
"1"
]
],
"device": [
[
"1"
]
],
"device_name": [
[
""Device 1""
]
],
"input": [
[
"1"
]
],
"input_name": [
[
""Input 1""
]
],
"state": [
[
"0"
]
],
"on_phrase": [
[
""On""
]
],
"off_phrase": [
[
""Off""
]
],
"when": [
[
"2020-01-01T00:00:00.1124303+00:00"
]
],
"YEAR": [
[
"2020"
]
],
"MONTHNUM": [
[
"01"
]
],
"MONTHDAY": [
[
"01"
]
],
"HOUR": [
[
"00",
"00"
]
],
"MINUTE": [
[
"00",
"00"
]
],
"SECOND": [
[
"00.1124303"
]
],
"ISO8601_TIMEZONE": [
[
"+00:00"
]
]
}
Also, I am a little stuck when it comes to the logstash.conf as I am not sure what I would put as the index in the output. The following code is from a previous example from github:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    manage_template => false
    index => "sample-%{+YYYY.MM.dd}"
  }
}
I'm guessing mine would look something like this:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{INT:id} %{INT:device} %{QUOTEDSTRING:device_name} %{INT:input} %{QUOTEDSTRING:input_name} %{INT:state} %{QUOTEDSTRING:on_phrase} %{QUOTEDSTRING:off_phrase} \"%{TIMESTAMP_ISO8601:when}\"" }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    manage_template => false
    index => "sample-%{????????}"
  }
}
Again I'm unclear as to what I am supposed to do with "sample-%{????????}"

In regard to the double-double-quotes: just use DATA instead of QUOTEDSTRING:
"%{DATA:device_name}"
Duplicated entries in the hours and minutes come from the timezone: first entry is the actual hour, the second one is the hour of the timezone. Same for the minutes.
To get rid of it you would need a custom pattern:
"(?<when>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?(?<ISO8601_TIMEZONE>Z|[+-](?:2[0123]|[01]?[0-9])(?::?(?:[0-5][0-9])))?)"
(if you are not interested in parsing the timestamp at all, just use DATA again).
So, your pattern might look like this:
%{INT:id} %{INT:device} "%{DATA:device_name}" %{INT:input} "%{DATA:input_name}" %{INT:state} "%{DATA:on_phrase}" "%{DATA:off_phrase}" "(?<when>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?(?<ISO8601_TIMEZONE>Z|[+-](?:2[0123]|[01]?[0-9])(?::?(?:[0-5][0-9])))?)"
Regarding index:
you can omit it completely, then the default one is used: logstash-%{+YYYY.MM.dd}
you can use sample-%{+YYYY.MM.dd} if you want to have separate indexes for each day
you can use sample- to have just one index
you can use any other combination of the fields in your index pattern
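The difference between the two patterns can be reproduced outside the Grok Debugger. The following Ruby sketch is an added example, not part of the original question or answer: `PATTERNS` holds simplified copies of the stock grok definitions (an assumption, with QUOTEDSTRING reduced to the plain double-quoted form), and `expand` and `grok_captures` are hypothetical helpers that turn every `%{NAME}` reference into a named group, the way the debugger lists every sub-pattern.

```ruby
# Simplified copies of the stock grok definitions used on this page (assumption:
# QUOTEDSTRING is reduced to the plain double-quoted form, without escapes).
PATTERNS = {
  'INT'              => '(?:[+-]?[0-9]+)',
  'DATA'             => '.*?',
  'QUOTEDSTRING'     => '"[^"]*"',
  'YEAR'             => '(?:\d\d){1,2}',
  'MONTHNUM'         => '(?:0?[1-9]|1[0-2])',
  'MONTHDAY'         => '(?:0[1-9]|[12][0-9]|3[01]|[1-9])',
  'HOUR'             => '(?:2[0123]|[01]?[0-9])',
  'MINUTE'           => '(?:[0-5][0-9])',
  'SECOND'           => '(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)',
  'ISO8601_TIMEZONE' => '(?:Z|[+-]%{HOUR}(?::?%{MINUTE}))',
  'TIMESTAMP_ISO8601' =>
    '%{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?'
}.freeze

# The sample line and the two patterns, copied from the question and the answer.
LINE = '1 1 "Device 1" 1 "Input 1" 0 "On" "Off" "2020-01-01T00:00:00.1124303+00:00"'
QUESTION_PATTERN =
  '%{INT:id} %{INT:device} %{QUOTEDSTRING:device_name} %{INT:input} ' +
  '%{QUOTEDSTRING:input_name} %{INT:state} %{QUOTEDSTRING:on_phrase} ' +
  '%{QUOTEDSTRING:off_phrase} "%{TIMESTAMP_ISO8601:when}"'
ANSWER_PATTERN =
  '%{INT:id} %{INT:device} "%{DATA:device_name}" %{INT:input} ' +
  '"%{DATA:input_name}" %{INT:state} "%{DATA:on_phrase}" "%{DATA:off_phrase}" ' +
  '"(?<when>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?' +
  '(?<ISO8601_TIMEZONE>Z|[+-](?:2[0123]|[01]?[0-9])(?::?(?:[0-5][0-9])))?)"'

# Recursively replace %{NAME} / %{NAME:field} with a named group. A reference
# without a field name keeps the pattern name, so nested HOUR/MINUTE show up.
def expand(grok)
  grok.gsub(/%\{(\w+)(?::(\w+))?\}/) do
    name = Regexp.last_match(1)
    field = Regexp.last_match(2)
    "(?<#{field || name}>#{expand(PATTERNS.fetch(name))})"
  end
end

# Return { capture name => [every value captured under that name] }, or nil.
def grok_captures(pattern, line)
  re = Regexp.new(expand(pattern))
  m = re.match(line)
  return nil unless m
  re.named_captures.to_h { |name, groups| [name, groups.map { |i| m[i] }.compact] }
end

[QUESTION_PATTERN, ANSWER_PATTERN].each do |pattern|
  shown = grok_captures(pattern, LINE).slice('device_name', 'when', 'HOUR', 'MINUTE')
  puts shown.inspect
end
```

In Logstash itself the grok filter's `named_captures_only` option defaults to true, so only captures given a field name are stored on the event.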

Related

HighCharts not displaying series data

I have a timeseries data which I am trying to display with Highstocks:
Here is the data:
{
"title": {
"text": "My Graph"
},
"series": [
[
{
"name": "Future Index Longs",
"data": [
[
"2019-02-05",
104516
],
[
"2019-02-06",
127260
],
[
"2019-02-07",
156291
],
[
"2019-02-08",
167567
]
]
}
],
[
{
"name": "Future Index Longs",
"data": [
[
"2019-02-05",
21
],
[
"2019-02-06",
0
],
[
"2019-02-07",
1263
],
[
"2019-02-08",
12
]
]
}
],
[
{
"name": "Future Index Longs",
"data": [
[
"2019-02-05",
33873
],
[
"2019-02-06",
61093
],
[
"2019-02-07",
43125
],
[
"2019-02-08",
41928
]
]
}
],
[
{
"name": "Future Index Longs",
"data": [
[
"2019-02-05",
47542
],
[
"2019-02-06",
55084
],
[
"2019-02-07",
75256
],
[
"2019-02-08",
77786
]
]
}
],
[
{
"name": "Future Index Longs",
"data": [
[
"2019-02-05",
185952
],
[
"2019-02-06",
243437
],
[
"2019-02-07",
275935
],
[
"2019-02-08",
287293
]
]
}
]
]
}
The graph is empty and no data is displayed. What am I doing wrong?
Sorry to add this filler here but I am required to add more text to post this question and since this is a pretty simple question, I don't have much to add.
You have the wrong format on your series, it should be an array of objects.
Like this: series: [{ ... }, { ... }]
Check this fiddle: https://jsfiddle.net/wg1vnyzp/1/
To have a chart with datetime axes in Highcharts you have to pass the X value as the timestamp in milliseconds since 1970.
Highstock example:
https://jsfiddle.net/BlackLabel/f0rsz6cd/1/
Note that in Highcharts you have to define xAxis.type as datetime like that:
xAxis: {
type: 'datetime'
}
Highcharts demo:
https://jsfiddle.net/BlackLabel/kas2oywp/
API reference:
https://api.highcharts.com/highcharts/series.line.data.x
https://api.highcharts.com/highcharts/xAxis.type

Only able to parse SOME of JSON data, syntax issue?

I'm having an issue parsing some of the JSON data being retrieved from Oxford Dictionaries into my application.
I am printing the JSON response into my console which confirms that I am successfully getting the data needed.
Verification that JSON data is being retrieved:
There are two main folders initially when I get the JSON data back. "results", which contains the information I need: the word definition and other information
and "metadata".
JSON format that I am getting back from Oxford Dictionary:
I am able to parse the information contained in the "metadata" folder, and have printed it into the console to confirm.
Verification that I am able to parse and print the metadata:
I however, can't seem to parse any of the data contained in the 'Results' folder, which is what I actually need. The definition, the language, the word ID, et cetera.
What am I doing wrong here??
Example of issue with parsing "results" data:
Another example of faulty attempt to parse 'results':
Plain text copy of my JSON response:
{
"metadata": {
"provider": "Oxford University Press"
},
"results": [
{
"id": "ace",
"language": "en",
"lexicalEntries": [
{
"entries": [
{
"etymologies": [
"Middle English (denoting the ‘one’ on dice): via Old French from Latin as ‘unity, a unit’"
],
"grammaticalFeatures": [
{
"text": "Singular",
"type": "Number"
}
],
"homographNumber": "100",
"senses": [
{
"definitions": [
"a playing card with a single spot on it, ranked as the highest card in its suit in most card games"
],
"domains": [
"Cards"
],
"examples": [
{
"registers": [
"figurative"
],
"text": "life had started dealing him aces again"
},
{
"text": "the ace of diamonds"
}
],
"id": "m_en_gbus0005680.006",
"short_definitions": [
"playing card with single spot on it, ranked as highest card in its suit in most card games"
]
},
{
"definitions": [
"a person who excels at a particular sport or other activity"
],
"domains": [
"Sport"
],
"examples": [
{
"text": "a motorcycle ace"
}
],
"id": "m_en_gbus0005680.010",
"registers": [
"informal"
],
"short_definitions": [
"person who excels at particular sport or other activity"
],
"subsenses": [
{
"definitions": [
"a pilot who has shot down many enemy aircraft"
],
"domains": [
"Air Force"
],
"examples": [
{
"text": "a Battle of Britain ace"
}
],
"id": "m_en_gbus0005680.011",
"short_definitions": [
"pilot who has shot down many enemy aircraft"
]
}
],
"thesaurusLinks": [
{
"entry_id": "ace",
"sense_id": "t_en_gb0000173.001"
}
]
},
{
"definitions": [
"(in tennis and similar games) a service that an opponent is unable to return and thus wins a point"
],
"domains": [
"Tennis"
],
"examples": [
{
"text": "Nadal banged down eight aces in the set"
}
],
"id": "m_en_gbus0005680.013",
"short_definitions": [
"(in tennis and similar games) service that opponent is unable to return and thus wins point"
],
"subsenses": [
{
"definitions": [
"a hole in one"
],
"domains": [
"Golf"
],
"examples": [
{
"text": "his hole in one at the 15th was Senior's second ace as a professional"
}
],
"id": "m_en_gbus0005680.014",
"registers": [
"informal"
],
"short_definitions": [
"hole in one"
]
}
]
}
]
},
{
"etymologies": [
"early 21st century: abbreviation of asexual, with alteration of spelling on the model of ace"
],
"grammaticalFeatures": [
{
"text": "Singular",
"type": "Number"
}
],
"homographNumber": "200",
"senses": [
{
"definitions": [
"a person who has no sexual feelings or desires"
],
"domains": [
"Sex"
],
"examples": [
{
"text": "both asexual, they have managed to connect with other aces offline"
}
],
"id": "m_en_gbus1190638.004",
"short_definitions": [
"asexual person"
]
}
]
}
],
"language": "en",
"lexicalCategory": "Noun",
"pronunciations": [
{
"audioFile": "http://audio.oxforddictionaries.com/en/mp3/ace_1_gb_1_abbr.mp3",
"dialects": [
"British English"
],
"phoneticNotation": "IPA",
"phoneticSpelling": "eɪs"
}
],
"text": "ace"
},
{
"entries": [
{
"grammaticalFeatures": [
{
"text": "Positive",
"type": "Degree"
}
],
"homographNumber": "101",
"senses": [
{
"definitions": [
"very good"
],
"examples": [
{
"text": "Ace! You've done it!"
},
{
"text": "an ace swimmer"
}
],
"id": "m_en_gbus0005680.016",
"registers": [
"informal"
],
"short_definitions": [
"very good"
],
"thesaurusLinks": [
{
"entry_id": "ace",
"sense_id": "t_en_gb0000173.002"
}
]
}
]
},
{
"grammaticalFeatures": [
{
"text": "Positive",
"type": "Degree"
}
],
"homographNumber": "201",
"senses": [
{
"definitions": [
"(of a person) having no sexual feelings or desires; asexual"
],
"domains": [
"Sex"
],
"examples": [
{
"text": "I didn't realize that I was ace for a long time"
}
],
"id": "m_en_gbus1190638.006",
"short_definitions": [
"asexual"
]
}
]
}
],
"language": "en",
"lexicalCategory": "Adjective",
"pronunciations": [
{
"audioFile": "http://audio.oxforddictionaries.com/en/mp3/ace_1_gb_1_abbr.mp3",
"dialects": [
"British English"
],
"phoneticNotation": "IPA",
"phoneticSpelling": "eɪs"
}
],
"text": "ace"
},
{
"entries": [
{
"grammaticalFeatures": [
{
"text": "Transitive",
"type": "Subcategorization"
},
{
"text": "Present",
"type": "Tense"
}
],
"homographNumber": "102",
"senses": [
{
"definitions": [
"(in tennis and similar games) serve an ace against (an opponent)"
],
"domains": [
"Tennis"
],
"examples": [
{
"text": "he can ace opponents with serves of no more than 62 mph"
}
],
"id": "m_en_gbus0005680.020",
"registers": [
"informal"
],
"short_definitions": [
"(in tennis and similar games) serve ace against"
],
"subsenses": [
{
"definitions": [
"score an ace on (a hole) or with (a shot)"
],
"domains": [
"Golf"
],
"examples": [
{
"text": "there was a prize for the first player to ace the hole"
}
],
"id": "m_en_gbus0005680.026",
"short_definitions": [
"score ace on hole or with"
]
}
]
},
{
"definitions": [
"achieve high marks in (a test or exam)"
],
"examples": [
{
"text": "I aced my grammar test"
}
],
"id": "m_en_gbus0005680.028",
"regions": [
"North American"
],
"registers": [
"informal"
],
"short_definitions": [
"achieve high marks in"
],
"subsenses": [
{
"definitions": [
"outdo someone in a competitive situation"
],
"examples": [
{
"text": "the magazine won an award, acing out its rivals"
}
],
"id": "m_en_gbus0005680.029",
"notes": [
{
"text": "\"ace someone out\"",
"type": "wordFormNote"
}
],
"short_definitions": [
"outdo someone in competitive situation"
]
}
]
}
]
}
],
"language": "en",
"lexicalCategory": "Verb",
"pronunciations": [
{
"audioFile": "http://audio.oxforddictionaries.com/en/mp3/ace_1_gb_1_abbr.mp3",
"dialects": [
"British English"
],
"phoneticNotation": "IPA",
"phoneticSpelling": "eɪs"
}
],
"text": "ace"
}
],
"type": "headword",
"word": "ace"
}
]
}
Retrieve results like this:
let results = json["results"][0]["id"]
Or
if let resultObj = json["results"].first {
let id = resultObj["id"]
}
id key is in the dictionary which is at zero index of results array.

groupBy not grouping by a tag created with eval after a join

Full details of your operating system (or distribution) e.g. 64-bit Ubuntu 14.04.
Running InfluxDB/Kapacitor/Chronograf as Docker containers on MacOSX, latest Docker.
The version of Kapacitor you are running
Latest, 1.4.
Whether you installed it using a pre-built package, or built it from source.
Official Docker container
We are running into an issue with TICKscript and its groupBy behaviour.
We have two sets of measurements, indoor_temperatures and outdoor_temperatures, which we query with a batch.
The queries look as follows:
var out_temp = batch
|query('SELECT mean(temperature) FROM yyyy')
.every(10s)
.period(120d)
.groupBy(time(1h))
.fill(0)
var in_temp = batch
|query('SELECT mean(temperature) FROM xxxx')
.every(10s)
.period(120d)
.groupBy(time(1h))
.fill(0)
If we HTTP out both of them, they create the following sets of data:
{
"series": [
{
"name": "outdoor_temperatures",
"columns": [
"time",
"mean"
],
"values": [
[
"2017-09-20T17:00:00Z",
0
],
[
"2017-09-20T18:00:00Z",
11.5
]
... the rest
]
}
]
}
{
"series": [
{
"name": "indoor_measurements",
"columns": [
"time",
"mean"
],
"values": [
[
"2017-09-20T17:00:00Z",
585.44012944984
],
[
"2017-09-20T18:00:00Z",
592.94890510949
]
... the rest
]
}
]
}
Now we do a full join of them, which gives us expected results
out_temp
|join(in_temp)
.as('out_temp_mean', 'in_temp_mean')
.tolerance(5m)
.fill(0)
httpOut:
{
"series": [
{
"name": "outdoor_temperatures",
"columns": [
"time",
"in_temp_mean.mean",
"out_temp_mean.mean"
],
"values": [
[
"2017-09-20T17:00:00Z",
586.10175438596,
0
],
[
"2017-09-20T18:00:00Z",
592.94890510949,
11.5
]
... the rest
]
}
]
}
Which looks perfect. The issue arises when we want to round the out_temp_mean.mean down and groupBy it.
So we go ahead and extend the script
out_temp
|join(in_temp)
.as('out_temp_mean', 'in_temp_mean')
.tolerance(5m)
.fill(0)
|eval(lambda: string(floor("out_temp_mean.mean")))
.as('bucket')
.tags('bucket')
.keep('out_temp_mean.mean', 'in_temp_mean.mean')
After which the output STILL looks as it should:
{
"series": [
{
"name": "outdoor_temperatures",
"columns": [
"time",
"in_temp_mean.mean",
"out_temp_mean.mean",
"bucket"
],
"values": [
[
"2017-09-20T17:00:00Z",
586.99190283401,
0,
"0"
],
[
"2017-09-20T18:00:00Z",
592.94890510949,
11.5,
"11"
]
]
}
]
}
Now the only thing left is to group the values by the new tag bucket:
out_temp
|join(in_temp)
.as('out_temp_mean', 'in_temp_mean')
.tolerance(5m)
.fill(0)
|eval(lambda: string(floor("out_temp_mean.mean")))
.as('bucket')
.tags('bucket')
.keep('out_temp_mean.mean', 'in_temp_mean.mean')
|groupBy('bucket')
After which everything goes awry and we are greeted with series: null
{
"series": null
}
Is this expected behaviour? A bug? Or something else?
Also filed this as https://github.com/influxdata/kapacitor/issues/1765 if someone wonders.

$ doesn't work with JsonPath

I post a JSON with RestAssured and afterwards I need to verify that all fields are stored in the database with the correct values. My JSON is:
{
"id": "1",
"name": "name1",
"description": "description1",
"source": "source1",
"target": "target1",
"domain": "PM",
"transformation_rules": [
{
"name": "name2",
"filters": [
{
"object": "object1",
"pattern": "pattern1"
}
],
"operations": [
{
"pattern": "pattern2",
"replacement": "replacement1"
}
]
},
{
"name": "name3",
"filters": [
{
"object": "object2",
"pattern": "pattern2"
}
],
"operations": [
{
"pattern": "pattern3",
"replacement": "replacement2"
},
{
"pattern": "pattern3",
"replacement": "replacement3"
},
{
"pattern": "pattern4",
"replacement": "replacement4"
}
]
}
],
"conflict_policy": "ACCEPT_SOURCE"
}
So, I have :
responseGet = RestAssured.given().contentType(ContentType.JSON).when().get(urlApi + "/" + id);
My first verification is :
responseGet.then().body("$[0]['id']", equalTo("1"));
to verify that the field "id" equals 1. It doesn't execute well, so I changed it to:
responseGet.then().body("$.id", equalTo("1"));
and the same result ---> fails
Please, can you give me your suggestions for testing all the Json ?
Just for information, I try to apply : https://github.com/json-path/JsonPath.
Thank you very much in Advance,
Best Regards,
You can directly use jsonPath() for checking this:
For example:
responseGet.body().jsonPath().getString("id").equals("1");
For reading JsonPath

How do I create long/big Dictionaries in Swift? Error: Expression was too complex to be solved in reasonable time

So I have this piece of code (fieldKey is a String)
var request = [
"size": 0,
"aggs": [
fieldKey : [
"global": [],
"aggs": [
"global": [
"aggs": [
"facet": [
"nested": [
"path": "tags"
],
"aggs": [
"bar": [
"filter": [
"match": [
"tags.name": fieldKey
]
]
],
"aggs": [
"filtered": [
"terms": [
"field": "tags.name"
],
"aggs": [
"values": [
"terms": [
"field": "tags.value.raw",
"min_doc_count": 1
]
]
]
]
]
]
]
]
]
]
]
]
]
I'm trying to create a JSON request for an Elasticsearch server.
I get the "Expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions" error.
This is when I tried to do this.
var request = [
"size": 0,
"aggs": [String : AnyObject]()
]
request["aggs"]![fieldKey] = [
fieldKey : [
"global": [],
"aggs": [
"global": [
"aggs": [
"facet": [
"nested": [
"path": "tags"
],
"aggs": [
"bar": [
"filter": [
"match": [
"tags.name": fieldKey
]
]
],
"aggs": [
"filtered": [
"terms": [
"field": "tags.name"
],
"aggs": [
"values": [
"terms": [
"field": "tags.value.raw",
"min_doc_count": 1
]
]
]
]
]
]
]
]
]
]
]
]
But now I get the "Cannot assign to immutable expression of type "AnyObject?!"" error, but I clearly used the "var" when creating the request? Does anyone know how to solve this? Is there any better way of creating such long Dictionaries/JSON files? Thanks
