I'm trying to make a query in the YQL console. This one works fine:
select * from weather.forecast where woeid=1989965
But I want to get the values in the metric system (Celsius), so I use this query:
select * from weather.forecast where woeid=1989965 and unit='c'
I get a null result:
{
  "query": {
    "count": 0,
    "created": "2016-03-28T01:46:08Z",
    "lang": "ru",
    "results": null
  }
}
I could convert the values myself, but I hope I can make it work out of the box...
Today I discovered that u='c' works. So the answer to my own question is:
select * from weather.forecast where woeid=1989965 and u='c'
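For reference, the same query can also be sent to the public YQL REST endpoint rather than the console (a sketch; the q parameter has to be URL-encoded, and format=json asks for JSON output):
https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid=1989965 and u='c'&format=json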
I have a Druid timeseries query:
{
  "queryType": "timeseries",
  "dataSource": {
    "type": "union",
    "dataSources": [
      "ds1",
      "ds2"
    ]
  },
  "dimensions": ["dim1"],
  "aggregations": [
    {
      "name": "y1",
      "type": "doubleMax",
      "fieldName": "value1"
    }
  ],
  "granularity": {
    "period": "PT10S",
    "type": "period"
  },
  "postAggregations": [],
  "intervals": "2017-06-09T13:05:46.000Z/2017-06-09T13:06:46.000Z"
}
I want the query to also return the values of the dimensions, not just the aggregations, which currently come back like this:
{
  "timestamp": "2017-06-09T13:05:40.000Z",
  "result": {
    "y1": 28.724306106567383
  }
},
{
  "timestamp": "2017-06-09T13:05:50.000Z",
  "result": {
    "y1": 28.724306106567383
  }
},
How do I need to change the query? Thanks in advance!
If your requirement is to use a dimension column in a timeseries query, that means you are combining aggregated data with a non-aggregated column; that requirement leads to a topN or groupBy query instead.
The groupBy query is probably the most powerful query type Druid currently supports, but it also has the poorest performance, so for your purpose you can use a topN query instead.
The topN documentation and an example can be found here:
http://druid.io/docs/latest/querying/topnquery.html
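For illustration, here is a rough topN sketch reusing the names from your timeseries query; the threshold of 100 and the choice of y1 as the ordering metric are assumptions you would adjust:
{
  "queryType": "topN",
  "dataSource": {
    "type": "union",
    "dataSources": ["ds1", "ds2"]
  },
  "dimension": "dim1",
  "metric": "y1",
  "threshold": 100,
  "granularity": {
    "type": "period",
    "period": "PT10S"
  },
  "aggregations": [
    {
      "name": "y1",
      "type": "doubleMax",
      "fieldName": "value1"
    }
  ],
  "intervals": "2017-06-09T13:05:46.000Z/2017-06-09T13:06:46.000Z"
}
Each result row then carries the dimension value alongside the aggregated value.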
Does the timeseries query not support dimensions?
I tried it in my project but it is not working.
Here is the error:
TypeError: queryRep.dataSource(...).dimension is not a function
2|DSP-api | at dimensionData (/home/ec2-user/reports/dsp_reports/controllers/ReportController.js:228:22)
Let me know if anyone has a solution for this.
TY.
When using the Google Sheets API append method (in any language), the values to be appended are added after the last non-empty row.
So new values appear at the bottom of the sheet, as explained here:
https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append#InsertDataOption
How can I append values in a way that the new values appear at the top of the sheet?
You want to append values by inserting new rows. If my understanding is correct, how about this method? It seems that sheets.spreadsheets.values.append always appends values after the last row, so I would like to propose using sheets.spreadsheets.batchUpdate instead. The endpoint and request body are as follows. When you use this, please modify ### spreadsheet ID ###, "sheetId": 1234567890, and the parameters for range and values.
Endpoint:
POST https://sheets.googleapis.com/v4/spreadsheets/### spreadsheet ID ###:batchUpdate
Request body:
{
  "requests": [
    {
      "insertRange": {
        "range": {
          "sheetId": 1234567890,
          "startRowIndex": 0,
          "endRowIndex": 1
        },
        "shiftDimension": "ROWS"
      }
    },
    {
      "pasteData": {
        "data": "sample1, sample2, sample3",
        "type": "PASTE_NORMAL",
        "delimiter": ",",
        "coordinate": {
          "sheetId": 1234567890,
          "rowIndex": 0
        }
      }
    }
  ]
}
Flow of this request:
Insert a new row at row 1 using "insertRange".
Paste the values "sample1, sample2, sample3" into that row using "pasteData".
If the order of "insertRange" and "pasteData" is reversed, the values are first written over "A1:C1" and only afterwards is the new row inserted at row 1. So the elements of the "requests" array appear to run in order.
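If you are calling the API from a client library rather than raw HTTP, the same two requests can be sent through the batchUpdate method. Below is a rough sketch using the Python client (google-api-python-client with a service account); the key file path, the sheet ID 1234567890, and the spreadsheet ID are placeholders you would replace with your own values.
from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

# Placeholders: path to a service-account key file and the spreadsheet ID.
creds = Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/spreadsheets"])
service = build("sheets", "v4", credentials=creds)

body = {
    "requests": [
        {   # 1. Insert a blank row at the top of the sheet.
            "insertRange": {
                "range": {"sheetId": 1234567890, "startRowIndex": 0, "endRowIndex": 1},
                "shiftDimension": "ROWS"
            }
        },
        {   # 2. Paste the new values into that row.
            "pasteData": {
                "data": "sample1, sample2, sample3",
                "type": "PASTE_NORMAL",
                "delimiter": ",",
                "coordinate": {"sheetId": 1234567890, "rowIndex": 0}
            }
        }
    ]
}

service.spreadsheets().batchUpdate(
    spreadsheetId="### spreadsheet ID ###", body=body).execute()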
Reference:
sheets.spreadsheets.batchUpdate
If I misunderstand your question, I'm sorry.
I am trying to query the results of the parsed JSON, and if I cannot find a match I want to do something else.
[
  {
    "orderId": 136,
    "quantity": 5,
    "price": 3.75
  },
  {
    "orderId": 129,
    "quantity": 9,
    "price": 3.55
  },
  {
    "orderId": 113,
    "quantity": 11,
    "price": 3.75
  }
]
My code looks like this:
type OrdersProvider = JsonProvider<"Orders.json">
let orders = OrdersProvider.GetSamples()

let test id =
    let res = query {
        for i in orders do
        where (i.OrderId = id)
        select i
        headOrDefault
    }
    if isNull(res)
    then NOT_FOUND("")
    else OK(res.JsonValue.ToString())
)
However, I am getting the compiler error "JsonProvider<...>.Root does not have null as proper value", which kind of makes sense, except that I still want to catch the case when the id is not in the file. I guess I could change headOrDefault to head and trap the exception, but I wonder if there is something better.
Update #1:
Following one of the links in the comments, I was able to get away with:
if obj.ReferenceEquals(res,null)
then NOT_FOUND("")
else OK(res.JsonValue.ToString())
)
Update #2:
While the code above works, it still feels unnatural for the language. The accepted answer looks more natural.
I think the headOrDefault operation was designed for compatibility with LINQ to SQL, which is why it returns null in the default case - this is not something you'd normally want in well-behaved F# code, so using it the way your query does is not a good idea.
Fortunately, headOrDefault also works with the F# option type - if you return Some from your select clause, then headOrDefault returns None when the value is not available:
let res =
    query {
        for i in orders do
        where (i.OrderId = id)
        select (Some i)
        headOrDefault }
Now you can handle the missing case with pattern matching:
match res with
| None -> NOT_FOUND("")
| Some order -> OK(order.JsonValue.ToString())
I've set up a Druid cluster to ingest real-time data from Kafka.
Question
Does Druid support fetching data sorted by timestamp? For example, let's say I need to retrieve the latest 10 entries from a datasource X. Can I do this by using a LimitSpec (in the query JSON) that includes the timestamp field? Or is there another, better option that Druid supports?
Thanks in advance.
Get unaggregated rows
To get unaggregated rows, you can do a query with "queryType": "select".
Select queries are also useful when pagination is needed - they let you set a page size, and automatically return a paging identifier for use in future queries.
In this example, if we just want the top 10 rows, we can pass in "pagingSpec": { "pageIdentifiers": {}, "threshold": 10 }.
Order by timestamp
To order these rows by "timestamp", you can pass in "descending": "true".
Looks like most Druid query types support the descending property.
Example Query:
{
  "queryType": "select",
  "dataSource": "my_data_source",
  "granularity": "all",
  "intervals": [ "2017-01-01T00:00:00.000Z/2017-12-30T00:00:00.000Z" ],
  "descending": "true",
  "pagingSpec": { "pageIdentifiers": {}, "threshold": 10 }
}
Docs on "select" type queries
You can use a groupBy query to do this: group by __time with an extraction function, set the granularity to all, and use the limitSpec to sort and limit. That will work. If you want to use a timeseries query instead, getting the latest 10 is trickier: one way is to set the granularity to the desired one, say hour, and then set the interval to 10 hours starting from the most recent point in time. That is easier said than done, so I would go the first way unless you have a major performance issue. For example:
{
  "queryType": "groupBy",
  "dataSource": "wikiticker",
  "granularity": "all",
  "dimensions": [
    {
      "type": "extraction",
      "dimension": "__time",
      "outputName": "extract_time",
      "extractionFn": {
        "type": "timeFormat"
      }
    }
  ],
  "limitSpec": {
    "type": "default",
    "limit": 10,
    "columns": [
      {
        "dimension": "extract_time",
        "direction": "descending"
      }
    ]
  },
  "aggregations": [
    {
      "type": "count",
      "name": "$f2"
    },
    {
      "type": "longMax",
      "name": "$f3",
      "fieldName": "added"
    }
  ],
  "intervals": [
    "1900-01-01T00:00:00.000/3000-01-01T00:00:00.000"
  ]
}
I'm using ES 1.4, Rails 5, and Mongoid 6. I'm also using the mongoid-elasticsearch gem, which I don't think is relevant, but I'm including it in case I'm wrong.
I have a Case model. When I run this query, everything works great. Here's the query:
GET _search
{
  "query": {
    "filtered": {
      "query": {
        "query_string": {
          "query": "behemoth"
        }
      }
    }
  }
}
Here's a result; notice the organization_id:
hits": [
{
"_index": "cases",
"_type": "case",
"_id": "57d5e583a46100386987d7f4",
"_score": 0.13424811,
"_source": {
"basic info": {
"first name": "Joe",
"last name": "Smith",
"narrative": "behemoth"
},
"organization_id": {
"$oid": "57d4bc2fa461003841507f83"
},
"case_type_id": {
"$oid": "57d4bc7aa461002854f88441"
}
}
}
See how there's that "$oid" for organization id? That's because in my as_indexed_json method for Case, I have:
["organization_id"] = self.case_type.organization.id
I think that the filter doesn't work because Mongoid somehow adds that subkey $oid. So my next thought was, I'll just make it a string:
["organization_id"] = self.case_type.organization.id.to_s
But that throws an error:
{"error":"MapperParsingException[object mapping for [case] tried to parse as object, but got EOF, has a concrete value been provided to it?]","status":400}
Does anyone have any idea how to either A) use a Mongo id as a filter, or B) give ES the info it needs so it doesn't complain as above?
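For context, the filter I'm ultimately trying to get working looks roughly like this (just a sketch, assuming organization_id ends up indexed as a plain string; the id value is the one from the hit above):
GET _search
{
  "query": {
    "filtered": {
      "query": {
        "query_string": {
          "query": "behemoth"
        }
      },
      "filter": {
        "term": {
          "organization_id": "57d4bc2fa461003841507f83"
        }
      }
    }
  }
}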
Thanks for any help,
Kevin
It turns out that this was because there was already an existing index. When I nuked the index and re-indexed the data, it worked fine (the error message from ES is really unhelpful).
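In case it helps anyone else, the fix was just to delete the old index and let the data re-index; a rough sketch of the delete step, assuming the index is named cases as shown in the hit above:
DELETE cases
After re-indexing, storing organization_id as a plain string worked as expected.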