I am not getting all the rows from my DataStax Astra DB where I have uploaded my data. I am using the Python http_methods for requesting data. Here is my code:
respond = astra_client.request(
    method=http_methods.GET,
    path=f"/api/rest/v2/keyspaces/{ASTRA_DB_KEYSPACE}/{astra_db_collection}/rows")
This only gets me 100 rows, while there are 150 rows in my table. How can I solve this?
The default page size for this endpoint is 100. To return more rows, set the page-size query parameter: .../rows?page-size=N.
If your page size is smaller than the total dataset you would like to return, you'll need to use paging. This is done with page-size=N&page-state=SOME_PAGE_STATE, where SOME_PAGE_STATE is the string returned in the pageState field of the response body.
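For example, a minimal paging loop could look like the sketch below. It assumes the client passes the path (including the query string) straight through and returns the parsed JSON body as a dict with data and pageState fields, as in the REST v2 rows response; the page size of 50 is arbitrary.

import urllib.parse

all_rows = []
page_state = None
while True:
    # Build the path, carrying the pageState from the previous response forward
    path = f"/api/rest/v2/keyspaces/{ASTRA_DB_KEYSPACE}/{astra_db_collection}/rows?page-size=50"
    if page_state:
        path += "&page-state=" + urllib.parse.quote(page_state)
    respond = astra_client.request(method=http_methods.GET, path=path)
    all_rows.extend(respond.get("data", []))
    page_state = respond.get("pageState")
    if not page_state:  # no more pages
        break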
I have a Dataverse table named my_sample_table.
Inside the table I have a column named my_sample_column of type integer whose max value should be returned. I am trying to achieve this by using the List rows action provided with Power Automate.
Is there a filter query that can be written in the Filter rows property, similar to what we use with SQL: max(columnname)?
Or any other query that can be included in the List rows action to return the same result?
I know that I can iterate through the column values to get the max value using an expression, or by sorting and taking the topmost one. But I was wondering whether there is a more direct approach.
I would try using a max aggregate for this column in a FetchXML query:
https://learn.microsoft.com/en-us/power-apps/developer/data-platform/use-fetchxml-aggregation#max
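A sketch of what that aggregate query could look like (the table and column names below are simply the ones from the question; the logical names in your environment may differ):

<fetch aggregate="true">
  <entity name="my_sample_table">
    <attribute name="my_sample_column" alias="max_value" aggregate="max" />
  </entity>
</fetch>

In the List rows action this would go into the Fetch Xml Query parameter, and the maximum comes back under the max_value alias.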
I'm writing batched data over the HTTP API like this:
demo,state=idle max=100.0,mean=20.0,event_type=0,probability=0.6,min=0.0 1529087114083
demo,state=idle max=100.0,mean=80.0,event_type=1,probability=0.6,min=0.0 1529087114083
demo,state=idle max=100.0,mean=20.0,event_type=2,probability=0.6,min=0.0 1529087114083
demo,state=idle max=100.0,mean=80.0,event_type=3,probability=0.6,min=0.0 1529087114083
The request returns 204, which is "OK" according to the Influx API docs.
Still, when I check my data in the admin UI with
SELECT median("mean") AS "median_mean", mean("mean") AS "mean_mean"
FROM "sfb"."autogen"."demo"
WHERE time > now() - 1h
GROUP BY :interval: FILL(null)
I get
Your query is syntactically correct but returned no results
Solved: I needed to specify the proper timestamp precision - otherwise the timestamps are interpreted incorrectly and the data does not show up in the queried time range.
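For reference, a minimal sketch of the write with an explicit precision (the host and port are assumptions; the database name "sfb" is taken from the query above; an InfluxDB 1.x /write endpoint is assumed):

import requests

# Timestamps like 1529087114083 are milliseconds, so tell InfluxDB precision=ms;
# without it they are read as nanoseconds and the points land back in 1970.
lines = "\n".join([
    "demo,state=idle max=100.0,mean=20.0,event_type=0,probability=0.6,min=0.0 1529087114083",
    "demo,state=idle max=100.0,mean=80.0,event_type=1,probability=0.6,min=0.0 1529087114083",
])
resp = requests.post(
    "http://localhost:8086/write",
    params={"db": "sfb", "precision": "ms"},
    data=lines.encode("utf-8"),
)
print(resp.status_code)  # 204 on success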
I am using InfluxDB and the line protocol to insert a large set of data into the database. The data I am getting is in the form of key-value pairs, where the key is a long string containing hierarchical data and the value is a simple integer.
Sample key-value data:
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/allocations
value = 500
/path/units/unit/subunits/subunit[name\='NAME2']/memory/chip/application/filter/allocations
value = 100
(Note: the name is NAME2)
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/free
value = 700
(Note: instead of allocations, the leaf is free)
/path/units/unit/subunits/subunit[name\='NAME2']/memory/graphics/application/filter/swap
value = 600
(Note: instead of chip, graphics is in the path)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/size
value = 400
(Note: a different path, but the same up to subunit)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free
value=100
(Note: the same path, but the last element is different)
Below is the line protocol I am using to insert the data:
interface,Key=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free valueData=500
I am using one measurement, namely interface, with one tag and one field set. But this DB design is causing issues when querying the data.
How can I design the database so that I can run queries like "get all records for the subunit where name = NAME1" or "get all size data for every hard disk"?
Thanks in advance.
The schema I'd recommend would be the following:
interface,filename=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free value=500
Where filename is a tag and value is the field.
Given that the cardinality of filename is in the thousands, this schema should work well.
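For example, with filename as a tag you can filter on parts of the path with a regular expression (a sketch only; the exact regex depends on how the escaped characters in the path end up being stored):

-- all records for the subunit named NAME1
SELECT * FROM "interface" WHERE "filename" =~ /name='NAME1'/
-- all "free" values under a harddisk
SELECT "value" FROM "interface" WHERE "filename" =~ /harddisk\/data\/free/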
I am creating paging and fetching 10 JSON records per page:
var coursemodel = query.Skip(skip).Take(take).ToList();
I need to display on the web page the total number of records available in the database. For example, "you are viewing 20 to 30 of x" (where x is the total number of records). Can x be found without transferring the records over the network?
Sorted it. I did this:
http://yourserver7:40479/odata/Courses?$top=1&$skip=1&$inlinecount=allpages
Got this
{
  "odata.metadata": "http://yourserver7:40479/odata/$metadata#Courses",
  "odata.count": "503",
  "value": [
    {
      "CourseID": 20, "Name": "Name 20", "Description": "Description 20", "Guid": "Guid 20"
    }
  ]
}
I then got the value from odata.count! My URL gets all records found; add $filter where applicable...
You can use the $count operator to return the total number of records, something along these lines:
http://services.odata.org/OData/OData.svc/Categories(1)/Products/$count
Not sure what the syntax would be with LINQ, but pretty sure it is possible.
PS - Always a good reference: http://www.odata.org/documentation/odata-version-2-0/uri-conventions/
Just try this:
context.EFCars.Where(c => c.Description == desc).Count();
Where EFCars is an entity set name.
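If query is an IQueryable backed by Entity Framework or an OData source (an assumption here), Count() is translated to a server-side count, so you can combine it with the paging from the question without pulling the rows over the network. A sketch using the names from the question:

// Only the page of rows plus a single integer cross the network.
var totalCount = query.Count();                          // server-side COUNT
var coursemodel = query.Skip(skip).Take(take).ToList();  // the current page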
I am following the YQL sample query
select * from local.search(500) where query="sushi" and location="san francisco, ca"
but I get at most 260 results instead of 500. I also tried using limit 500 after the where clause and different keywords; I always get a maximum of 260 results. How do you increase it?
The underlying API that the local.search table uses (Yahoo! Local Search Web Service) has restrictions on the number of results returned.
The results parameter (the number of results "per page") has a maximum value of 20.
The start parameter (the offset at which to start) has a maximum value of 250.
Since you ask for the first 500 results, YQL makes multiple queries against the Local Search API returning 20 results at a time. Therefore the start values are 1, 21, 41, ... 241. This brings back 260 results, as you have seen.
Since the YQL query asks for more results, the next start value is tried (261) which is beyond the allowed range so the underlying service returns an error (with the message "invalid value: start (261) must be between 1 and 250"). If you turn on "diagnostics" in the YQL console, you will see the "Bad Request" being returned.
Nothing you do to the query will bring back more results than the underlying service allows.
I figured it out; I was missing the paging offset argument, so starting from 0 works:
select * from local.search(0,500) where query="sushi" and location="san francisco, ca"