Can OData total records be found without fetching them?

I am creating paging and fetching 10 JSON records per page:
var coursemodel = query.Skip(skip).Take(take).ToList();
I need to display on the web page the total number of records available in the database. For example: "you are viewing 20 to 30 of x" (where x is the total number of records). Can x be found without transferring the records over the network?

Sorted it. Did this:
http://yourserver7:40479/odata/Courses?$top=1&$skip=1&$inlinecount=allpages
Got this:
{
"odata.metadata":"http://yourserver7:40479/odata/$metadata#Courses","odata.count":"503","value":[
{
"CourseID":20,"Name":"Name 20","Description":"Description 20","Guid":"Guid 20"
}
]
}
I then got the value from odata.count! Note that my URL counts all records; add a $filter where applicable so the count matches your filtered results.
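For reference, here is a minimal sketch of reading that count from C#, assuming Json.NET (Newtonsoft.Json) is available for parsing; the class name CourseCount is just for illustration, and the host and entity set are the ones from my URL above:

using System;
using System.Net.Http;
using Newtonsoft.Json.Linq;

class CourseCount
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // $top=1 keeps the payload tiny; $inlinecount=allpages adds odata.count to the response.
            string url = "http://yourserver7:40479/odata/Courses?$top=1&$inlinecount=allpages";
            string json = client.GetStringAsync(url).Result;

            JObject payload = JObject.Parse(json);
            int total = int.Parse((string)payload["odata.count"]);   // "503" in the sample response above

            Console.WriteLine("Total records: " + total);
        }
    }
}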

You can use the $count operator to return the total number of records, something along these lines:
http://services.odata.org/OData/OData.svc/Categories(1)/Products/$count
I'm not sure what the syntax would be with LINQ, but I'm pretty sure it is possible.
PS - Always a good reference: http://www.odata.org/documentation/odata-version-2-0/uri-conventions/
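For the LINQ side, here is a hedged sketch using the WCF Data Services client, assuming a service reference has generated a Container context and a Product entity type for the sample service above (details vary by client library version):

using System;
using System.Data.Services.Client;
using System.Linq;

class ProductCount
{
    static void Main()
    {
        var context = new Container(new Uri("http://services.odata.org/OData/OData.svc/"));

        // Count() on the client LINQ provider issues a request to .../Products/$count
        // and transfers only the number, not the rows.
        int total = context.Products.Count();
        Console.WriteLine("Total products: " + total);

        // Alternatively, fetch one page and ask for the inline count across all pages.
        var query = (DataServiceQuery<Product>)context.Products.Take(10);
        var response = (QueryOperationResponse<Product>)query.IncludeTotalCount().Execute();
        Console.WriteLine("Total across all pages: " + response.TotalCount);
    }
}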

Just try this:
context.EFCars.Where(c => c.Description == desc).Count();
Where EFCars is an entity set name.

Related

ActiveRecord: Alternative to find_in_batches?

I have a query that loads thousands of objects and I want to tame it by using find_in_batches:
Car.includes(:member).where(:engine => "123").find_in_batches(batch_size: 500) ...
According to the docs, I can't have a custom sorting order: http://www.rubydoc.info/docs/rails/4.0.0/ActiveRecord/Batches:find_in_batches
However, I need a custom sort order of created_at DESC. Is there another method that runs this query in chunks, like find_in_batches does, so that not so many objects live on the heap at once?
Hm, I've been thinking about a solution for this (I'm the person who asked the question). It makes sense that find_in_batches doesn't allow a custom order: let's say you sort by created_at DESC and specify a batch_size of 500. The first loop covers rows 1-500, the second loop covers 501-1000, and so on. What if someone inserts a new record into the table before the second loop runs? It would land at the top of the query results, everything would shift by one, and the second loop would repeat a record from the first.
You could argue that created_at ASC would be safe then, but that's not guaranteed if your app sets created_at explicitly.
UPDATE:
I wrote a gem for this problem: https://github.com/EdmundMai/batched_query
Since using it, my application's average memory usage has halved. I highly suggest anyone having similar issues check it out! And contribute if you want!
The slower, manual way is to do something like this:
count = Car.includes(:member).where(:engine => "123").count
batches = count / 500
batches += 1 if count % 500 > 0
last_id = 0
while batches > 0
  # pluck just the ids for this batch
  ids = Car.includes(:member).where("engine = ? AND id > ?", "123", last_id).order(created_at: :desc).limit(500).ids
  cars = Car.find(ids)
  # cars.each or cars.update_all
  # do your updating
  last_id = ids.last
  batches -= 1
end
Can you imagine how find_in_batches with sorting would work on 1M rows or more? It would sort all rows for every batch.
So I think it is better to reduce the number of sort calls. For example, with a batch size of 500 you can load only the IDs (with the sorting applied) for N * 500 rows at once, and then load the actual objects in batches of 500 by those IDs. That way the number of sorted queries hitting the database drops by a factor of N.

Range of IDs via ActiveRecord

I understand that I can use People.first(100) to retrieve the first 100 records; the same goes for People.last(100).
What I don't know is how to retrieve all objects in the range 200-400 when the total is, let's say, 1000 records.
What you need is limit and offset - read this for more info.
Example:
People.limit(200).offset(200)
The above code takes 200 records starting from the 201st record, which means it returns records 201-400.
Are you searching on a specific field? Your title suggests you're searching on id:
People.where('id BETWEEN ? AND ?', 200, 400)
or...
People.where(id: 200..400)
If you're not searching on a particular field, you would want to use Big_Bird's limit and offset methods.

Select certain number of records for batch processing

Hi, is it possible using Entity Framework and/or LINQ to select a certain number of rows? For example, I want to select rows 0-500000 and assign these records to the List<Numbers> VariableAList object, then select rows 500001-1000000 and assign those to the List<Numbers> VariableBList object, etc.
Where the Numbers object is something like ID, Number, DateCreated, DateAssigned, etc.
Sounds like you're looking for the .Take(int) and .Skip(int) methods:
using (YourEntities db = new YourEntities())
{
    var VariableAList = db.Numbers
        .Take(500000);

    var VariableBList = db.Numbers
        .Skip(500000)
        .Take(500000);
}
You may want to be wary of the size of these lists in memory.
Note: you also may need an .OrderBy clause prior to using .Skip or .Take; LINQ to Entities only supports Skip on sorted input, so an unordered Skip will throw.
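As a rough sketch under those caveats (YourEntities, Numbers and its ID property are just the placeholder names from the question), batching with a stable order looks something like this:

using System.Collections.Generic;
using System.Linq;

int pageSize = 500000;
var batches = new List<List<Numbers>>();

using (YourEntities db = new YourEntities())
{
    int page = 0;
    while (true)
    {
        // OrderBy gives Skip/Take a deterministic order; LINQ to Entities requires sorted input for Skip.
        List<Numbers> batch = db.Numbers
            .OrderBy(n => n.ID)
            .Skip(page * pageSize)
            .Take(pageSize)
            .ToList();

        if (batch.Count == 0)
            break;

        batches.Add(batch);   // batches[0] is VariableAList, batches[1] is VariableBList, and so on
        page++;
    }
}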

Search records having comma seperated values that contains any element from the given list

I have a domain class Schedule with a property 'days' holding comma-separated values like '2,5,6,8,9'.
class Schedule {
String days
...
}
Schedule schedule1 = new Schedule(days :'2,5,6,8,9')
schedule1.save()
Schedule schedule2 = new Schedule(days :'1,5,9,13')
schedule2.save()
I need to get the list of the schedules having any day from the given list, say [2,8,11].
Output: [schedule1]
How do I write the criteria query or HQL for this? We can prefix and suffix the days with a comma, like ',2,5,6,8,9,', if that helps.
Thanks,
I hope you have a good reason for such denormalization; otherwise it would be better to save the list to a child table.
Otherwise, querying gets complicated; you need an or across all the per-day conditions so a schedule matches if any day matches, something like:
def days = [2, 8, 11]
// note: check for empty days first
Schedule.withCriteria {
    or {
        days.each { day ->
            eq('days', day.toString())   // the string is exactly this one day
            like('days', "$day,%")       // starts with day
            like('days', "%,$day,%")     // day in the middle
            like('days', "%,$day")       // ends with day
        }
    }
}
In MySQL there is a SET datatype and a FIND_IN_SET function, but I've never used them with Grails. Some databases support the standard SQL:2003 ARRAY datatype for storing arrays in a field; it's possible to map them using Hibernate user types (which are supported in Grails).
If you are using MySQL, a FIND_IN_SET query should work with the Criteria API's sqlRestriction:
http://grails.org/doc/latest/api/grails/orm/HibernateCriteriaBuilder.html#sqlRestriction(java.lang.String)
Using SET + FIND_IN_SET makes the queries a bit more efficient than LIKE queries, if you care about performance and have a real requirement for denormalization.

Getting the Item Count of a large sharepoint list in fastest way

I am trying to get the count of the items in a SharePoint document library programmatically. The scale I am working with is 30-70000 items. We have a user control in a SmartPart to display the count. Ours is a team site.
This is the code to get the total count:
SPList VoulnterrList = web.Lists[ListTitle];
SPQuery query = new SPQuery();
query.ViewAttributes = "Scope=\"Recursive\"";
string queries = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>";
query.Query = queries;
SPListItemCollection lstitemcollAssoID = VoulnterrList.GetItems(query);
lblCount.Text = "Total Proofs: " + VoulnterrList.Items.Count.ToString() + " Pending Proofs: " + lstitemcollAssoID.Count.ToString();
The problem is that this has a serious performance issue: it takes 75 to 80 seconds to load the page. If we comment it out, page load drops to 4 seconds. Is there a better approach to this problem?
Ours is SharePoint 2007.
Use VoulnterrList.ItemCount instead of VoulnterrList.Items.Count.
When List.Items is used, all items in the list are loaded from the content database. Since we don't actually need the items to get the count, this is wasted overhead.
This will fix the performance of the VoulnterrList.Items.Count call, but you may still have issues with the GetItems(query) call, depending on the number of results returned by the query.
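A minimal sketch of that change, reusing the variable names from the question:

SPList VoulnterrList = web.Lists[ListTitle];

// ItemCount comes from list metadata, so no items are loaded from the content database.
lblCount.Text = "Total Proofs: " + VoulnterrList.ItemCount.ToString();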
You can do two optimizations here:
Create an index on one of the columns of your list.
Use that column in the <ViewFields> section of your CAML query so that only the indexed column is retrieved (see the sketch after the link below).
This should speed things up. See this article on how to create an index on a column:
http://sharepoint.microsoft.com/Blogs/GetThePoint/Lists/Posts/Post.aspx?ID=162
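Putting the two suggestions together, a hedged sketch of the adjusted pending-count query, assuming ApprovalStatus has been indexed as described in the article:

SPQuery query = new SPQuery();
query.ViewAttributes = "Scope=\"Recursive\"";
// Limit the view to the indexed column so only that field is retrieved per item.
query.ViewFields = "<FieldRef Name='ApprovalStatus' />";
query.Query = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>";

SPListItemCollection pendingItems = VoulnterrList.GetItems(query);
lblCount.Text += " Pending Proofs: " + pendingItems.Count.ToString();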
