I have a current implementation of will_paginate that uses the paginate_by_sql method to build the collection to be paginated. We have a custom query for total_entries that's very complicated and puts a large load on our DB. Therefore we would like to cut total_entries from the pagination altogether.
In other words, instead of the typical pagination display of 'previous 1 [2] 3 4 5 next', we would simply like a 'next - previous' button only. But we need to know a few things.
Do we display the previous link? This would only be shown, of course, if records exist prior to the ones displayed in the current selection
Do we display the next link? This would not be shown if the last record in the collection is currently being displayed
From the docs:

A query for counting rows will automatically be generated if you don't supply :total_entries. If you experience problems with this generated SQL, you might want to perform the count manually in your application.
So ultimately the ideal situation is the following.
Remove the total_entries count because it's causing too much load on the database
Display 50 records at a time with semi-pagination, navigating with only next/previous buttons rather than displaying every available page number
Only display the next button and previous button accordingly
Has anyone worked with a similar issue or have thoughts on a resolution?
There are many occasions where will_paginate does a really awful job of calculating the number of entries, especially if there are joins involved that confuse the count SQL generator.
If all you need is a simple prev/next method, then all you need to do is attempt to retrieve N+1 entries from the database; if you only get back N or fewer, you're on the last page.
For example:
per_page = 10
page = 2
@entries = Thing.with_some_scope.find(:all, :limit => per_page + 1, :offset => (page - 1) * per_page)
@next_page = @entries.slice!(per_page, 1)  # truthy only if the extra record came back, i.e. a next page exists
@prev_page = page > 1
You can easily encapsulate this in some module that can be included in the various models that require it, or make a controller extension.
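One way such an encapsulation could look, as a minimal sketch (the method name `fetch_without_count` and the lambda-based `fetcher` are my own illustrative choices; in Rails the fetcher would simply wrap your model's limit/offset query):

```ruby
# Sketch of the N+1 trick as a reusable helper. `fetcher` is any callable
# taking (limit, offset) and returning up to `limit` records -- e.g.
# ->(l, o) { Thing.with_some_scope.limit(l).offset(o).to_a } in Rails.
def fetch_without_count(page, per_page, fetcher)
  # Ask for one extra record so we can tell whether a next page exists.
  records  = fetcher.call(per_page + 1, (page - 1) * per_page)
  has_next = records.size > per_page
  [records.first(per_page), has_next, page > 1]  # [entries, next?, prev?]
end
```

No COUNT query is ever issued; the only cost is fetching one extra row per page.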
I've found that this works significantly better than the default will_paginate method.
The only performance issue is a limitation of MySQL that may be a problem depending on the size of your tables.
For whatever reason, the amount of time it takes to perform a query with a small LIMIT in MySQL is proportional to the OFFSET. In effect, the database engine reads through all rows leading up to the particular offset value, then returns the next LIMIT number rows, not skipping ahead as you'd expect.
For large datasets, with OFFSET values in the 100,000-plus range, you may find performance degrades significantly. This manifests as: page 1 loads very fast, page 1000 is somewhat slow, and page 2000 is extremely slow.
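The usual workaround for that OFFSET cost is keyset (a.k.a. seek) pagination: instead of an offset, remember the last primary key seen and filter on it, so the database can jump straight to the right index position. A pure-Ruby illustration of the idea (names are mine; the SQL equivalent is along the lines of `WHERE id > :last_seen_id ORDER BY id LIMIT :per_page`):

```ruby
# Keyset pagination sketch: `rows` stands in for an id-ordered table.
# Each page query only needs the last id from the previous page.
def keyset_page(rows, last_seen_id, per_page)
  rows.select { |id| id > last_seen_id }.first(per_page)
end
```

The trade-off is that you can only step forward from a known position, not jump to an arbitrary page number -- which fits the prev/next-only design discussed here.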
I would like to make smart_listing jump to the last page by default, showing the most recent items, if no explicit page is given.
The only way I found to do this is to explicitly calculate the page number myself, like in
.../topics?topics_smart_listing[page]=17
I tried to use "total_pages" etc. on the collection, but apparently this doesn't work. Any suggestions on how I could get the maximum page count without calculating it myself?
I am using Rbuilder within an application constructed with Delphi. I have a report already built that displays a list of items, and at the bottom I have some subtotal fields as well as a total field. The subtotals and totals are defined as variables which total up the cost of the individual items.
Unfortunately both the subtotals and totals only give me correct calculations for items on the first and last pages of data. Let's say there are 5 pages of data that print out. Page one's totals are accurate.
Page two's totals are accurate. Page 3's totals include ONLY the totals from page 1 and page 3; page 4's total includes page 1 and page 4, and so on. I have been trying to play around with timing settings, as well as moving my total-calculating code to different events (ongettext, onprint, oncalc, etc.).
Has anybody ever run into this?
Ok, so I kept working at this and eventually found the problem.
At the report level I changed the report from TwoPass to OnePass. That ended up giving me very close to what I wanted. I ended up having to write some more code to get exactly what I wanted but changing the number of passes worked.
I was trying to display a running total page by page. And as I changed pages it would update the value.
OnePass worked.
My internal website search engine, based on pg_search, sometimes returns so much text in its search results that Heroku cannot load the page.
The problem is, some search results are so long that I could only publish one of them per page, whereas others are so short I could easily publish 20 of them at once.
So I'd like to paginate my search results, but I'd like to limit the amount of content I publish on each page by word count, not by result count.
I've taken a look at the main pagination gems on Ruby Toolbox like will_paginate, but I can't find any that offer this function.
Does a suitable gem exist? Or is there a straightforward way of doing this with a gem like will_paginate?
Displaying Only the first x words of a string in rails
I modified this method to get:
def get_n_words_with_offset(message, first_n_words = 1, offset = 0)
  string_arr = message.split(' ')
  if string_arr.count > first_n_words
    "#{string_arr[offset, first_n_words].join(' ')}..."
  else
    message  # fewer words than requested: return the whole string
  end
end
From there you can create a pseudo-model that "paginates" by re-splitting the array of returned models by word count. Have one instance of that model encapsulate one page, and you can use a pagination gem where one of those models appears on each page.
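A sketch of that re-splitting step (the method name and the greedy packing strategy are my own assumptions; `results` is assumed to be an array of result strings):

```ruby
# Greedily pack search results (strings) into pages whose cumulative word
# count stays at or under `max_words`. A single result longer than the cap
# still gets a page to itself, so nothing is ever dropped.
def paginate_by_word_count(results, max_words)
  pages = [[]]
  words_on_page = 0
  results.each do |result|
    count = result.split.size
    if words_on_page + count > max_words && !pages.last.empty?
      pages << []          # this result won't fit: start a new page
      words_on_page = 0
    end
    pages.last << result
    words_on_page += count
  end
  pages
end
```

Each inner array then becomes one "page" object for the pagination gem to display.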
The easiest way to explain this question is by example. See the following two images of the browse links on a particular website:
Basically, the way that it works is that there are a set number of records per page, and it works "backwards" in some manner to break down the browse pages into an appropriate number of ranges. So when there are relatively more records (as in the case of those starting with an "A"), there are more ranges, and more pages, than when there are fewer records ("X"). I am developing in Ruby on Rails, but would also be interested in some perspective on the logic here. Thanks!
The simplest way to visualize this is to think about the "deepest" groups all having 10 elements each, so split all your records into groups of 10.
Now, each group of 10 should be referenced to by an upper level group.
Each group of 10 of those should be referenced to by an even higher level group.
Finally, you'll reach the highest level group.
For any group, you take the first n letters of the first and last elements in its subtree, where n is its depth. So for a group at depth 1, you take the 1st character of the very first element (recursively descending until you reach the deepest groups) as the start of its range, and the 1st character of the last element as the end of its range.
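The bottom level of that scheme can be sketched in plain Ruby (illustrative names; groups of 10 and depth 1 as in the description above):

```ruby
# Split an alphabetically sorted list into groups of `group_size` and label
# each group with the first `depth` characters of its first and last members.
def label_ranges(sorted_titles, group_size: 10, depth: 1)
  sorted_titles.each_slice(group_size).map do |group|
    label = "#{group.first[0, depth]} - #{group.last[0, depth]}"
    [label, group]
  end
end
```

Higher levels are built the same way: treat each labelled group as one element and slice again, so the number of levels grows with the number of records, which is why "A" produces more ranges and pages than "X".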
I could mock it up in PHP if that would help you get what you need, in case the concept isn't quite clear from the description alone.
The code:
Channel.all.paginate(:page => 3, :per_page => 25)
Say I have a table with 400,000 records. Does the above code select all 400,000 records and then slice out the 25 I need, or does it query for only the 25 I need?
If it queries all 400,000 records is there a better optimized way to paginate large datasets using rails?
Mongo Mapper (which I assume you're using, given the syntax of your query) implements this using the limit and skip expressions.
Basically it would run a query where it skips over a number of Channels and then retrieves the amount specified by the limit (the number you are getting per page).
For example: If you were on page 3 and have 25 per page, the query that mongo mapper runs looks like this:
db.channels.find().skip((page - 1) * per_page).limit(per_page)
Which translates to:
db.channels.find().skip(2 * 25).limit(25)
To return results, Mongo has to skip over (page - 1) * per_page results, which can be costly if the page number is high. Let's say that expression evaluates to 1000: it would then have to run the query, skip over 1000 documents, and get the next 25 documents (the limit). MongoDB would essentially be scanning over all of those skipped documents.
To avoid that you can do range based paging which provides better use of indexes but does not allow you to easily jump to a specific page.
If the Channel model has a date field, for example, range-based paging would use $gte and limit instead of skip. You would take the date of the last document on page x and get the next page's results by querying for documents whose date is $gte that document's date. If you do that you could get dupes though, so it might make sense to use a different criterion.
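Sketched in plain Ruby over an array of date-sorted documents (illustrative only; I use a strictly-greater comparison instead of $gte to sidestep the duplicate-boundary issue mentioned above, at the cost of skipping documents that share the exact same date -- hence the "different criterion" caveat):

```ruby
# Range-based paging over documents sorted by :date. `last_date` is the
# date of the final document on the previous page; the > comparison avoids
# repeating that document, but would also skip other docs with that date.
def range_page(sorted_docs, last_date, per_page)
  sorted_docs.select { |doc| doc[:date] > last_date }.first(per_page)
end
```

With an index on the date field, this touches only the documents on the requested page, no matter how deep into the collection you are.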
In practice, don't worry about it unless you have a really high number of pages.
Cheers and good luck!