I have a page that has to render a huge set of query results - most of them with very, very small images. It is already paginated, so that won't solve my problem.
The query executes fine - it's very zippy, returns in about 0.0004 seconds, and paginates itself out to the View - all is well in the land of Oz.
However, the big problem is that ASP.NET MVC sends the page only when the whole thing is ready, not progressively as it renders. Is there any way around this?
I tried using jQuery to load partial views into div layers - this alleviated some of the problem, but the page still just 'hangs' until the whole thing is ready to be drawn.
I was looking around and found a few suggestions about using Response.Write - but I couldn't uncover anything relevant to my case. Any ideas? The structure is as follows...
PartialView
- Category
- IEnumerable<Models.Images> (List)
PartialView
- Page
- IEnumerable<Models.Images> (List) (Paginated View)
View
- Gallery
-- Index
--- Categories (Ajax Loaded on Demand, not on View render.)
---- ViewPage (No specific model passed)
The problem is clearly the images; I've tested it several times. If I remove the image tags from the code, it renders quickly with whatever data I give it. Each image is around 4 KB in size, so compressing them isn't likely to help.
Any help would be greatly appreciated.
There are a couple of things that you can do.
First, make sure the results themselves are not inside tables. IE (and perhaps other browsers) has to wait until a table has been fully loaded before rendering it.
Secondly, there is a method called Response.Flush which pushes the buffered output to the client. You can call it repeatedly - for every 10 items or so, for example. If you can incorporate this into your code it should do the trick for you.
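For instance, in a WebForms (.aspx) view it might look something like the sketch below; Model.Images, image.Path and the img markup are just placeholders for whatever your partial actually renders:
<%  int count = 0;
    foreach (var image in Model.Images) { %>
        <img src="<%= Url.Content(image.Path) %>" alt="" />
<%      count++;
        if (count % 10 == 0) {
            Response.Flush();   // push the HTML buffered so far down to the client
        }
    } %>
Note this only helps if nothing further down the pipeline (output caching, compression filters) is buffering the whole response anyway.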
About how many images are being loaded in a given request? As I'm sure you're aware, the issue is less the size of the files and more the quantity of them; it takes longer to move a bunch of small files than one equally sized large file.
One thing to consider would be to send the page down with a fixed-size set of results already populated and then use JavaScript (and perhaps scroll events) to dynamically load the rest. Ideally you should try to minimize the size of the initial request so that the page doesn't block user interaction for long; after that initial loading period you could then start pulling in the rest of the results.
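On the server side, that can be as simple as an action that returns one batch of results as a partial view, which the client script requests as the user scrolls. This is only a sketch; the repository, action and partial view names are made up:
public ActionResult ImageBatch(int categoryId, int skip, int take)
{
    // Hypothetical data access - substitute your own repository/query.
    var images = imageRepository.GetImagesByCategory(categoryId)
                                .Skip(skip)
                                .Take(take)
                                .ToList();
    // Renders the same item markup used for the initial page load.
    return PartialView("ImageList", images);
}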
I have a large XPage with about 170 fields on it, and tooltips for a fair number of the fields. The tooltip does a lookup into a help DB and returns the related text. They work fine, but they significantly slow down the load process because of the number of times the lookup is performed. I put a dBar.info statement in the JavaScript that loads the text, and in one load of the document (opening it and putting it into edit mode) it appeared to fire that one tooltip lookup 6 times. If it does that for every one of the tooltips, then clearly that is the reason for the slow performance of the XPage. I tried it with dynamicContent set to both true and false, with similar results. I'm tempted to just remove the tooltips, but that kind of defeats the purpose.
Is there a way to limit the tooltip so it only fires the lookup when called - something linked to the mouseover event, for example? It seems to me the tooltip in the Extension Library works OK if there are only a few fields requiring inline help, but it does not scale well.
Just as a test I removed all the tooltips from the XPage; the loading is still slow but probably acceptable, whereas with the tooltips it is unacceptably slow.
Bill,
This is an excellent use case for an ApplicationScope bean. Create a bean that implements Map and uses an internal HashMap as a cache - let's call it tooltip. In your XPage you reference the label via EL, e.g. tooltip['Manager']. XPages will call the bean's get method; in it you check the internal HashMap for the value, and if it isn't there you look it up and store it. So each lookup happens only once.
Instead of looking up on demand, you could opt for loading everything when the bean is initialized. Using a ViewNavigator, that should be very fast, and since it is application scope it is only loaded once.
Makes sense?
You can use view.isRenderingPhase() to minimise the lookups during a partial refresh. You can also change the tooltip's showDelay property so there is a delay before it shows - a good thing to do in a view, so it doesn't try to load every tooltip as the mouse moves down the page. dynamicContent="true" may also mean it's not loaded with the page but only when called - I haven't checked this, so I'm not certain.
I am building an Ember app and it is starting to get large. Is there any way to do lazy loading of the Ember files so that it doesn't take 10+ seconds to load when the user first hits the site? For example, since I have several logically separate modules as part of the site, I could load each module as it is accessed. I am using Ruby on Rails and the ember-rails gem.
If you think about what Ember is actually doing to render that code, you can understand why it is slow. Suppose you're creating 2k view instances and rendering 2k templates - templates that, for the most part, are doing very little, especially if you don't care about data binding.
For a first stab, let's stop rendering through templates. This code uses itemViewClass to render each item with a custom view instead of the view used internally by each.
// Use with {{each item in items itemViewClass=App.SpanView}}
App.SpanView = Em.View.extend({
  render: function(buffer) {
    buffer.push("<span>" + this.get('content') + "</span>\n");
  }
});
JSBin: http://jsbin.com/enapec/35/edit
With render overridden, we need to interact with the render buffer ourselves.
Even faster would be getting rid of the view entirely. I think there are two ways to do this. You could create a custom view with a render method that loops over all the items, and pushes each element onto the buffer. I think given the previous example you can get that going yourself.
Another simple option is to use a helper. A dumb helper like this is more difficult to wire up for re-rendering when the list changes, but sometimes it is the right solution.
// Use with {{eachInSpan items}}
Em.Handlebars.registerBoundHelper('eachInSpan', function (items) {
  return new Handlebars.SafeString(
    items.map(function (i) {
      return '<span>' + i + '</span>';
    }).join('')
  );
});
Live JSBin: http://jsbin.com/enapec/34/edit
Lastly, you could do this in jQuery with didInsertElement and the afterRender queue. I don't recommend it though.
Ember.RenderBuffer gathers information regarding a view and generates its final representation; it produces HTML which can be pushed to the DOM. FYI, the RenderBuffer API is defined in the Ember-views module. I am also a newbie, but I got this from another resource. Thanks.
In this keynote from EmberCamp London 2016, #wycats and #tomdale talk about their plans for improving Ember by slicing and dicing the app, loading only what is needed for a particular route. This is going to be great. I think that's what you wanted :)
https://www.periscope.tv/w/1mrGmzPBvQqJy#
Just wondering what the shorthand would be in Rails to do this (if any):
I have views/pages/ containing 5 html.erb files and they all use the same default layout.html.erb, with one yield statement in the middle of it (the standard setup).
Now I want one view that incorporates all 5 of those erb files above contiguously, one after the other, in place of the one existing yield statement in that same layout.html.erb.
What minimal changes would I make to the layout.html.erb to accomplish this?
(Rails newbie - I like it more than Django now.)
Ah,
I see what you're saying. Try this. Have your file structure such that all the views for said controller are in one folder...
@controllers_views = Dir.glob("your/controllers/views/*.erb")
@controllers_views.each { |cv| puts cv }
Seems like that would work, I'm away from my dev box or I'd test it for you.
Hope that helps.
Good luck!
You could always have some JavaScript that requests the views one at a time, at an interval, via Ajax, and then updates your target element to reflect the new content.
Alternatively, load all 5 into different divs and rotate their visibility, like a picture gallery. CSS3 could pull this off.
http://speckyboy.com/2010/06/09/10-pure-css3-image-galleries-and-sliders/
Hi,
I was thinking that my whole website would make use of cells, using the well-known Cells plugin for Rails, so this is my idea:
A table that contains 3 fields: id, view_name and layout. The layout field will be a serialized hash.
When a request is made, the layout field is fetched, and then in the view (the default layout) the layout variable is deserialized. It looks like this:
@layout[:sidecol][:gallery] = {... some params for it...};
@layout[:maincol][:comments] = {..params...};
In the <% # Ruby code to render the cells in @layout[:sidecol] %> block there will be some Ruby code that loops over @layout[:sidecol] and renders all the cells in it. The same happens in the maincol div.
What do you think?
Positive in my opinion:
More modular
controller is used only for post
easy change of structure
easier to implement some kind of tracking to see which layout performs better
Negative:
none found yet
EDIT:
1) The request comes in and the view name is calculated.
2) The layout field that corresponds to the view name is loaded from the database. (This will be cached and updated only when changes are made. I intend to do it this way because I will need to test the layouts: 33% layout1, 33% layout2 and 33% another layout, so a random number will be used to choose the layout.)
3) In the layout field, the first subdivision is the name of the div to be created; inside each one there are more components (cells, in this case) that will be instantiated in the application controller, because this is repeated for all GET requests.
4) In the view the divs are created and the defined cells are rendered inside each one.
5) The cell will make the request to the db and load the data.
6) The cell will then render the HTML and is ready to go!
So... in discussing MVC, you should understand that the controller layer is probably the most critical of the layers for actually accomplishing tasks - it is where the work actually gets done within your framework.
There are ways to abandon monolithic MVC as your primary pattern, for example, building a set of services which communicate with each other, but even there, that's mainly abandoning monoliths, since you can push the MVC pattern down into each component.
With regard to your specific proposal, storing layout data, which seldom changes between requests, in your DB is an unnecessary performance hit. Having to drag the layout out of your DB, deserialize it and run it will require you either to set up a caching framework or to do serious DB scaling.
Incidentally, Rails 3 and Merb (and I think even Rails 2) are sufficiently modular in the controller layer that they don't really care where your layout/templating/view stuff comes from. If you really want, you can just tell them to render a string.
Imagine I have a view that is cached with the OutputCache attribute, but I still need to increment a counter that records that the page has been viewed. How could I do it?
I thought about creating my own custom ActionFilterAttribute and using the action filter order of execution to record this, but I'm not sure it will work.
eg.
[IncrementViewCountFilter(Order=1)]
[OutputCache(Duration=60,Order=2)]
public ActionResult Index(int questionId)
{ ... }
Firstly, my assumption here is that if the OutputCache is called and the page is cached, then the controller code will not be run.
The next problem, I'm guessing, is that the IncrementViewCountFilter wouldn't know about the questionId, so how would it know what to increment (because it is executed before the main Index code)?
Secondly, if the IncrementViewCountFilter did know the questionId and the page is getting lots of hits, you wouldn't want it to write to the DB every time, but only when the count reaches a certain number, at which point you 'flush' it.
Anyone have any thoughts?
Well, you have a few options.
Donut caching
One server-side option is 'donut caching', which allows most of the page to be cached while portions of it (the hole in the middle of the donut) are not. Donut caching is described here, and I have used it with great success.
Image-based tracker
Another option is having an image on the page actually load a server-side action that records the hit. This would look like
<img src="/controller/action">
on the page, where the action serves up an empty image at the end.
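A rough sketch of such an action; the HitCounter call is hypothetical, so record the hit however suits you:
public ActionResult TrackView(int questionId)
{
    // Record the hit (DB, cache, logging service - whatever you use).
    HitCounter.Increment(questionId);
    // Serve a 1x1 transparent GIF so the <img> tag has something to load.
    byte[] pixel = Convert.FromBase64String(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");
    return File(pixel, "image/gif");
}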
Client-side tracking
The last option is client-side tracking -- where some script runs on the client side and uses AJAX to call something on the server to record the hit. Google uses something like this for their Analytics package. If you're on the same domain as your tracking mechanism ... like if your main page is:
http://www.domain.com/home/action
and the tracker is on
http://www.domain.com/tracking/action
then you should be fine.
This gets tricky when your tracker is on a different domain (you need to handle it using JSONP or some other mechanism that allows relatively safe cross-domain requests).
The filter can get the questionId from the ActionExecutingContext.ActionParameters, which is passed to OnActionExecuting. As for caching the hit counts, well, use the cache. :)
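As a rough sketch of what that filter could look like - the in-memory dictionary, the flush threshold of 50 and the SaveHitsToDatabase call are all placeholders for your own batching and persistence strategy:
public class IncrementViewCountFilter : ActionFilterAttribute
{
    private static readonly object padlock = new object();
    private static readonly Dictionary<int, int> pendingHits = new Dictionary<int, int>();

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        object value;
        if (!filterContext.ActionParameters.TryGetValue("questionId", out value))
            return;
        var questionId = (int)value;
        lock (padlock)
        {
            int count;
            pendingHits.TryGetValue(questionId, out count);
            count++;
            // Only write to the database once enough views have accumulated.
            if (count >= 50)
            {
                SaveHitsToDatabase(questionId, count);   // hypothetical persistence call
                count = 0;
            }
            pendingHits[questionId] = count;
        }
    }

    private static void SaveHitsToDatabase(int questionId, int hits)
    {
        // e.g. UPDATE Questions SET ViewCount = ViewCount + @hits WHERE Id = @questionId
    }
}
Bear in mind the counts live in memory per app domain, so anything not yet flushed is lost when the application recycles.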
You could also use an HttpModule, which is a good option because it can be used for pages and other assets that do not go through the MVC pipeline. I use a combination of donut caching (http://mvcdonutcaching.codeplex.com/), an MVC filter and an HttpModule to record all types of analytics for cached pages.
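A bare-bones version of such a module might look like this; RecordHit is a placeholder - write to a log, a queue, or batch and persist however you prefer:
public class AnalyticsModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Fires for every request in the ASP.NET pipeline, including
        // (in most configurations) responses later served from the output cache.
        context.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            RecordHit(app.Context.Request.RawUrl);
        };
    }

    public void Dispose() { }

    private static void RecordHit(string url)
    {
        // Hypothetical recording logic.
    }
}
Register it in web.config and it will also see requests for assets that never reach a controller (depending on how IIS routes static files).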
I don't know about the MVC side but if I was doing this in WebForms this sounds like it would be a candidate for output cache substitution aka donut caching.