sIFR server requests

Is there a way to reduce the number of times sIFR 3 requests the sifr.swf file from the server? I have tried using prefetch() without success. Is there a way to cache the .swf file?
Thanks.

The best explanation of prefetching is in the question "sifr3 - prefetch not working?". So basically the answer is yes, but it may not always work.
(prefetch() has been removed in more 'recent' nightlies, by the way.)

Related

How can I reduce the waiting (TTFB) time?

I have a query which gets a list of users from a table, sorted by the time each row was created. I got the following timing diagram from the Chrome developer tools.
You can see that the TTFB (time to first byte) is too high.
I am not sure whether it is because of the SQL sort. If that is the reason, how can I reduce this time?
I saw blogs which say that TTFB should be low (under 1 second), but for me it shows more than 1 second. Is it because of my query or something else?
I am using Angular. Should I sort the table in Angular instead of in SQL? (Many posts say that shouldn't be the issue.)
What I want to know is how I can reduce the TTFB. I am new to this; it is a task given to me by my team. I have read many posts but could not understand them properly. What exactly is TTFB? Is it the time taken by the server?
TTFB is not the time to the first byte of the body of the response (i.e., the useful data, such as JSON or XML), but the time to the first byte of the response received from the server, which is the start of the response headers.
For example, if the server sends the headers before doing the hard work (like heavy SQL), you will get a very low TTFB, but it isn't a "true" measurement.
In your case, TTFB represents the time you spend processing data on the server.
To reduce the TTFB, you need to do the server-side work faster.
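If the slow part really is the SQL sort, an index on the creation-time column usually helps, because the database can read rows in order instead of sorting the whole table on every request. A rough sketch, assuming a MySQL-style database and a users table with a created_at column (your table and column names will differ):

-- hypothetical names; adjust to your schema
CREATE INDEX idx_users_created_at ON users (created_at);

-- EXPLAIN shows whether the query still needs a full sort (filesort)
EXPLAIN SELECT * FROM users ORDER BY created_at DESC LIMIT 50;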
I ran into the same problem. My project is running on a local server, and I checked my PHP code.
$db = mysqli_connect('localhost', 'root', 'root', 'smart');
I use localhost to connect to my local database. That may be the cause of the problem you're describing. You can modify your HOSTS file by adding the line
127.0.0.1 localhost
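An alternative to editing the HOSTS file is to connect by IP address directly, which skips the hostname lookup (on some setups, localhost resolves to the IPv6 address ::1 first, and the fallback adds a noticeable delay). A sketch based on the connection line above:

// same credentials as above, only the host changed from 'localhost' to the IP
$db = mysqli_connect('127.0.0.1', 'root', 'root', 'smart');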
TTFB is determined by what happens behind the scenes on the server; your browser knows nothing about that work.
You need to look into what queries are being run and how the website connects to the server.
This article might help you understand TTFB, but otherwise you need to dig deeper into your application.
If you are using PHP, try putting <?php flush(); ?> after </head> and before </body>, or after whatever section you want to send to the browser early (like the header or the main content). It outputs what has been generated so far without waiting for PHP to finish. Don't use this everywhere, or the speed increase won't be noticeable.
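A minimal sketch of that idea (the page structure is hypothetical): the headers and the <head> section are pushed to the browser before the slow work starts.

<!DOCTYPE html>
<html>
<head>
  <title>Example page</title>
  <link rel="stylesheet" href="styles.css">
</head>
<?php
// Send everything rendered so far to the browser now,
// so it can start fetching the CSS while PHP keeps working.
flush();
?>
<body>
<?php
// ...slow queries and page rendering happen here...
?>
</body>
</html>

Note that output buffering or server-level compression can delay the actual flush, so the effect depends on your setup.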
I would suggest you read this article and focus more on how to optimize the overall response to the user's request (whether a page, a search result, etc.).
A good argument for this is the example they give about using gzip to compress the page. Even though TTFB is faster when you do not compress, the overall experience for the user is worse because it takes longer to download content that is not zipped.
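If you want to try compression from PHP itself, one option is to wrap the output buffer with ob_gzhandler (assuming the zlib extension is available); enabling compression in the web server (e.g. mod_deflate or nginx gzip) is usually the better place, though. A minimal sketch:

<?php
// Compress the whole response with gzip if the client supports it.
// Must be called before any output is sent.
ob_start('ob_gzhandler');
?>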

Fragment Caching with a background worker

I have a page which renders a lot of partials.
I fragment cache them all, which makes the page very fast. Hooray!
The thing is that, because of the number of partials, the first request, when the cache is being written, takes so long that it times out (subsequent requests are really fast).
I also use Sidekiq (but the question is relevant to any background processor).
Is there a way to render those partials in a background process, so users who miss the cache (due to expiration) won't hit a timeout? The idea would be to go over all the partials and re-cache those whose cache has expired (or is about to expire).
I only know of the preheat gem, but I think it is too simple for my needs. Plus, it hasn't been maintained for ages.
I was working on a project and had a similar problem. Actually, it was a problem with only one page, and with loading right after the cache was cleared. I solved it another way (I didn't have anything like Sidekiq, so maybe it won't be the right solution for you, but it may be helpful).
What I did is, right after clearing the cache, call the open() method with the problematic URL as the parameter:
require 'open-uri'
open('http://my-website/some-url')
So, after clearing the cache, that URL was requested and the cache was rebuilt automatically. We solved the problem quickly that way. I know that a background worker would be a better solution, but for me it worked.
Just to note: our cache was cleared by cron, not manually.
UPDATE
Or, if you clear the cache manually, you could call open('http://my-website/some-url') afterwards, but from Sidekiq (I didn't try this, it's only an idea; see the sketch below).
Of course, my problem was with only one page; if you want to cover the whole website, it makes things more complicated.
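For the Sidekiq idea above, a minimal sketch (the worker class name and URL are made up): the job simply requests the page in the background, so the fragment caches are rebuilt before a real visitor hits them.

require 'sidekiq'
require 'open-uri'

class CacheWarmerJob
  include Sidekiq::Worker

  def perform(url)
    # Rendering the page server-side writes the fragment caches again.
    open(url).read
  end
end

# Enqueue it right after expiring the cache:
# CacheWarmerJob.perform_async('http://my-website/some-url')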

MODX Revo: How can I make pages load faster?

I'm working on a site with MODX Revo and I'm really annoyed by how slowly pages load. There's a 2-second wait for a page load on my localhost, and I have an SSD. I've been looking around to find out how to make page loads faster.
I do have a lot of getResources/Gallery calls (9 total) and two Wayfinder calls. I've read it had to do with those, so I got rid of all the getResources calls and changed them to custom snippets that do only what I need them to do: build a 3-4 item menu. It's still slow; that only gained a few hundred ms.
The Galleries (5) only hold 3-4 images each. I also use Babel, which checks every resource ID for its translation counterpart.
I'm wondering if it has anything to do with my WampServer (v2.2) settings...
Now that I've summed it all up, it does look like a heavy page. Will I get long page loads with any CMS this way?
Any help/hints/tips are appreciated!
You might want to cache all snippet tags by not using the exclamation mark ([[! ... ]]).
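For example, with the snippets mentioned in the question (parameters omitted), the exclamation mark is what makes a call uncached:

[[!getResources]]   runs the snippet on every request
[[getResources]]    caches the output until the site cache is cleared

The same applies to the Wayfinder and Gallery calls, as long as their output doesn't need to change on every request.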
Here is a blog about caching guidelines: http://www.markhamstra.com/modx-blog/2011/10/caching-guidelines-for-modx-revolution/
Here is a current discussion about speed performance: http://forums.modx.com/thread?thread=74902#dis-post-415390

networkActivityIndicatorVisible logic issue

So my application connects to a URL (via URLConnectionDelegate) and gathers data, which contains image URLs. It then connects to each and every image URL (again, via URLConnectionDelegate) and downloads each image.
Everything works perfectly; couldn't be happier.
The problem is that I can't really track the networkActivityIndicator. There are something like 100 connections going off at once, so I don't know when or how to turn the networkActivityIndicator off once the last image is done loading.
Does anyone have any suggestions without me having to redo a bunch of code?
Thanks for the help guys
The typical solution is a singleton object with methods like [NetworkMonitor increaseNetworkCount] and [NetworkMonitor decreaseNetworkCount] that you call at the appropriate points.
The nicer solution is a toolkit like MKNetworkKit, which will handle this and a bunch of similar things for you (like managing your download queue, since 100 simultaneous connections is actually very bad on iOS).
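A minimal sketch of that counting singleton (the class is illustrative, mirroring the calls above; it is not part of any framework):

#import <UIKit/UIKit.h>

@interface NetworkMonitor : NSObject
+ (void)increaseNetworkCount;
+ (void)decreaseNetworkCount;
@end

@implementation NetworkMonitor

// Number of connections currently in flight.
static NSInteger _activeCount = 0;

+ (void)increaseNetworkCount {
    dispatch_async(dispatch_get_main_queue(), ^{
        _activeCount++;
        [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
    });
}

+ (void)decreaseNetworkCount {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (_activeCount > 0) _activeCount--;
        [UIApplication sharedApplication].networkActivityIndicatorVisible = (_activeCount > 0);
    });
}

@end

Call increaseNetworkCount when you start each connection and decreaseNetworkCount in both the completion and failure delegate callbacks; the spinner turns off once the count drops back to zero.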

ASP.NET MVC 3: what and when to cache and how to decide?

I have been neglecting learning about caching for quite some time now, and although I've used caching here and there in the past it's not something I'm familiar with.
I found a great tutorial about what caching is and what kinds of cache there are (I know already what caching is), but...
How does one decide what and when to cache? Are there things that should always be cached? In what situations should you never use caching?
The first rule is: don't cache until you need to; doing it earlier would be premature optimization (first link I found; Google for more info).
The biggest problem with caching is invalidation: what happens when the data you have cached is updated? You need to make sure your cache is updated as well, and if that is not done correctly it often becomes a mess.
I would:
1. Build the application without caching and make sure the functionality works as intended.
2. Do some performance testing, and apply caching when needed.
3. After applying caching, do performance testing again to check that you are getting the expected speed increase.
I think the easiest way is to ask yourself a bunch of questions:
Is this result ever going to change? If not, cache it permanently.
If yes, when is it going to change? For example, when a user updates something.
Is the change going to impact only the particular user who changed the value, or all users? This should give you an indication of when to clear the particular cache.
You can keep going, but after a while you will end up with different profiles, UserCache and GlobalCache being just two examples.
These profiles should tell you what to cache and give each entry an update criterion (when to refresh the cache).
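As a rough sketch of those two profiles in ASP.NET MVC 3 (controller, action names, and durations are made up), output caching lets you declare per action what gets cached, for how long, and per whom:

using System.Web.Mvc;

public class ProductsController : Controller
{
    // "GlobalCache": the result is the same for every user and rarely changes,
    // so cache it for a long time (duration is in seconds).
    [OutputCache(Duration = 3600, VaryByParam = "none")]
    public ActionResult Catalog()
    {
        return View();
    }

    // "UserCache": the result changes when the current user updates something,
    // so keep the duration short and vary the cache entry per user.
    // The "user" key must be implemented in GetVaryByCustomString in Global.asax.
    [OutputCache(Duration = 300, VaryByParam = "none", VaryByCustom = "user")]
    public ActionResult Dashboard()
    {
        return View();
    }
}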

Resources