sIFR 3 - prefetch not working?

I am having a problem with the loading times/size of a sIFR 3 enabled site, and found out that the font swf is requested several times in my application. This can be seen in the network tab of Firebug, as well as in the Apache logs.
On http://novemberborn.net/flash/prefetching-movies there are some instructions for prefetching. However, that does not work: the prefetch method is not available (although it is still in the documentation!). I understand that prefetching is done automatically, but that does not seem to work either.
Even on the demo page of the sIFR download package, with an empty browser cache, I get several hits for rockwell.swf and cochin.swf! Both with Firefox 3 and IE 7...
Any chance for an easy and quick fix?
Greetings,
Simon

Fundamentally, this is an issue between the browser and the Flash player. As sIFR inserts the Flash movies into the page, the browser initializes the Flash plugin with the path to the Flash movie. If the movie is not yet in a local cache, it's requested from the server. Since the movies are inserted within a few milliseconds, this would mean that a request is made for each inserted movie.
sIFR tries to prevent this by prefetching the Flash movies. It does this once per browser session, based on a session cookie. The prefetch merely fires off a request for the movie file, in the hope that the file is in the cache by the time replacement starts. It's therefore important to load the sIFR JavaScript code as early as possible, and to activate sIFR properly by passing the Flash movies to the sIFR.activate() method.
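For illustration, a rough sketch of the usual sIFR 3 setup (the variable name and swf path here are placeholders, not taken from this site): activate() registers the movie up front so the prefetch can fire before replace() runs.
var rockwell = { src: '/swf/rockwell.swf' };
sIFR.activate(rockwell); // register the movie early so it can be prefetched
sIFR.replace(rockwell, {
  selector: 'h1',
  css: '.sIFR-root { color: #000000; }'
});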
In my experience the only way to reliably test this process is clearing browser cache, closing all browser instances (to get rid of the session cookie), then opening the browser and going straight to the page you want to test. I don't find the activity monitors within browsers reliable, so either check through an HTTP proxy or the server logs.
The one remaining improvement I could make is to try and detect the progress of the prefetch, and hold off on replacing elements until the prefetch is complete.

Do you have the option to move to Cufon? You'll find it much easier to use, and it isn't quirky.

Related

Why does the YouTube URL change when I hover the mouse over a video?

I was surfing on youtube and I realized something.
When I hover the mouse over a video, the URL changes.
Interestingly, this happens in some browsers. What's the matter? Why does the string start with &? https://www.youtube.com/?&ab_channel=NASA
What is the benefit of changing the URL?
Interestingly, this happens in some browsers.
Different browsers offer different support. "What you see is what you get" is a standard we all want, but we must write our scripts specifically for each browser if a feature requires it. In this case the feature may not be widely supported, or the coding wasn't compliant enough to give you the exact same result in each browser.
What's the matter?
No problem here; the URL is a tiny bit broken, but it won't impact site performance unless you somehow manage to error out the server and crash the entire network.
Why does the string start with ?&? https://www.youtube.com/?&ab_channel=NASA
What is the benefit of changing the URL?
A URL alone has no parameters passed to it, e.g. youtube.com. When a parameter is passed, the site checks it on the HTTPS request and determines what it is you want. So the response returns NASA's channel because ab_channel specified it.
Because the ? has nothing directly after it (unlike, say, ?video=asd89sa982), it's treated as undefined and carries no value or importance.
YouTube can fix it with a script adjustment if they want to.
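You can see how the browser itself reads that URL with the standard URL API (nothing YouTube-specific here): the empty segment between ? and & is simply dropped, so ab_channel is the only parameter that survives.
var url = new URL("https://www.youtube.com/?&ab_channel=NASA");
// The empty piece between "?" and "&" is ignored by the parser.
console.log([...url.searchParams.entries()]); // [["ab_channel", "NASA"]]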
The URL handling works like this: when the site starts or reloads, it checks every element with an href that points to a YouTube video (either https://www.youtube.com/watch?v= or https://youtu.be/) and saves those links; when one of them is hovered over, it can tell which one it is. This works fine, but one downside I'm currently facing is that links added after the site has started or reloaded aren't counted, so hovering over them won't show the link. I'm referring to comments; for example, if I post a comment that contains a link to a video and then hover over that link, it won't show the URL. I could make a function that reloads the list every 5 seconds or so, but that doesn't seem like a good idea, and for what I'm actually working on, reloading every time won't be good either.
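One way around that (a minimal sketch, not the original code; showPreview is a hypothetical stand-in for whatever display logic is already in place) is to delegate the hover handling to the document, so links added later, for example in comments, are picked up automatically without reloading anything.
document.addEventListener('mouseover', function (event) {
  var link = event.target.closest('a[href*="youtube.com/watch"], a[href*="youtu.be/"]');
  if (link) {
    showPreview(link.href); // hypothetical: your existing "show which video" logic
  }
});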

How to show the response of a GET ajax call through a service worker after the user goes offline?

Basically, I want my PWA to work offline. But on page load of the website there is a GET ajax call which is used to show some of the page's content.
The question is: how do I make my PWA work offline when there is an ajax call on page load, which would require me to store the response in a cache?
As the content can be heavy, is it even correct to cache so much data?
Also, I read somewhere that we cannot cache GET requests, so how can I proceed with making my PWA work offline?
I have tried looking at the following links, but they don't tell me how to cache dynamic content:
https://developers.google.com/web/ilt/pwa/caching-files-with-service-worker
https://vaadin.com/pwa/learn/caching-strategies
https://jslovers.com/dynamic-cache-serviceworkers.html
Of course you can cache "dynamic" content – from the browser's point of view it's just another HTTP request :-) Whether that is useful is of course a matter of your application and server logic. For some applications, caching dynamic content and showing it to the user at a later time might work completely fine, but for others it might cause problems. You know, it would be fine to show a rarely updated avatar image, but not OK to show old currency info, right?
You could also design the app around these limitations, maybe show the user a notification saying "hey, you're using an offline version and the data is XX hours old!" or something like that.
You can easily store multiple megabytes of network responses in the cache. If you've got more than 50 megabytes, browsers start to limit you. Also, always have error handling ready in case the browser tells you the cache is full or something similar.
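To make that concrete, here is a minimal network-first sketch (the cache name 'dynamic-v1' is an assumption of mine, not from the question): try the network, keep a copy of the GET response, and fall back to the cached copy when the user is offline.
self.addEventListener('fetch', function (event) {
  if (event.request.method !== 'GET') return; // only GET responses can go into the Cache API
  event.respondWith(
    fetch(event.request)
      .then(function (response) {
        var copy = response.clone(); // store a copy for offline use
        caches.open('dynamic-v1').then(function (cache) { cache.put(event.request, copy); });
        return response;
      })
      .catch(function () { return caches.match(event.request); }) // offline: serve the last cached copy
  );
});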
Does this explanation help you?

Rails Caching a Page Improperly - How to Stop?

I have a very simple test case that explains the problem.
Here's the page that I'm displaying in Rails in an ERB file.
<div><%=rand%></div>
<p><a href="http://www.google.com">Go</a></p>
To show the error, I load the page. I note the random number displayed as rand1. I click on the Google link. I click the browser's "Back" button. I note the random number displayed as rand2.
Here's the problem:
In Firefox and Chrome, rand1 != rand2 (always).
In Safari and IE, rand1 == rand2 (always).
Why the discrepancy between browsers? Why are Safari and IE caching the output from Rails while the other two browsers are not? How do I get Safari and IE to refresh the page?
(This is a simple test case to show the problem - this has implications in my Backbone application).
IE and Safari appear to be caching the response from the server. As long as your browsers are configured correctly, you can change this by setting the Cache-Control header on the response.
Another Stack Overflow post shows the appropriate way to do that, though in Rails 3 there's a shortcut method to accomplish this: you can invoke expires_now in the controller action to avoid manually setting all these headers.
WebKit in particular has an aggressive page caching strategy for handling exactly the case you're describing (clicking a link and then immediately clicking the back button). The idea is to make the back action happen almost instantaneously by caching not just the resources but also the DOM and other state of the page. You can read about it in these two articles:
WebKit Page Cache I — The Basics
WebKit Page Cache II — The unload Event
You may be able to use a combination of the load/unload and pageshow/pagehide events to accomplish what you need.
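For example, a small sketch (not from the linked articles): reload the page whenever it is restored from the page cache, so the random number is regenerated when the user hits Back.
window.addEventListener('pageshow', function (event) {
  if (event.persisted) {        // true when the page was restored from the page cache
    window.location.reload();   // force a fresh request to the server
  }
});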
I'm not sure if IE implements something similar to WebKit, but maybe this will fix it too.

How to load a document in the background?

I'm trying to write a Firefox extension that speeds up browsing page sequences by preloading sequence items, preprocessing them, and showing on request.
Is there any way to load and process DOM of arbitrary web page (on the same site as currently opened) in background from privileged extension code?
Ideally, the document's JavaScript should work as it would in a normal browser window; I suspect a hidden window would be required for this. That JavaScript should not run in a privileged context, though.
Loading should allow user to continue normal browsing in all visible browser windows.
I don't like the idea of injecting iframes into the currently opened document and making them optionally visible (the principle used by the Webcomic reader userscript).
From the add-on SDK, the page-worker module might be close to what you need:
The page-worker module provides a way to create a permanent, invisible page and access its DOM.
That said, I have no idea whether it's possible to load that invisible page into a (current or new) tab / window. You might be able to replace a current tab's document.body by the page-worker's one. Possibly. If it's legal.
You could use a lightweight browser extension to collect all links on a page on load and insert link tags to prefetch the content for each; the browser will then load those pages in the background: https://developer.mozilla.org/en/Link_prefetching_FAQ
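A rough sketch of that idea (the selector and the decision to prefetch every link are my assumptions, not part of the FAQ): collect the links once the page has loaded and add a prefetch hint for each, and the browser fetches them in the background while the user keeps browsing.
var links = document.querySelectorAll('a[href]');
for (var i = 0; i < links.length; i++) {
  var hint = document.createElement('link');
  hint.rel = 'prefetch';        // ask the browser to fetch this page in the background
  hint.href = links[i].href;
  document.head.appendChild(hint);
}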
OR
If you need to preload a page and have access to its DOM from extension land, you could use the Page Worker API from the Add-on SDK: https://addons.mozilla.org/en-US/developers/docs/sdk/1.0/packages/addon-kit/docs/page-worker.html
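Roughly like this, based on the SDK 1.0 docs linked above (treat it as a sketch: the exact require path and messaging API vary between SDK versions, and the URL is just an example):
var pageWorkers = require("page-worker");
pageWorkers.Page({
  contentURL: "http://example.com/next-page",    // the page to load invisibly
  contentScript: "postMessage(document.title)",  // runs inside the hidden page
  onMessage: function (message) {
    console.log("loaded in background: " + message);
  }
});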
I believe so. Assuming your JavaScript is already running,
var doc = gBrowser.selectedBrowser.contentDocument;
will get you the document of the loaded tab; you can then process it and do with it what you want. Doing it in the background and keeping the app responsive is a different story :)

Quickly and accurately grabbing webpage titles

I'm looking to get the title of a webpage, a common feature of many IRC bots that I want to incorporate into an IRC client I'm writing for fun.
The method that I currently have working basically connects, sends a GET request for the entire webpage, then seeks out the <title> tags and reads what's between them. For larger webpages this can be slower than I'd like. An additional problem I've noticed is that webpages with dynamic titles (such as some phpBB forums) will not return the accurate title as it would show in a browser, because I don't do any execution of JavaScript etc.
It seems one way to get an accurate title is to dump the HTML into a browser control (such as the IE COM control) and pull the title, but this is just going to make it even more time consuming.
Is there a simple method I am unaware of?
In a word, no, not really.
I guess rather than downloading the whole document you could stream the HTTP response into your application and just stop downloading when you reach </title> – that would save you waiting for the whole HTML document to download.
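As a sketch of that streaming idea (written in JavaScript for illustration; the asker's client may well be in another language): read the response in chunks and cancel the download as soon as </title> turns up.
async function fetchTitle(url) {
  var res = await fetch(url);
  var reader = res.body.getReader();
  var decoder = new TextDecoder();
  var html = '';
  while (true) {
    var chunk = await reader.read();
    if (chunk.done) break;
    html += decoder.decode(chunk.value, { stream: true });
    var match = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
    if (match) {
      reader.cancel();   // stop downloading the rest of the page
      return match[1].trim();
    }
  }
  return null;           // no <title> found
}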
However that doesn't help the situation if you need to read the title after it's been changed by some client-side javascript. As you say, the only way I can think of doing that is by using a browser control.
