I am using the Service Worker example on this page. I have implemented the service worker (ASP.NET MVC application running within an SSL localhost site) and I receive confirmation (console message) that it is registered.
Within the URLs to cache I have added:
an index page (MyController/Index) which performs a "fetch" (part of the Fetch API),
as well as the URL that is fetched (MyController/GetData), which goes and gets the data from the database.
All looks OK when I am online - i.e. all the pages I have specified in the URLs to cache are successfully retrieved (as per the Chrome|DevTools|Network tab).
When I go offline (via the Chrome|DevTools|Application|Service-Workers|Offline checkbox), if I go to pages that I have not listed in the URLs to cache, I get the "Offline" page I have specified in the service worker (which is correct). However, when I navigate to the Index page (MyController/Index, mentioned above), which I have listed in the URLs to cache, the view appears but the "fetch" (to MyController/GetData) on that page fails - I wondered what the expected result of this is.
I was under the assumption that the data (retrieved through MyController/GetData) would be cached, and that when I go offline the cached data would be substituted if the fetch failed (i.e. the NetworkFirst strategy).
Can anyone point me in the direction of what should occur, and ideally to an MVC example?
Justin was right - my MyController/GetData method was actually using a POST, and my fetch handler was attempting to always fetch non-GET requests from the network - hence the failure when offline. I just changed the request to a GET request and my service worker fallback logic got the data from the cache.
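For anyone after a concrete example, here is a minimal sketch of the kind of fetch handler involved (the cache name and offline.html path are illustrative, not the asker's actual code). Non-GET requests are left to the network, which is exactly why the original POST-based fetch failed offline:

    // service-worker.js - minimal sketch, not the asker's exact code
    const CACHE_NAME = 'my-cache-v1';

    self.addEventListener('fetch', (event) => {
      // Only GET requests are cached; POSTs always go to the network,
      // which is why the original POST-based fetch failed offline.
      if (event.request.method !== 'GET') {
        return; // fall through to the browser's default network handling
      }
      event.respondWith(
        // NetworkFirst: try the network, cache a copy, fall back to the cache.
        fetch(event.request)
          .then((response) => {
            const copy = response.clone();
            caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
            return response;
          })
          .catch(() =>
            caches.match(event.request)
              .then((cached) => cached || caches.match('/offline.html'))
          )
      );
    });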
I'm looking to make a web extension for Firefox that stores HTML pages and other resources in local storage and serves them for offline viewing. To do that, I need to intercept requests that the browser makes for the pages and the content in them.
Problem is, I can't figure out how to do that. I've tried several approaches:
The webRequest API doesn't allow fulfilling a request entirely - it can only block or redirect a request, or edit the response after it's been done (sketched below).
Service Workers can listen to the fetch event, which can do what I want, but calling navigator.serviceWorker.register in an add-on page (the moz-extension://<id> domain) results in an error: DOMException: The operation is insecure. Relevant Firefox bug
I could possibly set up the service worker on a self-hosted domain with a content script, but then it won't be completely offline.
Is there an API that I missed that can intercept requests from inside a web extension?
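For reference, the block/redirect behaviour mentioned above is the most a blocking webRequest listener offers - a rough sketch (the example.com URLs are placeholders; this needs the "webRequest" and "webRequestBlocking" permissions in manifest.json):

    // background.js - sketch of the redirect-only interception that
    // the webRequest API offers; it cannot synthesize a response body.
    browser.webRequest.onBeforeRequest.addListener(
      (details) => {
        // A request can be cancelled or redirected,
        // but not answered from local storage:
        if (details.url.endsWith('/blocked.html')) {
          return { cancel: true };
        }
        return { redirectUrl: 'https://example.com/other.html' };
      },
      { urls: ['*://example.com/*'] },
      ['blocking']
    );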
I have a webapp that uses a service worker that detects when the network is offline and serves an offline page.
If I just load the main page and the network is offline, then the offline page is served and rendered, as expected.
But I have a problem handling the offline situation in the middle of the program, during a fetch request to the network that should return result data in JSON format.
In such a case, the service worker returns the offline.html page instead of the JSON data.
But the function that called the network fetch expects result data in JSON format, throws an invalid-data error, and nothing is rendered (because the program doesn't know that it needs to load a new page).
How should I update the calling code to:
check the result
detect that instead of JSON data, the response was the offline.html page, and
render the offline page instead of continuing the regular flow?
Thanks,
Avner
The solution is to replace the page content without using window.location.href = ..., which forces a page reload and changes the address bar.
See here
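In outline, the calling code can check the Content-Type of the response before parsing it, and swap the page content in place when it gets HTML back. A rough sketch (the /api/data endpoint name is made up; the exact check depends on how the service worker serves offline.html):

    async function loadData() {
      const response = await fetch('/api/data'); // hypothetical endpoint
      const contentType = response.headers.get('Content-Type') || '';

      // The service worker answered with offline.html instead of JSON.
      if (!contentType.includes('application/json')) {
        const html = await response.text();
        // Replace the page content in place; no window.location.href,
        // so the page is not reloaded and the address bar is unchanged.
        document.documentElement.innerHTML = html;
        return null;
      }
      return response.json();
    }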
When I visit https://vuejs.org/js/vue.min.js, I find it's loaded from a ServiceWorker. How does it work?
Screenshot of Chrome DevTools
A service worker is a piece of JavaScript code that your browser runs in the background. Service workers are given some special privileges over normal JavaScript code that runs in a browser, a commonly used privilege being their ability to intercept fetch events.
A fetch event is activated any time the client requests a file. Many service workers will download all requested files into a cache. The second time the same file is requested, the service worker will step in and return the cached file without sending any HTTP request.
In this particular case, a service worker is storing the contents of the vue file into a cache the first time it is downloaded, removing the need for an expensive network request on future page loads.
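A cache-first handler of the kind described might look roughly like this sketch (the cache name is illustrative; vuejs.org's actual service worker may well differ):

    // service-worker.js - cache-first sketch of the behaviour described
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((cached) => {
          if (cached) {
            // Second visit: served from the cache, no HTTP request is sent.
            return cached;
          }
          // First visit: fetch from the network and store a copy.
          return fetch(event.request).then((response) => {
            const copy = response.clone();
            caches.open('static-v1').then((cache) => cache.put(event.request, copy));
            return response;
          });
        })
      );
    });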
This is one of those questions that maybe should go to Server Fault, but then maybe there is a code-level solution.
Anyway, here is the question. I have a regular MVC3 application which requires user login to access (it uses the Authorize attribute on most of the actions). I also have a Silverlight object within the application that makes HTTP GET calls to a controller action which returns an image (in fact this is a map tile). This particular controller action has no Authorize attribute, and is therefore public.
The Silverlight component runs slowly or just blocks, because the MVC application can apparently process only ONE request at a time, as confirmed by Firebug. This means that the map tiles can be served only one after the other. Moreover, regular (non-map-related) requests are enqueued too, and everything times out after a while.
So to make a test, I set up another website with the same document root and instructed the Silverlight component to read tiles from there. Now tiles ARE requested concurrently and it runs smoothly.
So, is there any way to resolve this situation and use one site only?
If you are using Session in that server action, that would explain why requests are queued. Because the Session is not thread-safe, ASP.NET serializes all requests from the same session and executes them sequentially. Marking session state as disabled or read-only for that controller (MVC3 supports this via the SessionState attribute) lets such requests run concurrently.
Trying to figure out a way where I can pass some data/fields from a web page back into my application. This needs to work on Windows/Linux/Mac, so I can't use a DLL or ActiveX. Any ideas?
Here's the flow:
1. Application gathers some data and then sends it via POST to a web page that is either embedded in the app or popped up in a new IE window.
2. The web page does some services and then needs to relay the results back to the application.
The only way to do this that I can think of is writing the results locally from the page in a cookie or something like that, and having the application monitor for a specific file in that folder.
Alternatively, make a web service that the application hits after passing control to the page, and when the page is done the web service will return the data. This sounds like it might have some performance drawbacks.
Can anyone suggest any better solutions for this?
Thanks
My suggestion:
Break the processing logic out of the Web Page into a separate assembly. You can then create a Web Service that handles all of the processing without needing to pass control over to a page.
Your application can then call the Web Service directly, deserialize the results, and work with the data quite easily.
Update
Since the page is supplied by a third party, you obviously can't break anything out. The next best thing would be to handle the entire web request internally in your application (rather than popping up a new window).
With this method, you can get the raw HTTP response (and page markup) and work with it directly. You can then parse the Response stream and gather the required data from it.
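As a rough illustration of that round trip (sketched in JavaScript for Node 18+, whose built-in fetch is assumed; the URL, form fields, and result marker are all hypothetical):

    // Post the gathered data and parse the returned markup directly,
    // instead of popping up a browser window.
    async function submitAndParse(data) {
      const response = await fetch('https://thirdparty.example.com/process', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams(data),
      });
      const html = await response.text();

      // Pull the result out of the markup; here we assume the page wraps
      // it in an element like <span id="result">...</span>.
      const match = html.match(/<span id="result">([^<]*)<\/span>/);
      return match ? match[1] : null;
    }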
While performing an HTTP request you should be able to retrieve the text returned by the page. For instance, if your HTTP POST were to hit a Java servlet, the doPost() method would be fired; you would then perform your actions and use the PrintWriter object from the Response object (PrintWriter out = response.getWriter();) to write text back to the calling application. I'm not sure this helps?
The fact that the "web page is hosted by a third party and they need to be doing the processing on their servers" is important to this question.
I like your idea of having the app call a web service after it passes the data to the third-party web page. You can always call the web service asynchronously if you're worried about blocking your application while waiting for results from it.
Another option is that your application implements an XML-RPC server that can be called from the web page using PHP, Python, or whatever you use to build the website.
A REST server will do the job also...
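To sketch that last idea (plain JavaScript/Node here purely for illustration; the port and path are arbitrary, and in practice the page would also need CORS headers to be allowed to post to a local endpoint):

    // Minimal local callback endpoint the web page could post results to.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.method === 'POST' && req.url === '/callback') {
        let body = '';
        req.on('data', (chunk) => { body += chunk; });
        req.on('end', () => {
          console.log('Results from the page:', body);
          res.writeHead(200);
          res.end('OK');
        });
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8642); // arbitrary local port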