Completely replacing a request in a WebExtension

I'm looking to make a web extension for Firefox that stores HTML pages and other resources in local storage and serves them for offline viewing. To do that, I need to intercept requests that the browser makes for the pages and the content in them.
Problem is, I can't figure out how to do that. I've tried several approaches:
The webRequest API doesn't allow fulfilling a request entirely: it can only block or redirect a request, or modify the response after it has already been fetched (see the sketch below).
Service Workers can listen to the fetch event, which can do what I want, but calling navigator.serviceWorker.register in an add-on page (the moz-extension://<id> origin) fails with DOMException: The operation is insecure. (Relevant Firefox bug.)
I could possibly set up the service worker on a self-hosted domain with a content script, but then it wouldn't be completely offline.
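For context, a minimal sketch of how far the first approach goes, assuming "webRequest", "webRequestBlocking", and matching host permissions in manifest.json, plus a hypothetical offline.html listed under web_accessible_resources; the listener can cancel or redirect, but there is no hook for answering the request with stored bytes:

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // The only options here are {cancel: true} or a redirect; the
    // response body itself cannot be supplied from local storage.
    return { redirectUrl: browser.runtime.getURL('offline.html') };
  },
  { urls: ['*://example.com/*'], types: ['main_frame'] },
  ['blocking']
);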
Is there an API that I missed that can intercept requests from inside a web extension?

Related

Can a web server prevent pages it serves from installing service workers?

Suppose there is a web server that hosts arbitrary user-controlled content under some paths - public IPFS gateways are the example that got me thinking about this. Is it possible for that server to prevent pages that it serves from installing service workers on clients (and thus spoofing content for non-user-controlled paths)?
There's some helpful info in the service worker specification:
An HTTP request to fetch a service worker's script resource will include the following header:
Service-Worker: Indicates this request is a service worker's script resource request.
Note: This header helps administrators log the requests and detect threats.
If you'd like to make sure that your web server doesn't allow any service worker registrations, one approach would be to check for the Service-Worker header on incoming requests and have your web server return an appropriate HTTP error response (anything 4xx or 5xx would work—maybe 403 or 412?) whenever you detect that.
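To make that concrete, here's a minimal sketch assuming a Node.js/Express front end (the framework, port, and status message are illustrative, not part of the original answer):

const express = require('express');
const app = express();

// The browser sends "Service-Worker: script" on service worker script
// resource requests; reject those before any user-controlled content
// gets a chance to register a worker.
app.use((req, res, next) => {
  if (req.headers['service-worker'] === 'script') {
    return res.status(403).send('Service worker registration not allowed');
  }
  next();
});

app.listen(8080);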

Real use of same origin policy

I just learned about the same-origin policy for web APIs. Enabling CORS makes it possible to call a web service that lives on a different domain.
My understanding is that not enabling CORS only ensures the web service cannot be called from a browser. But if I cannot call it from a browser, I can still call it in other ways, e.g. with Fiddler.
So I was wondering what the use of this functionality is. Can you please throw some light on it? Apologies if it's a trivial or stupid question.
Thanks and Regards,
Abhijit
It's not at all a stupid question; it's a very important aspect of dealing with web services that have a different origin.
To get an idea of what CORS (Cross-Origin Resource Sharing) is, we have to start with the so-called Same-Origin Policy, a security concept for the web. It sounds sophisticated, but it simply means a web browser permits scripts contained in one web page to access data in another web page only if both pages have the same origin. In other words, requests for data must come from the same scheme, hostname, and port. If http://player.example tries to request data from http://content.example, the request will usually fail.
After taking a second look it becomes clear that this prevents the unauthorized leakage of data to a third-party server. Without this policy, a script could read, use and forward data hosted on any web page. Such cross-domain activity might be used to exploit cookies and authentication data. Therefore, this security mechanism is definitely needed.
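To make the example concrete, a browser-side sketch using the origins from above; the /data path is hypothetical:

// Script running on a page served from http://player.example:
fetch('http://content.example/data')
  .then((res) => res.json())
  .then((data) => console.log(data))
  // Without CORS headers on content.example's responses, the browser
  // blocks the cross-origin read and this rejection fires instead.
  .catch((err) => console.error('blocked by the same-origin policy:', err));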
If you want to store content on a different origin than the one the player page is served from, there is a solution: CORS. In the context of XMLHttpRequests, it defines a set of headers that allow the browser and server to communicate about which requests are permitted or prohibited. It is a recommended standard of the W3C. In practice, for a CORS request, the server only needs to add the following header to its response:
Access-Control-Allow-Origin: *
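As a sketch of the server side, again with a hypothetical endpoint and assuming a Node.js/Express server:

const express = require('express');
const app = express();

app.get('/data', (req, res) => {
  // Opt this resource into cross-origin reads from any origin.
  res.set('Access-Control-Allow-Origin', '*');
  res.json({ message: 'readable from other origins' });
});

app.listen(8080);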
For more information on settings (e.g. GET/POST, custom headers, authentication, etc.) and examples, refer to http://enable-cors.org.
For a more detailed read, see https://developer.mozilla.org/en/docs/Web/HTTP/Access_control_CORS

Is there a way to create connection timeout to activate a service-worker?

I'm using Electron, which is based on Chromium, to create an offline desktop application.
The application uses a remote site, and we are using a service worker to offline parts of the site. Everything is working great, except for a certain situation that I call the "airplane wifi situation".
Using Charles, I have restricted the download bandwidth to 100 bytes/s. The connection is sent through webview.loadURL, which eventually calls LoadURLWithParams in Chromium. The problem is that the request does not fail and activate the service worker the way a complete lack of connection would. Once the request is sent, it waits forever for the response.
My question is, how do I timeout the request after a certain amount of time and load everything from the service worker as if the user was truly offline?
An alternative to writing this yourself is to use the sw-toolbox library, which provides routing and runtime caching strategies for service workers, along with some built-in options for handling these sorts of advanced use cases. In particular, you'd want to use the networkTimeoutSeconds parameter to configure the amount of time to wait for a response from the network before falling back to a previously cached response.
You can use it like the following:
// Network-first for GET requests whose URLs contain my-api.com,
// falling back to the cache after a 10 second network timeout.
toolbox.router.get(
  new RegExp('my-api\\.com'),
  toolbox.networkFirst,
  { networkTimeoutSeconds: 10 }
);
That configures a route matching GET requests whose URLs contain my-api.com and applies a network-first strategy that automatically falls back to the previously cached response after 10 seconds.
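If you'd rather not take on a library dependency, here is a hand-rolled sketch of the same network-first-with-timeout idea in a plain service worker (the 10-second figure mirrors the example above):

// Race the network against a timer; on timeout or network error,
// fall back to whatever was previously cached for this request.
self.addEventListener('fetch', (event) => {
  const timeout = new Promise((resolve, reject) => {
    setTimeout(() => reject(new Error('network timeout')), 10000);
  });
  event.respondWith(
    Promise.race([fetch(event.request), timeout])
      // Note: a cache miss still yields undefined, i.e. a network error page.
      .catch(() => caches.match(event.request))
  );
});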

Http Handler vs Http Module for Url redirection in SharePoint

There is a SharePoint farm with around 5 web applications. Each web application has numerous site collections, and within each site collection there are numerous sites.
Some of the sites in each site collection will no longer be used, and when any request comes to the site collection, it needs to be routed to a new SharePoint URL in a different farm.
I am trying to implement an HTTP handler or HTTP module to catch the requests that need to be redirected and redirect them to the new URL.
However, I need to know:
Is going with the HTTP handler or HTTP module approach the best? The client can't afford to have custom web parts on the home pages of each site collection to handle the redirect, so the request needs to be redirected before it reaches the page. Therefore I am assuming an HTTP handler or module is the best way.
What to choose between an HTTP handler and an HTTP module? I have noticed that:
a) if I go with an HTTP module, it is executed on every request to the web application. For instance, if I just type the URL of a site collection in the browser, the HTTP module gets executed around 10 times. Will this not be a performance issue?
b) if I go with an HTTP handler (to handle *.aspx), the handler class is invoked just once per request, but when the code executes and finds that no URL redirection is required, I don't get any HTML on the page. I guess this is expected, as the HTTP handler is responsible for generating the response (HTML); since the request is handled by the custom handler and no code is written to generate the HTML, nothing is displayed on the page.
Please let me know your thoughts.
Thanks,
Faiz

How to pass data from a web page to an application?

Trying to figure out a way where I can pass some data/fields from a web page back into my application. This needs to work on Windows/Linux/Mac, so I can't use a DLL or ActiveX. Any ideas?
Here's the flow:
1. Application gathers some data and then sends it to a web page using POST; the page is either embedded in the app or popped up in a new IE window.
2. The web page does some services and then needs to relay the results back to the application.
The only way to do this that I can think of is to have the page write the results locally, in a cookie or something like that, and have the application monitor for a specific file in that folder.
Alternatively, make a web service that the application hits after passing control to the page and when the page is done the web service will return the data. This sounds like it might have some performance drawbacks.
Can anyone suggest any better solutions for this?
Thanks
My suggestion:
Break the processing logic out of the web page into a separate assembly. You can then create a Web Service that handles all of the processing without needing to pass control over to a page.
Your application can then call the Web Service directly and then serialize the results and work with the data quite easily.
Update
Since the page is supplied by a third party, you obviously can't break anything out. The next best thing would be to handle the entire web request internal to your application (rather than popping a new Window).
With this method, you can get the raw HTTP response (and page markup) and work with it directly. You can then parse the Response stream and gather the required data from it.
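For instance, a minimal sketch assuming a Node.js-based application (Node 18+ for the built-in fetch, run as an ES module so top-level await works; the URL and field names are hypothetical):

// POST the gathered data to the third-party page and read the returned
// markup in-process, instead of popping up a browser window.
const params = new URLSearchParams({ field1: 'value1', field2: 'value2' });

const response = await fetch('https://thirdparty.example/process', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: params.toString(),
});

const markup = await response.text();
// Parse markup here to pull out the result fields the page produced.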
While performing an HTTP request you should be able to retrieve the text returned by the page. For instance, if your HTTP POST were to hit a Java servlet, the doPost() method would fire; you would then perform your actions and use the PrintWriter from the response object (PrintWriter out = response.getWriter();) to write text back to the calling application. I'm not sure if this helps?
The fact that the "web page is hosted by a third party and they need to be doing the processing on their servers" is important to this question.
I like your idea of having the app call a web service after it passes the data to the third-party web page. You can always call the web service asynchronously if you're worried about blocking your application while waiting for results from this web service.
Another option is that your application implements an XML-RPC server that can be called from the web page using PHP, Python or whatever you use to build the website (a rough sketch follows below).
A REST server will do the job also...
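A minimal sketch of that callback idea, with the application embedding a small local HTTP endpoint (shown with Node's built-in http module and plain HTTP rather than XML-RPC, purely for brevity; the port and path are made up):

const http = require('http');

// The application listens locally; when the page finishes its work,
// it (or a final redirect) POSTs the results to http://localhost:9090/result.
http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/result') {
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      console.log('results from page:', body);
      res.end('ok');
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(9090);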
