I'm migrating my SDK-based Firefox extension to WebExtensions and have run into the issue of updating the extension. The current extension is hosted on my own domain (served over plain HTTP), along with the update.rdf file.
Now, for SDK-based add-ons, updates were possible via HTTP as long as the update manifest was signed using the McCoy tool and a valid hash of the update file was provided in the manifest. In addition, install.rdf would hold the public portion of the key used to sign the update.rdf.
There seems to be no way to do this with WebExtensions (no manifest entry for a public key, and no update manifest (.json) entry for a signature).
Does this mean Firefox will only allow self-hosted extensions to update via HTTPS? How will this affect SDK-based extensions currently hosted on HTTP domains? Will they be able to receive (at least one) update?
As you appear to have determined, the update manifest for WebExtensions-based add-ons must be served over HTTPS, not HTTP. The documentation for the update_url property in the manifest.json applications key is explicit on this point:
update_url is a link to an add-on update manifest. Note that the link must begin with "https". This key is for managing extension updates yourself (i.e. not through AMO).
There is no way to use the alternate security method available to other types of add-ons: providing an updateKey (and signing the update.rdf) in an install.rdf file included with the extension.
Add-on SDK based extensions, and other types of non-WebExtensions add-ons, will continue to be able to receive their update.rdf over HTTP in the same manner as they have been.
If your issue is transitioning an add-on from the Add-on SDK to WebExtensions, then you will need an update to that extension which changes the URL from which updates are served. The change can ship in some version before the transition to WebExtensions, or in the same version. Either way, it is just a new version of the add-on, delivered through the old update.rdf served via HTTP and appropriately signed. That new version declares an update_url (WebExtensions) or updateURL (all other types) using the HTTPS scheme, and all subsequent update manifests will then be served over HTTPS.
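For illustration, here is a minimal sketch of the manifest.json for such a new WebExtensions version; the add-on ID, domain, and file names are placeholders:

    {
      "manifest_version": 2,
      "name": "My Extension",
      "version": "2.0.0",
      "applications": {
        "gecko": {
          "id": "myextension@example.com",
          "update_url": "https://example.com/updates.json"
        }
      }
    }

The update manifest served from that HTTPS URL then lists the available versions, roughly like this:

    {
      "addons": {
        "myextension@example.com": {
          "updates": [
            {
              "version": "2.0.1",
              "update_link": "https://example.com/downloads/myextension-2.0.1.xpi"
            }
          ]
        }
      }
    }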
Background:
I've got a project in Cloud Run with two services, both mapped to custom domains. The production site is mysite.com and the development site is dev.mysite.com. I deployed the development site with the --no-allow-unauthenticated flag to prevent public viewing, but I still want developers to be able to view the site in a browser. Based on what I've read, the "solution" Google currently has isn't great: you run gcloud auth print-identity-token to obtain your Bearer token, then use the ModHeader browser extension to add it to the request header. The token is constantly changing, and having ModHeader enabled to modify the request header breaks authentication on other pages, so it's a big PITA, but it mostly works.
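For reference, the manual flow is equivalent to something like this (with dev.mysite.com standing in for the real service URL):

    TOKEN=$(gcloud auth print-identity-token)
    curl -H "Authorization: Bearer $TOKEN" https://dev.mysite.com/

ModHeader just injects that same Authorization header from inside the browser.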
Question:
What doesn't work is having the development site load images from the Google Cloud Storage bucket. Every resource that should be pulled from the bucket returns a 403 error, but the page otherwise loads fine. I'm the project owner (i.e. my email address is the "owner") and have admin rights on everything, including the bucket in question. The bucket's Access Control is set to "Fine-grained: Object-level ACLs". When I deploy the project with --allow-unauthenticated, the images are accessible. Why isn't the bucket honoring my token?
Update:
I'm not 100% sure, but I think the issue is related to the fact that ModHeader applies its rules to ALL open tabs, so the Cloud Run identity token was presumably also being attached to the requests going to the storage bucket, which rejects it. I tried another header-modification extension named Requestly, which allows rules to be targeted at specific URLs, and now my development site loads images as expected.
I am using PCF and trying to bulk/single import applications using an HTTP URL, but the network is blocking external HTTP. Is there an option to register my task JAR without pulling it over HTTP?
These are the URLs I am trying to import:
http://repo.spring.io/libs-snapshot/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Celsius.BUILD-SNAPSHOT/spring-cloud-stream-app-descriptor-Celsius.BUILD-SNAPSHOT.stream-apps-kafka-10-maven
http://repo.spring.io/libs-release-local/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Celsius.SR3/spring-cloud-stream-app-descriptor-Celsius.SR3.stream-apps-rabbit-maven
Yes, you can!
The HTTP URLs that we publish are nothing but property files with key/value pairs of out-of-the-box application coordinates. You could download the file to your laptop and use the third option on the page, "Bulk import application coordinates from a property file". Alternatively, from the same page, you could copy and paste the key/value pairs into the "Apps as Properties" text area. Either option registers the application coordinates in SCDF's app registry.
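For illustration, the entries in such a property file take the form <type>.<name>=maven://<group>:<artifact>:<version>; the exact coordinates and versions below are only examples:

    source.http=maven://org.springframework.cloud.stream.app:http-source-kafka-10:1.3.1.RELEASE
    sink.log=maven://org.springframework.cloud.stream.app:log-sink-kafka-10:1.3.1.RELEASE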
However, at runtime these applications will be resolved, downloaded, and deployed by SCDF as part of the stream/task deployments. That means that, in a restricted environment, you may still run into the same connectivity problem.
For that reason, there are several other options in PCF to host and resolve application artifacts (see the ref. docs). The SCDF App Tool is typically preferred by PCF customers.
I am trying to integrate Google Tag Manager into an Electron app, but it doesn't seem to be working. The GTM snippets I embedded in the app are not sending analytics data anywhere.
I found this issue on the Electron GitHub repo; it seems some people are having the same problem.
I wonder whether it's impossible to integrate GTM with Electron at all, or whether there is a way around this.
[Update]
While I was reading Alexander Leithner's answer, a further question popped up.
On Google Tag Manager - Dev Guide - Security, it says:
While most of the tag templates in Google Tag Manager are also
protocol relative, it's important to make sure that, when setting up
custom tags to fire on secure pages, those tags are also either
protocol relative or secure.
Does the file:// protocol matter because GTM is protocol-relative? Wouldn't it be possible to bypass this with GA's forceSSL=true option, which can be set in the GTM interface?
[Final Update]
I found the perfect answer in this blog post:
Run Google Tag Manager And Google Analytics In Local Files.
Thank you, Eike Pierstorff, for the hint about setting storage to none; it led me to this post.
GTM by default used to use the same protocol as the webpage; that's what "protocol relative" means. I.e. there is a bit of code that loads the GTM library, and if the embedding page uses the file protocol, it will try to load the library as a file, which does not work. However, GTM has switched from protocol-relative to https by default, so I don't think GTM is your problem here.
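For reference, the difference is in the loader snippet's src assignment. This is reconstructed from memory, so treat it as illustrative (GTM-XXXXXX is a placeholder):

    <script>
    // Older snippets used a protocol-relative URL, which a page opened via
    // file:// would resolve to file://www.googletagmanager.com/... and fail:
    //   j.src = '//www.googletagmanager.com/gtm.js?id=' + i + dl;
    // Current snippets pin https explicitly:
    (function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
    new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
    j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
    'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
    })(window,document,'script','dataLayer','GTM-XXXXXX');
    </script>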
You mention Analytics data, and if this refers to Google Analytics then your problem is not with GTM; it is that GA does not work on local files. Google Analytics uses a cookie to store the clientId (which is needed to aggregate individual hits into sessions/users), and since you cannot set cookies on a local file, this does not work.
A possible workaround would be to go to your GA tag in GTM, open the "Fields to Set" settings, set "storage" to "none" (which means that no cookie is set), and pass in a clientId manually.
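In plain analytics.js terms, what that tag configuration does is roughly the following; the tracking ID and clientId here are placeholders, and in practice you would generate and persist the clientId yourself:

    // Create a tracker that stores nothing in cookies and is given
    // its clientId explicitly, then send a pageview.
    ga('create', 'UA-XXXXX-Y', {
      storage: 'none',
      clientId: '35009a79-1a05-49d7-b876-2b884d0f825b'
    });
    ga('send', 'pageview');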
As this comment by Samuel Attard (MarshallOfSound), an Electron developer, states, Google Tag Manager does not work when the page that includes it is loaded from a file:// URL.
If you instead load your application via http:// (or, preferably, https://), you will be able to use Google Tag Manager.
The tutorial for installing MediaWiki's VisualEditor on shared hosting is not up to date on their website.
MediaWiki Visual Installation Guide
Maybe it was written for an old version of localsettings.js, because it doesn't match the current one.
From the MediaWiki site, they say:
Open localsettings.example.js, and change parsoidConfig.setMwApi according to your wiki:
The first parameter is the "prefix". The "prefix" is the name given to this wiki configuration in the deprecated Parsoid v1 API. Leave this option as the default 'localhost'.
The second parameter is the "domain". The "domain" is used for communication with VisualEditor and RESTBase. Leave this option as the default 'localhost'.
The third parameter is the "uri". The "uri" is the URL to your API (normally this is in the root folder and called api.php). Change this option according to your wiki (for example http://wiki.example.com/api.php or http://example.com/w/api.php or something like that).
Leave the "proxy" parameter as the default.
If you use SSL on your wiki with a self-signed certificate, you should uncomment parsoidConfig.strictSSL = false; in Line 102. You should still use HTTPS in the URL above instead of HTTP! Note that, in at least one instance, for a wiki on SSL with redirect rules from HTTP to HTTPS, using http://example.com/w/api.php instead of https://example.com/w/api.php gave an "Error loading data from server: 500:docserver-http:HTTP 500." error. This was fixed by using https:// or by removing the redirect rule.
But it doesn't match. Can anyone please explain which settings I need to change to get it running?
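For reference, here is roughly what I would expect the call to look like based on those instructions; this is only a sketch, with wiki.example.com standing in for my actual wiki:

    // In Parsoid's localsettings.js; newer versions take a settings object
    parsoidConfig.setMwApi({
      prefix: 'localhost',                    // left as the default
      domain: 'localhost',                    // left as the default
      uri: 'http://wiki.example.com/api.php'  // points at the wiki's api.php
    });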
I have a server/client project, both written in Dart. My server starts on port 1337, and when I run my client with "Run in Dartium", my static files are served on port 3030, which allows me to debug my client code in the Dart Editor.
The problem is that this causes CORS issues when using AJAX calls. I have properly set up my server to accept other origins (with Access-Control-Allow-Origin), but, for example, cookies aren't sent along.
Now I'm wondering: is there a way to serve my files with my server (running on 1337) and still be able to debug the client-side code in the Dart Editor?
My understanding is that you can debug, but the real problem is that you don't get the expected data back from the server due to missing cookies.
Standard CORS requests do not send or set any cookies by default.
In order to include cookies as part of the request, besides setting up the server, you need to specify the withCredentials property, e.g.:
HttpRequest.getString(url, withCredentials: true)
    .then((response) => print(response));
You will also need to set up the server to send the Access-Control-Allow-Credentials header.
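A minimal sketch of the server side in Dart, assuming the client is served by the Dart Editor on http://127.0.0.1:3030 as in the question:

    import 'dart:io';

    main() {
      HttpServer.bind('127.0.0.1', 1337).then((server) {
        server.listen((HttpRequest request) {
          // With credentials the browser rejects a wildcard origin,
          // so the exact origin of the editor's server must be named.
          request.response.headers.set(
              'Access-Control-Allow-Origin', 'http://127.0.0.1:3030');
          request.response.headers.set(
              'Access-Control-Allow-Credentials', 'true');
          request.response
            ..write('hello')
            ..close();
        });
      });
    }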
EDIT: it seems that an additional issue is that you don't want to have two servers, each serving a different part of the app.
In that case, you can configure the Dart Editor to launch a URL instead of files. Go to Run > Manage Launches and create a new Dartium or Dart2JS launch with the URL and source directory specified.
Another option is to select Run > Remote Connection and attach to a running instance of browser or Dart VM.
Caveat: I haven't tried these options, so I can't tell how stable they are.