The tutorial for installing the visual editor of MediaWiki on shared hosting is not up to date on their website:
MediaWiki Visual Installation Guide
Maybe it was written for an old version of localsettings.example.js, because it doesn't match the current one.
From the MediaWiki site, they say:
Open localsettings.example.js, and change parsoidConfig.setMwApi according to your wiki:
The first parameter is the "prefix". The "prefix" is the name given to this wiki configuration in the deprecated Parsoid v1 API. Leave this option at its default, 'localhost'.
The second parameter is the "domain". The "domain" is used for communication with VisualEditor and RESTBase. Leave this option at its default, 'localhost'.
The third parameter is the "uri". The "uri" is the URL to your API (normally this lives in the root folder and is called api.php). Change this option according to your wiki (for example http://wiki.example.com/api.php or http://example.com/w/api.php or something like that).
"Proxy" parameter leave as default.
If you use SSL on your wiki with a self-signed certificate, you should uncomment parsoidConfig.strictSSL = false; in line 102. You should still use HTTPS in the URL above instead of HTTP! Note that, in at least one instance of a wiki on SSL with redirect rules from HTTP to HTTPS, using http://example.com/w/api.php instead of https://example.com/w/api.php gave the error "Error loading data from server: 500: docserver-http: HTTP 500." This was fixed by using https:// or by removing the redirect rule.
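For reference, a minimal sketch of what that configuration looks like in localsettings.example.js, assuming the parameter names from the quoted guide (the api.php URI is a placeholder):

    exports.setup = function (parsoidConfig) {
        // "prefix" and "domain": leave at the default 'localhost' per the guide.
        // "uri": point this at your wiki's api.php (placeholder URL below).
        parsoidConfig.setMwApi({
            prefix: 'localhost',
            domain: 'localhost',
            uri: 'http://wiki.example.com/api.php'
        });
        // For self-signed certificates, the guide says to uncomment this line:
        // parsoidConfig.strictSSL = false;
    };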
But these instructions don't match my file. Can anyone please explain which settings I need to change to get it running?
I'm running Artifactory CPP CE 7.7.3 and Traefik v2.2 using docker-compose. The service is only available at http://localhost/ui/. What I need is an option that lets me add a URL path prefix (e.g. http://localhost/artifactory/ui).
My Setup
I used the setup process described in the Artifactory Docs.
My docker-compose.yaml is the official one, extracted from jfrog-artifactory-cpp-ce-7.7.3-compose.tar.gz: ./templates/docker-compose.yaml.
I'm using a reverse proxy (Traefik). For this, I've added the necessary Traefik configuration lines to the docker-compose file. Here is a small extract of what I've added:
[...]
labels:
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/ui`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/"
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"
With this, I managed to access Artifactory at http://localhost/ui/.
Problem:
I have multiple small services running on my server, and each of these services is accessible via http://localhost/<service-name>. This is very convenient and makes clear which service a URL belongs to on my production server.
Because of this, I want a URL like http://localhost/artifactory/ui/... instead of http://localhost/ui/...
I struggled to set up Artifactory that way. I already managed to get a redirect from e.g. http://localhost/artifactory/ to http://localhost/ui/, but that is not what I want on my production server.
What I did
Went through the documentation hoping to find an option I could simply pass to Artifactory to add a prefix (not successful).
Spent two full days trying to configure Traefik to alter headers so that responses point to http://localhost/artifactory/ui/... (only partially successful; redirects didn't work afterwards).
Tried to find the configuration responsible for this in $JFROG_HOME/artifactory/var/etc (not successful).
Is this even possible? Help is highly appreciated.
This example (even though it is not a Traefik example) gives you a direction for implementing it. Certain routes are already used within the product. You need to add a context path on top of them to ensure everything comes in via the new context path.
https://jfrog.com/knowledge-base/how-to-remove-artifactory-from-the-context-url-in-artifactory-7/
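Translated to Traefik labels, a rough, untested sketch of that direction might look like the following: route on the new /artifactory prefix and strip only that prefix, so the product's own routes (/ui, /access, ...) still reach the backend unchanged. Note that Artifactory may still issue absolute redirects to /ui; that is what the custom base URL discussed in the linked article addresses.

    labels:
      - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/artifactory`)"
      - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
      - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/artifactory"
      - "traefik.http.services.artifactory.loadbalancer.server.port=8082"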
I am new to PAC files, and I am not sure I got it working.
I installed nginx on a virtual machine and exposed a PAC file at a URL.
(I can easily download the PAC file by putting the URL in the browser, as proof that it is being served.)
I then set my computer's proxy settings as explained in many guides: I ticked the automatic proxy settings option and entered the PAC file URL.
After that, I think the file is not being loaded.
puting : "chrome://net-internals/#proxy" gives an empty list while it should show the pac file. Plus the net seems to work as the pac is not even there.
For further information, since I am new to pac file, I am just testing a simple pac file which redirect ALL traffic to youtube.com.
Can anyone help me out?
Thanks
function FindProxyForURL(url, host) {
    return "youtube.com";
}
The format of the return value is something like return "PROXY youtube.com:80". However, I do not think this will work, as YouTube is not a proxy.
PAC files must return a proxy. YouTube is just a site, which is something completely different from a proxy.
PAC files are unable to replace a proxy. All they do is point to the proxy to use when you want to reach some specific URL. This way you can configure how to reach certain networks, like some extranet (via a dedicated HTTP proxy) or Tor or I2P (via SOCKS proxies). This is why you also must specify the type of the proxy and the port number where the proxy listens. Just a name is usually not enough.
Also note that you can return more than just one proxy. For more information on PAC files, see the main documentation:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Proxy_servers_and_tunneling/Proxy_Auto-Configuration_(PAC)_file
(Sorry for the link, but if a Link to MDN ever breaks, the Net probably has some bigger trouble than just this broken link.)
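To illustrate the correct format, here is a minimal sketch of a valid PAC file; proxy.example.com is a placeholder for a proxy you actually run:

    function FindProxyForURL(url, host) {
        // Reach one internal domain through an HTTP proxy (placeholder host/port)...
        if (dnsDomainIs(host, ".intranet.example.com")) {
            return "PROXY proxy.example.com:8080";
        }
        // ...and everything else directly, without any proxy.
        return "DIRECT";
    }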
I am trying to integrate Google Tag Manager into an Electron app, but it doesn't seem to be working. The GTM snippets I placed in the app are not sending analytics data anywhere.
I found this issue on the Electron GitHub repo. It seems some people are having the same issue.
I wonder whether it is impossible to integrate GTM into Electron at all, or whether there is a way around this.
[Update]
While reading Alexander Leithner's answer, a further question popped up.
On Google Tag Manager - Dev Guide - Security, it says:
While most of the tag templates in Google Tag Manager are also
protocol relative, it's important to make sure that, when setting up
custom tags to fire on secure pages, those tags are also either
protocol relative or secure.
Does the file:// protocol matter because GTM is protocol relative? Wouldn't it be possible to bypass this with GA's forceSSL=true option, which can be set in the GTM interface?
[Final Update]
I found the perfect answer in this blog post:
Run Google Tag Manager And Google Analytics In Local Files.
Thank you, Eike Pierstorff, for the hint about setting storage to none; it led me to this post.
GTM by default used to use the same protocol as the webpage - that's what "protocol relative" means. I.e., there is a bit of code that loads the GTM library, and if this runs under the file protocol (as per the embedding webpage), it will try to load the library as a file, which does not work. However, GTM has switched from protocol relative to https by default, so I don't think GTM is your problem here.
You mention analytics data, and if this refers to Google Analytics, then your problem is not with GTM; it is that GA does not work on local files. Google Analytics uses a cookie to store the clientId (which is needed to aggregate individual hits into sessions/users), and since you cannot set cookies on a local file, this does not work.
A possible workaround would be to go to your GA tag in GTM, open the "Fields to Set" settings, set "storage" to "none" (which means that no cookie is set), and pass in a clientId manually.
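Outside of the GTM interface, the equivalent in plain analytics.js looks roughly like this sketch; the tracking ID is a placeholder and the clientId generation is a simplistic stand-in for however you choose to persist one:

    // Generate and persist a clientId ourselves, since GA cannot use cookies on file://.
    var clientId = localStorage.getItem('ga_client_id');
    if (!clientId) {
        clientId = Date.now() + '.' + Math.floor(Math.random() * 1e9);
        localStorage.setItem('ga_client_id', clientId);
    }
    ga('create', 'UA-XXXXX-Y', {
        storage: 'none',    // disable cookie storage
        clientId: clientId  // supply the client ID manually
    });
    ga('send', 'pageview');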
As this comment by Samuel Attard (MarshallOfSound), an Electron developer, states, Google Tag Manager does not work when the embedding webpage is loaded using a file:// URL.
If you instead load your application via http:// (or, preferably, https://), you will be able to use Google Tag Manager.
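One way to do that, sketched below under the assumption of a plain Electron main process with a single bundled index.html, is to serve the app from a local HTTP server instead of loading the file directly:

    const { app, BrowserWindow } = require('electron');
    const http = require('http');
    const fs = require('fs');
    const path = require('path');

    app.whenReady().then(() => {
        // Minimal static server for the bundled page (single-file placeholder).
        const server = http.createServer((req, res) => {
            res.setHeader('Content-Type', 'text/html');
            fs.createReadStream(path.join(__dirname, 'index.html')).pipe(res);
        });
        server.listen(0, '127.0.0.1', () => {
            const { port } = server.address();
            const win = new BrowserWindow();
            // Load over http:// so GTM's script loader works.
            win.loadURL(`http://127.0.0.1:${port}/`);
        });
    });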
I am working with a designer, and I'd like them to have access to the interactions I've implemented on the site we're working on. This time, however, I have two issues: my localhost is configured to a subdomain,
http://store.teststore:3000/, and we're on different networks. Is there any way to work around this?
ngrok should work for you. Download and install it following the instructions here: https://ngrok.com/download. Documentation on how it is used can be found here: https://ngrok.com/docs. Once it is installed, running the command below should work for you (depending on the hosting environment):
ngrok http -host-header=rewrite store.teststore:3000
You will need to give the URL generated by ngrok and displayed in the cmd prompt to the designer.
Update: Handling absolute redirects
Based on your comment, it sounds like your site does an absolute redirect after login (the full URL is specified). If possible, I would change your code to do a relative redirect, where the domain is omitted. You could also make the root domain in the absolute redirect configurable and set it to the ngrok domain for now. Lastly, you could configure your DNS with a CNAME record following ngrok's "Tunnels to custom domains" documentation. This last option, however, requires a paid ngrok subscription.
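For illustration, assuming an Express backend (your stack may differ, and '/dashboard' is a placeholder path), the difference between the two redirects looks like this:

    app.get('/login', (req, res) => {
        // Absolute redirect - breaks behind ngrok because the host is hard-coded:
        // res.redirect('http://store.teststore:3000/dashboard');
        // Relative redirect - works no matter which host served the request:
        res.redirect('/dashboard');
    });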
Install ngrok if you haven't yet, cd into your project directory, and invoke ngrok. Note: your application must be running locally on the same port that you pass to ngrok.
I'm converting my SDK-based Firefox extension to WebExtensions, and I've come up against the issue of updating the extension. The current extension is hosted on my own domain (which is an HTTP domain), along with the update.rdf file.
Now, for SDK-based add-ons, updates were possible via HTTP as long as the update manifest was signed using the McCoy tool and a valid hash of the update file was provided in the manifest. In addition, install.rdf would hold the public portion of the key used to sign update.rdf.
There seems to be no option to do this with WebExtensions (no manifest entry for a public key, and no update manifest (.json) entry for a signature).
Does this mean Firefox will only allow self-hosted extensions to update via HTTPS? How will this affect SDK-based extensions currently hosted on HTTP domains? Will they be able to receive (at least one) update?
As you appear to have determined, the update manifest for WebExtensions-based add-ons must be served over HTTPS, not HTTP. The documentation for the update_url property in the manifest.json applications key is explicit on this point:
update_url is a link to an add-on update manifest. Note that the link must begin with "https". This key is for managing extension updates yourself (i.e. not through AMO).
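In practice that looks like the following manifest.json fragment; the add-on ID and URL are placeholders:

    {
        "applications": {
            "gecko": {
                "id": "myextension@example.com",
                "update_url": "https://example.com/updates.json"
            }
        }
    }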
There is no way to use the alternate security method, which is available to other types of add-ons, of providing an updateKey (and signing the update.rdf) in an install.rdf file included with the extension.
Add-on SDK based extensions, and other types of non-WebExtensions add-ons, will continue to be able to receive their update.rdf over HTTP in the same manner which they have been doing.
If your issue is transitioning an add-on from being Add-on SDK based to being WebExtensions based, then you will need to ship an update to that extension which changes the URL from which updates are served. This can happen either in some version before transitioning to WebExtensions, or at the same time. Either way, it is just a new version of the add-on (delivered with the update.rdf served via HTTP and appropriately signed). That new version then has an update_url (WebExtensions) or updateURL (all other types) using the HTTPS scheme. All subsequent update manifests will then be served over HTTPS.
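For completeness, the WebExtensions update manifest served from that HTTPS update_url is a JSON file along these lines; the ID, version, and link are placeholders:

    {
        "addons": {
            "myextension@example.com": {
                "updates": [
                    {
                        "version": "1.2.3",
                        "update_link": "https://example.com/myextension-1.2.3.xpi"
                    }
                ]
            }
        }
    }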