I have registered my Firefox extension using the Windows registry under:
HKEY_CURRENT_USER\Software\MozillaPlugins
But upon restarting Firefox, I do not see my extension installed.
Can anyone explain why this is happening?
An extension is not a plugin. You are using the wrong location within the Windows registry to add an add-on/extension.
Using registry entries to install add-ons is considered obsolete. However, if you are going to do so, the registry key you should be using for the current user is:
HKEY_CURRENT_USER\Software\Mozilla\Firefox\Extensions
MDN provides the following information as to what the content of the registry entry under that key should be:
The ID of the extension must be used as the name of the Registry entry. The Registry entry must have a type of REG_SZ, and its value must be an absolute path to the folder containing the extension (i.e., the location of the unpacked XPI). For example, to install the extension described in the Building an Extension article, create a Registry entry with name equal to sample@foo.net and value equal to c:\extensions\myExtension.
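For example, using the values from the MDN quote above, such an entry could be created from a command prompt (a sketch; adjust the ID and path to your own extension):
reg add "HKCU\Software\Mozilla\Firefox\Extensions" /v "sample@foo.net" /t REG_SZ /d "C:\extensions\myExtension"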
You have not included in the question the exact entries you used in the registry, nor the contents of your add-on, including the directory it is located in and, at a minimum, your install.rdf or manifest.json file. Thus, it is not possible to know whether there is another issue in addition to, or instead of, your use of the wrong registry key.
Installation options other than using the Windows registry:
There are multiple alternatives to using the registry: you can install your extension into one or more of several possible directories. Depending on which location you use, the add-on may or may not be automatically updated by Firefox when you release a new version. If you use the Windows registry, it will not be automatically updated.
The following links, along with the official information from MDN, list the locations in which you can install the extension to have it affect the current user or all users on the machine, and whether it will be automatically updated (a sketch of one such location follows the list):
Adding Extensions using the Windows Registry (MDN documentation)
Installing extensions (MDN documentation)
Install WebExtensions on Firefox from the command line (Stack Overflow question)
Including extensions with your distribution of Firefox (MDN documentation)
Extension Packaging (MDN documentation)
How to have Firefox auto-update extensions bundled with an application (Stack Overflow question)
How to integrate add-ons (.xpi) into my custom Firefox build? (Stack Overflow question)
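As an illustration of one such location (a legacy per-user directory described on MDN; verify it against the links above for your Firefox version, and note that {ec8030f7-c20a-464f-9b0e-13a3a9e97384} is Firefox's application ID), you could sideload by copying the packed XPI, renamed to the extension's ID:
copy myExtension.xpi "%APPDATA%\Mozilla\Extensions\{ec8030f7-c20a-464f-9b0e-13a3a9e97384}\sample@foo.net.xpi"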
I am looking for the legacy mbedtls documentation.
It was available at tls.mbed.org before the project joined Trusted Firmware. Now, sadly, it is no longer reachable.
Thanks!
tls.mbed.org only ever hosted the latest version; at some point it froze and kept showing an old version until it went down. I haven't found a site hosting multiple versions of the documentation.
You can generate the documentation on a typical Unix-like system (e.g. Linux, macOS, WSL, or Cygwin) by checking out the version you want from the GitHub repository. This has the advantage that you can build the documentation for your own configuration: after setting up mbedtls/mbedtls_config.h (mbedtls/config.h in Mbed TLS 2.x), run
make apidoc
and browse apidoc/modules.html or apidoc/files.html.
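For example, a minimal session might look like this (a sketch; the tag is illustrative, and Doxygen plus Graphviz must be installed for make apidoc to work):
git clone https://github.com/Mbed-TLS/mbedtls.git
cd mbedtls
git tag                      # list the available release tags
git checkout mbedtls-2.28.0  # illustrative tag; pick the version you need
make apidoc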
If you want the whole documentation including all compile-time options and features that may or may not be enabled in your build, run
scripts/apidoc_full.sh
Note that this overwrites mbedtls/mbedtls_config.h.
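If you have customized your configuration, save a copy before running it, e.g.:
cp mbedtls/mbedtls_config.h mbedtls/mbedtls_config.h.bak
scripts/apidoc_full.sh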
I have the following "problem": let's say I set up a Docker container with Node on it and want to use it as a development container. I connect to the container in Visual Studio Code via the "Remote - Containers" extension, create a working folder, and install some extensions, e.g. Prettier.
If I now delete this container and create a new one from the same image, all extensions of the old container are automatically reinstalled, and Visual Studio Code also tries to connect to the old working folder, which may not exist at all.
Does anyone know where this information is stored for the image, so that I can delete it after I delete a container? I am working on macOS.
I have found the files related to my problem. They are located in the following path under macOS:
~/Library/Application Support/Code/User/globalStorage/ms-vscode-remote.remote-containers/imageConfigs
The .json files contain information about the workspace folder and the installed extensions.
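So, after deleting a container, you can inspect and remove the stale entries (a sketch; the file names are illustrative, so check which .json file belongs to your image before deleting):
cd ~/Library/Application\ Support/Code/User/globalStorage/ms-vscode-remote.remote-containers/imageConfigs
ls               # find the .json file that corresponds to your image
rm <image>.json  # placeholder name; remove only the entry you no longer need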
How do I set up an 8th Wall project for web AR on a Mac and use my own system instead of their web editor? I want to code on my computer and test, then upload to their console, rather than coding in their web editor.
Steps to develop 8th Wall web AR locally without using the 8th Wall cloud editor:
Create a project using the 8th Wall dashboard, navigate to the dashboard settings, and copy the app key.
Copy this base Glitch get-started project and replace the app key with your project's app key.
Navigate back to the dashboard and authorize your browser with the help of the dev token.
You are now ready to use and test 8th Wall development locally.
You can later self-host the project instead of copy-pasting the code and reformatting it according to the 8th Wall cloud editor.
You can also directly remix any of the Glitch projects, which is a much quicker option.
NOTE: The Glitch projects are under-maintained, so refer to the docs for the latest SDK version as well as syntax changes.
You can develop locally by choosing the self-hosted project option with 8th Wall, then downloading 8th Wall's own web repository to tinker with. I struggled with the 8th Wall docs to figure this out, but the web repository makes local development pretty straightforward.
Follow the steps in the getting started guide:
First, you'll need to create an 8th Wall account and a self-hosted project.
Copy your unique App Key from the project settings page.
Clone the source code from the repo and replace the app key in the index.html file with your own app key (it lives in the head of the HTML file):
<script async src="//apps.8thwall.com/xrweb?appKey=insert-your-key-here"></script>
8th Wall includes a serve script, which serves your source code over HTTPS on your local network. This means you can add your local URL as a trusted domain in your self-hosted project settings for testing.
You'll need Node.js and npm installed to run the script.
Using the serve script depends on your platform (there are instructions for Windows as well), but in the case of a Mac, open a terminal in your project directory:
cd <to_this_serve_directory>
npm install
cd ..
./serve/bin/serve -d <sample_project_location>
I used Node version 16.16.0, as I had issues with Node 18.12.1. You can use a Node version manager (such as nvm) to help manage your Node versions.
What's great about this is that when you run the serve script from your terminal, it generates a QR code so you can test your app on a mobile device over the local network. Make sure you copy the entire listening URL into your browser, including the port number, e.g. https://192.168.0.11:8080.
One final thing to mention: don't include the port number in your trusted domains URL, e.g. https://192.168.0.11.
GitHub recently released a container registry alongside their package registry. What is the difference? When would it be better to use one or the other? Do we need both?
Packages are generally simple: they are essentially an archive (e.g. a zip file) that contains contents (code libraries, application executables, etc.) and a manifest (a JSON document, XML file, etc.) that describes those contents with, at a minimum, a package name and version number.
Examples: npm, pip, and Composer packages.
Container images are also simple, but they are closer to an archive of a whole filesystem than to a package: they bundle an application together with the entire environment it needs to run.
Examples: nginx, redis, etc.
Verdict: if some libraries are used repeatedly across projects, create a package and use it in those projects; for everything a project needs in order to run, choose a container. Yes, we need both.
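A quick illustration of the difference (both commands are standard):
npm install lodash   # fetches a package: a code library your application imports
docker pull nginx    # fetches a container image: a complete environment that runs an application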
After debating this with a Docker-using friend for a while I think I've found a satisfactory explanation:
Packages are for modules of code which are compiled together into an Application.
Containers are for Applications which are compiled together into a Server.
This is complicated a bit by the fact that a Package can contain a standalone Application, and Containers will often use package managers like Apt to install these applications. I feel this is an abuse of package management, a legacy of the days before we had Containers. Eventually I would expect most Applications to be delivered in Container form.
I'm working in an Erlang environment. I'm looking to establish a dependency manager so that our build server can publish binaries for reuse instead of using source code dependencies. The Hexpm GitHub project implies that it is possible to run it outside of the hex.pm website, but I don't see any instructions for doing so. Specifically, I would like my build server to be able to publish packages either directly (via the filesystem) or via rebar3, and for subsequent rebar3 builds to be able to use those published packages
Is it possible to run Hex on my own server?
If so, where would I find some documentation on how to set it up (or provide the instructions directly)?
If you look at https://github.com/hexpm/hex_web there are instructions in the README.md for both installing and running it. It's a Phoenix application, so it should all be relatively familiar ground if you've looked at the Phoenix framework before.
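As a rough sketch of what getting it running looks like (the mix tasks here are typical for Phoenix applications of that era and only illustrative; the README is authoritative, and a running PostgreSQL instance is assumed):
git clone https://github.com/hexpm/hex_web.git
cd hex_web
mix deps.get                         # fetch Elixir dependencies
mix ecto.create && mix ecto.migrate  # set up the database
mix phoenix.server                   # start the application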
As for getting rebar3 to work with your installation, there is documentation on the config values for setting the URLs used for Hex packages: http://www.rebar3.org/docs/hex-package-management.
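For illustration, a minimal sketch of pointing rebar3 at a self-hosted repository might look like the following, appended to your global rebar config (the repo name and URL are placeholders, and the exact option names depend on your rebar3 version, so treat the linked docs as authoritative):
cat >> ~/.config/rebar3/rebar.config <<'EOF'
{hex, [{repos, [#{name => <<"my_hex">>, repo_url => <<"https://hex.example.com">>}]}]}.
EOF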
HTH.