Loading additional JavaScript code from a Firefox add-on content script

I'm writing something that I want to release as both a Chrome extension and a Firefox add-on.
The Chrome extension is already available on GitHub. I've factored my code into several modules, using a module-loading format similar to what RequireJS uses; I did this to separate the parts that were Chrome-specific from the parts I hoped to re-use in the Firefox add-on.
Specifically, I split up not only the backend work, but also the content scripts.
In Chrome, when my content script needs to load another module, it sends a message to the background page saying "please load this module"; the script on the background page then does:
function onLoadLibrary(request, sender, sendResponse) {
  var allFrames = request.allFrames || false;
  // Inject the requested module into the tab (and frames) that asked for it.
  chrome.tabs.executeScript(
    sender.tab.id,
    {file: request.library.toLowerCase() + '.js', allFrames: allFrames},
    function () {
      sendResponse({});
    });
  return true;  // keep the message channel open for the async sendResponse
}
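For reference, a minimal sketch of the content-script side of this exchange (assuming the handler above is registered via chrome.runtime.onMessage.addListener; the names here are illustrative, not the extension's actual code):

function loadLibrary(library, callback) {
  // Ask the background page to inject library.js into this tab's sandbox.
  chrome.runtime.sendMessage(
    {library: library, allFrames: false},
    function (response) {
      callback();  // the requested module is now available in this sandbox
    });
}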
That is, I'm able to load additional JavaScript into the same sandbox as the content script that asked for that code. This is necessary to make module dependencies work.
In Firefox, I can't figure out how to do this. I attach my initial content scripts through page-mods and by calling tab.attach from the "ready" event of tabs. That seems straightforward, but if that content script then needs to load more code, I can't see how to do it.
There doesn't seem to be a way to access the sandbox my content script is running in from the main.js file so that I might inject more code into it. Even if I somehow kept a reference to the relevant tab instance (which only lets me inject into the top frame in any case), it appears that each new call to tab.attach puts the injected code into a new sandbox. The tab object that's passed to my ready event handler isn't a real XUL tab that I could pass to require("tabs/util").getBrowserForTab; if it were, then I think I could follow through enough of the SDK code to create my own sandbox, though I'd worry about leaving accidental memory leaks behind.
I considered passing the code back to the content script through an "eval-this-code" message, but I really don't want to use eval in my extension because of security concerns; I also worry that using eval would make it difficult or impossible to get my Firefox add-on approved for AMO. (Also, how would that interact with sites that have a Content Security Policy?)
The use of traits to define the add-on API seems to close off access to objects, such that I can't reach inside a Worker to get a reference to the sandbox my content script is executing in. At this point, it appears that I'd need to include nearly a full copy of the SDK in my add-on just to expose one method on WorkerSandbox.
Note: I'm using the Add-on SDK (the project formerly known as Jetpack). I'm willing to use Components.utils.import if someone can tell me how to use that from inside an Add-on SDK-managed content script.

Content scripts do not expose a public API to attach more scripts to a content-script sandbox after it has been initialized. You should probably file an enhancement bug and state your use case, if there isn't one filed already (search first), and/or even come up with some patches yourself.
In cases where there is a DOM that your add-on owns (e.g. a widget), it's just a matter of attaching another script tag.
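A minimal sketch of that case (doc and the URL are placeholders for a document your add-on owns and a script packaged with it):

function attachScript(doc, url) {
  // In a DOM you own, loading more code is just appending a script element.
  var script = doc.createElement('script');
  script.src = url;
  doc.documentElement.appendChild(script);
}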
For things like page-mods where there is no DOM you own, you're left with a couple of options, none of which is really satisfying. As you already found out yourself, the use of traits prohibits you from accessing "private" properties/methods.
Fork page-mod/tab/content-worker to provide the functionality you need. That would require creating your own copies of the modules and exposing the necessary APIs to inject scripts into existing workers.
This has a steep learning curve (but given that you already figured out details such as traits, it should be doable for you), but more importantly it is hard to maintain, as you need to make sure you keep up with the upstream. And AMO editors will not like you very much for it :p
On the plus side, you could try to get your stuff committed upstream, fixing this problem for everybody and becoming a hero to many authors using the Add-on SDK.
The eval method you propose. Not only is eval a major source of security issues, but it may also be a performance killer, as right now (IIRC) evaled code will not use the JIT. And, of course, it will make us AMO editors cringe, even if used "correctly".
Do not use lazy loading at all, and specify all content scripts from the very beginning. This is what add-ons usually do (I'm almost inclined to say "always"). However, this conflicts with your current design, and depending on your add-on it may impose a serious performance penalty for loading stuff you don't really need in the end.
You could use the require mechanism to keep most scripts as SDK modules rather than content scripts (see the sketch after this list). This is not always feasible, of course, e.g. when dealing with code that would normally modify the DOM in your content script, but it might work for some other stuff.
Replace page-mod, etc. with your own Greasemonkey-like, enhanced API. This means lots of work, it is error-prone and security-sensitive, and it has to be maintained. So it's not really a viable solution, IMO...
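A minimal sketch of the require approach mentioned above (the file name and module layout are made up; depending on the SDK version, local modules are required by relative path or bare name):

// lib/parser.js - pure logic, no DOM access, so it can be an SDK module:
exports.parse = function (text) {
  return text.split('\n');  // placeholder logic
};

// main.js:
var parser = require('./parser');
console.log(parser.parse('a\nb').length);  // 2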
Components.utils.import does not help you. It isn't available to content-scripts anyway.

Related

Why use defaults/preferences/*.js vs setting Firefox Add-on defaults in JavaScript?

The more I think about it, the more I wonder why anyone should bother to use the defaults/preferences/*.js file to set defaults, versus setting the defaults in JavaScript.
I am doing work on an older XUL/Overlay-based add-on created by someone else, which actually uses a prefs.js file in addition to setting all the defaults in JavaScript (in case the prefs.js failed for some inexplicable reason?), which essentially turns all the preferences into user prefs directly after installation. This confused me at first, as the defaults were showing up as modified (user-set) when looking at the prefs in about:config. Then I realized it was unconditionally setting some of the defaults (very large strings).
So I realized that not only do I maintain the same prefs in 3 locations (prefs.js, interface and content scripts), but the prefs.js file is largely redundant, which seems to add maintenance for no reason. This just seems silly, and I am looking for a better way to store prefs and manage them in one location (which is probably why the prefs.js should be used exclusively).
Now, I realize this question has the potential to be flagged as "opinion", and may or may not have a specific "right" answer. But I think it is a valid question, and I would like to learn more about the pros and cons of using a prefs.js file versus setting all prefs during initialization in shared JS code. Are there any performance concerns, or an objective list of criteria that I could use to make this determination? Is it possible that the prefs.js mechanism would ever fail? Is it safe to assume it never will? Was it more prone to failure back in the FF 1.0-3.5 days?
Default preferences:
Can be changed in a new version of the add-on without overriding the value set explicitly by the user.
Do not clutter up the user's profile after the add-on is uninstalled.
Are located in the same place, not scattered throughout the code. It's easy to see what preferences there are. They also appear in about:config, which allows you to have UI for "hidden" prefs, for the sake of power users.
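For reference, such a defaults file is just a series of pref() calls (the add-on ID and pref names below are made-up examples):

// defaults/preferences/prefs.js
pref("extensions.myaddon@example.com.enabled", true);
pref("extensions.myaddon@example.com.cacheSize", 50);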
It is a pity they did not make the get*Pref API return the default value in case the user-set value is invalid, and I understand not wanting to maintain the default in two locations. This whole API was "frozen" a long time ago, and will eventually go away with the rest of XPCOM, so it doesn't really matter...
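The usual workaround for that limitation is a small wrapper that falls back to a hard-coded default when the read throws (a sketch; branch is assumed to be an nsIPrefBranch):

function getIntPrefSafe(branch, name, fallback) {
  try {
    return branch.getIntPref(name);  // throws if missing or wrong type
  } catch (e) {
    return fallback;
  }
}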

Dart dynamic class loading

I want to build an application server with Dart. The HttpServer in the dart:io library is certainly a good starting point for that. But I struggle with the task of "deploying" an application without restarting the server process.
To be more precise: I want to have something like a servlet container in Java, like Tomcat, into which I can easily deploy or redeploy an application without restarting the container. I thought I could utilize the mirror system, which in principle allows me to load a library and its contained classes from the filesystem. But unfortunately it seems that I cannot re-load the library. When I, for example, add a new class to the library, or change the code of an existing class, reflecting on the library again without restarting the Dart process does not pick up the changes. Only when I stop the process and restart it do the changes become visible.
So: is there a way to scrub the mirror system and let it load the library and its classes again, within the same Dart process?
I think isolates are a good fit for this requirement.
I haven't used them much myself yet, but as far as I know you can load and unload them dynamically.
The documentation is not very extensive yet.
A few things I found:
https://api.dartlang.org/apidocs/channels/stable/dartdoc-viewer/dart-isolate
Recent documentation about Dart Isolates
https://www.youtube.com/watch?v=TQJ1qnrbTwk
https://www.youtube.com/watch?v=4GlK-Ln7HAc
So, yes, it is possible in Dart to dynamically (re-)load Dart files at runtime. Every new isolate has its own MirrorSystem. If you want to reload a Dart file, you must create a new isolate and use the MirrorSystem of that isolate to iterate over the contents of the libraries known to it. If your Dart file is part of a library known to the MirrorSystem, all functions and classes contained in that file are loaded and reflected anew.
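A minimal sketch of this approach (file and library names are made up, and plugin.dart is assumed to declare "library plugin;"):

// main.dart: spawn a fresh isolate; respawn after plugin.dart changes.
import 'dart:isolate';

main() async {
  var port = new ReceivePort();
  await Isolate.spawnUri(Uri.parse('loader.dart'), [], port.sendPort);
  print(await port.first);  // declarations seen by the fresh MirrorSystem
}

// loader.dart (runs in the new isolate, so its MirrorSystem is new too):
//   import 'dart:isolate';
//   import 'dart:mirrors';
//   import 'plugin.dart';  // the library to (re)load
//
//   main(List<String> args, SendPort reply) {
//     var lib = currentMirrorSystem().findLibrary(#plugin);
//     reply.send(lib.declarations.keys.map(MirrorSystem.getName).toList());
//   }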
This solution has some drawbacks. First, it is quite heavyweight: programming the inter-isolate communication is cumbersome, and it remains to be seen whether memory consumption increases with each reload. Second, the solution is not really dynamic: isolates only load libraries that are "known" at design time. They must be directly or indirectly imported into the Dart file that contains the static function which is called when the isolate is created.
Two ideas how the situation could be improved:
1. It would help if the spawn and spawnUri methods of Isolate could take a list of additional libraries as a parameter, to be included in the MirrorSystem of the isolate.
2. The classloaders in Java are independent of processes and threads. They just load classes. Why isn't this possible in Dart?

Way to write code "in the debugger" in Lua?

I just played around a bit with Lua and tried the Koneki Eclipse plugin, which is quite nice. The problem is that when I make changes in a function I'm currently debugging, the changes do not become effective when I save them, so I'm forced to restart the application. It would be so nice if I could make changes in the debugger and have them take effect on the fly, as for example in Smalltalk, or to some extent as with hot code replacement in Java. Does anybody have a clue whether this is possible?
It is possible to some degree with some limitations. I've been developing an IDE/debugger that provides this functionality. It gives you access to a remote console to execute commands in the context/environment of your running application. The IDE also supports live coding, which reloads modified code as you make changes to it; see demos here.
The main limitation is that you can't modify a currently running function (at least without changes to Lua VM). This means that the effect of your changes to the currently running function will only be seen after you exit and re-enter that function. It works well for environments that call the same function repeatedly (for example a game engine calling draw), but may not work in your case.
Another challenge is dealing with upvalues (values that are created outside of your function and are referenced inside it). There are methods to "read" the current upvalues and re-create them when the (new) function is created, but it requires some code analysis to find which functions will be recreated, query them for upvalues, get the current values, and then create a new environment with those upvalues and assign the proper values to them. My current implementation doesn't do this, which means you need to use global variables as a workaround.
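A minimal sketch of the reload step itself, without the upvalue handling described above ('mymodule' is a made-up name):

local function reload(name)
  package.loaded[name] = nil  -- drop the cached module table
  return require(name)        -- recompile and rerun the module's file
end

-- Changes take effect the next time the reloaded functions are entered:
-- mymodule = reload('mymodule')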
There was also a relevant discussion just the other day on the Lua mailing list.

Delphi logging with multiple sinks and delayed classification?

Imagine I want to parse a binary blob of data. If all goes okay, then all the log entries are INFO, and by default the user does not even see them. If there is an error, then the user is presented with the error and can view the log to see the exact reason (I don't like programs that just say "the file is invalid, for some reason; you do not want to know it").
Probably most log libraries are aimed at quickly loading, classifying and keeping many, many log lines per second; which is questionable by itself, as there is no comfortable lazy evaluation and closures in Delphi. Envy Scala :-)
However, that requires every line to be pre-classified.
Imagine this hypothetical flow:
1. Got object FOO [ok]
1.1. found property BAR [ok]
1.1.1. parsed data for BAR [ok]
1.2. found property BAZ [ok]
1.2.1. successfully parsed data for BAZ [ok]
1.2.2. matching data: checked dependency between BAR and BAZ [fail]
...
So, what would the desired features be?
1) Nested logging (indenting, subordination) is desired.
Something like highlighted in TraceTool - see TraceNode.Send Method at http://www.codeproject.com/KB/trace/tracetool.aspx#premain0
2) Lines 1, 1.1, 1.1.1, 1.2 and 1.2.1 are sent as they happen to an info sink (TMemo, OutputDebugString, EventLog and so on), so the user can see and report at least which steps completed before the error.
3) Lines 1, 1.2 and 1.2.2 are retroactively marked as error (or warning, or whatever), inheriting from the most specific line. Obviously, warning supersedes info, error supersedes warning and info, etc.
4) 1 + 1.2 + 1.2.2 can easily be combined, like with LogMessage('1.2.2').FullText, to be shown to the user or converted to an Exception, carrying the full story to a human.
4.1) Optionally, with the relevant setup, it would not only be converted to an Exception, but the latter would even be auto-raised. This would probably require some kind of context with a supplied exception class or a supplied exception-constructing callback.
5) Multi-sink: info can just be appended to a collapsible panel with a TMemo on the main form or the currently active form. The error state could additionally open such a panel, or prompt the user to do so. At the same time, some file or network server could, for example, receive warning- and error-grade messages but not info-grade ones.
6) Extra associated data might be nice too. Say, if it were rendered with a TreeView rather than a TMemo, it could have a "1.1.1. parsed data for BAR [ok]" item with a mouse tooltip like "Foo's dimensions are told to be 2x4x3.2 metres".
Being a free library is nice, especially free with sources. Tracking down and fixing a bug relying solely on DCUs is sometimes much harder.
Not requiring an extra executable: it could offer an extra, more advanced viewer, but one should not be required for basic functionality.
Not being stalled/abandoned.
The ability to work and show at least something before the GUI is initialized would be nice too. Class constructors are cool, yet they are executed as part of unit initialization, when the VCL is not booted yet. If any assertion/exception is thrown from there, the user would only see Runtime error 217, with all the detail lost. At least OutputDebugString can be used, if nothing more...
Stack tracing is not required; if needed I can do it and add it with the Jedi CodeLib. But it is rarely needed.
External configuration is not required. It might be good for a big application to reconfigure on the fly, but to me simplicity is much more important, and configuration in code, by calling constructors and such, is what really matters. An extra XML file, like for Log4J, would only make things more fragile and complex.
I glanced at a few of the libraries mentioned here.
TraceTool has a great presentation; the link is above. Yet it has no info grade, only 3 predefined grades (Debug/Error/Warning) and nothing more; maybe Debug would suit as an Info replacement, though... It seems like a black box, only saving data into its own file and using an external tool to view it, not giving the stream of events back to me. But its message nesting and call chaining seem cool. Also cool is attaching objects/collections to messages.
Log4D and Log4Delphi seem to be in stasis, with last releases in 2007 and 2009 and Delphi 7 as the last targeted version. They lack documentation (probably okay for a log4j guy, but not for me :-). Log4Delphi even has a test folder, but those tests do not compile in Delphi XE2-Upd1. A pity: in another thread here, Log4Delphi was hailed for how simple it is to create a custom log appender (sink)...
BTW, the very fact that Log4J alone was forked into two independent Delphi ports leaves open the question of which one is better, and suggests that both lack something, if they had to remain split.
The mORMot part is hardly separable from the rest of the library. The demo application required UAC escalation to use its embedded SQLite3 engine, and froze (no window opened, yet the process never exited normally) when Admin rights were refused. Another demo just started an infinite stream of AV exceptions, trying to unwind the stack. So it is probably not ready yet for the latest Delphi. Its list of message grades is extensive too, maybe even a bit too many.
Thank you.
mORMot is stable, even with the latest XE2 version of Delphi.
What you tried starting were the regression tests. Among its 6,000,000 tests, it includes the HTTP/1.1 client-server part of the ORM. Without Admin rights, the http.sys server is not able to register the URI, so you got errors, which makes perfect sense. It's a Vista/Seven restriction, not a mORMot restriction.
The logging part can be used completely separately from the ORM part. Logging is implemented in SynCommons.pas (and SynLZ.pas for the fast compression algorithm used for archival and .map embedding). I have used the TSynLog class without any problem to log existing applications (even Delphi 5 and Delphi 6 applications) that have existed for years. The SQLite3 / ORM classes are implemented in other units.
It supports the nesting of events, with an auto-leave feature, just as you expect. That is, you can write:
procedure TMyClass.MyMethod(const Params: integer);
begin
  TSynLog.Enter;
  // ... my method code
end;
Adding this TSynLog.Enter call is enough to get the event logged with indentation corresponding to the recursion level. IMHO this may meet your requirements. It declares an ISynLog interface on the stack, which will be freed by Delphi at the "end;" code line, so it implements an auto-leave feature. And the exact unit name, method name and source code line number will be written into the log (as MyUnit.TMyClass.MyMethod (123)) if you generated a .map file at compilation (which may be compressed and appended to the .exe, so that your customers' logs will contain the source line numbers). You have methods at the ISynLog interface level to add custom logging, including parameters and custom state (you can log object properties as JSON if you need to, or write your own custom logging data), as sketched below.
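For instance, a sketch of such custom logging (exact signatures may vary between mORMot versions):

procedure TMyClass.MyMethod(const Params: integer);
var
  log: ISynLog;
begin
  log := TSynLog.Enter(self, 'MyMethod');
  // a custom Info-grade event, with mORMot's %-style formatting
  log.Log(sllInfo, 'processing % items', [Params]);
  // ... method code; the matching Leave is logged when log goes out of scope
end;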
The exact timing of each method is tracked, so you are able to profile your application from the data supplied by your customer.
If you think the logs are too verbose, you have several levels of logging, to be customized on the client side. See the blog articles and the corresponding part of the framework documentation (in the SynCommons part). You have, for instance, "Fail" events and some custom kinds of events. And it is totally VCL-independent, so you can use it without a GUI, or before any GUI is started.
You also have at hand a log viewer, which allows client-side profiling and a nested Enter/Leave view (if you click on a "Leave" line, you'll go back to the corresponding "Enter").
If this log viewer is not enough, you have its source code to make it fulfill your requirements, and all the needed classes to parse and process the .log file on your own, if you wish. Logs are textual by default, but can be compressed into binary on request, to save disk space (the log viewer is able to read those compressed binary files). Stack tracing and exception interception are both implemented, and can be activated on request.
You could easily add numbering like "1.2.1" to the logs, if you wish to. You've got the whole source code of the logging unit. Feel free to ask any questions in our forum.
Log4D supports nested diagnostic contexts in the TLogNDC class; they can be used to group together all steps that are related to one compound action (instead of a 'session'-based grouping of log events). Multi-sinks are called Appenders in Log4D and Log4Delphi, so you could write a TLogMemoAppender with around twenty-five lines of code (see the sketch below), and use it at the same time as an ODSAppender, a RollingFileAppender, or a SocketAppender, configurable at run time (no external config file required).
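A rough sketch of such an appender (assuming Log4D follows the usual log4j-style appender contract of overriding a protected DoAppend; verify the exact base-class signature in Log4D.pas):

uses
  StdCtrls, Log4D;

type
  TLogMemoAppender = class(TLogCustomAppender)
  public
    Memo: TMemo;  // assigned by the owning form
  protected
    procedure DoAppend(const Message: string); override;
  end;

procedure TLogMemoAppender.DoAppend(const Message: string);
begin
  if Assigned(Memo) then
    Memo.Lines.Add(TrimRight(Message));  // show the formatted event in the GUI
end;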

Check For Updates

I'm developing an application in Lazarus that needs to check whether there is a new version of an XML file on every Form_Create.
How can I do this?
I have used the Synapse library in the past to do this kind of processing. Basically, include httpsend in your uses clause, and then call HttpGetBinary(url, xmlStream) to retrieve a stream containing the resource. I wouldn't do this in OnCreate though, since it can take some time to pull the resource. You're better served by placing this in another thread that can make a Synchronize call back to the form to enable updates, or set an application flag. This is similar to how the Chrome browser displays updates on its about page: a thread is launched when the form is displayed to check whether there are updates, and when the thread completes it updates the GUI. This allows other tasks to occur (such as a small animation, or the ability for the user to close the dialog).
Synapse is not a visual component library, it is a library of blocking functions that wrap around most of the common internet protocols.
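A minimal sketch of that threaded approach (the URL is a placeholder, and the notification body is left to the application):

uses
  Classes, httpsend;

type
  TUpdateCheck = class(TThread)
  protected
    procedure Execute; override;
    procedure Notify;  // runs in the main thread via Synchronize
  end;

procedure TUpdateCheck.Execute;
var
  xml: TMemoryStream;
begin
  xml := TMemoryStream.Create;
  try
    // blocking Synapse call; safe here because we are off the GUI thread
    if HttpGetBinary('http://example.com/version.xml', xml) then
      Synchronize(@Notify);
  finally
    xml.Free;
  end;
end;

procedure TUpdateCheck.Notify;
begin
  // compare versions, update the form or set an application flag here
end;

// started from the form, e.g.: TUpdateCheck.Create(False);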
You'll need to read up on FPC networking; lNet looks especially useful for this task.
