Why use defaults/preferences/*.js vs setting Firefox Add-on defaults in JavaScript?

The more I think about it, the more I wonder why anyone should bother to use the defaults/preferences/*.js file to set defaults, versus setting the defaults in JavaScript.
I am doing work on an older XUL/Overlay-based add-on created by someone else, which actually uses a prefs.js file in addition to setting all the defaults in JavaScript (in case the prefs.js fails for some inexplicable reason?), which essentially turns all the preferences into user-set prefs directly after installation. This confused me at first, as the defaults were showing up as modified (user-set) when looking at the prefs in about:config. Then I realized the code was unconditionally setting some of the defaults (very large strings).
So I realized that not only do I maintain the same prefs in 3 locations (prefs.js, interface and content scripts), but the prefs.js file is largely redundant, which adds maintenance for no reason. This just seems silly, and I am looking for a better way to store prefs and manage them in one location (which is probably why prefs.js should be used exclusively).
Now, I realize this question has the potential to be flagged as "opinion-based", and may or may not have a specific "right" answer. But I think it is a valid question, and I would like to learn more about the pros and cons of using a prefs.js file versus setting all prefs during initialization in shared JS code. Are there any performance concerns, or an objective list of criteria I could use to make this determination? Is it possible that the prefs.js mechanism would ever fail? Is it safe to assume it never will? Was it more prone to failure back in the FF 1.0-3.5 days?

Default preferences:
Can be changed in a new version of the add-on without overriding the value set explicitly by the user.
Do not clutter up the user's profile after the add-on is uninstalled.
Are located in the same place, not scattered throughout the code. It's easy to see what preferences there are. They also appear in about:config, which allows you to have UI for "hidden" prefs - for the sake of power users.
It is a pity they did not make the get*Pref API return the default value in case the user-set value is invalid, and I understand not wanting to maintain the default in two locations. This whole API was "frozen" a long time ago, and will eventually go away with the rest of XPCOM, so it doesn't really matter...
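For illustration, a minimal sketch of how the pieces fit together, assuming a hypothetical pref branch named extensions.myaddon (the branch and pref names are made up):

// defaults/preferences/prefs.js - shipped with the add-on; these are defaults, not user prefs
pref("extensions.myaddon.greeting", "hello");
pref("extensions.myaddon.verbose", false);

// Somewhere in the add-on's chrome code:
Components.utils.import("resource://gre/modules/Services.jsm");

// Returns the default unless the user has overridden it in about:config.
var greeting = Services.prefs.getCharPref("extensions.myaddon.greeting");

// Check whether the user actually changed it (i.e. it shows up as "user set").
var isUserSet = Services.prefs.prefHasUserValue("extensions.myaddon.greeting");

The defaults then live only in the defaults file; the JavaScript merely reads them, so about:config keeps showing them as defaults until the user actually changes something.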

Related

Log only if logger is explicitly turned on (set at a certain level - ignoring root/parents/hierarchy)

I've spent quite a bit more time than necessary trying to figure this out, and came up with zilch! This may not be the most kosher way to do what I need, but in the legacy version I found it easy enough to use in practice. Basically, I have a specially named logger, and I want it to be enabled (i.e. logging diagnostics) only if it's explicitly enabled (i.e. specified in the XML configuration file).
The hierarchical way logger configuration works would cancel this out: if the ROOT logger is set at a lower level than my log lines, it would log this particular logger's lines even when I don't want it to (yes, I know this is not the expected usage).
In legacy log4j, there were two ways to get the "level": getLevel and getEffectiveLevel. The effective one traversed up the hierarchy, and the regular one returned the value set on my particular logger. Using this, I could check the Level, and if it was null (i.e. not explicitly specified in my configuration), I'd programmatically set the level to OFF and disable it.
There doesn't seem to be a way to access that piece of information through the straightforward part of the API. Looking through the object model in the debugger, I've found the following map, which could be checked for whether it contains the name of my logger, but it seems a bit convoluted:
LoggerContext.getContext(false).getConfiguration().getLoggers()
Is there another/simpler way to achieve this? Or should I just take my square peg, and move away from the round hole?! :)

How to handle SAP Kapsel Offline app OData conflicts properly?

I built an app that is able to store OData offline by using the SAP Kapsel plugins.
More or less it's the same as generated by the SAP Web IDE, or similar to the apps in this example: https://blogs.sap.com/2017/01/24/getting-started-with-kapsel-part-10-offline-odatasp13/
Now I am at the point of checking the error resolution potential. I created a sync conflict (changing data on the server after the offline database was stored, then changing something in the app and starting a flush).
As mentioned in the documentation I can see the error in the ErrorArchive and could also see some details. But what I am missing is the information about the "current" data on the server.
In the error details I can only see the data on the device, not the data changed on the server.
For example:
Device loads some names into the offline store
Device is offline
User A changes some names
User B changes one of these names directly online
User A is online again and starts a sync
User A is now informed about the entity that was changed, BUT:
not about the content user B entered
I just see the "offline" data.
Is there a solution to see the "current" and the "offline" one in a kind of compare view?
Please also note that the server communication is done by the Kapsel plugin and not with normal AJAX calls. That could be an alternative, but I am wondering whether there is a smarter way supported by the API.
Meanwhile I figured out how to load the online data (manually).
This can be done by switching the HTTP handler back to the normal one:
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
Anyhow, this does not look like a proper solution, and I also have an issue with the conflict log itself: it must be deleted before any refresh can be applied.
I could not find any proper documentation for that. ETag handling is also hardly described in the SAPUI5 and SAP Kapsel documentation.
This question is a really tricky one, due to its implications. I understand that you are simulating a synchronization error due to concurrent modification, and want to know if there is a way for the client to obtain the "current" server state in order to give the user a means to compare the local and server state.
First, let me give you the short answer: No, there is no way for the client to see the current server state "for reference" via the Offline APIs when there are synchronization errors. Doing an online query as outlined above might work, but it certainly is a bad idea.
Now for the longer answer, which explains why this is not necessarily a defect and why I said there are quite some implications to the answer.
Types of Synchronization Errors
We distinguish a number of synchronization errors, and in this context, we are clearly dealing with business-related issues. There are two subtypes here: Those that the user can correct, e.g. validation errors, and those that are issues in the business process itself.
If the user violates the input range, e.g. by putting a negative price for a product, the server would reply with the corresponding message: "-1 is not a valid input value for 'Price'". You, as a developer, can display such messages to the user from the error archive, and the ensuing fix is indeed a very easy one.
Now when we talk about concurrent modification, things get really, really nasty. In fact, I like to say that in this case there is an issue with the business process, because on one hand, we allow data to get out of sync. On the other hand, the process allows multiple users to manipulate the same piece of information. How all relevant users should now be notified and synchronize is no longer just a technical detail, but in fact a new business process. There just is no way to generically decide how to handle this case. In most cases, it would involve back-office experts who need to decide how the changes should be merged.
A Better Solution
Angstrom pointed out that there is no way to manipulate ETags on the client side, and you should in fact not even think about it. ETags work like version numbers in optimistic locking scenarios, and changing the ETag basically means "Just overwrite what's on the server". This is a no-go in serious scenarios.
An acceptable workaround would be the following:
Make sure the server returns verbose error messages so that the user can see what happened and what caused the conflict (a sketch of reading them from the ErrorArchive follows this list).
If that does not help, refresh the data. This will get you an updated ETag, and merge the local changes into the "current" server state, but only locally. "Merging" really means that local changes always overwrite remote changes.
The user now has another opportunity to review the data and can submit it again.
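For the first point, a rough sketch of surfacing those messages from the ErrorArchive, assuming a UI5 ODataModel (oModel) pointed at the offline store; the property names follow the documented ErrorArchive entity but should be verified against your SMP/HCPms version:

// Read the synchronization errors collected during the last flush.
oModel.read("/ErrorArchive", {
  success: function (oData) {
    oData.results.forEach(function (oError) {
      // Message/RequestBody/RequestURL/HTTPStatusCode are assumed ErrorArchive properties.
      console.log("HTTP " + oError.HTTPStatusCode + " for " + oError.RequestURL);
      console.log("Server message: " + oError.Message);
      console.log("Payload sent from the device: " + oError.RequestBody);
    });
  },
  error: function (oErr) {
    console.log("Could not read the ErrorArchive", oErr);
  }
});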
A Good Solution
Better is not necessarily good, so here is what you should really do: never let concurrent modification happen, because it is really expensive to handle. This implies that it is not the developer who should address this issue; the business needs to change the process.
The right question to ask is, "When you replicate data in a distributed system, why do you allow it to be modified concurrently at all?" Typically stakeholders will not like this kind of question, and the appropriate reaction is to work out a conflict resolution process together with them. Only then will they realize how expensive fixing that kind of desynchronization is, and more often than not they will see that adjusting the process is far cheaper than insisting on yet another back-office process to fix the issues it causes. Even if they insist that there is a need for this concurrent modification, they will now understand that it is not your task to sort this out and that they need to invest in a conflict resolution process.
TL;DR
There is no way to compare the client state with the current server state on the client, but you can do a refresh to retain the local changes and get an updated ETag. The real solution, however, is to rework the business process, because this is no longer a purely technical issue.
The default solution is that SMP or HCPms detects errors via ETags. On the client side there is no API to manipulate ETags in case of conflicts. A potential solution to implement a kind of diff view on the device would work like this (a sketch follows the list):
Show the errors
Cache the errors (maybe only in memory?)
Delete the errors
Do a refresh of the database
Build a diff view from the current data and the cached errors
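A minimal sketch of that flow, again assuming a UI5 ODataModel (oModel) bound to the offline store and a Kapsel offline store handle (store) with a refresh method; buildDiffView is a placeholder for your own compare view:

var cachedErrors = [];

// 1 + 2: show and cache the errors from the ErrorArchive (in memory only).
oModel.read("/ErrorArchive", {
  success: function (oData) {
    cachedErrors = oData.results;

    // 3: delete the error entries so the store accepts a refresh again.
    // The key syntax is an assumption; check the ErrorArchive metadata for the exact key.
    cachedErrors.forEach(function (oError) {
      oModel.remove("/ErrorArchive(" + oError.RequestID + ")");
    });

    // 4: refresh the offline store to pull the current server state.
    store.refresh(function () {
      // 5: compare the cached (failed) payloads with the refreshed data.
      buildDiffView(cachedErrors, oModel);
    }, function (oErr) {
      console.log("Refresh failed", oErr);
    });
  }
});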
The idea with
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
could also work, but it could be very tricky and may introduce side effects.
Maybe some requests are triggered against the "wrong" backend.
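If you do go down that route, keep the switch as narrowly scoped as possible. A sketch only, assuming the plain datajs OData.read call that the Kapsel plugin normally redirects to the offline store; sServiceUrl and showServerVersion are placeholders:

// Temporarily route requests to the online backend.
sap.OData.removeHttpClient();

OData.read({ requestUri: sServiceUrl + "/Names('123')" }, // hypothetical entity URL
  function (oData) {
    // Use the "current" server state, e.g. for a compare view.
    showServerVersion(oData);

    // Re-apply the offline handler as soon as the read is done,
    // so no other request accidentally hits the "wrong" backend.
    sap.OData.applyHttpClient();
  },
  function (oError) {
    sap.OData.applyHttpClient();
    console.log("Online read failed", oError);
  });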

Loading additional javascript code from firefox add-on content script

I'm writing something that I want to release as both a chrome extension and a firefox add-on.
The chrome extension is already available on github. I've factored my code into several modules using a module load format similar to what requirejs uses; I did this to separate the parts that were chrome-specific from the parts I hoped to re-use in the firefox add-on.
Specifically, I split up not only the backend work, but also the content scripts.
In chrome, when my content script needs to load another module, it sends a message to the background page saying "please load this module"; the script on the background page then does:
function onLoadLibrary(request, sender, sendResponse) {
  var allFrames = request.allFrames || false;
  // Inject the requested module into the tab (and optionally all its frames) that asked for it.
  chrome.tabs.executeScript(
    sender.tab.id,
    {file: request.library.toLowerCase() + '.js', allFrames: allFrames},
    function () {
      sendResponse({});
    });
  // Returning true keeps the message channel open for the asynchronous sendResponse.
  return true;
}
That is, I'm able to load additional javascript into the same sandbox as the content script that asked for that code. This is necessary to make module dependencies work.
In firefox, I can't figure out how to do this. I'll attach my initial content scripts through pageMods and by calling tab.attach from the "ready" event of tabs. That seems straightforward, but then if that content script needs to load more code I can't see how to do it.
There doesn't seem to be a way to access the sandbox my content script is running in from the main.js file so that I might inject more code into it. Even if I somehow kept a reference to the relevant tab instance (which only lets me inject into the top frame in any case), it appears that each new call to tab.attach puts injected code into a new sandbox. The tab object that's passed to my ready event handler isn't a real XUL tab that I could pass to require("tabs/util").getBrowserForTab; if it were, then I think I could follow through enough of the SDK code to create my own sandbox, though I'd worry about leaving accidental memory leaks behind.
I considered passing the code back to the content script through an "eval-this-code" message, but I really don't want to use eval in my extension because of security concerns; I also worry that using eval would make it difficult or impossible to get my Firefox add-on approved on AMO. (Also, how would that interact with sites that have a Content Security Policy?)
The use of traits to define the add-on API seems to close off access to objects such that I can't reach inside a Worker to get a reference to the sandbox my content script is executing in. At this point, it appears that I'd need to include nearly a full copy of the SDK in my add-on just to expose one method on WorkerSandbox.
Note: I'm using the Add-on SDK (the project formerly known as Jetpack). I'm willing to use Components.utils.import if someone can tell me how to use it from inside an Add-on SDK-managed content script.
Content-scripts do not expose a public API to attach more scripts to a content-script sandbox after it was initialized. You should probably file an enhancement bug and state your use case, if there isn't one filed already (search first), and/or even come up with some patches yourself.
In cases where there is a DOM that your add-on owns (e.g. a widget), it's just a matter of attaching another script tag.
For things like page-mods where there is no DOM you own, you're left with a couple of options, none of which is really satisfying. As you already found out yourself, the use of traits prohibits you from accessing "private" properties/methods.
Fork page-mod/tab/content-worker to provide the functionality you need. That would require creating your own copies of the modules and exposing the necessary APIs to inject scripts into existing workers.
This has a steep learning curve (but given that you already figured out details such as traits, it should be doable for you), but more importantly it is hard to maintain, as you need to make sure you keep up with upstream. And AMO editors will not like you very much for it :p
On the plus side, you could try to get your stuff committed upstream, fixing this problem for everybody and become a hero to many authors using the Add-on SDK.
The eval method you propose. Not only is eval a major source of security issues, but it may also be a performance killer, as right now, IIRC, evaled code will not use the JIT. And, of course, it will make us AMO editors cringe, even if used "correctly".
Do not use lazy loading at all, and specify all content scripts from the very beginning. This is what add-ons usually do (I'm almost inclined to say "always"). However, this conflicts with your current design and, depending on your add-on, may impose a serious performance penalty for loading stuff you didn't really need in the end (see the sketch at the end of this answer).
You could use the require mechanism to keep most scripts as SDK modules rather than content scripts. This is not always feasible, of course, e.g. when dealing with code that would normally modify the DOM in your content script, but it might work for some other stuff.
Replace page-mod, etc with your own Greasemonkey-like, enhanced API. This means lots of work, it is error-prone, security-sensitive and has to be maintained. So, it's not really a viable solution, IMO...
Components.utils.import does not help you. It isn't available to content-scripts anyway.
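For completeness, a minimal sketch of the "no lazy loading" option with the SDK's page-mod, listing every module up front instead of loading it on demand (the file names are made up):

var self = require("self");
var pageMod = require("page-mod");

pageMod.PageMod({
  include: "*",
  contentScriptWhen: "ready",
  // Every module the content code might need, in dependency order.
  contentScriptFile: [
    self.data.url("module-loader.js"),
    self.data.url("dom-utils.js"),
    self.data.url("content-main.js")
  ],
  onAttach: function (worker) {
    // Messaging still works the same way; only the lazy loading is gone.
    worker.port.on("log", function (msg) {
      console.log(msg);
    });
  }
});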

Understanding file mapping

I am trying to understand mmap and got the following link to read:
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
I understand the text in general and it makes sense to me. But at the end there is a paragraph which I don't really understand, or which doesn't fit my understanding.
The read-only page table entries shown above do not mean the mapping is read only, they’re merely a kernel trick to share physical memory until the last possible moment. You can see how ‘private’ is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it’s what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that’s it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
The following two lines don't make sense to me.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
It is private. So it can't see changes by others!
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I don't know what the author means by this. Is there a flag "MAP_READ_ONLY"? Until a write occurs, every pointer from the program's virtual pages to the page-table entries in the page cache is read-only.
Can you help me understand these two lines?
Thanks
Update
It seems I got it, with some help.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
Although the mapping is private, the virtual page really can see changes made by others, until the program itself modifies the page. The modification is then private and only visible in the virtual pages of the writing program.
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I'm told that pages themselves can also have permissions (read/write/execute).
Tell me if I'm wrong.
This fragment:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
is telling you that the kernel cheats a little bit in the name of optimization. Even though you've asked for a private mapping, the kernel will actually give you a shared one at first. Then, if you write to the page, it becomes private.
Observe that this "cheating" doesn't matter (doesn't make any difference) if all processes accessing the file are doing so with MAP_PRIVATE, because no actual changes to the file will ever occur in that case. Different processes' mappings will simply be upgraded from "fake cheating MAP_PRIVATE" to true "MAP_PRIVATE" at different times, according to whenever each process first writes to the file. This is probably a common scenario. It's only if the file is being concurrently updated by other means (MAP_SHARED with PROT_WRITE, or regular non-mmap I/O operations) that it makes a difference.
I'm told that pages itself can also have permissions (read/write/execute).
Sure, they can. You have to ask for the permissions you want when you initially map the file, in fact: the third argument to mmap is either PROT_NONE or a combination of PROT_READ, PROT_WRITE, and PROT_EXEC.

How to log user activity with time spent and application name using C# / .NET 2.0?

I am creating a desktop application in which I want to track user activity on the system, e.g. that the user opened Microsoft Excel with a particular file name and worked on it for a certain amount of time.
I want to create an XML file to maintain that log.
Please provide me some help with that.
This feels like one of those questions where you have to figure out what is meant by the question itself. Taken at face value, it sounds like you want to monitor how long a user spends in any process running in their session; however, it may be that you only really want to know whether, and for how long, a user spends time in a specific subset of all running processes.
Since I'm not sure which of these is the correct assumption to make, I will address both as best I can.
Regardless of whether you are monitoring one or all processes, you need to know what processes are running when you start up, and you need to be notified when a new process is created. The first of these requirements can be met using the GetProcesses() method of the System.Diagnostics.Process class; the second is a tad more tricky.
One option for checking whether new processes exist is to call GetProcesses after a specified interval (polling) and determine whether the list of processes has changed. While you can do this, it may be very expensive in terms of system resources, especially if done too frequently.
Another option is to look for some mechanism that allows you to register to be notified asynchronously of the creation of a new process. I don't believe such a thing exists within the .NET Framework 2.0, but it is likely to exist as part of the Win32 API; unfortunately I can't give you a specific function name because I don't know what it is.
Finally, however you do it, I recommend being as specific as you can about the notifications you choose to subscribe to; the fewer of them there are, the fewer resources are used generating and processing them.
Once you know what processes are running and which you are interested in, you will need to determine when focus changes to a new process of interest so that you can time how long the user spends actually using the application. For this you can use the GetForegroundWindow function to get the window handle of the currently focused window.
As far as logging to an XML file goes, you can either use an external library such as log4net, as suggested by pranay's answer, or you can build the log file using the XmlTextWriter or XmlDocument classes in the System.Xml namespace.
