I am developing a Firefox extension using the Add-on SDK. The extension opens a tab and loads a web page that is refreshed every N minutes. This web page is processed by a content script. The problem is that memory grows each time the content script is executed. Do you know why? Is there any way to trigger garbage collection in order to keep memory consumption stable?
EDIT:
The web page contains bank account details, and the content script looks for new movements on it. It is a framed page, and one of its frames (the one containing the movements list) is reloaded to see if any change occurred. I use jQuery to process the list.
When new movements appear, they are sent to the extension through port, and the extension saves them on a remote web server using Request.
I am trying to check these instructions from Mozilla:
https://developer.mozilla.org/en/XUL_School/JavaScript_Object_Management
https://developer.mozilla.org/en/Zombie_Compartments#Proactive_checking_of_add-ons
It depends on what you are using in your add-on... If you're using an observer, for example, you need to unregister that observer so it won't leak... Can you give more details about your add-on? The code, or exactly what it does?
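For example, with the SDK's sdk/system/events module (a minimal sketch; the notification topic and listener body are placeholders, not taken from your question), a listener registered in main() should be removed again in onUnload() so it cannot keep its closure alive:
// main.js of an Add-on SDK extension
var events = require("sdk/system/events");

function listener(event) {
  // handle the notification; avoid keeping references to event.subject
  console.log("observed: " + event.type);
}

exports.main = function () {
  events.on("http-on-examine-response", listener, true);
};

exports.onUnload = function () {
  // unregister, otherwise the listener (and everything it closes over) leaks
  events.off("http-on-examine-response", listener);
};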
Maybe you're not declaring variables and are using globals all the time; also try clearing each variable after you use it.
Are you using jQuery?
I have been extensively using a custom protocol on all our internal apps to open any type of document (CAD, CAM, PDF, etc.), to open File Explorer and select a specific file, and to run other applications.
Years ago I defined a myprotocol protocol that executes C:\Windows\System32\wscript.exe, passing the name of my VBScript and whatever arguments each request has. The first argument passed to the script describes the type of action (OpenDocument, ShowFileInFileExplorer, ExportBOM, etc.); the following arguments are passed to the action.
Everything worked well until last year, when wscript.exe stopped working (see here for details). I fixed that problem by copying it to wscript2.exe. Creating a copy is now a step in the standard configuration of all our computers and using wscript2.exe is now the official configuration of our custom protocol. (Our anti-virus customer support couldn't find anything that interacts with wscript.exe).
Today, after building a new computer, we found out that:
Firefox doesn't see wscript2.exe. If I click on a custom protocol link, then click on the browse button and open the folder, I only see a small subset of .exe files, which includes wscript.exe but doesn't include wscript2.exe (I don't know how recent this problem is because I don't personally use Firefox).
Firefox sees wscript.exe, but it still doesn't work (same behavior as described in my previous post linked above).
Chrome works with wscript2.exe, but now it always asks for confirmation. According to this article this seems to be the new approach, and things could change again soon. Clicking on a confirmation box every time is a big no-no with my users. It would slow down many workflows that require quickly clicking hundreds of links on a page to, for example, watch a CAD application zoom to one geometry in a large drawing.
I already fixed one problem last year, I am dealing with another one now, and reading that article scares me and makes me think that more problems will arise soon.
So here is the question: is there an alternative to using custom protocols?
I am not working on a web app for public consumption. My custom protocol requires the VBScript file, the applications that the script uses and tons of network shared folders. They are only used in our internal network and the computers that use them are manually configured.
First of all, that's super risky even if it's on an internal network only. Unless computers/users/browsers are locked out of the internet, it is possible that someone guesses or finds out your protocol's name, sends a link to someone in your company, and causes a lot of trouble (possibly a loss too).
Anyway...
Since you control the software on all of the computers, you could add a mini-server on every machine, listening on localhost only, that simply calls your script. Then define a host like secret.myprotocol that points to that server, e.g., localhost:1234.
Just to lessen potential problems a bit, the local server would use HTTPS only, with a proper certificate, and with HSTS and HPKP set to a very long time (since you control the software, you can refresh those when needed). The last two are just in case someone tries to set up the same domain and, for whatever reason, the host override doesn't work and the user ends up calling a hostile server.
So, links would have to change from myprotocol://whatever to https://secret.myprotocol/whatever.
It does introduce a new attack surface (the "mini-server"), but it should be easy enough to implement in a way that minimizes the size of that surface :). The "mini-server" does not even have to be a real web server; a simple script that listens on a socket and calls wscript.exe would do (unless you need to pass more info to it).
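A minimal sketch of such a listener in Node.js (the port, the script path, and passing the action as URL segments are my assumptions, not part of your setup; a production version would add the HTTPS/HSTS hardening described above, plus input validation):
// mini-server.js - listens on localhost only and forwards each request to the VBScript
const http = require("http");
const { execFile } = require("child_process");

http.createServer((req, res) => {
  // e.g. https://secret.myprotocol/OpenDocument/12345 -> ["OpenDocument", "12345"]
  const args = decodeURIComponent(req.url).split("/").filter(Boolean);
  execFile("C:\\Windows\\System32\\wscript.exe",
           ["C:\\scripts\\myprotocol.vbs"].concat(args),
           (err) => {
    res.statusCode = err ? 500 : 200;
    res.end(err ? "failed" : "ok");
  });
}).listen(1234, "127.0.0.1");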
A real server means more code that can have bugs in it, but it also lets you add more things, for example a "pass-through" page that shows "Opening document X in 3 seconds..." and a "Cancel" button.
It could also require a session login of some kind (just to be sure it's the user who requests the action, and not something else).
The title of this blog post says it all: Browser Architecture: Web-to-App Communication Overview.
It describes a list of Web-to-App Communication techniques and links to dedicated posts for some of them.
The first in the list is Application Protocols, which I have been using for years already, and it started to crumble in the last year or so (hence my question).
The fifth is Local Web Server, which is the one described by ahwayakchih.
UPDATE (this follows the update on the blog post mentioned above)
Apparently I wasn't the only one who thought this change in behavior was a regression, so a workaround has been issued: the old behavior (showing a checkbox that allows remembering the answer) can be restored by adding these keys to the registry:
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Chromium]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
I need some help with Umbraco.
Let's say that I have an umbraco grid with a custom editor, just like the one in this tutorial: https://our.umbraco.com/documentation/Getting-Started/Backoffice/Property-Editors/Built-in-Property-Editors/Grid-Layout/build-your-own-editor
OK, so I wrote this editor to build a gallery of items with image/title. I get the item list from an API call made by an Angular service, and this works fine when I publish the page by hand. What I want is to automatically update this gallery with new items when they are available, so my idea was to make a timed AJAX call, say every hour, to update the items. But sadly this doesn't work; I suppose the call is made, but the list isn't updated.
Any suggestions? Thanks
You need to handle this differently. Right now it sounds like you have an implementation that works when you browse to this node in the backoffice and the browser makes the API calls through Angular. This all happens in your UI, and when you manually hit save/publish, the data in the UI gets saved. Keep in mind that this is basically your browser doing the "work", and this (and all other Angular code) will of course only ever run while your browser is open, in the backoffice, viewing this node.
What you want to do is to have this run automatically (and preferably in some sort of background task) to ensure that you do not really have to open up the backoffice for this to actually be automatically updated over time.
You need to create some sort of background job running on the server side instead. This would have to be done in C#, and I would recommend looking into the Hangfire or Quartz frameworks to handle the scheduling and to make sure the job runs.
This job/task should make the external API calls in C# and transform the result into the same format you save when you save data from a manual update. Then fetch the content nodes you need to update using the ContentService API and update the specific property values on those nodes. When this is done, make sure the changes are saved and the node is republished with its updated data. All of this is done through the ContentService.
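A minimal sketch of such a job in C# (the node id, property alias, and API URL are placeholders I made up, and the ContentService calls follow Umbraco 8-style APIs, so adjust them to your version):
// Recurring job that refreshes the gallery property from the external API.
using System.Net.Http;
using System.Threading.Tasks;
using Hangfire;
using Umbraco.Core.Services;

public class GalleryUpdateJob
{
    private static readonly HttpClient Http = new HttpClient();
    private readonly IContentService _contentService;

    public GalleryUpdateJob(IContentService contentService)
    {
        _contentService = contentService;
    }

    public async Task RunAsync()
    {
        // 1. Call the external API (placeholder URL).
        var json = await Http.GetStringAsync("https://example.com/api/gallery-items");

        // 2. Transform 'json' into the exact format your grid editor saves.
        //    (Omitted here - it depends on what your editor stores.)

        // 3. Update the property and republish the node (1234 is a placeholder id).
        var node = _contentService.GetById(1234);
        node.SetValue("galleryItems", json);
        _contentService.SaveAndPublish(node);
    }
}

// Schedule it once at startup, e.g.:
// RecurringJob.AddOrUpdate<GalleryUpdateJob>(job => job.RunAsync(), Cron.Hourly());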
I have a rather large SAPUI5 application with multiple pages.
If the user navigates through all these pages, they will of course remain in memory.
Now I have the problem that some of these pages have a complex context with several bindings to an ODataModel. That leads to the problem that a .refresh() call on the underlying ODataModel takes some time.
Because all known bindings will be reloaded (including those from pages not currently shown).
Now I am searching for a better solution to refresh the ODataModel.
The refresh must be done because sometimes a client action triggers the server to update multiple pieces of data (in different models!).
Further information (Edit)
I am using multiple ODataModels in my application and they are created in the Component.js (as suggested in the Best practice chapter of the SDK documentation).
Navigating through the pages will increase the cached data in the ODataModel.
Calling .refresh() seems to reload all cached data (whether still used or not).
According to the first reply it is possible to refresh a single binding, but how do I refresh all bindings of a given view/page with multiple models?
Would it be the right way to create separate ODataModel instances for each view and just call .refresh() on those? But in this scenario, too, the locally cached data would grow over time, wouldn't it?
Any ideas welcome :)
You can access the binding of a specific UI control and call refresh on it. This should process just that specific binding.
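For example (a sketch; the control id and aggregation name are placeholders):
// refresh only the list binding of one table instead of the whole model
var oTable = this.getView().byId("myTable");
oTable.getBinding("items").refresh(true); // true forces a reload from the service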
My first hint would be to use the v2 OData model (sap.ui.model.odata.v2.ODataModel), as it uses batch mode by default.
Moreover, when it performs updates, it automatically refreshes all bindings of the entities that were updated, so you should not need to refresh the whole model at all.
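For instance (a sketch; the service URL is a placeholder):
// the v2 model batches requests and refreshes affected bindings after updates
var oModel = new sap.ui.model.odata.v2.ODataModel("/sap/opu/odata/MY_SRV/");
this.getView().setModel(oModel);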
For me it worked to simply re-bind the element in the view, the same way I had originally created the binding.
We had an update problem after another update call had side effects on the information in question, and a refresh on the binding of the element itself did not solve it. I guess there were no local changes to that path in the model, so there was nothing to refresh; but on the server side there were updates the model/cache didn't know about. Rebinding made my day, and only the one necessary call to the service was made.
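Something like this (a sketch; the control id and binding path are placeholders):
// re-bind instead of refresh: forces a fresh read for this element only
var oPanel = this.getView().byId("detailPanel");
oPanel.bindElement({ path: "/Orders('0815')" });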
OK, I'm building a PoC for a mobile application that needs offline capabilities, and I have several questions about whether I'm designing the application correctly and also about what behavior I will get from the cache manifest.
This question is about including URLs of Controller actions in both the CACHE section of the manifest as well as in the NETWORK section.
I believe I've read some conflicting information online about this. On a few sites I read that including the wildcard in the NETWORK section would make the browser try to retrieve everything from the server when it's online, and just use whatever is cached when there is no internet connection.
However, this morning I read the following on Dive Into HTML5: Let's Take This Offline:
The line marked NETWORK: is the beginning of the “online whitelist” section. Resources in this section are never cached and are not available offline. (Attempting to load them while offline will result in an error.)
So, which information is correct? How would the application behave if I added the URL for a controller action in both the CACHE and the NETWORK sections?
I have a very simple and small PoC working so far, and this is what I've observed regarding this question:
I have a controller action that just generates 4 random numbers and sets them on the ViewBag, and the View displays them in a UL.
I'm not using output caching at all. The only caching comes from the manifest file.
Before adding the manifest attribute to my Layout.cshtml's html tag, each time I requested the View I'd get different random numbers, and a breakpoint set on the controller action would be hit.
The first time I requested the URL/View after adding the manifest attribute, the breakpoint on the controller was hit 3 times (as opposed to just once before). This is already weird and I'll post a separate question about it; I'm just writing it here for reference.
After the manifest and the resources are cached (verified by looking at the Console window in Chrome Dev Tools), every time I request the View/URL I get the cached version, and the breakpoint is never hit again.
This behavior makes me believe that whatever is in the CACHE section will override or ignore anything that is in the NETWORK section. But, like I said (and this is the reason I'm asking here), I'm new to working with this, and I'm not sure if this is how it's supposed to work or if I'm missing something or not using it correctly.
Any help is greatly appreciated
Here's the relevant section of the cache.manifest:
CACHE MANIFEST
#V1.0
CACHE:
/
/Content/Site.css
/Content/themes/base/jquery-ui.css
NETWORK:
*
/
FALLBACK:
As it turns out, HTML5 AppCache (manifest caching) does work differently than I expected it to.
Here's a quote from whatwg.org, which explains it nicely:
Offline Web Applications
The application cache feature works best if the application logic is separate from the application and user data, with the logic (markup, scripts, style sheets, images, etc.) listed in the manifest and stored in the application cache, with a finite number of static HTML pages for the application, and with the application and user data stored in Web Storage or a client-side Indexed Database, updated dynamically using Web Sockets, XMLHttpRequest, server-sent events, or some other similar mechanism.
Legacy applications, however, tend to be designed so that the user data and the logic are mixed together in the HTML, with each operation resulting in a new HTML page from the server.
The mixed-content model does not work well with the application cache feature: since the content is cached, it would result in the user always seeing the stale data from the previous time the cache was updated.
While there is no way to make the legacy model work as fast as the separated model, it can at least be retrofitted for offline use using the prefer-online application cache mode. To do so, list all the static resources used by the HTML page you want to have work offline in an application cache manifest, use the manifest attribute to select that manifest from the HTML file, and then add the following line at the bottom of the manifest:
SETTINGS:
prefer-online
NETWORK:
*
So, as it turns out, the application cache is not a good fit for pages with dynamic information that is rendered on the server. whatwg.org calls these types of apps "legacy".
For a natural fit with the application cache, you'd need to have only the display and generic logic in your HTML page and retrieve any dynamic information through AJAX requests.
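For example (a sketch; the endpoint and element id are made up), the cached page stays static and only the data travels over the network:
// the HTML shell comes from the application cache;
// the dynamic numbers are fetched on every load
var xhr = new XMLHttpRequest();
xhr.open("GET", "/Home/RandomNumbers"); // must be reachable through NETWORK: *
xhr.onload = function () {
  var numbers = JSON.parse(xhr.responseText);
  document.getElementById("numbers").textContent = numbers.join(", ");
};
xhr.send();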
Hope this helps.
I have a page with a table that populates data based on a complex query (it takes a lot of time even though I use pagination). I use a BeanItemContainer to load data into the table. Now coming to the problem: I wish to load the data into the table asynchronously (after the complete page has loaded on the user's screen, I want to populate the data). Is there something like an onPageLoad or onPageRender event, or an equivalent, that I can use to achieve this?
Details -
Version - Vaadin 7.0
Component - Table
Data Container - BeanItemContainer.
There are a number of ways to solve this, but you have to decide how you want to approach the problem. Here are two that I know of:
1) Request from the client.
2) Push from the server. (technically still a request from the client)
1 -- A) Request from a Javascript function programmed on the client-side. Attach the component to a layout by using a Javascript Extension or a Javascript component and make requests by calling a server-side API function. You can then update the table within that request and not worry about syncing issues.
1 -- B) Make your own GWT / Vaadin widget (same thing, basically) to do 1A.
2 -- A) Request from a Vaadin component programmed on the "server-side." Make a visible (or invisible) progress indicator that polls the server. Populate the table in a background thread and Vaadin will send the data you've currently gathered. The progress indicator can do this many times and the table will only send the difference to the client-side code (called the delta). Be careful updating client-facing components from background threads; you need to sync on changes to the component to avoid concurrent modification errors.
2 -- B) Add a refresher to the application and follow 2A above without needing a progress indicator.
Either solution involves you explicitly updating the table, either in a background thread (2A, 2B) or in the server method called from the client (1A, 1B), and both are technically client-request based. The Vaadin team is working on adding true 'push'. A sketch of the background-thread approach (2A) follows.
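A minimal sketch of 2A for Vaadin 7.0, to go inside your UI/view class (MyBean, runComplexQuery, and the component references are placeholders; note that UI.access() only arrived in 7.1, so this locks the session manually):
import java.util.List;

import com.vaadin.data.util.BeanItemContainer;
import com.vaadin.server.VaadinSession;
import com.vaadin.ui.Layout;
import com.vaadin.ui.ProgressIndicator;
import com.vaadin.ui.Table;
import com.vaadin.ui.UI;

// Populate the table from a background thread while an indeterminate
// ProgressIndicator keeps the client polling for server-side changes.
public void populateAsync(final UI ui, final Table table, final Layout layout) {
    final ProgressIndicator poller = new ProgressIndicator();
    poller.setPollingInterval(1000); // client checks for changes every second
    poller.setIndeterminate(true);
    layout.addComponent(poller);

    final BeanItemContainer<MyBean> container =
            new BeanItemContainer<MyBean>(MyBean.class);
    table.setContainerDataSource(container); // table renders empty at first

    new Thread(new Runnable() {
        @Override
        public void run() {
            // run the slow query off the UI thread
            final List<MyBean> rows = runComplexQuery();

            // never touch components without holding the session lock
            VaadinSession session = ui.getSession();
            session.lock();
            try {
                container.addAll(rows);
                poller.setVisible(false); // stop polling once the data is in
            } finally {
                session.unlock();
            }
        }
    }).start();
}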
The Vaadin way to solve this problem is to use a lazy loading Container. There are several lazy loading containers available for Vaadin: SQLContainer in core, LazyQueryContainer and JPAContainer.