I am developing a Firefox addon named gradientoo. I use a colorpicker that generates color values which change the colors of an element on the page. To achieve this, the addon script and content script communicate with each other using "port". I have noticed a lag between the color values changing on the colorpicker and the values reaching the page element. Is there a faster way to communicate with page content from an addon? An iframe is one idea, as discussed here. However, even an iframe like this would need to use "port". Any ideas about a faster communication mechanism with a high data transfer rate?
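For reference, the port round trip in question looks roughly like this (a minimal sketch; the file names, event name, and element id are my own):

// main.js (addon script), assuming a page-mod worker named `worker`
worker.port.emit("color-changed", { color: "#ff8800" });

// content-script.js
self.port.on("color-changed", function (msg) {
    document.getElementById("target").style.backgroundColor = msg.color;
});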
I have a set of iPads (3rd gen) running iOS 9.3.5, the highest they support. I am attempting to replace the functionality of a custom iOS app with an equivalent web application. The site is built in Angular 11, targeting es2015, and I've declared iOS > 9 in the browser support file. The majority of the site performs exactly as I would expect on such old devices. The linchpin of the whole thing is a third-party form, served up in an iframe, that needs to be filled out, and it performs so, so horribly.
Here is the code of component HTML with the iframe (URL changed for privacy):
<div class="row header"><a class="restart" (click)='restart()'>Start Over</a><a routerLink='/welcome'><i class="fa fa-times close-x"></i></a></div>
<iframe src="https://notarealsite.com" (load)="waiverLoaded()"></iframe>
The waiverLoaded function is a simple boolean flip to detect the embedded page reloading upon completion so that I can route to a different component in my app. The performance is unchanged if the iframe is the only element on the page. Other pages that play nice when embedded do not suffer a performance drop like the waiver does, so I have to assume it's something in the code of the waiver page.
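Roughly, the waiverLoaded handler described above looks like this (a sketch; the flag and route names are my own):

waiverLoaded() {
    // the first (load) is the blank form; a second (load) means the embedded
    // page reloaded after submission, so route onward
    if (this.formSubmitted) {
        this.router.navigate(['/next-step']); // placeholder route
    } else {
        this.formSubmitted = true;
    }
}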
The original custom iOS app uses a webviewer to load the page, and it loads in 2 seconds or less. The same performance can be seen by visiting the page directly in Safari. In both instances the responsiveness of the dynamic form is reasonable given the age of the devices. When loading the page via an iframe, the performance tanks. We're talking 30-45 seconds of loading, and sometimes up to a full second of delay between tapping a button or field and the form reacting.
I have attached the device to my computer and used the remote inspector in macOS Safari to view the timeline. The majority of the initial load time is spent on the several 1-2 KB image files used by the form, like the scroll bar and some icons. Usually one of these items ends up taking 20-25 seconds all to itself. When interacting with the form, there's an XML resource that appears in the timeline on every field change or button press, and that is what takes around a second or two before the page is up to date again.
I'm not well versed in troubleshooting web application performance like this, and this is my first real Angular application that I've built, so I might be missing something painfully obvious. What can contribute to such a difference in performance between a page being visited directly, vs being placed in an iframe?
Some additional info:
The performance is the same between running locally via npm serve or built and deployed to my local server.
I have tried hosting it via both HTTP and HTTPS in case it was some security hang-up, as the form is only delivered via HTTPS. However, I only have a self-signed cert at the moment.
I have seen no discernible difference in load times between the two methods on my other devices, including an iPad Air running iOS 12.5.2.
There is a warning in the console about jQuery support on iOS 9.3.5 being outdated, but it occurs in both scenarios.
Additional Troubleshooting
I've tried to look into some of the suggestions I've received. I've checked the timings and headers on the slow loading elements in Safari in the following configurations:
In the iframe on my app on the iPad.
Visited directly on the iPad.
In the iframe on my app on my MacBook Pro.
Visited directly on my MacBook Pro.
On my laptop the performance is nearly identical in both configurations. On the iPad, the actual load times of all the elements are within a millisecond or two of their times on my laptop. What ends up being notably different is the latency between request and response for the elements.
The latency when visiting directly on the iPad is already higher than on my laptop, and when loading through the iframe all the timings increase fairly proportionally: a time of 750 ms jumps to 4 s; a time of 6 s jumps to 30 s.
Lastly, a real oddity to me: the image with the super-high latency for some reason results in a 302 to Cloudflare, but only on the iPad. It doesn't matter whether it's through the iframe or direct. Both devices are using the same WiFi network. I can't think of a reason for the redirect to happen unless the destination site is handling the iPad differently somehow. Changing the user agent in Safari on my laptop to match the iPad doesn't result in the same redirect.
So I've confirmed that even outside of the iframe the site in question behaves differently on iOS 9.3.5 than on newer versions, and compared to my computers.
Please post your code so we can diagnose and help. In general there is more overhead for an iframe; my suspicion is the jQuery handler or the cookie policy, but to be safe I have also listed other items to check.
It looks like you already checked for unsecured elements, which used to be an issue. Still, double-check in both the network graph and the individual calls that the iframe has no dependencies or URL calls to unsecured elements, or it won't display; HTTPS used to make multiple round trips when a page mixed secure and non-secure content, and there's an open ticket for a new TLS issue.
Once you have ensured everything is hosted and called securely, clear the cache, cookies, and site data completely; reloading the page then tends to fix the problem, even on mobile browsers.
Redirects/bouncing: check your order of loading scripts/CSS. If there are dependent scripts and they are not cached, they can block, and if anything redirects back to HTTP, that will also slow things down.
For iOS native apps only (doesn't apply here): if you are using the 3rd gen, check the default cookie security policy; the default HTTPCookieAcceptPolicy for UIWebView has changed to NSHTTPCookieAcceptPolicyOnlyFromMainDocumentDomain.
Animations are slow in Safari iframes.
You have a click event in your code, which could be the issue:
..."Turns out that any non-anchor element assigned a click handler in jQuery must either have an onClick attribute (it can be empty, as below):"...
Alternatively, give the element a cursor:pointer declaration or bind a tap event, as in the sketch below.
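Putting both workarounds together (a minimal illustration; the class name is mine):

<!-- an empty onclick attribute, or cursor:pointer, makes iOS Safari deliver taps to non-anchor elements -->
<div class="waiver-button" onclick="" style="cursor: pointer;">Submit</div>

$(document).on('click', '.waiver-button', function () {
    // the delegated jQuery handler now fires on iOS Safari
});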
I need to build a web based interface for a quick personal project and I'd like to use electron.
It's essentially a kind of gaming Zoom app with lots of bells and whistles.
The problem is, I need to have a main window that will run in fullscreen, which I capture with OBS and broadcast to a popular streaming service.
I also need another window, what I refer to as the Control Panel, to control elements in the main window that I work with behind-the-scenes, for showing graphics, playing sound effects, controlling video settings etc.
If I were to code this app for a web browser, I would have absolutely no problem, as I can create the secondary window from my main window (window.open) and the 2 windows can easily refer to one another and their documents.
In Electron, however, windows are essentially contained boxes. Communication between two windows has to be channelled via IPC, essentially as encoded commands interpreted by the main process...
So to control loads of elements and entities from my control panel, I'd either have to implement a complex messaging system, which frankly seems MASSIVELY unnecessary, or, to be hacky, I could simply issue JavaScript commands as strings to the other window with BrowserWindow.webContents.executeJavaScript()... but yuck.
I could also contain all the logic-y stuff in the main process, but again, this still requires a messaging system that I don't have the time to implement.
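For the record, the kind of relay I was hoping to avoid looks something like this (a sketch; the channel name and window variables are mine):

// main process: forward commands from the control panel to the main window
const { ipcMain } = require('electron');
ipcMain.on('control', (event, command) => {
    mainWindow.webContents.send('control', command); // mainWindow: the fullscreen BrowserWindow
});

// control panel renderer
const { ipcRenderer } = require('electron');
ipcRenderer.send('control', { action: 'play-sound', id: 'airhorn' });

// main window renderer
require('electron').ipcRenderer.on('control', (event, command) => {
    // interpret and apply the command...
});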
I guess I want to know if there's a better way.
I'm just imagining a scenario in which a developer has written a web app that uses multiple windows and lots of direct window-to-window communication, and now they need to migrate it to Electron... how would they best go about it without rewriting a ton of code?
Found my answer, 'nativeWindowOpen':
const electron = require('electron');

// nativeWindowOpen lets window.open() in this window create a real native
// child window whose reference can be used directly
main_window = new electron.BrowserWindow({
    webPreferences: {
        nativeWindowOpen: true
    }
});
I can now use window.open() from my main window and refer directly back and forth.
So now I can do this:
control_panel = window.open("./control_panel.html", "Control Panel", "width=720,height=480")
console.log(control_panel.document)
And from the opened window:
console.log(window.opener.document)
And I can refer to variables and document elements.
I couldn't find anyone mentioning this useful 'nativeWindowOpen' option when I was scouring Stack Overflow et al. earlier; I only found it by reading the documentation.
I have a long list of cells which each contain an image.
The images are large on disk, as they are used for other things in the app like wallpapers etc.
I am familiar with the normal Android process for resampling large bitmaps and disposing of them when no longer needed.
However, I feel like resampling the images on the fly in a list adapter would be inefficient without caching them once decoded; otherwise a fling would spawn many threads, and I would have to manage cancelling unneeded images, etc.
The app is built making extensive use of the fantastic MVVMCross framework. I was thinking about using the MvxImageViews as these can load images from disk and cache them easily. The thing is, I need to resample them before they are cached.
My question is, does anybody know of an established pattern to do this in MVVMCross, or have any suggestions as to how I might go about achieving it? Do I need to customise the Download Cache plugin? Any suggestions would be great :)
OK, I think I have found my answer. I had been accidentally looking at the old MVVMCross 3.1 version of the DownloadCache Plugin / MvxLocalFileImageLoader.
After cloning the up-to-date (v3.5) repo, I found that this functionality has been added. Local files are now cached and can be resampled on first load :)
The MvxImageView has a Max Height / Width setter method that propagates out to its MvxImageHelper, which in turn sends it to the MvxLocalFileImageLoader.
One thing to note is that the resampling only happens if you are loading from a file, not if you are using a resource id.
Source is here: https://github.com/MvvmCross/MvvmCross/blob/3.5/Plugins/Cirrious/DownloadCache/Cirrious.MvvmCross.Plugins.DownloadCache.Droid/MvxAndroidLocalFileImageLoader.cs
Once again MVVMCross saves my day ^_^
UPDATE:
Now I actually have it all working, here are some pointers:
As I noted in the comments, the local image caching is only currently available on the 3.5.2 alpha MVVMCross. This was incompatible with my project, so using 3.5.1 I created my own copies of the 3.5.2a MvxImageView, MvxImageHelper and MvxAndroidLocalFileImageLoader, along with their interfaces, and registered them in the Setup class.
I modified the MvxAndroidLocalFileImageLoader to also resample resources, not just files.
You have to bind to the MvxImageView's ImageUrl property using the "res:" prefix as documented here (Struggling to bind local images in an MvxImageView with MvvmCross); if you bind to 'DrawableId', the image is assigned directly to the underlying ImageView and no caching / resampling happens.
I needed to be able to set the customised MvxImageView's max height / width for resampling after the layout was inflated/bound, but before the images were retrieved (I wanted to set them during 'OnMeasure', but the images had already been loaded by then). There is probably a better way, but I hacked in a bool flag, 'SizeSet'. The image URL is temporarily stored while this is false (i.e. during the initial binding); once it is set to true (after OnMeasure), the stored URL is passed to the underlying image helper to be loaded.
One section of the app uses full screen images as the background of fragments in a pager adapter. The bitmaps are not getting garbage collected quickly enough, leading to eventual OOMs when trying to load the next large image. Manually calling GC.Collect() when the fragments are destroyed frees up the memory, but it causes a UI stutter and also wipes the cache, since the cache uses weak refs.
I was getting frequent SIGSEGV crashes on Lollipop when moving between fragments in the pager adapter (they never happened on KitKat). I managed to work around the issue by adding a SetImageBitmap(null) to the ImageView's Dispose method. I then call Dispose() on the ImageView in its containing fragment's OnDestroyView().
Hope this helps someone, as it took me a while!
I use the hidden window extensively, but I read that it might be problematic in the multi-process future. I use it mainly for a canvas element, so I can load PNGs onto it, overlay other PNGs on top, and then save the result as a PNG. I do scaling too. What would be the alternative, without the hidden window, assuming no windows are available?
Thanks
For SDK add-ons you could use the page-worker module: load an HTML page from a resource:// URL into it and interact with it through content scripts and the port API.
Sending images through the port API would be quite inefficient, since it JSON-serializes everything right now, but hopefully they'll switch to structured clone in the future.
Of course, right now the page-worker API just uses the hidden window under the hood, but if the implementation were to change (e.g. being moved to the content process), the module API should stay the same.
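A rough sketch of that setup (the file names and event names are mine):

// main.js (addon code)
var self = require("sdk/self");
var page = require("sdk/page-worker").Page({
    contentURL: self.data.url("canvas.html"),   // loaded from a resource:// URL
    contentScriptFile: self.data.url("compose.js")
});

// ask the hidden page to compose the PNGs and send back a data URL
page.port.emit("compose", { base: "a.png", overlay: "b.png" });
page.port.on("composed", function (dataUrl) {
    // save or further process the PNG here...
});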
I'm working on a website with reasonably heavy traffic and I'm looking into using a CSS sprite to reduce the number of image loads in its design.
Are there any advantages to using a CSS sprite besides reducing the amount of transmitted data? How much space do you really save? Is there a threshold where using sprites becomes worthwhile to a website?
UPDATE: Thank you for your responses. They are obviously all very carefully thought out and present good sources to verify your points. I feel much more capable to make an informed decision about using CSS sprites in my site design now.
The question is generally not about the amount of bandwidth it might save. It is more about lowering the number of HTTP requests needed to render a webpage.
Considering:
web browsers only do a few HTTP requests in parallel
doing an HTTP request means a round trip to the server, which takes lots of time
we have "fast" internet connections, which means we download fast...
What takes time, when doing lots of requests for small content (images, icons, and the like), is the multiple round trips to the server: you end up spending time waiting for the request to go out and the server to respond, instead of using this time to download data.
If we can minimize the number of requests, we minimize the number of trips to the server and make better use of our high-speed connection (we download one bigger file instead of waiting for many smaller ones).
That's why CSS sprites are used.
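The technique itself is just one image plus per-icon background-position offsets (a minimal sketch; the file name and offsets are invented):

.icon { background: url(sprites.png) no-repeat; width: 16px; height: 16px; }
.icon-home { background-position: 0 0; }
.icon-search { background-position: -16px 0; }
.icon-cart { background-position: -32px 0; }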
For more information, you can have a look at, for instance: CSS Sprites: Image Slicing’s Kiss of Death
Fewer HTTP requests = faster loading overall. Yahoo and co. use this technique; if you can imagine the number of users they have, it saves a lot of bandwidth. Imagine 50 separate images for icons: that's 50 separate HTTP requests, as opposed to just one CSS sprite containing all the images. That saves 49 HTTP requests, multiplied across all the users of the site.
Actually, sprites are not used to reduce the amount of transmitted data (in most cases they slightly increase it), but to reduce the number of requests made to the server.
HTTP requests in a browser are traditionally done in sequence, which means that one request will not start until the previous one is completed. It is also expensive to open a connection for each request. By limiting the number of requests made to the server, you increase the speed at which the elements load.
I think Yahoo has the best argument for CSS sprites. Besides, the whole page is worth reading:
http://developer.yahoo.com/performance/rules.html#num_http
Besides the performance enhancement of the overall page load from limiting the number of requests, image sprites can also make dynamically swapping images (for example, changing the background image of a nav item on hover) "perform" a little better, since all you do is change the x,y offsets instead of the src, as shown below.
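For instance (a sketch; the selector and offsets are invented):

.nav-item { background: url(nav-sprite.png) 0 0 no-repeat; }
.nav-item:hover { background-position: 0 -40px; } /* hover state lives in the same image */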
So I guess to answer what is the threshold to warrant using them, I'd say immediately because of the potential loading improvements on each individual client.
In addition to reducing HTTP requests (as already noted), CSS sprites aren't dependent on JavaScript. This gives a few other advantages:
less code to maintain
easier cross-browser testing
can be coded inline via style attributes
no DOM hacking
no image preloading (so less administrivia -- "Oh wait, I need to preload that new nav button ... crap which .js file has my preloader?")
you can use css classes to apply it to several selectors
can be applied to any selector with the :hover pseudoclass, or in any selector that can be wrapped with an anchor (not just imgs)
If you're not averse to DOM hacking, though, you can get some nifty animation effects just by pushing the X and Y values around. Which makes it easier to animate lots of different states (like keypress or onmouseclick).
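For example, a simple frame-stepping animation (a sketch; the class name and frame geometry are invented):

var el = document.querySelector('.sprite-anim');
var frame = 0, frameWidth = 40, frameCount = 8;
setInterval(function () {
    // advance one frame and shift the sprite sheet left accordingly
    frame = (frame + 1) % frameCount;
    el.style.backgroundPosition = (-frame * frameWidth) + 'px 0';
}, 100);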
There are a few interesting graphic production side effects as well:
fewer graphic production files
easier to do layout for buttons etc. directly in HTML (less need for PSD comps)
easier to make GUI changes without having to regenerate a ton of graphics
just that much tougher for image pirates to slurp your graphics