TL;DR: How can images processed by html2canvas be cached using a ServiceWorker, and why isn't the existing ServiceWorker cache used?
I'm writing a PWA that can also be used offline. It's an application for creating grids of custom images. The images come from an external API, and I cache these API requests using Workbox/ServiceWorker.
Offline capabilities work great, but html2canvas, which I use to create thumbnails of the image grids, only works online. html2canvas seems to create an iframe copy of the page in order to take the screenshot, and for all images in that iframe/screenshot new requests are made; the existing ServiceWorker cache isn't used.
This screenshot shows the network traffic for opening my app with a grid of 2 images from the API:
Request (1) is the images loaded by the app - coming from the ServiceWorker.
Requests (2-4) are three attempts by html2canvas to load the images; the last one succeeds via the ServiceWorker, but the images are still not visible in the screenshot.
Any ideas for making html2canvas usable offline using either the existing ServiceWorker cache or another one are welcome.
I'm using html2canvas 1.4.1.
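For reference, the existing caching of the API images is set up with a Workbox runtime route along these lines (a minimal sketch; the origin check and cache name below are placeholders, not the real configuration):

    // Minimal sketch of a Workbox runtime caching route for the API images.
    // The origin check and cache name are placeholders, not the real config.
    import { registerRoute } from 'workbox-routing';
    import { CacheFirst } from 'workbox-strategies';

    registerRoute(
      ({ url }) => url.origin === 'https://images.example-api.com', // placeholder API origin
      new CacheFirst({ cacheName: 'api-images' })
    );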
I have never used html2canvas, so I might be wrong, but if it is creating an <iframe>, then keep in mind that an iframe establishes a new browsing context, and that communication between browsing contexts is severely constrained for security reasons.
The iframe created by html2canvas should be on the same origin as your PWA, so maybe you could try using the BroadcastChannel API to let these browsing contexts (i.e. the iframe and the service worker) communicate with each other.
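A rough, untested sketch of that idea (the channel name, message shape, cache name, and URL are made up for illustration):

    // Page/iframe side: ask the service worker to cache an image URL.
    const channel = new BroadcastChannel('html2canvas-cache'); // arbitrary channel name
    channel.postMessage({ type: 'CACHE_IMAGE', url: '/api/images/42.png' }); // hypothetical URL

    // Service worker side: listen on the same channel and add the URL to a cache.
    const swChannel = new BroadcastChannel('html2canvas-cache');
    swChannel.onmessage = async (event) => {
      if (event.data && event.data.type === 'CACHE_IMAGE') {
        const cache = await caches.open('api-images');
        await cache.add(event.data.url);
      }
    };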
See also:
Cache iframe request with ServiceWorker
I am using TinyMCE 5 on my Ruby on Rails app that runs on HTTPS. I have a WordPress website running on HTTP that hosts images.
After uploading an image to WordPress, I copy its HTTP URL into the TinyMCE image dialog; the images work fine and are displayed properly.
However, some users are complaining that they can't see the images. Whenever I check, it works fine. What could be the problem?
Possible reasons could be too many simultaneous requests, the use of HTTP for the WordPress site, or a slow network connection on the user's side.
This is most likely an issue with the host that you are using for storing images (maybe there is a daily limit on the number of requests it can serve).
Possible solution:
Store your images on AWS S3, Google Drive, etc. and use their links in TinyMCE. It will most definitely work.
I have a web app that wants to load bootstrap.min.js
It's on these two CDNs (among others):
https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.3.1/js/bootstrap.min.js
https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js
The odds of a cache hit from some other app using these CDNs are relatively high.
How can I tell the browser to check if they are cached and load from browser cache?
Can a service worker do this?
I believe that there are some privacy/security restrictions in place that make it difficult to determine, using JavaScript, whether a third-party URL is present in the browser's cache.
Adding a service worker into the mix will not get around those restrictions.
It's possible to use the Fetch API to create a Request with a mode of 'only-if-cached', which will behave more or less in the way you describe, but that will only work if the request's mode is 'same-origin'. In other words, only if the Request is for a first-party URL, not a third-party CDN URL as in your example.
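For completeness, a minimal sketch of that same-origin check (the function name and URL are placeholders):

    // Sketch: try to load a same-origin URL strictly from the HTTP cache.
    // 'only-if-cached' is only allowed together with mode: 'same-origin',
    // so this cannot be used for the third-party CDN URLs in the question.
    async function loadFromCacheOnly(url) {
      try {
        const response = await fetch(url, {
          cache: 'only-if-cached',
          mode: 'same-origin',
        });
        return response.ok ? response : null; // a cache miss typically surfaces as a 504
      } catch (err) {
        return null; // or as a network error, depending on the browser
      }
    }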
I am trying to record a web test for an AJAX-based web application using WAPT Pro. Website images, CSS, and JavaScript are not captured by WAPT Pro.
Is there any setting available to capture the images of an AJAX web application?
WAPT captures all HTTP requests issued by your browser. By default, CSS, JS, and image files are recorded as page elements. You can find them on the "Page Elements" tab. Note also that it is highly recommended to clear the browser cache before recording; otherwise some page elements may be taken from the cache instead of being loaded. You can select the corresponding option in the "Recording Options" dialog.
I'm trying to optimize my application in Ruby on Rails, and I realized that the pictures in my application are what take the longest to load. I also noticed another problem: Google Chrome isn't caching the images.
I noted this because in the Google Developers Console you can see that Google Chrome makes requests to load the images, which are canceled before the images are truly loaded.
This can be seen here: first I open the Google Developers Console, then refresh the page, and among the first requests you can see the ones for the images, but they are canceled immediately.
After that you can see the requests that actually loaded the images.
I don't understand why this is happening if in the response headers you can see that Cache-Control is set to public with max-age = 31536...
I put the images in my application this way:
<div class="col-xs-3"><%= image_tag "#{#hero.id}/ability_1.png", class: "center-block"%></div>
And the images are organized in folders in app/assets/images
Is there a RoR way to fix this?
Edit: Now testing my app (which is on Heroku) on Windows, I noticed that Google Chrome does in fact cache the images sometimes, but only about 50% of the time (and when I was developing on Ubuntu it didn't work a single time). In Firefox the images are loaded the first time, and on subsequent loads of the same view I can't even notice the reload, it's beautiful. Why isn't Google Chrome like that? Is it normal for Google Chrome to act so weird?
The most important thing to realize when analyzing browser caching is the "Status Code". In your example, you can see you got a "304", which stands for "Not Modified". This means the browser "could potentially use its cache". So you ARE in fact caching. Caching != not hitting your web server.
The definition according to Mozilla:
This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
The browser sends the ETag and Last-Modified values to your web server, and your web server looks at that metadata and says "Nope, this file hasn't changed, so feel free to use your cache", and that's it. It actually does not send the file again. You can see that the "Size" is much smaller than with a "200" status code, where the web server IS sending the file, and the timing should be much shorter as well.
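A minimal sketch of that exchange, as a hypothetical Node.js handler (the ETag value and file name are hard-coded for illustration, not how Rails does it):

    // Hypothetical Node.js handler illustrating conditional requests and 304s.
    const http = require('http');
    const fs = require('fs');

    const ETAG = '"v1"'; // in a real app this would be derived from the file contents

    http.createServer((req, res) => {
      if (req.headers['if-none-match'] === ETAG) {
        // The browser already has this version: answer 304 and send no body.
        res.writeHead(304, { ETag: ETAG });
        res.end();
        return;
      }
      // No matching cached copy: send the full file with a 200.
      res.writeHead(200, { ETag: ETAG, 'Content-Type': 'image/png' });
      fs.createReadStream('./ability_1.png').pipe(res);
    }).listen(3000);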
In Chrome you can force "non-caching" by checking the "Disable cache" option in the Network tab.
Hope that helps!
It looks like Chrome does handle image caching differently. What type of reload are you doing (following links, pressing Enter in the address bar, Ctrl+R)? It looks like if you press Enter in the address bar it will respect max-age, but if you use Ctrl+R Chrome sets max-age to 0.
expires_in max-age cache control doesn't work
Chrome doesn't cache images/js/css
You can force caching with a manifest file. There are plenty of docs on the web about the topic. Here's a starter: http://www.w3schools.com/html/html5_app_cache.asp
The request headers contain max-age=0. Try setting that to a big number!
I'm still new to the whole CDN ideology, so this might be a stupid question, but I'm sure someone can shed some light on this. I've got a basic PHP script that takes user image uploads, resizes them, creates a directory ($user_id), and stores the finished product in that directory (like www.mysite.com/uploads/$user_id/image1.jpg). Works like a charm.
I just got all the hosting stuff squared away and I'm using the Rackspace (Slicehost?) "Cloud Server" architecture. I also signed up for the Rackspace (Mosso?) "Cloud Files". So far so good.
So my question is: Should I be storing the images that users upload locally (on my apache server) or as objects via Cloud Files? It seems like a great idea to separate the static content from my web server so I can just use it to generate the dynamic content. But would it be a lot of overhead to create a CDN-enabled Container each time a user uploads an image?
Hopefully I'm not missing the boat on this one totally. I can't seem to find a whole lot of info about this, but I'm sure there is a good reason why I should either pursue or avoid this idea. Any suggestions are greatly appreciated!
I am not familiar with Rackspace's offering, but the general logic behind using a CDN for static content is to achieve two goals:
1. Offload the bandwidth and processing to other servers, freeing up yours; move the requests off to the client.
2. Move the large static content closer to the client.
When you send the generated HTML to the browser, it will "see" the images as www.yourdomain.com/my_image.jpg for example, and perform additional requests for each piece of static content, potentially starving your server of threads to service requests. If you move all this content onto a CDN, the browser would see something like cdn.yourdomain.com, and the browser will request the images from the CDN, thus allowing your server to service other requests instead. Additionally, most CDN's distribute your content to multiple locations and have geographic routing for requests to serve the content from the closest possible location, improving the perceived load time for clients.
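Concretely, the only change on the page is the host the image URLs point at; a toy sketch (the CDN hostname is a placeholder for whatever CNAME or container URL your provider gives you):

    // Toy sketch: build the public URL for an uploaded image.
    // 'cdn.yourdomain.com' stands in for the CDN-provided hostname or CNAME.
    const CDN_BASE = 'https://cdn.yourdomain.com';

    function publicImageUrl(userId, fileName) {
      return `${CDN_BASE}/uploads/${userId}/${fileName}`;
    }

    // Before: https://www.mysite.com/uploads/42/image1.jpg (served by your web server)
    // After:  publicImageUrl(42, 'image1.jpg') -> https://cdn.yourdomain.com/uploads/42/image1.jpg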