I am trying to record a web test for an AJAX-based web application using WAPT Pro. Website images, CSS, and JavaScript are not captured by WAPT Pro.
Is there any setting available to capture the images of an AJAX web application?
WAPT captures all HTTP requests issued by your browser. By default, CSS, JS and image files are recorded as page elements. You can find them on the "Page Elements" tab. Note also that it is highly recommended to clear the browser cache before recording; otherwise some page elements may be taken from the cache instead of being loaded from the server. You can select the corresponding option in the "Recording Options" dialog.
TL;DR: How can images processed by html2canvas be cached using a ServiceWorker? Why isn't the existing ServiceWorker cache used?
I'm writing a PWA that also can be used offline. It's an application that is used for creating grids of custom images. Images are coming from an external API and I cache these requests to the API using Workbox/ServiceWorker.
Offline capabilities are working great, but when using html2canvas to create thumbnails of the image grids, it only works online. html2canvas seems to create an iframe copy of the page in order to create the screenshots, and for all images in that iframe new requests are made; the existing ServiceWorker cache isn't used.
This screenshot shows the network traffic for opening my app with a grid of 2 images from the API:
request (1) is the images loaded by the app, served by the ServiceWorker
requests (2-4) are three attempts by html2canvas to load the images; the last one succeeds via the ServiceWorker, yet the images are still not visible in the screenshot.
Any ideas for making html2canvas usable offline using either the existing ServiceWorker cache or another one are welcome.
I'm using html2canvas 1.4.1.
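For context, the Workbox route that caches the API images looks roughly like this (the hostname and cache name below are placeholders, not my real setup):

```ts
// service worker: cache-first route for the external image API
// (hostname and cache name are placeholders)
import { registerRoute } from 'workbox-routing';
import { CacheFirst } from 'workbox-strategies';

registerRoute(
  ({ url }) => url.hostname === 'api.example.com',
  new CacheFirst({ cacheName: 'image-api' })
);
```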
I have never used html2canvas, so I might be wrong, but if it is creating an <iframe>, then keep in mind that an iframe establishes a new browsing context, and that the communication between browsing contexts is severely constrained for security reasons.
The iframe created by html2canvas should be on the same origin as your PWA, so maybe you could try using the BroadcastChannel API to let these browsing contexts (i.e. the iframe and the service worker) communicate with each other.
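A bare-bones sketch of that idea follows; the channel name and message shape are made up for illustration:

```ts
// In the service worker: listen for messages from any browsing context.
const swChannel = new BroadcastChannel('html2canvas-cache');
swChannel.onmessage = (event) => {
  // e.g. look event.data.url up in caches and reply via postMessage
  console.log('received from page/iframe:', event.data);
};

// In the page (or the iframe html2canvas creates):
const pageChannel = new BroadcastChannel('html2canvas-cache');
pageChannel.postMessage({ type: 'need-image', url: '/images/foo.png' });
```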
See also:
Cache iframe request with ServiceWorker
When automating processes, webpages sometimes get involved. And sometimes it's not possible to access the HTML directly (e.g. JavaScript is required), so you need a browser to fetch the content of the URL. Now, in order to process that content, you need to be sure that the download is complete. How can you do that? Is there any Linux command that can check that?
Note: The question is general, for any browser, not for particular cases (like Gecko).
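For illustration, something like this Puppeteer sketch works for Chromium, but it is exactly the kind of browser-specific solution I'd like to avoid:

```ts
// Load the page in headless Chromium and wait until the network has
// been idle, i.e. the download is (very likely) complete.
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com', { waitUntil: 'networkidle0' });
const html = await page.content(); // fully rendered HTML
await browser.close();
```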
What are the pros and cons of using:
PathLocationStrategy - the default "HTML 5 pushState" style.
HashLocationStrategy - the "hash URL" style.
For instance, using HashLocationStrategy prevents scrolling to an element by its #ID, but some third-party plugins require HashLocationStrategy or the hashbang #! in order to work on AJAX websites.
I would like to know which one offers more for a webapp.
For me the main difference is that PathLocationStrategy requires server-side configuration: all the paths configured in @RouteConfig must be redirected to the main HTML page of your Angular 2 application. Otherwise you will get 404 errors when reloading your application in the browser or accessing it via a particular URL.
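As an illustration, a minimal Node/Express fallback could look like this (the dist/ path and the port are assumptions, not part of Angular itself):

```ts
// Serve static assets, and return index.html for every other path so
// deep links and reloads work with PathLocationStrategy.
import express from 'express';
import path from 'path';

const app = express();
app.use(express.static(path.join(__dirname, 'dist')));
app.get('*', (_req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});
app.listen(3000);
```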
Here is a question that could give you some hints about this:
When I refresh my website I get a 404. This is with Angular2 and firebase.
Hope it helps you,
Thierry
The # fragment of a URL is processed only on the client; servers simply ignore it (for example, for example.com/app#/users the server only ever sees example.com/app). This can cause problems with search engines (SEO), and redirects can cause redundant page reloads.
This page https://github.com/browserstate/history.js/wiki/Intelligent-State-Handling has a detailed explanation, though some of the arguments don't apply to Angular applications (for example, "doesn't work with JS disabled").
The "disadvantage" of HTML5 pushState is that it requires server support, as explained by Thierry.
According to official docs:
When the router navigates to a new component view, it updates the browser's location and history with a URL for that view. This is a strictly local URL. The browser shouldn't send this URL to the server and should not reload the page.
PathLocationStrategy
Modern HTML5 browsers support history.pushState, a technique that changes a browser's location and history without triggering a server page request. The router can compose a "natural" URL that is indistinguishable from one that would otherwise require a page load.
Here's the HTML5 pushState style URL that routes to the xyz component: localhost:4200/xyz/
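Under the hood this relies on a call like the following (a bare illustration, not the router's actual code):

```ts
// Replaces the address bar URL and pushes a history entry
// without sending any request to the server.
history.pushState({ view: 'xyz' }, '', '/xyz');
```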
HashLocationStrategy
Older browsers send page requests to the server when the location URL changes unless the change occurs after a # (called the hash). Routers can take advantage of this exception by composing in-application route URLs with hashes.
Here's a hash style URL that routes to the xyz component: localhost:4200/src/#/xyz/
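Switching between the two is a single router option. A minimal sketch, where XyzComponent is a placeholder:

```ts
import { Component, NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

// Placeholder component standing in for the real "xyz" view.
@Component({ template: '<p>xyz works</p>' })
export class XyzComponent {}

const routes: Routes = [{ path: 'xyz', component: XyzComponent }];

@NgModule({
  // PathLocationStrategy is the default; `useHash: true`
  // switches the router to HashLocationStrategy.
  imports: [RouterModule.forRoot(routes, { useHash: true })],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```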
I would like to know which one offers more for a webapp.
Almost all Angular projects should use the default HTML5 style because:
It produces URLs that are easier for users to understand.
It preserves the option to do server-side rendering later.
Rendering critical pages on the server is a technique that can greatly improve perceived responsiveness when the app first loads. An app that would otherwise take ten or more seconds to start could be rendered on the server and delivered to the user's device in less than a second.
This option is only available if application URLs look like normal web URLs without hashes (#) in the middle.
Stick with the default unless you have a compelling reason to resort to hash routes.
If I embed a Youtube video on my web page, what are the data usage implications on my server?
I have a shared web hosting plan for my website with a data transfer limit of 5 GB/month. When a user plays a video on my site, is my server taxed for data transfer, i.e. if the video is 1 GB in size, is my data transfer limit decreased by 1 GB?
And is my server processor taxed for video streaming?
What other things should I be concerned about?
Is there any link you can point me towards? That will be helpful.
Thanks
Both the YouTube player and the video content are streamed from YouTube's servers. The only price you pay is the few bytes it takes to add the video player embed code to your HTML pages.
When you embed a YouTube video, it streams directly from YouTube's servers.
Your server is not involved.
In addition to what Etienne Perot said,
There are 3 nodes in play here, namely:
Your server
Youtube Servers
The client (i.e. the user accessing your website)
In simple words: embed is an HTML tag that lets you include a link to a resource. Since YouTube's embed code points to a URL of the form youtu.be/foo or youtube.com/foo, your browser simply parses that link and fetches the content from it, visiting YouTube's site behind the scenes without going through your server at all.
Meanwhile, when you insert YouTube's (or any) embed code in your HTML page, your server serves the HTML content to the visitor's web browser (the client), and that HTML is processed and rendered by the client. This means the video is served by your server only as a link; when the client renders the page, that link becomes an action to pull content from elsewhere. So the client (the user's web browser) pulls the video from the URL embedded in the iframe.
In turn, the bandwidth used is accounted for at:
The client (i.e. the bandwidth used to reach the internet and the video's URL), billed by the user's ISP against their active data subscription.
YouTube's servers (i.e. the bandwidth of the servers that stream the content), accounted for by Google against their in-house cloud resource allocation.
If you use the Google Chrome browser, you can check this out by right-clicking on the video, clicking "Inspect", and switching to the Network tab; you may have to hit refresh so that the page loads all its content again. The purpose of this is to see where each piece of content is loaded from.
See the Network Analysis Reference for how to use the Network tab in Google Chrome's developer tools. Mozilla Firefox and other major browsers also have inspect-element and network-monitor features.
I hope this helps somebody.
I'm working off of this Railscast tutorial: episode 247
I'm up to this point in the tutorial: I've added the rack-offline gem, added the application.manifest route, and added a reference to the manifest in the html tag. This is right before it starts talking about problems with caching.
Safari works as intended: when the server is running, the page is served. From the server logs I can see that Safari makes a single request to the server every time for the items page. When I turn the server off, the page still displays, even after shutting down and restarting the browser. It appears to be pulling from the application.manifest (cache manifest).
Firefox does not work as intended: when accessing the page for the first time, Firefox lets me know that the web page wants to store something locally, and I allow it. After clicking Allow, Firefox makes 5 requests to the server for the page (per the server log), and the hash is different in every request. Is it possible that the changing hash is triggering Firefox to keep trying to get the new manifest until it reaches some maximum (5 attempts)?
Then, after the server is stopped, Firefox will not show the page at all. It looks like it isn't caching the application.manifest. Firefox also gives you a way to see which sites are storing things locally via Tools/Options/Advanced/Network (Firefox/Preferences/Advanced/Network on a Mac). I see localhost there, but the size is 0 bytes. So for some reason Firefox is not downloading my application.manifest along with the files.