I am building an offline web app for the iPad and trying to verify that the cache.manifest is being served correctly by Apache HTTP Server 2 and is working. I have added an AddType directive for the .manifest extension to Apache's MIME types configuration.
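For reference, the directive I mean is along these lines (text/cache-manifest is the MIME type the HTML5 application cache expects):

AddType text/cache-manifest .manifest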
In the access logs, the first request for the cache manifest returns a 200 HTTP response code, and any further requests return 304 (Not Modified), which I take to mean it is working. The assets (HTML, images) follow the same pattern (200, then 304), which also suggests it is working.
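For completeness, the manifest is structured along these lines (the file names here are placeholders rather than my real ones):

CACHE MANIFEST
# v1 - bump this comment to force a re-download
CACHE:
index.html
css/app.css
js/app.js
images/logo.png
NETWORK:
*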
When I load it on the iPad I get the page, but when I go offline and reload, the page fails to load because there is no internet connection.
I am serving it from the Apache web server on my Mac, so I am having trouble testing it reliably on the Mac itself. Any ideas what is going wrong, or how to verify it is working?
Testing the cache manifest is somewhat of a pain in general, but there are a few useful techniques.
First, test it using Safari on the Mac directly; just turn off Apache when you want to check it in offline mode.
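On the Mac you can stop and restart the built-in Apache from Terminal, for example:

sudo apachectl stop
sudo apachectl start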
In Safari, open the Activity window and look for any resources listed as "cancelled"; those are typically the ones missing from the manifest.
Also use the Web Inspector to check the Content-Type the manifest file is returned with (it should be text/cache-manifest).
In most cases the problem is that you have resources in the application which aren't specified in the manifest; this causes the whole caching operation to fail. Unfortunately there's no method in the HTML5 API to list which resources failed; this would be supremely helpful to developers.
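Another check that sometimes helps: listen for the application cache events in the page (or paste this into the console and reload). It won't name the failing resource, but it does tell you whether the caching pass finished or aborted. A sketch, assuming your html tag already carries the manifest attribute:

// Log every application cache event so you can see whether the caching pass
// completes ("cached"/"noupdate"/"updateready") or aborts ("error").
var appCache = window.applicationCache;
['checking', 'downloading', 'progress', 'cached', 'updateready', 'noupdate', 'obsolete', 'error'].forEach(function (name) {
  appCache.addEventListener(name, function () {
    console.log('appcache:', name, 'status =', appCache.status);
  }, false);
});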
I'm using Dart to build JS applications that are loaded on web pages hosted from an ASP.NET Core application, and I'm trying to establish a development workflow with either pub serve or potentially pub build that allows for debugging. I've seen some related posts, but I'm still stuck. This is what I've tried:
I used pub build with dart2js and the --mode=debug flag to generate the Dart sources and a source map, and then used Chrome to load and debug the web pages. The problem here, apart from long compile times, is that the source maps don't seem to work well for debugging: lines in the .dart files are often unavailable, stepping over function calls often dives into framework code instead, and I'm unable to inspect variable values reliably.
I used pub get with the --packages-dir flag to copy in dependencies and then loaded the web pages with Dartium hosted by the IIS Express server. This loads pages fine and lets me develop, but I was unable to get breakpoints working at all in Dartium unless I used the debugger() statement directly in my code. I'm also concerned about this approach in general because Dartium is no longer being updated and the Dart team's plan is to move away from it.
As an offshoot of #2, I also tried simply changing the script tag URLs in my ASP.NET pages to point to the resources on the pub serve dev server. This is blocked because pub serve apparently only serves over HTTP, while the ASP.NET application is hosted locally over HTTPS. I tried changing the backend to run on HTTP, but then I ran into issues with authentication/authorization not working in my .NET app. I had also hoped to use dartdevc with this approach, but that gave me 404 errors for require.js, I think because the browser was trying to load it from the IIS Express server instead of from pub serve (I'm really not sure about that).
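For reference, these are roughly the commands behind the attempts above (Dart 1.x pub; "web" and port 8080 are just the defaults on my machine):

pub build --mode=debug
pub get --packages-dir
pub serve web --port 8080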
I've found some mentions in other StackOverflow posts of setting up some sort of proxying behavior in order to have a backend server request resources from pub serve, but I have no idea how this might be done or if it applies to this situation. I can't find any information.
What strategies are people using for this, and is there a recommended best practice going forward with Dart 2.0 and dartdevc?
I installed TFS 2017 to be accessible over both HTTP (port 8080, default settings) and HTTPS. I have now removed the HTTP binding from IIS and reapplied the public URL (via Administration Console -> Change Public URL).
Most of the TFS application tier works normally (as it uses relative addressing). However, build extensions somehow want to fetch their icons over HTTP (port 8080), as shown in the screenshot. When I noticed this, I checked the HTML/JS source and found that the _vssPageContext variable still holds some URLs pointing to the old HTTP configuration.
Has anyone solved that mystery or has any idea what to do?
EDIT: Later I re-enabled the HTTP bindings in IIS just to make TFS work, and I now get a lot of warnings and errors due to the HTTP/HTTPS mix-up (I access TFS via HTTPS, but some content is still requested via HTTP):
Mixed Content: The page at 'https://xxxx.xxxxx.xxxx/tfs/TFSDefault/Project/_build/definitionEditor?definitionId=113&_a=simple-process' was loaded over HTTPS, but requested an insecure image 'http://xxxx.xxxxx.xxxx:8080/tfs/TFSDefault/_apis/distributedtask/tasks/9fcb05af-0ffe-4687-99f2-99821aad927e/0.1.1305/icon'. This content should also be served over HTTPS.

WebSocket connection to 'ws://xxxx.xxxxx.xxxx:8080/tfs/signalr/connect?transport=webSockets&clientProtocol=1.5&contextToken=412c3608-de3b-4dab-a00d-bf5c13728d97&connectionToken=OoSymcl1qzWg%2BrHB9pzSBpb%2BdHVywo7NNUWN5xMx3Z51p9ZdZQ14wvoQKXqxB%2Bvo66eTap4iUdlqzHR1hJNUf%2By8oFUaudlkCbQIZjHQhLBHsEWtcLdfLlL7MAevl4h0My1yQA%3D%3D&connectionData=%5B%7B%22name%22%3A%22builddetailhub%22%7D%5D&tid=7' failed: HTTP Authentication failed; no valid credentials available.
This issue is caused by the default endpoint of TFS being initially set to HTTP, which all the page elements then use for their requests rather than following the scheme of the initial request you made in the browser. So you end up with JavaScript elements attempting to connect to the server over HTTP, and you get a mixed-content issue.
Here is a really good article that covers the issues you are probably facing and how to fix them to use https: https://hybriddbablog.com/2017/12/16/changing-tfs-to-use-https-update-your-agent-settings-too/
I have to caveat that I haven't done this myself yet; we actually went back to running HTTP until we move to the next version of TFS, but from my experience with TFS the steps look sound.
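If it's the build agents that keep using the old URL, the part of that article I'd expect to matter most is re-registering each agent against the new HTTPS address. From memory it is roughly this (the agent folder path is just an example; yours will differ):

rem Folder where the agent was installed
cd C:\agent
rem Unregister the agent from the old HTTP endpoint
config.cmd remove
rem Reconfigure it and enter the new https:// public URL when prompted for the server URL
config.cmd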
This issue is strange, and I've spent a couple of days trying to solve it, but I'm completely lost. I've developed a web app with CodeIgniter 3.0.6 + AngularJS 1.5.5 as the main backend/frontend frameworks.
The problem is that when I switch the iPhone/iPad network from WiFi to 3G/4G, some random HTTP GET requests to static files fail. The failing files aren't always the same, but it only happens with images and JS scripts.
The HTTP GET status code is 503 - Service Unavailable, and opening the file's URL directly shows a static HTML page with the same error.
The weirdest thing is that the Server response header changes between the WiFi requests (Apache) and the 3G/4G requests (nginx).
(Screenshots omitted: the response headers for a file that loaded properly and for a file that failed.) There are also other headers that differ between the WiFi and 3G/4G requests.
PHP works fine; HTML and dynamic data load properly. The problem appears to be limited to the requests for static resources.
EDIT
I've checked several websites hosted at 1and1 on different hosting packages, and I've even checked other domains hosted on the same shared host where my app runs, and it happens everywhere. The only thing that changes is the number of failing files, and it's random.
EDIT 2
After testing with other iOS browsers (Firefox and Opera), the problem seems to be focused on Safari and Chrome. Maybe I should say WebKit, but Opera seems fine.
EDIT 3
I've found an article (linked in the comments, due to reputation limits) while searching for a way to handle Angular $http requests from an offline device.
I need to dig deeper and perform the tests described in the link, but it seems to be a problem with WebSockets and the proxy servers used by mobile operators, Vodafone in this case.
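A workaround sketch on the Angular side (it only covers calls made through $http, not the images and scripts the browser loads by itself; 'myApp' is a placeholder module name) would be a responseError interceptor that retries requests that come back as 503 or with no response after the network switch:

// Retry a $http request once when it fails with 503 or with no response (status <= 0).
angular.module('myApp').config(function ($httpProvider) {
  $httpProvider.interceptors.push(function ($q, $injector) {
    return {
      responseError: function (rejection) {
        var config = rejection.config || {};
        if ((rejection.status === 503 || rejection.status <= 0) && !config.__retried) {
          config.__retried = true;                // retry each request at most once
          return $injector.get('$http')(config);  // re-issue the original request
        }
        return $q.reject(rejection);
      }
    };
  });
});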
Has anyone else run into this issue?
I will edit this post with the improvements you suggest or the info you need.
The method for including scripts in my WordPress plugin is described in another post: how to load jquery dialog in wordpress using wp_enqueue_script?
I think this works fine for me, but I'm getting a weird error in the Firefox development tools console when I load my page, after enqueueing the jquery-ui stuff (js and css). Here is my code:
wp_register_script( 'myplugin-jquery-ui', plugins_url("myplugin/js/jquery-ui.min.js" ) );
wp_enqueue_script( 'myplugin-jquery-ui');
But when I load the page in Firefox, the console says:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://fonts.gstatic.com/s/opensans/v10/u-WUoqrET9fUeobQW7jkRT8E0i7KZn-EPnyo3HZu7kw.woff. This can be fixed by moving the resource to the same domain or enabling CORS.
I can't find "fonts.gstatic.com" referenced ANYWHERE in ANY of my files, least of all the jquery-ui.min.js file. Can you please help me understand a) why/how I'm getting this error, and b) if it's something I should just ignore?
And if I only need it for the dialog plugin, should I be doing this differently?
This is a bug on Google's side: sometimes they don't serve the header (Access-Control-Allow-Origin) correctly, for reasons only they know. A bullet-proof way to avoid the problem is to get the font files and serve them yourself.
You can inspect the response headers when the .woff file is served, and you will see that the header is missing whenever the browser fails to load the font. If you don't trust your browser, check with a network sniffer tool such as Wireshark.
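For example, from a terminal (using the URL from the error above):

curl -sI "http://fonts.gstatic.com/s/opensans/v10/u-WUoqrET9fUeobQW7jkRT8E0i7KZn-EPnyo3HZu7kw.woff" | grep -i "access-control-allow-origin"

If that prints nothing, the Access-Control-Allow-Origin header is missing and the browser is right to refuse the font.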
I have a program for class which involves C# and MVC. When I run it on a classroom computer, the home page renders as expected (first screenshot), but when I run the same program at home it does not (second screenshot). So obviously the program's resources are not loading, and I see this error when I press F12:
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
For six different files, among which is the css file:
http://localhost:4321/Content/site.css
Now, I have installed IIS, so typing localhost into the browser URL bar takes me to the IIS homepage. But what could be causing this problem? Does it have to do with port forwarding (I have a Comcast router, an Arris TG862G, which I've heard is not great)? Is it possible to change the directory the project takes its source files from, to avoid this? Thanks in advance for any help.
The problem is not your IIS or localhost. The errors occur when you request certain external resources. The server hosting http://www.gfcf14greendream.com/ise500.png does not have a well-configured MIME type mapping for that resource.
You either have to get someone to rectify the error(s) on that server or, better yet, download the resources you need into your project and access them locally.
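To illustrate what a well-configured MIME type mapping looks like on an IIS server, it is a staticContent entry in web.config roughly like this (the fix would have to be made on that remote server, or on your own IIS if you host the file yourself):

<system.webServer>
  <staticContent>
    <!-- .png is normally mapped out of the box; this only shows the shape of a mimeMap entry -->
    <mimeMap fileExtension=".png" mimeType="image/png" />
  </staticContent>
</system.webServer>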