Cross-Origin warning on wp_enqueue_script for jquery-ui

The method I'm using to include scripts in my WordPress plugin comes from another post: how to load jquery dialog in wordpress using wp_enqueue_script?
I think this works fine for me, but I'm getting a weird error in the Firefox developer tools console when I load my page, after enqueueing the jquery-ui files (JS and CSS). Here is my code:
wp_register_script( 'myplugin-jquery-ui', plugins_url( "myplugin/js/jquery-ui.min.js" ) );
wp_enqueue_script( 'myplugin-jquery-ui' );
But when I load the page in Firefox, the console says:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading
the remote resource at
http://fonts.gstatic.com/s/opensans/v10/u-WUoqrET9fUeobQW7jkRT8E0i7KZn-EPnyo3HZu7kw.woff.
This can be fixed by moving the resource to the same domain or
enabling CORS.
I can't find "fonts.gstatic.com" referenced ANYWHERE in ANY of my files, least of all the jquery-ui.min.js file. Can you please help me understand a) why/how I'm getting this error, and b) whether it's something I should just ignore?
And if I only need it for the dialog plugin, should I be doing this differently?

This is a bug on Google's side. Their server sometimes fails to send the CORS header, for reasons only they know. A bullet-proof way to avoid the problem is to download the font files and serve them yourself.
You can inspect the response headers when the .woff file is served: you will see that the Access-Control-Allow-Origin header is missing whenever the browser fails to load the font. If you can't believe your browser, check with a network sniffer tool like Wireshark.
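A minimal sketch of the self-hosting approach, assuming you download the .woff files into your plugin and reference them from your own stylesheet (the file names, paths, and handles below are hypothetical):

// In the plugin PHP: enqueue a local stylesheet that declares the fonts,
// so they are served from your own domain instead of fonts.gstatic.com.
wp_register_style( 'myplugin-local-fonts', plugins_url( 'myplugin/css/local-fonts.css' ) );
wp_enqueue_style( 'myplugin-local-fonts' );

/* myplugin/css/local-fonts.css: @font-face rules pointing at local files */
@font-face {
    font-family: 'Open Sans';
    font-style: normal;
    font-weight: 400;
    src: url('../fonts/opensans-regular.woff') format('woff');
}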

Related

Unknown Format in feature testing rails capybara

I am writing Capybara tests. There is a link in my view; when I click the link, it opens a pop-up JS warning. I have configured JS in Capybara using PhantomJS and the Poltergeist gem.
Without the requested information it's impossible to give an exact answer, but the error you are seeing means a non-JS response (probably HTML) is being requested from the app. This could be occurring for a couple of reasons:
You're not actually running the test with a JS-supporting driver. I don't see any js metadata on your scenarios, so depending on how you've configured Capybara/RSpec this could be your issue. To confirm, swap from Poltergeist to Selenium with Chrome or Firefox (non-headless while trying to debug) so you can see whether the browser actually starts.
You have a JS error preventing your JS from running, so a normal request is being made instead of an XHR. This could be because you actually have a bug in your JS, or because you're using Poltergeist/PhantomJS, which is massively out of date in its JS/CSS support. To test this, swap to Selenium with Chrome or Firefox and look in the developer console.
Your link isn't correctly configured to make an Ajax request; this is impossible to tell without the HTML of the link.
Additionally, neither of the tests shown in your image actually asserts or expects anything, so it's very unclear what exactly you're trying to test.
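To illustrate the first point, here is a minimal sketch of a feature spec tagged with js metadata plus a driver switch to Selenium; the spec path, page content, and link text are hypothetical:

# spec/features/popup_link_spec.rb
require 'rails_helper'

# js: true tells Capybara to use the configured JavaScript driver
RSpec.feature 'Popup link', js: true do
  scenario 'clicking the link opens the confirmation dialog' do
    visit root_path
    click_link 'Delete'
    expect(page).to have_content('Are you sure?')  # actually assert something
  end
end

# In spec/rails_helper.rb, while debugging, use a visible browser:
Capybara.javascript_driver = :selenium_chrome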

Uncaught DOMException

While migrating my website to a secure server, I found that a frame is being blocked by browsers because of a security issue which doesn't happen on my existing website, which is hosted on an HTTP server. (Google Chrome developer console screenshot: https://i.stack.imgur.com/wQkpr.jpg)
The page should load a calendar, but it does not do so.
I'm not a coding expert, and don't know how to resolve this. The issue happens when loading this page:
Page which generates DOMException
However, the site under development is hosted on a non-public server. In order to access it, the hosts file on a Windows machine would need to have this entry added:
199.168.187.45 mauitradewinds.com www.mauitradewinds.com secure.mauitradewinds.com m.mauitradewinds.com
Without adding that entry to the hosts file, a browser would be redirected to my existing HTTP site, which is not where the issue is happening.
I'd be grateful for guidance on how to eliminate this frame blocking.
My guess is you have a protocol conflict between your iframe and your main page.
Your main page is being accessed over HTTP while the iframe is loaded over HTTPS.
Your existing website most probably has a redirect from http to https which is why the issue is not happening on the existing site.
A web developer solved this by observing that adding www to the URL would prevent the DOMException, and allow page frame content to load.
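For illustration, a minimal sketch of that kind of scheme/host mismatch (the URLs are hypothetical). A frame whose scheme or host doesn't match the parent page can trigger the blocking described above; making them match avoids it:

<!-- Parent page loaded as https://www.example.com/ -->

<!-- Mismatched scheme/host: the browser may block this frame -->
<iframe src="http://example.com/calendar"></iframe>

<!-- Same scheme and host: loads without the DOMException -->
<iframe src="https://www.example.com/calendar"></iframe>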

Firefox JavaScript debugger: wrong cookie value sent

I'm running Firefox 36.0.4 on Windows 7 32-bit. I've disabled all add-ons, extensions and user scripts before retesting this.
I'd like to step through JavaScript code that is served up in a <script> tag in the HTML document being produced by a Java (Tomcat) web server.
Unfortunately, when I select the HTML document under Debugger > Sources, the source shown is the application's login page - it appears that session information is not being used when requesting the source.
I stepped through the server-side code and found that the correct session cookie values were being sent for the real page request and some AJAX requests sent by the page. However, when I tried to load the page source in the JavaScript debugger, I found that an incorrect session cookie was being sent by the JavaScript debugger.
I can replicate this behaviour in other webapps, not just my own - for example, on Stack Overflow.
Is this a configuration issue, or a bug in the Firefox Developer Tools?
I can't reproduce your problem using Stack Overflow as an example, at least in Firefox Developer Edition (currently version 38).
One thing that might help: try disabling the cache while the toolbox is open. This setting is in the developer tools settings panel (click the 'gear' icon at the top right of the toolbox).
After reviewing canuckistani's answer, I downloaded Firefox Developer Edition. At first, the problem seemed to be fixed.
Five minutes in, I became sick of being asked whether to remember passwords and of having to manually clear session cookies; I prefer being able to do that by simply closing the browser, as it makes testing easier.
As per usual, I went to Options > Privacy > History to disable this behaviour, by setting the value to Never remember history.
Changing this setting requires the browser to restart. However, upon restarting, I once again saw the same erroneous behaviour - the wrong session cookie was being sent to the web application again.
The workaround here is to not use the Never remember history setting. I have filed a bug report at Mozilla.org Bugzilla.

MVC Bundling with HTTPS IE7

I have successfully implemented MVC bundling for my MVC application. There is one problem at runtime when the application runs under HTTPS.
I am sure there is a problem because when I switch the debug flag to false, the user gets the warning message "This page contains secure and nonsecure items. Do you wish to proceed?"
I know that I can turn this prompt off using the security setting in IE. I would like to know if there is something I can do to the application so that bundled scripts and styles come through the secure pipe.
If you use the Scripts.Render helper to include the bundle, it will use the same HTTP scheme as the one used to request the main page. So if the main request was made over HTTPS, this helper will generate a <script> element using HTTPS. You could use the Net tab of Firebug to see which resources are served over HTTP and which over HTTPS, and so isolate the problem.
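A minimal sketch of that setup, with a hypothetical bundle name; because the helper emits a URL relative to the current request's scheme, an HTTPS page gets HTTPS script tags:

// App_Start/BundleConfig.cs - hypothetical bundle registration
bundles.Add(new ScriptBundle("~/bundles/modernizr").Include(
    "~/Scripts/modernizr-1.7.js"));

@* In the Razor layout: *@
@Scripts.Render("~/bundles/modernizr")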
Thank you for this suggestion. I figured out that the problem was coming from modernizr-1.7.js.
The strange thing was that the problem only occurred when modernizr was bundled. I removed modernizr because we don't really need it.

Testing cache.manifest on an iPad with Apache Web Server 2

I am trying to build an offline web app for the iPad, and I am trying to verify that the cache.manifest is being served correctly by Apache Web Server 2, and is working. I have added an 'AddType' for the .manifest extension to the mime-types configuration file for the Apache web server.
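For reference, the directive would typically look like this (adjust the extension to match your manifest's file name):

AddType text/cache-manifest .manifest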
If I look at the access logs, the first request to the cache manifest is returned with a 200 HTTP response code, and any further requests are served with a 304, which is 'not modified'. I take this to mean it is working. The assets (HTML, images) are returned with the same combination (200, then 304 as above), which also suggests it is working.
When I load it on the iPad, I get the page, but when I go offline and reload, the page fails to load because it has no connection to the internet.
I am serving it from the Apache web server on my Mac, so I'm having trouble testing it reliably from the same machine. Any ideas on what is going wrong, or how to verify it is working?
Testing the cache manifest is somewhat of a pain in general, but there are a few useful techniques.
First, start with testing it using Safari on the Mac directly. Just turn off Apache when you want to check it in offline mode.
In Safari, open the Activity monitor and look for any resources that are listed as "cancelled" -- those are typically ones that are missing from the manifest.
Also use the Web Inspector to check the Content-Type the manifest file is served with.
In most cases the problem is that you have resources in the application which aren't specified in the manifest; this causes the whole caching operation to fail. Unfortunately there's no method in the HTML5 API to list which resources failed; this would be supremely helpful to developers.
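A minimal sketch of a manifest listing every asset the page uses (file names are hypothetical); any resource the page requests that isn't listed, or that fails to download, causes the whole caching operation to fail:

CACHE MANIFEST
# v1 - change this comment to force clients to re-download the cache

CACHE:
index.html
css/style.css
js/app.js
images/logo.png

NETWORK:
*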
