What is strict MIME type checking?

When running a website, I get the following error in my browser's console (Edge, in this case):
Refused to apply style from 'https://example.com/products/styles.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
I've done various searches and can see solutions posted for the various scenarios that cause this issue, but none of them explain what strict MIME type checking is, or how I can determine what the supported MIME types are for various resources. I'm looking for a good general explanation of what strict MIME checking is and how it determines which MIME types are acceptable.
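In practice, strict MIME checking means the browser compares the Content-Type header the server sends against the type the context requires (text/css for a stylesheet) and refuses the resource on a mismatch; a text/html value usually means the server answered with an HTML error or redirect page instead of the CSS file. A minimal Python sketch for inspecting what the server actually returns, using the URL from the error message:

import urllib.request

# Print the status and Content-Type the server really sends for the
# stylesheet; strict MIME checking requires a CSS type (text/css) here.
with urllib.request.urlopen("https://example.com/products/styles.css") as resp:
    print(resp.status, resp.headers.get("Content-Type"))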

Related

How to implement Schema.org on HTTPS pages?

Is it correct to statically set up Microdata's itemtype attribute with an HTTP value (http://schema.org/WebPage) on HTTPS pages, or do I need to use an HTTPS value (https://schema.org/WebPage) on all pages?
Since both HTTP and HTTPS versions of the site are available, can I set it to the protocol-relative //schema.org/WebPage or not?
tl;dr: Use http URIs.
In this answer on Webmasters SE I explained why you should favor http over https Schema.org URIs: The http URIs seem to be canonical, as the actual definition of the Schema.org vocabulary only defines http, not https. In addition: all examples (even on HTTPS) use the HTTP variant, the authors mentioned that they prefer to see the use of the HTTP variant, and RDFa’s Initial Context defines the HTTP variant only (so most of the RDF world will use HTTP).
In this answer on Webmasters SE I explained why you should not use protocol-relative URIs for vocabularies: Vocabulary URIs typically don't get dereferenced, and nothing from a vocabulary ever gets embedded in the page, so there is absolutely no need to use HTTPS for them just because you use HTTPS (it's similar to simply linking to an external page, which might not even be accessible via HTTPS). On top of that, your Schema.org markup would no longer work if the document is accessed via a protocol other than HTTP/HTTPS, and it's likely that some parsers won't be able to recognize that you are using the Schema.org vocabulary, because they might look for full URIs without applying URI resolution to the itemtype attribute.
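To illustrate the URI-resolution point: a consumer that wants to be tolerant could normalize the scheme before matching itemtype values. A minimal Python sketch (the normalize_itemtype helper is made up for this example):

# Hypothetical helper: collapse https:// and protocol-relative Schema.org
# itemtype values to the canonical http:// form before comparing.
def normalize_itemtype(itemtype):
    for prefix in ("https://schema.org/", "//schema.org/"):
        if itemtype.startswith(prefix):
            return "http://schema.org/" + itemtype[len(prefix):]
    return itemtype

print(normalize_itemtype("https://schema.org/WebPage"))  # http://schema.org/WebPage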
There's been an update to that answer on Webmasters SE (dated November 2015), with a link to the schema.org FAQ about https:
Q: Should we write https://schema.org or http://schema.org in our markup?
The short of it is that schema.org will be moving to https, and you can use https URLs now, but there's no rush to switch.
Regarding protocol-relative URLs… please don't use them as they're a hack. Favor use of absolute or root-relative URLs whenever hyperlinking documents on the Web.
Is it correct to statically set up Microdata’s itemtype attribute with HTTP value [...]?
Either HTTP or HTTPS is fine in your itemtype according to the Schema.org FAQ. Your examples containing HTTP and HTTPS schemes are both correct for pages served with and without TLS.
If you've got a mix of absolute URLs pointing to different schemes, it's more likely someone will notice it and wonder why things aren't consistent. So when you update, refactor your existing itemtypes for consistency.

Sending Signed XML to secure WebService returns BadSignature

I am using Delphi 7's HTTPReqResp component to send a digitally signed SOAP XML document to an HTTPS web service. I use Eldos XML BlackBox and have set all the transformAlgorithms, CanonicalizationMethod, signaturemethod, etc. to the ones the web service requires, and have confirmed this with a tech support officer.
I have validated the signature using XML BlackBox and also this XML Verifier website.
Both ways confirm the signature is correct. However, when I send the XML document via HTTPReqResp.execute, the response I get back is BadSignature (The signature value is invalid).
Originally, I received back different error messages due to XML errors (malformed, etc.). It appears that the service does all the standard formatting checks first, then attempts to validate the signature. Since I get back the BadSignature response, the rest of the XML must be correct.
I suppose I have 2 questions here.
Does the HTTPReqResp component alter the XML?
Is it likely the web service alters the XML?
The site is using Access Manager WebSEAL.
It's very likely that the receiving partner is getting a modified document somehow. Some minor modifications shouldn't affect the signature (that's the idea, at least) so you may want to check the following:
"Recommended" encoding used by the receiving partner. A very annoying practice by some receiving partners is to favor one form of encoding and completely ignore others. XML signatures should use utf-8 but I've seen servers that only accept iso-8859-1
Make sure you don't accidentally change the encoding after signing (see the sketch after this list).
Verify that the receiving partner is using a sane canonicalization method.
Verify with your receiving partner that no extraneous elements are being added to your document.
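The encoding point is easy to demonstrate: the signature is computed over the exact bytes of the document, so re-encoding the same characters after signing yields different bytes and a different digest. A minimal Python sketch:

import hashlib

xml = "<doc>caf\u00e9</doc>"  # same logical content either way
utf8 = hashlib.sha256(xml.encode("utf-8")).hexdigest()
latin1 = hashlib.sha256(xml.encode("iso-8859-1")).hexdigest()
print(utf8 == latin1)  # False: different bytes, so a signature over them breaks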
Also, have you tried to post this using the SecureBlackBox components? They also have an HTTP client that can do SSL, which can also be used to verify the bytes being sent over the wire.

What does "No Native to Message converter set" mean?

I need to talk to a web service and thus I imported its WSDL. I now try to call it, but it reports this exception: "No Native to Message converter set". Very, very irritating, especially since I have no permission to post code snippets from this service here. Still, I have to try... Does anyone have suggestions about how to fix this error?
The error is generated in rio.pas in the function TRIO.Generic. This line:
if not Assigned(FConverter) then
  raise Exception.Create(SNoMessageConverter);
For unknown reasons, FConverter is set to nil, so the exception is raised. This happens even before the request is sent; nothing goes out to the service, since Delphi raises the exception before it can even call it.
WSDL Import options, checked options:
One Outparam is Return
Unwind literal params
Generate destructors
Warning comments
Map string to widestring
Generate verbose information about types and interfaces
Ignore porttypes with HTTP bindings
Do not emit unused types
Validate enumeration types
Import fault types
Import header types
Process included and imported schemas
Generate class aliases as class types
Process nillable and optional elements
Actually, my system is new: Delphi was installed about 3 days ago, and importing this WSDL was the first thing I did, basically using these default settings.
Use SoapUI to consume the WSDL and make a mock service. Point your app at your SoapUI mock service, and you can capture your outbound requests. Now you can turn around and submit those requests to the real service and see the response. That should give you an idea of where the message is coming from, i.e. is it coming from Delphi's SOAP library as a result of something it doesn't understand, or from the web service itself, as a result of something that it didn't understand in your request?
Alternatively, you can do this in Delphi: intercept the inbound/outbound XML by leveraging the RIO_BeforeExecute/RIO_AfterExecute events of your HttpRIO object.
If your traffic is plain HTTP (harder with SSL, but possible), you can also intercept it with Fiddler2.
Once you have the raw XML, submit requests with SoapUI, and see what you get. You may find that your requests need "tweaking", or if everything looks fine in SoapUI, you may need to tweak the responses before de-serialization.
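If SoapUI isn't at hand, even a throwaway local listener is enough to capture the raw outbound envelope: point the client at it and dump whatever arrives. A minimal Python sketch (hypothetical port, plain HTTP only):

from http.server import BaseHTTPRequestHandler, HTTPServer

class CaptureHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print(self.headers)  # SOAPAction, Content-Type, ...
        print(self.rfile.read(length).decode("utf-8", errors="replace"))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<ok/>")  # placeholder; a real mock would return a valid envelope

HTTPServer(("127.0.0.1", 8080), CaptureHandler).serve_forever()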

What is the "Best Practice" for SOAP servers to implement error notification?

I am developing some SOAP web services using Ruby on Rails and considering how to handle generic failures. These generic errors are applicable to all the methods within the service and include the following:
Missing Request element
Missing Authentication element (Custom)
Invalid Authentication details
I can intercept these errors within my controller before calling the relevant method and respond appropriately. My question is which implementation is easiest to manage from a client perspective. My options for handling these errors seem to be as follows.
Raise an exception and let the SOAP service generate a SoapFault. This would be fine except I have little (no) control over the structure of the message contained within the SOAP fault.
Return an Http 400 response with an agreed data structure to indicate the error message. This structure would not be defined within the WSDL though.
Include a Status element in all responses, whether successful or not and have that status element include a code and an array of error data (Including error messages).
Option three seems like the best solution, but it is also the most error-prone to implement, as the implementation of web services in ROR precludes me from doing this in a generic way; each method becomes responsible for checking the result of the checks and rendering an appropriate response. Admittedly this would be a single function call and return on failure, but it relies on the developer remembering to do this as we add more operations.
I appreciate that most ROR developers will say that this should be implemented as a REST service, and I agree; in fact, we already have REST services to do this. But the spread of SOAP in the corporate world, and its impressive tooling support, means that we have to provide SOAP services to remain competitive.
In your experience, what would be the easiest implementation for clients to handle, and does this differ depending upon the libraries/language of the client process?
A SoapFault would be the preferred way to signify errors. SoapFaults can contain additional information in their <detail> element.
The advantage of a SoapFault over some status element is that the caller can use standard exception handling, instead of checking for some status field.
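For illustration, a SOAP 1.1 fault carrying application-specific data in its <detail> element could look like the following (the err namespace and its code/message elements are invented for this example):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring>Invalid Authentication details</faultstring>
      <detail>
        <err:error xmlns:err="http://example.com/errors">
          <err:code>AUTH_INVALID</err:code>
          <err:message>The supplied credentials could not be verified</err:message>
        </err:error>
      </detail>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>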

Sending custom HTTP error information to Flash, JavaScript, etc

I'm developing a REST API at the moment, and one of its core features is that it uses a variety of HTTP status codes to return status/error information, some of which may be extended information (e.g. if an item is not found, some other similar items) that will be in the response body.
This is fine until you get to 'crippled' clients like Flash and JavaScript, which can't access the response body or headers unless the HTTP status code is 200 OK (even a 201 Created success code can cause Flash to fail, thinking it's an error).
So my question is, is there a standard way for allowing this type of client to request that all status codes are HTTP 200, and to indicate the real status code in another way?
One solution I was thinking of is, in the pattern of the HTTP Accept-* family of headers, using an X-Accept-Status extension header to specify which status codes can be handled, e.g. Flash would send...
X-Accept-Status: 200
...and then any status code not in this list would be mapped to one that is, and the error returned in the response body, possibly with another extension header indicating the real status code, e.g.
X-HTTP-Status-Code: 404 Not Found
This all seems a bit horrible, and working against the protocol, but if you have clients that cannot use the protocol properly then that's unavoidable. I'm just looking for something a bit like X-HTTP-Method-Override (which is a 'standard' way of working around the protocol for clients that cannot send PUT/DELETE requests), but for clients that cannot understand status codes.
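Server-side, that mapping could be done once in middleware rather than per endpoint. A minimal WSGI sketch in Python, using the hypothetical X-Accept-Status and X-HTTP-Status-Code headers proposed above:

class StatusTunnelMiddleware:
    # If the client sent X-Accept-Status, rewrite any response status not in
    # its list to 200 OK and carry the real one in X-HTTP-Status-Code.
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        accepted = environ.get("HTTP_X_ACCEPT_STATUS")  # e.g. "200, 201"

        def patched_start(status, headers, exc_info=None):
            if accepted is not None:
                codes = [c.strip() for c in accepted.split(",")]
                if status.split(" ", 1)[0] not in codes:
                    headers = list(headers) + [("X-HTTP-Status-Code", status)]
                    status = "200 OK"
            return start_response(status, headers, exc_info)

        return self.app(environ, patched_start)

# usage: wrapped_app = StatusTunnelMiddleware(original_wsgi_app)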
Well, actually the problem with HTTP and REST is that REST is a really good idea and HTTP describes a really good implementation of it ... but really, many clients and servers only implement part of HTTP ...
I don't think HTTP is a must ... still, REST is a good idea, and RESTfulness of a system is a powerful property ... so why not use HTTP as a stupid transport layer for a RESTful system?
This is what you are doing, although in my opinion you are holding on a bit too much to HTTP and all its theoretically built-in features ... do you really need to transport the information in a status code?
Don't depend so much on your transport protocol/layer ... have a clear idea in mind of how your service should work ... separate the protocol semantics from its implementation ... on both client and server ... abstract your RESTfulness and status codes too (make them more than just integers ... make them enums, or objects ... exceptions, why not?) ...
And then plug in protocols/transport layers at will ...
make a standard HTTP implementation
make a hacky one, using the solution you described (which to me seems perfectly valid ... if people are using technologies unable to use the standards, why should you bother too much with finding the most standards-conformant solution?)
make whatever you have the time to do and your server is able to do: binary, JSON, XML ... whatever seems adequate ...
Two technical notes, though:
Flash Player does its HTTP traffic through the browser ... and it simply does not get the status codes from the browser ... well, it depends on the browser, in fact ... the specs say it does not work for: "Netscape, Mozilla, Safari, Opera, and Internet Explorer for the Macintosh." ... so IE for Windows should be working? Chrome? I don't know ... but I think it doesn't matter, since obviously you cannot rely on it ... oh, and to state the most obvious: JavaScript also does its HTTP through the browser, of course ... so the same problem applies here ...
For both, this implies that if you were to succeed in finding something like X-HTTP-Method-Override for responses that was built into the protocol, a good browser would understand it and would remap things accordingly before deciding which information to give to JavaScript or 3rd-party plugins ... so you'd end up with nothing again ... I guess ...
You should simply choose your response method based on the client ... and maybe the client should send some extra info if it is unable to use the HTTP standard ... otherwise throw at it what follows the standard ... I'd first make an implementation using standard HTTP, yet hiding the HTTP itself away, and once everything works, write one using the hacky approach described above ...
greetz
back2dos
Am I wrong for thinking that one shouldn't let a crippled out-of-the-box potential client of the API dictate the features of the API implementation? I guess practical considerations win the day, but in general my vote is in favor of building API implementations "properly" and requiring custom client-side programming as needed.
Bit late for that response, but...
When I implemented a Flash client API with an early version of OpenRasta, I had an X-ResponseLine header that contained the response code and text on each outgoing response.
As custom headers are by default only generic headers, they have no involvement in caching, so there is no reason to have an Accept/Vary on this.
