My application has a strange problem. I have a login web service that authenticates users; it works well for everyone except a tester who is in Lebanon. For her, the request always fails. It turns out that the JSON response is not getting parsed for her.
My first guess was that her network location is using a proxy server that converts JSON to HTML, so I asked her to switch to a cellular network, but that didn't solve the problem either.
Please refer to the debug message in the screenshot below.
Any suggestions on what might be wrong would be greatly appreciated.
You'd really need the exact data that was received. JSON parsing is totally independent of any localisation; on the other hand, whatever service produced the JSON data may not be. There is a good chance that, being in Lebanon, that customer receives non-ASCII data (which should be just fine), while other customers don't. It is possible that the server sends that data not in UTF-8 but in, say, some Windows encoding. That would be fine for ASCII but not for non-ASCII data. Or it could be that the server figures out that full UTF-8 is needed rather than ASCII and transmits a byte order mark, which is not legal JSON and would likely produce the error message you received.
To reproduce, I'd try to set things up so that non-ASCII data is used, for example a username or password containing non-ASCII characters.
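You can see both failure modes in isolation with a short sketch (Python here purely for illustration; the payload, the Arabic value and the cp1256 codepage are my assumptions, not taken from your service):

    import json

    payload = {"user": "ليلى", "ok": True}          # illustrative non-ASCII value

    # Case 1: the server prepends a UTF-8 BOM. Once the client decodes the body
    # as UTF-8, the text starts with U+FEFF, which is not legal JSON.
    body = b"\xef\xbb\xbf" + json.dumps(payload, ensure_ascii=False).encode("utf-8")
    try:
        json.loads(body.decode("utf-8"))
    except json.JSONDecodeError as e:
        print("BOM:", e)                            # "Unexpected UTF-8 BOM (decode using utf-8-sig)"
    print(json.loads(body.decode("utf-8-sig")))     # stripping the BOM fixes it

    # Case 2: the server sends a Windows codepage (cp1256 here) while the client assumes UTF-8.
    body = json.dumps(payload, ensure_ascii=False).encode("cp1256")
    try:
        body.decode("utf-8")
    except UnicodeDecodeError as e:
        print("wrong charset:", e)                  # fails before the JSON parser even runs

Capturing the exact bytes your tester receives (not the already-decoded string) and running them through something like this would tell you which case you are in.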
I would like to use AContext.Connection.IOHandler.ReadLn in Indy 10 on my IdTCPServer, but I don't know how to use the ByteEncoding parameter. Does the client have to send a WideString or an AnsiString?
If you are in control of both the client and server code, then you decide what specific encoding the client sends the text as (UTF-8 is a good choice), and then code the server accordingly. Indy defaults to ASCII, but you can change that. However, if you are not in control of the client code, then you need to find out what encoding the client is actually using so that you can then code the server accordingly. Don't just make blind assumptions. Find out what is actually being used.
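The principle is language-independent: the client turns its string into bytes using some encoding, and the server must turn those bytes back into a string using the same encoding. A minimal sketch of that round trip in Python (not Indy code; the choice of UTF-8 is the assumption):

    # Client side: the text only exists on the wire as bytes, in whatever
    # encoding the client chose when it serialized the string. UTF-8 assumed here.
    line = "Grüße, 世界" + "\r\n"
    wire_bytes = line.encode("utf-8")

    # Server side, matching encoding: the text round-trips cleanly.
    print(wire_bytes.decode("utf-8"))

    # Server side, assuming 7-bit ASCII (the Indy default): the non-ASCII
    # characters cannot be represented and the text comes out mangled.
    print(wire_bytes.decode("ascii", errors="replace"))

In Indy terms that usually means passing a UTF-8 encoding as ReadLn's ByteEncoding argument (recent Indy 10 builds expose this as IndyTextEncoding_UTF8, older ones as a TIdTextEncoding value; check your version's API), assuming the client really does send UTF-8.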
I am using Delphi 7's HTTPReqResp component to send a digitally signed SOAP XML document to an HTTPS web service. I use Eldos XML BlackBox and have set all the transformAlgorithms, CanonicalizationMethod, signaturemethod, etc. to the ones the web service requires, and have confirmed this with a tech support officer.
I have validated the signature using XML BlackBox and also this XML Verifier website.
Both ways confirm the signature is correct. However, when I send the XML document via HTTPReqResp.execute, the response I get back is BadSignature (The signature value is invalid).
Originally, I received back different error messages due to XML errors (malformed, etc.). It appears that the service does all the standard formatting checks first, then attempts to validate the signature. Since I now get back the BadSignature response, the rest of the XML must be correct.
I suppose I have 2 questions here.
Does the HTTPReqResp component alter the XML?
Is it likely that the web service alters the XML?
The site is using Access Manager WebSEAL.
It's very likely that the receiving partner is getting a modified document somehow. Some minor modifications shouldn't affect the signature (that's the idea, at least) so you may want to check the following:
"Recommended" encoding used by the receiving partner. A very annoying practice by some receiving partners is to favor one form of encoding and completely ignore others. XML signatures should use utf-8 but I've seen servers that only accept iso-8859-1
Make sure you don't accidentally change encoding after signing.
Verify that the receiving partner is using a sane canonicalization method.
Verify with your receiving partner that no extraneous elements are being added to your document.
Also, have you tried posting this using the SecureBlackBox components? They also have an HTTP client that can do SSL, which can be used to verify the exact bytes being sent over the wire.
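To see why an innocent-looking re-encoding step breaks the signature, compare digests of the "same" fragment in two encodings (a Python sketch with placeholder data and SHA-1 chosen just for illustration, not your actual signed document):

    import hashlib

    # The "same" signed fragment, serialized in two different encodings.
    doc = '<Data>Überweisung €100</Data>'
    utf8_bytes   = doc.encode("utf-8")
    latin1_bytes = doc.encode("iso-8859-1", errors="replace")   # € has no ISO-8859-1 mapping

    # A signature covers bytes, not abstract characters, so any re-encoding
    # (by HTTPReqResp, a proxy, or the receiver) changes the digest.
    print(hashlib.sha1(utf8_bytes).hexdigest())
    print(hashlib.sha1(latin1_bytes).hexdigest())

Comparing the exact bytes you transmit with the bytes the receiver validates (a traffic capture on your side, a log on theirs) is usually the fastest way to find where the modification happens.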
I have made a SOAP API using WSDL, and I need to transfer the bytes of a file from the client to the server of the API. I am using an unsignedByte[] array type in the WSDL to describe the data to be passed. The WSDL description is as follows:
<complexType name="TByteArray">
  <complexContent>
    <restriction base="soapenc:Array">
      <sequence/>
      <attribute ref="soapenc:arrayType" n1:arrayType="xs:unsignedByte[]" xmlns:n1="http://schemas.xmlsoap.org/wsdl/"/>
    </restriction>
  </complexContent>
</complexType>
Sending the data as raw bytes gives an error.
Now I am base64-encoding the binary data before sending it in the SOAP API call.
It works fine if I send the data using a PHP client, but it does not work when I use a Delphi application to send the data.
Could converting the binary data to base64 be what prevents the Delphi application from sending it? Is unsignedByte simply not allowed to be transferred as base64? Does anyone know the details of this?
Thanks for your valuable comments, but I have solved this problem.
I removed the array code above and instead transferred the binary data as "xs:hexBinary":
<part name="abData" type="xs:hexBinary"/>
and it works like magic.
The binary is converted to hex automatically, and on receiving it the server converts it back to binary on its own...
Thanks.
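For anyone comparing the two representations, this is what the same bytes look like as xs:hexBinary versus the earlier base64 approach (a Python sketch; the sample bytes are made up):

    import base64
    import binascii

    data = bytes([0x00, 0xFF, 0x10, 0x20, 0xAB])              # arbitrary sample bytes

    # xs:hexBinary: two hex digits per byte, as carried by <part type="xs:hexBinary"/>
    print(binascii.hexlify(data).decode("ascii").upper())     # 00FF1020AB

    # base64: what the soapenc array approach was wrapping the bytes in
    print(base64.b64encode(data).decode("ascii"))             # AP8QIKs=

hexBinary uses two characters per byte versus roughly 1.33 for base64, so it is about 50% larger on the wire, but both are plain ASCII, so either avoids pushing raw bytes through the XML payload.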
I'm trying to build a web service using Ruby on Rails. Users authenticate themselves via HTTP Basic Auth. I want to allow any valid UTF-8 characters in usernames and passwords.
The problem is that the browser is mangling characters in the Basic Auth credentials before it sends them to my service. For testing, I'm using 'カタカナカタカナカタカナカタカナカタカナカタカナカタカナカタカナ' as my username (no idea what it means - AFAIK it's some random characters our QA guy came up with - please forgive me if it is somehow offensive).
If I take that as a string and do username.unpack("h*") to convert it to hex, I get: '3e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a83e28ba3e28fb3e28ba3e38a8' That seems about right for 32 katakana characters (3 bytes/6 hex digits per).
If I do the same with the username that's coming in via HTTP Basic auth, I get:
'bafbbaacbafbbaacbafbbaacbafbbaacbafbbaacbafbbaacbafbbaacbafbbaac'. It's obviously much shorter. Using the Firefox Live HTTP Headers plugin, here's the actual header that's being sent:
Authorization: Basic q7+ryqu/q8qrv6vKq7+ryqu/q8qrv6vKq7+ryqu/q8o6q7+ryqu/q8qrv6vKq7+ryqu/q8qrv6vKq7+ryqu/q8o=
That looks like that 'bafbba...' string, with the high and low nibbles swapped (at least when I paste it into Emacs, base64-decode it, then switch to hexl mode). That might be a UTF-16 representation of the username, but I haven't gotten anything to display it as anything but gibberish.
Rails is setting the Content-Type header's charset to UTF-8, so the browser should be sending in that encoding. I get the correct data for form submissions.
The problem happens in both Firefox 3.0.8 and IE 7.
So... is there some magic sauce for getting web browsers to send UTF-8 characters via HTTP Basic Auth? Am I handling things wrong on the receiving end? Does HTTP Basic Auth just not work with non-ASCII characters?
I want to allow any valid UTF-8 characters in usernames and passwords.
Abandon all hope. Basic Authentication and Unicode don't mix.
There is no standard(*) for how to encode non-ASCII characters into a Basic Authentication username:password token before base64ing it. Consequently every browser does something different:
Opera uses UTF-8;
IE uses the system's default codepage (which you have no way of knowing, other than that it's never UTF-8), and silently mangles characters that don't fit into it using the Windows ‘guess a random character that looks a bit like the one you wanted or maybe just not’ secret recipe;
Mozilla uses only the lower byte of character codepoints, which has the effect of encoding to ISO-8859-1 and mangling the non-8859-1 characters irretrievably... except when doing XMLHttpRequests, in which case it uses UTF-8;
Safari and Chrome encode to ISO-8859-1, and fail to send the authorization header at all when a non-8859-1 character is used.
*: some people interpret the standard to say that either:
it should always be ISO-8859-1, due to that being the default encoding for raw 8-bit characters included directly in headers;
it should be encoded using RFC2047 rules, somehow.
But neither of these proposals is on topic for inclusion in a base64-encoded auth token, and the RFC2047 reference in the HTTP spec really doesn't work at all, since all the places it might potentially be used are explicitly disallowed by the ‘atom context’ rules of RFC2047 itself, even if HTTP headers honoured the rules and extensions of the RFC822 family, which they don't.
In summary: ugh. There is little-to-no hope of this ever being fixed in the standard or in the browsers other than Opera. It's just one more factor driving people away from HTTP Basic Authentication in favour of non-standard and less-accessible cookie-based authentication schemes. Shame really.
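If you control the client (a script, an API client, an XMLHttpRequest) rather than a browser's login dialog, you can at least sidestep the ambiguity by building the header yourself from UTF-8 bytes. A sketch in Python; the credentials and the choice of UTF-8 are assumptions, and the server still has to agree to decode the token as UTF-8:

    import base64

    username = "カタカナ"
    password = "pässwörd"

    # Encode "username:password" as UTF-8 explicitly, then base64 it.
    # Which byte encoding to use here is exactly what the standard never pinned down.
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    headers = {"Authorization": "Basic " + token}

    # Receiving side: base64-decode and interpret the bytes as UTF-8 (an agreement, not a guarantee).
    raw = base64.b64decode(token)
    user, _, pwd = raw.decode("utf-8").partition(":")
    print(user, pwd)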
It's a known shortcoming that Basic authentication does not provide support for non-ISO-8859-1 characters.
Some UAs are known to use UTF-8 instead (Opera comes to mind), but there's no interoperability for that either.
As far as I can tell, there's no way to fix this, except by defining a new authentication scheme that handles all of Unicode. And getting it deployed.
HTTP Digest authentication is no solution for this problem, either. It suffers from the same problem of the client being unable to tell the server what character set it's using and the server being unable to correctly assume what the client used.
Have you tested using something like curl to make sure it's not a Firefox issue? The HTTP Auth RFC is silent on ASCII vs. non-ASCII, but it does say the value passed in the header is the username and the password separated by a colon, and I can't find a colon in the string that Firefox reports sending.
If you are coding for Windows 8.1, note that the sample in the documentation for HttpCredentialsHeaderValue (wrongly) uses UTF-16 encoding. A reasonably good fix is to switch to UTF-8 (as ISO-8859-1 is not supported by CryptographicBuffer.ConvertStringToBinary).
See http://msdn.microsoft.com/en-us/library/windows/apps/windows.web.http.headers.httpcredentialsheadervalue.aspx.
Here is a workaround we used today to circumvent the issue of non-ASCII characters in a colleague's password:
curl -u "USERNAME:`echo -n 'PASSWORD' | iconv -f ISO-8859-1 -t UTF-8`" 'URL'
Replace USERNAME, PASSWORD and URL with your values. This example uses shell command substitution to transform the password's character encoding to UTF-8 before executing the curl command.
Note: I used a ` ... ` evaluation here instead of $( ... ) because it doesn't fail if the password contains a ! character... [shells love ! characters ;-)]
Illustration of what happens with non-ASCII characters:
echo -n 'zz<zz§zz$zz-zzäzzözzüzzßzz' | iconv -f ISO-8859-1 -t UTF-8
I might be totally ignorant here, but I came to this post while looking into a problem sending a UTF-8 string as a header inside an AJAX call.
I solved my problem by Base64-encoding the string right before sending it. That means that with some simple JS you can convert the form to Base64 right before submitting, and that way it can be converted back on the server side.
This simple trick allowed me to send UTF-8 strings as plain ASCII. I found it thanks to this simple sentence:
base64 (this encoding is designed to make binary data survive transport through transport layers that are not 8-bit clean). http://www.webtoolkit.info/javascript-base64.html
I hope this helps somehow. Just trying to give back a little bit to the community!
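For completeness, the receiving side of that trick is just the reverse: base64-decode the header value, then decode the bytes as UTF-8. A small Python sketch (the header name and value are invented for illustration):

    import base64

    # What the JavaScript side would send, e.g. in a custom header:
    #   X-Display-Name: 44Kr44K/44Kr44OK
    header_value = base64.b64encode("カタカナ".encode("utf-8")).decode("ascii")

    # Server side: the header travels as plain ASCII, so it survives any transport,
    # and the original UTF-8 text is recovered by reversing the two steps.
    display_name = base64.b64decode(header_value).decode("utf-8")
    print(header_value, "->", display_name)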