It seems that AFHTTPSessionManager sets a random boundary string. However, the server I am accessing verifies this boundary string to determine whether the request is legitimate.
Can I specify this boundary string myself?
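AFHTTPSessionManager is part of AFNetworking (Objective-C), so the exact call depends on that API; as a general illustration of pinning the multipart boundary instead of letting the client generate one, here is a minimal Java sketch. The boundary value, field name, and endpoint are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class FixedBoundaryUpload {
    // Hypothetical boundary value the server expects; not taken from AFNetworking.
    private static final String BOUNDARY = "MyFixedBoundary123";

    public static void main(String[] args) throws Exception {
        // Assemble the multipart body by hand so the boundary stays under our control.
        String body = "--" + BOUNDARY + "\r\n"
                + "Content-Disposition: form-data; name=\"field\"\r\n\r\n"
                + "value\r\n"
                + "--" + BOUNDARY + "--\r\n";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/upload")) // placeholder endpoint
                .header("Content-Type", "multipart/form-data; boundary=" + BOUNDARY)
                .POST(HttpRequest.BodyPublishers.ofString(body, StandardCharsets.UTF_8))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}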
I am trying to validate the JWKS stored in the resource server. One of the checks I have implemented is to take the kid that I decode from the JWT header and look it up in the JWKS configured on the resource server.
I went through the JWK RFC for "kid". The RFC says that kid is a case-sensitive string, but it is not clear what values a kid can hold. Is it valid for a kid to consist only of numeric characters? Also, is there a maximum limit on the number of characters a kid can hold?
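A minimal sketch of that check, assuming the Nimbus JOSE+JWT library (also linked in the answer below); how the token and the JWKS document are obtained is left out:

import java.text.ParseException;

import com.nimbusds.jose.jwk.JWK;
import com.nimbusds.jose.jwk.JWKSet;
import com.nimbusds.jwt.SignedJWT;

public class KidCheck {

    // Returns the JWK whose kid matches the JWT header, or throws if none is configured.
    public static JWK findKeyForToken(String token, String jwksJson) throws ParseException {
        // The kid is an opaque, case-sensitive string; compare it exactly as-is.
        String kid = SignedJWT.parse(token).getHeader().getKeyID();

        JWK match = JWKSet.parse(jwksJson).getKeyByKeyId(kid);
        if (match == null) {
            throw new IllegalStateException("No key with kid '" + kid + "' in the configured JWKS");
        }
        return match;
    }
}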
It can really be just any kind of string, as long as it is unique for each key in the JWKS.
According to RFC 7517:
The structure of the "kid" value is unspecified.
I've seen UUIDs, numbers, timestamps and thumbprints (a hash of the key) used as the kid.
A JWK thumbprint can also be used as a key identifier which is practically guaranteed to be unique.
(from https://connect2id.com/products/nimbus-jose-jwt/examples/jwk-thumbprints)
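For example, with the Nimbus JOSE+JWT library the RFC 7638 SHA-256 thumbprint can be computed and used as the kid (a sketch; the key is generated here only for illustration):

import com.nimbusds.jose.JOSEException;
import com.nimbusds.jose.jwk.RSAKey;
import com.nimbusds.jose.jwk.gen.RSAKeyGenerator;
import com.nimbusds.jose.util.Base64URL;

public class ThumbprintKid {
    public static void main(String[] args) throws JOSEException {
        // Generate a throwaway RSA key just to have something to thumbprint.
        RSAKey key = new RSAKeyGenerator(2048).generate();

        // SHA-256 JWK thumbprint (RFC 7638): deterministic and effectively unique per key.
        Base64URL kid = key.computeThumbprint();
        System.out.println("kid = " + kid);
    }
}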
Regarding the length of the kid, there is no limit mentioned in the RFC, but a JWK is a JSON object, and the kid is usually also part of a JWT header, which is also a JSON object. JSON itself has no inherent size limitation, but according to RFC 7159, Section 9:
An implementation may set limits on the length and character contents of strings.
So in theory it is limited by the implementation, but in practice you won't run into any severe limitations as long as you use a reasonable size suitable for a unique identifier.
I have an application that has a list of IDs as part of the URL, but soon the number of IDs is going to increase drastically. What is the best method for passing a large number of IDs in the URL? There can be 200 or more IDs at a time.
You could encode your ID array in a string (JSON is an easy format for that) and transmit it as a single variable via POST.
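For example, in Java the IDs could be serialized to a JSON array and sent in a POST body (a sketch; the endpoint and IDs are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.stream.Collectors;

public class PostIds {
    public static void main(String[] args) throws Exception {
        List<Long> ids = List.of(101L, 102L, 103L); // placeholder: could be 200+ IDs

        // Serialize the IDs as a JSON array, e.g. [101,102,103].
        String json = ids.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(",", "[", "]"));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/items/lookup")) // placeholder endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}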
Plain GET parameters, and even the URL itself, have limits on their length that cannot be avoided. Most web servers also have security filters in place that won't accept more than a certain number of parameters (e.g. Suhosin).
See:
What is the maximum length of a URL in different browsers?
What is apache's maximum url length?
http://www.suhosin.org/stories/configuration.html
I've encountered a situation whereby a GET request for a URL where the query string contains unencoded special characters returns 200 OK for the URL as-is but returns a 400 Bad Request when the special characters are url-encoded.
A simplified example of such a URL is: http://example.com/??foo/bar+foobar
Note the double question mark such that the entire query string is a single key-value pair with a null value.
URL-encoding the query string key gives us: http://example.com/?%3Ffoo%2Fbar+foobar
A GET request for this URL containing encoded characters will return 400 Bad Request.
The application handling these URLs (a third party I don't control) appears to not like the query string key to contain url-encoded equivalents of non-alphanumeric characters.
I was under the assumption that http://example.com/??foo/bar+foobar and http://example.com/?%3Ffoo%2Fbar+foobar should be equivalent and consequently interchangeable. This may be an invalid assumption.
Are these two URLs equivalent and consequently interchangeable?
Is it the application that handles these URLs at fault for not treating these URLs as equivalent or is it my application, which is applying url-encoding to query string keys and values, at fault for applying such encoding?
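To see why the two forms are not simply interchangeable in practice, here is a small Java sketch using java.net.URI with the example URLs above (this only illustrates generic URI parsing, not the behaviour of the third-party application): both URLs decode to the same query text, but their raw, on-the-wire queries differ, and a server is free to compare the raw forms.

import java.net.URI;

public class QueryComparison {
    public static void main(String[] args) {
        URI plain   = URI.create("http://example.com/??foo/bar+foobar");
        URI encoded = URI.create("http://example.com/?%3Ffoo%2Fbar+foobar");

        // Raw queries differ: "?foo/bar+foobar" vs "%3Ffoo%2Fbar+foobar".
        System.out.println(plain.getRawQuery());
        System.out.println(encoded.getRawQuery());

        // Decoded queries are the same text: "?foo/bar+foobar".
        System.out.println(plain.getQuery());
        System.out.println(encoded.getQuery());
    }
}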
I have noticed that Google does not encode all special characters in the query part of the URL. For example:
Placing this string in Google's search: !@#$%^&*()
Yields this URL: https://www.google.com/#q=!%40%23%24%25^%26*()
Notice that the !, ^, *, (, and ) are not encoded.
Some of the characters such as : or < are considered unsafe or reserved, yet Google doesn't encode them.
Can someone explain why Google does this, and if they have a reference document as to exactly what characters get encoded and which don't?
Thanks for any help!
As documented here:
Some characters are not safe to use in a URL without first being encoded. Because a Google search request is made by using an HTTP URL, the search request must follow URL conventions, including character encoding, where necessary.
The HTTP URL syntax defines that only alphanumeric characters, the special characters $-_.+!*'(), and the reserved characters ;/?:@=& can be used as values within an HTTP URL request. Since reserved characters are used by the search engine to decode the URL, and some special characters are used to request search features, all non-alphanumeric characters used as a value to an input parameter must be URL-encoded.
To URL-encode a string:
Replace space characters with a "+" character.
Replace each non-alphanumeric character by its hexadecimal ASCII value, in the format of a "%" character followed by two hexadecimal digits. (Such an ASCII value may be referred to as an escape code.)
Some input parameters require that the values passed to Google search are double-URL-encoded. This requirement means that you must apply the URL encoding to the string twice in succession to generate the final value.
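As a rough illustration of the single and double encoding steps described above, here is a Java sketch using java.net.URLEncoder (note that URLEncoder additionally leaves '.', '-', '*' and '_' unencoded):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DoubleEncode {
    public static void main(String[] args) {
        String value = "foo bar&baz";

        // Single encoding: spaces become '+', most other non-alphanumerics become %XX.
        String once = URLEncoder.encode(value, StandardCharsets.UTF_8);
        System.out.println(once);   // foo+bar%26baz

        // Double encoding: encode the already-encoded string again,
        // so '%' becomes %25 and '+' becomes %2B.
        String twice = URLEncoder.encode(once, StandardCharsets.UTF_8);
        System.out.println(twice);  // foo%2Bbar%2526baz
    }
}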
What are the valid characters that can be used in a URL query variable?
I'm asking because I would like to create GUIDs of minimal string length by using the largest character set so long as they can be passed as a URL query variable (www.StackOverflow.com?query=guiddaf09834fasnv)
Edit
If you want to encode a UUID/GUID or any other information represented in a byte array into a URL-friendly string, you can use this method from the Apache Commons Codec library:
Base64.encodeBase64URLSafeString(byte[])
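For example, a UUID/GUID can be packed into its 16 raw bytes and run through that method, giving a 22-character URL-safe string (a sketch using Apache Commons Codec):

import java.nio.ByteBuffer;
import java.util.UUID;

import org.apache.commons.codec.binary.Base64;

public class ShortGuid {
    public static void main(String[] args) {
        UUID uuid = UUID.randomUUID();

        // Pack the 128-bit UUID into 16 raw bytes.
        byte[] bytes = ByteBuffer.allocate(16)
                .putLong(uuid.getMostSignificantBits())
                .putLong(uuid.getLeastSignificantBits())
                .array();

        // URL-safe Base64 ('-' and '_' instead of '+' and '/', no padding): 22 characters.
        String shortId = Base64.encodeBase64URLSafeString(bytes);
        System.out.println(shortId);
    }
}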
When in doubt, just go to the RFC.
Note: A query variable is not dealt with any differently than the rest of the URL.
From Section 2.2, "URL Character Encoding Issues":
... only alphanumerics, the special characters "$-_.+!*'(),", and reserved characters used for their reserved purposes may be used unencoded within a URL.