After viewing this previous SO question regarding percent encoding, I'm curious as to which style of encoding is correct - the Wikipedia article on percent encoding alludes to using + instead of %20 for spaces, while still having an application/x-www-form-urlencoded content type.
This leads me to think the + vs. %20 behavior depends on which part of the URL is being encoded. What differences are preferred for path segments vs. query strings? Details and references for this specification would be greatly appreciated.
Note: I assume that non-alphanumeric characters will be encoded via UTF-8, in that each octet of a character becomes a %XX string. Correct me if I am wrong here (for instance Latin-1 instead of UTF-8), but I am more interested in the differences between the encodings of different parts of a URL.
This leads me to think the + vs. %20 behavior depends on which part of the URL is being encoded.
Not only does it depend on the particular URL component, but it also depends on the circumstances in which that component is populated with data.
The use of '+' for encoding space characters is specific to the application/x-www-form-urlencoded format, which applies to webform data that is being submitted in an HTTP request. It does not apply to a URL itself.
The application/x-www-form-urlencoded format is formally defined by W3C in the HTML specifications. Here is the definition from HTML 4.01:
Section 17.13.3 Processing form data, Step four: Submit the encoded form data set
This specification does not specify all valid submission methods or content types that may be used with forms. However, HTML 4 user agents must support the established conventions in the following cases:
• If the method is "get" and the action is an HTTP URI, the user agent takes the value of action, appends a `?' to it, then appends the form data set, encoded using the "application/x-www-form-urlencoded" content type. The user agent then traverses the link to this URI. In this scenario, form data are restricted to ASCII codes.
• If the method is "post" and the action is an HTTP URI, the user agent conducts an HTTP "post" transaction using the value of the action attribute and a message created according to the content type specified by the enctype attribute.
Section 17.13.4 Form content types, application/x-www-form-urlencoded
This is the default content type. Forms submitted with this content type must be encoded as follows:
1. Control names and values are escaped. Space characters are replaced by '+', and then reserved characters are escaped as described in [RFC1738], section 2.2: Non-alphanumeric characters are replaced by '%HH', a percent sign and two hexadecimal digits representing the ASCII code of the character. Line breaks are represented as "CR LF" pairs (i.e., '%0D%0A').
2. The control names/values are listed in the order they appear in the document. The name is separated from the value by '=' and name/value pairs are separated from each other by '&'.
The corresponding HTML5 definitions (Section 4.10.22.3 Form submission algorithm and Section 4.10.22.6 URL-encoded form data) are far more refined and detailed, but for purposes of this discussion, the gist is roughly the same.
So, in the situation where the webform data is submitted via an HTTP GET request instead of a POST request, the webform data is encoded using application/x-www-form-urlencoded and placed as-is in the URL query component.
Per RFC 3986: Uniform Resource Identifier (URI): Generic Syntax:
URI producing applications should percent-encode data octets that correspond to characters in the reserved set unless these characters are specifically allowed by the URI scheme to represent data in that component.
'+' is a reserved character:
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
The query component explicitly allows unencoded '+' characters, as it allows characters from sub-delims:
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
pct-encoded = "%" HEXDIG HEXDIG
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
query = *( pchar / "/" / "?" )
So, in the context of a webform submission, spaces are encoded using '+' prior to then being put as-is into the query component. This is allowed by the URL syntax, since the encoded form of application/x-www-form-urlencoded is compatible with the definition of the query component.
So, for example: http://server/script?field=hello+world
However, outside of a webform submission, putting a space character directly into the query component requires the use of pct-encoded, since ' ' is not included in either unreserved or sub-delims, and is not explicitly allowed by the query definition.
So, for example: http://server/script?hello%20world
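For illustration, both encodings can be produced with standard JavaScript APIs (a small sketch; the field name is just an example):
// form-style serialization (application/x-www-form-urlencoded) uses '+' for spaces
new URLSearchParams({ field: 'hello world' }).toString(); // "field=hello+world"
// generic percent-encoding of a query value uses %20 instead
'field=' + encodeURIComponent('hello world'); // "field=hello%20world"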
Similar rules also apply to the path component, due to its use of pchar:
path = path-abempty ; begins with "/" or is empty
/ path-absolute ; begins with "/" but not "//"
/ path-noscheme ; begins with a non-colon segment
/ path-rootless ; begins with a segment
/ path-empty ; zero characters
path-abempty = *( "/" segment )
path-absolute = "/" [ segment-nz *( "/" segment ) ]
path-noscheme = segment-nz-nc *( "/" segment )
path-rootless = segment-nz *( "/" segment )
path-empty = 0<pchar>
segment = *pchar
segment-nz = 1*pchar
segment-nz-nc = 1*( unreserved / pct-encoded / sub-delims / "@" )
; non-zero-length segment without any colon ":"
So, although path does allow for unencoded sub-delims characters, a '+' character gets treated as-is, not as an encoded space. application/x-www-form-urlencoded is not used with the path component, so a space character has to be encoded as %20 due to the definitions of pchar and segment-nz-nc.
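A corresponding sketch for the path case (server name and values are made up):
// a space in a path segment must be percent-encoded as %20
'http://server/' + encodeURIComponent('hello world'); // "http://server/hello%20world"
// a '+' in a path is a literal plus, not an encoded space:
// http://server/a+b refers to the segment "a+b"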
Now, regarding the charset used to encode characters -
For a webform submission, that charset is dictated by rules defined in the webform encoding algorithm (more so in HTML5 than HTML4) used to prepare the webform data prior to inserting it into the URL. In a nutshell, the HTML can specify an accept-charset attribute or hidden _charset_ field directly in the <form> itself, otherwise the charset is typically the charset used by the parent HTML.
However, outside of a webform submission, there is no formal standard for which charset is used to encode non-ASCII characters in a URL component (the IRI syntax, on the other hand, requires UTF-8, especially when converting an IRI into a URI/URL). Outside of IRI, it is up to particular URI schemes to dictate their charsets (the HTTP scheme does not), otherwise the server decides which charset it wants to use. Most schemes/servers use UTF-8 nowadays, but there are still some servers/schemes that use other charsets, typically based on the server's locale (Latin-1, Shift-JIS, etc). There have been attempts to add charset reporting directly in the URL and/or in HTTP (such as Deterministic URI Encoding), but those are not commonly used.
Related
My Chrome version 101 allows me to open
https://%65%78%61%6D%70%6C%65%2E%63%6F%6D (https://example.com, encoded except for the https://.)
but not
https://%65%78%61%6D%70%6C%65%2E%63%6F%6D%2F%74%65%73%74 (https://example.com/test, with the path delimiter / also encoded).
Exactly what parts and what characters of a URL can be URL-encoded, according to the latest specification?
By “parts,” I mean the scheme, username, password, host, port, path, query, fragment, ., :, //, #, ?, @, et cetera.
By “what characters,” I mean “characters of what value in what part.”
By the specification
From RFC 3986.
2.1. Percent-Encoding
….
pct-encoded = "%" HEXDIG HEXDIG
The uppercase hexadecimal digits “A” through “F” are equivalent to the lowercase digits “a” through “f,” respectively. If two URIs differ only in the case of hexadecimal digits used in percent-encoded octets, they are equivalent. For consistency, URI producers and normalizers should use uppercase hexadecimal digits for all percent-encodings.
Percent-encoding is case-insensitive.
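A trivial JavaScript illustration of that equivalence:
decodeURIComponent('%7e') === decodeURIComponent('%7E'); // true - both decode to "~"
// producers should still emit the uppercase form, "%7E"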
2.2. Reserved Characters
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="
The purpose of reserved characters is to provide a set of delimiting characters that are distinguishable from other data within a URI. URIs that differ in the replacement of a reserved character with its corresponding percent-encoded octet are not equivalent. Percent-encoding a reserved character, or decoding a percent-encoded octet that corresponds to a reserved character, will change how the URI is interpreted by most applications. Thus, characters in the reserved set are protected from normalization and are therefore safe to be used by scheme-specific and producer-specific algorithms for delimiting data subcomponents within a URI.
A subset of the reserved characters (gen-delims) is used as delimiters of the generic URI components described in Section 3. A component’s ABNF syntax rule will not use the reserved or gen-delims rule names directly; instead, each syntax rule lists the characters allowed within that component (i.e., not delimiting it), and any of those characters that are also in the reserved set are “reserved” for use as subcomponent delimiters within the component. Only the most common subcomponents are defined by this specification; other subcomponents may be defined by a URI scheme’s specification, or by the implementation-specific syntax of a URI’s dereferencing algorithm, provided that such subcomponents are delimited by characters in the reserved set allowed within that component.
URI producing applications should percent-encode data octets that correspond to characters in the reserved set unless these characters are specifically allowed by the URI scheme to represent data in that component. If a reserved character is found in a URI component and no delimiting role is known for that character, then it must be interpreted as representing the data octet corresponding to that character’s encoding in US-ASCII.
The characters “:/?#[]@!$&'()*+,;=” are reserved characters.
URL scheme specifications define syntactic URL delimiters to be some characters from the reserved characters.
Syntactic URL delimiters are not percent-encoded.
The reserved characters that are not syntactic URL delimiters can be either percent-encoded or not, but are recommended to be percent-encoded.
2.3. Unreserved Characters
Characters that are allowed in a URI but do not have a reserved purpose are called unreserved. These include uppercase and lowercase letters, decimal digits, hyphen, period, underscore, and tilde.
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
URIs that differ in the replacement of an unreserved character with its corresponding percent-encoded US-ASCII octet are equivalent: they identify the same resource. However, URI comparison implementations do not always perform normalization prior to comparison (see Section 6). For consistency, percent-encoded octets in the ranges of ALPHA (%41–%5A and %61–%7A), DIGIT (%30–%39), hyphen (%2D), period (%2E), underscore (%5F), or tilde (%7E) should not be created by URI producers and, when found in a URI, should be decoded to their corresponding unreserved characters by URI normalizers.
6. Normalization and Comparison
…URI comparison is performed for some particular purpose. Protocols or implementations that compare URIs for different purposes will often be subject to differing design trade-offs in regards to how much effort should be spent in reducing aliased identifiers. This section describes various methods that may be used to compare URIs, the trade-offs between them, and the types of applications that might use them.
Characters that are allowed in a URL but are not reserved, that is, “ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~”, are unreserved characters.
The unreserved characters can be either percent-encoded or not, but it is recommended that they not be.
Summary
Syntactic URL delimiters → cannot be percent-encoded.
Other than those → can be either percent-encoded or not.
Percent-encoding is case-insensitive.
How the implementations would do
Some implementations don’t do complete, extensive URL normalization. For example, “%68%74%74%70%73://example.com” is a valid URL by the specification, but Chrome (version 101) does not normalize it into “https://example.com” when it’s put into the omnibar.
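If I am reading the WHATWG URL standard correctly, its host parser percent-decodes the host, so a WHATWG-conformant parser should resolve the encoded host from the question (a sketch; behavior can vary between implementations and versions):
new URL('https://%65%78%61%6D%70%6C%65%2E%63%6F%6D').hostname; // "example.com" in WHATWG-conformant parsers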
For example I quite often see this URL come up.
https://ghbtns.com/github-btn.html?user=example&repo=card&type=watch&count=true
Is the & meant to be &amp; or should/can it be left as &?
&amp; is for encoding the ampersand in HTML.
For example, in a hyperlink:
…
(Note that this only changes the link, not the URL. The URL is still /github-btn.html?user=example&repo=card&type=watch&count=true.)
While you may encode every & (that is part of the content) with &amp; in HTML, you are only required to encode ambiguous ampersands.
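A minimal sketch of escaping ampersands for HTML output (the helper name is made up):
// escape '&' (and quotes) so a URL can be placed safely inside an HTML attribute
function escapeForHtmlAttribute(url) {
  return url.replace(/&/g, '&amp;').replace(/"/g, '&quot;');
}
escapeForHtmlAttribute('/github-btn.html?user=example&repo=card');
// -> "/github-btn.html?user=example&amp;repo=card"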
From rfc3986:
Reserved Characters
URIs include components and subcomponents that are delimited by characters in the "reserved" set. These characters are called "reserved" because they may (or may not) be defined as delimiters by the generic syntax, by each scheme-specific syntax, or by the implementation-specific syntax of a URI's dereferencing algorithm.
...
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
The purpose of reserved characters is to provide a set of delimiting
characters that are distinguishable from other data within a URI. URIs
that differ in the replacement of a reserved character with its
corresponding percent-encoded octet are not equivalent.
Percent-encoding a reserved character, or decoding a percent-encoded
octet that corresponds to a reserved character, will change how the
URI is interpreted by most applications.
...
URI producing applications should percent-encode data octets that
correspond to characters in the reserved set unless these characters
are specifically allowed by the URI scheme to represent data in that
component. If a reserved character is found in a URI component and
no delimiting role is known for that character, then it must be
interpreted as representing the data octet corresponding to that
character's encoding in US-ASCII.
So an & within a URL should be encoded if it's part of the value and has no delimiting role. Here's a simple PHP code fragment using the urlencode() function:
<?php
// urlencode() percent-encodes each form value; htmlentities() then escapes
// the '&' separators so the resulting URL is safe to embed in HTML markup.
$query_string = 'foo=' . urlencode($foo) . '&bar=' . urlencode($bar);
echo '<a href="mycgi?' . htmlentities($query_string) . '">';
?>
Is it actually safe/valid to use multidimensional array synthax in the URL query string?
http://example.com?abc[]=123&abc[]=456
It seems to work in every browser and I always thought it was OK to use, but according to a comment in this article it is not: http://www.456bereastreet.com/archive/201008/what_characters_are_allowed_unencoded_in_query_strings/#comment4
I would like to hear a second opinion.
The answer is not simple.
The following is extracted from section 3.2.2 of RFC 3986 :
A host identified by an Internet Protocol literal address, version 6
[RFC3513] or later, is distinguished by enclosing the IP literal
within square brackets ("[" and "]"). This is the only place where
square bracket characters are allowed in the URI syntax.
This seems to answer the question by flatly stating that square brackets are not allowed anywhere else in the URI. But there is a difference between a square bracket character and a percent encoded square bracket character.
The following is extracted from the beginning of section 3 of RFC 3986 :
Syntax Components
The generic URI syntax consists of a hierarchical sequence of
components referred to as the scheme, authority, path, query, and
fragment.
URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]
So the "query" is a component of the "URI".
The following is extracted from section 2.2 of RFC 3986 :
2.2. Reserved Characters
URIs include components and subcomponents that are delimited by
characters in the "reserved" set. These characters are called
"reserved" because they may (or may not) be defined as delimiters by
the generic syntax, by each scheme-specific syntax, or by the
implementation-specific syntax of a URI's dereferencing algorithm.
If data for a URI component would conflict with a reserved
character's purpose as a delimiter, then the conflicting data must
be percent-encoded before the URI is formed.
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
So square brackets may appear in a query string, but only if they are percent-encoded. Unless they aren't, as explained further down in section 2.2:
URI producing applications should percent-encode data octets that
correspond to characters in the reserved set unless these characters
are specifically allowed by the URI scheme to represent data in that
component. If a reserved character is found in a URI component and
no delimiting role is known for that character, then it must be
interpreted as representing the data octet corresponding to that
character's encoding in US-ASCII.
So because square brackets are only allowed in the "host" subcomponent, they "should" be percent-encoded in other components and subcomponents, and in this case in the "query" component, unless RFC 3986 explicitly allows unencoded square brackets to represent data in the query component, which it does not.
However, if a "URI producing application" fails to do what it "should" do, by leaving square brackets unencoded in the query, then readers of the URI are not to reject the URI outright. Instead, the square brackets are to be considered as belonging to the data of the query component, since they are not used as delimiters in that component.
This is why, for example, it is not a violation of RFC 3986 when PHP accepts both unencoded and percent encoded square brackets as valid characters in a query string, and even assigns to them a special purpose. However, it would appear that authors who try to take advantage of this loophole by not percent encoding square brackets are in violation of RFC 3986.
According to RFC 3986, the Query component of an URL has the following grammar:
*( pchar / "/" / "?" )
From appendix A of the same RFC:
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
[...]
pct-encoded = "%" HEXDIG HEXDIG
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
[...]
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
My interpretation of this is that anything that isn't:
ALPHA / DIGIT / "-" / "." / "_" / "~" /
"!" / "$" / "&" / "'" / "(" / ")" /
"*" / "+" / "," / ";" / "=" / ":" / "#"
...should be pct-encoded, i.e. percent-encoded. Thus [ and ] should be percent-encoded to follow RFC 3986.
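encodeURIComponent follows the same interpretation and escapes the brackets; for example:
encodeURIComponent('abc[]'); // "abc%5B%5D"
// so a strictly RFC 3986-conformant query would be ?abc%5B%5D=123&abc%5B%5D=456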
David N. Jafferian's answer is fantastic. I just want to add a couple updates and practical notes:
For many years, every browser has left square brackets in query strings unencoded when submitting the request to the server. (Source: https://bugzilla.mozilla.org/show_bug.cgi?id=1152455#c6). As such, I imagine a huge portion of the web has come to rely on this behavior, which makes it extremely unlikely to change.
My reading of the WHATWG URL standard, which, at least for web purposes, can be seen as superseding RFC 3986, is that it codifies this behavior of not encoding [ and ] in query strings.
Edit: Based on the comments and other answers, a more correct reading of the WHATWG URL standard is that unencoded [/] are invalid, but also should be tolerated when received/parsed and, once parsed that way, should even be re-serialized without encoding.
I'd ideally like to comment on Ethan's answer really, but don't have sufficient reputation to do it.
I'm not sure that the relevant part of the WHATWG URL standard is being referenced here. I think the correct part might be in the definition of a valid URL-query string, which it describes as being composed of URL units that themselves are formed from URL code points and percent-encoded bytes. Square brackets are not listed within the URL code points and thus fall into the percent-encoded bytes category.
Thus, in answer to the original question, multidimensional array syntax (i.e. using square brackets to represent array indexing) within the query part of the URL is valid, provided the square brackets are percent encoded (as %5B for [ and %5D for ]).
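For illustration, the URLSearchParams serializer produces exactly that conformant form (a sketch of the original example):
new URLSearchParams([['abc[]', '123'], ['abc[]', '456']]).toString();
// -> "abc%5B%5D=123&abc%5B%5D=456"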
My understanding that square brackets are not first-class citizens anyway. Here is the quote:
https://www.rfc-editor.org/rfc/rfc1738
Other characters are unsafe because gateways and other transport
agents are known to sometimes modify such characters. These
characters are "{", "}", "|", "", "^", "~", "[", "]", and "`".
I always had a temptation to go for that sort of query when I had to pass an array, but I steered away from it. The reason being:
It is not clearly defined in the RFC.
Different languages may interpret it differently.
You have a couple of options to pass an array:
Encode the string representation of the array (JSON, maybe?)
Have parameters like "val1=blah&val2=blah&.." or something like that.
And if you are sure about the language you are using, you can (safely) go for the kind of query string you have (Just that you need to %-encode [] also).
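A sketch of the two options above in JavaScript (the parameter name q is arbitrary):
// Option 1: JSON-encode the whole array into a single parameter
'page.html?q=' + encodeURIComponent(JSON.stringify([1, 2, 3]));
// -> "page.html?q=%5B1%2C2%2C3%5D"
// Option 2: repeat the parameter once per value
'page.html?' + new URLSearchParams([['q', '1'], ['q', '2'], ['q', '3']]);
// -> "page.html?q=1&q=2&q=3"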
Background (question further down)
I've been Googling this back and forth reading RFCs and SO questions trying to crack this, but I still don't got jack.
So I guess we just vote for the "best" answer and that's it, or?
Basically it boils down to this.
3.4. Query Component
The query component is a string of information to be interpreted by the resource.
query = *uric
Within a query component, the characters ";", "/", "?", ":", "@", "&", "=", "+", ",", and "$" are reserved.
The first thing that boggles me is that *uric is defined like this
uric = reserved | unreserved | escaped
reserved = ";" | "/" | "?" | ":" | "#" | "&" | "=" | "+" | "$" | ","
This is however somewhat clarified by paragraphs such as
The "reserved" syntax class above refers to those characters that are allowed within a URI, but which may not be allowed within a particular component of the generic URI syntax; they are used as delimiters of the components described in Section 3.
Characters in the "reserved" set are not reserved in all contexts. The set of characters actually reserved within any given URI component is defined by that component. In general, a character is reserved if the semantics of the URI changes if the character is replaced with its escaped US-ASCII encoding.
This last excerpt feels somewhat backwards, but it clearly states that the reserved character set depends on context. Yet 3.4 states that all the reserved characters are reserved within a query component; however, the only thing that would change the semantics here is escaping the question mark (?), as URIs do not define the concept of a query string.
At this point I've given up on the RFCs entirely but found RFC 1738 particularly interesting.
An HTTP URL takes the form:
http://<host>:<port>/<path>?<searchpart>
Within the <path> and <searchpart> components, "/", ";", "?" are reserved. The "/" character may be used within HTTP to designate a hierarchical structure.
I interpret this as meaning that, at least with regard to HTTP URLs, RFC 1738 supersedes RFC 2396. Because the URI query has no notion of a query string, the interpretation of reserved characters doesn't really allow me to define query strings as I'm used to doing by now.
Question
This all started when I wanted to pass a list of numbers together with the request for another resource. I didn't think much of it, and just passed them as comma-separated values. To my surprise, though, the comma was escaped: the query page.html?q=1,2,3, once encoded, turned into page.html?q=1%2C2%2C3. It works, but it's ugly and I didn't expect it.
My first question is simply, is encoding commas really necessary?
My answer, according to RFC 2396: yes, according to RFC 1738: no
Later I found related posts regarding the passing of lists between requests, where the CSV approach was portrayed as bad. This showed up instead (I hadn't seen it before):
page.html?q=1;q=2;q=3
My second question, is this a valid URL?
My answer, according to RFC 2396: no, according to RFC 1738: no (; is reserved)
I don't have any issues with passing csv as long as it's numbers, but yes you do run into the risk of having to encode and decode values back and forth if the comma suddenly is needed for something else. Anyway I tried the semi-colon query string thing with ASP.NET and the result was not what I expected.
Default.aspx?a=1;a=2&b=1&a=3
Request.QueryString["a"] = "1;a=2,3"
Request.QueryString["b"] = "1"
I fail to see how this greatly differs from a csv approach as when I ask for "a" I get a string with commas in it. ASP.NET certainly is not a reference implementation but it hasn't let me down yet.
But most importantly -- my third question -- where is the specification for this? And what would you do, or for that matter not do?
That a character is reserved within a generic URL component doesn't mean it must be escaped when it appears within the component or within data in the component. The character must also be defined as a delimiter within the generic or scheme-specific syntax and the appearance of the character must be within data.
The current standard for generic URIs is RFC 3986, which has this to say:
2.2. Reserved Characters
URIs include components and subcomponents that are delimited by characters in the "reserved" set. These characters are called "reserved" because they may (or may not) be defined as delimiters by the generic syntax, by each scheme-specific syntax, or by the implementation-specific syntax of a URI's dereferencing algorithm. If data for a URI component would conflict with a reserved character's purpose as a delimiter [emphasis added], then the conflicting data must be percent-encoded before the URI is formed.
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
3.3. Path Component
[...]
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
[...]
3.4 Query Component
[...]
query = *( pchar / "/" / "?" )
Thus commas are explicitly allowed within query strings and only need to be escaped in data if specific schemes define it as a delimiter. The HTTP scheme doesn't use the comma or semi-colon as a delimiter in query strings, so they don't need to be escaped. Whether browsers follow this standard is another matter.
Using CSV should work fine for string data, you just have to follow standard CSV conventions and either quote data or escape the commas with backslashes.
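For illustration, a sketch of building such a list without escaping the commas (assuming the values themselves contain no commas):
const values = [1, 2, 3];
'page.html?q=' + values.join(','); // "page.html?q=1,2,3" - commas left unencoded, as the RFC allows
// encodeURIComponent('1,2,3') would produce "1%2C2%2C3" instead, which works but is ugly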
As for RFC 2396, it also allows for unescaped commas in HTTP query strings:
2.2. Reserved Characters
Many URI include components consisting of or delimited by, certain
special characters. These characters are called "reserved", since
their usage within the URI component is limited to their reserved
purpose. If the data for a URI component would conflict with the
reserved purpose, then the conflicting data must be escaped before
forming the URI.
Since commas don't have a reserved purpose under the HTTP scheme, they don't have to be escaped in data. The note from § 2.3 about reserved characters being those that change semantics when percent-encoded applies only generally; characters may be percent-encoded without changing semantics for specific schemes and yet still be reserved.
I think the real question is: "What characters should be encoded in a query string?" And that depends mainly on two things: The validity and the meaning of a character.
Validity according to the RFC standard
In RFC3986 we can find which special characters are valid and which are not inside a query string:
// Valid:
! $ & ' ( ) * + , - . / : ; = ? @ _ ~
% (should be followed by two hex chars to be completely valid (e.g. %7C))
// Invalid:
" < > [ \ ] ^ ` { | }
space
# (marks the end of the query string, so it can't be a part of it)
extended ASCII characters (e.g. °)
Deviations from the standard
Browsers and web frameworks do not always strictly follow the RFC standard. Below are some examples:
[, ] are not valid, but Chrome and Firefox do not encode these characters inside a query string. The reasoning given by Chrome devs is simply: "If other browsers and an RFC disagree, we will generally match other browsers." QueryHelpers.AddQueryString from ASP.NET Core on the other hand will encode these characters.
Other invalid characters that are not encoded by Chrome and Firefox are:
\ ^ ` { | }
' is a valid character inside a query string but will be encoded by Chrome, Firefox and QueryHelpers nevertheless. The explanation given by Firefox devs is that they knew that they don't have to encode it according to the RFC standard, but did it to reduce vulnerabilities.
Special meaning
Some characters are valid and also don't get encoded by browsers, but should still be encoded in certain cases.
+: Spaces are normally encoded as %20 but alternatively they can be encoded as +. So + inside a query string means it's an encoded space. If you want to include a character that's actually supposed to literally mean plus, then you have to use the encoded version of + which is %2B.
~: Some old Unix systems interpreted URI parts that started with ~ as a path to a home directory. So it's a good idea to encode ~ if it's not meant to denote the start of a Unix home directory path for an old system (so nowadays probably always encode).
=, &: Usually (although RFC doesn't specify that this is required) query strings contain parameters in the format "key1=value1&key2=value2". If that's the case and =s or &s should be part of the parameter key or the parameter value instead of giving them the role of separating the key and value or separating the parameters, then you have to encode those =s and &s. So if a parameter value should for some reason consist of the string "=&" then it has to be encoded as %3D%26 which then can be used for the full key and value: "weirdparam=%3D%26".
%: Usually web frameworks figure out that %s that are not followed by two hex characters simply mean the % itself, but it's still a good idea to always encode % when it's supposed to only mean % and not indicate the start of an encoded character (e.g. %7C) because RFC3986 specifies that % is only valid when followed by two hex characters. So don't use "percentageparam=%" use "percentageparam=%25" instead.
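A few JavaScript one-liners illustrating the points above:
encodeURIComponent('1 + 1'); // "1%20%2B%201" - a literal plus must be %2B (the spaces here become %20)
encodeURIComponent('=&');    // "%3D%26"      - so "weirdparam=%3D%26" keeps '=' and '&' as data
encodeURIComponent('100%');  // "100%25"      - '%' itself must be escaped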
Encoding guidelines
Encode every character that is otherwise invalid* according to RFC3986 and every character that can have special meaning but should only be interpreted in a literal way without giving it a special meaning. You can also encode things that aren't required to be encoded, like '. Why? Because it doesn't hurt to encode more than necessary. Servers and web frameworks when parsing a query string will decode every encoded character, no matter if it was really necessary to previously encode that character or not.
The only characters of a query string that shouldn't be encoded are those that can have a special meaning and shouldn't lose that special meaning, e.g. don't encode the = of "key1=value1". To achieve that, don't apply an encoding method to the whole query string (and also not to the whole URI) but apply it only and separately to the query parameter keys and values. For example, with JS:
var url = "http://example.com?" + encodeURIComponent(myKey1) + "=" + encodeURIComponent(myValue1) + "&" + encodeURIComponent(myKey2)...;
Note that encodeURIComponent encodes a lot more characters than necessary, meaning characters that are valid in a query string and have no special meaning there, e.g. /, ?, ...
The reason is that encodeURIComponent wasn't created for query strings alone but instead encodes characters that have special meaning outside of the query string as well, e.g. / for the path URI component. QueryHelpers.AddQueryString works in a similar manner. Under the hood it uses System.Text.Encodings.Web.DefaultUrlEncoder which is not just meant for query strings but also for isegment, ipath-noscheme and ifragment.
* You could probably get away with only regarding those characters as invalid that are both not allowed by the RFC and that are also always encoded by Chrome for instance. This would be Space " < >. But it's probably better to be on the safer side and encode at least everything that RFC3986 considers invalid.
OP's questions
My first question is simply, is encoding commas really necessary? -> No, it's not necessary, but it doesn't hurt (except for ugliness) and will happen with default encoding methods (e.g. encodeURIComponent); decoding and query string parsing should work nevertheless.
My second question, is this a valid URL (page.html?q=1;q=2;q=3)? -> It's RFC valid, but your server / web framework might have a hard time parsing the query string when it might expect the typical "key1=value1&key2=value2" format for query strings.
Where is specification for this? -> There isn't a single specification that covers everything because some things are implementation specific. For instance there are different ways of specifying arrays inside of query strings.
Just use ?q=1+2+3
I am answering here a fourth question :) that was not asked but that it all started with: how do I pass a list of numbers a la comma-separated values? It seems to me the best approach is just to pass them space-separated, where the spaces will get url-form-encoded to +. This works great, as long as you know the values in the list contain no spaces (something numbers tend not to).
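A sketch of that approach using form-style serialization, which turns the spaces into '+':
new URLSearchParams({ q: '1 2 3' }).toString(); // "q=1+2+3"
// most server frameworks will decode "1+2+3" in a query back to "1 2 3"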
page.html?q=1;q=2;q=3
is this a valid URL?
Yes. The ; is reserved, but no RFC gives it a delimiting role here. The context that defines this component is the definition of the application/x-www-form-urlencoded media type, which is part of the HTML standard (section 17.13.4.1). In particular the sneaky note hidden away in section B.2.2:
We recommend that HTTP server implementors, and in particular, CGI implementors support the use of ";" in place of "&" to save authors the trouble of escaping "&" characters in this manner.
Unfortunately many popular server-side scripting frameworks including ASP.NET do not support this usage.
I would like to note that page.html?q=1&q=2&q=3 is a valid url as well. This is a completely legitimate way of expressing an array in a query string. Your server technology will determine how exactly that is presented.
In Classic ASP, you check Request.QueryString("q").Count and then use Request.QueryString("q")(0) (and (1) and (2)).
Note that you saw this in your ASP.NET, too (I think it was not intended, but look):
Default.aspx?a=1;a=2&b=1&a=3
Request.QueryString["a"] = "1;a=2,3"
Request.QueryString["b"] = "1"
Notice that the semicolon is ignored, so you have a defined twice, and you got its value twice, separated by a comma. Using all ampersands Default.aspx?a=1&a=2&b=1&a=3 will yield a as "1,2,3". But I am sure there is a method to get each individual element, in case the elements themselves contain commas. It is simply the default property of the non-indexed QueryString that concatenates the sub-values together with comma separators.
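With a WHATWG-style parser the individual values also stay accessible; a sketch of the same idea in JavaScript rather than ASP:
const qs = new URLSearchParams('a=1&a=2&b=1&a=3');
qs.getAll('a'); // ["1", "2", "3"]
qs.get('b');    // "1"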
I had the same issue. The URL that was hyperlinked was a third-party URL and was expecting a list of parameters in the format page.html?q=1,2,3 ONLY, and the URL page.html?q=1%2C2%2C3 did not work. I was able to get it working using JavaScript. It may not be the best approach, but you can check out the solution here if it helps anyone.
If you are sending the ENCODED characters to a FLASH/SWF file, then you should ENCODE the characters twice!! (because of the Flash parser)
The RFC 3986 URI: Generic Syntax specification lists a semicolon as a reserved (sub-delim) character:
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
What is the reserved purpose of the ";" (semicolon) in URIs? For that matter, what is the purpose of the other sub-delims (I'm only aware of purposes for "&", "+", and "=")?
There is an explanation at the end of section 3.3.
Aside from dot-segments in
hierarchical paths, a path segment is
considered opaque by the generic
syntax. URI producing applications
often use the reserved characters
allowed in a segment to delimit
scheme-specific or
dereference-handler-specific
subcomponents. For example, the
semicolon (";") and equals ("=")
reserved characters are often used
to delimit parameters and parameter
values applicable to that segment.
The comma (",") reserved character is
often used for similar purposes.
For example, one URI producer might
use a segment such as "name;v=1.1"
to indicate a reference to version 1.1
of "name", whereas another might
use a segment such as "name,1.1" to
indicate the same. Parameter types
may be defined by scheme-specific
semantics, but in most cases the
syntax of a parameter is specific to
the implementation of the URI's
dereferencing algorithm.
In other words, it is reserved so that people who want a delimited list of something in the URL can safely use ; as a delimiter even if the parts contain ;, as long as the contents are percent-encoded. For example, you can do this:
foo;bar;baz%3bqux
and interpret it as three parts: foo, bar, baz;qux. If semicolon were not a reserved character, the ; and %3b would be equivalent, so the URI would be incorrectly interpreted as four parts: foo, bar, baz, qux.
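In code, splitting on the delimiter first and percent-decoding each part afterwards is what makes this work (a sketch):
// split on the reserved delimiter, then decode each part
'foo;bar;baz%3bqux'.split(';').map(decodeURIComponent);
// -> ["foo", "bar", "baz;qux"]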
The intent is clearer if you go back to older versions of the specification:
path_segments = segment *( "/" segment )
segment = *pchar *( ";" param )
Each path segment may include a
sequence of parameters, indicated by the semicolon ";" character.
I believe it has its origins in FTP URIs.
Section 3.3 covers this - it's an opaque delimiter a URI-producing application can use if convenient:
Aside from dot-segments in
hierarchical paths, a path segment is
considered opaque by the generic
syntax. URI producing applications
often use the reserved characters
allowed in a segment to delimit
scheme-specific or
dereference-handler-specific
subcomponents. For example, the
semicolon (";") and equals ("=")
reserved characters are often used to
delimit parameters and parameter
values applicable to that segment. The
comma (",") reserved character is
often used for similar purposes. For
example, one URI producer might use a
segment such as "name;v=1.1" to
indicate a reference to version 1.1 of
"name", whereas another might use a
segment such as "name,1.1" to indicate
the same. Parameter types may be
defined by scheme-specific semantics,
but in most cases the syntax of a
parameter is specific to the
implementation of the URI's
dereferencing algorithm.
There are some conventions around its current usage that are interesting. These speak to when to use a semicolon or comma. From the book "RESTful Web Services":
Use punctuation characters to separate multiple pieces of data at the same level of hierarchy. Use commas when the order of the items matters, ... Use semicolons when the order doesn't matter.
Since 2014, path segments are known to contribute to Reflected File Download attacks. Let's assume we have a vulnerable API that reflects whatever we send to it:
https://google.com/s?q=rfd%22||calc||
{"results":["q", "rfd\"||calc||","I love rfd"]}
Now, this is harmless in a browser as it's JSON, so it's not going to be rendered; rather, the browser will offer to download the response as a file. Now here's where path segments come to help (the attacker):
https://google.com/s;/setup.bat;?q=rfd%22||calc||
Everything between the semicolons (;/setup.bat;) will not be sent to the web service; instead, the browser will interpret it as the file name... to save the API response.
Now, a file called setup.bat will be downloaded and run without any warning about the dangers of running files downloaded from the Internet (because it contains the word "setup" in its name). The contents will be interpreted as a Windows batch file, and the calc.exe command will be run.
Prevention:
sanitize your API's input (in this case, they should just allow alphanumerics); escaping is not sufficient
add Content-Disposition: attachment; filename="whatever.txt" on APIs that are not going to be rendered; Google was missing the filename part which actually made the attack easier
add X-Content-Type-Options: nosniff header to API responses
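A sketch of setting those two headers in a plain Node.js handler (the endpoint and payload are made up; only the header lines matter here):
const http = require('http');
http.createServer((req, res) => {
  // force a download filename and forbid MIME sniffing, as recommended above
  res.setHeader('Content-Type', 'application/json');
  res.setHeader('Content-Disposition', 'attachment; filename="response.json"');
  res.setHeader('X-Content-Type-Options', 'nosniff');
  res.end(JSON.stringify({ results: [] }));
}).listen(8080);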
I found the following use cases:
It's the final character of an HTML entity:
List of XML and HTML character entity references
To use one of these character entity references in an HTML or XML
document, enter an ampersand followed by the entity name and a
semicolon, e.g., & for the ampersand ("&").
Apache Tomcat 7 (or newer versions?!) uses it as a path parameter:
Three Semicolon Vulnerabilities
Apache Tomcat is one example of a web server that supports "Path
Parameters". A path parameter is extra content after a file name,
separated by a semicolon. Any arbitrary content after a semicolon does
not affect the landing page of a web browser. This means that
http://example.com/index.jsp;derp will still return index.jsp, and not
some error page.
The data URI scheme uses it to separate the MIME type and the data:
Data URI scheme
It can contain an optional character set parameter, separated from the
preceding part by a semicolon (;) .
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA
AAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO
9TXL0Y4OHwAAAABJRU5ErkJggg==" alt="Red dot" />
And there was a bug in IIS 5 and IIS 6 that allowed bypassing file upload restrictions:
Unrestricted File Upload
Blacklisting File Extensions This protection might be bypassed by: ...
by adding a semi-colon character after the forbidden extension and
before the permitted one (e.g. "file.asp;.jpg")
Conclusion:
Do not use semicolons in URLs or they could accidentally produce an HTML entity or URI scheme.