How can I verify URLs are correct, and/or extract valid URLs from arbitrary text? - ios

Sometimes I've had a text entry form where I want to disable the "Accept" button until the user has entered a valid URL. Searching here or the web turns up a huge number of regular expressions, but given the complexity of the URL specification (RFC-3986), it's nigh impossible to write your own verification test suite for them. Once my app is in the App Store, how would I ever know how many false negatives I got due to defects in the regular expression?
Other times I've had the need to extract all the valid URLs from a web site or some other text, to get an array I can filter down to, say, just those that point to an image file. Faulty regular expressions are less of a problem in this case: if I miss an image or two, or get a bogus URL, it's not a major problem. In any case, the better the regular expression, the more correct the returned list of images.
So: how can I validate, with virtual certainty, that a presented string is a proper URL? It would also be nice to have a means of extracting valid URLs out of arbitrary text.

There are a huge number of regular expressions on the web that claim to verify URLs. The problem with most is that, while they may work, they have no credentials - that is, there exists no way to prove their correctness one way or the other.
The reference spec on URLs is RFC-3986, and while on a long search for the best regular expression I tripped over Jeff Roberson's regular expression page. What he did was start from the spec, constructing small regular expressions to match the low level parts of the RFC, and gradually building them up into a full expression.
For instance, this is how one gets the full scheme:
# From http://jmrware.com/articles/2009/uri_regexp/URI_regex.html
# Copyright Jeff Roberson
(⌽[A-Za-z][A-Za-z0-9+\-.]*)
# DFH Addition: change ⌽ from "?:" to "" to get capture groups of the various components
The Unicode character after the first "(" gets changed to either "?:", making it a non-capture group, or "", turning it into a capture group. Note that this matches a single letter followed by zero or more of the characters contained in the second "[]" group.
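For illustration, here's how that fragment might be compiled and exercised once the placeholder is substituted; a minimal Python sketch of the convention just described (the variable names are mine, not the test harness's):

import re

# Jeff Roberson's scheme fragment, with the ⌽ placeholder still in place.
scheme_src = r"(⌽[A-Za-z][A-Za-z0-9+\-.]*)"

# Swap in "?:" for a non-capturing group (use "" instead to capture the scheme).
pattern = re.compile("^" + scheme_src.replace("⌽", "?:") + "$")

print(bool(pattern.match("https")))    # True
print(bool(pattern.match("svn+ssh")))  # True: '+' is allowed after the first letter
print(bool(pattern.match("9http")))    # False: a scheme must start with a letter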
The full authority is found using this expression:
# RFC-3986 URI component: relative-part
(?: // # ( "//"
(?: (⌽(?:[A-Za-z0-9\-._~!$&'()*+,;=:]|%[0-9A-Fa-f]{2}☯)* ) @)? # authority: DFH modified to grab the userinfo without the '@'
(⌽
\[
(?:
(?:
(?: (?:[0-9A-Fa-f]{1,4}:){6}
| :: (?:[0-9A-Fa-f]{1,4}:){5}
| (?: [0-9A-Fa-f]{1,4})? :: (?:[0-9A-Fa-f]{1,4}:){4}
| (?: (?:[0-9A-Fa-f]{1,4}:){0,1} [0-9A-Fa-f]{1,4})? :: (?:[0-9A-Fa-f]{1,4}:){3}
| (?: (?:[0-9A-Fa-f]{1,4}:){0,2} [0-9A-Fa-f]{1,4})? :: (?:[0-9A-Fa-f]{1,4}:){2}
| (?: (?:[0-9A-Fa-f]{1,4}:){0,3} [0-9A-Fa-f]{1,4})? :: [0-9A-Fa-f]{1,4}:
| (?: (?:[0-9A-Fa-f]{1,4}:){0,4} [0-9A-Fa-f]{1,4})? ::
) (?:
[0-9A-Fa-f]{1,4} : [0-9A-Fa-f]{1,4}
| (?: (?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?) \.){3}
(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
)
| (?: (?:[0-9A-Fa-f]{1,4}:){0,5} [0-9A-Fa-f]{1,4})? :: [0-9A-Fa-f]{1,4}
| (?: (?:[0-9A-Fa-f]{1,4}:){0,6} [0-9A-Fa-f]{1,4})? ::
)
| [Vv][0-9A-Fa-f]+\.[A-Za-z0-9\-._~!$&'()*+,;=:]+
)
\]
| (?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}
(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
|
(?:[A-Za-z0-9\-._~!$&'()*+,;=]|%[0-9A-Fa-f]{2}☯)*
)
(?: : (⌽[0-9]*) )? # DFH addition to grab just the port
(⌽ # DFH addition to get one capture group
(⌽ / (?:[A-Za-z0-9\-._~!$&'()*+,;=:@]|%[0-9A-Fa-f]{2}☯)* )* # path-abempty
| / # / path-absolute
(⌽ (?:[A-Za-z0-9\-._~!$&'()*+,;=:@]|%[0-9A-Fa-f]{2}☯)+
(?:/ (?:[A-Za-z0-9\-._~!$&'()*+,;=:@]|%[0-9A-Fa-f]{2}☯)* )*
)?
| (⌽ (?:[A-Za-z0-9\-._~!$&'()*+,;=@] |%[0-9A-Fa-f]{2}☯)+ # / path-noscheme
(?:/ (?:[A-Za-z0-9\-._~!$&'()*+,;=:@]|%[0-9A-Fa-f]{2}☯)* )*
) # DFH Wrapper
| # / path-empty
(⌽) # DFH addition so constant number of capture groups
)
) # )
# DFH Addition: change ☯ to "|[\u0080-\U0010ffff]" to get inline Unicode detection (making this an IRI, not a URI, but you can later hex encode it), or "" for standard behavior
# DFH Addition: change ⌽ from "?:" to "" to get capture groups of the various components
If you read the above, you can see that this expression can be extended to find Unicode characters by the addition of "|[\u0080-\U0010ffff]" in just a few places.
Because he actually started with the RFC, and all portions of his expression fully reference the ABNF specification, I have great confidence in them.
However, when I started testing, I found that a string like http:// passed verification! It turns out that the spec allows virtually every component to be an empty string - rather hard to use that for a UI form validator.
So I took his expressions and made some small additions. First, I found I could change the path specifier from a '*' to a '?', so that in form entry a user is forced to type at least one '/' after 'http://'. This makes the validator more stringent than the spec requires, but more realistic.
Jeff's regular expressions just use non-capturing groups, so I looked at ways to support capturing groups, so all components of a URL could be extracted if need be.
Also, think of non-US users, who often need to enter non-ASCII characters into a URL - they want to enter an accented character - but an ASCII-only validator would reject it. It would be nice to validate a string containing Unicode characters, then convert the Unicode to '%'-encoded hex before actually using it. This requires extending the expressions to accept Unicode characters by adding |[\\u0080-\\U0010ffff] to the sections accepting ASCII.
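The validate-then-encode step itself is simple once a Unicode-extended expression has accepted the string; here's a rough Python sketch of the idea (urllib's quote, assuming UTF-8; the example URL is made up):

from urllib.parse import quote

iri = "http://example.com/café/menü"
# Percent-encode the UTF-8 bytes of each non-ASCII character while leaving
# the URL's structural (reserved) characters untouched:
uri = quote(iri, safe=":/?#[]@!$&'()*+,;=%")
print(uri)  # http://example.com/caf%C3%A9/men%C3%BC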
The whole problem begged for a test harness that could construct one or more regular expressions with the options a given app might need, and could test them against various test strings; thus was born URLFinderAndVerifier.
The test harness uses extended expression strings taken from Jeff's page, with all their spaces and comments intact, plus additional comments of mine; those make the expressions much easier to read and understand. The test app reads the text files, removes all comments and spaces, preprocesses the expressions based on the options selected in the UI, and then sets them up for use or for pasting (so you can use them in your app). The test app also offers an interactive mode, where it validates as you modify the input text.
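That preprocessing can be sketched in a few lines of Python; note that Python's re.VERBOSE flag already ignores layout whitespace and '#' comments (while keeping both inside character classes), so a sketch can lean on it instead of stripping by hand (function and parameter names are mine):

import re

def build_pattern(src, capture=False, allow_unicode=False):
    # Swap the ⌽ placeholder: "" yields capture groups, "?:" suppresses them.
    src = src.replace("⌽", "" if capture else "?:")
    # Swap the ☯ placeholder to optionally accept non-ASCII (IRI) characters.
    src = src.replace("☯", "|[\\u0080-\\U0010ffff]" if allow_unicode else "")
    # re.VERBOSE tolerates the comments and spacing kept in the source files.
    return re.compile(src, re.VERBOSE)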
Options:
- look for http/https, http/https/ftp, or any scheme
- for form entry, require a "/" after "scheme://"; this makes the toggling of an "Accept" button more realistic (it also requires at least one character after the query's "?" and the fragment's "#")
- enable capture groups, so that for each URL you can extract the scheme, userinfo, host, port, and path, and optionally the query and/or the fragment
- in extract mode, include or exclude the query and/or fragment
USAGE:
- clone the project, determine which regular expression you want, then paste it from the results window into your app (suitable for a text file or an NSString in code)
- copy the URLFinder interface and implementation files into your project
- instantiate a URLFinder and supply it the regular expression from the first step

Surely the easiest way to validate a URL is by constructing an NSURL object.
NSURL *url = [NSURL URLWithString:urlString];
According to the documentation:
Must be a URL that conforms to RFC 2396.
If the string was malformed, returns nil.
Ultimately you'll likely want to convert the url into an NSURL object anyway, so it's probably in the best position to decide whether your string is valid or not.
Then, to find URLs in a block of text, you can perform a very simple regex search that just looks for potential candidates. For example, something like this:
[^\s]+://[^\s]+
Then use the NSURL construction technique described above to validate whether those candidates are genuine matches or not.
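For readers outside iOS, the same two-step idea - a cheap candidate scan first, a real parser second - might look like this in Python (urllib.parse standing in for NSURL; the scheme/netloc check is my own rough analogue of "returns nil if malformed"):

import re
from urllib.parse import urlparse

def find_urls(text):
    candidates = re.findall(r"\S+://\S+", text)  # cheap candidate scan
    valid = []
    for c in candidates:
        try:
            parts = urlparse(c)   # the "real" parser does the judging
        except ValueError:        # e.g. a malformed IPv6 literal
            continue
        if parts.scheme and parts.netloc:
            valid.append(c)
    return valid

print(find_urls("see https://example.com/a and also ht!tp://bad"))
# ['https://example.com/a']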

Related

ANTLR4 match any not-matched sections into one single STRING token

I am trying to create a lexer/parser with ANTLR that can parse plain text with 'tags' scattered in between.
These tags are denoted by opening ({) and closing (}) brackets and they represent Java objects that can evaluate to a string, that is then replaced in the original input to create a dynamic template of sorts.
Here is an example:
{player:name} says hi!
The {player:name} should be replaced by the name of the player, resulting in output like Mark says hi! for a player named Mark.
Now I can recognize and parse the tags just fine, what I have problems with is the text that comes after.
This is the grammar I use:
grammar : content+ ;
content : tag
| literal
;
tag : player_tag
| <...>
| <other kinds of tags, not important for this example>
| <...>
;
player_tag : BRACKET_OPEN player_identifier SEMICOLON player_string_parameter BRACKET_CLOSE ;
player_string_parameter : NAME
| <...>
;
player_identifier : PLAYER ;
literal : NUMBER
| STRING
;
BRACKET_OPEN : '{';
BRACKET_CLOSE : '}';
PLAYER : 'player' ;
NAME : 'name' ;
NUMBER : <...> ;
STRING : (.+)? ; /* <- THIS IS THE PROBLEMATIC PART! */
Now this STRING lexer rule should match anything that is not an empty string, but the problem is that it is too greedy and also consumes the { } bracket tokens needed for the tag rule.
I have tried setting it to ~[{}]+, which is supposed to match anything that does not include the { } brackets, but that screws with the tag parsing in a way I don't understand either.
I could set it to something like [ a-zA-Z0-9!"§$%&/()= etc...]+ but I really don't want to restrict it to characters available on a British keyboard (German umlauts, French accents, and all the other special characters of other languages must work!)
The only thing that somewhat works though I really dislike it is to force strings to have a prefix and a suffix like so:
STRING : '\'' ~[}{]+ '\'' ;
This forces me to alter the form from "{player:name} says hi!" to "{player:name}' says hi!'" and I really desperately want to avoid such restrictions because I would then have to account for literal ' characters in the string itself and it's just ugly to work with.
The two solutions I have in mind are the following:
- Is there any way to match any number of characters that have not been matched by the lexer as a STRING token and pass it to the parser? That way I could match all the tags and say the rest of the input is just plain text, give it back to me as a STRING token or whatever...
- Does ANTLR support lookahead and lookbehind regex expressions, with which I could match any number of characters before the first '{', after the last '}', and anything in between '}' and '{'?
I have tried
STRING : (?<=})(.+)?(?={) ;
but I can't seem to get the syntax right because that won't compile at all, which leads me to believe that ANTLR does not support lookahead and lookbehind syntax, but I could not find a definitive answer on the internet to that question.
Any advice on what to do?
Antlr does not support lookahead or lookbehind. It does support non-greedy wildcard matches, but only when the non-greedy wildcard is followed in the rule by the termination sequence (which, as you say, is then also contained in the match, although you could push it back into the input stream).
So ~[{}]* is correct. But there's a little problem: lexer rules are (normally) always active. So that lexer rule will be active inside the braces as well, which means that it will swallow the entire contents between the braces (unless there are nested braces or braces inside quotes or some such, and that's even worse).
So you need to define different lexical contexts, called "lexical modes" in Antlr. There's a publicly viewable example in the Antlr Definitive Reference, which shows a solution to a very similar problem: parsing HTML.
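For intuition only, the token stream those modes are meant to produce can be mimicked with a plain regex alternation; here it is in Python (not ANTLR code, just an illustration of tag-versus-text tokenization):

import re

template = "{player:name} says hi! {player:health} HP left"
# Either a whole {...} tag, or a maximal run of text containing no braces.
tokens = re.findall(r"\{[^{}]*\}|[^{}]+", template)
print(tokens)
# ['{player:name}', ' says hi! ', '{player:health}', ' HP left']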

Space delimited, except inside braces in a log file - Python

I'm a long time reader, first time asker (please be gentle).
I've been doing this with a pretty messy WHILE READ loop in Unix Bash, but I'm learning Python and would like to try to write a more effective parsing routine.
So I have a bunch of log files which are mostly space delimited, but contain square braces within which there may also be spaces. How do I ignore content within the braces when looking for delimiters?
(I'm assuming the re library is necessary to do this.)
i.e. sample input:
[21/Sep/2014:13:51:12 +0000] serverx 192.0.0.1 identity 200 8.8.8.8 - 500 unavailable RESULT 546 888 GET htp://www.google.com/something/fsd?=somegibberish&youscanseethereisalotofcharactershere+bananashavealotofpotassium [somestuff/1.0 (OSX v. 1.0; this_is_a_semicolon; colon:93.1.1) Somethingelse/1999 (COMMA, yep_they_didnt leave_me_a_lot_to_make_this_easy) DoesanyonerememberAOL/1.0]
Desired output:
'21/Sep/2014:13:51:12 +0000'; 'serverx'; '192.0.0.1'; 'identity'; '200'; '8.8.8.8'; '-'; '500'; 'unavailable'; 'RESULT'; '546'; '888'; 'GET'; 'htp://www.google.com/something/fsd?=somegibberish&youscanseethereisalotofcharactershere+bananashavealotofpotassium'; 'somestuff/1.0 (OSX v. 1.0; this_is_a_semicolon; rev:93.1.1) Somethingelse/1999 (COMMA, yep_they_didnt leave_me_a_lot_to_make_this_easy DoesanyonerememberAOL/1.0'
If you'll notice the first and last fields (the ones that were in the square braces) still have spaces intact.
Bonus points
The 14th field (URL) is always in one of these formats:
htp://google.com/path-data-might-be-here-and-can-contain-special-characters
google.com/path-data-might-be-here-and-can-contain-special-characters
xyz.abc.www.google.com/path-data-might-be-here-and-can-contain-special-characters
google.com:443
google.com
I'd like to add an additional column to the data which includes just the domain (i.e. xyz.abc.www.google.com or google.com).
Until now, I've been taking the parsed output through a Unix AWK with an IF statement to split this field by '/' and check whether the third field is blank. If it is, then return the first field (up until the ':' if present); otherwise return the third field. If there is a better way to do this - preferably in the same routine as above - I'd love to hear it, so my final output could be:
'21/Sep/2014:13:51:12 +0000'; 'serverx'; '192.0.0.1'; 'identity'; '200'; '8.8.8.8'; '-'; '500'; 'unavailable'; 'RESULT'; '546'; '888'; 'GET'; 'htp://www.google.com/something/fsd?=somegibberish&youscanseethereisalotofcharactershere+bananashavealotofpotassium'; 'somestuff/1.0 (OSX v. 1.0; this_is_a_semicolon; rev:93.1.1) Somethingelse/1999 (COMMA, yep_they_didnt leave_me_a_lot_to_make_this_easy DoesanyonerememberAOL/1.0'; **'www.google.com'**
Footnote: I changed http to htp in the sample, so it wouldn't create a bunch of distracting links.
The regular expression pattern \[[^\]]*\]|\S+ will tokenize your data, though it doesn't strip off the brackets from the multi-word values. You'll need to do that in a separate step:
import re

def parse_line(line):
    values = re.findall(r'\[[^\]]*\]|\S+', line)
    values = [v.strip("[]") for v in values]
    return values
Here's a more verbose version of the regular expression pattern:
pattern = r"""(?x) # turn on verbose mode (ignores whitespace and comments)
\[ # match a literal open bracket '['
[^\]]* # match zero or more characters, as long as they are not ']'
\] # match a literal close bracket ']'
| # alternation, match either the section above or the section below
\S+ # match one or more non-space characters
"""
values = re.findall(pattern, line) # findall returns a list with all matches it finds
If your server logs have JSON in them you can include a match for curly braces as well:
\[[^\]]*\]|\{[^\}]*\}|\S+
You can test these patterns at https://regexr.com/
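As for the bonus question, the AWK logic described there (split on '/', fall back to the first field up to any ':') ports directly to Python; a sketch with a hypothetical helper name:

def extract_domain(url_field):
    parts = url_field.split("/")
    # 'htp://host/path' splits into ['htp:', '', 'host', 'path', ...],
    # so a non-empty third piece is the host.
    if len(parts) > 2 and parts[2]:
        return parts[2]
    # Otherwise the field began with the host itself; drop any ':port'.
    return parts[0].split(":")[0]

print(extract_domain("htp://www.google.com/something/fsd?=somegibberish"))  # www.google.com
print(extract_domain("google.com:443"))                                     # google.com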

Matching function in erlang based on string format

I have user information coming in from an outside source and I need to check if that user is active. Sometimes I have a User and a Server and other times I have User@Server. The former case is no problem, I just have:
active(User, Server) ->
do whatever.
What I would like to do with the User@Server case is something like:
active([User, "@", Server]) ->
    active(User, Server).
Doesn't seem to work. When calling active in the Erlang terminal with a@b for example, I get an error that there is no match. Any help would be appreciated!
You can tokenize the string to get the result:
active(UserString) ->
    [User,Server] = string:tokens(UserString,"@"),
    active(User,Server).
If you need something more elaborate, or with better handling of something like email addresses, it might then be time to delve into using regular expressions with the re module.
active(UserString) ->
    RegEx = "^([\\w\\.-]+)@([\\w\\.-]+)$",
    {match, [User,Server]} = re:run(UserString,RegEx,[{capture,all_but_first,list}]),
    active(User,Server).
Note: the supplied regex is hardly sufficient for email address validation; it's just an example that allows alphanumeric characters including underscores (\\w), dots (\\.), and dashes (-), separated by an at symbol. And it will fail if the match doesn't span the whole length of the string (^ to $).
A note on the pattern matching; for the real solution to your problem I think @chops' suggestions should be used.
When matching patterns against strings I think it's useful to keep in mind that Erlang strings are really lists of integers. So the string "@" is actually the same as [64] (64 being the ASCII code for @).
This means that your match pattern [User, "@", Server] will match lists like [97,[64],98], but not "a@b" (which in list form is [97,64,98]).
To match the string you need to do [User,$@,Server]. The $ operator gives you the ASCII value of the character.
However, this match pattern limits the matching string to exactly one character, followed by @, followed by one more character...
It can be improved to [User, $@ | Server], which allows the server part to have arbitrary length, but the User variable will still only match one single character (and I don't see a way around that).

What's valid and what's not in a URI query?

Background (question further down)
I've been Googling this back and forth, reading RFCs and SO questions, trying to crack this, but I still don't got jack.
So I guess we just vote for the "best" answer and that's it, or?
Basically it boils down to this.
3.4. Query Component
The query component is a string of information to be interpreted by the resource.
query = *uric
Within a query component, the characters ";", "/", "?", ":", "@", "&", "=", "+", ",", and "$" are reserved.
The first thing that boggles me is that *uric is defined like this
uric = reserved | unreserved | escaped
reserved = ";" | "/" | "?" | ":" | "@" | "&" | "=" | "+" | "$" | ","
This is however somewhat clarified by paragraphs such as
The "reserved" syntax class above refers to those characters that are allowed within a URI, but which may not be allowed within a particular component of the generic URI syntax; they are used as delimiters of the components described in Section 3.
Characters in the "reserved" set are not reserved in all contexts. The set of characters actually reserved within any given URI component is defined by that component. In general, a character is reserved if the semantics of the URI changes if the character is replaced with its escaped US-ASCII encoding.
This last excerpt feels somewhat backwards, but it clearly states that the reserved character set depends on context. Yet 3.4 states that all the reserved characters are reserved within a query component; however, the only thing that would change the semantics there is escaping the question mark (?), as URIs do not define the concept of a query string.
At this point I've given up on the RFCs entirely but found RFC 1738 particularly interesting.
An HTTP URL takes the form:
http://<host>:<port>/<path>?<searchpart>
Within the <path> and <searchpart> components, "/", ";", "?" are reserved. The "/" character may be used within HTTP to designate a hierarchical structure.
I interpret this, at least with regard to HTTP URLs, as RFC 1738 superseding RFC 2396. Because the URI query has no notion of a query string, the interpretation of reserved characters doesn't really allow me to define query strings the way I'm used to doing by now.
Question
This all started when I wanted to pass a list of numbers together with a request for another resource. I didn't think much of it and just passed them as comma-separated values. To my surprise, though, the comma was escaped: the query page.html?q=1,2,3 turned into page.html?q=1%2C2%2C3 when encoded. It works, but it's ugly and I didn't expect it. That's when I started going through RFCs.
My first question is simply, is encoding commas really necessary?
My answer, according to RFC 2396: yes, according to RFC 1738: no
Later I found related posts regarding the passing of lists between requests, where the CSV approach was portrayed as bad. This showed up instead (I hadn't seen it before):
page.html?q=1;q=2;q=3
My second question, is this a valid URL?
My answer, according to RFC 2396: no, according to RFC 1738: no (; is reserved)
I don't have any issues with passing CSV as long as it's numbers, but yes, you do run the risk of having to encode and decode values back and forth if the comma suddenly is needed for something else. Anyway, I tried the semicolon query string thing with ASP.NET and the result was not what I expected.
Default.aspx?a=1;a=2&b=1&a=3
Request.QueryString["a"] = "1;a=2,3"
Request.QueryString["b"] = "1"
I fail to see how this greatly differs from a CSV approach, as when I ask for "a" I get a string with commas in it. ASP.NET is certainly not a reference implementation, but it hasn't let me down yet.
But most importantly (my third question): where is the specification for this? And what would you do, or for that matter not do?
That a character is reserved within a generic URL component doesn't mean it must be escaped when it appears within the component or within data in the component. The character must also be defined as a delimiter within the generic or scheme-specific syntax and the appearance of the character must be within data.
The current standard for generic URIs is RFC 3986, which has this to say:
2.2. Reserved Characters
URIs include components and subcomponents that are delimited by characters in the "reserved" set. These characters are called "reserved" because they may (or may not) be defined as delimiters by the generic syntax, by each scheme-specific syntax, or by the implementation-specific syntax of a URI's dereferencing algorithm. If data for a URI component would conflict with a reserved character's purpose as a delimiter [emphasis added], then the conflicting data must be percent-encoded before the URI is formed.
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
3.3. Path Component
[...]
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
[...]
3.4 Query Component
[...]
query = *( pchar / "/" / "?" )
Thus commas are explicitly allowed within query strings and only need to be escaped in data if specific schemes define it as a delimiter. The HTTP scheme doesn't use the comma or semi-colon as a delimiter in query strings, so they don't need to be escaped. Whether browsers follow this standard is another matter.
Using CSV should work fine for string data, you just have to follow standard CSV conventions and either quote data or escape the commas with backslashes.
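If you go that route, Python's csv module already implements those conventions; a small sketch of round-tripping one list through a single value (the sample data is made up):

import csv, io

row = ["plain", "has,comma", 'has"quote']
buf = io.StringIO()
csv.writer(buf).writerow(row)       # quotes only the fields that need it
value = buf.getvalue().rstrip("\r\n")
print(value)                        # plain,"has,comma","has""quote"
print(next(csv.reader([value])))    # ['plain', 'has,comma', 'has"quote']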
As for RFC 2396, it also allows for unescaped commas in HTTP query strings:
2.2. Reserved Characters
Many URI include components consisting of or delimited by, certain special characters. These characters are called "reserved", since their usage within the URI component is limited to their reserved purpose. If the data for a URI component would conflict with the reserved purpose, then the conflicting data must be escaped before forming the URI.
Since commas don't have a reserved purpose under the HTTP scheme, they don't have to be escaped in data. The note from § 2.3 about reserved characters being those that change semantics when percent-encoded applies only generally; characters may be percent-encoded without changing semantics for specific schemes and yet still be reserved.
I think the real question is: "What characters should be encoded in a query string?" And that depends mainly on two things: The validity and the meaning of a character.
Validity according to the RFC standard
In RFC3986 we can find which special characters are valid and which are not inside a query string:
// Valid:
! $ & ' ( ) * + , - . / : ; = ? @ _ ~
% (should be followed by two hex chars to be completely valid (e.g. %7C))
// Invalid:
" < > [ \ ] ^ ` { | }
space
# (marks the end of the query string, so it can't be a part of it)
extended ASCII characters (e.g. °)
Deviations from the standard
Browsers and web frameworks do not always strictly follow the RFC standard. Below are some examples:
[, ] are not valid, but Chrome and Firefox do not encode these characters inside a query string. The reasoning given by Chrome devs is simply: "If other browsers and an RFC disagree, we will generally match other browsers." QueryHelpers.AddQueryString from ASP.NET Core on the other hand will encode these characters.
Other invalid characters that are not encoded by Chrome and Firefox are:
\ ^ ` { | }
' is a valid character inside a query string but will be encoded by Chrome, Firefox and QueryHelpers nevertheless. The explanation given by Firefox devs is that they knew that they don't have to encode it according to the RFC standard, but did it to reduce vulnerabilities.
Special meaning
Some characters are valid and also don't get encoded by browsers, but should still be encoded in certain cases.
+: Spaces are normally encoded as %20 but alternatively they can be encoded as +. So + inside a query string means it's an encoded space. If you want to include a character that's actually supposed to literally mean plus, then you have to use the encoded version of + which is %2B.
~: Some old Unix systems interpreted URI parts that started with ~ as a path to a home directory. So it's a good idea to encode ~ if it's not meant to denote the start of a Unix home directory path for an old system (so nowadays probably always encode).
=, &: Usually (although RFC doesn't specify that this is required) query strings contain parameters in the format "key1=value1&key2=value2". If that's the case and =s or &s should be part of the parameter key or the parameter value instead of giving them the role of separating the key and value or separating the parameters, then you have to encode those =s and &s. So if a parameter value should for some reason consist of the string "=&" then it has to be encoded as %3D%26 which then can be used for the full key and value: "weirdparam=%3D%26".
%: Usually web frameworks figure out that %s that are not followed by two hex characters simply mean the % itself, but it's still a good idea to always encode % when it's supposed to only mean % and not indicate the start of an encoded character (e.g. %7C) because RFC3986 specifies that % is only valid when followed by two hex characters. So don't use "percentageparam=%" use "percentageparam=%25" instead.
Encoding guidelines
Encode every character that is otherwise invalid* according to RFC3986 and every character that can have special meaning but should only be interpreted in a literal way without giving it a special meaning. You can also encode things that aren't required to be encoded, like '. Why? Because it doesn't hurt to encode more than necessary. Servers and web frameworks when parsing a query string will decode every encoded character, no matter if it was really necessary to previously encode that character or not.
The only characters of a query string that shouldn't be encoded are those that can have a special meaning and shouldn't lose that special meaning, e.g. don't encode the = of "key1=value1". To achieve that, don't apply an encoding method to the whole query string (nor to the whole URI); apply it only, and separately, to the query parameter keys and values. For example, with JS:
var url = "http://example.com?" + encodeURIComponent(myKey1) + "=" + encodeURIComponent(myValue1) + "&" + encodeURIComponent(myKey2)...;
Note that encodeURIComponent encodes a lot more characters than necessary, i.e. characters that are valid in a query string and have no special meaning there, e.g. /, ?, ...
The reason is that encodeURIComponent wasn't created for query strings alone but instead encodes characters that have special meaning outside of the query string as well, e.g. / for the path URI component. QueryHelpers.AddQueryString works in a similar manner. Under the hood it uses System.Text.Encodings.Web.DefaultUrlEncoder which is not just meant for query strings but also for isegment, ipath-noscheme and ifragment.
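Python's standard library takes the same per-component approach; a quick sketch with urlencode (the parameter values are made up):

from urllib.parse import urlencode, quote

params = {"q": "1,2,3", "expr": "a=b&c"}
# urlencode escapes each key and value separately, so the structural
# '=' and '&' delimiters it inserts keep their special meaning.
print(urlencode(params, quote_via=quote))
# q=1%2C2%2C3&expr=a%3Db%26c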
* You could probably get away with only regarding those characters as invalid that are both not allowed by the RFC and that are also always encoded by Chrome for instance. This would be Space " < >. But it's probably better to be on the safer side and encode at least everything that RFC3986 considers invalid.
OP's questions
My first question is simply, is encoding commas really necessary? -> No, it's not necessary, but it doesn't hurt (except for ugliness). It will happen with default encoding methods such as encodeURIComponent, and decoding and query-string parsing should work nevertheless.
My second question, is this a valid URL (page.html?q=1;q=2;q=3)? -> It's RFC valid, but your server / web framework might have a hard time parsing the query string when it might expect the typical "key1=value1&key2=value2" format for query strings.
Where is the specification for this? -> There isn't a single specification that covers everything, because some things are implementation specific. For instance, there are different ways of specifying arrays inside query strings.
Just use ?q=1+2+3
I am answering a fourth question here :) that wasn't asked but started it all: how do I pass a list of numbers as comma-separated values? It seems to me the best approach is just to pass them space-separated, where the spaces get url-form-encoded to +. This works great, as long as you know the values in the list contain no spaces (something numbers tend not to have).
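In Python terms, just to illustrate the round trip (parse_qs turns the + back into a space):

from urllib.parse import urlencode, parse_qs

query = urlencode({"q": "1 2 3"})       # default quoting turns spaces into '+'
print(query)                            # q=1+2+3
print(parse_qs(query)["q"][0].split())  # ['1', '2', '3']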
page.html?q=1;q=2;q=3
is this a valid URL?
Yes. The ; is reserved, but not by an RFC. The context that defines this component is the definition of the application/x-www-form-urlencoded media type, which is part of the HTML standard (section 17.13.4.1). In particular the sneaky note hidden away in section B.2.2:
We recommend that HTTP server implementors, and in particular, CGI implementors support the use of ";" in place of "&" to save authors the trouble of escaping "&" characters in this manner.
Unfortunately many popular server-side scripting frameworks including ASP.NET do not support this usage.
I would like to note that page.html?q=1&q=2&q=3 is a valid url as well. This is a completely legitimate way of expressing an array in a query string. Your server technology will determine how exactly that is presented.
In Classic ASP, you check Request.QueryString("q").Count and then use Request.QueryString("q")(0) (and (1) and (2)).
Note that you saw this in your ASP.NET, too (I think it was not intended, but look):
Default.aspx?a=1;a=2&b=1&a=3
Request.QueryString["a"] = "1;a=2,3"
Request.QueryString["b"] = "1"
Notice that the semicolon is ignored, so you have a defined twice, and you got its value twice, separated by a comma. Using all ampersands Default.aspx?a=1&a=2&b=1&a=3 will yield a as "1,2,3". But I am sure there is a method to get each individual element, in case the elements themselves contain commas. It is simply the default property of the non-indexed QueryString that concatenates the sub-values together with comma separators.
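Other stacks expose the individual elements directly; for comparison, Python's parse_qs keeps each repeated value separate rather than comma-joining them (shown only as an aside):

from urllib.parse import parse_qs

print(parse_qs("a=1&a=2&b=1&a=3"))
# {'a': ['1', '2', '3'], 'b': ['1']}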
I had the same issue. The hyperlinked URL was a third-party URL that expected a list of parameters in the format page.html?q=1,2,3 ONLY, and the URL page.html?q=1%2C2%2C3 did not work. I was able to get it working using JavaScript. It may not be the best approach, but you can check out the solution here if it helps anyone.
If you are sending the ENCODED characters to a FLASH/SWF file, then you should ENCODE the characters twice!! (because of the Flash parser)

What is the semicolon reserved for in URLs?

The RFC 3986 URI: Generic Syntax specification lists a semicolon as a reserved (sub-delim) character:
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
What is the reserved purpose of the ";" (semicolon) in URIs? For that matter, what is the purpose of the other sub-delims? (I'm only aware of purposes for "&", "+", and "=".)
There is an explanation at the end of section 3.3.
Aside from dot-segments in hierarchical paths, a path segment is considered opaque by the generic syntax. URI producing applications often use the reserved characters allowed in a segment to delimit scheme-specific or dereference-handler-specific subcomponents. For example, the semicolon (";") and equals ("=") reserved characters are often used to delimit parameters and parameter values applicable to that segment. The comma (",") reserved character is often used for similar purposes. For example, one URI producer might use a segment such as "name;v=1.1" to indicate a reference to version 1.1 of "name", whereas another might use a segment such as "name,1.1" to indicate the same. Parameter types may be defined by scheme-specific semantics, but in most cases the syntax of a parameter is specific to the implementation of the URI's dereferencing algorithm.
In other words, it is reserved so that people who want a delimited list of something in the URL can safely use ; as a delimiter even if the parts contain ;, as long as the contents are percent-encoded. In other words, you can do this:
foo;bar;baz%3bqux
and interpret it as three parts: foo, bar, baz;qux. If the semicolon were not a reserved character, ";" and "%3b" would be equivalent, so the URI would be incorrectly interpreted as four parts: foo, bar, baz, qux.
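The split-before-decode order is the whole point; a quick Python sketch of both orders, using the example above:

from urllib.parse import unquote

segment = "foo;bar;baz%3bqux"
# Correct: split on the reserved delimiter first, then percent-decode.
print([unquote(p) for p in segment.split(";")])  # ['foo', 'bar', 'baz;qux']
# Wrong: decoding first erases the difference between ';' and '%3b'.
print(unquote(segment).split(";"))               # ['foo', 'bar', 'baz', 'qux']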
The intent is clearer if you go back to older versions of the specification:
path_segments = segment *( "/" segment )
segment = *pchar *( ";" param )
Each path segment may include a sequence of parameters, indicated by the semicolon ";" character.
I believe it has its origins in FTP URIs.
Section 3.3 covers this - it's an opaque delimiter a URI-producing application can use if convenient:
Aside from dot-segments in hierarchical paths, a path segment is considered opaque by the generic syntax. URI producing applications often use the reserved characters allowed in a segment to delimit scheme-specific or dereference-handler-specific subcomponents. For example, the semicolon (";") and equals ("=") reserved characters are often used to delimit parameters and parameter values applicable to that segment. The comma (",") reserved character is often used for similar purposes. For example, one URI producer might use a segment such as "name;v=1.1" to indicate a reference to version 1.1 of "name", whereas another might use a segment such as "name,1.1" to indicate the same. Parameter types may be defined by scheme-specific semantics, but in most cases the syntax of a parameter is specific to the implementation of the URI's dereferencing algorithm.
There are some conventions around its current usage that are interesting. These speak to when to use a semicolon or comma. From the book "RESTful Web Services":
Use punctuation characters to separate multiple pieces of data at the same level of hierarchy. Use commas when the order of the items matters, ... Use semicolons when the order doesn't matter.
Since 2014, path segments are known to contribute to Reflected File Download attacks. Let's assume we have a vulnerable API that reflects whatever we send to it:
https://google.com/s?q=rfd%22||calc||
{"results":["q", "rfd\"||calc||","I love rfd"]}
Now, this is harmless in a browser: as it's JSON it's not going to be rendered, but the browser will instead offer to download the response as a file. Here's where path segments come to help (the attacker, that is):
https://google.com/s;/setup.bat;?q=rfd%22||calc||
Everything between the semicolons (;/setup.bat;) will not be sent to the web service; instead, the browser will interpret it as the file name under which to save the API response.
So a file called setup.bat will be downloaded and run without a warning about the dangers of running files downloaded from the Internet (because it contains the word "setup" in its name). Its contents will be interpreted as a Windows batch file, and the calc.exe command will be run.
Prevention:
sanitize your API's input (in this case, they should just allow alphanumerics); escaping is not sufficient
add Content-Disposition: attachment; filename="whatever.txt" on APIs that are not going to be rendered; Google was missing the filename part which actually made the attack easier
add X-Content-Type-Options: nosniff header to API responses
I found the following use cases:
It's the final character of an HTML entity:
List of XML and HTML character entity references
To use one of these character entity references in an HTML or XML document, enter an ampersand followed by the entity name and a semicolon, e.g., &amp; for the ampersand ("&").
Apache Tomcat 7 (or newer versions?!) uses it as a path parameter:
Three Semicolon Vulnerabilities
Apache Tomcat is one example of a web server that supports "Path
Parameters". A path parameter is extra content after a file name,
separated by a semicolon. Any arbitrary content after a semicolon does
not affect the landing page of a web browser. This means that
http://example.com/index.jsp;derp will still return index.jsp, and not
some error page.
The data URI scheme uses it to split the MIME type and the data:
Data URI scheme
It can contain an optional character set parameter, separated from the preceding part by a semicolon (;).
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA
AAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO
9TXL0Y4OHwAAAABJRU5ErkJggg==" alt="Red dot" />
And there was a bug in IIS 5 and IIS 6 that allowed bypassing file upload restrictions:
Unrestricted File Upload
Blacklisting File Extensions This protection might be bypassed by: ...
by adding a semi-colon character after the forbidden extension and
before the permitted one (e.g. "file.asp;.jpg")
Conclusion:
Do not use semicolons in URLs or they could accidentally produce an HTML entity or URI scheme.
