Dot-dot removed from URL by Firefox

When I enter a URL like this, with .. in it:
http://SERVER:8085/../tn/d9dd6c39d71276487ae798d976f8f629_tn.jpg
the request arrives at my web server without the .. part.
Does Firefox remove it silently? Is .. not allowed in URLs?
P.S.: wget removes the .. as well :-(

I have recently begun seeing this and, despite what the marked answer states, adding this to a URL does make sense: it is a valid folder path in the world of IT security, where we intentionally bypass security measures in misconfigured sites in what are classified as Directory Traversal attacks.
Web tools (browsers, wget, curl, etc.) silently evaluate the URL path and strip out the "/../", making my job of finding vulnerabilities more difficult. To get around this, I use Firefox along with Burpsuite, a proxying assessment tool that captures the request and lets me modify it before it is sent to the server.
Doing this, I can type:
https://example.com/vpn/../vpns/cfg/etc
in my browser URL, and when I capture it using Burpsuite, it looks like:
https://example.com/vpns/cfg/etc
showing me that Firefox has in fact changed my original intended URL string. So within Burpsuite, I modify the request back to say:
GET /vpn/../vpns/cfg/etc HTTP/1.1
send it to the server, and voila, the path remains intact and navigates to the correct location. Yes, in a normal, well-configured application with proper request handling, doing this shouldn't be necessary. But this particular string acts differently in these two formats, so modifying it is necessary to make the server handle it the way we want and to show that there is a configuration problem with how the application handles the request (a Directory Traversal vulnerability).
This can also be proven using curl. If you send a normal curl command like the one below, curl will do the same as Firefox and evaluate the path, removing "/vpn/.." from it before sending the request to the server:
curl -i -s -k "https://example.com/vpn/../vpns/cfg/etc"
However, if you add the "--path-as-is" argument, curl will not modify it and send it as-is, and the "/vpn/.." remains intact:
curl -i -s -k "https://example.com/vpn/../vpns/cfg/etc" --path-as-is
After some additional reading, I found that this behavior is due in part to URI normalization standards (https://en.wikipedia.org/wiki/URI_normalization), which point to RFC 3986 for the definition of URI syntax: https://www.rfc-editor.org/rfc/rfc3986#section-5.2.4.
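For reference, here is a minimal Python sketch of the dot-segment removal described in RFC 3986, section 5.2.4. It is an illustration of the idea (absolute paths only), not any browser's actual implementation:

def remove_dot_segments(path):
    """Simplified dot-segment removal per RFC 3986, section 5.2.4."""
    output = []
    for segment in path.split("/"):
        if segment == "..":
            if len(output) > 1:
                output.pop()  # ".." climbs up: drop the previous segment
        elif segment != ".":
            output.append(segment)  # keep ordinary segments, drop "."
    return "/".join(output)

print(remove_dot_segments("/vpn/../vpns/cfg/etc"))  # -> /vpns/cfg/etc
print(remove_dot_segments("/../tn/d9dd6c39d71276487ae798d976f8f629_tn.jpg"))  # -> /tn/d9dd6c39d71276487ae798d976f8f629_tn.jpg

This is roughly the normalization that Firefox, wget, and curl (without --path-as-is) apply before the request ever leaves the client.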

".." means a relative path and used for moving up in the hierarchy. So ".." is not a valid name for a folder therefore you cannot use it in the middle of URL. It just makes no sense.
So to answer your question: ".." is allowed in url but only in the beginning.

Complementary information:
"../" will be stripped by the developer tools as well (up to 54.0.1 at least), meaning you cannot use the "Edit and resend" to hand-craft a valid request like this:
GET /../tn/d9dd6c39d71276487ae798d976f8f629_tn.jpg
... which could potentially result in a directory traversal and the file being retrieved.
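If no intercepting proxy is at hand, a raw socket also works, because nothing on the client side then normalizes the path. A minimal Python sketch (SERVER:8085 is the placeholder host from the question):

import socket

HOST, PORT = "SERVER", 8085  # placeholders from the question

request = (
    "GET /../tn/d9dd6c39d71276487ae798d976f8f629_tn.jpg HTTP/1.1\r\n"
    "Host: SERVER:8085\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(request.encode("ascii"))  # the path goes out exactly as written
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk
print(reply.decode("latin-1", errors="replace"))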

Related

Convert curl command with --form-string to a URL

I have this command (line breaks added between command-line parameters for readability):
curl \
  -s \
  --form-string "token=AppToken" \
  --form-string "user=UserToken" \
  --form-string "message=Msg" \
  --form-string "title=Title" \
  https://api.pushover.net/1/messages.json
Can you tell me if this command can be converted into a URL link?
It cannot.
That curl command performs a POST with a multipart/form-data request body (--form-string works like -F, so each field is sent as a multipart form part).
"Links" always produce GET requests, never POST requests.
<a href="#"> links in HTML and on the web can only make GET requests without a request body (at least, not without custom JavaScript interception).
In desktop software frameworks and toolkits that have built-in hyperlink widgets, I find (in my personal experience) that they're similarly designed around the assumption that they'll be used to open the URL of a web page, so they pass the URL to the user's default browser, which will only make a GET request.
This is because following a link (i.e. executing a GET request) must always be "safe" (i.e. GET requests should not mutate resource state).
Additionally, "links" cannot have a request body.
While GET requests can technically have a request body, support for that is not widespread, and in any case a bare URI in a hyperlink has no request-body data associated with it.
GET request bodies are intended to let user agents make GET requests with associated request/query data that is too long to fit into the query string of the URI (due to common 1024- or 2048-character limits).
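If the goal is simply to trigger that request from code rather than from a link, the POST itself is easy to reproduce. A sketch using Python's requests library; the (None, value) tuples make requests send plain multipart form fields, which approximates what --form-string does:

import requests

# Multipart POST roughly equivalent to the curl --form-string command.
# The token/user values are the placeholders from the question.
response = requests.post(
    "https://api.pushover.net/1/messages.json",
    files={
        "token": (None, "AppToken"),
        "user": (None, "UserToken"),
        "message": (None, "Msg"),
        "title": (None, "Title"),
    },
)
print(response.status_code, response.text)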

Nginx - What precautions need to be taken when I turn underscores_in_headers on?

I'm writing a Rails application and passing a custom access token through the HTTP headers. To accommodate this, I need to turn on underscores_in_headers in nginx.conf for my code to run. (See Rails Not able to access headers after moving to Digital Ocean.)
Because this option is off by default, I assume there are security risks in turning it on. However, I have been unable to find an explanation of what these risks or concerns are. What are they, and how do I account for them in my code?
Thanks!
According to the Nginx Pitfalls...
This is done in order to prevent ambiguities when mapping headers to CGI variables, as both dashes and underscores are mapped to underscores during that process.
So it looks like a question of avoiding collisions between variable names. FWIW, the applicable RFC 7230, sec. 3.2.6 specifically allows underscores, and RFC 3875, sec. 4.1.18 states that:
The HTTP header field name is converted to upper case, has all occurrences of "-" replaced with "_" and has "HTTP_" prepended to give the meta-variable name.
The security problem, then, is related to this conversion of "-" to "_" and to how receiving applications then access the resulting variables. For instance, "User-Agent" would be mapped to "User_Agent" by the server, and then in PHP (for example) the CGI environment variable is accessed as:
$_SERVER['HTTP_USER_AGENT']
In rails:
request.env['HTTP_USER_AGENT']
So what happens if the client sends "User_Agent" instead of "User-Agent"? The underscore is left in place, and "HTTP_USER_AGENT" will have been explicitly set by a client script (normally it is set by the browser). The following post from 2007 discusses the potential to exploit this process:
Exploiting reflected XSS vulnerabilities, where user input must come through HTTP Request Headers
That post suggests there is a problem if the server app "insecurely prints" the header value (to the client browser) and in the example it would presumably execute a javascript alert popup. It's just an example though.
The question is, does the problem still exist? Well, yes. See the following post that discusses the Shellshock vulnerability where the same idea is used to exploit the BASH shell:
Inside Shellshock: How hackers are using it to exploit systems
Therefore, if you intend to parse any header with an older version of Bash, you need to be aware of the vulnerability presented by Shellshock. At the end of the day, you should always take care to sanitize any data value sent to your application from outside your control.
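To make the collision concrete, here is a small Python sketch of the CGI-style name conversion both headers go through; it is not Nginx's code, just an illustration of the RFC 3875 mapping quoted above:

def cgi_meta_variable(header_name):
    """RFC 3875-style mapping: upper-case, '-' -> '_', prepend 'HTTP_'."""
    return "HTTP_" + header_name.upper().replace("-", "_")

print(cgi_meta_variable("User-Agent"))  # HTTP_USER_AGENT
print(cgi_meta_variable("User_Agent"))  # HTTP_USER_AGENT  <- same key, so a forged header can pose as the real one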

Are unnecessary slashes in a URL bad?

I noticed that https://stackoverflow.com//////////questions/4659504/ is a valid URL. However, https://www.google.com//////////analytics/settings is not. Are there differences inherent in web server technologies that explain this? Should a URL with unnecessary slashes be interpreted correctly, or should it return an error?
First of all, adding a slash changes the semantics of a URL path like any other character does. So by definition /foo/bar and /foo//bar are not equivalent just as /foo/bar and /foo/bar/ are not equivalent.
But since the URL path is usually mapped directly onto the file system, web servers often remove empty path segments (Apache does this), so that /foo//bar and /foo/bar are handled equivalently. This is not guaranteed behavior, though; it is done as error correction.
They are both valid URLs.
However, Google's server can't handle the second one.
There is no specific reason to either handle or reject URLs with duplicate slashes; you should spend more time on more important things.
What do you consider "interpreted correctly"? HTTP only really specifies how the part of the URL up to the first slash after the server name gets interpreted. The rest is entirely up to the web server: it parses what you give it after that point (in whatever manner it likes) and presents you with whatever HTML it feels like providing for that text.
Every application processes requests differently. If you set up your app to collapse consecutive slashes before routing the request, you shouldn't have any problems.
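One way to do that is to normalize the path before it reaches your routes; a minimal, framework-agnostic Python sketch:

import re

def collapse_slashes(path):
    """Replace runs of consecutive slashes in a URL path with a single slash."""
    return re.sub(r"/{2,}", "/", path)

print(collapse_slashes("//////////questions/4659504/"))  # -> /questions/4659504/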

commandline curl POST gives zero content length

I'm using command-line curl to do a POST via a proxy, but my form data is vanishing, leaving a content length of zero. Any ideas what I'm doing wrong?
Here's my command line (uses a public test form so others can try it):
curl -v --proxy-ntlm --proxy proxyserver:proxyport --proxy-user : -d "fname=a&lname=b" http://www.snee.com/xml/crud/posttest.cgi
-v = verbose
the next few arguments get us through the proxy using Windows authentication
-d = should do a POST with the given arguments
However, both the response and the verbose printout suggest the form content is vanishing: curl prints "Content-Length: 0", and the returned HTML shows both arguments missing and a content length of 0.
The bug doesn't seem to be in the proxy server, as curl itself reports that it is sending a content length of 0. Does anyone know a solution to this problem? Has anyone else seen it?
Update: this person appears to have the same bug, but no solution was suggested, apart from not using NTLM, which I have to use.
Update 2: This definitely only happens with NTLM authentication; I've tried an alternative authentication method, which works. Also, using -F instead of -d (for binary form data) fails in the same way.
Update 3 (workaround): I've had some discussion on the curl-users list about this. A workaround was provided: use --proxy-anyauth instead of --proxy-ntlm. I'm still investigating the problem, but this workaround works for me.
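Applied to the original command line, the workaround looks like this:
curl -v --proxy-anyauth --proxy proxyserver:proxyport --proxy-user : -d "fname=a&lname=b" http://www.snee.com/xml/crud/posttest.cgi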
NTLM is a challenge-response protocol. When you indicate that you're going to use NTLM, a client will first send a request without the body (to avoid wasting the bandwidth of sending the body only to have it rejected by the HTTP/401 challenge from the server). Only once the Challenge/Response protocol is complete will the body actually be posted.
This causes a number of problems in cases where the client expects NTLM but the proxy or server has no idea (and thus acts on the 0-byte POST, never challenging the client).
I was having this problem making a request using HTTP digest. Eric is correct: curl is trying to be clever and not post any data, because it knows it will have to make the request again after it receives the challenge from the server.
It turns out that if you provide the --anyauth option (which asks curl to autodetect the authentication method), the initial request will include all the POST data, and (in my case) the server responded as expected.

How do you see the client-side URL in ColdFusion?

Let's say, on a ColdFusion site, that the user has navigated to
http://www.example.com/sub1/
The server-side code typically used to tell you what URL the user is at looks like:
http://#cgi.server_name##cgi.script_name#?#cgi.query_string#
however, "cgi.script_name" automatically includes the default .cfm file for that folder. E.g., that code, when parsed and expanded, will show us "http://www.example.com/sub1/index.cfm".
So, whether the user visits sub1/index.cfm or sub1/, the "cgi.script_name" var is going to include that "index.cfm".
The question is: how does one figure out which URL the user actually visited? This is mostly for SEO purposes. It's often preferable to 301-redirect "/index.cfm" to "/" to make sure there's only one URL for any piece of content. Since this is mostly for the benefit of spiders, JavaScript isn't an appropriate solution in this case. Also, assume one does not have access to isapi_rewrite or mod_rewrite; the question is how to achieve this within ColdFusion specifically.
I suppose this won't be possible.
If the client requests "GET /", the web server translates it to "GET /{whatever-default-file-exists-first}" before ColdFusion is even invoked. (This is necessary for the web server to know that ColdFusion has to be invoked in the first place!)
From ColdFusion's (or any application server's) perspective, the client requested "GET /index.cfm", and that's what you see in #CGI#.
As you've pointed out yourself, it would be possible to make a distinction by using a URL-rewriting tool. Since you specifically excluded that path, I can only say that you're out of luck here.
Not sure it is possible using CF only, but you can pull off the trick using the web server's URL rewriting, if you're using it, of course.
For Apache it can look like this. Say we're using the following mod_rewrite rule:
RewriteRule ^page/([0-9]+)/?$ index.cfm?page=$1&noindex=yes [L]
Now, when we access the URL http://website.com/page/10/, CGI shows:
QUERY_STRING page=10&noindex=yes
See the idea? I think the same thing is possible with IIS.
Hope this helps.
I do not think this is possible in CF. From my understanding, the web server (Apache, IIS, etc.) determines what default page to show and requests it from CF. Therefore, CF does not know what the actually requested page is.
Sergii is right that you could use URL rewriting to do this. If that is not available to you, you could use the fact that a specific page is given precedence in the list of default pages.
Let's assume that default.htm is the first page in the list of default pages. Write a generic default.htm that automatically forwards to index.cfm (or whatever). If you can adjust the list of defaults, you can have CF do a 301 redirect. If not, you can do a meta refresh, a JS redirect, or some such in an HTML file.
I think this is possible.
Using GetHttpRequestData you will have access to all the HTTP headers.
Then the GET header in that should tell you what file the browser is requesting.
Try
<cfdump var="#GetHttpRequestData()#">
to see exactly what you have available to use.
Note: I don't have ColdFusion to hand to verify this.
Edit: Having done some more research it appears that GetHttpRequestData doesn't include the GET header. So this method probably won't work.
I am sure there is a way however - try dumping the CGI scope and see what you have.
If you are able to install ISAPI_Rewrite (assuming you're on IIS): http://www.helicontech.com/isapi_rewrite/
It will insert a variable, x-rewrite-url, into the GetHttpRequestData() result structure, which will contain either / or /index.cfm depending on which URL was visited.
Martin
