Web Hosting URL Length Limit?

I am designing a web application which is a tie-in to my iPhone application. It sends massively large URLs to the web server (about 15,000 characters). I was using NearlyFreeSpeech.net, but they only support URLs up to 2,000 characters. I was wondering if anybody knows of web hosting that will support really large URLs? Thanks, Isaac
Edit: My program needs to open a picture in Safari. I could do this two ways:
1. Send it base64-encoded in the URL and just echo the query parameters.
2. First POST it to the server from my application; the server would store the photo in a database and send back a unique ID, which I would append to a URL that I would open in Safari. That page would retrieve the photo from the database and then delete it.
You see, I am lazy, and I know Mobile Safari can support URIs up to 80,000 characters, so I think this is an OK way to do it. If there is something really wrong with this, please tell me.
Edit: I ended up doing it the proper POST way. Thanks.

If you're sending 15,000-character-long URLs, in all likelihood:
(Image: the "You're doing it wrong" meme.)
Use something like an HTTP POST instead.
The limitations you're running up against aren't so much an issue with the hosts - it's more the fact that web servers have a limit for the length of a URL. According to this page, Apache limits you to around 4k characters, and IIS limits you to 16k by default.
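You can see such a limit in action with a quick sketch (the URL is a placeholder, and the exact cutoff depends on the server configuration):

import requests  # third-party: pip install requests

# Build a GET request with a very long query string.
resp = requests.get("http://example.com/", params={"data": "x" * 20000})
# Servers that enforce a URL length limit typically answer
# 414 Request-URI Too Long.
print(resp.status_code)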

Although it's not directly answering your question, and there is no official maximum length of a URL, browsers and servers have practical limits - see http://www.boutell.com/newfaq/misc/urllength.html for some details. In short, since IE (at least some versions in use) doesn't support URLs over 2,083 characters, it's probably wise to stay below that length.

If you need to just open it in Safari, and the server doesn't need to be involved, why not use a data: URI?
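A rough sketch of building one (file name and MIME type are assumptions):

import base64

# Read the image and embed it directly in a data: URI,
# so no server round trip is needed at all.
with open("photo.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")
uri = "data:image/jpeg;base64," + encoded
# Hand `uri` to Safari instead of an http:// URL.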
Sending long URIs over the network is basically never the right thing to do. As you noticed, some web hosts don't support long URIs. Some proxy servers may also choke on long URLs, which means that your app might not work for users who are behind those proxies. If you ever need to port your app to a different browser, other browsers may not support URIs that long.
If you need to get data up to a server, use a POST. Yes, it's an extra round trip, but it will be much more reliable.
Also, if you are uploading data to the server using a GET request, then you are vulnerable to all kinds of cross-site request forgery attacks; basically, an attacker can trick the user into uploading, say, goatse to their account simply by getting them to click on a link (perhaps hidden by TinyURL or another URL shortening service, or just embedded as a link in a web page when they don't look closely at the URL they're clicking on).
You should never use GET for sending data to the server, beyond query parameters that don't actually change anything on the server.
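Here's a minimal sketch of the POST-then-open-a-short-URL flow; the endpoint and response shape are hypothetical:

import requests  # third-party: pip install requests

# 1. POST the photo body to the server instead of stuffing it into a URL.
with open("photo.jpg", "rb") as f:
    resp = requests.post("https://example.com/photos", files={"photo": f})

# 2. The server stores it and replies with a short ID (assumed JSON shape).
photo_id = resp.json()["id"]

# 3. Open a short, safe URL in the browser.
url = "https://example.com/photos/" + photo_id
print(url)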

Related

Handling large URL query parameters for SPA

So, I've recently finished my SPA and published it online. The application allows you to create content and share it by providing a permalink. The permalink is generated by stringifying the object, encrypting it, making it URL-safe, and tacking it onto the base URL as a query parameter.
The problem I'm facing is that when the user creates content that causes the JS object to be large, the URL of course becomes large as well. I want the application to be able to handle any size, but my site crashes with a Request-URI Too Long error.
The alternative I've considered is setting up a back-end that can take the data and provide an id of some kind to use in the url instead, so my application can just call the back-end with the id to fetch the data.
I'd like to avoid doing that if possible though, as I don't really feel like paying for a server on top of already paying for my site hosting. I'm hosting the site on my GoDaddy account, but have seen other sites handle obscenely large URLs through NameCheap; not sure if that has something to do with it.
Hash the content with a hash function such as SHA-256, Base64-encode the hash, URL-encode it, and use that as the permalink or at least part of it.
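A sketch of that in Python (note that a hash is one-way, so the server or some datastore still has to map the hash back to the stored content):

import base64
import hashlib

content = '{"title": "example", "body": "user-created content"}'  # hypothetical stringified object
digest = hashlib.sha256(content.encode("utf-8")).digest()
# urlsafe_b64encode already avoids characters that would need percent-encoding.
key = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
permalink = "https://example.com/view?id=" + key  # placeholder base URL
print(permalink)  # fixed length no matter how large the content grows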

Request URL Too Long (20K characters) IIS 7

I have an iPad application which sends data to a .NET application. The iPad application was written by a bunch of monkeys who implemented all the requests as GET instead of POST.
The application is live now, and with the client's data it is sending requests of over 20k characters, which gives me this response (using Safari, which has been tested to work with URLs of at least 80k characters):
Generic 414 Error
Instead of the detailed IIS response I would get if, say, the request exceeded the requestFiltering/maxURL value in the web.config, which looks like this:
IIS 414.14 Error
Since I am getting the generic error message instead of the IIS-specific message, it makes me think this is not due to something I can fix in configuration settings (I have maxURL set to 2 billion, just to be safe...)
I understand that the requests should be using POST, but I don't really have time to rewrite the iPad application at the moment, and all of my research has only turned up unhelpful responses that say "you should limit GET requests to 2K characters" or "you should use a POST instead of a GET". If that is the only feedback you have, please don't bother answering. (For instance, I am aware of this question and its answers.)
I need to know if I can throw in a quick workaround to make this function until I have time to do it the right way. And I'm also wondering if anyone knows about hard limits on URL lengths from either the iOS or IIS side, because I can't find any specifics.
Edit: My httpRuntime parameters are also set to accept far more than 20k characters.
I know this is an old one, but in case someone faces it like I did yesterday: setting web.config parameters didn't help me either. What I found was this MS article: http://support.microsoft.com/kb/820129/en-us.
I've added two DWORD keys to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters:
MaxFieldLength = 65534
MaxRequestBytes = 100000
NOTE: you need to restart your server, or at least the HTTP service, to make these keys work. After the restart I managed to send a request with a query string length of up to ~32k characters (I don't know why only ~32k; maybe character encoding?). So I guess this is the limit for URL length in Windows 2003 and up.
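If you'd rather script this than click through regedit, here's a minimal sketch using Python's standard winreg module (run as Administrator; the values are the ones above, and the key path is from the KB article):

import winreg

# Same keys the KB article tells you to add by hand.
path = r"System\CurrentControlSet\Services\HTTP\Parameters"
key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, path)
winreg.SetValueEx(key, "MaxFieldLength", 0, winreg.REG_DWORD, 65534)
winreg.SetValueEx(key, "MaxRequestBytes", 0, winreg.REG_DWORD, 100000)
winreg.CloseKey(key)
# Remember: restart the HTTP service (or the machine) afterwards.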
If you are seeing this in multiple clients, it's likely to be your server settings. In .NET 4.0's httpRuntime section, the maximum values for both maxUrlLength and maxQueryStringLength are a 32-bit signed integer, or 2147483647.
<httpRuntime maxUrlLength="2147483647" maxQueryStringLength="2147483647" />

What are the steps involved from entering a web site address to the page being displayed on the browser?

And how can the process be speeded up from a developer point of view?
There are a lot of things going on.
When you first type in an address, the browser will lookup the hostname in DNS, if it is not already in the browser cache.
Then the browser sends an HTTP GET request to the remote server.
What happens on the server is really up to the server; but it should respond back with a HTTP response, that includes headers, which perhaps describe the content to the browser and how long it is allowed to be cached. The response might be a redirect, in which case the browser will send another request to the redirected page.
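For instance, you can inspect those headers yourself; a small sketch (the URL is a placeholder):

import urllib.request

resp = urllib.request.urlopen("http://example.com/")
# The status plus the caching/content headers the browser acts on.
print(resp.status)
print(resp.headers.get("Content-Type"))
print(resp.headers.get("Cache-Control"))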
Obviously, server response time will be one of the critical points for perceived performance, but there are many other things to it.
When a response is returned from the server, the browser will do a few things. First it will parse the HTML returned and create its DOM (Document Object Model) from that. Then it will run any startup JavaScript on the page, before the page is ready to be displayed in the browser. Remember that if the page contains any resources such as external stylesheets, scripts, images and so on, the browser will have to download those before it can display the page. Each resource is a separate HTTP GET, and there is some latency involved for each one. Therefore, one thing that can in some cases greatly reduce load times is to use as few external resources as possible, and to make sure they are cached on the client (so the browser doesn't have to fetch them for each page view).
To summarize, to optimize performance for a web page, you want to look at, as a minimum:
Server response time
Bandwidth / content transfer time.
Make sure you have a small and simple DOM (especially if you need to support IE6).
Make sure you understand client side caching and the settings you need to set on the server.
Make sure you make the client download as little data as possible. Consider GZipping resources and perhaps dynamic content too, depending on your situation (see the quick illustration after this list).
Make sure you don't have any CPU-intensive JavaScript on page load.
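As a quick illustration of the GZip point above (a toy example; real servers do this via configuration, e.g. Apache's mod_deflate):

import gzip

# Repetitive markup, like most HTML, compresses extremely well.
html = b"<html><body>" + b"<p>hello world</p>" * 1000 + b"</body></html>"
compressed = gzip.compress(html)
print(len(html), "bytes raw ->", len(compressed), "bytes gzipped")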
You might want to read up on the HTTP protocol, as well as some of the best practices. A couple of tools you can use are YSlow and Google Page Speed.
What are the steps involved from entering a web site address to the page being displayed on the browser?
The steps are something like:
Get the IP address of the URL
Open a TCP connection to the IP address, and send an HTTP request for the specified page
Receive/download the page via TCP/HTTP; the page may consist of several files/downloads: e.g. the HTML document, CSS files, JavaScript files, image files ...
Render the page
And how can the process be speeded up from a developer point of view?
Measure to discover which of these steps is slow:
It's only worth optimizing whichever step is the slow one (no point in optimizing steps which are already fast)
The answer to your question varies depending on which step it is.
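A crude starting point for that measurement (a Python sketch; tools like YSlow break the steps apart properly):

import time
import urllib.request

start = time.perf_counter()
resp = urllib.request.urlopen("http://example.com/")  # placeholder URL
body = resp.read()
elapsed = time.perf_counter() - start
# This lumps DNS + connect + server time + transfer together;
# if it's slow, narrow down which step dominates.
print(len(body), "bytes in", round(elapsed, 3), "seconds")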

How large can a HTTP form parameter string be?

How large can a HTTP form parameter string be?
If I want to pass say a couple of pages worth of XML will a form parameter be ok? Or if not would I need to treat it as a file upload to be sure?
Thanks
Assuming you're sending the content via POST, rather than as part of the query string in a GET, there's no universal limit. Some servers may constrain POST requests to some specific size to reduce the risk of denial-of-service attacks, but those limits are likely to be in the 1-8 megabyte range on most server configurations.
As a developer, you'll probably be able to configure that limit if there is one; in Rails, the mechanism depends on what HTTP application server you're using. Mongrel sets it in a Const::MAX_BODY, I think.
File Upload is just a specially encoded POST request, so it won't have much effect on limits, if any.
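For example, both of the following are POSTs and hit the same body-size limits; the endpoint is hypothetical:

import requests  # third-party: pip install requests

xml = "<doc>" + "<item>data</item>" * 500 + "</doc>"

# As an ordinary form field...
requests.post("https://example.com/submit", data={"payload": xml})

# ...or as a multipart/form-data "file upload"; it's the same POST underneath.
requests.post("https://example.com/submit", files={"payload": ("doc.xml", xml)})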
If you're passing them in via the POST method, there is no limit.
If you're passing them in via a GET request, then the limit varies depending on the browser, proxy software, web server, and so on; you generally should make sure that you have less than 2000 characters in your URL as a whole, to be on the safe side (IE has a limit of 2048 characters in the path portion of the URL).
There is definitely a limit, both on client and server.
Limits are imposed by the web browser: the maximum URL length is 2,083 characters in Internet Explorer (http://support.microsoft.com/kb/208427/en-us).
On the IIS web server, the default limit is 16,384 characters; see MaxFieldLength in http://support.microsoft.com/kb/820129/en-us.

Best way to redirect image requests to a different webserver?

I am trying to reduce the load on my webservers by adding an "Image server" (a dedicated server for handling image requests), and redirecting all requests for .gif,.jpg,.png etc., to it.
My question is, what is the best way to handle the redirection?
At the firewall level? (can I do this using iptables?)
At the load balancer level? (can ldirectord handle this?)
At the Apache level - using rewrite rules?
Thanks for any suggestions on the best way to do this.
--Update--
One thing I would add is that these are domains that are hosted for 3rd parties, so I can't expect all the developers to modify their code and point their images to another server.
The further up the chain you can do it, the better.
Ideally, do it at the DNS level by using a different domain for your images (e.g. imgs.example.com)
If you can afford it, get someone else to do it by using a CDN (Content delivery network).
-Update-
There are also two features of Apache's mod_rewrite that you might want to look at. They are both described well at http://httpd.apache.org/docs/1.3/misc/rewriteguide.html.
The first is under the heading "Dynamic Mirror" in the above document; it uses the mod_rewrite proxy flag [P]. This lets your server silently fetch files from another domain and return them.
The second is to just redirect the request to the new domain. This option puts less strain on your server, but requests still need to come in, and it slows down the final rendering of the page, as each image request first makes an essentially redundant round trip to your server.
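A minimal sketch of both approaches in Apache server-config context (hostnames are placeholders, and the [P] form also requires mod_proxy to be enabled):

# Dynamic mirror: silently proxy image requests to the image server.
RewriteEngine On
RewriteRule ^/images/(.*)$ http://images.example.com/images/$1 [P]

# Alternative: send the browser an external redirect instead.
# RewriteRule ^/images/(.*)$ http://images.example.com/images/$1 [R,L]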
I agree with rikh. If you want images to be served from a different web server, then serve them from a different web server. For example:
<IMG src="images/Brett.jpg">
becomes
<IMG src="http://brettnesbitt.akamia-technologies.com/images/Brett.jpg">
Any kind of load balancer will still feed the image from the web-server's pipe, which is what you're trying to avoid.
I, of course, know what you really want. What you really want is for any request like:
GET images/Brett.jpg HTTP/1.1
to automatically get converted into:
HTTP/1.1 307 Temporary Redirect
Location: http://brettnesbitt.akamia-technologies.com/images/Brett.jpg
This way you don't have to do any work, except copy the images to the other web server.
That, I really don't know how to do.
Your use of the phrase "NAT" implies that the firewall/router receives HTTP requests, and you want it to forward the request to a different internal server if the HTTP request was for image files.
This then raises the question of what you're actually trying to save. No matter which internal web server services the HTTP request, the data is still going to have to flow through the firewall/router's pipe.
The reason I bring it up is that in the common scenario, someone who wants to serve images from a different server wants to split high-bandwidth, mostly static, low-CPU-cost content off from their actual logic.
Using NAT only to rewrite the packet and send it to a different server does nothing to address that issue.
The other reason might be that images are not static content on your system, and a request to
GET images/Brett.jpg HTTP/1.1
actually builds an image on the fly, at a high CPU cost, or uses data (i.e. a SQL Server database) available only to ServerB.
If this is the case, then I would still use a different server name on the image request:
GET http://www.brettsoft.com/default.aspx HTTP/1.1
GET http://imageserver.brettsoft.com/images/Brett.jpg HTTP/1.1
I understand what you're hoping for, with network packet inspection to override the NAT rule and send the request to another server - I've never seen anything that can do that.
It sounds more "proxy-ish", where a web proxy does this (pfSense and m0n0wall, for instance, can't do it).
Which then leads to a kind of solution we used once: a custom web server that analyzes the request, makes the appropriate request to some internal server, and writes the binary response back to the client.
That pain in the ass solution was insisted upon by a "security consultant", who apparently believes in security through obscurity.
I know IIS cannot do such things for you itself; I don't know about other web-server products.
I just asked around, and apparently if you wanted to write a custom kernel module for your Linux-based router, you could have it inspect packets and take appropriate action. Such a module might exist. There are, apparently, plenty of other open-source modules to use as a starting point.
But I'd rather shoot myself in the head.
