Is there a default limit on request message size in NuSOAP?

Is there a default limit on request message size in NuSOAP? I am asking because when I send 194 KB of CSV data from a NuSOAP client to a NuSOAP server, I get the following response from the server:
HTTP/1.1 100 Continue
HTTP/1.0 500 Internal Server Error
Date: Fri, 13 Apr 2012 04:36:36 GMT
Server: Apache/2.2.3 (CentOS)
X-Powered-By: PHP/5.2.6
Content-Length: 0
Connection: close
Content-Type: text/html
I have tried looking at the error log files for Apache and PHP, but nothing can be found there.
I have been fighting with this issue for a few hours and have tried searching around for an answer. Some posts recommended increasing the memory limit in php.ini; I did that with no luck. Your help is greatly appreciated.
--Abdul

I think message size is limited by the PHP memory limit rather than by some hardcoded value. At least I could send a 6.5 MB string without any problems. When I tried to send an 8 MB string, I got an out-of-memory exception inside nusoap.php (my server has a 64 MB limit for PHP).
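If you do hit that ceiling, the knobs live in php.ini (the values below are illustrative, not recommendations; post_max_size is worth checking too, since SOAP requests arrive as POST bodies):

    ; php.ini -- values are illustrative
    memory_limit = 128M       ; per-request memory ceiling NuSOAP runs under
    post_max_size = 64M       ; large SOAP payloads arrive as POST bodies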

Related

MongoDB schema for a Rails app

I am working on an internal app to do host/service discovery. The type of data I am storing looks like:
IP Address: 10.40.10.6
DNS Name: wiki-internal.domain.com
1st open port:
port 80 open|close
open port banner:
HTTP/1.1 200 OK
Date: Tue, 07 Jan 2014 08:58:45 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.3
Content-Length: 0
Connection: close
Content-Type: text/html; charset=UTF-8
And so on. My first thought was to just put it all in one document with a string that identifies what the data is, like "port", "80". After initial data collection I realized there was a lot of duplication, because web server banners and the like often get reused. Out of 8400 machines with SSH, for example, there are only 6 different banners.
Is there a better way to design the database with references, so that each distinct banner only gets created once? Performance is a big issue, since the database size will double in the next year. If possible I would like to keep historical banner information for trending.
MongoDB's flexible schema allows you to match the needs of your application. While we often talk about denormalizing for speed, you can certainly normalize to reduce redundancy and storage costs. From your initial analysis and your concern over database size, it seems clear that factoring out the redundancy fits your application: store banners separately and reference them with small integer _ids.
So do what works for your application, and store your data in MongoDB in the form that matches those needs.
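A minimal sketch of that layout in Python with pymongo (the database, collection, and field names are all hypothetical):

    # Sketch: store each distinct banner once; reference it by a small int _id.
    from pymongo import MongoClient

    db = MongoClient()["discovery"]

    def banner_id(text):
        """Return the _id for this banner, inserting it on first sight."""
        doc = db.banners.find_one({"text": text})
        if doc:
            return doc["_id"]
        # Naive id allocation for illustration; a real app would use a
        # counter document or an atomic find-and-modify to avoid races.
        new_id = db.banners.count_documents({}) + 1
        db.banners.insert_one({"_id": new_id, "text": text})
        return new_id

    db.hosts.insert_one({
        "ip": "10.40.10.6",
        "dns": "wiki-internal.domain.com",
        "ports": [
            {"port": 80, "state": "open",
             "banner_id": banner_id("HTTP/1.1 200 OK ...")},
        ],
    })

Historical trending then costs one small reference per sample instead of a full banner copy.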

Request-URI Too Large :: IBM HTTP Server

We are using IBM HTTP Server Version 7.0.
I am getting the following error message on a web page:
Request-URI Too Large
The requested URL's length exceeds the capacity limit for this server
IBM_HTTP_Server at WIN2K862 Port 80
Is LimitRequestLine useful here? What changes do I need to make to resolve this issue?
Yes, LimitRequestLine is exactly the directive to increase.
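IBM HTTP Server is Apache-based, so the change goes in httpd.conf. A hedged example (the 16384 value is illustrative; long query strings often overflow the header-field limit too, so it is worth raising both):

    # httpd.conf -- both directives default to 8190 bytes
    LimitRequestLine 16384
    LimitRequestFieldSize 16384

Restart the server after changing them.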

How to set the max length of a web service parameter

The system's configuration is:
web server: WEBrick
software environment: Ruby on Rails
When the browser passes more than 400 bytes of parameters to the server, the server returns a 414 code (Request-URI Too Large). How can I solve this problem?
Ryan Bates answered your question here: https://github.com/intridea/omniauth/issues/43
In short: use Mongrel in development.
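For a Bundler-era Rails app that looks roughly like this (the exact setup is illustrative; WEBrick's request-line limit is hardcoded, so the practical fix is swapping servers):

    # Gemfile
    group :development do
      gem 'mongrel'
    end

Then run bundle install and start the app with rails server mongrel.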
A few useful details:
URI limits vary by client, by server, and even by browser.
Browsers
IE has a limit of around 2 KB, Firefox around 65 KB. Since API calls are usually triggered from servers, this is not that annoying.
Servers
Nginx's default limit is 4 KB on 32-bit systems and 8 KB on 64-bit systems.
Apache's default is 8190 bytes.
Both can be changed in their config files.
Source: the excellent Service-Oriented Design with Ruby and Rails.
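On the Nginx side, for example, that is a one-line change (the 16k value is illustrative); Apache's equivalent is the LimitRequestLine directive shown in the previous answer:

    # nginx.conf, inside the http block -- default is 8k buffers on 64-bit
    large_client_header_buffers 4 16k;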

Invalid Content-Length error while uploading a 2 GB stream

When trying to upload a 2 GB stream I get an invalid Content-Length error.
I am running Apache as a frontend server to Mongrel, with Chrome as my browser.
One more thing: when I do it with Mongrel alone, I am able to upload the 2 GB stream. Could anybody tell me what the problem is and how to configure the Content-Length limit in Apache?
I'd imagine the problem is that Apache considers it a denial-of-service attempt and has a cap to prevent you from locking up all the server's resources (very reasonable). Whether that's configurable or not I'm not sure; I can't find anything, but I will keep hunting. If it's not, you can always build your own copy of Apache with the limit removed.
Have you considered sending the data in reasonable-sized chunks?
You might also wish to inspect the outgoing request using packet inspection or browser debugging tools. It's possible the Content-Length header is being malformed by the browser (I doubt they have a 2 GB test case...).
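If a configured cap is the culprit, the Apache directive to look at is LimitRequestBody. A sketch (note that 32-bit Apache builds can fail on bodies near 2 GB regardless of this setting):

    # httpd.conf -- caps the request body size; 0 means "no limit"
    # 32-bit builds may still reject bodies near 2 GB regardless.
    LimitRequestBody 0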

Is there a way to tell server up time from http response?

I'm looking for a way to find out how long a server has been up based only on the page it sends back. Like, if I went to www.google.com, is there some sort of response header variable that tells how long the server I connected to has been up? I'm doubting there is, but never hurts to ask...
No, because HTTP, as a protocol, doesn't really care. In any case, 50 different requests to google.com may end up at 50 different servers.
If you want that information, you need to build it into the application, something like "http://google.com/uptime" which will deliver Google's view of how long it's been up - Google will probably do this as a static page showing the date of their inception :-).
Not from HTTP.
It is possible, however, to discover uptime on many OSs by interrogating the TCP packets received. Look to RFC 1323 for more information. I believe the timestamp option is incremented at a steady rate while the host is up, and reset to zero on reboot.
Caveats: it doesn't work with all OSs, and you've got to track servers over time to get accurate uptime data.
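A rough sketch of the idea in Python with Scapy (requires root; the host is a placeholder, and many modern stacks randomize or disable TCP timestamps, so treat any result as an estimate):

    # Estimate remote uptime from the TCP timestamp option (RFC 1323).
    import time
    from scapy.all import IP, TCP, sr1

    def tsval(host, port=80):
        """Send a SYN carrying the timestamp option; return the peer's TSval."""
        syn = IP(dst=host) / TCP(dport=port, flags="S",
                                 options=[("Timestamp", (0, 0))])
        synack = sr1(syn, timeout=2, verbose=False)
        for name, value in synack[TCP].options:
            if name == "Timestamp":
                return value[0]  # ticks counted since some epoch, often boot
        return None

    t1 = tsval("example.com")
    time.sleep(10)
    t2 = tsval("example.com")
    hz = (t2 - t1) / 10.0  # estimate the tick rate (commonly 100-1000 Hz)
    print("uptime estimate: %.1f days" % (t2 / hz / 86400.0))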
Netcraft does this; their site gives a vague description:
The 'uptime' as presented in these reports is the "time since last reboot" of the front end computer or computers that are hosting a site. We can detect this by looking at the data that we record when we sample a site. We can detect how long the responding computer(s) hosting a web site has been running, and by recording these samples over a long period of time we can plot graphs that show this as a line. Note that this is not the same as the availability of a site.
Unfortunately there really isn't. You can check this for yourself by requesting the HTTP headers from the server in question. For example, from google.com you will get:
HTTP/1.0 200 OK
Cache-Control: private, max-age=0
Date: Mon, 08 Jun 2009 03:08:11 GMT
Expires: -1
Content-Type: text/html; charset=UTF-8
Server: gws
Online tool to check HTTP headers:
http://network-tools.com/default.asp?prog=httphead&host=www.google.com
Now if it's your own server, you can create a script that will report the uptime, but I don't think that's what you were looking for.
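If you go that route, a self-reported uptime endpoint on Linux can be a few lines of Python using only the standard library (the port and output format here are arbitrary):

    # Serve the host's uptime (read from /proc/uptime) as plain text.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class UptimeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            with open("/proc/uptime") as f:
                seconds = float(f.read().split()[0])
            body = ("up %.1f hours\n" % (seconds / 3600)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), UptimeHandler).serve_forever()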
To add to what Pax said, there are a number of third party services that monitor site up-time by attempting to access server resources at specific intervals. They maintain some amount of history in their own databases, and then report back to you when those resources are inaccessible.
I use Uptime Party for monitoring a server.
