I am trying to migrate a Play 2.5 application to 2.6.2. I keep getting a URI-length-exceeds error. Does anyone know how to override this?
I tried the Akka settings below, but still no luck.
play.server.akka {
  http.server.parsing.max-uri-length = infinite
  http.client.parsing.max-uri-length = infinite
  http.host-connection-pool.client.parsing.max-uri-length = infinite
  http.max-uri-length = infinite
  max-uri-length = infinite
}
Simply add
akka.http {
  parsing {
    max-uri-length = 16k
  }
}
to your application.conf. The prefix play.server is only used for a small subset of convenience features for the Akka HTTP integration into the Play framework, e.g. play.server.akka.requestTimeout. Those are documented in the Configuring the Akka HTTP server backend documentation.
I was getting the error because the header length exceeded the default 8 KB (8192 bytes). Adding the following to build.sbt worked for me :D
javaOptions += "-Dakka.http.parsing.max-header-value-length=16k"
You can try something similar for the URI length if the other options don't work.
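For the URI limit, the analogous JVM option would look like this (just a sketch, assuming the same akka.http.parsing settings path; adjust the size to your needs):
javaOptions += "-Dakka.http.parsing.max-uri-length=16k"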
This took me way too long to figure out. It is somehow NOT to be found in the documentation.
Here is a snippet (confirmed working with Play 2.8) to put in your application.conf. It is also configurable via an environment variable and works for BOTH dev and prod mode:
# Dev Mode
play.akka.dev-mode.akka.http.parsing.max-uri-length = 16384
play.akka.dev-mode.akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
# Prod Mode
akka.http.parsing.max-uri-length = 16384
akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
You can then either edit the config or, for an already deployed application, just set PLAY_MAX_URI_LENGTH; the value is configurable without the need to modify command-line arguments.
env PLAY_MAX_URI_LENGTH=16384 sbt run
If anyone is getting this type of error in the Chrome browser when trying to access a site or log in (HTTP header value exceeds the configured limit of 8192 characters):
Go to Chrome Settings -> Security and Privacy -> Site Settings -> View permissions and data stored across sites.
Search for the specific website and, for that site, click Clear all data.
This is my first time posting to Stack Overflow, so I apologize in advance if I am not following certain protocols. I will fix and/or expand my question as needed.
I am trying to add two different InfluxDB sources, hosted on two different servers, to Chronograf/Kapacitor, but I cannot get it working.
Can you connect to 2 different influxdb instances through the UI?
How do you configure kapacitor.conf to read from 2 different influxdb instances?
Through the Chronograf UI I can get either source working correctly but not both at the same time. This seems to be expected through the UI so I must be missing something.
If I set the sources in kapacitor.conf, Chronograf does not read from them. There are also no errors in the Kapacitor logs.
These are the kapacitor.conf influxdb settings that do not work:
[[influxdb]]
enabled = true
default = true
name = "localcluster"
urls = ["http://localhost:8086"]
username = ""
password = ""
timeout = 0
[[influxdb]]
enabled = true
default = false
name = "remoteCluster"
urls = ["http://remotehost:8086"]
username = ""
password = ""
timeout = 0
I have read the documentation and also have the latest TICK stack packages.
I have also searched online and found some references that look like my configuration and are said to work, but they do not seem to work for me.
TICK stack host information:
CentOS Linux release 7.6.1810 (Core)
telegraf-1.9.1-1.x86_64
influxdb-1.7.2-1.x86_64
chronograf-1.7.4-1.x86_64
kapacitor-1.5.1-1.x86_64
Any help would be greatly appreciated.
I got it working but I am not sure if the configuration is recommended:
Add a new InfluxDB connection through the Chronograf web UI.
Do not create another Kapacitor Connection as only one can be active at a time.
In the graph Queries tab, select the new InfluxDB connection from the drop-down list.
Metrics from the alternate InfluxDB instance will appear and can be queried.
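If a Kapacitor task needs to query the second instance, TICKscript batch queries also take a cluster selector; here is a rough sketch, assuming the name matches the name field of the corresponding [[influxdb]] block in kapacitor.conf (the query itself is illustrative):
// batch task sketch; 'remoteCluster' is the name from kapacitor.conf
batch
    |query('SELECT mean(usage_idle) FROM "telegraf"."autogen"."cpu"')
        .cluster('remoteCluster')
        .period(5m)
        .every(1m)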
I have a problem, I think, with my Prosody configuration. When I send files (for example photos) larger than ~2 or 3 megabytes (as I established experimentally) using Conversations 2.* (an Android IM app), the files are transferred over a peer-to-peer connection instead of being uploaded to the server with a link sent to my interlocutor. Small files transfer fine using HTTP upload, and I couldn't find a reason for such behavior.
Here are the http_upload lines from my config, which I took from the official documentation (where I found no setting for turning off peer-to-peer file transfer):
http_upload_file_size_limit = 536870912 -- 512 MB in bytes
http_upload_expire_after = 604800 -- 60 * 60 * 24 * 7
http_upload_quota = 10737418240 -- 10 GB
http_upload_path = "/var/lib/prosody"
And this is my full config: https://pastebin.com/V6DNYrhe
Small files are transferred well using http upload. And I couldn't find a reason for such behavior.
TL;DR: You put options in the wrong place. The default 1MB limit applies. This is advertised to clients so they know about it and can use more efficient p2p transfer methods for very large files.
http_upload_path = "/var/lib/prosody"
This line makes Prosody's data directory public, allowing anyone easy access to all user data. You really don't want to do that. You are lucky you did not put it in the correct section.
And this is my full config: https://pastebin.com/V6DNYrhe
"http_upload" is in the global modules_enabled list which will load
it onto all VirtualHost(s).
You have added options to the end of the config file, putting them under
a Component section. That makes those options only apply to that
Component.
Thus, the VirtualHost where mod_http_upload is loaded sees no options
set and will use the defaults.
http_upload_file_size_limit = 536870912 -- 512 MB in bytes
Don't do this. Prosody's built-in HTTP server is not optimized for very large uploads. There is a safety limit on HTTP request size that will cap the HTTP upload size limit at 10M to prevent DoS attacks.
While that limit can be changed, I would strongly suggest you look at https://modules.prosody.im/mod_http_upload_external.html instead.
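For reference, a minimal sketch of where the mod_http_upload options would have to live so the VirtualHost actually sees them (the host name and the sizes here are illustrative, not taken from your config):
-- Global section, before any VirtualHost or Component,
-- assuming "http_upload" stays in the global modules_enabled list
http_upload_file_size_limit = 10485760 -- 10 MB, within the built-in HTTP request cap
http_upload_expire_after = 604800 -- one week

VirtualHost "example.com"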
I have this code (on-the-fly compression and streaming):
@cherrypy.expose
def backup(self):
    path = '/var/www/httpdocs'
    zip_filename = "backup" + t.strftime("%d_%m_%Y_") + ".zip"
    cherrypy.response.headers['Content-Type'] = 'application/zip'
    cherrypy.response.headers['Content-Disposition'] = 'attachment; filename="%s"' % (zip_filename,)
    # https://github.com/gourneau/SpiderOak-zipstream/blob/3463c5ccb5d4a53fc5b2bdff849f25bae9ead761/zipstream.py
    return ZipStream(path)
backup._cp_config = {'response.stream': True}
The problem I face is that while I'm downloading the file, I can't browse any other page or send any other request until the download is done...
I think the problem is that CherryPy can't serve more than one request at a time per user.
Any suggestions?
When you say "per user", do you mean that another request could come in for a different "session" and it would be allowed to continue?
In that case, your issue is almost certainly due to session locking in CherryPy. You can read more about it in the session code. Since the sessions are unlocked late by default, the session is not available for use by other threads (connections) while the backup is still being processed.
Try setting tools.sessions.locking = 'explicit' in the _cp_config for that handler. Since you’re not writing anything to the session, it’s probably safe not to lock at all.
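As a sketch (reusing the backup handler from the question), that would look something like this:
# explicit locking means the handler never auto-acquires the session lock,
# so other requests from the same session aren't blocked during the backup
backup._cp_config = {
    'response.stream': True,
    'tools.sessions.locking': 'explicit',
}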
Good luck. Hope that helps.
Also, from the FAQ:
"CherryPy certainly can handle multiple connections. It’s usually your browser that is the culprit. Firefox, for example, will only open two connections at a time to the same host (and if one of those is for the favicon.ico, then you’re down to one). Try increasing the number of concurrent connections your browser makes, or test your site with a tool that isn’t a browser, like siege, Apache’s ab, or even curl."
I have a problem with changing the standard options used by an Axis 1.4 generated web service client code.
We consume a certain web service of a partner who is using the old RPC/Encoded style, which basically means we're not able to go for Axis 2 but are limited to Axis 1.4.
The service client is retrieving data from the remote server through our proxy which actually runs quite nicely.
Our application is deployed as a servlet. The retrieved response of the foreign web service is inserted into a (XML) document we provide to our internal systems/CMS.
But if the external service is not responding - which hasn't happened yet but might happen at any time - we want to degrade nicely and return our produced XML document without the calculated web service information within a reasonable time.
The data retrieved is optional (if this specific calculation is missing it isn't a big issue at all).
So I tried to change the timeout settings. I applied/used every method and key I could find, in the Axis documentation and by searching the web, to alter the connection and socket timeouts.
None of these seems to influence the connection timeouts.
Can anyone give me advice on how to alter these settings for an Axis stub/service/port based on version 1.4?
Here's an example for the several configurations I tried:
MyService service = new MyServiceLocator();
MyServicePort port = null;
try {
    port = service.getMyServicePort();

    javax.xml.rpc.Stub stub = (javax.xml.rpc.Stub) port;
    stub._setProperty("axis.connection.timeout", 10);
    stub._setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, 10);
    stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, 10);
    stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, 10);

    AxisProperties.setProperty("axis.connection.timeout", "10");
    AxisProperties.setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, "10");
    AxisProperties.setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, "10");
    AxisProperties.setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, "10");

    logger.error(AxisProperties.getProperties());

    service = new MyClimateServiceLocator();
    port = service.getMyServicePort();
}
I assigned the property changes both before and after the creation of the service, I set the properties during initialisation, I tried several other timeout keys I found, ...
I think I'm getting mad about that and start to forget what I tried already!
What am I doing wrong? I mean there must be an option, mustn't it?
If I don't find a proper solution, I have thought about setting up a synchronized thread with a timeout within our code, which actually feels quite awkward and somehow silly; see the sketch below.
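For illustration, this is roughly what I mean (a hypothetical sketch; MyResult, request, and callWebService() stand in for the actual generated types and stub call):
import java.util.concurrent.*;

// Bound the Axis call with an application-level timeout.
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<MyResult> future = executor.submit(new Callable<MyResult>() {
    public MyResult call() throws Exception {
        return port.callWebService(request);
    }
});
try {
    MyResult result = future.get(10, TimeUnit.SECONDS); // give up after 10 seconds
    // ... merge result into the outgoing XML document ...
} catch (TimeoutException e) {
    future.cancel(true); // degrade gracefully: continue without the optional data
} catch (Exception e) {
    // InterruptedException / ExecutionException: also proceed without the data
} finally {
    executor.shutdown();
}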
Can you imagine anything else?
Thanks in advance
Jens
I think it may be a bug, as indicated here:
https://issues.apache.org/jira/browse/AXIS-2493?jql=text%20~%20%22CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY%22
Typecast the service port object to org.apache.axis.client.Stub, i.e.:
org.apache.axis.client.Stub stub = (org.apache.axis.client.Stub) port;
Then set all the properties:
stub._setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, 10);
stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, 10);
stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, 10);
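If I remember correctly, the Axis client stub also exposes a plain per-call timeout setter, which may be worth trying as well (a sketch; the value is in milliseconds):
stub.setTimeout(10000); // overall timeout for the call, in milliseconds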
I'm trying to upload a file larger than 2GB to a local PHP 5.3.4 server. I've set the following server variables:
memory_limit = -1
post_max_size = 9G
upload_max_filesize = 5G
However, in the error_log I found:
PHP Warning: POST Content-Length of 2120909412 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
Can anyone tell me why this keeps failing please?
I had a similar problem, but my config was:
post_max_size = 1.8G
upload_max_filesize = 1.8G
and yet I could not upload a 1.2GB file. The error was the very same:
PHP Warning: POST Content-Length of 1347484420 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
I spent a day wondering where the heck was this "limit of 1073741824" coming from!
Solution:
Actually, the error was in the php.ini parser: It only understands INTEGER numbers, so essentially it was parsing 1.8G as 1G !!
Changing the value to e.g. 1800M fixed it.
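So, for example, the working equivalent of the config above, using whole-number shorthand, would be:
post_max_size = 1800M
upload_max_filesize = 1800M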
Please make sure to restart the Apache server afterwards with the following command: service apache2 restart
I don't know about 5.3.x, but in 5.2.x there are some int/long issues in the PHP code. Even if you're on a 64-bit system and have a version of PHP compiled as 64-bit, there are several problems.
First, the code that converts post_max_size and others from ASCII to integer stores the value in an int, so converting "9G" and putting the result into this int will bork the value, because 9G is a larger number than a 32-bit variable can hold.
But there are also several other areas of the PHP code used with the Apache module, CGI, etc. that need to be changed from int to long.
So... for this to work, you need to edit the PHP code and compile it by hand (make sure you compile it as 64-bit). Here's a link to a list of diffs:
http://www.archive.org/~tracey/downloads/patches/karmic-64bit-post-large-files.patch
Referenced from this php bug post: http://bugs.php.net/bug.php?id=44522
The file above is a diff against the 5.2.10 code, but I just made the changes by hand to the 5.2.17 code and uploaded a 3.4 GB single file through Apache/PHP (which hadn't worked before the change).
Hope that helps.
I figured out how to use HTTP and PHP to upload a 10 GB file.
php.ini:
post_max_size = 0
upload_max_filesize = 0
It works in PHP 5.3.10.
If you do not load the whole file into memory, memory_limit is irrelevant.
Maybe this can come from apache limitations on POST size:
http://httpd.apache.org/docs/current/mod/core.html#limitrequestbody
It seems this 2 GB limitation can be higher on 64-bit installations, maybe. And I'm not sure that setting 0 in this directive avoids hitting the compile-time limit; see for example this thread:
http://ubuntuforums.org/archive/index.php/t-1385890.html
Then do not forget to alter as well the max_input_time in PHP.
But you are reaching high limits :-) Maybe you could try a rich client (Flash? JS?) on the browser side, doing the transfer in chunks or some sort of FTP thing, with progress indicators for the user.
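For reference, the Apache directive in question would be set roughly like this (a sketch; place it in the server or virtual host config as appropriate):
# 0 means no Apache-imposed limit on the request body size
LimitRequestBody 0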
As phliKtid mentioned, this is a limitation with the PHP framework. Save for editing the source code as mentioned in the bug report phliKtid linked, there is a workaround that involves setting the upload_max_filesize to 0 in the php.ini file.
; Maximum allowed size for uploaded files.
; http://php.net/upload-max-filesize
upload_max_filesize = 0
By doing this, PHP will not crash when trying to convert "5G" into a 32-bit integer and you will be able to upload files as big as you allow with the "post_max_size" variable.
We've had the same problem: uploads stopped at 2GB.
Under SLES (SUSE Linux Enterprise Server) 11 SP 2, php53 was the problem.
Then we added a new repository that has php54:
http://download.opensuse.org/repositories/server:/php/SLE_11_SP2/
and after upgrading to it, we can now upload 5 GB files :-)
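On SLES that roughly amounts to the following (a sketch; the repository alias and the exact php54 package name are assumptions, not confirmed):
zypper addrepo http://download.opensuse.org/repositories/server:/php/SLE_11_SP2/ server-php
zypper refresh
zypper install php54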