How to select the preferred file transfer method? - lua

I think I have a problem with my Prosody configuration. When I send files (for example, photos) larger than ~2 or 3 megabytes (as I established experimentally) using Conversations 2.* (an Android IM app), it transfers the files over a peer-to-peer connection instead of uploading them to the server and sending a link to my interlocutor. Small files transfer fine using HTTP upload, and I couldn't find a reason for this behavior.
Here are the lines for the http_upload module from my config, which I took from the official documentation (where I didn't find a setting for turning off peer-to-peer file transfer):
http_upload_file_size_limit = 536870912 -- 512 MB in bytes
http_upload_expire_after = 604800 -- 60 * 60 * 24 * 7
http_upload_quota = 10737418240 -- 10 GB
http_upload_path = "/var/lib/prosody"
And this is my full config: https://pastebin.com/V6DNYrhe

Small files transfer fine using HTTP upload. And I couldn't find a reason for this behavior.
TL;DR: You put options in the wrong place. The default 1MB limit
applies. This is advertised to clients so they know about it and can use
more efficient p2p transfer methods for very large files.
http_upload_path = "/var/lib/prosody"
This line makes Prosody's data directory public, allowing anyone easy access to all user data. You really don't want to do that. You are lucky you did not put that in the correct section.
And this is my full config: https://pastebin.com/V6DNYrhe
"http_upload" is in the global modules_enabled list which will load
it onto all VirtualHost(s).
You have added options to the end of the config file, putting them under
a Component section. That makes those options only apply to that
Component.
Thus, the VirtualHost where mod_http_upload is loaded sees no options
set and will use the defaults.
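To illustrate (the hostnames below are placeholders, not taken from your pastebin), options for a module loaded on a VirtualHost belong in the global section at the top of the config, or under that VirtualHost, not under a Component:
-- global section: applies to all hosts, including the VirtualHost
-- that loads mod_http_upload via the global modules_enabled list
http_upload_expire_after = 604800 -- 60 * 60 * 24 * 7
http_upload_quota = 10737418240 -- 10 GB

VirtualHost "example.com" -- placeholder hostname
    -- or put host-specific options here

Component "conference.example.com" "muc" -- placeholder component
    -- options placed down here apply only to this Component,
    -- which is where your upload settings ended up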
http_upload_file_size_limit = 536870912 -- 512 MB in bytes
Don't do this. Prosody's built-in HTTP server is not optimized for very large uploads. There is a safety limit on HTTP request size that caps the HTTP upload size at 10M to prevent DoS attacks.
While that limit can be changed, I would strongly suggest you look at
https://modules.prosody.im/mod_http_upload_external.html instead.
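If you really do want to raise that cap instead, I believe (this is an assumption on my part; check the docs for your Prosody version) the relevant option is http_max_content_size of the built-in HTTP server, again set globally:
-- assumption: http_max_content_size is the request-body safety cap (default ~10 MB)
http_max_content_size = 104857600 -- 100 MB
http_upload_file_size_limit = 104857600 -- keep the advertised upload limit within that cap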

Related

How to cache based on size in Varnish?

I've been trying to cache based on the response size in Varnish.
Other answers suggested using Content-Length to decide whether or not to cache, but I'm using InfluxDB (Varnish reverse-proxies to it) and it responds with Transfer-Encoding: chunked, which omits the Content-Length header, so I am not able to figure out the size of the response.
Is there any way I could access the response body size and make a decision in vcl_backend_response?
Cache miss: chunked transfer encoding
When Varnish processes incoming chunks from the origin, it has no idea ahead of time how much data will be received. Varnish streams the data through to the client and stores it byte by byte.
Once the 0\r\n\r\n is received to mark the end of the stream, Varnish will finalize the object storage and calculate the total amount of bytes.
Cache hit: content length
The next time the object is requested, Varnish no longer needs to use Chunked Transfer Encoding, because it has the full object in cache and knows the size. At that point a Content-Length header is part of the response, but this header is not accessible in VCL because it seems to be generated after sub vcl_deliver {} is executed.
Remove objects after the fact
It is possible to remove objects after the fact by monitoring their size through VSL.
The following command looks at the backend request accounting field of the VSL output and checks the total size. If the size is greater than 5 MB, it generates output:
varnishlog -g request -i berequrl -q "BereqAcct[5] > 5242880"
Here's some potential output:
* << Request >> 98330
** << BeReq >> 98331
-- BereqURL /
At that point, you know that the / resource is bigger than 5 MB. You can then attempt to remove it from the cache using the following command:
varnishadm ban "obj.http.x-url == / && obj.http.x-host == domain.com"
Replace domain.com with the actual hostname of your service and set / to the URL of the actual endpoint you're trying to remove from the cache.
Don't forget to add the following code to your VCL file to ensure that the x-url and x-host headers are available:
sub vcl_backend_response {
    set beresp.http.x-url = bereq.url;
    set beresp.http.x-host = bereq.http.host;
}

sub vcl_deliver {
    unset resp.http.x-url;
    unset resp.http.x-host;
}
Conclusion
Although there's no turn-key solution to access the size of the body in VCL, the hacky solution I suggested, where we remove objects after the fact, is the only thing I can think of.

Play 2.6, URI length exceeds the configured limit of 2048 characters

I am trying to migrate a Play 2.5 application to 2.6.2. I keep getting the "URI length exceeds" error. Does anyone know how to override this?
I tried the Akka settings below, but still no luck.
play.server.akka {
  http.server.parsing.max-uri-length = infinite
  http.client.parsing.max-uri-length = infinite
  http.host-connection-pool.client.parsing.max-uri-length = infinite
  http.max-uri-length = infinite
  max-uri-length = infinite
}
Simply add
akka.http {
  parsing {
    max-uri-length = 16k
  }
}
to your application.conf. The prefix play.server is only used for a small subset of convenience features for the Akka HTTP integration into the Play framework, e.g. play.server.akka.requestTimeout. Those are documented in the "Configuring the Akka HTTP server backend" documentation.
I was getting the error because the header length exceeded the default 8 KB (8192). Adding the following to build.sbt worked for me :D
javaOptions += "-Dakka.http.parsing.max-header-value-length=16k"
You can try the same for the URI length if the other options don't work.
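For instance (an untested sketch, reusing the akka.http.parsing.max-uri-length key shown in the application.conf answer above):
javaOptions += "-Dakka.http.parsing.max-uri-length=16k"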
This took me way too long to figure out. It is somehow NOT to be found in the documentation.
Here is a snippet (confirmed working with Play 2.8) to put in your application.conf. It is also configurable via an environment variable and works for BOTH dev and prod mode:
# Dev Mode
play.akka.dev-mode.akka.http.parsing.max-uri-length = 16384
play.akka.dev-mode.akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
# Prod Mode
akka.http.parsing.max-uri-length = 16384
akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
You can then edit the config, or, for an already deployed application, just set PLAY_MAX_URI_LENGTH; it is configurable dynamically without the need to modify command-line arguments.
env PLAY_MAX_URI_LENGTH=16384 sbt run
If anyone is getting this type of error in the Chrome browser when trying to access a site or log in (HTTP header value exceeds the configured limit of 8192 characters):
Go to Chrome Settings -> Security and Privacy -> Site Settings -> View permissions and data stored across sites.
Search for the specific website and, for that site, do "Clear all data".

How to log uWSGI vassal metrics in emperor mode?

We're using uWSGI in Emperor mode. We want to be able to track the default (non-custom) metrics like worker.0.requests, and we're trying to use the metrics-dir configuration parameter in the vassals' ini files. For example:
enable-metrics = true
metrics-dir = /tmp/pametrics
Files are being written to the directory we specify, and their timestamps are being updated each time we hit the app being served by the vassal, but they are all 4096 bytes long and full of zero bytes; they are not text files as the documentation says.
What are we missing?
They are memory-mapped files, so their size is the same as a memory page.
Being 0-terminated, they can be managed with the classic Unix utilities, for example:
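A minimal sketch, assuming the metrics-dir from the question and the worker.0.requests metric name mentioned above; the counter is stored as ASCII digits padded with NUL bytes up to the page size:
# strip the NUL padding to print the counter value (path and file name assumed from the question)
tr -d '\0' < /tmp/pametrics/worker.0.requests; echo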

How much data is too much for the Session on Rails 3?

I want to load a text file into the session.
The file size is about 50KB ~ 100KB.
When a user triggers the function on my page, it will create the session.
My server's RAM is about 8GB, and the maximum number of users is about 100.
There is a script running in the background to collect IPs and MACs on the LAN, and it continuously writes data to a text file.
At the same time, the web page uses Ajax to fetch fresh data from the text file and display it on the page.
Is it suitable to use the session to keep the result, or is there a better way to achieve this?
Thanks ~
The Python script collects the data on the LAN over 1 ~ 3 minutes (a background job).
To avoid blocking for those 1~3 minutes, I will use Ajax to fetch the data from the text file (which the Python script keeps appending to) and show it on the page.
And my users should carry the information across pages, so I want to store the data in the session.
00:02:D1:19:AA:50: 172.19.13.39
00:02:D1:13:E8:10: 172.19.12.40
00:02:D1:13:EB:06: 172.19.1.83
C8:9C:DC:6F:41:CD: 172.19.12.73
C8:9C:DC:A4:FC:07: 172.19.12.21
00:02:D1:19:9B:72: 172.19.13.130
00:02:D1:13:EB:04: 172.19.13.40
00:02:D1:15:E1:58: 172.19.12.37
00:02:D1:22:7A:4D: 172.19.11.84
00:02:D1:24:E7:0F: 172.19.1.79
00:FD:83:71:00:10: 172.19.11.45
00:02:D1:24:E7:0D: 172.19.1.77
00:02:D1:81:00:02: 172.19.11.58
00:02:D1:24:36:35: 172.19.11.226
00:02:D1:1E:18:CA: 172.19.12.45
00:02:D1:0D:C5:A8: 172.19.1.45
74:27:EA:29:80:3E: 172.19.12.62
Why does this need to be stored in the browser? Couldn't you fire off what you're collecting to a data store somewhere?
Anyway, assuming you HAD to do this, and the example you gave is pretty close to the data you'll actually be seeing, you have a lot of redundant data there. You could save space on the IPs by creating a hash pointing to each successive value, e.g.
{172 => {19 => {13 => [39], 12 => [40, 73, 21], 1 => [83]}}} ...etc. Similarly for the MAC addresses. But again, you can probably simplify this problem a LOT by storing the info you need somewhere other than the session.
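A rough Ruby sketch of that idea (the file path is hypothetical, and this is just one way to collapse the repeated leading octets):
ips = {}
File.foreach("/tmp/lan_scan.txt") do |line|  # hypothetical path to the scan output
  _mac, ip = line.strip.split(": ")          # e.g. "00:02:D1:19:AA:50: 172.19.13.39"
  next unless ip
  a, b, c, d = ip.split(".").map(&:to_i)
  ((ips[a] ||= {})[b] ||= {})[c] ||= []      # build the nested hash lazily
  ips[a][b][c] << d
end
# ips now looks like {172=>{19=>{13=>[39, 130, 40], 12=>[40, 73, 21, ...], ...}}}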

Uploading a file larger than 2GB using PHP

I'm trying to upload a file larger than 2GB to a local PHP 5.3.4 server. I've set the following server variables:
memory_limit = -1
post_max_size = 9G
upload_max_filesize = 5G
However, in the error_log I found:
PHP Warning: POST Content-Length of 2120909412 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
Can anyone tell me why this keeps failing please?
I had a similar problem, but my config was:
post_max_size = 1.8G
upload_max_filesize = 1.8G
and yet I could not upload a 1.2GB file. The error was the very same:
PHP Warning: POST Content-Length of 1347484420 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
I spent a day wondering where the heck this "limit of 1073741824" was coming from!
Solution:
Actually, the error was in the php.ini parser: it only understands INTEGER numbers, so essentially it was parsing 1.8G as 1G!
Changing the value to e.g. 1800M fixed it.
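In other words, a sketch of the fix using the values from above:
; php.ini - the ini parser truncates "1.8G" to 1G, so spell the size out in whole units
post_max_size = 1800M
upload_max_filesize = 1800M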
Please ensure you restart the Apache server afterwards with the command: service apache2 restart
I don't know about 5.3.x, but in 5.2.x there are some int/long issues in the PHP code. Even if you're on a 64-bit system and have a version of PHP compiled as 64-bit, there are several problems.
First, the code that converts post_max_size and the others from ASCII to integer stores the value in an int, so converting "9G" and putting the result into this int will bork the value, because 9G is a larger number than a 32-bit variable can hold.
But there are also several other areas of PHP code that are used with the Apache module, CGI, etc. that need to be changed from int to long.
So... for this to work, you need to edit the PHP code and compile it by hand (make sure you compile it as 64-bit). Here's a link to a list of diffs:
http://www.archive.org/~tracey/downloads/patches/karmic-64bit-post-large-files.patch
Referenced from this php bug post: http://bugs.php.net/bug.php?id=44522
The file above is a diff against 5.2.10 code, but I just made the changes by hand to 5.2.17 code and uploaded a 3.4GB single file through Apache/PHP (which hadn't worked before the change).
Hope that helps.
I figured out how to use HTTP and PHP to upload a 10GB file.
php.ini:
post_max_size = 0
upload_max_filesize = 0
It works in PHP 5.3.10.
If you do not load the whole file into memory, memory_limit is irrelevant.
Maybe this comes from Apache's limitation on POST size:
http://httpd.apache.org/docs/current/mod/core.html#limitrequestbody
It seems this 2GB limitation can be higher on 64-bit installations, maybe. And I'm not sure that setting 0 in this directive doesn't run into the compiled-in limit; see for example this thread:
http://ubuntuforums.org/archive/index.php/t-1385890.html
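For reference, a hedged example of that directive (0 disables Apache's own cap on request bodies; the directory path is just a placeholder):
# httpd.conf / vhost config
<Directory "/var/www/uploads">
    LimitRequestBody 0
</Directory>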
Then do not forget to also raise max_input_time in PHP.
But you are reaching high limits :-) Maybe you could try a rich client (Flash? JS?) on the browser side, doing the transfer in chunks or some sort of FTP thing, with progress indicators for the user.
As phliKtid mentioned, this is a limitation with the PHP framework. Save for editing the source code as mentioned in the bug report phliKtid linked, there is a workaround that involves setting the upload_max_filesize to 0 in the php.ini file.
; Maximum allowed size for uploaded files.
; http://php.net/upload-max-filesize
upload_max_filesize = 0
By doing this, PHP will not crash when trying to convert "5G" into a 32-bit integer and you will be able to upload files as big as you allow with the "post_max_size" variable.
We've had the same problem: uploads stopped at 2GB.
Under SLES (SUSE Linux Enterprise Server) 11 SP 2, php53 was the problem.
Then we added a new repository that has php54:
http://download.opensuse.org/repositories/server:/php/SLE_11_SP2/
and upgraded to that; we can now upload 5GB :-)
