I'm trying to send a rather large POST request to PHP, and when I var_dump the $_POST array, one variable, the largest one, is missing. (It's actually a base64-encoded binary upload sent as part of the POST request.)
The funny thing is that on my development PC the exact same request is parsed correctly, with no missing variables.
I checked the contents of php://input on the server and on the development PC and they are exactly the same; the md5 sums match. Yet the development PC recognizes all the variables, and the server misses one.
I tried changing many different options in php.ini, with zero effect.
Maybe someone can point me to the right one.
Here is my php://input (~5 megabytes): http://www.mediafire.com/?lp0uox53vhr35df
It's possible the server is blocking it because of the Suhosin extension.
http://www.hardened-php.net/suhosin/configuration.html#suhosin.post.max_value_length
suhosin.post.max_value_length
Type: Integer. Default: 65000.
Defines the maximum length of a variable that is registered through a POST request.
This will have to be changed in php.ini.
Keep in mind that this is different from the Suhosin patch, which is common on a lot of shared hosts. I don't know whether the patch alone would cause this problem.
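If Suhosin really is the limit you're hitting, here is a minimal sketch of the php.ini change; the exact numbers are just examples, so pick values larger than your biggest POST variable and restart the web server afterwards:
; php.ini (or the separate suhosin.ini, depending on the distribution)
; ~5 MB of base64 is far above the 65000-byte default
suhosin.post.max_value_length = 10000000
suhosin.request.max_value_length = 10000000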
I am a newbie with CMake and am trying to understand the following CMake command:
FetchContent_Declare(curl
URL https://github.com/curl/curl/releases/download/curl-7_75_0/curl-7.75.0.tar.xz
URL_HASH SHA256=fe0c49d8468249000bda75bcfdf9e30ff7e9a86d35f1a21f428d79c389d55675
USES_TERMINAL_DOWNLOAD TRUE)
When I open a browser and enter https://github.com/curl/curl/releases/download/curl-7_75_0/curl-7.75.0.tar.xz, the file curl-7.75.0.tar.xz starts downloading without any need for the URL_HASH. I am sure it is not redundant, so I wanted to know: what is the purpose of the URL_HASH?
Also, how can the SHA256 be found? When I visit https://github.com/curl/curl/releases/download/curl-7_75_0 to find out more, the link is broken.
I am sure it is not redundant. I wanted to know what the purpose of the URL_HASH is?
Secure hash functions like SHA256 are designed to be one-way; it is (in practice) impossible to craft a malicious version of a file with the same SHA256 hash as the original. It is even impossible to find two arbitrary files that have the same hash. Such a pair is called a "collision" and finding even one would constitute a major breakthrough in cryptanalysis.
The purpose of this hash in a CMakeLists.txt, then, is as an integrity check. If a bad actor has intercepted your connection somehow, then checking the hash of the file you actually downloaded against this hard-coded expected hash will detect whether or not the file changed in transit. This will even catch less nefarious data corruptions, like those caused by a faulty hard drive.
Including such a hash (a "checksum") is absolutely necessary when downloading code or other binary artifacts.
Also how can SHA256 be found?
Often, these will be published alongside the binaries. Use a published value if available.
If you have to compute it yourself, you have a few options. On the Linux command line, you can use the sha256sum command. As a hack, you can write a deliberately wrong value such as SHA256=0 and fish the actual hash out of CMake's error message.
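For example (a sketch, assuming the tarball has already been downloaded locally):
sha256sum curl-7.75.0.tar.xz              # GNU coreutils on Linux
cmake -E sha256sum curl-7.75.0.tar.xz     # portable alternative, needs CMake 3.10+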
Note that if you compute the hash yourself, you should either (a) download the file from an absolutely trusted connection and device or (b) download it from multiple independent devices (free CI systems like GitHub Actions are useful for this) and ensure the hash is the same across all of them.
When I have a route that sends data to two different file component endpoints, where for one endpoint I don't really care about the encoding but for the other endpoint I need to ensure a certain encoding, should I still set the charsetName on both endpoints?
I'm asking because a client of ours had a problem in that area. The route receives UTF-8 and we need to write iso-8859-1 to the file.
And now, after the whole hardware was restarted (after a power outage), we found things like "??" in the file instead of the expected "ä".
By specifying the charsetName on all file producer endpoints, we were able to solve the issue.
My actual question now is:
Do you think I can now expect the problem to be solved for good?
Or should there be no relation, and would I be well advised not to lean back until I understand the issue 100%?
Notes that might be helpful:
In addition, before writing to either of those two file endpoints, we also do .convertBodyTo(byte[].class, "iso-8859-1")
We use Camel 2.16.1
In the end, the problem was not about having two file endpoints in one pipeline.
It was about the JVM's default encoding, as described here:
http://camel.465427.n5.nabble.com/Q-about-how-to-help-the-file2-component-to-do-a-better-job-td5783381.html
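For reference, a minimal sketch of a route with the charset pinned at every step so nothing falls back to the JVM default (file.encoding); the endpoint URIs are illustrative, not the real ones:
import org.apache.camel.builder.RouteBuilder;

public class ExportRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:in")                                  // illustrative source endpoint
            // convert explicitly so the platform default encoding is never used
            .convertBodyTo(byte[].class, "iso-8859-1")
            .to("file://data/archive")                     // encoding does not matter here
            .to("file://data/export?charset=iso-8859-1");  // encoding pinned here
    }
}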
I have a query that gets a list of users from a table, sorted by the time they were created. I got the following timing diagram from the Chrome developer tools.
You can see that the TTFB (time to first byte) is too high.
I am not sure whether it is because of the SQL sort. If that is the reason, how can I reduce this time?
Or is it just the TTFB itself? I saw blogs which say that TTFB should be low (< 1 sec), but for me it shows > 1 sec. Is it because of my query or something else?
I am not sure how I can reduce this time.
I am using Angular. Should I sort the table in Angular instead of using the SQL sort? (Many posts say that shouldn't be the issue.)
What I want to know is how I can reduce the TTFB. I am actually new to this; it is a task given to me by my team members. I saw many posts, but was not able to understand them properly. What exactly is TTFB? Is it the time taken by the server?
The TTFB is not the time to the first byte of the body of the response (i.e., the useful data, such as JSON or XML), but rather the time to the first byte of the response received from the server. That byte is the start of the response headers.
For example, if the server sends the headers before doing the hard work (like a heavy SQL query), you will get a very low TTFB, but it won't reflect the real work being done.
In your case, TTFB represents the time you spend processing data on the server.
To reduce the TTFB, you need to do the server-side work faster.
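For example, if the slow part really is the SQL sort, an index on the sort column usually helps. A sketch only, since the real table and column names are not in the question:
-- let the database return rows already in order instead of sorting the
-- whole table on every request
CREATE INDEX idx_users_created_at ON users (created_at);

SELECT id, name, created_at
FROM users
ORDER BY created_at DESC
LIMIT 50;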
I have met the same problem. My project runs on a local server. I checked my PHP code:
$db = mysqli_connect('localhost', 'root', 'root', 'smart');
I use localhost to connect to my local database. That may be the cause of the problem you're describing. You can modify your HOSTS file and add the line
127.0.0.1 localhost
TTFB is driven by what happens behind the scenes, and your browser knows nothing about what happens back there.
You need to look into what queries are being run and how the website connects to the server.
This article might help understand TTFB, but otherwise you need to dig deeper into your application.
If you are using PHP, try calling <?php flush(); ?> after </head> and before </body>, or after whatever section you want to send out quickly (like the header or the main content). It outputs the markup generated so far without waiting for PHP to finish. Don't flush all the time, though, or the speed increase won't be noticeable.
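A rough sketch of the idea; slow_query() is just a placeholder for whatever heavy work the page actually does:
<html>
<head>
    <title>User list</title>
</head>
<?php flush(); // the headers and the <head> section leave the server immediately ?>
<body>
<?php
    // the heavy part of the page runs after the first bytes are already
    // on their way to the browser
    $rows = slow_query(); // placeholder for the real data access
    foreach ($rows as $row) {
        echo '<div>' . htmlspecialchars($row['name']) . '</div>';
    }
?>
</body>
</html>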
I would suggest you read this article and focus more on how to optimize the overall response to the user's request (whether it's a page, a search result, etc.).
A good argument for this is the example they give about using gzip to compress the page: even though TTFB is lower when you do not compress, the overall experience for the user is worse because it takes longer to download content that is not zipped.
I'm using OpenResty with lua-resty libraries, and obviously each request has its own variables. To share simple strings or configuration values across requests I currently use lua-shared-dict.
But if I need to share and maintain a big variable (e.g. a complex table built by parsing a large INI file) across requests (the variable is rebuilt every hour, for example, to improve performance), how can I do it?
(As another example, imagine translating this into Lua: https://github.com/dangrossman/node-browscap/blob/master/browscap.js; how can I maintain the browser[] array across multiple OpenResty HTTP requests without having to re-parse it for each request?)
how can I maintain the browser[] array across multiple OpenResty HTTP requests, without having to re-parse it for each request?
I assume you mean "across multiple OpenResty workers" or "across requests that may hit different workers", since all the requests that hit the same worker can already access the same variables; if so, you probably can't share the table directly. Since you only seem to need to read that browser[] value (you are just parsing a large INI file), you can try a hybrid approach (sketched below):
Store the result of parsing in a serialized form in one of the lua-shared-dict values (let's say iniFile).
When a request comes in, check whether the iniFile variable in that worker is nil; if it is, read the iniFile value from the lua-shared-dict, deserialize it, and store it as the value of the iniFile variable that is shared by all the code run by the same worker.
If you need to refresh it after one hour to keep it up to date, store the time at which the value was retrieved from the dictionary, and add a check to step 2 to re-retrieve it when that time exceeds your limit.
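A minimal sketch of those three steps; the dictionary name, the iniFile key, and the parse_ini helper are all assumptions, not part of the question:
-- nginx.conf is assumed to contain: lua_shared_dict config 10m;
local cjson = require "cjson"

local shared = ngx.shared.config
local cache                -- per-worker copy of the parsed table
local cache_loaded_at = 0
local TTL = 3600           -- refresh after one hour

local function get_browscap()
    local now = ngx.time()
    if cache and (now - cache_loaded_at) < TTL then
        return cache       -- same worker, still fresh: no work at all
    end

    local serialized = shared:get("iniFile")
    if not serialized then
        -- step 1: parse the INI file once and publish the serialized result
        serialized = cjson.encode(parse_ini("/path/to/browscap.ini")) -- parse_ini is hypothetical
        shared:set("iniFile", serialized, TTL)
    end

    -- steps 2 and 3: deserialize into this worker and remember when we did it
    cache = cjson.decode(serialized)
    cache_loaded_at = now
    return cache
end

return get_browscap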
So, we have a very large and complex website that requires a lot of state information to be placed in the URL. Most of the time, this is just peachy and the app works well. However, there are (an increasing number of) instances where the URL gets really, really long. This causes huge problems in IE because of its URL length restriction.
I'm wondering, what strategies/methods have people used to reduce the length of their URLs? Specifically, I'd just need to reduce certain parameters in the URL, maybe not the entire thing.
In the past, we've pushed some of this state data into the session... however, this decreases addressability in our application (which is really important). So any strategy that can maintain addressability would be favored.
Thanks!
Edit: To answer some questions and clarify a little: most of our parameters aren't an issue... however, some of them are dynamically generated with the possibility of being very long. These parameters can contain anything legal in a URL (meaning they aren't just numbers or just letters; they could be anything). Case sensitivity may or may not matter.
Also, ideally we could convert these to POST requests; however, due to the immense architectural changes required for that, I don't think it is really possible.
If you don't want to store that data in the session scope, you can:
Send the data as a POST parameter (in a hidden field), so the data is sent in the HTTP request body instead of the URL (see the sketch after this list)
Store the data in a database and pass a key (that gives you access to the corresponding database record) back and forth, which opens up some scalability and perhaps security issues. I suppose this approach is similar to using the session scope.
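A sketch of the first option; the field and action names are illustrative. The long state travels in the request body, so the URL stays short:
<form method="post" action="/search/results">
  <input type="hidden" name="state" value="...long serialized state...">
  <button type="submit">Next page</button>
</form>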
most of our parameters aren't an issue... however some of them are dynamically generated with the possibility of being very long
I don't see a way to get around this if you want to keep full state info in the URL without resorting to storing data in the session, or permanently on server side.
You might save a few bytes using some compression algorithm, but it will make the URLs unreadable, most algorithms are not very efficient on small strings, and compressing does not produce predictable results.
The only other ideas that come to mind are:
Shortening parameter names (query => q, page => p, ...) might save a few bytes.
If the parameter order is very static, using mod_rewrite-style directory structures like /url/param1/param2/param3 may save a few bytes because you don't need to spell out parameter names.
Whatever data is repetitive and can be "shortened" into numeric IDs or shorter identifiers (like place names of company branches, product names, ...), keep in an internal, global, permanent lookup table (London => 1, Paris => 2, ...).
Other than that, I think storing the data on the server side, identified by a random key as @Guido already suggests, is the only real way. The upside is that you have no size limit at all: a URL like
example.com/?key=A23H7230sJFC
can "contain" as much information on the server side as you want.
The downside, of course, is that for these URLs to work reliably, you'll have to keep the data on your server indefinitely. It's like running your own little URL-shortening service... Whether that is an attractive option will depend on the overall situation.
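A minimal sketch of that idea, assuming a PHP backend and a url_state table; both are assumptions, since the question doesn't say what the stack is:
<?php
// store the long state server-side and hand back a short random key for the URL
function store_state(PDO $db, array $state): string {
    $key = bin2hex(random_bytes(8));                           // e.g. "a23h7230sjfc0b1d"
    $stmt = $db->prepare('INSERT INTO url_state (id, payload) VALUES (?, ?)');
    $stmt->execute([$key, json_encode($state)]);
    return $key;                                               // this goes into ?key=...
}

// later, on any request that carries ?key=..., load the state back
function load_state(PDO $db, string $key): ?array {
    $stmt = $db->prepare('SELECT payload FROM url_state WHERE id = ?');
    $stmt->execute([$key]);
    $payload = $stmt->fetchColumn();
    return $payload === false ? null : json_decode($payload, true);
}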
I think that's pretty much it!
One option, which is good when they really are navigable parameters, is to work these parameters into the path section of the URL, e.g.
http://example.site.com/ViewPerson.xx?PersonID=123
=>
http://example.site.com/View/Person/123/
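If the app happens to sit behind Apache, a mod_rewrite rule along these lines can map the short path back onto the original query-string form (an .htaccess-style sketch; the names are taken from the example above):
RewriteEngine On
RewriteRule ^View/Person/([0-9]+)/?$ ViewPerson.xx?PersonID=$1 [L,QSA]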
If the data in the URL is automatically generated, can't you just generate it again when needed?
With so little information it is hard to suggest a solution, but I'd start by researching what RESTful architectures do in terms of using hypermedia (i.e. links) to keep state. REST in Practice (http://tinyurl.com/287r6wk) is a very good book on this very topic.
I'm not sure what platform you are using. I have had the same problem, and I use a couple of solutions (ASP.NET):
Use Server.Transfer and HttpContext (PreviousPage in .Net 2+) to get access to a public property of the source page which holds the data.
Use Server.Transfer along with a hidden field in the source page.
Use compression on the query string.