I have a PHP script that allows users to upload multiple files to the server on POST, then redirects to the next page.
It seems to have been working for some time, but lately users are reporting it hanging indefinitely. They fill in all the fields, select files to upload, hit submit, then wait for hours before giving up and closing the window. But when I check, it appears the files were uploaded successfully and are intact; just the fields were not posted.
It seems the script never reaches the next section, where the form fields get parsed and inserted into the MySQL database. I've done some small tests and cannot recreate the problem, though I don't have time to test with large files such as 200M.
The max total filesize any user would upload is 200M, so I feel my PHP core settings are sufficient. Here is what I have:
max_execution_time = 7200
max_file_uploads = 20
max_input_time = 7200
memory_limit = 8000M
output_buffering = 4096
upload_max_filesize = 500M
Anything else in the core settings that could perhaps be giving me this problem? Or would it be a browser problem?
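One core setting not shown in the list above is post_max_size (default 8M), which caps the entire POST body, fields and files together. A minimal diagnostic sketch to print the effective limits at runtime and log requests whose body exceeded them (when the body exceeds post_max_size, PHP silently drops both $_POST and $_FILES, and the Content-Length header is the only trace left):

<?php
// Print the limits as PHP actually sees them; php.ini edits can be
// overridden by .htaccess or per-directory configuration.
foreach (['post_max_size', 'upload_max_filesize', 'max_input_time',
          'max_execution_time', 'memory_limit'] as $key) {
    echo $key, ' = ', ini_get($key), PHP_EOL;
}

// An oversized POST leaves $_POST and $_FILES empty while Content-Length
// is still set, so this condition flags it for the error log.
if (isset($_SERVER['CONTENT_LENGTH']) && empty($_POST) && empty($_FILES)) {
    error_log('Oversized POST? Content-Length: ' . $_SERVER['CONTENT_LENGTH']);
}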
This is most likely your users' connection speed. Ask one of your users for their connection speed, and have them use Google Chrome and watch the status bar; it should show the upload progress as a percentage. Or I recommend trying this yourself while throttling your bandwidth somehow. Remember, your users most likely have a maximum of about 1.5 Mbps upstream unless they have FiOS or a better connection (e.g. a T1).
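For scale: a 200 MB upload is roughly 1,600 megabits, and at 1.5 Mbps upstream that is about 1,600 / 1.5 ≈ 1,067 seconds, or close to 18 minutes of pure transfer time before any overhead, so a slow link alone can make the form appear to hang.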
I'm generating JSON for 65,000 users to populate a typeahead. The query is quick; it turns out building the JSON was the bottleneck. I'm trying to cache the result, but what happens when the cache expires? Does it rebuild automatically, or does it wait until someone triggers the call, resulting in a 9-second page load once every 12 hours?
def user_json
  # fetch is lazy: it returns the cached JSON if present and unexpired;
  # otherwise it runs the block, stores the result for 12 hours, and
  # returns it. So after expiry, the next caller pays the rebuild cost.
  Rails.cache.fetch("users", expires_in: 12.hours) do
    User.all.to_json
  end
end
If you do not want to hit the database each time, you could look into a solution such as Elasticsearch or Sphinx, which are designed to perform the kind of quick searching you're describing.
I was listening to JavaScript Jabber this morning, and they were saying that the average web page is now a shade under 2 MB, including images and CSS. Your request doubles that size. While that may be fine for North Americans, your page is likely to feel much slower in internet backwaters such as Australia.
It's also worth noting that older browsers such as IE don't handle iteration in JavaScript very well. I would expect your application to crash in any IE before version 9.
For these reasons I would avoid pushing JSON containing 65,000 rows over the wire and into the browser. If the query is quick, why not make a trip back to the server each time the user changes the input? Many trips back to the server based on the input would be quicker than sending all 65,000 records, and in the process you remove the entire class of problems described above. Your original problem also goes away, as you no longer have to cache any responses.
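To illustrate the per-keystroke lookup: the question is about Rails, so this is only a sketch of the shape of the idea, written here in PHP against a hypothetical users table with id and name columns:

<?php
// search.php: return a small JSON list of users matching the typed prefix,
// instead of shipping all 65,000 rows to the browser up front.
$db = mysqli_connect('localhost', 'user', 'pass', 'app'); // hypothetical credentials

$prefix = isset($_GET['q']) ? $_GET['q'] : '';

// Prepared statement: match on the prefix and cap the result size so each
// response stays tiny.
$stmt = mysqli_prepare($db,
    "SELECT id, name FROM users WHERE name LIKE CONCAT(?, '%') ORDER BY name LIMIT 10");
mysqli_stmt_bind_param($stmt, 's', $prefix);
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt);

header('Content-Type: application/json');
echo json_encode(mysqli_fetch_all($result, MYSQLI_ASSOC));

The LIMIT keeps each response tiny, so even many round trips move far less data than one 65,000-row payload.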
I have a query which gets a list of users from a table, sorted by the time each row was created. I got the following timing diagram from the Chrome developer tools.
You can see that TTFB (time to first byte) is too high.
I am not sure whether it is because of the SQL sort. If that is the reason, how can I reduce this time?
I have seen blogs saying that TTFB should be low (under 1 second), but for me it is more than 1 second. Is it because of my query or something else?
I am using Angular. Should I sort the table in Angular instead of SQL? (Many posts say that shouldn't be the issue.)
What I want to know is how I can reduce TTFB. I am actually new to this; it is a task given to me by my team members. I have read many posts but could not understand them properly. What is TTFB? Is it the time taken by the server?
TTFB is not the time to the first byte of the body of the response (i.e., the useful data, such as JSON or XML), but rather the time to the first byte of the response received from the server. That byte is the start of the response headers.
For example, if the server sends the headers before doing the hard work (like heavy SQL), you will get a very low TTFB, but it isn't a "true" measurement.
In your case, TTFB represents the time you spend processing data on the server.
To reduce the TTFB, you need to do the server-side work faster.
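One way to see where that server-side time goes is to bracket each stage with timestamps. A minimal PHP sketch, assuming a PHP backend and a hypothetical run_sorted_query() helper standing in for the real SELECT:

<?php
// Rough server-side timing: log how long each stage takes, so you can tell
// whether the sorted query or something else dominates the TTFB.
$t0 = microtime(true);

$rows = run_sorted_query();   // hypothetical: your ORDER BY created_at query
$t1 = microtime(true);

$body = json_encode($rows);   // serialization can be a hidden cost too
$t2 = microtime(true);

error_log(sprintf('query: %.3fs  encode: %.3fs', $t1 - $t0, $t2 - $t1));
echo $body;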
I ran into the same problem. My project runs on a local server, and I checked my PHP code:
$db = mysqli_connect('localhost', 'root', 'root', 'smart');
I use localhost to connect to my local database, and that may be the cause of the problem you're describing: on some systems, resolving the name localhost is slow. You can modify your HOSTS file and add the line
127.0.0.1 localhost
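An alternative worth trying (my addition, not part of the original answer) is to skip name resolution entirely and connect by IP:

$db = mysqli_connect('127.0.0.1', 'root', 'root', 'smart'); // IP instead of 'localhost'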
TTFB depends on what happens behind the scenes on the server, and your browser knows nothing about that.
You need to look into what queries are being run and how the website connects to the server.
This article might help you understand TTFB, but otherwise you will need to dig deeper into your application.
If you are using PHP, try calling <?php flush(); ?> after </head> and before </body>, or after whatever section you want to send quickly (like the header or content). It will send the output generated so far without waiting for PHP to finish. Don't sprinkle it everywhere, though, or the speed increase won't be noticeable.
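For example, a minimal sketch that flushes the header early so the browser can start fetching CSS while PHP keeps working (ob_flush() is needed as well when output buffering is enabled):

<html>
<head>
  <link rel="stylesheet" href="style.css">
</head>
<?php
ob_flush();  // empty PHP's own output buffer, if output_buffering is on
flush();     // then push what we have to the browser
// ... slow work (queries, rendering the body) continues below ...
?>
<body>...</body>
</html>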
More info
I would suggest you read this article and focus on how to optimize the overall response to the user's request (whether a page, a search result, etc.).
A good argument for this is the example they give about using gzip to compress the page. Even though TTFB is faster when you do not compress, the overall experience for the user is worse because it takes longer to download content that is not zipped.
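In PHP, for instance, a page can be compressed with the built-in gzip output handler; a minimal sketch, assuming the zlib extension is available:

<?php
// Buffer the whole page and gzip it before sending. TTFB may rise slightly
// (compression costs CPU), but the smaller transfer usually wins overall.
ob_start('ob_gzhandler');
?>
<html> ... the page as usual ... </html>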
This is my very first question, and I hope it's well explained so that I can find an answer.
I work on the website project for a delivery company that keeps all its data on an Oracle9i server.
Most of the web users just want to know when they're going to get their package, but I'm sure there are also robots that query that info several times a day to update their systems.
I'm working on code to stop those robots (asking for a captcha after the 3rd query in 15 minutes, for example), because we have web services they can use to query all the data in bulk.
Now, my problem is that at peak hours (12:00-14:00) the database starts to answer very slowly.
Here is some data I've parsed from the web application. I don't have logs at this level for the web services, but there were also a lot of queries there.
It shows the timestamp when I request a connection from the datasource, the Integer.toHexString(connection.hashCode()), the name of the datasource, the timestamp when I close the connection and the difference between both timestamps.
Most of the time the queries finish in less than a second, but yesterday I saw a strange delay of more than 2 minutes.
Is there some kind of maximum number of connections allowed on the database, so that when it surpasses that limit, the database queues my query for some time before trying again?
Thanks in advance.
Is there some kind of maximum number of connections allowed on the database?
Yes.
SESSIONS is one of the basic initialization parameters and specifies the maximum number of sessions that can be created in the system. Because every login requires a session, this parameter effectively determines the maximum number of concurrent users in the system.
The default value is derived from the PROCESSES parameter (1.5 times PROCESSES, plus 22); therefore, if you didn't change PROCESSES (default 100), the maximum number of sessions for your database will be 1.5 × 100 + 22 = 172.
You can determine the value by querying V$PARAMETER:
SQL> select value
2 from v$parameter
3 where name = 'sessions';
VALUE
--------------------------------
480
so that when it surpasses that limit, the database queues my query for some time before trying again?
No.
When you attempt to exceed the value of the SESSIONS parameter, the exception ORA-00018: maximum number of sessions exceeded is raised.
Something may well be queuing your query but it will be within your own code and not specified by Oracle.
It sounds as though you need to find out more. If you're not at the maximum number of sessions, then you need to capture the query that's taking a long time and profile it; this is, I think, the more likely scenario. If you are at the maximum number of sessions, then you need to look at your company's code to determine what's happening.
You haven't really explained anything about your application, but it sounds as though you're opening a session (or more) per user. You might want to reconsider whether this is the correct approach.
I've also found the real problem: the method that requests a connection from the datasource was synchronized, which caused locks while requesting connections at peak hours. I removed the synchronization and everything is working fine.
I have a script which collects some data about certain files on my computer and then makes a POST to a Google Apps Script published as a web service.
I was wondering which would be better: collect all the data (which couldn't be more than a few MB, maybe 10) and make a single POST, or make one POST request for each piece (each just a few KB)?
Which is better for performance on both sides, my local computer and Google's servers?
Could it be taken as abuse if I make a hundred POSTs? It will run just once a month.
There are a lot of factors that would go into this decision:
In general, I would argue it's better to do one upload, as 10 MB isn't a large amount of data.
Is this asynchronous (or automatic), or is there a user clicking a button? If it's happening automatically, then you don't have to worry about reporting progress accurately to the user. If a user is watching the upload, then smaller uploads are better, as you'll be able to measure how many of the units (or chunks) have been properly uploaded.
Your local computer should not be in the picture at all; Google Apps Script runs on Google's servers. Perhaps there is some confusion here?
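For illustration, here is a minimal sketch of the single-POST approach in PHP (the question doesn't say what language the local script is in, and the endpoint URL and payload shape are hypothetical):

<?php
// Collect everything first, then send the whole batch in one POST.
$batch = [];
foreach (glob('/path/to/files/*') as $file) {
    $batch[] = ['name' => basename($file), 'size' => filesize($file)];
}

$ch = curl_init('https://script.google.com/macros/s/XXX/exec'); // hypothetical URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($batch));
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);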
I am working on a website project with PHP/Apache and no JS so far.
I found various ways to set the upload limit for an image on the server.
They work, but when I upload a very large file, the message "your file is too big" takes far too long to appear. This means that if a user doesn't understand what "max 2.4 MB" means, he is likely to wait a minute or two before seeing the message.
My question is:
Do you know any way to have the upload automatically cancelled if the image the user tries to transfer exceeds the limit?
Thanks a lot,
SunnyOne.
Basically, there are 2 ways to do this: with Flash/Java, or with fancy HTML5 JavaScript that only works in some browsers (and only the most recent versions of those, as well).
Check these other SO questions for pointers:
Client Checking file size using HTML5? and Detecting file upload size on the client side?
Also, check out these tools: YUI2 Uploader, FancyUpload, and SWFUpload.