php-fpm "pool seems busy error". Why am I getting this? - php-5.3

I have a server with 64 GB of RAM using Apache + FastCGI to connect to PHP-FPM.
I am running some load tests with ApacheBench: 500k requests at 200 requests/sec (the goal is 10k/sec per server). I keep getting the "pool seems busy" error and am at a loss as to how to configure FPM properly to handle even 200 req/sec. It feels like I'm missing something obvious.
fpm-config:
pm = dynamic
pm.max_children = 8192
pm.start_servers = 2048
pm.min_spare_servers = 2048
pm.max_spare_servers = 2048
pm.max_requests = 8000
apache config:
<IfModule worker.c>
StartServers 2048
ServerLimit 8175
MaxClients 8175
MinSpareThreads 2048
MaxSpareThreads 2048
ThreadsPerChild 25
MaxRequestsPerChild 8000
</IfModule>
What am I doing wrong?

My initial gut reaction is that a maximum of roughly 8000 children is quite a large number of processes to have running unless you have a lot of wait time per request. Past a point, that many processes will actually degrade performance, since context switches keep swapping the running processes in and out of CPU time, which itself takes time. Unless you make a lot of external service calls that leave processes waiting, this seems excessive. Additionally, assuming roughly 20 MB allocated over the course of a request, you are using 60+% of your free RAM just to serve start_servers (2048 x 20 MB is about 40 GB).
As for the "pool seems busy" error, I don't know offhand. It's tough (for me) to say without getting deeper into the environment. What is your free CPU time like and your memory utilization when you are running AB?
I also wonder if there is a system limit on the number of connections an individual process (like the FPM) can have open... Check ulimit -a
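If it helps as a starting point, here is a rough sketch of a more conservative pool sizing based on the memory math above; the ~20 MB per child figure and the headroom left for the OS and Apache are assumptions you would want to verify against your own measurements:
; assume ~20 MB per child and budget ~40 GB of the 64 GB for PHP-FPM
; 40 GB / 20 MB = ~2000 children as an upper bound
pm = dynamic
pm.max_children = 2000
pm.start_servers = 256
pm.min_spare_servers = 128
pm.max_spare_servers = 512
pm.max_requests = 8000
At 200 req/sec, even a few hundred children is plenty unless individual requests block for a long time; watch memory and CPU under ab and scale up only if the pool actually saturates.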

Related

FSB, RAM MHz, MT/s, memory divider 1:1, Core2Duo

Due to temporary (hopefully) financial problems I have to use an old laptop. Its FSB (Front Side Bus) clock is 333 MHz (https://www.techsiting.com/mt-s-vs-mhz/). It has 2 SO-DIMM slots for DDR2 SDRAM. It previously had only one 2 GB DIMM, and that was a nightmare.
Each slot can handle at most 2 GB, so the maximum amount of memory is 4 GB. Knowing that DDR stands for double data rate, I bought for pocket change (10 euro) two DDR2 SO-DIMMs rated at 800 MHz, hoping to get (assuming the memory divider is 1:2, it is double data rate, isn't it?) 2 x 333 MHz -> after the divider = 667 MT/s (no idea how they avoided 666). Since I have a Core2Duo, I even had a faint hope of getting 4 x 333 MHz = 1333 MT/s.
But it seems that my memory divider is 1:1, so I get either
2 x 333 MHz x divider = 333 MT/s
4 x 333 MHz x divider = ?
And utilities like lshw and dmidecode seem to confirm that:
~ >>> sudo lshw -C memory | grep clock
clock: 333MHz (3.0ns) # notice 333MHz here
clock: 333MHz (3.0ns) # notice 333MHz here
~ >>> sudo dmidecode --type memory | grep Speed
Supported Speeds:
Current Speed: Unknown
Current Speed: Unknown
Speed: 333 MT/s # notice 333MT/s here
Speed: 333 MT/s # notice 333MT/s here
~ >>>
So my 333 MHz FSB has been multiplied by 1 (one) and I've got 333 MT/s (if I understood correctly). I'm still satisfied: the OS does not swap as much, the boot process is faster, programs start faster, the browser does not hang every hour, and I can open many more tabs. I just want to know: since I have a Core2Duo, what MT/s do I get from these two? Or maybe it is even more complicated?
2 x 333 MHz x divider = 333 MT/s
4 x 333 MHz x divider = 667 MT/s # 4 because of Duo
And is there any difference for a two-processor system with just 4 GB of RAM when MT/s == MHz?
P.S. The BIOS is old (although the latest available), and I cannot see the real FSB clock there, nor change it or the memory divider.
It looks like there's no point in checking the I/O bus clock with some Linux command/tool, because, if what is written in electronics.stackexchange.com/a/424928 is correct, it is simply always half of the data rate:
I/O bus clock is always half of bus data rate.
So my old machine has these parameters:
It is DDR2-333 (not standardized by JEDEC, since the standard starts at DDR2-400)
It has memory MHz = 333
It has memory MT/s = 333
It has I/O bus MHz = 166.5 # just because
The thing I still don't get is that I have a Core2Duo, so is my memory 333 MT/s or 666 MT/s?
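If the quoted rule (data rate = 2 x I/O bus clock) is taken at face value, the readings above are self-consistent, and the number of CPU cores does not enter into it at all:
data rate = 2 x I/O bus clock = 2 x 166.5 MHz = 333 MT/s
So with these modules running as reported, the memory is at 333 MT/s regardless of the Core2Duo; a second CPU core adds compute, not memory transfers. (This is a reading of the quoted electronics.stackexchange.com rule against the dmidecode output above, not a measurement.)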

What is the best way to performance test an SQS consumer to find the max TPS that one host can handle?

I have an SQS consumer running in EventConsumerService that needs to handle up to 3K TPS successfully, sometimes upwards of 20K TPS (1.2 million messages per minute). For each message processed, I make a REST call to DataService's TCP VIP. I'm trying to perform a load test to find the max TPS that one host can handle in EventConsumerService without overstraining:
Request volume on dependencies, DynamoDB storage, etc
CPU utilization in both EventConsumerService and DataService
Network connections per host
IO stats due to overlogging
DLQ size must be minimal; currently I am seeing my DLQ grow to 500K messages due to 500 Service Unavailable exceptions thrown from DataService, so something must be wrong.
Approximate age of oldest message. I do not want a message sitting in the queue for over X minutes.
Fatals and latency of the REST call to DataService
Active threads
This is how I am performing the performance test:
I set up both my consumer and the other service on one host, the reason being I want to understand the load on both services per host.
I use a TPS generator to fill the SQS queue with a million messages
The EventConsumerService service is already running in production. Once messages started filling the SQS queue, I immediately could see requests being sent to DataService.
Here are the parameters I am tuning to find messagesPolledPerSecond:
messagesPolledPerSecond = (numberOfHosts * numberOfPollers * messageFetchSize) * (1000/(sleepTimeBetweenPollsPerMs+receiveMessageTimePerMs))
messagesInSurge / messagesPolledPerSecond = ageOfOldestMessageSLA
ageOfOldestMessage + settingsUpdatedLatency < latencySLA
The variables for SqsConsumer which I kept constant are:
numberOfHosts = 1
receiveMessageTimePerMs = ~60 ms (it's out of my control)
Max thread pool size: 300
The other factors are all fair game:
Number of pollers (default 1), I set to 150
Sleep time between polls (default 100 ms), I set to 0 ms
Sleep time when no messages (default 1000 ms), ???
message fetch size (default 1), I set to 10
However, with the above parameters I am seeing a high volume of messages being sent to the DLQ due to server errors, so clearly I have set the values too high. This testing methodology seems highly inefficient, and I am unable to find the optimal TPS that does not cause such a tremendous number of messages to be sent to the DLQ, and does not cause such a high approximate age of the oldest message.
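As a rough sanity check, plugging the constants above into the first formula gives the polling rate those settings imply:
messagesPolledPerSecond = (1 * 150 * 10) * (1000 / (0 + 60)) ≈ 1500 * 16.7 ≈ 25,000 messages/sec
That is well above the 3K TPS target, so DataService returning 500s is not surprising; reducing the number of pollers and/or the fetch size (or reintroducing some sleep time between polls) until this figure sits near the throughput you actually want to test would make the runs far more controlled.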
Any guidance on how best I should test is appreciated. It'd be very helpful if we could set up a time to chat; PM me directly.

How to increase the default memory usage in odoo?

I'm using Ubuntu Server and have configured an Odoo project. The server has 8 GB of RAM, of which around 6 GB is available, so I need to increase Odoo's default memory limits. Please let me know how to increase them.
Have you tried playing with some of Odoo's Advanced and Multiprocessing options?
odoo.py --help
Advanced options:
--osv-memory-count-limit=OSV_MEMORY_COUNT_LIMIT
Force a limit on the maximum number of records kept in
the virtual osv_memory tables. The default is False,
which means no count-based limit.
--osv-memory-age-limit=OSV_MEMORY_AGE_LIMIT
Force a limit on the maximum age of records kept in
the virtual osv_memory tables. This is a decimal value
expressed in hours, and the default is 1 hour.
--max-cron-threads=MAX_CRON_THREADS
Maximum number of threads processing concurrently cron
jobs (default 2).
Multiprocessing options:
--workers=WORKERS Specify the number of workers, 0 disable prefork mode.
--limit-memory-soft=LIMIT_MEMORY_SOFT
Maximum allowed virtual memory per worker, when
reached the worker be reset after the current request
(default 671088640 aka 640MB).
--limit-memory-hard=LIMIT_MEMORY_HARD
Maximum allowed virtual memory per worker, when
reached, any memory allocation will fail (default
805306368 aka 768MB).
--limit-time-cpu=LIMIT_TIME_CPU
Maximum allowed CPU time per request (default 60).
--limit-time-real=LIMIT_TIME_REAL
Maximum allowed Real time per request (default 120).
--limit-request=LIMIT_REQUEST
Maximum number of request to be processed per worker
(default 8192).
Also, if you are using WSGI or something similar to run Odoo, that layer may need some tuning too.
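As a concrete, purely illustrative starting point on an 8 GB machine (the worker count and limits below are assumptions, not recommendations for your workload), you could enable multiprocessing and raise the per-worker limits:
odoo.py --workers=4 \
        --limit-memory-soft=1342177280 \
        --limit-memory-hard=1610612736 \
        --max-cron-threads=2
1342177280 bytes is 1.25 GB and 1610612736 bytes is 1.5 GB, so four workers with that hard cap roughly match the ~6 GB you have free; adjust the worker count to your CPU core count and the limits to what your modules actually need.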

Restart Heroku dynos when their RAM is exceeded

I have a memory leak problem with my server (which is written in Ruby on Rails).
I want to implement a temporary solution that restarts the dynos automatically when their memory usage gets too high. What is the best way to do this? And is it risky?
There is a good solution for this if you're using Puma as your server:
https://github.com/schneems/puma_worker_killer
You can restart your workers when RAM usage exceeds some threshold, for example:
PumaWorkerKiller.config do |config|
  config.ram = 1024 # mb
  config.frequency = 5 # seconds
  config.percent_usage = 0.98
  config.rolling_restart_frequency = 12 * 3600 # 12 hours in seconds
end
PumaWorkerKiller.start
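As far as I recall from the gem's README, that config block needs to run in Puma's master process, so it typically goes in a Rails initializer (for example a file like config/initializers/puma_worker_killer.rb) or in config/puma.rb; check the README for the placement recommended for your setup.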
Also, to prevent data corruption and other unpleasant issues in your DB, I would suggest making sure you are covered by atomic transactions.

Allowed memory size of 16777216 bytes exhausted (tried to allocate 78 bytes) in

I am using phpBB and everything works fine, but I am getting the following error on a single page inside the admin area:
Allowed memory size of 16777216 bytes exhausted (tried to allocate 78 bytes) in home/mytestsite/public_html/includes/template.php on line 458
How do I fix this error?
As you can imagine, this error message occurs when PHP tries to use more memory than is available. I'm assuming that changing the code is not an option, but you CAN increase the amount of memory available to PHP.
To change the memory limit for one specific script, include a line such as this at the top of the script:
ini_set("memory_limit","20M");
The 20M (for example) sets the limit to 20 megabytes. If this does not work, keep increasing the memory limit until your script fits or your server squeals for mercy.
You can also make this a permanent change for all PHP scripts running on the server by adding a line such as this to the server’s php.ini file:
memory_limit = 20M
Hope this helps
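If you can't edit php.ini on your host and PHP runs as an Apache module (an assumption about your setup), a per-directory override is another option:
php_value memory_limit 20M
Drop that line into the .htaccess file of the affected directory; hosts running PHP as CGI/FastCGI ignore php_value, so treat this as a fallback to try rather than a guaranteed fix.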
