Nginx + passenger serving 4 requests/s for a static page - ruby-on-rails

I deployed my app to DigitalOcean with Passenger and Nginx. I used ApacheBench to see how many requests per second I can get on a static page (a simple "hello world" Rails view), but I am only getting about 4 requests/s.
ab -n 100 http://107.170.100.242/fo
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 107.170.100.242 (be patient).....done
Server Software: nginx/1.8.0
Server Hostname: 107.170.100.242
Server Port: 80
Document Path: /fo
Document Length: 5506 bytes
Concurrency Level: 1
Time taken for tests: 22.662 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 632600 bytes
HTML transferred: 550600 bytes
Requests per second: 4.41 [#/sec] (mean)
Time per request: 226.617 [ms] (mean)
Time per request: 226.617 [ms] (mean, across all concurrent requests)
Transfer rate: 27.26 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0    0.0     0       0
Processing:   181  226   65.4   204     445
Waiting:      181  226   65.4   204     445
Total:        181  227   65.4   204     446
It should be literally thousands of requests per second, since Nginx is in front. I have been researching this for the entire day without results; can someone please point me in the right direction to solve this?

This is the nginx config directive that will cause it to bypass the app server and serve static files directly:
root /var/www/my_app/public;
Are you sure that is right?
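For reference, a minimal Passenger + Nginx server block typically looks something like the sketch below (the server_name, root path, and Ruby location are placeholders, not taken from the question). With root pointing at the app's public/ directory, Nginx serves any file that actually exists there itself and only forwards the remaining requests to the Rails app:
# Sketch only - adapt server_name, root, and passenger_ruby to your deployment
server {
    listen 80;
    server_name example.com;
    # must point at the Rails app's public/ directory, not the app root
    root /var/www/my_app/public;
    # hand everything that is not a static file to the Rails app
    passenger_enabled on;
    passenger_ruby /usr/local/rvm/wrappers/default/ruby;
}
Note that a Rails view such as /fo is generated by the app, so it still goes through Passenger regardless of this directive; only files that exist under public/ are served directly by Nginx.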

Related

uwsgi log format: seeing uwsgi req: N/M where N > M

I have a uwsgi process running a Flask application. HAProxy (running in mode http) sits between the client and the application.
I occasionally see the HAProxy termination state "SD--", with Tc = 0, Tr = -1, and a returned HTTP code of -1. This means that HAProxy encountered an explicit TCP disconnection from the uwsgi server.
Looking at the uwsgi logs, I found that the server was processing other requests normally at the same time, but the affected request never reached the server.
The only strange thing about the uwsgi logs at that point in time is that the number of requests managed by the current uwsgi worker is greater than the total number of requests managed by the whole uwsgi app,
like this:
[pid: 22759|app: 0|req: 47188/47178] * POST * => generated 84 bytes in 970 msecs (HTTP/1.1 200) 2 headers in 71 bytes (3 switches on core 98)
I am wondering whether this is abnormal, and in what scenarios these counters can end up like this.

Unused Passenger process stays alive and consumes server resources for a Rails 4 app

We have a Rails app that runs using Apache -> Passenger. At least once a week, our alerts that monitor server CPU and RAM start getting triggered on one or more of our app servers, and the root cause is that one or more of the Passenger processes are taking up a large chunk of the server's CPU and RAM without actually serving any requests.
For example, when I run "passenger-status" on the server that triggers these alerts, I see this:
Version : 5.3.1
Date : 2022-06-03 22:00:13 +0000
Instance: (Apache/2.4.51 (Amazon) OpenSSL/1.0.2k-fips Phusion_Passenger/5.3.1)
----------- General information -----------
Max pool size : 12
App groups : 1
Processes : 9
Requests in top-level queue : 0
----------- Application groups -----------
Requests in queue: 0
* PID: 16915 Sessions: 1 Processed: 3636 Uptime: 3h 2m 30s
CPU: 5% Memory : 1764M Last used: 0s ago
* PID: 11275 Sessions: 0 Processed: 34 Uptime: 55m 24s
CPU: 45% Memory : 5720M Last used: 35m 43s ago
...
See how the 2nd process hasn't been used for more than 35 minutes but is taking up so much of the server's resources?
The only solution has been to manually kill the PID, which seems to resolve the issue, but is there a way to automate this check?
I also realize that the Passenger version is old and can be upgraded (which I will get done soon), but I have seen this issue in multiple versions prior to the current one, so I wasn't sure whether an upgrade by itself is guaranteed to resolve this.
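One possible way to automate that check (not a built-in Passenger feature; a rough sketch that assumes the "PID:" / "Memory :" line layout shown in the passenger-status output above, and an arbitrary 4000 MB threshold) is a small script run from cron that kills any worker above the memory limit and lets Passenger respawn it:
#!/bin/sh
# kill_bloated_passenger.sh - sketch only, test carefully before using
LIMIT_MB=4000
passenger-status | awk -v limit="$LIMIT_MB" '
  /PID:/     { pid = $3 }                    # remember PID from the "* PID: ..." line
  /Memory :/ { mem = $5; sub(/M$/, "", mem)  # e.g. "5720M" -> 5720
               if (mem + 0 > limit && pid != "") print pid }
' | xargs -r kill
A refinement would be to also parse the "Last used" field so only idle workers are killed. Separately, the open-source Passenger Apache module has a PassengerMaxRequests directive that recycles workers after a set number of requests, which can limit gradual memory growth.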

Informix - Locked DB due to lock created by cancelled session?

I attempted to run a script to generate a table in my Informix database, but the script was missing a newline at EOF, so I think Informix had problems reading it and the script got blocked doing nothing. I had to kill the script and add the newline to the file, so now the script runs fine, except that it does not create the table because of a lock created when I killed the script abruptly.
I am new to this, so sorry for the dumb question; the IBM documentation does not have a clear and simple explanation of how to clean this up.
So, my question is: how do I release the locks so I can continue working on my script?
admin_proyecto#li1106-217 # onstat -k
IBM Informix Dynamic Server Version 12.10.FC9DE -- On-Line (CKPT REQ) -- Up 9 ds
Blocked:CKPT
Locks
address wtlist owner lklist type tbz
44199028 0 44ca6830 0 HDR+S
44199138 0 44cac0a0 0 HDR+S
441991c0 0 44cac0a0 4419b6f0 HDR+IX
44199358 0 44ca44d0 0 S
441993e0 0 44ca44d0 44199358 HDR+S
4419ac50 0 44cac0a0 441991c0 HDR+X
4419aef8 0 44ca44d0 441993e0 HDR+IX
4419b2b0 0 44ca79e0 0 S
4419b3c0 0 44ca82b8 0 S
4419b6f0 0 44cac0a0 44199138 HDR+X
4419b998 0 44ca8b90 0 S
4419bdd8 0 44ca44d0 4419aef8 HDR+X
12 active, 20000 total, 16384 hash buckets, 0 lock table overflows
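For the lock question itself, a common approach (sketch only; it requires informix/root privileges, and <sessid> below is a placeholder) is to map the stale lock's owner address from onstat -k to a session in onstat -u, then terminate that session so its locks are released:
onstat -k            # note the "owner" address of the stale lock
onstat -u            # find the row whose address matches that owner; note its sessid
onmode -z <sessid>   # terminate that session, releasing its locks
If the owning session is already gone, the lock is usually held by a transaction still being rolled back, and it clears once the rollback finishes.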
On my "toy" systems i usually point LTAPEDEV to a directory:
LTAPEDEV /usr/informix/dumps/motor_003/backups
Then, when Informix blocks because all of its logical logs are full, I manually run ontape -a to back up the used logical logs to files and free them for reuse.
For example, here I have an Informix instance blocked due to no more logical logs available:
$ onstat -l
IBM Informix Dynamic Server Version 12.10.FC8DE -- On-Line (CKPT REQ) -- Up 00:18:58 -- 213588 Kbytes
Blocked:CKPT
Physical Logging
Buffer bufused bufsize numpages numwrits pages/io
P-1 0 64 1043 21 49.67
phybegin physize phypos phyused %used
2:53 51147 28085 240 0.47
Logical Logging
Buffer bufused bufsize numrecs numpages numwrits recs/pages pages/io
L-1 13 64 191473 12472 6933 15.4 1.8
Subsystem numrecs Log Space used
OLDRSAM 191470 15247376
HA 3 132
Buffer Waiting
Buffer ioproc flags
L-1 0 0x21 0
address number flags uniqid begin size used %used
44d75f88 1 U------ 47 3:15053 5000 5 0.10
44b6df68 2 U---C-L 48 3:20053 5000 4986 99.72
44c28f38 3 U------ 41 3:25053 5000 5000 100.00
44c28fa0 4 U------ 42 3:53 5000 2843 56.86
44d59850 5 U------ 43 3:5053 5000 5 0.10
44d598b8 6 U------ 44 3:10053 5000 5 0.10
44d59920 7 U------ 45 3:30053 5000 5 0.10
44d59988 8 U------ 46 3:35053 5000 5 0.10
8 active, 8 total
In the online log I have:
$ onstat -m
04/23/18 18:20:42 Logical Log Files are Full -- Backup is Needed
So I manually issue the command:
$ ontape -a
Performing automatic backup of logical logs.
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000041
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000042
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000043
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000044
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000045
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000046
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000047
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000048
Do you want to back up the current logical log? (y/n) n
Program over.
If I check the status of the logical logs again:
$ onstat -l
IBM Informix Dynamic Server Version 12.10.FC8DE -- On-Line -- Up 00:23:42 -- 213588 Kbytes
Physical Logging
Buffer bufused bufsize numpages numwrits pages/io
P-2 33 64 1090 24 45.42
phybegin physize phypos phyused %used
2:53 51147 28091 36 0.07
Logical Logging
Buffer bufused bufsize numrecs numpages numwrits recs/pages pages/io
L-1 0 64 291335 15878 7023 18.3 2.3
Subsystem numrecs Log Space used
OLDRSAM 291331 22046456
HA 4 176
address number flags uniqid begin size used %used
44d75f88 1 U-B---- 47 3:15053 5000 5 0.10
44b6df68 2 U-B---- 48 3:20053 5000 5000 100.00
44c28f38 3 U---C-L 49 3:25053 5000 3392 67.84
44c28fa0 4 U-B---- 42 3:53 5000 2843 56.86
44d59850 5 U-B---- 43 3:5053 5000 5 0.10
44d598b8 6 U-B---- 44 3:10053 5000 5 0.10
44d59920 7 U-B---- 45 3:30053 5000 5 0.10
44d59988 8 U-B---- 46 3:35053 5000 5 0.10
8 active, 8 total
The logical logs are now marked as "Backed Up" and can be reused, and the Informix instance no longer shows Blocked:CKPT.
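As a side note on the manual ontape -a step: the backup can also be triggered automatically. ALARMPROGRAM in the onconfig file names a script that Informix runs on events such as "logical logs full"; pointing it at the stock log_full.sh (or a small wrapper around ontape -a) removes the need to intervene by hand. A sketch, assuming INFORMIXDIR is /usr/informix on this machine - verify the script path on your own install:
LTAPEDEV      /usr/informix/dumps/motor_003/backups
ALARMPROGRAM  /usr/informix/etc/log_full.sh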

How to improve Nginx, Rails, Passenger memory usage?

I currently have a Rails app set up on a DigitalOcean VPS (1GB RAM) through Cloud 66. The problem is that the VPS's memory fills up with Passenger processes.
The output of passenger-status:
# passenger-status
Version : 4.0.45
Date : 2014-09-23 09:04:37 +0000
Instance: 1762
----------- General information -----------
Max pool size : 2
Processes : 2
Requests in top-level queue : 0
----------- Application groups -----------
/var/deploy/cityspotters/web_head/current#default:
App root: /var/deploy/cityspotters/web_head/current
Requests in queue: 0
* PID: 7675 Sessions: 0 Processed: 599 Uptime: 39m 35s
CPU: 1% Memory : 151M Last used: 1m 10s ago
* PID: 7686 Sessions: 0 Processed: 477 Uptime: 39m 34s
CPU: 1% Memory : 115M Last used: 10s ago
The max_pool_size seems to be configured correctly.
The output of passenger-memory-stats:
# passenger-memory-stats
Version: 4.0.45
Date : 2014-09-23 09:10:41 +0000
------------- Apache processes -------------
*** WARNING: The Apache executable cannot be found.
Please set the APXS2 environment variable to your 'apxs2' executable's filename, or set the HTTPD environment variable to your 'httpd' or 'apache2' executable's filename.
--------- Nginx processes ---------
PID PPID VMSize Private Name
-----------------------------------
1762 1 51.8 MB 0.4 MB nginx: master process /opt/nginx/sbin/nginx
7616 1762 53.0 MB 1.8 MB nginx: worker process
### Processes: 2
### Total private dirty RSS: 2.22 MB
----- Passenger processes -----
PID VMSize Private Name
-------------------------------
7597 218.3 MB 0.3 MB PassengerWatchdog
7600 565.7 MB 1.1 MB PassengerHelperAgent
7606 230.8 MB 1.0 MB PassengerLoggingAgent
7675 652.0 MB 151.7 MB Passenger RackApp: /var/deploy/cityspotters/web_head/current
7686 652.1 MB 116.7 MB Passenger RackApp: /var/deploy/cityspotters/web_head/current
### Processes: 5
### Total private dirty RSS: 270.82 MB
.. 2 Passenger RackApp processes, OK.
But when I use the htop command (screenshot not included here), there seem to be a lot of Passenger RackApp processes. We're also running Sidekiq with the default configuration.
New Relic Server monitoring shows the same memory pressure (screenshot not included here).
I tried tuning Passenger settings and adding a load balancer and another server, but I honestly don't know what to do from here. How can I find out what's causing so much memory usage?
Update: I had to restart nginx because of some changes and it seemed to free quite a lot of memory.
Press Shift-H to hide threads in htop. Those aren't processes but threads within a process. The key column is RSS: you have two Passenger processes at 209MB and 215MB and one Sidekiq process at 154MB.
Short answer: this is completely normal memory usage for a Rails app. 1GB is simply a little small if you want multiple processes each for Passenger and Sidekiq. I'd cut Passenger down to one process.
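For the Nginx integration of Passenger, cutting down to one process means lowering the pool size; a sketch (the value 1 simply mirrors the suggestion above, and on Cloud 66 this may need to be set through their stack configuration rather than by editing nginx.conf directly):
# in the http { } block of nginx.conf
passenger_max_pool_size 1;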
Does your application create child processes? If so, it's likely that those extra "Passenger RackApp" processes are not actually created by Phusion Passenger, but are in fact processes created by your own app. You should double-check whether your app spawns child processes and whether you clean up those child processes correctly. Also double-check whether any libraries you use properly clean up their child processes.
I see that you're using Sidekiq and you've configured 25 Sidekiq processes. Those are also eating a lot of memory. A Sidekiq process eats just as much memory as a Passenger RackApp process, because both of them load your entire application (including Rails) in memory. Try reducing the number of Sidekiq processes.
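On the Sidekiq side, the default configuration runs 25 worker threads inside each Sidekiq process, and the process keeps the whole Rails app loaded regardless of the thread count. If memory is tight, lowering the concurrency (and running a single Sidekiq process) is the usual lever; a minimal config/sidekiq.yml sketch, with 5 chosen purely as an example value:
# config/sidekiq.yml
:concurrency: 5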

rails performance test warmup time

I am using a Rails performance test run via rake test:benchmark. The result gives me a warmup time.
I can't find the meaning of the 211 ms warmup time, and some of the tests take a longer warmup time. I know what wall_time, user_time, etc. mean.
.ApiTest#test_license_pool (211 ms warmup)
wall_time: 167 ms
user_time: 47 ms
memory: 6.2 MB
gc_runs: 0
gc_time: 0 ms
