uWSGI worker is free, but request handling has a significant delay - uwsgi

I would like to run a Django app under uWSGI behind nginx. I've launched 2 uWSGI workers, but I noticed an unfortunate behaviour: when one worker is busy, the other worker starts handling a request only after 10-15 seconds of waiting.
The configuration is pretty simple.
uWSGI:
uwsgi --socket 127.0.0.1:3031 --wsgi-file wsgi.py --master --processes 2 --threads 1
nginx:
server {
    listen 8000;
    server_name example.org;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
    }
}
and /etc/nginx/nginx.conf is left at its default contents.
test Django view:
import time

from django.http import HttpResponse

def test(request):
    print('Start!!!')
    time.sleep(9999)
    print('End')
    return HttpResponse()
And wsgi.py has the default Django contents.
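For reference, the stock Django wsgi.py is essentially the following (the project name is a placeholder here, not taken from the question):

import os

from django.core.wsgi import get_wsgi_application

# "mysite" stands in for the actual project package name
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

application = get_wsgi_application()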
So when I launch all this together and send 2 GET requests, I see only one "Start!!!" in the console, and only after 10-15 seconds does the second "Start!!!" appear.
I get the same strange behaviour without nginx (with uwsgi --http); with multiple threads per worker; without the "--master" uwsgi option; without the Django app; and with a few uwsgi instances behind an nginx load balancer.
Additional info:
uwsgi version: 2.0.12
nginx version: 1.4.6
host OS: Ubuntu 14.04
Python version: 3.4
Django: 1.9
CPU: 4 cores

Related

nginx permission denied accessing puma socket that does exist in the correct location

On a DigitalOcean droplet running Ubuntu 21.10 (impish) I am deploying a bare-bones Rails 7.0.0.alpha2 application to production. I am setting up nginx as the reverse proxy server to communicate with Puma acting as the Rails server.
I wish to run Puma as a service using systemctl without sudo root privileges. To this effect I have a Puma service set up in the user's home folder at ~/.config/systemd/user; the service is enabled and runs as I would expect.
systemctl status --user puma_master_cms_production
reports the following
● puma_master_cms_production.service - Puma HTTP Server for master_cms (production)
Loaded: loaded (/home/comtechmaster/.config/systemd/user/puma_master_cms_production.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-18 22:31:02 UTC; 1h 18min ago
Main PID: 1577 (ruby)
Tasks: 10 (limit: 2338)
Memory: 125.1M
CPU: 2.873s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/puma_master_cms_production.service
└─1577 puma 5.5.2 (unix:///home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock)
Nov 18 22:31:02 master-cms systemd[749]: Started Puma HTTP Server for master_cms (production).
The rails production.log is empty.
The puma error log shows the following
cat log/puma_error.log
=== puma startup: 2021-11-18 22:31:05 +0000 ===
The pid files exist in the application root's shared/tmp/pids folder:
ls tmp/pids
puma.pid puma.state
and the socket that nginx needs, but cannot connect to due to permission denied, exists:
ls -l ~/apps/master_cms/shared/tmp/sockets/
total 0
srwxrwxrwx 1 comtechmaster comtechmaster 0 Nov 18 22:31 puma_master_cms_production.sock
nginx is up and running and providing a
502 bad gateway
response. The nginx error log reports the following error
2021/11/18 23:18:43 [crit] 1500#1500: *25 connect() to unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock failed (13: Permission denied) while connecting to upstream, client: 86.160.191.54, server: 159.65.50.229, request: "GET / HTTP/2.0", upstream: "http://unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock:/500.html"
sudo nginx -t reports the following
sudo nginx -t
nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Just to be pedantic, both an ls and a sudo ls on the path reported in the error show
ls /home/comtechmaster/apps/master_cms/shared/tmp/sockets/
puma_master_cms_production.sock
as expected, so I am stumped as to why nginx, running as root via sudo service nginx start, is being denied access to a socket that exists and is owned by the local user rather than root.
I expect the solution is going to be something totally obvious, but I cannot see what it is.
This problem ended up being related to the permissions on the user's home folder, and specifically to Ubuntu 20.10 setting permissions differently from previous versions of Ubuntu, or at least to a difference in the way the DigitalOcean setup scripts behave.
It was resolved with a simple chmod o=rx from /home against the user folder concerned, e.g.
cd /home
chmod o=rx the_home_folder_for_user
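As a quick check afterwards (reusing the placeholder folder name from the example above), listing the home directory should now show read/execute for "other":

ls -ld /home/the_home_folder_for_user
# expect something like drwxr-xr-x ... comtechmaster comtechmaster ...
# the trailing r-x is what lets the nginx worker, which runs as a different user,
# descend into the path containing the Puma socket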

Passenger not running (Ruby on Rails + Nginx)

My AWS instance was working fine with my app. But today the server went down after running out of RAM. So I ran:
sync; echo 1 > /proc/sys/vm/drop_caches
sudo service nginx start
After that, RAM consumption was OK, but the app was not.
I'm running a Rails 4.2.1 website with Ruby 2.2.2 and nginx/1.8.0 on an Ubuntu 14 AWS instance.
When I access the site, I have the error:
502 Bad Gateway
nginx/1.8.0
When I run passenger-config restart-app I get:
*** ERROR: Phusion Passenger doesn't seem to be running. If you are sure that it
is running, then the causes of this problem could be one of:
1. You customized the instance registry directory using Apache's
PassengerInstanceRegistryDir option, Nginx's
passenger_instance_registry_dir option, or Phusion Passenger Standalone's
--instance-registry-dir command line argument. If so, please set the
environment variable PASSENGER_INSTANCE_REGISTRY_DIR to that directory
and run this command again.
2. The instance directory has been removed by an operating system background
service. Please set a different instance registry directory using Apache's
PassengerInstanceRegistryDir option, Nginx's passenger_instance_registry_dir
option, or Phusion Passenger Standalone's --instance-registry-dir command
line argument.
In the file /var/log/nginx/error.log I have:
2021/06/19 13:21:12 [crit] 26618#0: *48688773 connect() to unix:/tmp/passenger.26EHXct/agents.s/server failed (2: No such file or directory) while connecting to upstream, client: XXX.XXX.34.163, server: www.XXX.com, request: "GET / HTTP/1.1", upstream: "passenger:unix:/tmp/passenger.26EHXct/agents.s/server:", host: "XXX.com"
I already tried this solution and it is not working.
When I run passenger-config validate-install I get:
Use <space> to select.
If the menu doesn't display correctly, press '!'
‣ ⬢ Passenger itself
⬡ Apache
-------------------------------------------------------------------------
* Checking whether this Passenger install is in PATH... ✓
* Checking whether there are no other Passenger installations... ✓
Everything looks good. :-)
When I run sudo passenger-memory-stats I get:
Version: 5.0.10
Date : 2021-06-19 13:31:40 -0300
------------- Apache processes -------------
*** WARNING: The Apache executable cannot be found.
Please set the APXS2 environment variable to your 'apxs2' executable's filename, or set the HTTPD environment variable to your 'httpd' or 'apache2' executable's filename.
---------- Nginx processes ----------
PID PPID VMSize Private Name
-------------------------------------
26615 1 230.7 MB 26.3 MB nginx: worker process
26616 1 230.4 MB 27.4 MB nginx: worker process
26617 1 229.7 MB 25.8 MB nginx: worker process
26618 1 233.3 MB 27.4 MB nginx: worker process
### Processes: 4
### Total private dirty RSS: 106.78 MB
--- Passenger processes ---
### Processes: 0
### Total private dirty RSS: 0.00 MB
Does anyone know how I can solve this?
When I ran sudo service nginx restart, I didn't notice the [fail] flag on the right of the terminal.
Then I ran sudo service nginx status and got the message nginx is not running.
After running sudo nginx -t I got the message
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
But I saw several nginx processes, so I killed them all with sudo kill $(ps aux | grep '[n]ginx' | awk '{print $2}') and then ran sudo service nginx start.
Everything works fine again.

How does one connect two services in the local docker-compose network?

I have followed the instructions, I think, and have come up with the following configuration:
version: '3.9'
services:
  flask:
    image: ops:imgA
    ports:
      - 5000:5000
    volumes:
      - /opt/models:/opt/models
    entrypoint: demo flask
  streamlit:
    image: ops:imgB
    ports:
      - 8501:8501
    entrypoint: streamlit run --server.port 8501 demo -- stream --flask-hostname flask
The --flask-hostname flask argument sets the host name used in the HTTP connection, i.e. http://flask:5000. I can set it to anything.
The basic problem here is that I can spin up one of these images, install tmux, and run everything within a single image.
But, when I split it across multiple images and use docker-compose up (which seems better than tmux), the containers can't seem to connect to each other.
I have rattled around the documentation on docker's website, but I've moved on to the troubleshooting stage. This seems to be something that should "just work" (since there are few questions along these lines). I have total control of the box I am using, and can open or close whatever ports needed.
Mainly, I am trying to figure out how to allow these two services (flask and streamlit) to speak to each other with 100% default settings, nothing complicated.
There must be 1 or 2 settings that I need to change, and that is it.
Any ideas?
Update
I can access all of the services externally, so I am going to open up external connections between the services (using the external IP) as a "just work" quick fix, but obviously getting the composition to work internally would be the best option.
I have also confirmed that the docker-compose and docker versions are up to date.
Update-2: changed the Flask server from 127.0.0.1 to 0.0.0.0
Flask output:
flask_1 | * Serving Flask app "flask" (lazy loading)
flask_1 | * Environment: production
flask_1 | WARNING: This is a development server. Do not use it in a production deployment.
flask_1 | Use a production WSGI server instead.
flask_1 | * Debug mode: on
flask_1 | INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | 2020-12-19 02:22:16.449 INFO werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | INFO:werkzeug: * Restarting with inotify reloader
flask_1 | 2020-12-19 02:22:16.465 INFO werkzeug: * Restarting with inotify reloader
flask_1 | WARNING:werkzeug: * Debugger is active!
flask_1 | 2020-12-19 02:22:22.003 WARNING werkzeug: * Debugger is active!
Streamlit:
streamlit_1 |
streamlit_1 | You can now view your Streamlit app in your browser.
streamlit_1 |
streamlit_1 | Network URL: http://172.18.0.3:8501
streamlit_1 | External URL: http://71.199.156.142:8501
streamlit_1 |
streamlit_1 | 2020-12-19 02:22:11.389 Generating new fontManager, this may take some time...
And the streamlit error message:
ConnectionError:
HTTPConnectionPool(host='flask', port=5000):
Max retries exceeded with url: /foo/bar
(Caused by NewConnectionError(
'<urllib3.connection.HTTPConnection object at 0x7fb860501d90>:
Failed to establish a new connection:
[Errno 111] Connection refused'
)
)
Update-3: Hitting refresh fixed it.
The server process must be listening on the special "all interfaces" address 0.0.0.0. Many development-type servers by default listen on "localhost only" 127.0.0.1, but in Docker each container has its own private notion of localhost. If you use tmux or docker exec to run multiple processes inside a container, they have the same localhost and can connect to each other, but if the client and server are running in different containers, the request doesn't arrive on the server's localhost interface, and if the server is listening on "localhost only" it won't receive it.
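As an illustration (a generic sketch, since the question's demo flask entrypoint is not shown), a Flask server that other containers can reach binds like this:

from flask import Flask

app = Flask(__name__)

@app.route("/foo/bar")
def foo_bar():
    # placeholder endpoint; the real handler is whatever "demo flask" serves
    return "ok"

if __name__ == "__main__":
    # host="0.0.0.0" listens on all interfaces inside the container,
    # rather than the default container-local 127.0.0.1
    app.run(host="0.0.0.0", port=5000)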
With only the docker-compose.yml you include in the question, your setup is otherwise correct. Some other common problems:
You must connect to the port the server process is listening on inside the container. If you remap it externally with ports:, that's ignored, and you'd connect to the second ports: number. Correspondingly, ports: aren't required. (expose: also isn't required and doesn't do anything at all.)
The client may need to wait for the server to start up. If the client depends_on: [flask] the host name will usually resolve (unless the server dies immediately) but if it takes a while to start up you will still get "connection refused" errors. See Docker Compose wait for container X before starting Y.
Neither container may use network_mode: host. This disables Docker's networking features entirely.
If you manually declare networks:, both containers need to be on the same network. You do not need to explicitly create a network for inter-container communication to work: Compose provides a default network for you, which is used if nothing else is declared.
Use the Compose service names as host names. You don't need to explicitly specify container_name: or links:.
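For example (assuming the streamlit side uses the requests library, which the question does not state), the client reaches the other service purely by its Compose service name:

import requests

# "flask" is the Compose service name and 5000 is the port the server listens on
# inside the container; the /foo/bar path comes from the error message above
resp = requests.get("http://flask:5000/foo/bar", timeout=5)
print(resp.status_code, resp.text)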

nginx + uwsgi running on Amazon ECS returns socket: Too many open files (24) (changing ulimit did not help)

I have uWSGI running behind an nginx proxy server. I tried to benchmark my backend instance using ApacheBench. At one point I get a Too many open files (24) error when I run the command ab -c 1100 -n 2000 https://example.com/test.
I changed the ulimits of my ECS instance as well as the Docker containers and confirmed it by running ulimit -n, which returns 100000 in both locations.
I cross-checked the limits of the individual nginx and uWSGI processes by looking at /proc/PID, where Max open files is set to 100000.
The worker_connections and worker_rlimit_nofile parameters in /etc/nginx/nginx.conf are also set to the highest limit possible.
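For reference, those two directives live at different levels of nginx.conf; the values below are placeholders rather than the asker's actual numbers:

# main (top-level) context
worker_rlimit_nofile 100000;

events {
    # per-worker connection cap; in practice it is bounded by the file-descriptor limit
    worker_connections 100000;
}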

Wildfly/Jboss Docker Cluster using Docker-compose

I am new to WildFly and Docker.
I am trying to build a test cluster of WildFly.
I am using docker-compose for orchestration.
The following is my docker-compose.yml file:
node:
  image: wildfly-mgmt
  links:
    - lb:lb
lb:
  image: wildfly-cluster-httpd
  ports:
    - "9090:80"
After running docker-compose up, I cannot see the nodes in the mod_cluster management page:
http://localhost:9090/mod_cluster_manager
It is blank; somehow mod_cluster_manager is not able to see the nodes...
Dockerfile for the mod_cluster image:
FROM fedora:latest
RUN yum -y update
RUN yum -y install httpd mod_cluster
RUN yum clean all
RUN sed -i 's|LoadModule proxy_balancer_module|# LoadModule proxy_balancer_module|' /etc/httpd/conf.modules.d/00-proxy.conf
ADD mod_cluster.conf /etc/httpd/conf.d/mod_cluster.conf
EXPOSE 80
CMD ["/sbin/httpd", "-DFOREGROUND"]
mod_cluster.conf:
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
LoadModule manager_module modules/mod_manager.so
<IfModule manager_module>
    Maxhost 100
    ServerName localhost
    <VirtualHost *:80>
        <Directory />
            Require all granted
        </Directory>
        <Location /mod_cluster_manager>
            SetHandler mod_cluster-manager
            Require all granted
        </Location>
        KeepAliveTimeout 60
        ManagerBalancerName mycluster
        EnableMCPMReceive On
        ServerAdvertise On
    </VirtualHost>
</IfModule>
I can see the servers running.
The docker ps command shows the two containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b613166f4236 wildfly-mgmt "/opt/jboss/wildfly/b" 18 hours ago Up 18 hours 8080/tcp dockercomposecluster_node_1
963a728bae70 wildfly-cluster-httpd "/sbin/httpd -DFOREGR" 18 hours ago Up 18 hours 0.0.0.0:9090->80/tcp dockercomposecluster_lb_1
I can also see the servers running in the console log:
node_1 | 19:43:23,828 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://0.0.0.0:9990/management
node_1 | 19:43:23,828 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://0.0.0.0:9990
node_1 | 19:43:23,829 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) started in 75208ms - Started 331 of 577 services (393 services are lazy, passive or on-demand)
But mod_cluster_manager is not able to see the nodes. Can anyone please point out what is wrong here? I am really new to this.
For debugging you can do docker exec -it <containername> bash (-it gives you an interactive terminal). This should put you inside the container. From there you can do telnet <containername> <port> (you probably have to install telnet first) - or docker inspect <containername> on the container you want to see and use its IP.
If you can't telnet, have you tried starting them on the same docker network?
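Using the container names from the docker ps output above (and assuming telnet can be installed in the image), that debugging session might look like:

# open a shell inside the WildFly node container
docker exec -it dockercomposecluster_node_1 bash

# from inside it, check that the httpd/mod_cluster container is reachable;
# "lb" is the link alias declared in the compose file
telnet lb 80

# or, from the host, look up a container's IP to test against directly
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dockercomposecluster_lb_1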
