Nginx and Unicorn user permissions for socket - ruby-on-rails

Nginx has a clearly defined way to set who the process runs as in the conf file:
user nobody nogroup;
(As a side question, I wonder why the group is necessary. If you run the process as a user, isn't the group by definition the group that user belongs to? How can you define a user and a group simultaneously?)
But Unicorn doesn't seem to have this ability. As a result, on my provider's VPS I'm logged in as root: I start nginx (which runs as user nginx, group web) and then start Unicorn (which runs as root because I am logged in as root). Unicorn creates a socket owned by root, and then nginx can't read from it. How can I make Unicorn run as the user unicorn, in the same security group as nginx, so that the socket is readable by nginx?
This is on Ubuntu 12.04 64-bit, Unicorn v4.8.2, nginx 1.4.6.
The error is below:
unix:/etc/sockets/.unicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 24.7.100.227, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:/etc/sockets/.unicorn.sock:/", host: "xx.xx.xx.xxx"

Ah, I figured it out. It's actually unnecessary: BOTH processes run their master processes as root. Even if you configure nginx with a user parameter, the master process runs as root, and that is the process that interacts with the socket. So my issue was actually something else (a typo in the Unicorn conf file).
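For anyone who does still want to drop privileges, Unicorn's configurator can do it; below is a minimal sketch of a config/unicorn.rb, assuming the unicorn user, web group, and socket path from the question (worker count and pid path are arbitrary):
# minimal sketch, assuming the names from the question
worker_processes 2
pid "/tmp/unicorn.pid"
# umask 0007 leaves the socket readable/writable by the "web" group
listen "/etc/sockets/.unicorn.sock", :backlog => 64, :umask => 0007
# workers are switched to this user/group after the (root-started) master forks them
user "unicorn", "web"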

Related

Nginx Reverse Proxy to Docker 502 Bad Gateway

Spent all week on this one and tried every related Stack Overflow post. Thanks for being here.
I have an Ubuntu VM running nginx with reverse proxies pointing to various Docker daemons running concurrently on different ports. All my static sites work flawlessly. However, I have one container running an Express.js app.
After restarting the server I get responses for about an hour. Then I get 502 Bad Gateway. A refresh brings the site back up for roughly 5 seconds until it goes down permanently. This is reproducible.
The docker container has express listening on 0.0.0.0:8090 inside the container
The container is running
02e1917991e6 docker/express-site "docker-entrypoint.s…" About an hour ago Up About an hour 127.0.0.1:8090->8090/tcp express-site
The 8090 port is EXPOSEd in the Dockerfile.
I tried other ports.
When down, I can curl the site from within the container when inspecting.
When down, curling the site from within the VM yields
curl: (52) Empty reply from server
Memory and CPU usage within the container and within the VM barely reach 5%.
Site usually has SSL but tried http as well.
Tried various nginx proxy settings (see config below)
Using out-of-the box nginx.conf
Considering that it might be related to a timeout or Docker network settings (see the timeout sketch after the config below)...
My site-available config file looks like:
server {
server_name example.com www.example.com;
location / {
proxy_pass http://127.0.0.1:8090;
#proxy_set_header Host $host;
#proxy_buffering off;
#proxy_buffer_size 16k;
#proxy_busy_buffers_size 24k;
#proxy_buffers 64 4k;
}
listen 80;
listen [::]:80;
#listen 443 ssl; # managed by Certbot
#ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
#ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
#include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
#ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
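For the timeout theory mentioned above, a hedged variation of the location block would be to raise the proxy timeouts explicitly; the values below are arbitrary and not verified on this setup:
location / {
    proxy_pass http://127.0.0.1:8090;
    # raise the upstream timeouts well above the 60s defaults to rule a timeout in or out
    proxy_connect_timeout 75s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
}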
Nginx Error Log shows:
2021/01/02 23:50:00 [error] 13901#13901: *46 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: ***.**.**.***, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8090/favicon.ico", host: "www.example.com", referrer: "http://www.example.com"
Anyone else have ideas?
Didn't get much feedback, but I did more research, and the issue is now stable, so I wanted to post my findings.
I have isolated the issue to the Docker container: nginx works fine with the same app running directly on the VM.
I updated my docker container image from node:12-alpine to node:14-alpine. The site has been up for 42 hours without issue.
If it randomly fails again, then it's probably due to load.
I hope this solves someone's issue.
Update 2021-10-24
The same issue started again, and I've narrowed it down to the port and/or Docker on my version of Ubuntu. May I recommend...
changing the port (a hedged example follows this list)
rebooting your PC
installing the latest OS and docker updates
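As a concrete example of the first suggestion, changing the port means re-publishing the container on a new host port and pointing proxy_pass at it; a hedged sketch, where 8091 is an arbitrary choice and the container/image names are taken from the docker ps output above:
# re-publish the container on a different host port (8091 is arbitrary)
docker rm -f express-site
docker run -d --name express-site --restart unless-stopped \
  -p 127.0.0.1:8091:8090 docker/express-site
# update the nginx site config to match, then test and reload
#   proxy_pass http://127.0.0.1:8091;
sudo nginx -t && sudo systemctl reload nginx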

Nginx + Docker - "no live upstreams while connecting to upstream" with upstream but works fine with proxy_pass

So I've been facing a weird problem, and I'm not sure where the fault is. I'm running a container using docker-compose, and the following nginx configuration works great:
server {
location / {
proxy_pass http://container_name1:1337;
}
}
Here container_name1 is the name of the service I gave in the docker-compose.yml file. It resolves to the IP perfectly and it works. However, the moment I change the above file to this:
upstream backend {
least_conn;
server container_name1:1337;
server container_name2:1337;
}
server {
location / {
proxy_pass http://backend;
}
}
It stops working completely and in error logs I get the following:
2020/03/17 13:16:03 [error] 8#8: *11 no live upstreams while connecting to upstream, client: xxxxxx, server: codedamn.com, request: "GET / HTTP/1.1", upstream: "http://backend/", host: "xxxxx"
Why is that? Is nginx not able to resolve DNS when inside upstream blocks? Could anyone help with this problem?
NOTE: This happens only in production (Ubuntu 16.04); locally (macOS Catalina) the same configuration works fine. I'm totally confused after discovering this.
Update 1: The following works:
upstream backend {
least_conn;
server container_name1:1337;
}
But not with more than one server. Why?!
Alright, figured it out. This happens because docker-compose creates the containers in an arbitrary order and nginx quickly marks the containers as down (I was deploying this to production while there was some traffic). The app containers weren't ready, but nginx was, so it marked them as down and stopped forwarding any traffic.
For now, instead of syncing up the docker-compose container creation order (which was a bit hacky, as I discovered), I disabled nginx's automatic marking of a service as down after failed attempts by writing:
server app_inst1:1337 max_fails=0;
which lets nginx keep forwarding traffic to that service (and my Docker is configured to restart the container in case it crashes), which is fine.
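Applied to the upstream block from the question, that looks something like this (service names as in the question; max_fails=0 is the only change):
upstream backend {
    least_conn;
    # max_fails=0 disables nginx's passive marking of a server as down,
    # so traffic keeps being sent even if the first connections fail
    server container_name1:1337 max_fails=0;
    server container_name2:1337 max_fails=0;
}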

Rails: use Capistrano to deploy, how to check log when error happens?

I am deploying a Rails app using Capistrano to a remote Ubuntu 14.04 server.
Finally, when I restart nginx, the web page shows an error:
We're sorry, but something went wrong.
I would like to know what caused the error. What command can I use to see the log on the remote server?
try
bundle exec tail -f log/production.log
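Since this is a Capistrano deploy, that log lives on the remote server; a hedged example assuming a standard Capistrano layout and a deploy path of /var/www/myapp (adjust the user, host, and path to yours):
ssh deploy@your-server
tail -f /var/www/myapp/current/log/production.log   # current/log is usually symlinked to shared/log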
If no error is seen there, then check your nginx logs at
tail -f /var/log/nginx/access.log
or
tail -f /var/log/nginx/error.log
If you see the request logged there, it means the request is reaching the server but is not being passed on to the Puma server.
There can be a few reasons why the request is not being passed to Puma: the address of the Puma process is not correct in your nginx file, the Puma server is not running, or there was some error and Puma was shut down by the time the request reached it.
To see the Puma process, use this command:
ps aux | grep puma
It should print a line like this somewhere in the output:
app 22528 0.1 0.5 296532 23912 ? Ssl 16:42 0:00 puma 2.11.1 (tcp://0.0.0.0:8080) [20180110213633]
Now, using this information, I can map the address in nginx like this:
upstream app {
# Path to Puma SOCK file, as defined previously
server 0.0.0.0:8080;
}
Here I point nginx at the local IP and port that Puma is bound to.
If you use a socket instead, make sure your puma.rb binds properly to the puma.sock file; for one of my projects I do it like this in config/puma.rb:
bind "unix:///Users/Apple/RAILS_PROJECTS/tracker/tmp/sockets/puma.sock"

Debugging Unicorn server remotely with RubyMine

I have a Rails (version 4.0.3) application which uses nginx as a front-end server to dispatch requests to Unicorn. Whilst developing the app I would like to use Docker on Windows (boot2docker) to run the application and ruby-debug-ide to debug it remotely.
The original setup works fine (the application responds on the host machine) until I replace rails server with:
rdebug-ide --port 1234 --host 0.0.0.0 --dispatcher-port 26162 -- bin/rails server
After running this in the Docker container, I can connect to the remote debugger successfully: the container's shell reports that breakpoints were added (matching what I've set in RubyMine) and that the Unicorn server is now running. I added the unicorn-rails gem so that rails server works with Unicorn too.
Now the actual problem is that nginx can't seem to find Unicorn when it is run within the debugger. Requests just keep loading in the browser (and with curl) until a 504 (Gateway Timeout) is returned.
The relevant part of the Unicorn configuration is:
app_dir = "/app"
working_directory app_dir
pid "#{app_dir}/tmp/unicorn.pid"
worker_processes 1
listen "/tmp/unicorn.sock", :backlog => 64
I have set up everything as described on the JetBrains help pages. On Docker I have all the necessary ports open (1234 for the debugger, 26162 for the dispatcher, 443 for HTTPS).
I've crawled the Internet and Stack Overflow for hours without any luck and can't find anything else to try. Any ideas?
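One thing worth checking in a setup like this is whether Unicorn actually creates and answers on the socket when started under rdebug-ide; a hedged sketch of commands to run inside the container (socket path from the config above, curl 7.40+ needed for --unix-socket):
ls -l /tmp/unicorn.sock                                   # does the socket exist where nginx expects it?
curl --unix-socket /tmp/unicorn.sock http://localhost/    # does Unicorn answer on it?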

Problem starting Passenger with Nginx

I have just set up Passenger with Nginx and it seems to install fine, but when I try to start it with:
passenger start -e production
I get:
=============== Phusion Passenger Standalone web server started ===============
PID file: /root/rails_apps/myapp/tmp/pids/passenger.3000.pid
Log file: /root/rails_apps/myapp/log/passenger.3000.log
Environment: production
Accessible via: http://0.0.0.0:3000/
You can stop Phusion Passenger Standalone by pressing Ctrl-C.
===============================================================================
2011/04/18 07:17:27 [error] 9125#0: *4 "/root/rails_apps/myapp/public/index.html" is forbidden (13: Permission denied), client: 127.0.0.1, server: _, request: "HEAD / HTTP/1.1", host: "0.0.0.0"
and I get "Unable to connect" when I try to access my site in the browser.
Here is the configuration in nginx.conf:
server {
listen 80;
server_name myapp.com;
root /root/rails_apps/myapp/public; # <--- be sure to point to 'public'!
passenger_enabled on;
}
any ideas?
This error is caused by the nginx user not being able to access the mentioned file. It can occur not only if /root/rails_apps/myapp/public doesn't have the correct permissions, but also if any of its parent directories don't!
In your nginx.conf you can see something like:
user nginx;
http {
# blah.
}
The user parameter may be different on your system. Be sure that every folder in the path is accessible to this user.
You can check it with sudo -Hu nginx /bin/bash -l and then cat /root/rails_apps/myapp/public/index.html. Fix and test again with this command until you can see the content of the file.
A little explanation: with that sudo command you start a shell as the nginx user, and with the cat command you simulate nginx reading the file.
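If a parent directory turns out to be the blocker, a hedged fix (assuming you keep the app under /root, which is generally discouraged) is to grant traverse permission on each directory in the path and read permission on the public files:
chmod o+x /root /root/rails_apps /root/rails_apps/myapp
chmod -R o+rX /root/rails_apps/myapp/public   # capital X adds execute only to directories (and already-executable files)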
Try this:
sudo passenger start -e production
Since the path you specified is under /root (/root/rails_apps/myapp/public), nginx needs enough permissions to read it:
set user root; in nginx.conf
You should also start nginx as the superuser (sudo).
But it might be better to just move your Rails app somewhere in your home directory and grant the needed permissions to the default nginx user, www-data:
user www-data;
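A hedged sketch of that last option, assuming a deploy user named deploy and a target path of /home/deploy/myapp:
# move the app out of /root and give nginx's www-data user read access
mv /root/rails_apps/myapp /home/deploy/myapp
chown -R deploy:www-data /home/deploy/myapp
chmod -R g+rX /home/deploy/myapp
# then update nginx.conf accordingly:
#   root /home/deploy/myapp/public;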
