I am deploying a Rails 2.3 / Spree app with Capistrano/Unicorn/Foreman/Upstart.
The part that I cannot figure out is how to have /myapp/shared/sockets/unicorn.sock be created automatically by the Foreman/Upstart process management (at least, I think the unix socket should come from them).
What's responsible for creating the unix socket?
Let's say your configuration is nginx + Unicorn. As you probably know, in the config dir you create a file named unicorn.rb, which tells Unicorn where to listen for the non-static requests. On the nginx side you then define an upstream for it, something like this:
upstream unicapd {
server unix:/tmp/capd.sock fail_timeout=0;
}
I've named the upstream differently than most tutorials do; this gives me the ability to host a number of different applications on the same host.
Then, in the vhosts dir of your nginx configuration, you put something like this (let's say your vhost file is 'vhosts/myconf.conf'):
location @unicorn1 {
proxy_pass http://unicapd;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
Here you see the instruction to nginx to serve non-static requests from the upstream named "unicapd", which corresponds to the socket configured in your unicorn.rb. This configuration is started by a script in your init.d directory (in case you're running Debian).
Summary: when you restart Unicorn, the script in your init.d runs the code that creates the "special" file /tmp/capd.sock, which serves the dynamic content from your Rails app.
The path to the unix socket is configured in Unicorn's config:
...
listen "/home/user/app/shared/sockets/unicorn.sock", :backlog => 64
...
Then in nginx.conf:
location / {
try_files $uri @unicorn;
proxy_cache cache;
proxy_cache_valid 10m;
}
location @unicorn {
proxy_set_header Client-Ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://unix:/home/user/app/shared/sockets/unicorn.sock;
}
When the application starts, Unicorn creates the socket file at the configured path (the user must have write access to that path).
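Under Foreman/Upstart (as in the question) nothing else has to create the socket: the exported Upstart job simply runs the Procfile command, Unicorn reads its config, and the socket file appears at the listen path. A minimal Procfile entry might look like this (a sketch; the config path and environment are assumptions):

# Procfile
web: bundle exec unicorn -c config/unicorn.rb -E production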
I have different versions of my web application running in Docker containers, and nginx is running on my host machine.
Is it possible to access the desired deployed version of my web application with the help of subdomains such as v1.myapp.io and v2.myapp.io, without reconfiguring and restarting nginx?
I also want to be able to access future versions in the same way.
Could anyone tell me if there is any way to achieve this?
Please consider me a newbie to Docker/nginx world.
Thanks in Advance.
Yes, it can be done, but it is difficult to achieve with Docker alone. Kubernetes makes this very easy, since everything like DNS and service mapping is provided out of the box. I will include both the Docker and the Kubernetes approach:
Docker approach:
A first draft will look like this: use a regex in the nginx server_name and give the Docker containers names that follow a matching pattern. Create /etc/hosts entries for the different containers, like:
172.16.0.1 v1.docker.container
172.16.0.2 v2.docker.container
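The container IPs for those entries can be looked up with docker inspect; for example (the container name v1-container is just a placeholder):

# prints the container's IP on the default bridge network
docker inspect -f '{{ .NetworkSettings.IPAddress }}' v1-container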
And the nginx server conf looks like this:
server {
listen 80;
server_name "~^(?<ns>[a-z]+.+)\.myapp\.io";
resolver 127.0.0.1:53 valid=30s;
# make sure $ns.docker.container is resolved to container IP
set $proxyserver "$ns.docker.container";
location / {
try_files $uri @clusterproxy;
}
location @clusterproxy {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-IP $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://$proxyserver:80;
}
}
Kubernetes approach:
Create a different Service and Deployment for each version in a namespace. Let's say the namespace is 'app-namespace'. The Service names are self-explanatory:
APP version v1: v1-app-service
APP version v2: v2-app-service
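For reference, those Services could be created in that namespace along these lines (a sketch; the deployment names are assumptions):

kubectl create namespace app-namespace
# expose each versioned deployment under its own Service name
kubectl -n app-namespace expose deployment v1-app --name=v1-app-service --port=80
kubectl -n app-namespace expose deployment v2-app --name=v2-app-service --port=80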
To make nginx more flexible, you can append the namespace and service domain to the requested version when building $proxyserver.
Nginx rule:
server {
listen 80;
server_name "~^(?<version>[a-z]+.+)\.myapp\.io";
# you can replace this with kubernetes dns server IP
resolver 127.0.0.1:53 valid=30s;
# make sure $version.app-namespace.svc.kubernetes resolves to the service IP
set $proxyserver "$version.app-namespace.svc.kubernetes";
location / {
try_files $uri @clusterproxy;
}
location @clusterproxy {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-IP $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://$proxyserver:80;
}
}
I found another solution to this problem after digging a lot. It can easily be done with Automated Nginx Reverse Proxy for Docker.
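For reference, that reverse proxy is itself just a container; the image usually referred to by that name (jwilder/nginx-proxy) is typically started along these lines:

# run the automated proxy and let it watch the Docker socket for new containers
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy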
Once the Docker container for nginx was up and running on my system, I spun up two Docker containers of my web app (different versions) with the following commands:
docker run -e VIRTUAL_HOST=v1.myapp.io --name versionOne -d myapp.io:v1
docker run -e VIRTUAL_HOST=v2.myapp.io --name versionTwo -d myapp.io:v2
and it worked out for me.
Additional notes:
1. I am using dnsmasq to handle all DNS queries.
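On the dnsmasq side, pointing every *.myapp.io name at the local proxy can be a single line in dnsmasq.conf (assuming the proxy runs on the same machine):

# /etc/dnsmasq.conf -- resolve myapp.io and all of its subdomains locally
address=/myapp.io/127.0.0.1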
I have two apps: /foo and /bar. In each of those folders I'm starting up passenger. For foo:
passenger start -d -e production -p 4000
And for bar:
passenger start -d -e production -p 4001
I then have nginx configured like so:
server {
listen 80 default_server;
server_name www.server.com;
root /var/www/html;
location /foo/ {
proxy_pass http://0.0.0.0:4000/;
proxy_set_header Host $host;
}
location /bar/ {
proxy_pass http://0.0.0.0:4001/;
proxy_set_header Host $host;
}
}
The apps are getting served up, but none of the links work. A link to the users#index action comes back as '/users' not '/foo/users'.
I've set config.relative_url_root in both apps, that helps with the assets but not the links.
I've tried with both the _url and _path methods; neither works.
This answer is close, but passenger_base_uri isn't a valid directive for stock nginx.
So then I followed the advanced configuration instructions for Passenger's nginx configuration and added passenger_base_uri = '/foo'; to my custom conf file and loaded it like so:
passenger start -d -e production -p 4000 --nginx-config-template nginx.conf.erb
Still no love, and I'm out of ideas. Has anyone done this before? It seems like a common way to deploy multiple apps in production.
More Thoughts (2015-06-05)
Adding passenger_base_uri = '/foo' to my nginx.conf.erb file hosts the application in TWO locations (which is odd to me, but whatever):
localhost:4000/
localhost:4000/foo/
The first doesn't have the correct resource links (i.e. it's just '/users') but has access to its assets.
The second has the correct resource links (e.g. '/foo/users') but doesn't have its assets (this is because it's looking for /foo/assets/* inside of its public folder, not just /assets/*). I believe that this is the way to go though, as I can change my proxy like this to get at the application:
location /foo/ {
proxy_pass http://0.0.0.0:4000/foo/;
proxy_set_header Host $host;
}
Does anyone else have any thoughts though? If I do this, it'll mean I'll have to rake my assets into public/foo for it to work. Not the end of the world, but it still seems weird.
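For what it's worth, once the asset prefix points under /foo (as in the final steps below), getting the assets into public/foo is just an ordinary precompile:

# writes compiled assets under public/foo/assets when config.assets.prefix = '/foo/assets'
RAILS_ENV=production bundle exec rake assets:precompile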
For anyone else looking to do the same thing, here's what it was in the end:
Follow the Advanced Configuration to get a project specific nginx.conf.erb file.
Add a passenger_base_uri directive to that file for your app (e.g. passenger_base_uri = '/foo';)
In your config/environments/production.rb file move the location of the assets: config.assets.prefix = '/foo/assets'.
Start passenger passenger start -d -e production -p SOME_PORT --nginx-config-template nginx.conf.erb
In your nginx proxy configuration add a location directive for your app:
location /foo/ {
proxy_pass http://0.0.0.0:SOME_PORT/foo/;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host; # more robust than http_host
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # this ensures your app's env is correct
proxy_set_header X-Forwarded-Host $host;
# proxy_set_header X-Forwarded-Proto https; # add this if you always want the redirects to go to HTTPS
}
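One note on the block above: $connection_upgrade is not a built-in nginx variable; it is conventionally defined with a map in the http block, roughly like this:

# define $connection_upgrade for the Upgrade/Connection handling used above
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}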
After that, (re)start your nginx proxy and you should be good at http://your_proxy/foo/.
I want to load the file a.js locally when loading a site (e.g. example.com). Normally I could just change my /etc/hosts to point example.com to 127.0.1.1, but I don't want to load all of the files locally, just the file a.js. Better explained by what I want:
example.com/a.js ---> localhost/a.js
example.com/b.js ---> example.com/b.js
One way of doing this (not the fastest) is introducing a proxy_pass in your server config (shown below with nginx, but also possible with Apache or others):
Change your /etc/hosts to redirect the target domain name (example.com in the question) to 127.0.0.1 (a sample entry is shown after the config below).
Introduce two proxy passes in your nginx config:
i. A proxy pass for the specific file (a.js) to the local file.
ii. A proxy pass for all other paths back to the remote IP of the target domain (example.com). This proxy_pass needs to use an IP address (which can be obtained with nslookup example.com), because the domain example.com now resolves to 127.0.0.1 due to the hosts entry from step 1.
server {
listen 80;
server_name example.com;
location /a.js {
# your local server
proxy_pass http://localhost:80/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location / {
# everything else back to the IP of example.com
proxy_pass http://<REMOTE_IP>/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
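The /etc/hosts entry for step 1 is just the target domain pointed at the local machine:

# /etc/hosts -- send example.com to the local nginx
127.0.0.1   example.com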
I'm trying to set nginx location that will handle various paths and proxy them to my webapp.
Here is my conf:
server {
listen 80;
server_name www.example.org;
#this works fine
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:8081/myApp/;
}
#not working
location ~ ^/(.+)$ {
proxy_pass http://localhost:8081/myApp/$1;
}
}
I would like to access myApp with various paths, like /myApp/ABC, /myApp/DEF, /myApp/GEH or /myApp/ZZZ.
Of course these paths are not available in myApp. I want them to point to the root of myApp and keep the URL.
Is that possible to achieve with nginx?
Nginx chooses locations by its own precedence rules: regex locations are checked in the order they are defined, and a matching regex wins over a plain prefix match like location /, so the second block is not simply being skipped. But looking at it more closely, I think both locations are essentially doing the same thing:
/whatever/path/ --proxies-to--> http://localhost:8081/myApp/whatever/path/
A very late reply, but this might help someone:
try proxy_pass /myApp/ /location1 /location2;
Each location is separated with a space.
You will probably have to do a rewrite followed by a proxy_pass; I had the same issue. Check here: How to make a conditional proxy_pass within NGINX
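For reference, a rewrite-then-proxy_pass version of that idea might look roughly like this (a sketch assuming the same localhost:8081 backend and /myApp prefix from the question):

# prepend /myApp to the incoming path, then hand the rewritten URI to the backend
location / {
    rewrite ^/(.*)$ /myApp/$1 break;
    proxy_pass http://localhost:8081;
}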
I am rather new to using nginx. I want to use it to serve static content in order to reduce the load on the Rails server. It seems to be a rather simple task, but I just can't find a solution that works for me.
I want nginx to serve static files which exist in the public directory inside my Rails application directory. To be more precise: I have an index.html inside that directory which I want to be served when entering http://[domainname]. Instead I just get the default nginx index.html. I already checked that Thin is running, and when I query 127.0.0.1:3000 I get the page I want.
So here's the file called [domainname] in the sites-available directory.
upstream rails {
server 127.0.0.1:3000; #This is where thin is waiting for connections
}
# HTTP server
server {
listen 80;
server_name [domainname];
set $app /home/projektor/website/app.[domainname];
root $app/public;
# Set a limit to POST data
client_max_body_size 8M;
# Errors generated by Rails
error_page 400 /400.html;
error_page 422 /422.html;
error_page 500 504 /500.html;
# Errors generated *outside* Rails
error_page 502 @502;
error_page 503 @503;
# If the public/system/maintenance.html file exists,
# return a 503 error, that ...
if (-f $document_root/system/maintenance.html) {
return 503;
}
# ... will serve the very same file. This construct
# is needed in order to stop the request before
# handing it to Rails.
location @503 {
rewrite ^ /system/maintenance.html break;
}
# When a 502 error occurs - that is, Rails is not
# running, serve the 502.html file
location @502 {
rewrite ^ /502.html break;
}
# Add client-side caching headers to static files
#
location ~ ^/(stylesheets|javascripts|images|system/avatars) {
expires 720h;
}
# Hand over the request to Rails, setting headers
# that will be interpreted by request.remote_ip,
# request.ssl? and request.host
#
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_set_header Host $http_host;
proxy_redirect off;
# If the file exists as a static file serve it directly
if (-f $request_filename) {
break;
}
# Oh yeah, hand over the request to Rails! Yay! :-D
proxy_pass http://rails;
}
}
The file is based on this one.
Thanks for your help.
Edit:
I already exchanged 127.0.0.1:3000 for 0.0.0.0:3000 in the upstream part. I also checked the ownership of the files in sites-available and sites-enabled, and they should both be ok.
I hardcoded return 503; into the location block, and it seems that it never matches. It seems that the pre-created default configuration always matches instead.
Take a look at the Mongrel example from the try_files documentation. Something like this should work:
location / {
try_files /system/maintenance.html $uri $uri/index.html $uri.html @thin;
}
location @thin {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_set_header Host $http_host;
proxy_redirect off;
# Oh yeah, hand over the request to Rails! Yay! :-D
proxy_pass http://rails;
}
When I set up a recent version of NGINX, the solution to getting it to point properly to my Rails app had everything to do with fully and properly configuring it. There were a few extra steps I didn't find in the documentation. It was acting just as yours is. I don't have a clean install to go off of, but bear with me here.
I set up a directory called sites-enabled in the default install location. This is a Debian install from the apt repository, so the install locations are /etc/nginx and /var/nginx.
mkdir /etc/nginx/sites-enabled /etc/nginx/sites-available
Place the conf file for your site in sites-available/.
Add this line to the bottom of /etc/nginx/nginx.conf:
include /etc/nginx/sites-enabled/*;
Look for and remove any reference that includes the DEFAULT configuration, which tells nginx to actually load that file; that is what gives you the default nginx index.html (/etc/nginx/conf.d/default.conf):
grep -r "default.conf" /etc/nginx
Symlink (man ln) your file in sites-available to sites-enabled.
ln -s /etc/nginx/sites-available/myfilename /etc/nginx/sites-enabled/myfilename
Test your configuration.
/etc/init.d/nginx configtest
Once your configuration is set up properly, restart nginx
/etc/init.d/nginx restart
I can't remember if I removed this reference or whether adding the include line above was enough. If you update or comment on this answer with your results, I can try to dig up the other steps I took.
I don't think the problem here is your actual configuration.