I'm trying to add a WordPress blog to a site built in Ruby on Rails. I just need it to live in a subdirectory. I made a folder in the public directory and put the WordPress files in there, and now I'm getting a routing error. I'm really not that familiar with Rails. Can someone help me figure out a way to do this?
You can get PHP and Rails working in the same project if you have access to the server configuration. I was able to get things working on a test VPS in just a few minutes. I didn't test with WordPress, just a simple phpinfo() call, but I don't see any reason why it would fail.
My install uses NGINX for the web server, Unicorn for Rails, and spawn-fcgi and php-cgi for the PHP processing.
I already had a Rails app working, so I just added PHP to it. The Rails app uses NGINX to proxy requests to Unicorn, so it was already serving the public directory as static files. I'll post my virtual host file below so you can see how it was done.
This was all done on an ArchLinux VPS, but other distros should be similar.
My virtual host file:
upstream unicorn {
    server unix:/tmp/unicorn.jrosw.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    server_name example.com www.example.com;
    root /home/example/app/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/conf/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/example/app/current/public$fastcgi_script_name;
    }
}
And then a small script to bring up php-cgi:
#!/bin/sh
# You may want to run this as your app user if you upload files
# to the PHP app, just to avoid permissions problems.
if [ "$(grep -c "nginx" /etc/passwd)" = "1" ]; then
    FASTCGI_USER=nginx
elif [ "$(grep -c "www-data" /etc/passwd)" = "1" ]; then
    FASTCGI_USER=www-data
elif [ "$(grep -c "http" /etc/passwd)" = "1" ]; then
    FASTCGI_USER=http
else
    # Set the FASTCGI_USER variable below to the user that
    # you want to run the php-fastcgi processes as
    FASTCGI_USER=
fi

# Change 3 to the number of cgi instances you want.
/usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 3 -u $FASTCGI_USER -f /usr/bin/php-cgi
The only problem I had was getting the fastcgi_index option to work, so you'd probably need to look into nginx's URL rewriting capabilities to get WordPress's permalink functionality working.
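For reference, WordPress permalinks in a subdirectory are usually handled with a try_files fallback to the blog's front controller rather than rewrites. A sketch, assuming the blog was copied into public/blog (adjust the path to whatever folder name you used):

```nginx
# Hypothetical: WordPress installed in public/blog
location /blog/ {
    try_files $uri $uri/ /blog/index.php?$args;
}
```

Pretty URLs then fall through to /blog/index.php, which the existing \.php$ location hands to php-cgi.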
I know this method isn't ideal, but hopefully it gets you on the right track.
I am using Nginx as a reverse proxy, with Docker.
I have two Docker containers:
319f103c82e5 web_client_web_client "nginx -g 'daemon of…" 6 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp web_client
7636ddaeae99 admin_web_admin "nginx -g 'daemon of…" 2 hours ago Up 2 hours 0.0.0.0:6500->80/tcp, 0.0.0.0:7000->443/tcp web_admin
These are my two containers. When I enter http://website.com, it goes to the web_client container; when I enter http://website.com:6500, it goes to the web_admin container. That is the flow right now.
What I want is this: I don't want my admin users to have to type http://website.com:6500 to reach the admin page. I'd prefer them to type http://website.com/admin. So I decided to use proxy_pass, meaning that when http://website.com/admin is accessed, it should proxy_pass to https://website.com:7000.
So I am posting the Nginx config for web_client, since it's the one that handles requests on ports 80 and 443.
Here it is:
server {
    listen 80 default_server;
    server_name website.com;

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }

    location /admin {
        proxy_pass https://website.com:7000/;
    }

    # I also tried
    #location /admin/ {
    #    proxy_pass https://website.com:7000/;
    #}

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
server {
    listen 443 ssl;
    server_name website.com;

    gzip on;
    gzip_min_length 1000;
    gzip_types text/plain text/xml application/javascript text/css;

    ssl_certificate /etc/letsencrypt/live/website.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/website.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    root /usr/share/nginx/html;

    location / {
        add_header Cache-Control "no-store";
        try_files $uri $uri/index.html /index.html;
    }

    location ~ \.(?!html) {
        add_header Cache-Control "public, max-age=2678400";
        try_files $uri =404;
    }
}
Now what happens is that static files (CSS and JS) are not loaded. Inspecting in Chrome, the request is made to https://website.com/static/css/app.597efb9d44f82f809fff1f026af2b728.css instead of https://website.com:7000/static/css/app.597efb9d44f82f809fff1f026af2b728.css, so it returns 404 Not Found. I can't understand why such a simple thing won't work.
Your main problem is not really with nginx but with how the two applications are set up. I don't have your code, but this is what I can infer from your post:
In your pages you load the static content using absolute paths: /static/css/...
So even when you call your pages with /admin in front, they will still try to load the static content from /static/.
One solution is to use relative paths for your static content. Depending on how complex your application is, this might require some work... You need to change the paths to static files to something like "./static/css/..." and make sure your files still work. Then your nginx setup will work, because admin pages will try to load '/admin/static/...'.
Another solution is to rename the 'static' folder in the admin app to something else, and then proxy_pass that new path as well in your nginx config.
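For that second approach, the extra location might look like this; a sketch assuming you rename the admin app's asset folder to admin-static (the name is a placeholder):

```nginx
# Hypothetical: admin app's asset folder renamed from "static" to "admin-static"
location /admin-static/ {
    proxy_pass https://website.com:7000/admin-static/;
}
```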
One last thing: your post mentions two ports, 6500 and 7000. I am assuming that is a mistake, so can you correct it? Or did I misunderstand?
I have an nginx pod deployed in my Kubernetes cluster to serve static files. In order to set a specific header in different environments, I have followed the instructions in the official nginx Docker image docs, which use envsubst to generate the config file from a template before running nginx.
This is my nginx template (nginx.conf.template):
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    server {
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;
        root /usr/share/nginx/html;

        #charset koi8-r;
        #access_log /var/log/nginx/log/host.access.log main;

        location ~ \.css {
            add_header Content-Type text/css;
        }

        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }

        location / {
            add_header x-myapp-env $MYAPP_ENV;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
I use the default command override feature of Kubernetes to initially generate the nginx conf file before starting nginx. This is the relevant part of the config:
command: ["/bin/sh"]
args: ["-c", "envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'" ]
Kubernetes successfully deploys the pod; however, when I make a request I get an ERR_TOO_MANY_REDIRECTS error in my browser.
Strangely, when I deploy the container without the command override, using an nginx.conf almost identical to the above (but without the add_header directive), it works fine.
(All SSL certs and files to be served are happily copied onto the container at build time so there should be no issue there)
Any help appreciated.
I am pretty sure envsubst is biting you by turning try_files $uri $uri/ /index.html; into try_files / /index.html; and return 301 https://$host$request_uri; into return 301 https://;. This results in a redirect loop.
I suggest you run envsubst '$MYAPP_ENV' <template >nginx.conf instead. That will only replace that single variable, not the unintended ones. (Note the escaping around the variable in the sample command!) If you later need more variables, you can specify them all like envsubst '$VAR1$VAR2$VAR3'.
If you want to replace all environment variables you can use this snippet:
envsubst `declare -x | sed 's/^declare -x \([^=]*\)=.*/$\1/' | tr -d '\n'` <template >nginx.conf
Also, while it's not asked in the question, you can save yourself some trouble by using ... && exec nginx -g 'daemon off;'. The exec replaces the running shell (PID 1) with the nginx process instead of forking it, which also means signals will be received by nginx, etc.
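Putting both suggestions together, the container command might end up looking like this (a sketch based on your original args):

```yaml
command: ["/bin/sh"]
args: ["-c", "envsubst '$MYAPP_ENV' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
```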
I'm using a DigitalOcean droplet for a Rails application. I've managed to deploy the first application successfully, but am now facing problems trying to deploy a second one. I am using Unicorn as the app server and nginx as the web server. The OS is Ubuntu 14.04.
I've read lots of threads on Stack Exchange sites, blogs, etc., but none of them fits my situation. The problem is, I think, in the app and system folder/file/configuration structure, and I am very cautious about changing anything in the system configuration files.
In most examples on the web, everyone talks about a unicorn.rb inside rails_root/config/; however, I don't have one. Instead I have a unicorn.conf with the same content inside /etc.
There is also a socket file which listens for the first app, and I tried to create a second one for my second app, but it failed.
I know I have to create another Unicorn configuration for the second app, and also do something that results in the creation of a socket for it.
But my lack of knowledge and understanding of system administration gets me into trouble.
Can anyone guide me about this problem?
I can provide more files if needed.
nginx configuration file for the first app (path /etc/sites-available/first_app):
upstream app_server {
    server unix:/var/run/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    root /home/rails/myfirstapp/public;
    server_name www.myfirstapp.com;
    index index.htm index.html index.php index.asp index.aspx index.cgi index.pl index.jsp;

    location / {
        try_files $uri/index.html $uri.html $uri @app;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|flv|mpeg|avi)$ {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}

server {
    listen 80;
    server_name www.myfirstapp.com;
    return 301 $scheme://myfirstapp.com$request_uri;
}
Second app (/etc/sites-available/second_app):
upstream app_server_2 {
    server unix:/var/run/unicorn.app_two.sock fail_timeout=0;
}

server {
    listen 80;
    root /home/rails/secondapp/public;
    server_name secondapp.com;
    index index.htm index.html index.php index.asp index.aspx index.cgi index.pl index.jsp;

    location / {
        try_files $uri/index.html $uri.html $uri @app;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|flv|mpeg|avi)$ {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server_2;
    }
}

server {
    listen 80;
    server_name secondapp.com www.secondapp.com;
    return 301 $scheme://secondapp.com$request_uri;
}
(/etc/unicorn.conf)
listen "unix:/var/run/unicorn.sock"
worker_processes 4
user "rails"
working_directory "/home/rails/myfirstapp"
pid "/var/run/unicorn.pid"
stderr_path "/var/log/unicorn/unicorn.log"
stdout_path "/var/log/unicorn/unicorn.log"
This has probably gone unanswered because you should just use two independent droplets rather than trying to make this work (which will be a bit of a nightmare for anyone unfamiliar with server and deployment work). Rails has lots of ways to interconnect two apps across the interwebs.
If you need to share the database, you could even set up a third droplet (although it's not needed) and host the centralised DB from there, with both apps connected to it. That also sets you up for mucho scalability.
Unless, of course, I misunderstood what you are trying to do.
If you are using 2 droplets and grammar is our fickle mistress, hook us up with some more details man.
I am rather new to nginx. I want to use it to serve static content in order to reduce the load on the Rails server. It seems a rather simple task, but I just can't find a solution that works for me.
I want nginx to serve static files which exist in the public directory inside my Rails application directory. To be more precise: I have an index.html inside that directory which I want served when entering http://[domainname]. Instead I just get the default nginx index.html. I already checked that thin is running, and when I query 127.0.0.1:3000 I get the page I want.
So here's the file called [domainname] in the sites-available directory.
upstream rails {
    server 127.0.0.1:3000; # This is where thin is waiting for connections
}

# HTTP server
server {
    listen 80;
    server_name [domainname];

    set $app /home/projektor/website/app.[domainname];
    root $app/public;

    # Set a limit to POST data
    client_max_body_size 8M;

    # Errors generated by Rails
    error_page 400 /400.html;
    error_page 422 /422.html;
    error_page 500 504 /500.html;

    # Errors generated *outside* Rails
    error_page 502 @502;
    error_page 503 @503;

    # If the public/system/maintenance.html file exists,
    # return a 503 error, that ...
    if (-f $document_root/system/maintenance.html) {
        return 503;
    }

    # ... will serve the very same file. This construct
    # is needed in order to stop the request before
    # handing it to Rails.
    location @503 {
        rewrite ^ /system/maintenance.html break;
    }

    # When a 502 error occurs - that is, Rails is not
    # running - serve the 502.html file
    location @502 {
        rewrite ^ /502.html break;
    }

    # Add client-side caching headers to static files
    location ~ ^/(stylesheets|javascripts|images|system/avatars) {
        expires 720h;
    }

    # Hand over the request to Rails, setting headers
    # that will be interpreted by request.remote_ip,
    # request.ssl? and request.host
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        # If the file exists as a static file serve it directly
        if (-f $request_filename) {
            break;
        }

        # Oh yeah, hand over the request to Rails! Yay! :-D
        proxy_pass http://rails;
    }
}
The file is based on this one.
Thanks for your help.
Edit:
I already exchanged 127.0.0.1:3000 for 0.0.0.0:3000 in the upstream part. I also checked the ownership of the files in sites-available and sites-enabled, and they should both be OK.
I hardcoded return 503; into the location instruction, and it seems it never matches. It appears the precreated default configuration always matches instead.
Take a look at the Mongrel example from the try_files documentation. Something like this should work:
location / {
    try_files /system/maintenance.html $uri $uri/index.html $uri.html @thin;
}

location @thin {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto http;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    # Oh yeah, hand over the request to Rails! Yay! :-D
    proxy_pass http://rails;
}
When I set up a recent version of NGINX, getting it to point properly to my Rails app came down to fully and properly configuring it. There were a few extra steps I didn't find in the documentation. It was acting just as yours is. I don't have a clean install to go off of, but bear with me here.
I set up a directory called sites-enabled in the default install location. This is a Debian install from the apt repository, so the install locations are /etc/nginx and /var/nginx.
1. Make the directories and place the conf file for your site in sites-available/:
mkdir /etc/nginx/sites-enabled /etc/nginx/sites-available
2. Add this line to the bottom of /etc/nginx/nginx.conf:
include /etc/nginx/sites-enabled/*;
3. Look for and remove any reference that includes the DEFAULT configuration (/etc/nginx/conf.d/default.conf), which tells nginx to actually load that file and is what gives you the nginx default index.html:
grep -r "default.conf" /etc/nginx
4. Symlink (man ln) your file in sites-available to sites-enabled:
ln -s /etc/nginx/sites-available/myfilename /etc/nginx/sites-enabled/myfilename
5. Test your configuration:
/etc/init.d/nginx configtest
6. Once your configuration is set up properly, restart nginx:
/etc/init.d/nginx restart
I can't remember if I removed the default.conf reference or if including the line in step 2 was enough. If you comment on this answer with your results, I can try to dig up the other steps I took.
I don't think the problem here is your actual configuration.
I am using Ubuntu.
Here is the tutorial
Nginx config I am using:
upstream my_app {
    server unix:///home/uname/railsproject/my_app.sock;
}

server {
    listen 88; # (I am using exactly 88 while testing)
    server_name localhost; # I am using exactly localhost while testing this
    root /home/uname/railsproject/public; # I assume your app is located at that location

    location / {
        proxy_pass http://my_app; # match the name of the upstream directive defined above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        expires 1y;
        add_header Cache-Control public;

        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
}
Puma command
RAILS_ENV=production puma -e production -b unix:///home/uname/railsproject/my_app.sock -p 8000
In the address bar, I am typing
http://localhost/
and the website opens, but static assets are not working. Of course, I ran
RAILS_ENV=production rake assets:precompile
and the assets are available in the public/assets folder.
I also tried placing an m.txt file in the assets directory and accessing
http://localhost/assets/m.txt
but it didn't work. I also tried this command:
sudo chown -R www-data:www-data public/
but this didn't help.
I'm posting for future readers.
I encountered this error when I changed my hosting provider. My standard nginx conf for assets stopped working, and I had to change it to:
location ~ ^assets/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
}
Removing the / before /assets did the trick; I don't know why.
PS: the location block goes inside the server block.
I had a similar problem. I got an answer from the very helpful #nginx channel on IRC. They said this was "idiomatic nginx" and that the other form, though more popular, was discouraged:
server {
    server_name server.com;

    # path for static files
    root /home/production/server.com/current/public;

    location / {
        try_files $uri @proxy;
        # if url path access by directory name is wanted:
        #try_files $uri $uri/ @proxy;
    }

    location @proxy {
        proxy_pass http://localhost:9292;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
/hattip: vandemar