Ok. This should be my easiest Stack Overflow post yet.
So I have Capistrano installed and configured properly. I've managed a successful deployment to my remote server (incidentally, that remote server is running Rails 4.0 and the local one was on 3.2.13). All my files appear to have been successfully transferred to my liquid_admin/current directory (they used to just be in the liquid_admin directory... but whatever.)
So what do I do now? How do I get rails server to load the app in liquid_admin/current?
If I try to do "rails server" it just tells me:
usage: rails new app_path
Would that actually overwrite my old app? Basically all I want to do is load the app in the "current" directory. Run the server. Should be a no-brainer right? :)
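Is it just a matter of cd'ing into the release and starting the server from there? Something like this is my guess (assuming the app really does live under ~/liquid_admin/current), but I'm not sure:
cd ~/liquid_admin/current
bundle exec rails server -e production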
For a single website on a small server, Passenger and Nginx look like winners.
sudo passenger-install-nginx-module
And then in the Nginx sites folder:
server {
    listen 80;
    server_name www.mysite.com;
    root /rails_website_root/public;
    passenger_enabled on;
}
Then just start Nginx (usually you put it on autostart).
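If you accepted the installer's default prefix, the Nginx it builds usually lands under /opt/nginx, so starting and reloading it looks roughly like this (paths will differ if you use a packaged Nginx):
sudo /opt/nginx/sbin/nginx             # start the Passenger-built Nginx
sudo /opt/nginx/sbin/nginx -s reload   # reload after config changes
sudo service nginx start               # or, for a distro-packaged Nginx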
The default server that you probably use in development - WEBrick - is not suitable for production. Some options that you have are:
Unicorn
Thin
You also need Apache or Nginx 'in front' of your Rails server.
All this is well explained in tons of guides, books, Railscasts, etc., so please go and Google it.
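As a rough sketch, getting one of those app servers listening on a port can be as simple as the following (the port and environment are placeholders you'd adjust; the reverse proxy in front then points at that port):
bundle exec unicorn -E production -p 3000      # Unicorn
bundle exec thin start -e production -p 3000   # or Thin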
The problem
In development I'm running my Rails server and ActionCable as separate processes.
➜ backend git:(dev) bundle exec puma cable/config.ru -p 1337
Puma starting in single mode...
* Version 3.12.0 (ruby 2.5.1-p57), codename: Llamas in Pajamas
* Min threads: 0, max threads: 16
* Environment: development
My configuration is pretty standard:
# cable/config.ru
require_relative '../config/cable_environment'
Rails.application.eager_load!
run ActionCable.server
And in my development.rb I've got these:
# Set Action Cable server url for consumer connection
config.action_cable.url = 'wss://localhost:4001'
config.action_cable.allowed_request_origins = [
'https://localhost:4000',
]
It looks like it's working. In parallel with that, I use a separate VueJS front-end to listen to the websockets. You should know this whole configuration was working until a few days ago.
When I try to listen to sockets from the front-end, it does this:
action_cable.js?f5ee:241 WebSocket connection to 'wss://localhost:4001/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
What I know
The SSL is correctly configured (as I said, it was actually working before) and the configuration seems fine. I'm also not the only one working on this Rails project, but I'm the only one facing this issue.
When I check whether the port is in use, I do the following:
lsof -ti:4001
I noticed no process is running at all, and I have no idea how to troubleshoot this. Since the websocket server seems to be internal to ActionCable, I don't really know where to look for whatever is crashing: there's no error displayed, Puma is up and running, but the websockets aren't.
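For what it's worth, a slightly more verbose version of that check, covering both the port from the config (4001) and the one Puma was started on (1337):
lsof -nP -iTCP:4001 -sTCP:LISTEN   # anything listening on the Action Cable port?
lsof -nP -iTCP:1337 -sTCP:LISTEN   # anything listening on the port Puma was given?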
What I tried
I can't recall exactly what I did in the last few days that could have broken the websockets, so I tried many things:
I've tried to change the configuration in many, many ways, but I pasted the original one here. I basically went through the whole documentation multiple times and tried every combination I could think of.
I've tried changing the port and the endpoint.
I changed the SSL certificates, which are self-hosted (and working), and produced new ones; it doesn't seem to be linked to the problem.
I've tried downgrading Puma, but it didn't change anything.
I've tried reinstalling the project entirely in a fresh directory, but it still doesn't work.
I've also reinstalled Ruby via RVM multiple times and tried a few versions, without success.
I even ended up upgrading my computer to macOS Mojave in a desperate move; it didn't move the problem one bit.
What I don't know
I don't know anything about the mechanism ActionCable uses to spawn websockets. Why isn't there any error? How does it work? Did I miss something in my configuration? Does it interact with anything else, like NPM / Node / Apollo, which could have changed on my computer? I work on multiple projects with multiple technologies, so it's pretty hard to pin down an exact origin.
If you need more information, just let me know. Thanks for your time.
The solution
It turned out I had an Nginx setup, which I had forgotten about, to make my sockets work with TLS. If you end up here, you may be in the same situation.
You can check that by running nginx -t in your console and opening the file, if one is present, in your IDE.
# /usr/local/etc/nginx/nginx.conf
...
server {
    listen 5212 ssl;
    server_name localhost;

    ssl_certificate /Users/loschcode/Desktop/some-path/ssl/server.crt;
    ssl_certificate_key /Users/loschcode/Desktop/some-path/ssl/server.key;

    location / {
        rewrite ^(.*) https://localhost:5214$1 permanent;
    }
}
...
I had recently changed the directory I work on the project from, which left a wrong path to the certificate, hence my issue. Fixing the path made it work.
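If you're in the same situation, after editing the file something like this (assuming an Nginx installed under /usr/local like mine) validates and reloads the config:
nginx -t               # shows which config file is loaded and whether it parses
sudo nginx -s reload   # reload with the corrected certificate paths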
I have completed development of an Ember.js app for the time being, and am wondering how I can deploy it to a production server. I successfully built a dist dir within the app, and I also cloned the Ember app repo to the production server.
I am deploying the rails app with capistrano from my local box to the VPS, and everything appears to be working there. Side note, I am using Nginx as the web server, and puma as the application server for the rails apps.
Also, I have the local dev versions of the ember / rails app working great on my local box running the below commands,
rails s --binding 127.0.0.1 and,
ember server --proxy http://127.0.0.1:3000
So I decided to copy the files that were in the dist dir to the public dir of the rails app, and move the assets for the ember app to the assets dir of the rails app.
On my local dev box, I see the CSV files being presented like this (screenshot omitted).
However, when I load the "ember app" on the production box, I am not seeing the CSV files being presented the same way (screenshot omitted).
Which brings me to my question: what is the proper way to deploy an ember-cli app to a production server and have it communicate with a Rails API backend?
UPDATE
This is what I am seeing in the network tab (screenshot omitted).
In an ideal system, I use this setup:
On disk:
/srv/frontend/
/srv/backend/
frontend
With Ember CLI, /srv/frontend contains the output of ember build. I can use the --output-path=/srv/frontend flag to set this, if the Ember CLI source is also on the same machine. All API requests should be prefixed with /api. I do this by setting the namespace property on my ApplicationAdapter to api/ (or sometimes api/v1).
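The build step for that looks roughly like this (the output path just matches the layout above; adjust for your setup and Ember CLI version):
ember build --environment=production --output-path=/srv/frontend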
backend
/srv/backend contains my backend API (The location doesn't really matter in most cases).
For the Rails API, I use puma as a standalone server. As long as you have a standalone server that listens on a port, it doesn't matter if it's puma or something else. All API routes must be namespaced under /api. You can wrap all your routes in a scope block to do this without changing your codebase. I go one step further and add another namespace, v1, as sketched below.
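A rough sketch of that routing setup (the resource name is just a placeholder):
# config/routes.rb
Rails.application.routes.draw do
  # Path-only prefix; existing controllers stay where they are.
  scope '/api' do
    # Adds the /v1 prefix; use `namespace :v1` instead if you also
    # want the controllers moved under a V1 module.
    scope '/v1' do
      resources :posts   # placeholder resource
    end
  end
end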
reverse proxy
Then I install nginx and make my config like this:
server {
    listen 80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443;
    # add SSL paths
    server_name localhost;

    # UI, Static Build
    location / {
        alias /srv/frontend/;
        index index.html;
    }

    # Rails API
    location /api {
        proxy_pass http://localhost:3000/;
    }
}
So now I have an Nginx config that serves / requests from /srv/frontend/index.html and proxies /api requests to Puma on port 3000.
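The Puma side of that can be as simple as the following (the environment and port are placeholders matching the proxy_pass above):
cd /srv/backend
bundle exec puma -e production -p 3000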
The only downside to this is that I'm forced to use hash location on my Ember application. There are ways to circumvent this, but it's just easier to use hash location and history location doesn't really buy me much.
So I've bought and set up my DigitalOcean Ubuntu 14.04 Droplet, set up SSH keys, bought a domain name, transferred my app to the DigitalOcean cloud server using FileZilla, and installed Passenger with Nginx. It took a lot of trial, error, and research, but I've learned a lot from it.
The problem is, I still can't get it to work! When I start rails s -e production in the cloud, I notice it still uses WEBrick despite Nginx already being installed. I also get a 500 Internal Server Error from Nginx when I visit my website's IP address.
It's likely that I did something wrong, but due to my inexperience, I have no way of really knowing what procedure was incorrect or skipped. Maybe something with Capistrano? My secret keys may still need some work, I'm not sure :/
Can someone help me out?
My checklist:
Install Ruby and Rails and run bundle install on my cloud server [done]
Install NGINX and Passenger following the tutorial at: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-passenger-and-nginx-on-ubuntu-14-04 [done]
Edited my NGINX config file to:
server {
    listen 80 default_server;
    server_name 45.55.136.43;
    passenger_enabled on;
    passenger_app_env production;
    root /origins/public;
}
I've got a Rails 4 app running which is used for file exchange. It's basically running very well, but when I try to download a file that is bigger than a few hundred MB it gets slow. I think this is because Nginx doesn't stream the file: it first loads the file into RAM and then sends it.
I have sendfile on; in my Nginx config and config.action_dispatch.x_sendfile_header set to true in my config/environments/production.rb. I'm using thin as the web server.
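For context, the relevant pieces look roughly like this (trimmed down, not the full files):
# nginx config
sendfile on;

# config/environments/production.rb
config.action_dispatch.x_sendfile_header = true
# (the generated production.rb suggests 'X-Accel-Redirect' for Nginx here)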
Does anyone have an idea what I'm doing wrong?
I don't think thin supports Rails' implementation of streaming out of the box.
There was some work done on that front, but AFAIK it was never merged into the master branch.
Instead of thin, our team switched to using puma on our local machines, and we're using Passenger on our production server (although in light of a recent article from Engine Yard, we're considering switching our production app server to either unicorn or puma).
When we install Rails it uses its own web server, WEBrick. If I want to run the application behind an Nginx server, how do I configure or set up the Nginx web server?
You should run your Rails application in a production app server such as mongrel_cluster or thin (I have used the former, and am currently switching to the latter). To put Nginx in front of it, I would use the upstream and proxy_pass directives; a sketch follows below. I found a nice blog post comparing ways of running Rails applications that shows their config for mongrel_cluster + Nginx.
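A minimal sketch of that upstream/proxy_pass arrangement (server names and ports are placeholders):
upstream rails_app {
    server 127.0.0.1:3000;   # a thin/mongrel instance; add more lines for a cluster
}

server {
    listen 80;
    server_name example.com;   # placeholder

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://rails_app;
    }
}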
Passenger is also available for Nginx; I've used it with Apache and it was very easy to set up.