I am trying to move an app from an older server to another, not-so-much-older server. As far as I can tell, the old app was (for some reason) running on Unicorn, with Ruby version 1.8.6. The new server supports 1.8.6 and 1.9.3, so it shouldn't be a problem, but I continuously get an error when I try to run the site.
http://omnimart.myitcrm.biz/
I have searched the web and couldn't find a post describing this exact error and a solution for it.
I put 1.8.6 in the nginx config so that it would use the correct version of Ruby, but I also get 1.9.3 showing up in the environment variables, and I figured that may be the problem. I tried tracing those environment variables back to see where they were set, but I couldn't find it anywhere. Could that be the problem?
nginx conf:
server {
    passenger_ruby /home/purge/.rvm/gems/ruby-1.8.6-p420/wrappers/ruby;
    listen 80;
    server_name omnimart.myitcrm.biz;
    root /home/purge/www/omnimart/current/public;
    passenger_enabled on;
}
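One thing I can still verify (a hedged aside, since I'm not sure where the 1.9.3 creeps in, and exact rvm invocations vary by RVM version) is whether the wrapper that passenger_ruby points at actually exists and resolves to 1.8.6:

# Confirm the wrapper exists and reports the expected Ruby version
/home/purge/.rvm/gems/ruby-1.8.6-p420/wrappers/ruby -v

# If it is missing or stale, regenerate the RVM wrappers for that Ruby
rvm wrapper ruby-1.8.6-p420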
The site is currently running, so it isn't an emergency, but I would like to resolve this so I can shut down one node that only has one remaining app running on it.
Any ideas on what I can do to fix this one?
The problem
In development I'm running my Rails server and ActionCable as two separate processes.
➜ backend git:(dev) bundle exec puma cable/config.ru -p 1337
Puma starting in single mode...
* Version 3.12.0 (ruby 2.5.1-p57), codename: Llamas in Pajamas
* Min threads: 0, max threads: 16
* Environment: development
My configuration is pretty standard:
# cable/config.ru
require_relative '../config/cable_environment'
Rails.application.eager_load!
run ActionCable.server
And in my development.rb I've got this:
# Set Action Cable server url for consumer connection
config.action_cable.url = 'wss://localhost:4001'
config.action_cable.allowed_request_origins = [
'https://localhost:4000',
]
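Worth noting, as a hedged aside: the port in config.action_cable.url has to match something that is actually listening, either the standalone cable Puma process itself or a proxy in front of it. Since I start Puma on 1337, the direct, proxy-less equivalent would be:

# Run the standalone cable server on the port the consumer targets
bundle exec puma cable/config.ru -p 4001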
It looks like it's working. In parallel, I use a separate VueJS front-end to listen to the websockets. Note that this whole configuration was working until a few days ago.
When I try to listen to sockets from the front-end it does this
action_cable.js?f5ee:241 WebSocket connection to 'wss://localhost:4001/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
What I know
The SSL is correctly configured (as I said, it was actually working before) and the configuration seems fine. I'm also not the only one working on this Rails project, but I'm the only one facing this issue.
When I check whether the port is in use, I do the following:
lsof -ti:4001
I noticed no process is listening there at all, and I've no idea how to troubleshoot this. As the websocket server seems to be internal to ActionCable, I don't really know how to find what crashes: there's no error displayed, Puma is up and running, but the websockets aren't.
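A broader check (hedged; same idea as the lsof call above) is to list every listening TCP socket and see which port Puma actually bound:

# Show all listening TCP sockets with numeric ports, then filter for puma
lsof -nP -iTCP -sTCP:LISTEN | grep -i puma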
What I tried
I can't recall exactly what I did in the last few days that could have broken the websockets, so I tried many things:
I've tried to change the configuration in many, many ways, but I pasted the original one here. Basically, I went through the whole documentation multiple times and tried every combination I could think of.
I've tried to change the port or endpoint.
I changed the SSL certificates, which are self-signed (and working), and produced new ones; it doesn't seem to be linked to the problem.
I've tried to downgrade Puma, but it didn't change anything.
I've tried to reinstall the project entirely in a fresh directory, but it still doesn't work.
I've also reinstalled Ruby multiple times via RVM and tried a few versions, without success.
I even ended up upgrading my computer to macOS Mojave in a desperate move; it didn't move the problem one bit.
What I don't know
I don't know anything about the mechanism ActionCable uses to spawn websockets. Why isn't there any error? How does it work? Did I miss something in my configuration? Does it interact with anything else, like NPM / Node / Apollo, which could have changed on my computer? I work on multiple projects with multiple technologies, so it's pretty hard to pin down an exact origin...
If you need more information, just let me know. Thanks for your time.
The solution
It turned out I had an Nginx setup, which I had forgotten about, that made my sockets work with TLS. If you end up here, you may be in the same situation.
You can check for that by running nginx -t in your console and opening the reported config file, if there is one, in your editor.
# /usr/local/etc/nginx/nginx.conf
...
server {
    listen 5212 ssl;
    server_name localhost;

    ssl_certificate /Users/loschcode/Desktop/some-path/ssl/server.crt;
    ssl_certificate_key /Users/loschcode/Desktop/some-path/ssl/server.key;

    location / {
        rewrite ^(.*) https://localhost:5214$1 permanent;
    }
}
...
I had recently changed the directory I work on the project in, which left the certificate paths pointing at the wrong place, hence my issue. Fixing the paths made it work.
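A quick way to confirm this kind of breakage (a hedged sketch; the paths are the ones from my config above):

# Validate the whole nginx config, including certificate paths
nginx -t

# Check the certificate file itself is still readable and parseable
openssl x509 -in /Users/loschcode/Desktop/some-path/ssl/server.crt -noout -subject -dates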
For the past couple of days, users who have just updated to iOS 11 cannot access my website.
It's hosted behind an nginx reverse proxy that uses Let's Encrypt to provide SSL.
The client experience is that if you click a link, the Safari window usually just disappears or shows a generic error.
Using the debugger, there's an error: [Error] Failed to load resource: The operation couldn't be completed. Protocol Error
This only happens with iOS devices since the update to iOS 11.
My server is running on DigitalOcean with the Docker image jwilder/nginx-proxy.
OK, I actually found the issue to be related to an improper implementation of HTTP/2 in iOS 11.
This post shed some light on the situation:
http://www.essential.exchange/2017/09/18/ios-11-about-to-release-things-to-be-aware-of/
The jwilder/nginx-proxy Docker image uses http2 by default, and as far as I can see you can't change that either.
Now, to solve the issue, remove the http2 keyword in your server configuration for now.
This:
server {
    listen x.x.x.x:443 ssl http2;
    server_name xxxx;
    [...]
}
Becomes:
server {
    listen x.x.x.x:443 ssl;
    server_name xxxx;
    [...]
}
If you're running jwilder/nginx-proxy you will have to change /app/nginx.tmpl too; otherwise the config file will be regenerated from the template at some point and your change will be lost.
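As a sketch, something like this patches the template inside a running container (hedged: nginx-proxy stands in for your actual container name, and the template layout may differ between image versions):

# Strip the http2 keyword from the listen directives in the template,
# then restart so the config is regenerated without it
docker exec nginx-proxy sed -i 's/ http2//g' /app/nginx.tmpl
docker restart nginx-proxy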
Hope this answer helps some people struggling with the same problem.
If you find another solution to fix this, please add it below. I haven't had too much time to look for solutions as it took me forever to find this one.
So I've bought and set up my DigitalOcean Ubuntu 14.04 droplet, set up SSH keys, bought a domain name, transferred my app to the DigitalOcean cloud server using FileZilla, and installed Passenger with NGINX. It took a lot of trial, error, and research, but I've learned a lot from it.
The problem is, I still can't get it to work! When I start rails s -e production in the cloud, I notice it still uses WEBrick despite NGINX being installed. I also get a 500 Internal Server Error from NGINX when I visit my website's IP address.
It's likely that I did something wrong, but due to my inexperience, I have no way of really knowing which procedure was incorrect or skipped. Maybe something with Capistrano? My secret keys still need some work too, not sure :/
Can someone help me out?
My checklist:
Install Ruby and Rails, and run bundle install on my cloud server [done]
Install NGINX and Passenger following the tutorial at: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-passenger-and-nginx-on-ubuntu-14-04 [done]
Edited my NGINX config file to:
server {
    listen 80 default_server;
    server_name 45.55.136.43;
    passenger_enabled on;
    passenger_app_env production;
    root /origins/public;
}
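For reference, a few checks that should narrow down the 500 (a hedged sketch; the /origins path comes from the config above):

sudo nginx -t                             # validate the edited config
sudo tail -n 50 /var/log/nginx/error.log  # the real cause of the 500 usually lands here
sudo passenger-status                     # does Passenger know about the app at all?
touch /origins/tmp/restart.txt            # ask Passenger to restart the app on the next request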
Well, this is embarrassing. If for some reason my developer sends a bad build of our Rails app to the production server, Passenger may not be able to load it. When that happens, web requests to Passenger dump an error page with all of the variables from .env. Since he prefers to put all of his secrets in .env, like API keys to remote services, this is potentially a big security hole.
Is there any way to turn this behaviour off? We're using nginx. We're adding a staging server to the workflow to avoid pushing bad releases, but still, this seems like it shouldn't be happening.
Thanks. Here's the relevant portion of the nginx.conf file:
http {
    passenger_root /home/X/.rvm/gems/ruby-2.1.1/gems/passenger-4.0.40;
    passenger_ruby /home/X/.rvm/gems/ruby-2.1.1@XXX/wrappers/ruby;

    server {
        listen 443;
        server_name www.X.com;
        root /home/X/current/public;
        passenger_enabled on;
        ..
Turn passenger_friendly_error_pages off. Since Passenger 4.0.42, it's off by default in production.
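A minimal sketch of where that directive goes, reusing the server block from the question:

server {
    listen 443;
    server_name www.X.com;
    root /home/X/current/public;
    passenger_enabled on;
    # Suppress Passenger's detailed startup-error page (and with it the .env dump)
    passenger_friendly_error_pages off;
}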
OK, this should be my easiest Stack Overflow post yet.
So I have Capistrano installed and configured properly. I've managed a successful deployment to my remote server (incidentally, that remote server is running Rails 4.0 and the local one was on 3.2.13). All my files appear to have been successfully transferred to my liquid_admin/current directory (they used to just be in the liquid_admin directory... but whatever).
So what do I do now? How do I get rails server to load the app in liquid_admin/current?
If I try to do "rails server" it just tells me:
usage: rails new app_path
Would that actually overwrite my old app? Basically, all I want to do is load the app in the "current" directory and run the server. Should be a no-brainer, right? :)
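For what it's worth: the rails binary prints that rails new usage line whenever it is run outside an app root, so it won't overwrite anything; it just doesn't see an app there. Starting the server from inside current should work, roughly:

# rails only finds the app when run from the app root
cd liquid_admin/current
bundle exec rails server -e production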
For a single website on a small server, Passenger and Nginx look like winners.
sudo passenger-install-nginx-module
And then in the Nginx sites folder:
server {
    listen 80;
    server_name www.mysite.com;
    root /rails_website_root/public;
    passenger_enabled on;
}
Then just start Nginx (usually you put it on autostart).
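On a Debian-style layout that last step is roughly this (a hedged sketch; mysite stands in for whatever file you created in sites-available, and a source-built Nginx may not use this layout at all):

# Enable the site, validate the config, then (re)start Nginx
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite
sudo nginx -t && sudo service nginx restart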
The default server that you probably use in development, WEBrick, is not suitable for production. Some options you have are:
Unicorn
Thin
You also need Apache or Nginx 'in front' of your Rails server.
All this is well explained in tons of guides, books, Railscasts, etc., so please go and google it.
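To make the 'in front' part concrete, here is a minimal hedged sketch of Nginx proxying to a Unicorn socket (all names and paths are illustrative):

# Nginx reverse-proxying to a Unicorn unix socket
upstream app {
    server unix:/tmp/unicorn.myapp.sock fail_timeout=0;
}

server {
    listen 80;
    server_name www.mysite.com;
    root /rails_website_root/public;

    # Serve static files directly; hand everything else to Unicorn
    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}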