I am a newbie to Ubuntu and to server administration in general. I have created a Rails app and deployed it to an Ubuntu EC2 instance, using Nginx with the Thin server; the app is running perfectly.
Now I want to deploy another app on the same server.
I have already put the app on the server, but when I try to start it, it does not start.
I guess it is because of the nginx.conf file.
Can someone please let me know how to run two apps on the same server?
When you try to browse to a machine on Amazon's EC2 and you don't get any response at all, the first suspect is the AWS Security Group. Make sure that the port the application runs on is open in your machine's security group.
For nginx to serve both of your apps, you need to configure them both in its nginx.conf:
upstream app1 {
    server 127.0.0.1:3000;
}

upstream app2 {
    server 127.0.0.1:3020;
}

server {
    listen 80;
    server_name .example.com;

    access_log /var/www/myapp.example.com/log/access.log;
    error_log /var/www/myapp.example.com/log/error.log;
    root /var/www/myapp.example.com;
    index index.html;

    location /app1 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app1;
    }

    location /app2 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app2;
    }
}
This configuration expects app1 to be listening on local port 3000 and app2 on local port 3020; nginx then routes requests starting with http://my.example.com/app1 to the first app and requests starting with http://my.example.com/app2 to the second.
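Note that with the location blocks above, the /app1 and /app2 prefixes are passed through to the backends, so each Rails app must expect that prefix (for example via its relative URL root). If you would rather have each app see requests rooted at /, a hedged variant (a sketch, not part of the original answer) is to strip the prefix by giving proxy_pass a trailing slash:

location /app1/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    # The URI part on proxy_pass makes nginx replace the matched /app1/
    # prefix with /, so the app receives /some/path instead of /app1/some/path.
    proxy_pass http://app1/;
}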
Related
I'm deploying some services using Docker, with docker-compose and an nginx container to proxy to my domain.
I have already deployed the frontend app, and it is accessible from the web. Supposedly, I only need to expose the frontend/nginx port to the web, without exposing the rest of the services, but I haven't been able to get that working so far.
For example: client -> login request -> frontend <-> (local) backend GET request.
Right now I'm getting "connection refused", and the GET request points to http://localhost rather than to the service name defined in docker-compose (all containers are deployed on the same network, one that I created).
What do I need to do to configure this?
Here is my nginx config so far:
server {
    listen 80 default_server;
    listen 443 ssl;

    server_name mydomain;
    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;

    location / {
        proxy_pass http://frontend:3000/;
    }

    location /auth {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://auth:3003;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }

    # Same for the other services
    (...)
EDIT:
Should I create a location for every GET and POST endpoint that my services expose?
As in:

location /getUser/ {
    proxy_pass http://auth:3003;
}
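(For reference, a hedged sketch of the more common pattern, assuming the auth container serves its routes from its own root: one prefix location per service instead of one location per endpoint, with trailing slashes so nginx strips the /auth/ prefix before proxying.)

location /auth/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # A request for /auth/getUser is forwarded to http://auth:3003/getUser
    proxy_pass http://auth:3003/;
}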
I'm trying to implement SSL in my application using Docker with the nginx image. I have two apps, one for the back end (api) and the other for the front end (admin). It's working over HTTP on port 80, but I need to use HTTPS. This is my nginx config file...
upstream ulib-api {
    server 10.0.2.229:8001;
}

server {
    listen 80;
    server_name api.ulib.com.br;

    location / {
        proxy_pass http://ulib-api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}

upstream ulib-admin {
    server 10.0.2.229:8002;
}

server {
    listen 80;
    server_name admin.ulib.com.br;

    location / {
        proxy_pass http://ulib-admin;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}
I have found some tutorials, but they all use docker-compose; I need to set this up with a Dockerfile. Can anyone shed some light?
... I'm using an ECS instance on AWS and the project is built with CI/CD.
This is just one of the possible ways:
First, issue a certificate using certbot. You will end up with a couple of *.pem files.
There are plenty of tutorials on installing and running certbot on different systems; I used Ubuntu with the command certbot --nginx certonly. You need to run this command on the host your domain points to, because certbot checks that you own the domain through a number of challenges.
Second, create the nginx container. You will need a proper nginx.conf and you have to make the certificates available to that container. I use Docker volumes, but that is not the only way.
My nginx.conf looks like the following:
http {
    server {
        listen 443 ssl;

        ssl_certificate /cert/<yourdomain.com>/fullchain.pem;
        ssl_certificate_key /cert/<yourdomain.com>/privkey.pem;
        ssl_trusted_certificate /cert/<yourdomain.com>/chain.pem;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ...
    }
}
Last, run nginx with the proper volumes mounted:
docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/cert:/cert:ro -p 443:443 nginx:1.15-alpine
Notice:
I mapped $PWD/cert into the container as /cert. This is the folder where the *.pem files are stored; they live under ./cert/example.com/*.pem.
Inside nginx.conf you refer to these certificates with the ssl_... directives.
You have to publish port 443 (-p 443:443) to be able to connect.
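If you also keep plain HTTP reachable (for example by adding -p 80:80 to the docker run above — an assumption, not part of the original setup), a minimal companion server block inside the same http { } context can redirect everything to HTTPS:

server {
    listen 80;
    # Send all plain-HTTP traffic to the HTTPS server block above
    return 301 https://$host$request_uri;
}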
Hello, this is my first time deploying a Rails app to an Ubuntu server. I configured nginx and got the "Welcome to nginx" page at the server's IP. When I start the Rails application I have to add the port to the IP address, for example 165.217.84.11:3000, in order to reach it. How can I make the Rails app answer by default when I visit just the IP, 165.217.84.11?
You can proxy requests from port 80 (which is the default) to port 3000 like this:
worker_processes 1;

events { worker_connections 1024; }

http {
    client_max_body_size 10m;
    sendfile on;

    upstream rails {
        server 165.217.84.11:3000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://rails;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Ssl off;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
So, when you access 165.217.84.11 in the browser you should see your rails project.
In general you have to set up nginx to use Puma's socket file; nginx will then reach the app through the socket instead of a TCP port (:3000 by default).
Here is a nice tutorial: link
And here is a short explanation of why you should use sockets.
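For reference, a minimal sketch of what the socket setup looks like on the nginx side; the socket path below is an assumption and must match the bind line in your puma.rb:

upstream rails_app {
    # Must match puma.rb: bind "unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock"
    server unix:/home/deploy/myapp/shared/tmp/sockets/puma.sock;
}

server {
    listen 80;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://rails_app;
    }
}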
I have created a Rails 5.0 app with Elastic Beanstalk on Amazon Web Services and I have been able to successfully create the website with a functioning database. The only problem is that I need ActionCable for my app to work, and it has been really hard for me to configure ElastiCache and have the Rails app successfully communicate with the ElastiCache cluster.
A lot of people have told me that the load balancer in Elastic Beanstalk doesn't allow any communication with the ElastiCache cluster, and I haven't been able to find any documentation on how to integrate Redis into Elastic Beanstalk in order to properly configure ActionCable.
Does anyone know a detailed, step-by-step approach to successfully set up ActionCable on an Elastic Beanstalk Rails 5.0 app using ElastiCache?
The most important thing is to change the load balancer to use TCP and SSL instead of HTTP and HTTPS. You also need to configure nginx to pass on the upgrade headers for the /cable location.
Try adding this file (nginx.config) to your .ebextensions folder:
files:
  /etc/nginx/conf.d/proxy.conf:
    content: |
      client_max_body_size 500M;
      server_names_hash_bucket_size 128;

      upstream backend {
        server unix:///var/run/puma/my_app.sock;
      }

      server {
        listen 80;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        server_name *.cmgresearch.net;
        large_client_header_buffers 8 32k;

        location / {
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
          proxy_set_header X-NginX-Proxy true;
          proxy_buffers 8 32k;
          proxy_buffer_size 64k;
          proxy_pass http://backend;
          proxy_redirect off;

          location /assets {
            root /var/app/current/public;
          }

          # enables WS support
          location /cable {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade "websocket";
            proxy_set_header Connection "Upgrade";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
        }
      }

container_commands:
  01restart_nginx:
    command: "service nginx restart"
See https://blog.cmgresearch.com/2017/05/11/step-7-action-cable-on-elastic-beanstalk.html
I'm trying to set up an nginx load balancer/proxy for two servers, with OAuth-authenticated apps running on both of them.
Everything runs fine when nginx is on port 80, but when I put it on any other port, OAuth authentication fails with an "invalid signature" error message.
Here is my server config in nginx.conf:
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://webservice;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-FORWARDED-PROTO https;
    }
}
Has anyone run into a similar problem?
PS: I've noticed that port 80 is omitted from the OAuth realm property, but other ports are included normally.
That's probably not related to nginx at all. OAuth (1, not 2) signs each request against a base URL, which would be http://webservice:81 if you moved the proxy to port 81. The "invalid signature" error means the client and the server are building that base URL with different ports, so make sure your OAuth code knows which port the site is really being served on.
Either update your client to say it's on port 81, or tell the server it's on 80.
Replace 81 with your favorite port
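If you want to handle it on the nginx side instead, a minimal sketch is to forward the external port explicitly (this assumes the backend rebuilds the signature base URL from the Host header; which header your framework actually honors is an assumption worth checking):

server {
    listen 81;
    server_name localhost;

    location / {
        proxy_pass http://webservice;
        # Include the external port so the backend reconstructs
        # http://host:81/... when it verifies the OAuth signature
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}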