Serving a web app and Python using NGINX on a remote server - POST

Setup:
1> Web GUI using AngularJS hosted in a Tomcat server and a Python app using Flask are running on an AWS server.
2> I am working on a secure server, hence I am unable to access AWS directly.
3> I have set up NGINX to access the GUI app from my local secured network. The GUI app is running on awsserver:9506/appName
4> The Flask app is running on the AWS server, hosted on 127.0.0.1:5000. This app has 2 URIs, cross and accross:
127.0.0.1:5000/cross
127.0.0.1:5000/accross
Now in my GUI, after the NGINX setup, I am able to access it using the domain name and without the port:
domain.name/appName
Now when I try to use it to send a request to the server, my URL changes to:
domain.name/cross. I made the changes in the NGINX config and am able to access it, but I am not able to get a response back. Please find my NGINX config file below:
server {
    listen 80;
    server_name domain.name;
    root /home/Tomcat/webapps/appName;
    location / {
        proxy_pass http://hostIP:9505/; # runs the tomcat home page
    }
    location /appName/ {
        proxy_pass http://hostIP:9505/appName; # runs the application home page
    }
    location /cross/ {
        proxy_pass http://127.0.0.1:5000/cross; # hits the python flask app and am trying to send post
    }
}
Also, what I noticed is that my POST request is being converted to a GET at the server by NGINX.

You need to be consistent with your use of the trailing /. With the proxy_pass statement (as with alias) nginx performs text substitution to form the rewritten URI.
Is the URI of the API /cross or /cross/? POST is converted to GET when the server is forced to perform a redirect (for example, to append a /).
Specifying the same URI on the location and proxy_pass is unnecessary as no changes are made.
If the hostIP in your first two location blocks is the same, and assuming that the missing trailing / is accidental, they can be combined into a single location block.
For example:
location / {
    proxy_pass http://hostIP:9505;
}
location /cross {
    proxy_pass http://127.0.0.1:5000;
}
See this document for more.

Related

Nginx returning 404 when trying to get a subroute in a dockerized React app

I have a React app served with serve in Docker.
I use Nginx to rewrite requests to my app with the following config:
http {
    upstream myapp_servers {
        server 1.2.3.4:8000;
    }
}
server {
    listen 80;
    server_name myapp.com;
    location / {
        proxy_pass http://myapp_servers;
    }
}
When I access my app with myapp.com it works fine.
If I want to access a subroute of my app, like myapp.com/route, nginx returns a 404 error.
try_files $uri index.html only works if nginx serves static files.
How can I solve this issue for a dockerized React app?
OK, so I figured it out; the issue was not with Nginx.
What happened is that I used vercel/serve for my deployment server and I had to run it in single-page mode with -s. I also had to remove homepage: "." from my package.json, and now the application works properly in production.
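For reference, if nginx itself served the static build instead of proxying to serve, the try_files fallback mentioned in the question is the usual way to handle client-side routes. A minimal sketch, assuming the build is copied to a path like /srv/myapp/build (hypothetical):
location / {
    root /srv/myapp/build;              # assumed path to the React build output
    index index.html;
    try_files $uri $uri/ /index.html;   # unknown routes fall back to the SPA entry point
}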

How do I map a location to an upstream server in Nginx?

I've got several Docker containers acting as web servers on a bridge network. I want to use Nginx as a proxy that exposes a service (web) outside the bridge network and embeds content from other services (i.e. wiki) using server side includes.
Long story short, I'm trying to use the configuration below, but my locations aren't working properly. The / location works fine, but when I add another location (e.g. /wiki) or change / to something more specific (e.g. /web) I get a message from Nginx saying that it "Can't get /wiki" or "Can't get /web" respectively:
events {
worker_connections 1024;
}
http {
upstream wiki {
server wiki:3000;
}
upstream web {
server web:3000;
}
server {
ssi on;
location = /wiki {
proxy_pass http://wiki;
}
location = / {
proxy_pass http://web;
}
}
}
I've attached to the Nginx container and validated that I can reach the other containers using curl; they appear to be working properly.
I've also read the Nginx pitfalls and know that using hostnames (wiki, web) isn't ideal, but I don't know the IP addresses ahead of time and have tried to counter any DNS issues by telling docker-compose that the nginx container depends on web and wiki.
Any ideas?
You must change proxy_pass http://wiki; to proxy_pass http://wiki/;.
As far as I know, Nginx behaves in two different ways with or without a trailing slash at the end of the URI. You can find more details about the proxy_pass directive on nginx.org.
In your case, a trailing slash (/) is essential as the URI to be passed to the server. You already got the error message "Can't get /wiki"; in fact, this error message means that there is no /wiki in the wiki:3000 server, not a problem in Nginx's scope.
Getting to know the proxy_pass directive with and without a URI will help you a lot.
I hope this helps.
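For reference, a minimal sketch of the suggested change, using the same upstream name as in the question:
location = /wiki {
    proxy_pass http://wiki/;   # with the trailing slash, a request for /wiki is forwarded to the wiki upstream as /
}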

Nginx location app URLs

I'm trying to get routing within the nginx config working. I have an app at http://app1:8081 and another app at http://app2:8080. (FYI, I'm using Docker containers, so each app is in its own container.) What I have working in nginx is app1 pointing to http://example.com. Where I'm having trouble is getting http://example.com/gc to work.
server {
    listen 80;
    server_name http://example.com;
    location /gc/ {
        proxy_pass http://app2:8080/;
    }
    location / {
        proxy_pass http://app1:8081/;
    }
}
I've tried proxy_pass with and without a trailing / and the location with and without a trailing /. I've had an odd result where going to example.com/gc/ would rewrite to example.com/home, which didn't work.
I was hoping for something similar to IIS with application folders under a site, where you have a site that points to example.com and an application named gc pointed at the application folder.
The end result should be that example.com/gc/home renders app2:8080/home.
Any help with my nginx config would be greatly appreciated.
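One direction that is often suggested for this kind of redirect symptom (a sketch only, not taken from this thread) is to keep the sub-path mapping and rewrite the upstream's redirect headers with proxy_redirect:
location /gc/ {
    proxy_pass http://app2:8080/;   # /gc/home is forwarded upstream as /home
    proxy_redirect / /gc/;          # a Location: /home redirect from app2 is rewritten to /gc/home
}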

How can I host my API and web app on the same domain?

I have a Rails API and a web app (using Express), completely separate and independent from each other. What I want to know is, do I have to deploy them separately? If I do, how can I make it so that my API is at mysite.com/api and the web app at mysite.com/?
I've seen many projects that do it that way, and even have the API and the app in separate repos.
Usually you don't expose such web applications directly to clients. Instead, you use a proxy server that forwards all incoming requests to the Node or Rails server.
nginx is a popular choice for that. The beginners guide even contains a very similar example to what you're trying to do.
You could achieve what you want with a config similar to this:
server {
    location /api/ {
        proxy_pass http://localhost:8000;
    }
    location / {
        proxy_pass http://localhost:3000;
    }
}
This is assuming your API runs locally on port 8000 and your express app on port 3000. Also this is not a full configuration file - this needs to be loaded in or added to the http block. Start with the default config of your distro.
When there are multiple location entries nginx chooses the most specific one. You could even add further entries, e.g. to serve static content.
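For example, a static-content entry could look roughly like this (the /var/www path is just an assumed example):
location /static/ {
    root /var/www;   # a request for /static/logo.png is served from /var/www/static/logo.png
}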
While Sven's answer is completely correct for the question given, I'd prefer doing it at the DNS level so that I can move the server to a new location in case my API or web app experiences heavy load. This helps us run our APIs without affecting the web app, and vice versa.
DNS Structure
api.mysite.com => 9.9.9.9 // public IP address of my server
www.mysite.com => 9.9.9.9 // public IP address of my server
Since you'd now want both your web app and API to run on the same server, you can use nginx to forward requests appropriately.
server {
    listen 80;
    server_name api.mysite.com;
    # ..
    # Removed for simplicity
    # ..
    location / {
        proxy_pass http://localhost:3000;
    }
}
server {
    listen 80;
    server_name www.mysite.com;
    # ..
    # Removed for simplicity
    # ..
    location / {
        proxy_pass http://localhost:8000;
    }
}
Any time in the future, if you are experiencing overwhelming traffic, you can just alter the DNS to point to a new server and you'd be good.

How to deploy an ember-cli app that interacts with a rails API backend to a VPS

I have completed development of an ember.js app for the time being, and am wondering how I can deploy it to a production server? I successfully built a dist dir within the app, and I also cloned the ember app repo to the production server.
I am deploying the rails app with capistrano from my local box to the VPS, and everything appears to be working there. Side note, I am using Nginx as the web server, and puma as the application server for the rails apps.
Also, I have the local dev versions of the ember / rails app working great on my local box running the below commands,
rails s --binding 127.0.0.1 and,
ember server --proxy http://127.0.0.1:3000
So I decided to copy the files that were in the dist dir to the public dir of the rails app, and move the assets for the ember app to the assets dir of the rails app.
On my local dev box, I see the CSV files being presented as expected.
However, when I load the "ember app" on the production box, the CSV files are not being presented.
Which brings me to my question: what is the proper way to deploy an ember-cli app to a production server and have it communicate with a Rails API backend?
UPDATE
This is what I am seeing in the network tab.
In an ideal system, I use this setup:
On disk:
/srv/frontend/
/srv/backend/
frontend
With Ember CLI, /srv/frontend contains the output of ember build. I can use the --output-path=/srv/frontend flag to set this, if the Ember CLI source is also on the same machine. All API requests should be prefixed with /api. I do this by setting the namespace property of my ApplicationAdapter to api/ (or sometimes api/v1).
backend
/srv/backend contains my backend API (The location doesn't really matter in most cases).
For the Rails API, I use Puma as a standalone server. As long as you have a standalone server that listens on a port, it doesn't matter whether it's Puma or something else. All API routes must be namespaced under /api; you can wrap all your routes in a scope block to do this without changing your codebase. I go one step further and add another namespace, v1.
reverse proxy
Then I install nginx and make my config like this:
server {
    listen 80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443;
    # add SSL paths
    server_name localhost;
    # UI, Static Build
    location / {
        alias /srv/frontend/;
        index index.html;
    }
    # Rails API
    location /api {
        proxy_pass http://localhost:3000/;
    }
}
So now I have an Nginx config that serves / requests from /srv/frontend/index.html and proxies /api requests to Puma on port 3000.
The only downside to this is that I'm forced to use the hash location type in my Ember application. There are ways to circumvent this, but it's just easier to use hash location, and history location doesn't really buy me much.
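One such way (a sketch, not part of the original setup) is to have nginx fall back to index.html for unknown paths, which lets the history location type work with a static build:
location / {
    root /srv/frontend;
    index index.html;
    try_files $uri $uri/ /index.html;   # unknown routes fall back to the Ember app's entry point
}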