The system I'm working on consists of a SvelteKit app and a Flask app. Each runs in its own Docker container, and a third container runs an NGINX image.
The idea is that all requests that don't start with /api go to the SvelteKit app, and the ones that do go to the Flask app.
worker_processes 1;
events { worker_connections 1024; }

http {
    sendfile on;

    upstream backend {
        server backend:5002;
    }

    upstream frontend {
        server frontend:5001;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://frontend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 80;

        location /api {
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
After a lot of tries, this is the config that is closest to the result I need, but /api still goes to the SvelteKit app. So, as I see it, I don't understand anything about how nginx works.
This is the docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02e5f32f3b5f stopssis_v2_nginx "/docker-entrypoint.…" 7 minutes ago Up 6 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp nginx
4ebe13b534db stopssis_v2_frontend "docker-entrypoint.s…" 26 minutes ago Up 6 minutes 0.0.0.0:5001->5001/tcp, :::5001->5001/tcp frontend
94345bfd5123 stopssis_v2_backend "python ./app/run.py" 26 minutes ago Up 6 minutes 0.0.0.0:5002->5002/tcp, :::5002->5002/tcp backend
Also, any good resource that explains nginx visually?
Maybe take a look at the nginx beginner's guide:
Generally, the configuration file may include several server blocks distinguished by ports on which they listen to and by server names. Once nginx decides which server processes a request, it tests the URI specified in the request’s header against the parameters of the location directives defined inside the server block.
This means you should not have two separate server blocks routing traffic to different endpoints based on the request path. Instead, put both location blocks inside the same server block, since they refer to the same server name and port.
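Applied to your config, a minimal sketch of the merged version (same upstreams and headers as in your question) would look like this:

worker_processes 1;
events { worker_connections 1024; }

http {
    sendfile on;

    upstream backend {
        server backend:5002;
    }

    upstream frontend {
        server frontend:5001;
    }

    server {
        listen 80;

        # The longest matching prefix wins, so /api requests land here ...
        location /api {
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        # ... and everything else falls through to the SvelteKit app
        location / {
            proxy_pass http://frontend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}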
Related
I am learning to use nginx to connect a server and an app in docker-compose. In the app, I am trying to post data to the database, and it sends 5 requests at a time. nginx seems unhappy with that, and then I get a 502 error: POST http://localhost/api/somerequest 502 (Bad Gateway). If I use a lower frequency of 1 request at a time, it works.
The question is whether it is possible to improve nginx's performance so it can handle a larger number of concurrent requests, e.g. 5 at a time. Are there any settings in the configuration I can start with?
The current config file:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

include conf.d/events.conf;
include conf.d/http.conf;

events {
    worker_connections 1024;
}

http {
    upstream server {
        server ${SERVER_HOST}:${SERVER_PORT}; # env variable from container
        keepalive 15;
    }

    upstream client {
        server ${CLIENT_HOST}:${CLIENT_PORT}; # env variable from container
        keepalive 15;
    }

    server {
        listen 80;
        server_name myservice; # my service container name in docker-compose.yml

        error_log /var/log/nginx/myservice.error.log;
        access_log /var/log/nginx/myservice.access.log;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
            proxy_pass http://client;
        }

        location /api {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
            proxy_pass http://server/api;
        }
    }
}
Note: there is one core in the nginx container. I have tried increasing worker_connections to 4000, but it still does not work.
Thank you in advance.
As David said, nginx is usually quite efficient. The "magic" solution of making nginx wait is not really a solution unless you really know what you are doing and the dangers you are exposing yourself to.
Let me give you an example of what you are asking for:
Suppose we make nginx wait for the answer or, failing that, wait for a certain time. Now suppose that for some reason the app crashes for several hours. What will happen in that situation? Every incoming request is held open, connections pile up, and nginx itself eventually exhausts its worker connections.
UPDATE (read comments): a sketch of that delay trick using the third-party echo module, which sleeps for a second and then re-dispatches the request to a named location:

location / {
    echo_sleep 1;
    echo_exec @test;
}

location @test {
    echo $echo_request_uri;
}
I'm trying to implement SSL in my application using Docker with an nginx image. I have two apps, one for the back-end (api) and another for the front-end (admin). It's working over http on port 80, but I need to use https. This is my nginx config file...
upstream ulib-api {
    server 10.0.2.229:8001;
}

server {
    listen 80;
    server_name api.ulib.com.br;

    location / {
        proxy_pass http://ulib-api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}

upstream ulib-admin {
    server 10.0.2.229:8002;
}

server {
    listen 80;
    server_name admin.ulib.com.br;

    location / {
        proxy_pass http://ulib-admin;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}
I found some tutorials, but all of them use docker-compose. I need to set it up with a Dockerfile. Can anyone shed some light?
... I'm using an ECS instance on AWS and the project is built with CI/CD
This is just one of the possible ways:
First, issue a certificate using certbot. You will end up with a couple of *.pem files.
There are plenty of tutorials on installing and running certbot on different systems; I used Ubuntu with the command certbot --nginx certonly. You need to run this command on your domain's host, because certbot will verify that you are the owner of the domain through a number of challenges.
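For example (the domain is a placeholder; certonly fetches the certificate without modifying your nginx config):

sudo certbot certonly --nginx -d yourdomain.com -d www.yourdomain.com

By default the resulting fullchain.pem, privkey.pem and chain.pem land under /etc/letsencrypt/live/yourdomain.com/.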
Second, create the nginx containers. You will need a proper nginx.conf and to link the certificates into these containers. I use docker volumes, but that is not the only way.
My nginx.conf looks like the following:
http {
    server {
        listen 443 ssl;

        ssl_certificate /cert/<yourdomain.com>/fullchain.pem;
        ssl_certificate_key /cert/<yourdomain.com>/privkey.pem;
        ssl_trusted_certificate /cert/<yourdomain.com>/chain.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ...
    }
}
Last, run nginx with the proper volumes mounted:
docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/cert:/cert:ro -p 443:443 nginx:1.15-alpine
Notice:
I mapped $PWD/cert into the container as /cert. This is the folder where the *.pem files are stored; they live under ./cert/example.com/*.pem
Inside nginx.conf you refer to these certificates with the ssl_... directives
You have to expose port 443 to be able to connect
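Since you asked for a Dockerfile rather than docker-compose, a minimal sketch of an equivalent image (the config is baked in; the certificates are still mounted at runtime, since baking private keys into an image is a bad idea):

# Hypothetical Dockerfile mirroring the docker run command above
FROM nginx:1.15-alpine
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 443
# Run it with the certificates mounted read-only:
# docker run -d -v $PWD/cert:/cert:ro -p 443:443 <your-image>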
Hello, this is my first time deploying a Rails app to an Ubuntu server. After I configured nginx I got the "Welcome to nginx" page at a certain IP. When I start the Rails application I have to include the port in the address, for example 165.217.84.11:3000, in order to reach Rails. How do I make Rails respond by default when I visit only the IP, 165.217.84.11?
You can proxy from port 80 (which is the default) to port 3000 like this:
worker_processes 1;
events { worker_connections 1024; }

http {
    client_max_body_size 10m;
    sendfile on;

    upstream rails {
        server 165.217.84.11:3000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://rails;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Ssl off;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
So, when you access 165.217.84.11 in the browser, you should see your Rails project.
In general, you should set up nginx to use Puma's socket file; nginx will then reach the app through the socket instead of a TCP port (:3000 by default).
Here is a nice tutorial: link
And here is a short explanation why you should use sockets.
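A minimal sketch of what that looks like on the nginx side (the socket path is an assumption; it must match the bind setting in your puma.rb):

upstream rails {
    # Hypothetical path; match puma's bind "unix:///var/www/myapp/shared/sockets/puma.sock"
    server unix:/var/www/myapp/shared/sockets/puma.sock fail_timeout=0;
}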
I am a newbie to Ubuntu and server-side work in general, and I have created a Rails app and deployed it on an Ubuntu EC2 instance.
I am using nginx and the Thin server on it. The app is running perfectly.
Now I want to deploy another app on the same server.
I have already put the app on the server, but when I try to start it, the Rails app does not start.
I guess it is because of the nginx.conf file.
Can someone please let me know how to run two apps on the same server?
When you try to browse to a machine on Amazon's EC2 and you don't get any response, the prime suspect is the AWS Security Group. Make sure that the port the application runs on is open in your machine's security group:
[Screenshot of the EC2 Security Group settings (source: amazon.com)]
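If you prefer the command line, a hedged equivalent with the AWS CLI (group ID, port, and CIDR are placeholders for your own values):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3000 \
    --cidr 0.0.0.0/0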
For nginx to serve both your apps, you need to configure them both in its nginx.conf:
upstream app1 {
    server 127.0.0.1:3000;
}

upstream app2 {
    server 127.0.0.1:3020;
}

server {
    listen 80;
    server_name .example.com;

    access_log /var/www/myapp.example.com/log/access.log;
    error_log /var/www/myapp.example.com/log/error.log;
    root /var/www/myapp.example.com;
    index index.html;

    location /app1 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app1;
    }

    location /app2 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app2;
    }
}
This configuration listens for app1 on local port 3000 and app2 on local port 3020, and proxies requests starting with http://my.example.com/app1 to the first app and requests starting with http://my.example.com/app2 to the second app.
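A quick way to sanity-check the routing once both apps are up (hostname is a placeholder):

curl -i http://my.example.com/app1/
curl -i http://my.example.com/app2/

Note that the /app1 and /app2 prefixes are passed through to the apps unchanged (proxy_pass has no URI part), so each Rails app needs to be mounted under, or tolerate, its prefix.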
I'm trying to set up an nginx load balancer/proxy for two servers, with OAuth-authenticated apps running on both of them.
Everything runs fine when nginx is on port 80, but when I move it to any other port, OAuth authentication fails with an "invalid signature" error message.
Here is my server config in nginx.conf:
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://webservice;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-FORWARDED-PROTO https;
    }
}
Has anyone run into a similar problem?
PS: I've noticed that port 80 is omitted from the OAuth realm property, while other ports are included normally.
That's probably not in any way related to nginx. OAuth (1, not 2) requires a signing URL, which would be http://webservice:81 if you moved it to port 81. Make sure your OAuth code knows the website is actually on port 80 and not 81.
Either update your client to say it's port 81, or tell the server it's on 80.
Replace 81 with your favorite port.
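If you go the second route, a sketch of how the proxy can pass along the external port so the backend can rebuild the exact signing URL (whether your OAuth library honors these headers is an assumption you should verify):

location / {
    proxy_pass http://webservice;
    # Send the host:port clients actually used, so the backend reconstructs
    # the same base URL that went into the OAuth signature
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Forwarded-Port $server_port;
}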