Nginx + Gunicorn + Flask: "directive is not allowed here" error - docker

I'm trying to run an nginx+gunicorn+flask app.
The following is the code contained in /etc/nginx/conf.d/nginx.conf:
user nginx;
worker_processes auto;
# auto: when in doubt, setting it to the number of available
# CPU cores is a good start (the value "auto" will try to autodetect it).
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

https {
    upstream app_name {
        server web_1:5000;
        server web_2:5000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app_name;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
I'm completely new to this, and the configuration above was pieced together by copy-pasting from different sources...
If I run /usr/sbin/nginx, I get the error nginx: [emerg] "user" directive is not allowed here in /etc/nginx/conf.d/nginx.conf:1.
I've read some answers, but couldn't figure out how to adapt them to my case.
Note: I'm using this in a docker container, from an Ubuntu image. Also, I've been reading the documentation on Nginx, but I have some trouble understanding it. Is there an introduction to Nginx for those new to this?
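For context (not from the original thread): on the stock Ubuntu/Debian nginx packages, the main /etc/nginx/nginx.conf contains `include /etc/nginx/conf.d/*.conf;` inside its `http { ... }` block, so every file in conf.d is parsed in the http context. Main-context directives such as `user`, `worker_processes`, and `pid`, and a nested `http` block (or `https`, which is not an nginx directive at all) are therefore rejected there, which is exactly the error shown above. A sketch of a conf.d file that would be accepted under that layout, reusing the names from the question:

```nginx
# /etc/nginx/conf.d/app.conf -- this file is included inside the main
# config's http { } block, so it may only contain http-level directives
# (no `user`, `events`, or a nested `http`).
upstream app_name {
    server web_1:5000;
    server web_2:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_name;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
    }
}
```

Running `nginx -t` after a change validates the configuration without starting the server.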

Related

How to improve nginx performance when I need to send multiple API requests in docker?

I am learning to use nginx to connect a server and an app in Docker Compose. The app posts data to the database, sending 5 requests at a time. nginx seems unhappy with that, and I get a 502 error: POST http://localhost/api/somerequest 502 (Bad Gateway). If I lower the frequency to one request at a time, it works.
The question is whether it is possible to tune nginx so it can handle a larger number of concurrent requests, e.g. 5 at a time. Are there any settings in the configuration I can start with?
The current config file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
include conf.d/events.conf;
include conf.d/http.conf;

events {
    worker_connections 1024;
}

http {
    upstream server {
        server ${SERVER_HOST}:${SERVER_PORT}; # env variable from container
        keepalive 15;
    }

    upstream client {
        server ${CLIENT_HOST}:${CLIENT_PORT}; # env variable from container
        keepalive 15;
    }

    server {
        listen 80;
        server_name myservice; # my service container name in docker-compose.yml
        error_log /var/log/nginx/myservice.error.log;
        access_log /var/log/nginx/myservice.access.log;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
            proxy_pass http://client;
        }

        location /api {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
            proxy_pass http://server/api;
        }
    }
}
Note: the nginx container has one core. I have tried increasing worker_connections to 4000, but it still does not work.
Thank you in advance.
As David said, nginx is usually quite efficient. The "magic" solution of making nginx wait is not really a solution unless you know exactly what you are doing and what dangers you are exposing yourself to.
I want to give you an example of the problem with what you want:
Suppose we make nginx wait for the answer or, failing that, wait for a certain time. Now suppose that for some reason the app crashes for several hours. What will happen in that situation? Every incoming request will sit there holding a connection open until the waiting slots are exhausted.
UPDATE (read comments): this relies on the third-party echo-nginx-module, and named locations must be written with @, not # (a # starts a comment in nginx config):
location / {
    echo_sleep 1;
    echo_exec @test;
}

location @test {
    echo $echo_request_uri;
}
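If the goal is simply to give a busy upstream more room, rather than to artificially delay responses as in the echo-module trick above, the standard proxy timeout and retry directives are a safer place to start experimenting. A sketch for the /api location from the question (the values are illustrative, not recommendations):

```nginx
location /api {
    proxy_pass http://server/api;

    # wait longer for a busy upstream before returning 502/504
    proxy_connect_timeout 5s;
    proxy_read_timeout    60s;
    proxy_send_timeout    60s;

    # on a connection error or timeout, try the next upstream server
    proxy_next_upstream error timeout;

    # required for the `keepalive` directive in the upstream block
    # to actually reuse connections
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```

That said, a 502 under a burst of only five requests usually means the upstream itself refused or dropped connections; checking the backend's own logs and connection limits is worth doing before tuning nginx.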

Keycloak docker-compose behind nginx question

I have set up mysql, Keycloak and nginx in docker-compose.
In standalone and standalone-ha I changed
<web-context>keycloak/auth</web-context>
so I can use Keycloak under /keycloak.
If I expose Keycloak's port 8080 I can use it under http://localhost:8080/keycloak/auth/.
Login, changing settings, etc. all work fine.
So I can assume that this Keycloak configuration is fine.
But I want to hide it behind an nginx proxy.
here is my nginx.conf:
user nginx;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type text/html;

    server {
        listen 8080;

        location /keycloak {
            proxy_pass http://keycloak:8080/keycloak/auth/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
If I go to http://localhost/keycloak or http://localhost/keycloak/auth
I see a 404 nginx error.
I cannot find the cause of this problem.
Any idea how to solve this?
Thanks!
EDIT:
When I set proxy_pass http://keycloak:8080;
then the URL http://localhost/keycloak/auth/ works fine,
but I wonder why, if I go to http://localhost/keycloak/, I am redirected to http://localhost/auth.
Any ideas?
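For context (not from the original thread): when proxy_pass carries a URI, nginx replaces the part of the request path that matched the location prefix with that URI. With `location /keycloak` and `proxy_pass http://keycloak:8080/keycloak/auth/`, a request for /keycloak/auth is rewritten to /keycloak/auth//auth upstream, hence the 404. A sketch that maps paths one-to-one instead, assuming the same container name:

```nginx
location /keycloak/ {
    # trailing slash on both sides: the matched prefix /keycloak/ is
    # replaced by /keycloak/, so /keycloak/auth/ maps to /keycloak/auth/
    proxy_pass http://keycloak:8080/keycloak/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

The redirect to /auth in the edit is a separate issue: it is Keycloak itself issuing a Location header, which suggests the proxy headers or the web-context change are not fully consistent.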

How to implement (Certbot) ssl using Docker with Nginx image

I'm trying to implement SSL in my application using Docker with the nginx image. I have two apps, one for the back end (api) and the other for the front end (admin). It works over HTTP on port 80, but I need to use HTTPS. This is my nginx config file...
upstream ulib-api {
    server 10.0.2.229:8001;
}

server {
    listen 80;
    server_name api.ulib.com.br;

    location / {
        proxy_pass http://ulib-api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}

upstream ulib-admin {
    server 10.0.2.229:8002;
}

server {
    listen 80;
    server_name admin.ulib.com.br;

    location / {
        proxy_pass http://ulib-admin;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}
I found some tutorials, but they all use docker-compose. I need to set it up with a Dockerfile. Can anyone shed some light?
... I'm using an ECS instance on AWS and the project is built with CI/CD
This is just one of the possible ways:
First, issue a certificate using certbot. You will end up with a couple of *.pem files.
There are plenty of tutorials on installing and running certbot on different systems; I used Ubuntu with the command certbot --nginx certonly. You need to run this command on the host serving your domain, because certbot verifies that you own the domain through a number of challenges.
Second, create the nginx containers. You will need a proper nginx.conf and the certificates linked into these containers. I use docker volumes, but that is not the only way.
My nginx.conf looks like following:
http {
    server {
        listen 443 ssl;
        ssl_certificate /cert/<yourdomain.com>/fullchain.pem;
        ssl_certificate_key /cert/<yourdomain.com>/privkey.pem;
        ssl_trusted_certificate /cert/<yourdomain.com>/chain.pem;
        # SSLv3 is insecure (POODLE) and dropped from modern nginx builds
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ...
    }
}
Last, you run nginx with proper volumes connected:
docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/cert:/cert:ro -p 443:443 nginx:1.15-alpine
Notice:
- I mapped $PWD/cert into the container as /cert. This is the folder where the *.pem files are stored; they live under ./cert/example.com/*.pem
- Inside nginx.conf you refer to these certificates with the ssl_... directives
- You should publish port 443 to be able to connect
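Since the question asks for a Dockerfile rather than docker-compose, the same layout can be baked into an image instead of mounted at run time. A minimal sketch (the file paths and base image are assumptions; note that copying private keys into an image means anyone who can pull the image has the key):

```dockerfile
# Sketch: same layout as the `docker run -v` example above, built into an image.
FROM nginx:1.15-alpine

# nginx.conf containing the ssl_* directives shown above
COPY nginx.conf /etc/nginx/nginx.conf

# fullchain.pem / privkey.pem / chain.pem under ./cert/<yourdomain.com>/
COPY cert /cert

EXPOSE 443
```

With this image, `docker run -d -p 443:443 <image>` replaces the volume-mount invocation; renewals then require rebuilding the image, which is the usual argument for keeping certificates in a volume instead.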

How to configure nginx for the Rails port?

Hello, this is my first time deploying a Rails app to an Ubuntu server. After I configured nginx I got the "Welcome to nginx" page at a certain IP. When I start the Rails application I must add the port to the IP address, for example 165.217.84.11:3000, in order to access it. How do I make the app respond when I open just 165.217.84.11?
You can proxy requests from port 80 (which is the default) to port 3000 like this:
worker_processes 1;

events { worker_connections 1024; }

http {
    client_max_body_size 10m;
    sendfile on;

    upstream rails {
        server 165.217.84.11:3000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://rails; # must match the upstream name above
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Ssl off;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
So, when you access 165.217.84.11 in the browser you should see your rails project.
In general you should set up nginx to use Puma's socket file; nginx will then reach the app through the Unix socket instead of a TCP port (:3000 by default).
Here is a nice tutorial: link
And here is a short explanation of why you should use sockets.
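A sketch of what the socket-based upstream looks like; the socket path here is an assumption and must match the `bind` setting in your Puma configuration:

```nginx
upstream rails {
    # must match e.g. `bind "unix:///var/www/myapp/shared/tmp/sockets/puma.sock"`
    # in config/puma.rb
    server unix:///var/www/myapp/shared/tmp/sockets/puma.sock fail_timeout=0;
}
```

The server and location blocks stay the same as in the TCP version above; only the upstream's `server` line changes.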

how to run two apps on EC2 with nginx

I am a newbie with Ubuntu and server-side work in general. I have created a Rails app and deployed it on an Ubuntu EC2 instance.
I am using nginx and the Thin server on it. The app is running perfectly.
Now I want to deploy another app on the same server.
I have already put the app on the server, but when I try to start the Rails app it does not start.
I guess it is because of the nginx.conf file.
Can someone please let me know how to run two apps on the same server?
When you try to browse to a machine on Amazon's EC2 and you don't get any response, the prime suspect is the AWS Security Group. Make sure that the port the application runs on is open in your machine's security group.
For nginx to serve both your apps, you need to configure them both in its nginx.conf:
upstream app1 {
    server 127.0.0.1:3000;
}

upstream app2 {
    server 127.0.0.1:3020;
}

server {
    listen 80;
    server_name .example.com;
    access_log /var/www/myapp.example.com/log/access.log;
    error_log /var/www/myapp.example.com/log/error.log;
    root /var/www/myapp.example.com;
    index index.html;

    location /app1 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app1;
    }

    location /app2 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app2;
    }
}
This configuration listens on port 80 and proxies requests whose path starts with /app1 (e.g. http://my.example.com/app1) to the first app on local port 3000, and requests starting with /app2 to the second app on local port 3020.
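One caveat worth knowing (not stated in the answer above): because `proxy_pass http://app1;` carries no URI, the /app1 prefix is forwarded to the backend unchanged, so each Rails app must expect to be mounted under its prefix (e.g. via `config.relative_url_root`). Alternatively, the prefix can be stripped in nginx by putting a URI on proxy_pass:

```nginx
location /app1/ {
    # with a URI on proxy_pass, the matched prefix /app1/ is replaced
    # by /, so the backend sees /foo instead of /app1/foo
    proxy_pass http://app1/;
}
```

Which variant is correct depends on whether the apps themselves are configured to live under a sub-path.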
