Nginx file not found while doing simple proxy - docker

I'm trying to configure a simple Nginx reverse proxy; here is my nginx.conf file:
http {
    server {
        listen 80 default_server;
        listen [::]:80 default_server;

        location /api {
            proxy_pass http://172.17.0.1:8081/api;
        }
    }
}
And here is my Dockerfile
FROM openresty/openresty:latest
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80/tcp
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Now, I'm executing docker build . -t my-nginx and then docker run -p 80:80 my-nginx
Then I'm calling the endpoint at 127.0.0.1:80/api.
However, I'm getting a 404 back in the response, and in the nginx logs I can see:
172.17.0.1 - - [02/Jan/2023:14:39:16 +0000] "POST /api HTTP/1.1" 404 159 "-" "Apache-HttpClient/4.5.13 (Java/17.0.5)"
2023/01/02 14:39:16 [error] 7#7: *1 open() "/usr/local/openresty/nginx/html/api" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "POST /api HTTP/1.1", host: "127.0.0.1:80"
Why is that happening? What is wrong with that configuration?

The reason is that, by default, the openresty Docker image does not look for the nginx configuration file at /etc/nginx/nginx.conf, but at /usr/local/openresty/nginx/conf/nginx.conf.
You can verify this by running:
# nginx -t
nginx: the configuration file /usr/local/openresty/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/openresty/nginx/conf/nginx.conf test is successful
That said, the /usr/local/openresty/nginx/conf/nginx.conf file has an include directive for all files under the /etc/nginx/conf.d folder, so you can place your nginx server and location configuration there.
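For reference, the relevant line inside the http block of the bundled /usr/local/openresty/nginx/conf/nginx.conf looks roughly like this (a sketch; check the actual file in your image):

# inside the http block of /usr/local/openresty/nginx/conf/nginx.conf
include /etc/nginx/conf.d/*.conf;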
Change your Dockerfile to:
FROM openresty/openresty:latest
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80/tcp
ENTRYPOINT ["nginx", "-g", "daemon off;"]
and remove the outer http block from your nginx configuration, since the files included from conf.d are already loaded inside the main file's http block.
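With the outer http block removed, your nginx.conf contains just the server block, something like this (a sketch based on your original config):

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location /api {
        proxy_pass http://172.17.0.1:8081/api;
    }
}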
Make sure to read the Docker documentation for the openresty image, as it contains all the required information: openresty docker

Related

Why is Google Cloud Builder not running docker-compose up correctly?

After testing that my website can successfully be deployed locally with docker, I'm trying to run a docker container directly on my GCP virtual instance. Inside my cloudbuilder.yaml file is the following:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '--build']
timeout: '1600s'
When running gcloud builds submit . --config=cloudbuild.yaml --timeout=1h, I get the following error at the end of it:
ERROR
Creating jkl-api ... done
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BUILD FAILURE: Build step failure: build step 0 "docker/compose:1.26.2" failed: context deadline exceeded
ERROR: (gcloud.builds.submit) build 9712fc75-9b47-43a7-a84d-a208897fe00d completed with status "FAILURE"
Why am I getting this error?
Edit:
As per @Samantha Létourneau's comment, I decided I want to instead build the images for my project directly and then run them, rather than using docker-compose. I was able to successfully build and push a Docker image to the Container Registry with this cloudbuilder.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/lawma-project-356604/lawma-image', '.']
  # Docker Push
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/lawma-project-356604/lawma-image']
But when I try to deploy a container, I get the following error:
Cloud Run error: The user-provided container failed to start and listen on the port defined provided by the PORT=80 environment variable.
and this error [1]:
nginx: [emerg] host not found in upstream "lawma-api" in /etc/nginx/conf.d/default.conf:20
Here's my nginx.conf file:
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html;
    error_page 500 502 503 504 /50x.html;

    location / {
        try_files $uri /index.html;
        add_header Cache-Control "no-cache";
    }

    location /static {
        expires 1y;
        add_header Cache-Control "public";
    }

    location /api {
        proxy_pass http://lawma-api:8000;
    }
}
and my Dockerfile:
#Build step #1: build the React frontend
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY lawmaapp/package.json ./
COPY lawmaapp/public ./public
COPY lawmaapp/src ./src
EXPOSE 80
RUN npm install
RUN npm run build
#Build step #2: build an nginx container
FROM nginx:stable-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
Why am I getting error [1]?

Bad Gateway from nginx published with docker-compose

I am learning docker-compose, and I am trying to set up an app and nginx in one docker-compose file on my WSL Ubuntu.
I am testing my endpoint with
curl -v http://127.0.0.1/weatherforecast
But I am receiving 502 Bad Gateway from nginx.
If I change the port exposure to port publishing in docker-compose, as below, requests bypass nginx and reach my app directly, and I receive the expected response.
ports:
  - 5000:8080
My setup:
app's dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:8080
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["WebApplication2.csproj", "."]
RUN dotnet restore "./WebApplication2.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApplication2.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebApplication2.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication2.dll"]
nginx.conf
events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080/;
        }
    }
}
docker-compose.yml
version: "3.9"
services:
  web:
    depends_on:
      - nginx
    build: ./WebApplication2
    expose:
      - "8080"
  nginx:
    image: "nginx"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./logs:/var/log/nginx/
    ports:
      - 80:80
>docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------
composetest_nginx_1 /docker-entrypoint.sh ngin ... Up 0.0.0.0:80->80/tcp,:::80->80/tcp
composetest_web_1 dotnet WebApplication2.dll Up 8080/tcp
/var/log/nginx/error.log
[error] 31#31: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.26.0.1, server: , request: "GET /weatherforecast HTTP/1.1", upstream: "http://127.0.0.1:8080/weatherforecast", host: "127.0.0.1"
cURL output:
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /weatherforecast HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Fri, 13 Aug 2021 17:50:56 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
<
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
* Connection #0 to host 127.0.0.1 left intact
You should direct your request to your web container instead of 127.0.0.1. Each container runs as a separate part of the network (each has its own IP address), and 127.0.0.1 points to the local container, so in your case it points to nginx itself. Instead of a container's real IP address, you can use its DNS name (which is equal to the service name in docker-compose). Use something like:
events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;

    server {
        listen 80;

        location / {
            proxy_pass http://web:8080/;
        }
    }
}
Also, you specified that your web container depends on nginx, but it should be vice versa, like:
version: "3.9"
services:
  web:
    build: .
  nginx:
    image: "nginx"
    depends_on:
      - web
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
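If you want to double-check that nginx can reach the web service by name, you can query Docker's embedded DNS from inside the nginx container (a quick sanity check; getent ships with the Debian-based nginx image, but verify in yours):

# resolve the compose service name from inside the nginx container
docker-compose exec nginx getent hosts web
# then hit the endpoint through nginx from the host
curl -v http://127.0.0.1/weatherforecast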

"Welcome to Nginx!" - Docker-Compose, using uWSGI, Flask, nginx

My Problem:
I am using Ubuntu 18.04 and a docker-compose based solution with two Docker images, one to handle Python/uWSGI and one for my NGINX reverse proxy. No matter what I change, it always seems like uWSGI is unable to detect my default application. Whenever I run docker-compose up and navigate to localhost:5000, I get the default "Welcome to nginx!" splash page.
The complete program appears to work on our CentOS 7 machines. However, when I try to execute it on my Ubuntu test machine, I can only get the "Welcome to NGINX!" page.
Directory Structure:
/app
- app.conf
- app.ini
- app.py
- docker-compose.yml
- Dockerfile-flask
- Dockerfile-nginx
- requirements.txt
/templates
(All code snippets have been simplified to help isolate the problem)
Here is an example of my docker-compose log output:
clocker_flask_1
[uWSGI] getting INI configuration from app.ini
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
uwsgi socket 0 bound to TCP address 0.0.0.0:5000 fd 3
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** Operational MODE: preforking+threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x558072010e70 pid: 1 (default app)
clocker_nginx_1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
Here is my docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  flask:
    image: webapp-flask
    build:
      context: .
      dockerfile: Dockerfile-flask
    volumes:
      - "./:/app:z"
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - "EXTERNAL_IP=${EXTERNAL_IP}"
  nginx:
    image: webapp-nginx
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - 5000:80
    depends_on:
      - flask
Dockerfile-flask:
FROM python:3
ENV APP /app
RUN mkdir $APP
WORKDIR $APP
EXPOSE 5000
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD [ "uwsgi", "--ini", "app.ini" ]
Dockerfile-nginx
FROM nginx:latest
EXPOSE 80
COPY app.conf /etc/nginx/conf.d
app.conf
server {
    listen 80;
    root /usr/share/nginx/html;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass flask:5000;
    }
}
app.py
from flask import Flask, render_template

application = Flask(__name__)

# Home bit
@application.route('/')
@application.route('/home', methods=["GET", "POST"])
def home():
    return render_template(
        'index.html',
        er = er
    )

if __name__ == "__main__":
    application.run(host='0.0.0.0')
app.ini
[uwsgi]
protocol = uwsgi
module = app
callable = application
master = true
processes = 2
threads = 2
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true
max-requests = 1000
The nginx image comes with a main configuration file, /etc/nginx/nginx.conf, which loads every .conf file in the conf.d folder -- including your nemesis in this case, the stock /etc/nginx/conf.d/default.conf. It reads as follows (trimmed a bit for concision):
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
So your app.conf and this configuration are both active. The reason the default one wins, though, is the server_name directive it has (and yours lacks): when you hit localhost:5000, nginx matches the request's Host header against server_name and sends your request to that server block.
To fix this easily, you can just remove that file in your Dockerfile-nginx:
RUN rm /etc/nginx/conf.d/default.conf
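Your Dockerfile-nginx would then look something like this (a sketch based on your original file):

FROM nginx:latest
EXPOSE 80
# drop the stock default.conf so only app.conf is loaded from conf.d
RUN rm /etc/nginx/conf.d/default.conf
COPY app.conf /etc/nginx/conf.d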

Traefik - proxy to backend for angular application

I have set up a proxy with Nginx, which is as follows:
server {
    listen 80;
    server_name localhost;

    location /api {
        proxy_pass https://api.mydomain.com/;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
my Dockerfile
FROM node:12-alpine as builder
WORKDIR /workspace
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/www /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
This works fine, but I want to replace Nginx with Traefik for the above proxy settings. Any help would be much appreciated since I'm very new to Traefik.
With Traefik 2+, you need to configure 2 routers:
- One for the API
- One for the webapp
For the API proxy, you will have a rule like:
rule = "Host(`example.com`) && Path(`/api`)"
And the webapp will just have the host as its rule:
rule = "Host(`example.com`)"
For Kubernetes, you can do it in an IngressRoute like this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`example.com`) && PathPrefix(`/api`)
      kind: Rule
      services:
        - name: myapi-svc
          port: 80
    - match: Host(`example.com`)
      kind: Rule
      services:
        - name: mywebapp-svc
          port: 80
If the API is not inside the Kubernetes cluster, you can point that service at it with an ExternalName service like this:
---
apiVersion: v1
kind: Service
metadata:
  name: myapi-svc
  namespace: default
spec:
  externalName: api.mydomain.com
  type: ExternalName
If you want to step up from that manual configuration, you may use Traefik as described here. Watch how docker labels are used to define how to route HTTP traffic.
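As a rough sketch of that label-based approach (the image name and port 8000 are assumptions, not taken from your setup), a docker-compose service could be routed like this:

services:
  api:
    image: my-api-image
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`example.com`) && PathPrefix(`/api`)"
      - "traefik.http.services.api.loadbalancer.server.port=8000"

Traefik then builds the router and the backing service from these labels, so no separate proxy configuration file is needed.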
I personally use caddy docker proxy in Docker (Swarm, but it's not required), which I find easier to understand and use.

Supervisor - php-fpm leads to 502 Bad Gateway

I have a web application based on the php and nginx images ... Everything works great until I set a command under the PHP service configuration:
command: /usr/bin/supervisord -c /symfony/supervisord.conf
docker-compose.yml
version: '2'
services:
  php:
    build: docker/php
    tty: true
    volumes:
      - '.:/symfony'
    command: /usr/bin/supervisord -c /symfony/supervisord.conf
  nginx:
    image: nginx:1.11
    tty: true
    volumes:
      - './public/:/symfony'
      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf'
    ports:
      - '80:80'
    links:
      - php
This is my default.conf
server {
    server_name ~.*;

    location / {
        root /symfony;
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        client_max_body_size 50m;
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /symfony/public/index.php;
    }

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
}
This is my supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
[supervisord]
logfile=/tmp/supervisord.log
pidfile=/var/run/supervisord.pid
nodaemon=true
Nginx logs show me:
nginx_1 | 2018/10/02 00:42:36 [error] 11#11: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.23.0.1, server: ~.*, request: "GET / HTTP/1.1", upstream: "fastcgi://172.23.0.2:9000", host: "127.0.0.1"
As we can see, nginx reports a 502 Bad Gateway error. If I remove the last line (the command), everything works fine. If I remove the line, access the container via docker-compose exec php bash, and launch the command manually, everything also works.
Any idea why adding that command leads to a 502 Bad Gateway?
OK, I found a solution. It was a problem with supervisor: when we launch our supervisor service, the php-fpm service does not get started, so we need to add a configuration section that launches php-fpm, this time from the supervisor configuration:
[program:php-fpm]
command = /usr/local/sbin/php-fpm
autostart=true
autorestart=true
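Putting it together, the whole supervisord.conf would look roughly like this (a sketch combining the original file with the new program section; the php-fpm binary path may differ in your image):

[unix_http_server]
file=/tmp/supervisor.sock

[supervisord]
logfile=/tmp/supervisord.log
pidfile=/var/run/supervisord.pid
nodaemon=true

[program:php-fpm]
command = /usr/local/sbin/php-fpm
autostart=true
autorestart=true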
For anyone else with a similar problem:
Don't forget that the command key in the docker-compose.yml file overrides the default CMD in the Dockerfile, so that default command won't be run.
For example, if the php:7.4-fpm image's final instruction is CMD php-fpm, that php-fpm won't be run.
Therefore, if you have some custom logic to run after the container starts, don't forget to also include the original command in your command, e.g.:
command: bash -c "php-fpm & npm run dev"

Resources