How do you deploy a static web app with an Express API and MongoDB?
I've tried all kinds of ways to configure nginx, but I can't get it to talk to the API at the location /api.
I've tested that I can access the API directly and that the API can reach MongoDB, but I can't access the API through the nginx server: http://localhost:8082/api/ gives me a 404.
Here is the docker-compose for the stack.
version: "3.8"
services:
  js-alist-api:
    image: "js-alist-api:latest"
    ports:
      - "5005:5005"
    restart: always
    container_name: "js-alist-api"
    env_file:
      - ./server/.env
    volumes:
      - "./js-alist-data/public:/public"
      - "./server/oldDb.json:/oldDb.json"
  js-alist-client:
    image: "js-alist-client:latest"
    ports:
      - "8082:80"
    restart: always
    container_name: "js-alist-client"
    volumes:
      #- ./nginx-api.conf:/etc/nginx/sites-available/default.conf
      - ./nginx-api.conf:/etc/nginx/conf.d/default.conf
  database:
    container_name: mongodb
    image: mongo:latest
    restart: always
    volumes:
      - "./js-alist-data/mongodb:/data/db"
Here is js-alist-client.dockerfile:
FROM nginx:alpine
# here I copy my static web build into the image
COPY ./client-vue/vue/dist/ /usr/share/nginx/html/
EXPOSE 80/tcp
Next, here is the nginx-api.conf:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html/;
        index index.html index.htm;
    }

    location /api/ {
        proxy_pass http://localhost:5005/;
    }
}
If I access http://localhost:5005, it works.
If I run my API, it adds data to MongoDB.
If I open http://localhost:8082/, I can see the static web app.
If I open http://localhost:8082/api or http://localhost:8082/api/, I get a 404.
Also, I've noticed that if I change:

location / {
    root /usr/share/nginx/html/;
    index index.html index.htm;
}

to

location / {
    root /usr/share/nginx/html2/;
    index index.html index.htm;
}

I can still access the static web app, even though that path doesn't exist. That leads me to believe that the conf file is not enabled.
But I checked inside the js-alist-client container (/etc/nginx # cat nginx.conf):
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
It shows that everything in /etc/nginx/conf.d/ is included.
Now I don't know what is going on; it seems my conf file is not loading. What am I doing wrong?
EDIT:
After some trial and error (I'm not sure exactly what did it), I saw this line elsewhere on the internet:
listen [::]:80;
I added this line, changed proxy_pass to the container's service name as suggested, and got it working, but only half-way. It only reaches the root subpath of /api; every other subpath, such as /api/images/something/else, is not working.
New nginx conf file:
server {
    listen 80;
    listen [::]:80;

    location / {
        root /usr/share/nginx/html/;
        index index.html index.htm;
    }

    location /api/ {
        proxy_pass http://js-alist-api:5005/;
    }
}
How do I get every subpath under /api to work?
EDIT2:
The next day I came in and now even the .conf posted in EDIT is not working. I have no idea why it sometimes works and sometimes doesn't. What a load of carp.
In a container context, localhost means the container itself. So when you write proxy_pass http://localhost:5005/;, nginx passes the request on to port 5005 of the client container itself.
Docker Compose creates a network in which the containers can talk to each other using their service names as host names. So you need to change the proxy_pass statement to
proxy_pass http://js-alist-api:5005/;
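For the subpath problem mentioned in the EDIT, one general nginx behavior is worth knowing (this is standard proxy_pass semantics, not specific to this stack): when proxy_pass carries a URI part (here the trailing slash), nginx replaces the matched location prefix with that URI, so every subpath is forwarded with /api/ stripped:

```nginx
location /api/ {
    # /api/images/something/else -> http://js-alist-api:5005/images/something/else
    proxy_pass http://js-alist-api:5005/;
}
```

If the Express routes themselves are mounted under /api, use proxy_pass http://js-alist-api:5005; (no trailing slash) instead, so the original URI including the /api prefix is passed through unchanged.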
Related
I have an application composed of containerized web-services deployed with docker-compose (it's a test env). One of the containers is nginx that operates as a reverse proxy for services and also serves static files. A public domain name points to the host machine and nginx has a server section that utilizes it.
The problem I am facing is that I can't talk to nginx by that public domain name from containers launched on this same machine; the connection always times out. (For example, I tried doing a curl https://<mypublicdomain>.com.)
Referring to the containers by name (using Docker's hostnames) works just fine. Requests to the same domain name from other machines also work OK.
I understand this has to do with how docker does networking, but fail to find any docs that would outline what exactly goes wrong here. Could anyone explain the root of the issue to me or maybe just point in the right direction?
(For extra context: originally I was going to use this to set up monitoring with prometheus and blackbox exporter to make it see the server the same way anyone from the outside would do + to automatically check that SSL is working. For now I pulled back to point the prober to nginx by its docker hostname)
Nginx image
FROM nginx:stable
COPY ./nginx.conf /etc/nginx/nginx.conf.template
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
COPY ./dhparam/dhparam-2048.pem /dhparam-2048.pem
COPY ./index.html /var/www/index.html
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yaml
version: "3"
networks:
  mainnet:
    driver: bridge
services:
  my-gateway:
    container_name: my-gateway
    image: aturok/manuwor_gateway:latest
    restart: always
    networks:
      - mainnet
    ports:
      - 80:80
      - 443:443
    expose:
      - "443"
    volumes:
      - /var/stuff:/var/www
      - /var/certs:/certsdir
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
(I show only the nginx service, as others are irrelevant - I would for example spin up a nettools container and not connect it to the mainnet network - still expect the requests to reach nginx, since I am using the public domain name. The problem also happens with the containers connected to the same network)
nginx.conf (normally it comes with a bunch of env vars, replaced + removed irrelevant backend)
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;
    #include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name mydomain.com;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name mydomain.com;

        ssl_certificate /certsdir/fullchain.pem;
        ssl_certificate_key /certsdir/privkey.pem;

        server_tokens off;
        ssl_buffer_size 8k;
        ssl_dhparam /dhparam-2048.pem;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;

        root /var/www/;
        index index.html;

        location / {
            root /var/www;
            try_files $uri /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Note: certificates are ok when I access the server from elsewhere
I'm trying to run an Nginx container through a docker-compose.yml, with configuration files shared via volumes.
Here is the docker-compose.yml:
version: '3.7'
services:
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: lamp
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    depends_on:
      - db
    restart: always
    ports:
      - 8080:80
  nginx:
    image: nginx
    depends_on:
      - db
    ports:
      - 8000:80
    volumes:
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      # - ./log:/var/log/nginx
      # - ./code:/usr/share/nginx/html
  php:
    image: phpdockerio/php73-fpm
    depends_on:
      - db
The nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
And the default.conf
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
The issue is, when I run docker-compose up, I get these errors:
ERROR: for lamp-compose_nginx_1  Cannot start service nginx: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/e/Projects/lamp-compose/docker/nginx/default.conf\\\" to rootfs \\\"/var/lib/docker/overlay2/15cb114888ab7570fd7da633798ef7f094049965e83d4f4ca5500f7d4a833706/merged\\\" at \\\"/var/lib/docker/overlay2/15cb114888ab7570fd7da633798ef7f094049965e83d4f4ca5500f7d4a833706/merged/etc/nginx/conf.d\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Creating lamp-compose_php_1 ... done
ERROR: for nginx  Cannot start service nginx: OCI runtime create failed: (same "not a directory" error as above)
ERROR: Encountered errors while bringing up the project.
I already saw some answers to a similar issue saying I should change - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf to - ./docker/nginx/nginx.conf:/etc/nginx/, which makes no sense to me (and also does not work).
Would anyone know what's wrong?
From the linked error, it seems that one of the paths ./docker/nginx/nginx.conf or ./docker/nginx/default.conf is a directory, not a file.
Instead of adding the configuration files as volumes, I recommend that you use a custom Docker image for Nginx with your configuration baked in. You need to change the docker-compose file and provide a Dockerfile for Nginx as shown below.
Dockerfile
FROM nginx:latest
COPY ./docker/nginx/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
And in the docker-compose.yml:
nginx:
  build: .
  depends_on:
    - db
  ports:
    - 8000:80
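To confirm the original diagnosis before changing approaches, you can reproduce the failure mode on the host (a generic sketch using a throwaway demo/ directory, not your real project paths): when the host path is missing at first docker-compose up, Docker may create it as a directory, and a directory can't be bind-mounted onto a file in the container.

```shell
# Simulate the failure mode: default.conf exists as a directory
# (e.g. auto-created by Docker) where a regular file is expected.
mkdir -p demo/docker/nginx
mkdir demo/docker/nginx/default.conf

# The check Docker effectively performs: is the host path the right type?
if [ -d demo/docker/nginx/default.conf ]; then
    echo "default.conf is a directory - delete it and create a regular file"
fi

rm -r demo
```

If your real ./docker/nginx/default.conf shows up as a directory in ls -l, delete it, recreate it as a regular file, and the bind mount should work.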
I am attempting to use Docker to help deploy an application. The idea is to have two containers. One is the front-end containing Nginx and an Angular app.
FROM nginx
COPY ./dist/ /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/nginx.conf
It is supposed to contact a Spring Boot based API generated using the gradle-docker plugin and Dockerfile recommended by Spring:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
They seem to run fine individually (I can access them on my development machine); however, I am having trouble connecting the two.
My docker-compose.yml file:
version: '3'
services:
  webapp:
    image: com.midamcorp/emp_front:latest
    ports:
      - 80:80
      - 443:443
    networks:
      - app
  api:
    image: com.midamcorp/employee_search:latest
    ports:
      - 8080:8080
    networks:
      - app
networks:
  app:
Based upon my understanding of the Docker documentation on networks, I was under the impression that the containers would be placed in the same network and thus could interact, with the service name (for example, api) acting as the "host". Based upon this assumption, I am attempting to access the API from the Angular application through the following:
private ENDPOINT_BASE: string = "http://api:8080/employee";
This returns an error: Http failure response for (unknown url): 0 Unknown Error.
To be honest, the samples I have looked at used this concept (substituting the service name for the host to connect two containers) for database applications, not HTTP. Is what I am attempting to accomplish not possible?
EDIT:
My nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
EDIT:
Updated nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream docker-java {
        server api:8080;
    }

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8081;
        server_name localhost;

        location / {
            proxy_pass http://docker-java;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
and docker-compose.yml
version: '3'
services:
  webapp:
    image: com.midamcorp/emp_front:latest
    ports:
      - 80:80
      - 443:443
    networks:
      - app
    depends_on:
      - api
  api:
    image: com.midamcorp/employee_search:latest
    networks:
      - app
networks:
  app:
And the client / Angular app uses the following to contact the API: private ENDPOINT_BASE: string = "http://localhost:8081/employee";
Output from docker ps
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
947eb757eb4b b28217437313 "nginx -g 'daemon of…" 10 minutes ago Up 10 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp employee_service_webapp_1
e16904db67f3 com.midamcorp/employee_search:latest "java -Djava.securit…" 10 minutes ago Up 10 minutes employee_service_api_1
The problem you are experiencing is not anything weird. It's just that you did not explicitly name your containers, so Docker generated the names by itself. nginx will resolve employee_service_api_1, but will not recognize just api. Open your webapp container and take a look at your hosts file (cat /etc/hosts); it will show you employee_service_api_1 and its IP address.
How to fix it.
Add container_name to your docker-compose.yml:
version: '3'
services:
  webapp:
    image: com.midamcorp/emp_front:latest
    container_name: employee_webapp
    ports:
      - 80:80
      - 443:443
    networks:
      - app
    depends_on:
      - api
  api:
    image: com.midamcorp/employee_search:latest
    container_name: employee_api
    networks:
      - app
networks:
  app:
I always refrain from using "simple" names (i.e. just api), because multiple containers with similar names might show up on my system, so I add a prefix. In this case I named the api container employee_api, and nginx will resolve that name once you restart your containers.
I'm having trouble trying to get the following to work in Docker
What I want is that when the user requests http://localhost/api then NGINX reverse proxies to my .Net Core API running in another container.
Container Host: Windows
Container 1: NGINX
dockerfile
FROM nginx
COPY ./nginx.conf /etc/nginx/nginx.conf
nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        location /api1 {
            proxy_pass http://api;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Container 2: .Net Core API
Dead simple - API exposed on port 80 in the container
Then there is the docker-compose.yml
docker-compose.yml
version: '3'
services:
  api1:
    image: api1
    build:
      context: ./Api1
      dockerfile: Dockerfile
    ports:
      - "5010:80"
  nginx:
    image: vc-nginx
    build:
      context: ./infra/nginx
      dockerfile: Dockerfile
    ports:
      - "5000:80"
Reading the Docker documentation it states:
Links allow you to define extra aliases by which a service is
reachable from another service. They are not required to enable
services to communicate - by default, any service can reach any other
service at that service’s name.
So as my API service is called api1, I've simply referenced this in the nginx.conf file as part of the reverse proxy configuration:
proxy_pass http://api1;
Something is wrong, because when I enter http://localhost/api I get a 404 error.
Is there a way to fix this?
The problem is the nginx location configuration.
The 404 error is correct, because your configuration proxies requests from http://localhost/api/some-resource to a missing resource: your mapping is for the /api1 path, but you're asking for /api.
So you only need to change the location to /api and it will work.
Keep in mind that requests to http://localhost/api will be proxied to http://api1/api (the path is kept). If your backend is configured to expose its API with a prefixed path, this is OK; otherwise you will receive another 404 (this time from your service).
To avoid this you should rewrite the path before proxying the request with a rule like this:
# transform /api/some-resource/1 to /some-resource/1
rewrite /api/(.*) /$1 break;
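Putting the location, the rewrite, and the proxy_pass together, the block might look like this (a sketch using the api1 service name from this compose file; the extra proxy headers are optional):

```nginx
location /api {
    # strip the /api prefix: /api/some-resource/1 -> /some-resource/1
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api1;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}
```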
I am attempting to use an NGINX container to host a static web application. This container should also redirect certain requests (i.e. www.example.com/api/) to another container on the same network.
I am getting the "host not found in upstream" issue when calling docker-compose build, even though I am enforcing that the NGINX container is the last to be built.
I have tried the following solutions:
Enforcing a network name and aliases (as per Docker: proxy_pass to another container - nginx: host not found in upstream)
Adding a "resolver" directive (as per Docker Networking - nginx: [emerg] host not found in upstream and others), both for 8.8.8.8 and 127.0.0.11.
Rewriting the nginx.conf file to have the upstream definition before the location that will redirect to it, or after it.
I am running on a Docker for Windows machine that is using a mobylinux VM to run the relevant container(s). Is there something I am missing? It isn't obvious to me that the "http://webapi" address should resolve correctly, as the images are built but not running when you are calling docker-compose.
nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    upstream docker-webapi {
        server webapi:80;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            root /wwwroot/;
            try_files $uri $uri/ /index.html;
        }

        location /api {
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://docker-webapi;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
docker-compose:
version: '3'
services:
  webapi:
    image: webapi
    build:
      context: ./src/Api/WebApi
      dockerfile: Dockerfile
    volumes:
      - /etc/example/secrets/:/app/secrets/
    ports:
      - "61219:80"
  model.api:
    image: model.api
    build:
      context: ./src/Services/Model/Model.API
      dockerfile: Dockerfile
    volumes:
      - /etc/example/secrets/:/app/secrets/
    ports:
      - "61218:80"
  webapp:
    image: webapp
    build:
      context: ./src/Web/WebApp/
      dockerfile: Dockerfile
    ports:
      - "80:80"
    depends_on:
      - webapi
Dockerfile:
FROM nginx
RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf
COPY wwwroot ./wwwroot/
EXPOSE 80
RUN service nginx start
Your issue is this line:
RUN service nginx start
You never run the service command inside Docker, because there is no init system. Also, RUN commands are executed at build time, not when the container starts.
The original nginx image has everything you need for nginx to start fine. So just remove that line and it will work.
By default the nginx image has the following CMD instruction:
CMD ["nginx", "-g", "daemon off;"]
You can easily find that out by running the below command
docker history --no-trunc nginx | grep CMD
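In other words, the Dockerfile from the question would simply drop the RUN line and let the base image's CMD start nginx in the foreground (same paths as in the question):

```dockerfile
FROM nginx
RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf
COPY wwwroot ./wwwroot/
EXPOSE 80
# no "RUN service nginx start" - the inherited CMD ["nginx", "-g", "daemon off;"]
# starts nginx when the container runs
```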