Docker nginx routing to an API in another container

I am going crazy not figuring this out.
I have a Docker network named isolated-network, with an nginx server and an API running inside it. Nginx is exposed to the host on port 80 so I can reach it, but the API is not. I know I could expose the API to the host too, but I'd like it to stay isolated and only be reachable through nginx, say under /api.
I have configured nginx so that /api routes to http://my-api:8000, but I get 404 Not Found back when accessing http://localhost/api. If I run 'docker exec -it nginx sh' and curl the same URL, http://my-api:8000, I get the expected response.
Is what I'm trying even possible? I haven't found any example doing the same thing. If I can't route to http://my-api:8000, can I at least forward the API request to it and return the response?
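For reference, a minimal sketch of the kind of location block this setup usually needs (the exact config wasn't shown, so this assumes the API is reachable as my-api on the shared network). Note that the trailing slash on proxy_pass makes nginx strip the matched /api/ prefix before forwarding; without it, the upstream receives /api/... and will usually answer 404:

location /api/ {
    proxy_pass http://my-api:8000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}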

Below is an example where nginx routes traffic to a php-fpm container.
Dockerfile:
FROM php:8.1-fpm
USER root
RUN mkdir -p /var/www/ && mkdir -p /home/www/.composer && chown www:www /var/www/html && chown www:www /home/www/.composer
#groupadd -g 1000 www && useradd -ms /bin/bash -G www -g 1000 www &&
RUN usermod -aG sudo www
RUN chown -R www /var/www
USER root
EXPOSE 9000
CMD ["php-fpm"]
docker compose file:
version: "3"
services:
webserver:
image: nginx:1.21.6-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- ./configfiles/nginx/conf-fpm:/etc/nginx/conf.d
php-fpm:
image: php:8.1-fpm
working_dir: /var/www
stdin_open: true
tty: true
build:
context: ./configfiles/php-fpm
dockerfile: Dockerfile
volumes:
- ./src:/var/www
nginx config file
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    proxy_read_timeout 3600;
    proxy_connect_timeout 3600;
    proxy_send_timeout 3600;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_read_timeout 3600;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}

Related

Docker container can't connect to another container - nginx in alpine to nginx in alpine

I'm confused about making a connection from one nginx alpine container to another nginx alpine container.
Both use Laravel 9.
On the host I can access both via http://localhost:8080
and http://localhost:5001,
but when I try to use Guzzle in the frontend like this:
$response = $http->get('http://dashboard:5001');
the result is:
cURL error 7: Failed to connect to dashboard_service port 5001 after 0 ms:
Connection refused
When I try to curl from the frontend container to the dashboard container, the result is also connection refused. I can ping it, but curl does not work.
this is my docker-compose.yml
version: "3.8"
networks:
networkname:
services:
frontend:
build:
context: .
dockerfile: ./file/Dockerfile
container_name: frontend
ports:
- 8080:80
volumes:
- ./frontend:/code
- ./.docker/php-fpm.conf:/etc/php8/php-fpm.conf
- ./.docker/php.ini-production:/etc/php8/php.ini
- ./.docker/nginx.conf:/etc/nginx/nginx.conf
- ./.docker/nginx-laravel.conf:/etc/nginx/modules/nginx-laravel.conf
networks:
- networkname
dashboard:
build:
context: .
dockerfile: ./file/Dockerfile
container_name: dashboard
ports:
- 5001:80
volumes:
- ./dashboard:/code
- ./.docker/php-fpm.conf:/etc/php8/php-fpm.conf
- ./.docker/php.ini-production:/etc/php8/php.ini
- ./.docker/nginx.conf:/etc/nginx/nginx.conf
- ./.docker/nginx-laravel.conf:/etc/nginx/modules/nginx-laravel.conf
networks:
- networkname
this is my dockerfile
FROM alpine:latest
WORKDIR /var/www/html/
# Essentials
RUN echo "UTC" > /etc/timezone
RUN apk add --no-cache zip unzip curl sqlite nginx supervisor
# Installing PHP
RUN apk add --no-cache php8 \
php8-common \
php8-fpm \
# Installing composer
RUN curl -sS https://getcomposer.org/installer -o composer-setup.php
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN rm -rf composer-setup.php
# Configure supervisor
RUN mkdir -p /etc/supervisor.d/
COPY .docker/supervisord.ini /etc/supervisor.d/supervisord.ini
# Configure PHP
RUN mkdir -p /run/php/
RUN mkdir -p /test
RUN touch /run/php/php8.0-fpm.pid
CMD ["supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]
this is my nginx conf
server {
    listen 80;
    server_name localhost;
    root /code/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
I'm confused about whether I need to set this up in Docker, nginx, or Alpine Linux.
Thanks.
Hello - if you make this request from inside a container's PHP, you don't need to specify the port.
From PHP, use:
$response = $http->get('http://dashboard');
But if you call it from outside the containers, you use ip:port, for example:
$response = $http->get('http://127.0.0.1:5001');
If you are connecting to a Docker container within the same network, use the internal port. If not, use the published (external) port.
In your case, you are trying to reach the dashboard container from within the networkname network, so try http://dashboard instead of http://dashboard:5001.
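A quick way to verify this from inside the network (a sketch using the container names from the compose file above; the dashboard's nginx listens on port 80 inside the network, so no port is needed):

docker exec frontend curl -I http://dashboard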

Docker nginx with tag nginx:latest seems to cause a major issue - direct access to web directory

Upgrading the nginx Docker image to the nginx:latest tag stops PHP files from being executed and gives direct access to the web directory!
Updating docker-compose.yml from nginx:1.18.0 to nginx:latest seems to cause a major issue.
The nginx container no longer executes PHP files and instead gives direct access to all content of the web root.
Details:
Extract of docker-compose.yml (full reproducible example below):
webserver:
  #image: nginx:1.18.0
  image: nginx:latest
and then "docker-composer up -d"
raises the issue.
Effect:
nginx stops executing PHP files (using php7.4-fpm) and gives direct access to the web content,
e.g. domain.com/index.php can then be downloaded directly!
First elements:
image nginx:latest or image nginx produces the same effect
image nginx:1.18.0 (or any explicit x.y.z tag) does not produce this issue
Troubling facts:
the nginx image with tag nginx:mainline downloads version # nginx version: nginx/1.21.5
the nginx image with tag nginx:latest downloads a 1.8.0 version # nginx version: nginx/1.8.0
Probable issue:
image nginx:latest has the following file (extract):
/etc/nginx/nginx.conf
http {
    (...)
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*; # THIS LINE IS NEW - instantiates a default site
}
I don't know if this point has been noticed.
Is a Dockerfile with an "rm /etc/nginx/sites-enabled/" command an acceptable workaround, or a prerequisite?
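One way to check what a given image actually ships is to dump the effective configuration from the running container (a sketch using the webserver container name from the compose file below):

docker exec webserver nginx -v
docker exec webserver nginx -T | grep -n "include"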
Reproducible example
docker-compose.yml
version: "3"
services:
cms_php:
image: php:7.4-fpm
container_name: cms_php
restart: unless-stopped
networks:
- internal
- external
volumes:
- ./src:/var/www/html
webserver:
# image: nginx:1.18.0 # OK
# image: nginx:1.17.0 # OK
# image: nginx:mainline # OK
image: nginx:latest # NOK
# image: nginx # NOK
container_name: webserver
depends_on:
- cms_php
restart: unless-stopped
ports:
- 80:80
volumes:
- ./src:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d/
networks:
- external
networks:
external:
driver: bridge
internal:
driver: bridge
nginx-conf/nginx.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    index index.php index.html index.htm;
    root /var/www/html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass cms_php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }

    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
src/index.php
<?php echo "Hi..."; ?>
With the below setup, I am able to get the desired data. I didn't have to make changes to your files. You may have an issue with your paths/setup. Try to imitate my setup. I am using nginx:latest.
$ curl localhost:80
Hi...
Running docker processes in this setup
$ docker-compose ps
  Name                Command                 State        Ports
-----------------------------------------------------------------------
cms_php     docker-php-entrypoint php-fpm      Up      9000/tcp
webserver   /docker-entrypoint.sh ngin ...     Up      0.0.0.0:80->80/tcp
Folder structure
$ tree
.
├── docker-compose.yaml
├── nginx-conf
│   └── nginx.conf
└── src
    └── index.php

2 directories, 3 files
src/index.php
$ cat src/index.php
<?php echo "Hi..."; ?>
docker-compose.yaml
$ cat docker-compose.yaml
version: "3"
services:
cms_php:
image: php:7.4-fpm
container_name: cms_php
restart: unless-stopped
networks:
- internal
- external
volumes:
- ./src:/var/www/html
webserver:
image: nginx:latest
container_name: webserver
depends_on:
- cms_php
restart: unless-stopped
ports:
- 80:80
volumes:
- ./src:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d/
networks:
- external
networks:
external:
driver: bridge
internal:
driver: bridge
nginx-conf/nginx.conf
$ cat nginx-conf/nginx.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    index index.php index.html index.htm;
    root /var/www/html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass cms_php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }

    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}

Docker nginx self-signed certificate - can't connect to https

I have been following a few tutorials to try to get my SSL cert working with my Docker environment. I have decided to go down the route of a self-signed certificate with letsencrypt. I have generated the certificate with the following command:
certbot certonly --manual \
--preferred-challenges=dns \
--email {email_address} \
--server https://acme-v02.api.letsencrypt.org/directory \
--agree-tos \
--manual-public-ip-logging-ok \
-d "*.servee.co.uk"
NOTE: I am using multi-tenancy, so I need the wildcard on my domain.
This works - the certificate has been generated on my server. I am now trying to use it with my nginx Docker container.
My docker-compose.yml file looks like this:
...
services:
  nginx:
    build:
      context: docker/nginx
      dockerfile: Dockerfile
    ports:
      - 433:433
      - 80:80
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - app
      - mysql
    networks:
      - laravel
...
This is my Dockerfile
FROM nginx:stable-alpine
COPY ./fullchain.pem /etc/nginx/fullchain.pem
COPY ./privkey.pem /etc/nginx/privkey.pem
ADD nginx.conf /etc/nginx/nginx.conf
ADD default.conf /etc/nginx/conf.d/default.conf
RUN mkdir -p /var/www/html
RUN addgroup -g 1000 laravel && adduser -G laravel -g laravel -s /bin/sh -D laravel
RUN chown laravel:laravel /var/www/html
I am copying the pem files into the nginx container so I can use them.
Here is my default.conf file which should be loading my certificate
server {
    listen 80;
    index index.php index.html;
    server_name servee.co.uk;
    root /var/www/html/public;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

server {
    listen 443 ssl;
    server_name servee.co.uk;

    ssl_certificate /etc/nginx/fullchain.pem;
    ssl_certificate_key /etc/nginx/privkey.pem;

    index index.php index.html;

    location / {
        proxy_pass http://servee.co.uk; # for demo purposes
    }
}
The nginx container builds successfully, and when I shell into it I can find the pem files. The issue is that when I go to https://servee.co.uk I just get an "Unable to connect" error. If I go to http://servee.co.uk it works fine.
I'm not sure what I have missed. This has really put me off Docker because it's such a pain to get SSL working, so hopefully it's an easy fix.
You need to update your docker-compose.yml file to use port 443 instead of 433 to match your nginx.conf. Try the below docker-compose.yml file.
...
services:
  nginx:
    build:
      context: docker/nginx
      dockerfile: Dockerfile
    ports:
      - 443:443
      - 80:80
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - app
      - mysql
    networks:
      - laravel
...

Deploying Rails 6 app using Docker and Nginx

As I deploy my first ever Rails app to a server, I keep getting this error right from the home URL:
The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.
My configuration:
Dockerfile
FROM ruby:3.0.0-alpine3.13
RUN apk add --no-cache --update alpine-sdk nodejs postgresql-dev yarn tzdata
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN bundle install
COPY . .
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 4000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0", "-e", "production", "-p", "4000"]
entrypoint.sh
#!/bin/sh
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /app/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
docker-compose.yml
central:
  build:
    context: ./central
    dockerfile: Dockerfile
  command: sh -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 4000 -e production -b '0.0.0.0'"
  volumes:
    - ./central:/usr/src/app
  env_file:
    - ./central/.env.prod
  stdin_open: true
  tty: true
  depends_on:
    - centraldb
centraldb:
  image: postgres:12.0-alpine
  volumes:
    - centraldb:/var/lib/postgresql/data/
  env_file:
    - ./central/.env.prod.db
nginx:
  image: nginx:1.19.0-alpine
  volumes:
    - ./nginx/prod/certbot/www:/var/www/certbot
    - ./central/public/:/home/apps/central/public/:ro
  ports:
    - 80:80
    - 443:443
  depends_on:
    - central
  links:
    - central
  restart: unless-stopped
  command: '/bin/sh -c ''while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'''
nginx.conf
upstream theapp {
    server central:4000;
}

server {
    listen 80;
    server_name thedomain.com;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name thedomain.com;

    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }

    # hidden ssl config

    root /home/apps/central/public;
    index index.html;

    location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
        try_files $uri @rails;
        access_log off;
        gzip_static on;
        # to serve pre-gzipped version
        expires max;
        add_header Cache-Control public;
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    location ~ /\. {
        deny all;
    }

    location ~* ^.+\.(rb|log)$ {
        deny all;
    }

    location / {
        try_files $uri @rails;
    }

    location @rails {
        proxy_pass http://theapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        client_max_body_size 4G;
        keepalive_timeout 10;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /home/apps/central/public;
    }

    error_log /var/log/nginx/central_error.log;
    access_log /var/log/nginx/central_access.log;
}
When I check the nginx log files, they don't show anything.
The app works fine locally, and even in production via the ports config (if I go to thedomain.com:4000), but I need to serve it through nginx in production at thedomain.com, so I need a solution for this.
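One thing worth double-checking: the nginx service in the compose file above mounts the certbot webroot and the public folder, but never mounts the nginx.conf shown here into the container, so the image's stock default config may still be answering requests. A sketch of what that mount usually looks like (the ./nginx/prod/nginx.conf path is only an assumption about where the file lives):

nginx:
  volumes:
    - ./nginx/prod/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    - ./nginx/prod/certbot/www:/var/www/certbot
    - ./central/public/:/home/apps/central/public/:ro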

Having issues with nginx and docker mapping the correct folder

For whatever reason, I keep getting the default HTML page.
My Docker files are in /Users/xxx/_sites/_blah:
Dockerfile
site.conf
docker-compose.yml
My app with the code is here /Users/xxx/_sites/_blah/app
Here are my files:
Dockerfile
FROM nginx
# Change Nginx config here...
RUN rm /etc/nginx/conf.d/default.conf
ADD ./site.conf /etc/nginx/conf.d/
COPY content /var/www/html/site
COPY conf /etc/nginx
VOLUME /var/www/html/site
VOLUME /etc/nginx/conf.d/
docker-compose.yml
version: '3.3'
services:
  php:
    container_name: pam-php
    image: php:fpm
    volumes:
      - ./app:/var/www/html/site
  nginx:
    container_name: pam-nginx
    image: nginx:latest
    volumes:
      - ./app:/var/www/html/site:rw
      - ./site.conf:/etc/nginx/conf.d/site.conf:rw
    ports:
      - 7777:80
site.conf
server {
    listen 80;
    server_name localhost;
    root /var/www/html/site;

    error_log /var/log/nginx/localhost.error.log;
    access_log /var/log/nginx/localhost.access.log;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass 192.168.59.103:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Every tutorial or block of code just doesn't work or is old.
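One detail worth comparing with the other configs on this page, which reference the php-fpm service by its compose service name rather than a hard-coded IP: with the compose file above, the PHP upstream would normally be addressed as php (the service name). A sketch of that pattern, not a confirmed fix for the default-page issue:

location ~ ^/.+\.php(/|$) {
    fastcgi_pass php:9000;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param HTTPS off;
}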
