Why does an nginx-based docker image refuse connections in a VSCode devcontainer?

Consider the following setup:
devcontainer.json
{
  "name": "Deputy devcontainer",
  "dockerComposeFile": "docker-compose.yml",
  "service": "go-development",
  "settings": {
    "go.toolsManagement.checkForUpdates": "local",
    "go.useLanguageServer": true,
    "go.gopath": "/go"
  },
  "extensions": [
    "golang.Go",
    "gitlab.gitlab-workflow",
    "GitHub.copilot",
    "eamodio.gitlens",
    "zxh404.vscode-proto3",
    "bungcip.better-toml"
  ],
  "workspaceFolder": "/workspace",
  "remoteUser": "vscode"
}
docker-compose.yml
version: '3'
services:
  go-development:
    image: my-custom-docker-repository/go-development:latest
    command: /bin/sh -c "while sleep 1000; do :; done"
    volumes:
      - ..:/workspace:cached
      - ./deputy-cli-configuration.toml:/home/vscode/.deputy/configuration.toml
    user: vscode
  deputy-package-server:
    image: my-custom-docker-repository/deputy-package-server:latest
    ports:
      - "8080:8080"
    volumes:
      - ./deputy-packages:/var/opt/deputy/deputy-package-server/package
      - ./deputy-repository:/var/opt/deputy/deputy-package-server/repository
    environment:
      - RUST_LOG=debug
  index-repository:
    image: my-custom-docker-repository/deputy-repository-server
    volumes:
      - ./deputy-repository/.git:/srv/git/index.git
    ports:
      - "8082:80"
Dockerfile for index-repository:
FROM teamfruit/nginx-fcgiwrap
RUN apt-get update && apt-get upgrade -y
RUN apt-get install git-core fcgiwrap -y
COPY nginx.conf /etc/nginx/conf.d/default.conf
nginx.conf in index-repository:
server {
    listen 80;

    location ~ /git(/.*) {
        client_max_body_size 0;
        fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
        include fastcgi_params;
        fastcgi_param GIT_HTTP_EXPORT_ALL "";
        fastcgi_param GIT_PROJECT_ROOT /srv/git;
        fastcgi_param PATH_INFO $1;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }
}
Dockerfile for go-development:
FROM vscode/devcontainers/go
RUN apt-get update && apt-get upgrade -y
RUN apt-get install protobuf-compiler make debhelper dpkg-dev -y
RUN go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2
RUN go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28
RUN chmod 777 -R /go/pkg
When running the VSCode devcontainer, it is impossible to git clone http://index-repository/git/index.git; it fails with connection refused.
However, when I try to clone either directly in the index-repository container or on the host machine, the command succeeds. Connecting to deputy-package-server (a simple Rust web server) also works without any issues.
I can also see from tcpdump that the request actually reaches the nginx container (output omitted).
I also used dig to verify that DNS is not the problem.
As of right now I am really puzzled as to why the connection keeps getting refused. I suspect the issue may lie somewhere in the nginx configuration, but I cannot tell where. What am I doing wrong?

You are mapping the port "8082:80", so from the host you should access the index-repository container at http://localhost:8082 instead of http://index-repository:8082.
If you want to access the repository server container by the domain name index-repository, you should use port 80 of the repository server container (on the Docker network, the hostname defaults to the compose service name, e.g. index-repository).
Then your dev container can access it as http://index-repository:80.
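As a quick illustration of both access paths (assuming the compose file above, where the dev container and index-repository share the default compose network):
# From the host: the published port 8082 forwards to the container's port 80.
git clone http://localhost:8082/git/index.git

# From inside the dev container: the compose service name resolves as a
# hostname on the shared network, and the container itself listens on port 80.
git clone http://index-repository:80/git/index.git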

Related

Docker-Compose, NGINX, and Hot Reload Configuration

I have a functional fullstack application running through docker-compose. Works like a charm. The only problem is that the team has to rebuild the entire application to reflect changes. That means bringing the entire thing down with docker-compose down.
I'm looking for help updating the file(s) below to allow for either hot reloads OR simply enabling browser refreshes to pick up UI changes.
NOTES:
I have "dev" and "prod" npm scripts. Both behave as they were prod (currently produce a static build folder and point to it)
Any help would be greatly appreciated :)
package.json
{
  "name": "politicore",
  "version": "1.0.1",
  "description": "Redacted",
  "repository": "Redacted",
  "author": "Redacted",
  "license": "LicenseRef-LICENSE.MD",
  "private": true,
  "engines": {
    "node": "10.16.3",
    "yarn": "YARN NO LONGER USED - use npm instead."
  },
  "scripts": {
    "dev": "docker-compose up",
    "dev-force": "docker-compose up --build --force-recreate",
    "dev-force-d": "docker-compose up --build --force-recreate -d",
    "prod-up": "docker-compose -f docker-compose-prod.yml up",
    "prod-up-force": "docker-compose -f docker-compose-prod.yml up --build --force-recreate",
    "prod-up-force-d": "docker-compose -f docker-compose-prod.yml up --build --force-recreate -d",
    "dev-down": "docker-compose down",
    "dev-down-remove": "docker-compose down --remove-orphans",
    "prod-down": "docker-compose down",
    "prod-down-remove": "docker-compose down --remove-orphans"
  }
}
nginx dev config file
server {
    listen 80;
    listen 443;
    server_name MyUrl.com www.MyUrl.com;
    server_tokens off;

    proxy_hide_header X-Powered-By;
    proxy_hide_header Server;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-Permitted-Cross-Domain-Policies master-only;
    add_header Referrer-Policy same-origin;
    add_header Expect-CT 'max-age=60';
    add_header Feature-Policy "accelerometer none; ambient-light-sensor none; battery none; camera none; gyroscope none;";

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /graphql {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_pass http://api:5000;
        proxy_redirect default;
    }
}
docker-compose dev file
version: '3.6'
services:
  api:
    build:
      context: ./services/api
      dockerfile: Dockerfile-dev
    restart: always
    volumes:
      - './services/api:/usr/src/app'
      - '/usr/src/app/node_modules'
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    env_file:
      - common/.env
  client:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    restart: always
    volumes:
      - './services/client:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - 80:80
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      - api
    stdin_open: true
Client Service dockerfile
FROM node:10 as builder
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
COPY nginx/dev.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
API dockerfile (dev & prod)
FROM node:10
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
CMD ["npm", "start"]
(File tree picture omitted.)
As I understand it, your nginx file defines two areas to serve: location / and location /graphql.
The first (location /) serves static files from /usr/share/nginx/html inside the container. Those files are created during your docker build. Since they are produced in a multi-stage docker build, you will need to change your strategy. Here are several options that may help guide you.
Option 1
One option is to build locally and mount a volume.
Perform npm run build on your box (perhaps even with a file watcher to perform builds any time *.js files change).
Add - ./build:/usr/share/nginx/html to the list of volumes for the client service.
The trade-off here is that you have to forego a fully dockerized build (if that's something that matters heavily to you and your team). A minimal sketch follows.
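A minimal sketch of this option, using a stock nginx image to serve whatever the local build produces (the paths are assumptions based on the file tree implied above):
client:
  image: nginx:alpine
  volumes:
    - ./services/client/build:/usr/share/nginx/html   # locally built output
    - ./services/client/nginx/dev.conf:/etc/nginx/conf.d/default.conf
  ports:
    - 80:80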
Option 2
Utilize a hot-reloading node server for local development and build a docker image for production environments. It's hard to tell from the files whether the client is React, Angular, Vue, etc., but they typically have a pattern for running local dev servers; a sketch follows below.
The trade-off here is that you run locally differently than running in production.
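As a hedged sketch of this option, the dev compose service could bypass nginx and run the framework's dev server directly (npm start and port 3000 are assumptions, typical of a create-react-app setup; this also assumes a node-based Dockerfile-dev rather than the multi-stage nginx build shown above):
client:
  build:
    context: ./services/client
    dockerfile: Dockerfile-dev   # assumed: a node image running a dev server
  command: npm start
  volumes:
    - ./services/client:/usr/src/app
    - /usr/src/app/node_modules
  ports:
    - 3000:3000
  environment:
    - CHOKIDAR_USEPOLLING=true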
Option 3
Combine nginx and nodejs into one docker image with hot reloading inside.
Build a local docker image that contains nodejs and nginx
(You already have a volume mount of your app src files into the client service.)
Set up the image to run npm run build inside the container every time a file changes in that mounted volume
The trade-off here is that you may have more than 1 process running in a docker container (a big no-no).
Option 4
A variation of option 3 where you run 2 docker containers (see the sketch after this list).
Declare a top-level named volume client_build:
volumes:
  client_build:
Create a docker service in docker-compose with 2 volumes:
- ./services/client:/usr/src/app
- client_build:/usr/src/app/build
Add the build volume to your client service: - client_build:/usr/share/nginx/html
Make sure nginx hot-reloads when that dir changes
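A hedged compose sketch of this variation (the service names builder and web are illustrative, and the rebuild loop is a crude stand-in for a real file watcher):
volumes:
  client_build:          # shared between the builder and the nginx server

services:
  builder:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev   # assumed: a node image, not the nginx stage
    volumes:
      - ./services/client:/usr/src/app
      - client_build:/usr/src/app/build
    # Illustrative only: rebuild periodically; a file watcher would be better.
    command: sh -c "while true; do npm run build; sleep 5; done"

  web:
    image: nginx:alpine
    volumes:
      - client_build:/usr/share/nginx/html:ro
      - ./services/client/nginx/dev.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - 80:80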

How to Configure LetsEncrypt-Certbot in a Standalone Container

I'm trying to find simple documentation on running certbot in a docker container, but all I can find are complicated guides with certbot + webserver etc. The official page is kinda useless... https://hub.docker.com/r/certbot/certbot/. I already have a webserver separate from my websites and I want to run certbot on its own as well.
Can anybody give me some guidance on how I could generate certificates for mysite.com with a webroot of /opt/mysite/html?
As I already have services on ports 443 and 80, I was thinking of using the "host network" if needed for certbot, but I don't really understand why it needs access to 443 when my website is already served over 443.
I have found something like the following to generate a certbot container, but I have no idea how to "use it" or tell it to generate a cert for my site.
E.g.:
WD=/opt/certbot
mkdir -p $WD/{mnt,setup,conf,www}
cd $WD/setup
cat << 'EOF' >docker-compose.yaml
version: '3.7'
services:
  certbot:
    image: certbot/certbot
    volumes:
      - type: bind
        source: /opt/certbot/conf
        target: /etc/letsencrypt
      - type: bind
        source: /opt/certbot/www
        target: /var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
EOF
chmod +x docker-compose.yaml
This link has something close to what I need (obviously I somehow need to give it my domain as an argument!):
Letsencrypt + Docker + Nginx
docker run -it --rm \
    -v certs:/etc/letsencrypt \
    -v certs-data:/data/letsencrypt \
    deliverous/certbot \
    certonly \
    --webroot --webroot-path=/data/letsencrypt \
    -d api.mydomain.com
I like to keep everything pretty "isolated", so I'm looking to just have certbot run in its own container and configure nginx/webserver to use the certs separately, and not have certbot either autoconfigure nginx or run in the same stack as a webserver.
Well, I have been learning a lot about docker recently, and I recently learned how to look at the Dockerfile. The certbot Dockerfile gave me some more hints.
Basically, you can append the following to your docker-compose.yaml and it is as if you were appending it to certbot on the CLI. I will update with my working configs, but I was blocked due to the "Rate Limit of 5 failed auths/hour" :(
See the ENTRYPOINT of the Dockerfile:
ENTRYPOINT [ "certbot" ]
Docker-Compose.yaml:
command: certonly --webroot -w /var/www/html -d www.example.com -d example.com --non-interactive --agree-tos -m example@example.com
I will update with my full config once I get it working, and it will include variables that utilize the .env file.
Full Config Example:
WD=/opt/certbot
mkdir -p $WD/{setup,certbot_logs}
cd $WD/setup
cat << 'EOF' >docker-compose.yaml
version: '3.7'
services:
  certbot:
    container_name: certbot
    hostname: certbot
    image: certbot/certbot
    volumes:
      - type: bind
        source: /opt/certbot/certbot_logs
        target: /var/log/letsencrypt
      - type: bind
        source: /opt/nginx/ssl
        target: /etc/letsencrypt
      - type: bind
        source: ${WEBROOT}
        target: /var/www/html/
    environment:
      - 'TZ=${TZ}'
    command: certonly --webroot -w /var/www/html -d ${DOMAIN} -d www.${DOMAIN} --non-interactive --agree-tos --register-unsafely-without-email ${STAGING}
EOF
chmod +x docker-compose.yaml
cd $WD/setup
Variables:
cat << 'EOF'>.env
WEBROOT=/opt/example/example_html
DOMAIN=example.com
STAGING=--staging
TZ=America/Whitehorse
EOF
chmod +x .env
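With both files in place, bringing the stack up needs no extra flags, since docker-compose automatically reads a file named .env from the working directory. A minimal sketch:
cd /opt/certbot/setup
docker-compose up -d
docker-compose logs -f certbot   # watch the certonly run complete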
Nginx:
server {
    listen 80;
    listen [::]:80;
    server_name www.example.com example.com;

    location /.well-known/acme-challenge/ {
        proxy_pass http://localhost:8575/$request_uri;
        include /etc/nginx/conf.d/proxy.conf;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    listen [::]:443;
    server_name www.example.com example.com;

    # ssl_certificate /etc/ssl/live/example.com/fullchain.pem;
    # ssl_certificate_key /etc/ssl/live/example.com/privkey.pem;
    ssl_certificate /etc/ssl/fake/fake.crt;
    ssl_certificate_key /etc/ssl/fake/fake.key;

    location / {
        proxy_pass http://localhost:8575/;
        include /etc/nginx/conf.d/proxy.conf;
    }
}
Updated Personal Blog --> https://www.freesoftwareservers.com/display/FREES/Use+CertBot+-+LetsEncrypt+-+In+StandAlone+Docker+Container

How to enable HTTPS on AWS EC2 running an NGINX Docker container?

I have an EC2 instance on AWS that runs Amazon Linux 2.
On it, I installed Git, docker, and docker-compose. Once done, I cloned my repository and ran docker-compose up to get my production environment up. I go to the public DNS, and it works.
I now want to enable HTTPS onto the site.
My project has a React frontend served by an nginx:alpine container. The backend is a NodeJS server.
This is my nginx.conf file:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://${PROJECT_NAME}_backend:${NODE_PORT}/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Here's my docker-compose.yml file:
version: "3.7"
services:
##############################
# Back-End Container
##############################
backend: # Node-Express backend that acts as an API.
container_name: ${PROJECT_NAME}_backend
init: true
build:
context: ./backend/
target: production
restart: always
environment:
- NODE_PATH=${EXPRESS_NODE_PATH}
- AWS_REGION=${AWS_REGION}
- NODE_ENV=production
- DOCKER_BUILDKIT=1
- PORT=${NODE_PORT}
networks:
- client
##############################
# Front-End Container
##############################
nginx:
container_name: ${PROJECT_NAME}_frontend
build:
context: ./frontend/
target: production
args:
- NODE_PATH=${REACT_NODE_PATH}
- SASS_PATH=${SASS_PATH}
restart: always
environment:
- PROJECT_NAME=${PROJECT_NAME}
- NODE_PORT=${NODE_PORT}
- DOCKER_BUILDKIT=1
command: /bin/ash -c "envsubst '$$PROJECT_NAME $$NODE_PORT' < /etc/nginx/conf.d/nginx.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
expose:
- "80"
ports:
- "80:80"
depends_on:
- backend
networks:
- client
##############################
# General Config
##############################
networks:
client:
I know there's a Docker image for certbot, but I'm not sure how to use it. I'm also worried about the way I'm proxying requests to /api/ to the server over http. Will that also give me any problems?
Edit:
Attempt #1: Traefik
I created a Traefik container to route all traffic through HTTPS.
version: '2'
services:
  traefik:
    image: traefik
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/traefik/traefik.toml:/traefik.toml
      - /opt/traefik/acme.json:/acme.json
    container_name: traefik

networks:
  web:
    external: true
For the toml file, I added the following:
debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https", "http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[retry]

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "ec2-00-000-000-00.eu-west-1.compute.amazonaws.com"
watch = true
exposedByDefault = false

[acme]
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"
I added this to my docker-compose production file:
labels:
  - "traefik.docker.network=web"
  - "traefik.enable=true"
  - "traefik.basic.frontend.rule=Host:ec2-00-000-000-00.eu-west-1.compute.amazonaws.com"
  - "traefik.basic.port=80"
  - "traefik.basic.protocol=https"
I ran docker-compose up for the Traefik container, and then ran docker-compose up on my production image. I got the following error:
unable to obtain acme certificate
I'm reading the Traefik docs and apparently there's a way to configure the toml file specifically for Amazon ECS: https://docs.traefik.io/configuration/backends/ecs/
Am I on the right track?
The easiest way would be to set up an ALB and use it for HTTPS (a CLI sketch follows the steps below):
Create ALB
Add 443 Listener to ALB
Generate Certificate using AWS Certificate Manager
Set the Certificate to the default cert for the load balancer
Create Target Group
Add your EC2 Instance to the Target Group
Point the ALB to the Target Group
Requests will be served using the ALB with https
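For reference, a hedged sketch of the same steps using the AWS CLI (all IDs, names, and ARNs are placeholders; the ACM certificate must be validated before the listener can use it):
# Request a certificate for your domain (validate via DNS before use).
aws acm request-certificate --domain-name example.com --validation-method DNS

# Create the ALB, a target group, and register the EC2 instance.
aws elbv2 create-load-balancer --name my-alb \
    --subnets subnet-aaaa subnet-bbbb --security-groups sg-xxxx
aws elbv2 create-target-group --name my-tg \
    --protocol HTTP --port 80 --vpc-id vpc-xxxx
aws elbv2 register-targets --target-group-arn <tg-arn> --targets Id=i-xxxx

# Add an HTTPS listener that forwards to the target group.
aws elbv2 create-listener --load-balancer-arn <alb-arn> \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=<cert-arn> \
    --default-actions Type=forward,TargetGroupArn=<tg-arn>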
Enabling SSL can be done by following the tutorial Nginx and Let's Encrypt with Docker in Less Than 5 Minutes. I ran into some issues while following it, so I will try to clarify some things here.
The steps include adding the following to the docker-compose.yml:
##############################
# Certbot Container
##############################
certbot:
  image: certbot/certbot:latest
  volumes:
    - ./frontend/data/certbot/conf:/etc/letsencrypt
    - ./frontend/data/certbot/www:/var/www/certbot
As for the Nginx Container section of the docker-compose.yml, it should be amended to include the same volumes added to the Certbot Container, as well as the ports and expose configurations:
service_name:
  container_name: container_name
  image: nginx:alpine
  command: /bin/ash -c "exec nginx -g 'daemon off;'"
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  expose:
    - "80"
    - "443"
  ports:
    - "80:80"
    - "443:443"
  networks:
    - default
The data folder may be saved anywhere else, but make sure you know where it is and reference it properly when it is reused later. In this example, I am simply saving it in the same directory as the docker-compose.yml file.
Once the above configurations are put into place, a couple of steps are to be taken in order to initialize the issuance of the certificates.
Firstly, your Nginx configuration (default.conf) is to be changed to accommodate the domain verification request:
server {
    listen 80;
    server_name example.com www.example.com;
    server_tokens off;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri /index.html;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Once the Nginx configuration file is amended, a dummy certificate is created to allow Let's Encrypt validation to take place. There is a script that does all of this automatically; it can be downloaded into the root of the project using curl and then amended to suit the environment. The script also needs to be made executable using the chmod command:
curl -L https://raw.githubusercontent.com/wmnnd/nginx-certbot/master/init-letsencrypt.sh > init-letsencrypt.sh && chmod +x init-letsencrypt.sh
Once the script is downloaded, it is to be amended as follows:
#!/bin/bash

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

-domains=(example.org www.example.org)
+domains=(example.com www.example.com)
rsa_key_size=4096
data_path="./data/certbot"
-email="" # Adding a valid address is strongly recommended
+email="admin@example.com" # Adding a valid address is strongly recommended
staging=0 # Set to 1 when testing setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi

if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
  echo
fi

echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  openssl req -x509 -nodes -newkey rsa:1024 -days 1\
    -keyout '$path/privkey.pem' \
    -out '$path/fullchain.pem' \
    -subj '/CN=localhost'" certbot
echo

echo "### Starting nginx ..."
-docker-compose up --force-recreate -d nginx
+docker-compose -f docker-compose.yml up --force-recreate -d service_name
echo

echo "### Deleting dummy certificate for $domains ..."
-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo

echo "### Requesting Let's Encrypt certificate for $domains ..."
# Join $domains to -d args
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Select appropriate email arg
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi

-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  certbot certonly --webroot -w /var/www/certbot \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot
echo

echo "### Reloading nginx ..."
-docker-compose exec nginx nginx -s reload
+docker-compose exec service_name nginx -s reload
I have made sure to always include the -f flag with the docker-compose command, just in case someone doesn't know what to change if they have a custom-named docker-compose.yml file. I have also made sure to set the service name as service_name to differentiate between the service name and the Nginx command, unlike the tutorial.
Note: If unsure whether the setup is working, set staging to 1 to avoid hitting request limits. It is important to remember to set it back to 0 once testing is done, and to redo all steps from amending the init-letsencrypt.sh file onward. Once testing is done and staging is set to 0, it is important to stop the previously running containers and delete the data folder so that proper initial certification can take place:
$ docker-compose -f docker-compose.yml down && yes | docker system prune -a --volumes && sudo rm -rf ./data
Once the certificates are ready to be initialized, the script is to be run using sudo; it is very important to use sudo, as permission issues will occur inside the containers if it is run without it.
$ sudo ./init-letsencrypt.sh
After the certificate is issued, there is the matter of automatically renewing it; two things need to be done:
In the Nginx Container, Nginx should reload the newly obtained certificates, through the following amendment:
service_name:
  ...
- command: /bin/ash -c "exec nginx -g 'daemon off;'"
+ command: /bin/ash -c "while :; do sleep 6h & wait $${!}; nginx -s reload; done & exec nginx -g 'daemon off;'"
  ...
In the Certbot Container section, the following is to be added so that it checks whether the certificate is up for renewal every twelve hours, as recommended by Let's Encrypt:
certbot:
  ...
+ entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot; sleep 12h & wait $${!}; done;'"
Before running docker-compose -f docker-compose.yml up, the ownership of the data folder should be changed to ec2-user; this avoids permission errors when running docker-compose -f docker-compose.yml up, or running it in sudo mode:
sudo chown ec2-user:ec2-user -R /path/to/data/
Don't forget to add a CAA record with your DNS provider for Let's Encrypt (see the example below). You may read here for more information on how to do so.
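For illustration, a minimal CAA record authorizing Let's Encrypt looks like this in standard zone-file syntax (the domain and TTL are placeholders):
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"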
If you run into any issues with the Nginx container because you are substituting variables and $server_name and $request_uri are not appearing properly, you may refer to this issue.

Set up docker-compose with nginx missing snippets/fastcgi-php.conf

I am trying to set up a new docker-compose file.
version: '3'
services:
  webserver:
    image: nginx:latest
    container_name: redux-webserver
    # working_dir: /application
    volumes:
      - ./www:/var/www/
      - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "7007:80"
Currently it is very simple. Then I copy in the following config:
# Default server configuration
#
server {
    listen 7007 default_server;
    listen [::]:7007 default_server;

    root /var/www;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name example;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /redux {
        alias /var/www/Redux/src;
        try_files $uri @redux;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        }
    }

    location @redux {
        rewrite /redux/(.*)$ /redux/index.php?/$1 last;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #fastcgi_split_path_info ^(.+\.php)(/.+)$;

        # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        #fastcgi_index index.php;

        # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
But now when I try to start it with docker-compose run webserver I get the following error:
2019/07/20 08:55:09 [emerg] 1#1: open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
nginx: [emerg] open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
I understand that it does not find the file fastcgi-php.conf. But why is this? Shouldn't that file be included in the standard nginx installation?
/etc/nginx/snippets/fastcgi-php.conf is in the nginx-full package, but the nginx:latest image you used does not install the nginx-full package.
To have it, you need to write your own Dockerfile based on nginx:latest and install nginx-full in it:
Dockerfile:
FROM nginx:latest
RUN apt-get update && apt-get install -y nginx-full
docker-compose.yaml:
version: '3'
services:
  webserver:
    build: .
    image: mynginx:latest
Put the Dockerfile and docker-compose.yaml in the same folder, then bring it up.
Additionally, if you do not mind using someone else's repo (i.e. not official), you can just search for one on Docker Hub, e.g. one I found (schleyk/nginx-full):
docker run -it --rm schleyk/nginx-full ls -alh /etc/nginx/snippets/fastcgi-php.conf
-rw-r--r-- 1 root root 422 Apr 6 2018 /etc/nginx/snippets/fastcgi-php.conf
You are trying to use a docker-compose config that does not account for loading fastcgi/php-specific options.
You can use another image and link it to your web server like:
webserver:
  image: nginx:latest
  volumes:
    - ./code:/code
    - ./site.conf:/etc/nginx/conf.d/site.conf
  links:
    - php
php:
  image: php:7-fpm
  volumes:
    - ./code:/code
Source, with a more thorough explanation: http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/

Supervisor - php-fpm leads to 502 Bad Gateway

I have a web application based on PHP and nginx images... Everything works great until I set a command under the PHP service configuration:
command: /usr/bin/supervisord -c /symfony/supervisord.conf
docker-compose.yml
version: '2'
services:
php:
build: docker/php
tty: true
volumes:
- '.:/symfony'
command: /usr/bin/supervisord -c /symfony/supervisord.conf
nginx:
image: nginx:1.11
tty: true
volumes:
- './public/:/symfony'
- './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf'
ports:
- '80:80'
links:
- php
This is my default.conf
server {
    server_name ~.*;

    location / {
        root /symfony;
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        client_max_body_size 50m;
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /symfony/public/index.php;
    }

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
}
This is my supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
[supervisord]
logfile=/tmp/supervisord.log
pidfile=/var/run/supervisord.pid
nodaemon=true
Nginx logs show me:
nginx_1 | 2018/10/02 00:42:36 [error] 11#11: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.23.0.1, server: ~.*, request: "GET / HTTP/1.1", upstream: "fastcgi://172.23.0.2:9000", host: "127.0.0.1"
As you can see, nginx reports a 502 Bad Gateway error. If I remove the last line, the command, everything works fine. If I remove the line, access the container via docker-compose exec php bash, and launch the command manually, everything works as well.
Any idea why adding that command leads to a 502 Bad Gateway?
OK, I found a solution. It was a problem with supervisor: each time the supervisor service launches, the php-fpm service is stopped automatically. That's why you should add a configuration section that relaunches php-fpm, this time from the supervisor configuration.
[program:php-fpm]
command = /usr/local/sbin/php-fpm
autostart=true
autorestart=true
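For clarity, here is a sketch of the complete supervisord.conf after the fix (it combines the original file above with the new section; the -F flag is an assumption that forces php-fpm to stay in the foreground, which supervisor requires, in case your image's config does not already set daemonize = no):
[unix_http_server]
file=/tmp/supervisor.sock

[supervisord]
logfile=/tmp/supervisord.log
pidfile=/var/run/supervisord.pid
nodaemon=true

[program:php-fpm]
; relaunch php-fpm under supervisor; -F keeps it in the foreground
command=/usr/local/sbin/php-fpm -F
autostart=true
autorestart=true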
For anyone else with a similar problem:
Don't forget that the command key in the docker-compose.yml file overrides the default CMD in the Dockerfile, so that default command won't be run.
For example, if php:7.4-fpm's final instruction is CMD ["php-fpm"], it won't be run.
Therefore, if you have some custom logic to run after the container starts, don't forget to include the original command in your command as well, e.g.:
command: bash -c "php-fpm & npm run dev"
