Nginx cannot load certificate in docker container

So I'm trying to use nginx with a certbot certificate in a docker container, but I get this error, even though the file exists.
2022/10/07 11:08:47 [emerg] 15#15: cannot load certificate "/etc/nginx/certs/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: [emerg] cannot load certificate "/etc/nginx/certs/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: configuration file /etc/nginx/nginx.conf test failed
The certificates were generated outside of the docker container and mounted into nginx (so I might've done it wrong).
nginx:
  container_name: best-nginx
  build:
    context: .
  restart: always
  image: nginx:alpine
  volumes:
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    - /etc/letsencrypt/live/mycerts:/etc/nginx/certs
  ports:
    - "443:443"
default.conf
server {
    root /usr/share/nginx/html;
    index index.html index.htm index.nginx-debian.html;
    server_name myservername.com;

    location / {
        try_files $uri $uri/ =404;
    }

    location /keycloak {
        proxy_pass http://localhost:28080/;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/nginx/certs/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/nginx/certs/privkey.pem; # managed by Certbot
}
Dockerfile
# develop stage
FROM node:18-alpine as develop-stage
WORKDIR /app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./public ./public
COPY ./src ./src
# build stage
FROM develop-stage as build-stage
RUN npm run build
# production stage
FROM nginx:1.23.1-alpine as production-stage
COPY --from=build-stage /app/build /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
What I observed is that certbot generates 4 files, while I'm only using 2 of them in my default.conf.
Could that be the root of my problem?
Thanks.
//Edit:
The files exist in /etc/letsencrypt/live/mycerts, but I can't access live/mycerts without root access, so I suspect they're mapped oddly.
Here's an ls -la inside the docker container, in /etc/nginx/certs; the entries look a bit strange.
lrwxrwxrwx 1 root root 45 Oct 7 10:20 cert.pem -> ../../archive/mycerts/cert1.pem
lrwxrwxrwx 1 root root 46 Oct 7 10:20 chain.pem -> ../../archive/mycerts/chain1.pem
lrwxrwxrwx 1 root root 50 Oct 7 10:20 fullchain.pem -> ../../archive/mycerts/fullchain1.pem
lrwxrwxrwx 1 root root 48 Oct 7 10:20 privkey.pem -> ../../archive/mycerts/privkey1.pem

You are mounting a folder that contains symbolic links; inside the container you get the same symbolic links, pointing at paths that don't exist there, not the real files.
So either mount a directory with real cert files (recommended),
or additionally mount /etc/letsencrypt/archive/mycerts at the path the symlinks resolve to inside the container (here /etc/archive/mycerts), so the symlinks point to real files in the container (not recommended).
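One quick way to get a directory of real files is to dereference the symlinks with cp -L before mounting. A minimal sketch; the certbot-style layout below is simulated under a temp directory (in the real setup the source would be /etc/letsencrypt/live/mycerts):

```shell
set -eu

# Simulate certbot's layout: live/ holds symlinks into archive/
root=$(mktemp -d)
mkdir -p "$root/archive/mycerts" "$root/live/mycerts"
echo cert-data > "$root/archive/mycerts/fullchain1.pem"
ln -s ../../archive/mycerts/fullchain1.pem "$root/live/mycerts/fullchain.pem"

# cp -L follows symlinks, so the destination gets a regular file
# that is safe to bind-mount on its own
dst=$(mktemp -d)
cp -L "$root/live/mycerts/fullchain.pem" "$dst/"

[ ! -L "$dst/fullchain.pem" ] && echo "copied a real file"
```

The resulting directory can then be bind-mounted into the container in place of /etc/letsencrypt/live/mycerts.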

Related

Why is Google Cloud Builder not running docker-compose up correctly?

After testing that my website can successfully be deployed locally with docker, I'm trying to run a docker container directly on my GCP virtual instance. Inside my cloudbuilder.yaml file is the following:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '--build']
timeout: '1600s'
When running gcloud builds submit . --config=cloudbuild.yaml --timeout=1h, I get the following error at the end of it:
ERROR
Creating jkl-api ... done
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BUILD FAILURE: Build step failure: build step 0 "docker/compose:1.26.2" failed: context deadline exceeded
ERROR: (gcloud.builds.submit) build 9712fc75-9b47-43a7-a84d-a208897fe00d completed with status "FAILURE"
Why am I getting this error?
Edit:
As per @Samantha Létourneau's comment, I decided to instead directly build the images for my project and run them, rather than using docker-compose. I was able to successfully build and push a docker image to the Container Registry with this cloudbuilder.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/lawma-project-356604/lawma-image', '.']
  # Docker Push
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/lawma-project-356604/lawma-image']
But when I try to deploy a container I get the following error:
Cloud Run error: The user-provided container failed to start and listen on the port defined provided by the PORT=80 environment variable.
and this error [1]:
nginx: [emerg] host not found in upstream "lawma-api" in /etc/nginx/conf.d/default.conf:20
heres my nginx.conf file:
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html;
    error_page 500 502 503 504 /50x.html;

    location / {
        try_files $uri /index.html;
        add_header Cache-Control "no-cache";
    }

    location /static {
        expires 1y;
        add_header Cache-Control "public";
    }

    location /api {
        proxy_pass http://lawma-api:8000;
    }
}
and my Dockerfile:
#Build step #1: build the React frontend
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY lawmaapp/package.json ./
COPY lawmaapp/public ./public
COPY lawmaapp/src ./src
EXPOSE 80
RUN npm install
RUN npm run build
#Build step #2: build an nginx container
FROM nginx:stable-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
Why am I getting the error [1]?

"Welcome to Nginx!" - Docker-Compose, using uWSGI, Flask, nginx

My Problem:
I am using Ubuntu 18.04 and a docker-compose based solution with two Docker images, one to handle Python/uWSGI and one for my NGINX reverse proxy. No matter what I change, uWSGI seems unable to serve my default application: whenever I run docker-compose up and navigate to localhost:5000, I get the default "Welcome to nginx!" splash page.
The complete program appears to work on our CentOS 7 machines. However, when I try to execute it on my Ubuntu test machine, I can only get the "Welcome to nginx!" page.
Directory Structure:
/app
- app.conf
- app.ini
- app.py
- docker-compose.yml
- Dockerfile-flask
- Dockerfile-nginx
- requirements.txt
/templates
(All code snippets have been simplified to help isolate the problem)
Here is an example of my docker traceback:
clocker_flask_1
[uWSGI] getting INI configuration from app.ini
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
uwsgi socket 0 bound to TCP address 0.0.0.0:5000 fd 3
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** Operational MODE: preforking+threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x558072010e70 pid: 1 (default app)
clocker_nginx_1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
Here is my docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  flask:
    image: webapp-flask
    build:
      context: .
      dockerfile: Dockerfile-flask
    volumes:
      - "./:/app:z"
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - "EXTERNAL_IP=${EXTERNAL_IP}"
  nginx:
    image: webapp-nginx
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - 5000:80
    depends_on:
      - flask
Dockerfile-flask:
FROM python:3
ENV APP /app
RUN mkdir $APP
WORKDIR $APP
EXPOSE 5000
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD [ "uwsgi", "--ini", "app.ini" ]
Dockerfile-nginx
FROM nginx:latest
EXPOSE 80
COPY app.conf /etc/nginx/conf.d
app.conf
server {
    listen 80;
    root /usr/share/nginx/html;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass flask:5000;
    }
}
app.py
from flask import Flask, render_template

application = Flask(__name__)

# Home bit
@application.route('/')
@application.route('/home', methods=["GET", "POST"])
def home():
    return render_template(
        'index.html',
        er = er
    )

if __name__ == "__main__":
    application.run(host='0.0.0.0')
app.ini
[uwsgi]
protocol = uwsgi
module = app
callable = application
master = true
processes = 2
threads = 2
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true
max-requests = 1000
The nginx image comes with a main configuration file, /etc/nginx/nginx.conf, which loads every conf file in the conf.d folder -- including your nemesis in this case, a stock /etc/nginx/conf.d/default.conf. It reads as follows (trimmed a bit for concision):
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
So, your app.conf and this configuration are both active. This default one wins because of the server_name directive it has (and yours lacks): when you hit localhost:5000, nginx matches the request's Host header against server_name localhost and routes your request to this block.
To fix this easily, you can just remove that file in your Dockerfile-nginx:
RUN rm /etc/nginx/conf.d/default.conf
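For reference, a sketch of what Dockerfile-nginx looks like with that line added (nothing else needs to change):

```dockerfile
FROM nginx:latest
EXPOSE 80
COPY app.conf /etc/nginx/conf.d
# Drop the stock config so only app.conf defines server blocks
RUN rm /etc/nginx/conf.d/default.conf
```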

Docker-Compose, NGINX, and Hot Reload Configuration

I have a functional fullstack application running through docker-compose. Works like a charm. Only problem is that the team has to rebuild the entire application to reflect changes. That means bringing the entire thing down with docker-compose down.
I'm looking for help to update the file(s) below to allow for either hot reloads OR simply enable browser refreshes to pickup UI changes
NOTES:
I have "dev" and "prod" npm scripts. Both currently behave as if they were prod (they produce a static build folder and point to it)
Any help would be greatly appreciated :)
package.json
{
  "name": "politicore",
  "version": "1.0.1",
  "description": "Redacted",
  "repository": "Redacted",
  "author": "Redacted",
  "license": "LicenseRef-LICENSE.MD",
  "private": true,
  "engines": {
    "node": "10.16.3",
    "yarn": "YARN NO LONGER USED - use npm instead."
  },
  "scripts": {
    "dev": "docker-compose up",
    "dev-force": "docker-compose up --build --force-recreate",
    "dev-force-d": "docker-compose up --build --force-recreate -d",
    "prod-up": "docker-compose -f docker-compose-prod.yml up",
    "prod-up-force": "docker-compose -f docker-compose-prod.yml up --build --force-recreate",
    "prod-up-force-d": "docker-compose -f docker-compose-prod.yml up --build --force-recreate -d",
    "dev-down": "docker-compose down",
    "dev-down-remove": "docker-compose down --remove-orphans",
    "prod-down": "docker-compose down",
    "prod-down-remove": "docker-compose down --remove-orphans"
  }
}
nginx dev config file
server {
    listen 80;
    listen 443;
    server_name MyUrl.com www.MyUrl.com;
    server_tokens off;
    proxy_hide_header X-Powered-By;
    proxy_hide_header Server;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-Permitted-Cross-Domain-Policies master-only;
    add_header Referrer-Policy same-origin;
    add_header Expect-CT 'max-age=60';
    add_header Feature-Policy "accelerometer none; ambient-light-sensor none; battery none; camera none; gyroscope none;";

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /graphql {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_pass http://api:5000;
        proxy_redirect default;
    }
}
docker-compose dev file
version: '3.6'
services:
  api:
    build:
      context: ./services/api
      dockerfile: Dockerfile-dev
    restart: always
    volumes:
      - './services/api:/usr/src/app'
      - '/usr/src/app/node_modules'
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    env_file:
      - common/.env
  client:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    restart: always
    volumes:
      - './services/client:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - 80:80
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      - api
    stdin_open: true
Client Service dockerfile
FROM node:10 as builder
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
COPY nginx/dev.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
API dockerfile (dev & prod)
FROM node:10
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
CMD ["npm", "start"]
Filetree Picture
As I understand it, your nginx file defines 2 areas to serve: location / and location /graphql.
The first (location /) is serving up static files from /usr/share/nginx/html inside the container. Those files are created during your docker build. Since those are produced in a multi-stage docker build, you will need to change your strategy up. Here are several options that may help guide you.
Option 1
One option is to build local and mount a volume.
Perform npm run build on your box (perhaps even with a file watcher that rebuilds any time *.js files change)
Add - ./build:/usr/share/nginx/html to list of volumes for client service
The trade-off here is that you have to forego a fully dockerized build (if that's something that matters heavily to you and your team).
Option 2
Utilize a hot-reloading node server for local development and build a docker image for production environments. It's hard to tell from the files whether the client is React, Angular, Vue, etc., but such frameworks typically have a standard pattern for running a local dev server.
The trade-off here is that you run locally differently than running in production.
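For example, a sketch of what Option 2's dev service could look like in docker-compose (assuming a create-react-app-style client; the npm start script and port 3000 are assumptions, adjust to your framework's dev server):

```yaml
client-dev:
  build:
    context: ./services/client
    dockerfile: Dockerfile-dev
  command: npm start          # hot-reloading dev server instead of nginx
  volumes:
    - ./services/client:/usr/src/app
    - /usr/src/app/node_modules
  ports:
    - 3000:3000
  environment:
    - CHOKIDAR_USEPOLLING=true
```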
Option 3
Combine nginx and nodejs into one docker image with hot reloading inside.
Build a local docker image that contains nodejs and nginx
(You already have a volume mount into client of your app src files)
Set up the image to run npm run build inside the container every time a file changes in that mounted volume
The trade-off here is that you may have more than 1 process running in a docker container (a big no-no).
Option 4
A variation of option 3 where you run 2 docker containers.
Declare a top-level volume client_build
volumes:
  client_build:
Create a docker service in docker-compose with 2 volumes
- ./services/client:/usr/src/app
- client_build:/usr/src/app/build
Add the build volume to your client service: - client_build:/usr/share/nginx/html
Make sure nginx hot-reloads when that dir changes
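Put together, a sketch of the compose file for this option. The service names and the watcher command are assumptions (npx watch is the npm "watch" package; any rebuild-on-change mechanism works):

```yaml
version: '3.6'
services:
  client-builder:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    volumes:
      - ./services/client:/usr/src/app
      - client_build:/usr/src/app/build   # build output lands in the shared volume
    # hypothetical watcher: build once, then rebuild whenever src changes
    command: sh -c "npm run build && npx watch 'npm run build' src"
  client:
    image: nginx:alpine
    volumes:
      - ./services/client/nginx/dev.conf:/etc/nginx/conf.d/default.conf
      - client_build:/usr/share/nginx/html  # nginx serves the freshest build
    ports:
      - 80:80
volumes:
  client_build:
```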

Gatsby: Environment variables .env return undefined

I am working on a static website using Gatsby for the development and Nginx for serving the static files.
I am also using Docker for the deployment to test and production and Traefik for routing traffic to the docker container of the application.
I have an environment variable which I defined in the application file, and that environment variable is called from a .env file in the root folder of the application.
However, when that environment variable is invoked in the application, it comes back as:
undefined
Here's the code:
Dockerfile
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
.env
GATSBY_API_URL=https://myapi.mywebsite.com
docker-compose.yml
version: "3"
services:
  web:
    image: my-website
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      GATSBY_API_URL: ${GATSBY_API_URL}
    expose:
      - "80"
    labels:
      - traefik.enable=true
      - traefik.http.routers.my-website.rule=Host(`my-website.com`)
    restart: always
    volumes:
      - .:/app
networks:
  default:
    external:
      name: traefik-proxy
index.js
const onSubmit = async (values) => {
  try {
    const res = await axios.post(`${process.env.GATSBY_API_URL}/api/EmployeeDetail/verify`, values)
    // console.log(res, 'verify endpoint');
    if (res.data.requestSuccessful === true) {
      dispatchVerifyData({
        type: 'UPDATE_VERIFY_DATA',
        verifyData: {
          res: res.data.responseData,
          loanType: values.loanType
        }
      })
      handleNext()
    } else {
      setIsSuccessful({
        status: false,
        message: res.data.message
      })
    }
  } catch (error) {
    // error state Unsuccessful
    console.log(error, 'error')
    setIsSuccessful({
      status: false,
    })
  }
}
.dockerignore
node_modules
npm-debug.log
.DS_Store
.bin
.git
.gitignore
.bundleignore
.bundle
.byebug_history
.rspec
tmp
log
test
config/deploy
public/packs
public/packs-test
yarn-error.log
coverage/
.env
.env.production
Nginx default.conf
server {
    listen 80;
    add_header Cache-Control no-cache;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I can't still seem to tell what is the cause of the issue that causes the application to return an undefined error whenever the environment variable is invoked. Any form of help will be highly appreciated.
I finally figured it out after some long hours of debugging with my colleagues.
Here are a few things I learnt:
Firstly, by default, Gatsby supports 2 environments:
Development. If you run gatsby develop, then you will be in the development environment.
Production. If you run gatsby build or gatsby serve, then you will be in the production environment.
Note, however, that we are running npm run build in our Dockerfile, which is equivalent to gatsby build, so this informs the application that we are running in the production environment.
Secondly, defining Environment Variables for Client-side JavaScript
For Project Env Vars that you want to access in client-side browser JavaScript, you can define an environment config file, .env.development and/or .env.production, in your root folder. Depending on your active environment, the correct one will be found and its values embedded as environment variables in the browser JavaScript.
In other words, we need to rename our environment config file from .env to .env.production so the Gatsby application recognizes it in our production environment.
Thirdly, defining Environment Variables using prefixes
In addition to these Project Environment Variables defined in .env.* files, you could also define OS Env Vars. OS Env Vars which are prefixed with GATSBY_ will become available in browser JavaScript.
Note that we are already doing this in our .env config file (GATSBY_API_URL=https://myapi.mywebsite.com), so we have no issues with that.
Fourthly, removing the env config files from .dockerignore
If we look closely at how the values of environment variables are embedded in the browser JavaScript, we see that this happens at build time, not at run time.
Therefore, we need to remove the .env.* config files from .dockerignore. We can also remove the environment option in the docker-compose.yml file, since it is no longer needed: the values are embedded at build time, not at run time.
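(As an aside: if we ever wanted to inject the value at image-build time instead of committing it to .env.production, a Docker build argument could do it. A hedged sketch; the ARG plumbing below is an assumption, not part of our final setup, and would be passed with docker build --build-arg GATSBY_API_URL=...:)

```dockerfile
FROM node:latest AS builder
# hypothetical build-time injection; Gatsby inlines GATSBY_-prefixed
# env vars present during "npm run build"
ARG GATSBY_API_URL
ENV GATSBY_API_URL=$GATSBY_API_URL
WORKDIR /app
COPY package.json .
RUN npm install
COPY . ./
RUN npm run build
```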
So our code will look like this now:
Dockerfile
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
.env.production
GATSBY_API_URL=https://myapi.mywebsite.com
docker-compose.yml
version: "3"
services:
  web:
    image: my-website
    build:
      context: .
      dockerfile: Dockerfile
    expose:
      - "80"
    labels:
      - traefik.enable=true
      - traefik.http.routers.my-website.rule=Host(`my-website.com`)
    restart: always
    volumes:
      - .:/app
networks:
  default:
    external:
      name: traefik-proxy
index.js
const onSubmit = async (values) => {
  try {
    const res = await axios.post(`${process.env.GATSBY_API_URL}/api/EmployeeDetail/verify`, values)
    // console.log(res, 'verify endpoint');
    if (res.data.requestSuccessful === true) {
      dispatchVerifyData({
        type: 'UPDATE_VERIFY_DATA',
        verifyData: {
          res: res.data.responseData,
          loanType: values.loanType
        }
      })
      handleNext()
    } else {
      setIsSuccessful({
        status: false,
        message: res.data.message
      })
    }
  } catch (error) {
    // error state Unsuccessful
    console.log(error, 'error')
    setIsSuccessful({
      status: false,
    })
  }
}
.dockerignore
node_modules
npm-debug.log
.DS_Store
.bin
.git
.gitignore
.bundleignore
.bundle
.byebug_history
.rspec
tmp
log
test
config/deploy
public/packs
public/packs-test
yarn-error.log
coverage/
Nginx default.conf
server {
    listen 80;
    add_header Cache-Control no-cache;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
That's all.
I hope this helps

Set up docker-compose with nginx missing snippets/fastcgi-php.conf

I am trying to set up a new docker-compose file.
version: '3'
services:
  webserver:
    image: nginx:latest
    container_name: redux-webserver
    # working_dir: /application
    volumes:
      - ./www:/var/www/
      - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "7007:80"
Currently it is very simple. But I copy the following config:
# Default server configuration
#
server {
    listen 7007 default_server;
    listen [::]:7007 default_server;

    root /var/www;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name example;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /redux {
        alias /var/www/Redux/src;
        try_files $uri @redux;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        }
    }

    location @redux {
        rewrite /redux/(.*)$ /redux/index.php?/$1 last;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #fastcgi_split_path_info ^(.+\.php)(/.+)$;

        # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        #fastcgi_index index.php;
        # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
But now when I try to start it with docker-compose run webserver I get the following error:
2019/07/20 08:55:09 [emerg] 1#1: open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
nginx: [emerg] open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
I understand that it does not find the file fastcgi-php.conf. But why is this? Shouldn't that file be included in the standard nginx installation?
/etc/nginx/snippets/fastcgi-php.conf is part of the nginx-full package, but the nginx:latest image you used does not install nginx-full.
To have it, you need to write your own Dockerfile based on nginx:latest and install nginx-full in it:
Dockerfile:
FROM nginx:latest
RUN apt-get update && apt-get install -y nginx-full
docker-compose.yaml:
version: '3'
services:
  webserver:
    build: .
    image: mynginx:latest
Put the Dockerfile & docker-compose.yaml in the same folder, then bring it up.
Additionally, if you do not mind using someone else's repo (i.e. not an official image), you can search Docker Hub for one, e.g. one I found (schleyk/nginx-full):
docker run -it --rm schleyk/nginx-full ls -alh /etc/nginx/snippets/fastcgi-php.conf
-rw-r--r-- 1 root root 422 Apr 6 2018 /etc/nginx/snippets/fastcgi-php.conf
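Alternatively, you can ship the snippet yourself instead of installing nginx-full. The Debian-packaged fastcgi-php.conf is roughly the following (reproduced from memory, so verify against your distro's copy); save it in your build context and COPY it to /etc/nginx/snippets/fastcgi-php.conf:

```nginx
# regex to split $uri to $fastcgi_script_name and $fastcgi_path_info
fastcgi_split_path_info ^(.+?\.php)(/.*)$;

# Check that the PHP script exists before passing it
try_files $fastcgi_script_name =404;

# Bypass the fact that try_files resets $fastcgi_path_info
# see: http://trac.nginx.org/nginx/ticket/321
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;

fastcgi_index index.php;
include fastcgi.conf;
```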
Your docker-compose config does not account for the FastCGI/PHP-specific options you are trying to load.
You can use a separate PHP-FPM image and link it to your web server, like:
    volumes:
      - ./code:/code
      - ./site.conf:/etc/nginx/conf.d/site.conf
    links:
      - php
  php:
    image: php:7-fpm
    volumes:
      - ./code:/code
Source, with a more thorough explanation: http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/