Adding SSL certificate to dockerized app breaks API call - docker

We have two Docker containers: one running our Angular app and one running our Laravel API. Each has its own docker-compose file.
On localhost there was no issue making API calls from Angular to Laravel over 127.0.0.1:3000.
Then I took these containers and started them up on my Ubuntu server. Still no problem making calls over 195.xxx.xxx.xx:3000.
I then added an SSL certificate to the domain, and all of a sudden I can no longer make calls to the API over port 3000.
Can anyone tell me where I am going wrong? I have tried different ports. If I remove the certbot setup and call over HTTP, it all works fine again. Please please help...
For my SSL setup I followed this article and got it all set up without any real issues.
Here is the Docker setup for Laravel.
Dockerfile:
FROM php:7.3-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    mariadb-client \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    libzip-dev
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 3000 and start php-fpm server
EXPOSE 3000
CMD php-fpm
docker-compose.yml
version: "3"
services:

  #PHP Service
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: laravel360
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network

  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "3000:80"
      - "3001:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network

  #MySQL Service
  db:
    image: mysql:5.7.22
    container_name: db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: name
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql/
      - ./mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network

#Docker Networks
networks:
  app-network:
    driver: bridge
#Volumes
volumes:
  dbdata:
    driver: local
And finally the Nginx config file:
server {
    listen 80;
    client_max_body_size 100M;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
server {
    listen 443 ssl;
    client_max_body_size 100M;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
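As an aside (my observation, not something stated in the question): the listen 443 ssl server block above contains no ssl_certificate directives, so nginx cannot actually terminate TLS on host port 3001 (which the compose file maps to 443). If the API is meant to answer HTTPS directly, the block would need something like the following sketch, assuming the Let's Encrypt volume is also mounted into this webserver container:

```nginx
server {
    listen 443 ssl;
    # assumes ./data/certbot/conf is mounted at /etc/letsencrypt in this container too
    ssl_certificate     /etc/letsencrypt/live/mydomaindotcom/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomaindotcom/privkey.pem;
    # ... same root and location blocks as the port 80 server ...
}
```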
Angular Dockerfile:
#############
### build ###
#############
# base image
FROM node:alpine as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install
RUN npm install -g @angular/cli@~9.1.0
# add app
COPY . /app
# run tests
# RUN ng test --watch=false
# RUN ng e2e --port 4202
# generate build
RUN ng build --output-path=dist
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD [ "nginx", "-g", "daemon off;" ]
Docker Compose
version: '3'
services:
  angular:
    container_name: angular
    build:
      context: .
      dockerfile: Dockerfile-prod
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
And then finally my nginx conf for the Angular side:
server {
    listen 80;
    server_name mydomaindotcom;
    root /usr/share/nginx/html;
    index index.html;
    location / {
        try_files $uri /index.html;
    }
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
server {
    listen 443 ssl;
    server_name mydomaindotcom;
    root /usr/share/nginx/html;
    index index.html;
    location / {
        try_files $uri /index.html;
        proxy_pass http://mydomaindotcom; #for demo purposes
        proxy_set_header Host http://mydomaindotcom;
    }
    ssl_certificate /etc/letsencrypt/live/mydomaindotcom/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomaindotcom/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
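A likely reason the API calls break only after enabling HTTPS (a hypothesis, not something confirmed in the question): once the Angular app is served over https://, browsers refuse mixed-content XHR to http://…:3000. One common remedy is to route API traffic through the already-TLS-terminated Angular nginx, so the browser only ever speaks HTTPS; the /api/ prefix below is illustrative:

```nginx
# inside the "listen 443 ssl" server block of the Angular nginx
location /api/ {
    # forwards https://mydomaindotcom/api/... to the Laravel webserver on port 3000
    proxy_pass http://195.xxx.xxx.xx:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

The Angular app would then call relative /api/... URLs instead of an absolute http:// address with a port.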

Related

I need to host my react app on my domain and my api on my subdomain with Nginx on the Ubuntu server and Nginx on Docker

So basically, I have two Nginx processes: one runs on the Ubuntu server itself (20.04 LTS) for the frontend, and the other runs in Docker on the same server for the backend.
I have created a server block for both the frontend and the backend, but if I browse the backend on the subdomain (api.domain.com), it displays a 404 Not Found page with the current configuration.
The one running on the ubuntu server serves the react app with the server block below:
server {
    listen 80;
    # listen [::]:80;
    server_name domain.com www.domain.com;
    location / {
        proxy_pass http://localhost:3000;
    }
}
This server block for the React app works as intended; however, the other app, which is my API built with Laravel, MySQL, and Nginx and served with Docker, is not working as it should.
This is the server block for the API:
server {
    listen 80;
    server_name api.domain.com www.api.domain.com;
    root /var/www/public;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ .php$ {
        try_files $uri /index.php =404;
        fastcgi_pass 172.22.0.2:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
docker-compose file
version: '3'
services:

  #PHP Service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: {docker_image:latest}
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network

  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "8000:80"
      - "445:443"
    depends_on:
      - app
    volumes:
      - ./:/var/www
      # - ./nginx/conf.d/app.conf:/etc/nginx/sites-available/{api.domain.com}
    networks:
      - app-network

  #MySQL Service
  db:
    image: mysql:8.0
    container_name: ${APP_NAME}_db
    restart: unless-stopped
    tty: true
    ports:
      - "3307:3306"
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network

#Docker Networks
networks:
  app-network:
    driver: bridge
#Volumes
volumes:
  dbdata:
    driver: local
Dockerfile:
FROM php:8.0-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libwebp-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    libzip-dev
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql zip exif pcntl
RUN docker-php-ext-configure gd --enable-gd --with-jpeg --with-webp --with-freetype
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
# RUN groupadd -g 1000 www
# RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www-data:www-data . /var/www
# RUN chown -R www:www /var/www
# RUN chmod -R 755 /var/www
# Change current user to www
USER www-data
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
I have already created a symbolic link for the sites-enabled directory.
I have solved this exact problem before, but I didn't document it, and it has become really frustrating trying to find a solution these past few days.
I would really appreciate it if anyone has a solution to this problem.
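Since the dockerized nginx already publishes host port 8000, one common pattern (a sketch under that assumption, not a verified fix for this exact setup) is to let the host nginx reverse-proxy the subdomain to that published port instead of talking to PHP-FPM in the container directly; the host block then needs no root or fastcgi settings at all:

```nginx
server {
    listen 80;
    server_name api.domain.com www.api.domain.com;

    location / {
        proxy_pass http://127.0.0.1:8000;  # the dockerized webserver publishes "8000:80"
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

This also avoids the hard-coded container IP (172.22.0.2), which changes whenever the compose network is recreated.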

How to pass environment variable to nginx.conf file in docker?

I am trying to set up Ruby on Rails with Docker. Everything works, but I want to pass a dynamic domain as an environment variable into the nginx.conf file while building the image with docker-compose, and I don't know how to do it.
I am using these commands:
docker-compose build
docker-compose up
Dockerfile:
FROM ruby:2.7.2
ENV RAILS_ROOT /var/www/quickcard
ENV BUNDLE_VERSION 2.1.4
ENV BUNDLE_PATH usr/local/bundle/gems
ENV RAILS_LOG_TO_STDOUT true
ENV RAILS_PORT 5000
COPY ./entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y build-essential \
    git \
    libxml2-dev \
    libpq-dev \
    libxslt-dev \
    nodejs \
    yarn \
    imagemagick \
    tzdata \
    less \
    cron \
    && rm -rf /var/cache/apk/*
RUN gem install bundler --version "$BUNDLE_VERSION"
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
ADD Gemfile Gemfile
ADD Gemfile.lock Gemfile.lock
COPY yarn.lock yarn.lock
RUN bundle install
EXPOSE $RAILS_PORT
RUN ln -s $RAILS_ROOT/config/systemd/puma.service /etc/systemd/system/quickcard
COPY . .
RUN crontab -l | { cat; echo ""; } | crontab -
RUN yarn install
RUN yarn install --check-files
RUN ls /var/www/quickcard/public
ENTRYPOINT ["entrypoint.sh"]
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
Nginx Dockerfile:
FROM nginx
RUN apt-get update -qq && apt-get -y install apache2-utils
ENV RAILS_ROOT /var/www/quickcard
WORKDIR $RAILS_ROOT
RUN mkdir log
COPY public public/
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
COPY ./multi_quickcard.key /etc/nginx/multi_quickcard.key
COPY ./quickcard-ssl-test.pem /etc/nginx/quickcard-ssl-test.pem
EXPOSE 80 443
CMD [ "nginx", "-g", "daemon off;" ]
nginx.conf, e.g.:
upstream puma {
    # Path to Puma SOCK file, as defined previously
    server app:5000 fail_timeout=0;
}
server {
    listen 80;
    server_name default_server;
    index index.html index.htm;
    try_files $uri $uri/ /index.html =404;
    location / {
        root /var/www/quickcard/public/;
        proxy_pass http://puma;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
    location /api {
        root /var/www/quickcard/public/;
        proxy_pass http://puma;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
    location ^~ /assets/ {
        root /var/www/quickcard/public/;
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }
    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
Docker Compose File
version: '2.2'
services:
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    command: bash -c "bundle exec rails s -p 5000 -e production -b 0.0.0.0 && RAILS_ENV=production bundle exec rake assets:precompile"
    environment:
      RAILS_ENV: production
    volumes:
      - /var/wwww/quickcard
      - /var/wwww/quickcard/public
    ports:
      - 5000:5000
  sidekiq:
    build: .
    command: bundle exec sidekiq -C config/sidekiq.yml
    environment:
      RAILS_ENV: production
    volumes:
      - /var/wwww/quickcard/tmp
  cron_job:
    build: .
    command: cron -f
  nginx:
    build:
      context: .
      dockerfile: ./nginx.Dockerfile
    volumes:
      - ./log-nginx:/var/log/nginx/
    restart: always
    ports:
      - 80:80
      - 443:443
The nginx Docker image can substitute environment variables before it starts, but it's a bit tricky. One solution is to:
1. Add env variables to your nginx.conf file.
2. Copy it to /etc/nginx/templates/nginx.conf.template in the container (as opposed to your normal /etc/nginx), either in the build step or as a volume.
3. Set the NGINX_ENVSUBST_OUTPUT_DIR: /etc/nginx environment variable in docker-compose.yml.
This causes nginx.conf.template to be copied to /etc/nginx as nginx.conf, with the environment variables replaced by their values.
There is one caveat to keep in mind: using the command property in docker-compose.yml seems to disable the substitution functionality. If you need to run a custom command to start nginx, you can use the Dockerfile version.
I created a repo with the full setup, but in case it's not available:
# docker-compose.yml
version: "3"
services:
  nginx-no-dockerfile:
    container_name: nginx-no-dockerfile
    image: nginx:1.23.1-alpine
    ports:
      - 8081:80
    volumes:
      - ./site/index.html:/usr/share/nginx/html/index.html
      - ./site/nginx.conf:/etc/nginx/templates/nginx.conf.template
    working_dir: /usr/share/nginx/html
    environment:
      NGINX_ENVSUBST_OUTPUT_DIR: /etc/nginx
      API_URL: http://example.com
  nginx-with-dockerfile:
    container_name: nginx-with-dockerfile
    build:
      context: ./site
      dockerfile: ./Dockerfile
    ports:
      - 8082:80
    volumes:
      - ./site/index.html:/usr/share/nginx/html/index.html
    environment:
      NGINX_ENVSUBST_OUTPUT_DIR: /etc/nginx
      API_URL: http://example.com
# site/nginx.conf
worker_processes auto;
events {
}
http {
    include /etc/nginx/mime.types;
    server {
        listen 80;
        root /usr/share/nginx/html;
        index index.html index.htm;
        location / {
            try_files $uri $uri/ /index.html;
        }
        location /example {
            proxy_pass $API_URL;
        }
    }
}
# site/Dockerfile
FROM nginx:1.23.1-alpine
WORKDIR /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/templates/nginx.conf.template
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Run the no-Dockerfile version: docker compose up nginx-no-dockerfile
Run the Dockerfile version:
docker compose build nginx-with-dockerfile
docker compose up nginx-with-dockerfile
Make sure to also have an index.html file in the site folder.
The approach recommended by Nginx seems to be the envsubst utility. You create a template file with the variables written as $VARIABLE or ${VARIABLE}, then pass the template file through envsubst, which renders the variables from the environment.
This has a downside: it can unintentionally replace nginx's own variables, so make sure to pass envsubst the specific variables you want replaced.
See this question, which addresses a similar problem, for more details: https://serverfault.com/questions/577370/how-can-i-use-environment-variables-in-nginx-conf

How to run Nginx + SSL in docker?

This configuration was tested without Docker, and the site launched over SSL without errors. Now, when I want to run the same server configuration in Docker, I get no errors during installation, but the server does not start at all.
nginx.dockerfile
FROM nginx:stable-alpine
RUN mkdir -p /var/www/html
WORKDIR /var/www/html
RUN addgroup -g 1000 laravel && adduser -G laravel -g laravel -s /bin/sh -D laravel
RUN chown laravel:laravel /var/www/html
COPY ./nginx/ssl/mysite.ru/mysite_ru.crt /etc/nginx/ssl/mysite.ru/mysite_ru.crt
COPY ./nginx/ssl/mysite.ru/mysite_ru.key /etc/nginx/ssl/mysite.ru/mysite_ru.key
RUN apk update \
&& ln -sf ./nginx/ssl/mysite.ru /etc/nginx/ssl/mysite.ru
ADD ./nginx/nginx.conf /etc/nginx/nginx.conf
ADD ./nginx/default.conf /etc/nginx/conf.d/default.conf
The ./nginx/ssl/mysite_com folder contains the working files mysite_com.crt and mysite_com.key.
These files were checked without Docker.
docker-compose.yml
services:
  site:
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./src:/var/www/html:delegated
      - ./nginx/ssl:/etc/nginx/ssl
    depends_on:
      - php
      - mysql
      - postgres
    networks:
      - laravel
default.conf
server {
    listen 443 ssl;
    server_name mysite.ru;
    ssl_certificate /etc/nginx/ssl/mysite.ru/mysite_ru.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.ru/mysite_ru.key;
    index index.php index.html;
    root /var/www/html/public;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
Tell me, what is the error?
And where can I view nginx logs in Docker?
It seems you have not added or copied your SSL cert and key files into your nginx image.
Add the COPY instructions before RUN apk update ... and then restart nginx at the end, so that your Dockerfile looks like the one below; I think it should solve your problem:
FROM nginx:stable-alpine
RUN mkdir -p /var/www/html
WORKDIR /var/www/html
RUN addgroup -g 1000 laravel && adduser -G laravel -g laravel -s /bin/sh -D laravel
RUN chown laravel:laravel /var/www/html
COPY mysite_com.crt /etc/nginx/ssl/mysite.com/
COPY mysite_com.key /etc/nginx/ssl/mysite.com/
RUN apk update \
&& ln -sf ./nginx/ssl/mysite_com /etc/nginx/ssl
ADD ./nginx/nginx.conf /etc/nginx/nginx.conf
ADD ./nginx/default.conf /etc/nginx/conf.d/default.conf

Docker or Symfony misconfig: error SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: No such host is known

How can I fix this?
Is this a misconfiguration in docker-compose, or maybe in PHP-FPM, or should I do something in the Symfony env?
I really don't know what to change. When I call Doctrine from the console, it requires localhost as the host, and when the browser uses PDO, the host needs to be mysql (the Docker service name).
Very strange issue. After hours of debugging I found out why doctrine:migrations:migrate failed.
C:\Users\Admin\Development\lemp\www\web>php bin/console doctrine:migrations:migrate
returns the following error:
An exception occurred in driver: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: No such host is known.
I found out that running the migration works if I replace the host in the .env file:
DATABASE_URL=mysql://dummy:dummy@mysql:3306/dummy?serverVersion=5.7
with
DATABASE_URL=mysql://dummy:dummy@localhost:3306/dummy?serverVersion=5.7
But a new problem arose: http://localhost:8080/ requires the old env with mysql to run:
DATABASE_URL=mysql://dummy:dummy@mysql:3306/dummy?serverVersion=5.7
You can find all the config files in my public repo:
https://github.com/dumitriucristian/nginx-server
This is my docker-compose.yml content
version: '3'
# https://linoxide.com/containers/setup-lemp-stack-docker/
# http://www.inanzzz.com/index.php/post/zpbw/creating-a-simple-php-fpm-nginx-and-mysql-application-with-docker-compose
# https://www.pascallandau.com/blog/php-php-fpm-and-nginx-on-docker-in-windows-10/
# http://www.inanzzz.com/index.php/post/0e95/copying-symfony-application-into-docker-container-with-multi-stage-builds
# https://learn2torials.com/a/dockerize-reactjs-app
# https://knplabs.com/en/blog/how-to-dockerise-a-symfony-4-project
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/Dockerfile
    ports:
      - "8080:80"
    volumes:
      - ./nginx-server/logs:/var/log/nginx
      - ./nginx-server/default.conf:/etc/nginx/conf.d/default.conf
      - ./www/:/srv/www
    depends_on:
      - phpfpm
  phpfpm:
    build:
      context: .
      dockerfile: docker/phpfpm/Dockerfile
    ports:
      - "9000:9000"
    volumes:
      - ./www:/srv/www
      - ./docker/phpfpm/default.conf:/usr/local/etc/php-fpm.d/default.conf
    environment:
      MYSQL_USER: "dummy"
      MYSQL_PASSWORD: "dummy"
  mysql:
    image: mysql:5.7
    ports:
      - 3306:3306
    depends_on:
      - phpfpm
    environment:
      MYSQL_ROOT_PASSWORD: "dummy"
      MYSQL_DATABASE: "dummy"
      MYSQL_USER: "dummy"
      MYSQL_PASSWORD: "dummy"
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    environment:
      - NODE_ENV=test
    command: npm run start
    ports:
      - 3000:3000
    volumes:
      - ./app:/app
nginx\Dockerfile
FROM nginx:latest
RUN apt-get update && apt-get install -y unzip zlib1g-dev git curl libmcrypt-dev bcrypt nano man
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
WORKDIR /usr/src
RUN mkdir -p web
COPY ./www/web /usr/src/web
RUN PATH=$PATH:/web/vendor/bin:bin
EXPOSE 9000
EXPOSE 80
phpfpm\Dockerfile
FROM php:7.4.0-fpm-alpine
RUN apk update \
    && apk add --no-cache $PHPIZE_DEPS \
        git \
        zip \
        unzip \
    && docker-php-ext-install \
        opcache \
        pdo_mysql \
    && docker-php-ext-enable \
        opcache \
    && rm -rf \
        /var/cache/apk/* \
        /var/lib/apt/lists/*
COPY ./docker/phpfpm/php.ini /usr/local/etc/php/conf.d/php.override.ini
COPY ./nginx-server/default.conf /usr/local/etc/php-fpm.d/default.conf
and nginx default.conf
server {
    listen 0.0.0.0:80;
    listen [::]:80 default_server;
    server_name _;
    root /srv/www/web/public;
    index index.htm index.html index.php;
    default_type text/html;
    location ~* \.php$ {
        try_files $uri $uri/ /index.php;
        fastcgi_pass phpfpm:9000;
        #fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #fastcgi_pass unix:/tmp/phpcgi.socket;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Thanks,
I am new to DevOps, but I try.
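For what it's worth, the split behaviour described above is expected (my explanation, not from the thread): bin/console runs on the host, where the published port 3306 is reachable via localhost, while PHP-FPM runs inside the compose network, where only the service name mysql resolves. One conventional way to keep both working is Symfony's .env.local override, sketched here:

```ini
# .env        - read inside the phpfpm container, where Docker DNS resolves "mysql"
DATABASE_URL=mysql://dummy:dummy@mysql:3306/dummy?serverVersion=5.7

# .env.local  - read when running bin/console on the host (port 3306 is published)
# (.env.local overrides .env and is conventionally not committed)
DATABASE_URL=mysql://dummy:dummy@127.0.0.1:3306/dummy?serverVersion=5.7
```

Alternatively, run the console inside the container (docker-compose exec phpfpm php bin/console doctrine:migrations:migrate) so that the mysql hostname resolves there too.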

how can I access my web app in a URL after docker-compose up -d

Hi, can you help me? When I access app.dev in the browser, I cannot see my page; it says "unable to connect". I only see the echo in the console. Can you help me with my docker-compose.yml and my Dockerfile, please? What is missing? Is my PHP image correct?
version: "3.7"
services:
  web:
    build: .
    image: nginx:latest
    container_name: nginx-container
    ports:
      - "8080:80"
    expose:
      - 9000
    volumes:
      - ./:/var/www/myapp
      - ./default.conf:/etc/nginx/conf.d/default.conf
    links:
      - php
  php:
    image: php:7-fpm
    container_name: php-container
  db:
    image: mysql
    container_name: mysql-container
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - ./mysql-data:/var/lib/mysql
    expose:
      - 3306
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
site.conf
server {
    listen 80;
    index index.php;
    server_name app.dev;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/myapp;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
Dockerfile:
FROM php:7.2-cli
COPY . /var/www/myapp
WORKDIR /var/www/myapp
RUN apt-get update && apt-get install -y \
    libfreetype6-dev \
    libjpeg62-turbo-dev \
    libpng-dev \
    && docker-php-ext-install -j$(nproc) iconv \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd
ENTRYPOINT [ "php", "./index.php" ]
As I checked your docker-compose.yaml file:
You have mounted your current path into the container, which is supposed to hold the project files.
But site.conf:
Configures the root document for Nginx to be the / directory.
So what happens here is that Nginx tries to find index.php in /, and when it can't find it, it uses the default Nginx configuration and shows its default page.
To solve your issue:
Modify the site.conf root to be root /var/www/, and of course check the permissions and make sure they are right, and it will work.
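To make the suggested fix concrete (a sketch: the extra php volume is my assumption, since the compose file in the question gives php-container no code mount at all, so PHP-FPM cannot see index.php either):

```yaml
# docker-compose.yml (excerpt) - mount the code into the php-fpm container
# at the same path the nginx fastcgi config sends in SCRIPT_FILENAME
php:
  image: php:7-fpm
  container_name: php-container
  volumes:
    - ./:/var/www/myapp
```

With that in place, the nginx root and the path inside the php container line up, so fastcgi_param SCRIPT_FILENAME points at a file that actually exists.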
