I am trying to set up a new docker-compose file.
version: '3'
services:
  webserver:
    image: nginx:latest
    container_name: redux-webserver
    # working_dir: /application
    volumes:
      - ./www:/var/www/
      - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "7007:80"
Currently it is very simple, but I mount the following nginx config into it:
# Default server configuration
#
server {
listen 7007 default_server;
listen [::]:7007 default_server;
root /var/www;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name example;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/=404;
}
location /redux {
alias /var/www/Redux/src;
try_files $uri #redux;
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
}
}
location #redux {
rewrite /redux/(.*)$ /redux/index.php?/$1 last;
}
# pass PHP scripts to FastCGI server
#
location ~ \.php$ {
include snippets/fastcgi-php.conf;
#fastcgi_split_path_info ^(.+\.php)(/.+)$;
# With php-fpm (or other unix sockets):
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
#fastcgi_index index.php;
# With php-cgi (or other tcp sockets):
# fastcgi_pass 127.0.0.1:9000;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
}
But now when I try to start it with docker-compose run webserver I get the following error:
2019/07/20 08:55:09 [emerg] 1#1: open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
nginx: [emerg] open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
I understand it does not find the file fastcgi-php.conf. But why is this? Shouldn't that file be included in the standard nginx installation?
/etc/nginx/snippets/fastcgi-php.conf is part of the nginx-full package, but the nginx:latest image you used does not install the nginx-full package.
To get it, you need to write your own Dockerfile based on nginx:latest and install nginx-full in it:
Dockerfile:
FROM nginx:latest
RUN apt-get update && apt-get install -y nginx-full
docker-compose.yaml:
version: '3'
services:
  webserver:
    build: .
    image: mynginx:latest
Put the Dockerfile and docker-compose.yaml in the same folder, then bring it up.
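For reference, a sketch of how the webserver service from your question could look once it builds from that Dockerfile (assuming the same volumes and port mapping you already have), plus the command to rebuild and start it:

version: '3'
services:
  webserver:
    build: .
    image: mynginx:latest
    container_name: redux-webserver
    volumes:
      - ./www:/var/www/
      - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "7007:80"

docker-compose up -d --build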
Additionally, if you do not mind using someone else's (unofficial) image, you can just search Docker Hub for one, e.g. one I found there (schleyk/nginx-full):
docker run -it --rm schleyk/nginx-full ls -alh /etc/nginx/snippets/fastcgi-php.conf
-rw-r--r-- 1 root root 422 Apr 6 2018 /etc/nginx/snippets/fastcgi-php.conf
You are trying to use a docker-compose config that does not account for the FastCGI / PHP specific options your nginx config loads.
You can use a separate PHP image and link it to your web server, like:
  volumes:
    - ./code:/code
    - ./site.conf:/etc/nginx/conf.d/site.conf
  links:
    - php
php:
  image: php:7-fpm
  volumes:
    - ./code:/code
Source, with a more thorough explanation: http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/
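If you go this route, site.conf also has to hand PHP requests to the php service over TCP instead of the missing unix socket, and it can include fastcgi_params (which does ship with the nginx image) rather than snippets/fastcgi-php.conf. A sketch, assuming root /code is set in the server block and the service is named php as above:

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_pass php:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}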
Related
My Problem:
I am using Ubuntu 18.04 and a docker-compose based solution with two Docker images, one to handle Python/uWSGI and one for my NGINX reverse proxy. No matter what I change, it always seems like uWSGI is unable to detect my default application. Whenever I run docker-compose up and navigate to localhost:5000, I get the default NGINX splash page.
The complete program appears to work on our CentOS 7 machines. However, when I try to execute it on my Ubuntu test machine, I can only get the "Welcome to NGINX!" page.
Directory Structure:
/app
- app.conf
- app.ini
- app.py
- docker-compose.yml
- Dockerfile-flask
- Dockerfile-nginx
- requirements.txt
/templates
(All code snippets have been simplified to help isolate the problem)
Here is an example of my docker traceback:
clocker_flask_1
[uWSGI] getting INI configuration from app.ini
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
uwsgi socket 0 bound to TCP address 0.0.0.0:5000 fd 3
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** Operational MODE: preforking+threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x558072010e70 pid: 1 (default app)
clocker_nginx_1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
Here is my docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  flask:
    image: webapp-flask
    build:
      context: .
      dockerfile: Dockerfile-flask
    volumes:
      - "./:/app:z"
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - "EXTERNAL_IP=${EXTERNAL_IP}"
  nginx:
    image: webapp-nginx
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - 5000:80
    depends_on:
      - flask
Dockerfile-flask:
FROM python:3
ENV APP /app
RUN mkdir $APP
WORKDIR $APP
EXPOSE 5000
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD [ "uwsgi", "--ini", "app.ini" ]
Dockerfile-nginx
FROM nginx:latest
EXPOSE 80
COPY app.conf /etc/nginx/conf.d
app.conf
server {
    listen 80;
    root /usr/share/nginx/html;
    location / { try_files $uri @app; }
    location @app {
        include uwsgi_params;
        uwsgi_pass flask:5000;
    }
}
app.py
# Home bit
@application.route('/')
@application.route('/home', methods=["GET", "POST"])
def home():
    return render_template(
        'index.html',
        er = er
    )

if __name__ == "__main__":
    application.run(host='0.0.0.0')
app.ini
[uwsgi]
protocol = uwsgi
module = app
callable = application
master = true
processes = 2
threads = 2
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true
max-requests = 1000
The nginx image comes with a main configuration file, /etc/nginx/nginx.conf, which loads every conf file in the conf.d folder -- including your nemesis in this case, a stock /etc/nginx/conf.d/default.conf. It reads as follows (trimmed a bit for concision):
server {
    listen 80;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
So, your app.conf and this configuration are both active. The reason why this default one wins, though, is because of the server_name directive that it has (and yours lacks) -- when you're hitting localhost:5000, nginx matches based on the hostname and sends your request there.
To fix this easily, you can just remove that file in your Dockerfile-nginx:
RUN rm /etc/nginx/conf.d/default.conf
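So Dockerfile-nginx would end up looking something like this (the same file you posted, with only the rm added):

FROM nginx:latest
EXPOSE 80
# drop the stock default vhost so app.conf is the only active server block
RUN rm /etc/nginx/conf.d/default.conf
COPY app.conf /etc/nginx/conf.d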
There is a working yii2 project that I can't deploy.
Docker builds the project and everything is fine, but if I take the nginx config from the Docker setup and put it into nginx/sites-available/default, then this error appears:
nginx: [emerg] host not found in upstream "app:9000" in /etc/nginx/sites-enabled/default:111
I read on the forums that I need to add the depends_on directive:
depends_on:
  - app
but in that case, errors start to appear when I run:
docker-compose -f docker-compose.yml up -d
I tried different versions, from "1" to "3.4", and errors still appear. Here's the last one:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.environment: 'XDEBUG_CONFIG'
Invalid top-level property "environment". Valid top-level sections for this Compose file are: services, version, networks, volumes, and extensions starting with "x-".
You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
The project only works on php-7.0
Here is the original (from project) nginx config:
## FRONTEND ##
server {
    listen 80 default;
    root /app/frontend/web;
    index index.php index.html;
    server_name yii2-starter-kit.dev;
    charset utf-8;
    # location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|js)$ {
    #     access_log off;
    #     expires max;
    # }
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    client_max_body_size 32m;
    # There is a VirtualBox bug related to sendfile that can lead to
    # corrupted files, if not turned-off
    # sendfile off;
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        include fastcgi_params;
        ## Cache
        # fastcgi_pass_header Cookie; # fill cookie valiables, $cookie_phpsessid for exmaple
        # fastcgi_ignore_headers Cache-Control Expires Set-Cookie; # Use it with caution because it is cause SEO problems
        # fastcgi_cache_key "$request_method|$server_addr:$server_port$request_uri|$cookie_phpsessid"; # generating unique key
        # fastcgi_cache fastcgi_cache; # use fastcgi_cache keys_zone
        # fastcgi_cache_path /tmp/nginx/ levels=1:2 keys_zone=fastcgi_cache:16m max_size=256m inactive=1d;
        # fastcgi_temp_path /tmp/nginx/temp 1 2; # temp files folder
        # fastcgi_cache_use_stale updating error timeout invalid_header http_500; # show cached page if error (even if it is outdated)
        # fastcgi_cache_valid 200 404 10s; # cache lifetime for 200 404;
        # or fastcgi_cache_valid any 10s; # use it if you want to cache any responses
    }
}

## BACKEND ##
server {
    listen 80;
    root /app/backend/web;
    index index.php index.html;
    server_name backend.yii2-starter-kit.dev;
    charset utf-8;
    client_max_body_size 16m;
    # There is a VirtualBox bug related to sendfile that can lead to
    # corrupted files, if not turned-off on Vagrant based setup
    # sendfile off;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    # location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|js)$ {
    #     access_log off;
    #     expires max;
    # }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

## STORAGE ##
server {
    listen 80;
    server_name storage.yii2-starter-kit.dev;
    root /app/storage/web;
    index index.html;
    # expires max;
    # There is a VirtualBox bug related to sendfile that can lead to
    # corrupted files, if not turned-off
    # sendfile off;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

## PHP-FPM Servers ##
upstream php-fpm {
    server app:9000;
}

## MISC ##
### WWW Redirect ###
#server {
#    listen 80;
#    server_name www.yii2-starter-kit.dev;
#    return 301 http://yii2-starter-kit.dev$request_uri;
#}
Original (from project) docker-compose.yml
data:
  image: busybox:latest
  volumes:
    - ./:/app
  entrypoint: tail -f /dev/null
app:
  build: docker/php
  working_dir: /app
  volumes_from:
    - data
  expose:
    - 9000
  links:
    - db
    - mailcatcher
  environment:
    XDEBUG_CONFIG: "idekey=PHPSTORM remote_enable=On remote_connect_back=On"
nginx:
  image: nginx:latest
  ports:
    - "80:80"
  volumes:
    - ./:/app
    - ./docker/nginx/vhost.conf:/etc/nginx/conf.d/vhost.conf
  links:
    - app
mailcatcher:
  image: schickling/mailcatcher:latest
  ports:
    - "1080:1080"
db:
  image: mysql:5.7
  volumes:
    - /var/lib/mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: yii2-starter-kit
    MYSQL_USER: ysk_dbu
    MYSQL_PASSWORD: ysk_pass
Please help me put together correct configs for nginx and docker-compose.
I'm a newbie at yii2.
I would be very grateful for any help and advice.
Thank you in advance!
A Compose file without a version: key at the top level is a version 1 Compose file. This is a very old version of the Compose YAML file format that doesn't support networks, volumes, or other modern Compose features. You should select either version 2 or 3. (Version 3 is a little more oriented towards Docker's Swarm orchestrator, so for some specific options in a single-host setup you may need to specify version 2.) You need to specify a top-level version: key, and then the services you have need to go under a top-level services: key.
version: '3.8'
services:
  app: { ... }
  nginx: { ... }
  mailcatcher: { ... }
  db: { ... }
This will actually directly address your immediate issue. As discussed in Networking in Compose, Compose (with a version 2 or 3 config file) will create a default network for you and register containers so that their service names (like app) are usable as host names. You do not need links: or other configuration.
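Once the file is restructured this way and the stack is up, a quick sanity check (assuming the PHP service is still called app, and using getent, which the Debian-based nginx image provides) is to confirm the name resolves from inside the nginx container:

docker-compose up -d
docker-compose exec nginx getent hosts app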
There are also a number of other unnecessary options in the Compose file you show. You don't need to repeat an image's WORKDIR as a container's working_dir:; you don't need to expose: ports (as distinct from publishing ports: out to the host); it's not really great practice to overwrite the code that gets COPYed into an image with volumes: from the host.
In modern Docker you also tend to not use data-volume containers. Instead, newer versions of Compose have a top-level volumes: key that can declare named volumes. You'd use this, for example, for your backing database storage.
The net result of all of this would be a Compose file like:
# Specify a current Compose version.
version: '3.8'

# Declare that we'll need a named volume for the database storage.
volumes:
  mysql_data:

# The actual Compose-managed services.
services:
  app:
    build:
      # If the application is in ./app, then the build context
      # directory must be the current directory (to be able to
      #   COPY app ./
      # ).
      context: .
      dockerfile: docker/php/Dockerfile
    environment:
      XDEBUG_CONFIG: "idekey=PHPSTORM remote_enable=On remote_connect_back=On"
  nginx:
    # Build a separate image that will also
    #   FROM nginx
    #   COPY app /usr/share/nginx/html
    #   COPY docker/nginx/vhost.conf /etc/nginx/conf.d
    build:
      context: .
      dockerfile: docker/nginx/Dockerfile
    ports:
      - "80:80"
  mailcatcher:
    image: schickling/mailcatcher:latest
    ports:
      - "1080:1080"
  db:
    image: mysql:5.7
    volumes:
      # Use the named volume we declared above
      - mysql_data:/var/lib/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: yii2-starter-kit
      MYSQL_USER: ysk_dbu
      MYSQL_PASSWORD: ysk_pass
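The docker/nginx/Dockerfile referenced there isn't shown in your project; a minimal sketch of what it could contain, mirroring the comments on the nginx: service above (the ./app and vhost.conf paths are assumptions carried over from those comments and your original volume mounts):

FROM nginx:latest
# copy the static files and the vhost config into the image
# (adjust the destination to match the root paths in vhost.conf)
COPY app /usr/share/nginx/html
COPY docker/nginx/vhost.conf /etc/nginx/conf.d/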
I have a set of docker containers that are generated from yaml files. These containers work ok - I can access localhost, ping http://web from the nginx container, and list the port mapping (see snippet1)
I now want to deploy them to another machine, so I used docker commit, save, load, and run to create images, copy them over, and deploy new containers (see snippet2).
But after I deploy the containers, they don't run properly (I cannot access localhost, cannot ping http://web from the nginx container, and port mapping is empty - see snippet3)
The .yml file is in snippet4
and the nginx .conf files are in snippet5
What can be the problem?
Thanks,
Avner
EDIT:
From the responses below, I understand that instead of using "docker commit", I should build the images for the remote host using one of 2 options:
option1 - copy the code to the remote host and build from source there, using a modified docker-compose
option2 - create an image on the local machine, push it to a docker registry, pull it from there, and use a modified docker-compose
I'm trying to follow option1 (as a start), but still have problems.
I filed a new post here that describes the problem
END EDIT:
snippet1 - original containers work ok
# the original containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26ba325e737d webserver_nginx "nginx -g 'daemon of…" 3 hours ago Up 43 minutes 0.0.0.0:80->80/tcp, 443/tcp webserver_nginx_1
08ef8a443658 webserver_web "flask run --host=0.…" 3 hours ago Up 43 minutes 0.0.0.0:8000->8000/tcp webserver_web_1
33c13a308139 webserver_postgres "docker-entrypoint.s…" 3 hours ago Up 43 minutes 0.0.0.0:5432->5432/tcp webserver_postgres_1
# can access localhost
curl http://localhost:80
<!DOCTYPE html>
...
# can ping web container from the nginx container
docker exec -it webserver_nginx_1 bash
root@26ba325e737d:/# ping web
PING web (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=0.138 ms
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.123 ms
...
# List port mappings for the container
docker port webserver_nginx_1
80/tcp -> 0.0.0.0:80
snippet2 - deploy the containers (currently still using the deployed containers on localhost)
# create new docker images from the containers
docker commit webserver_nginx_1 webserver_nginx_image2
docker commit webserver_postgres_1 webserver_postgres_image2
docker commit webserver_web_1 webserver_web_image2
# save the docker images into .tar files
docker save webserver_nginx_image2 > /tmp/webserver_nginx_image2.tar
docker save webserver_postgres_image2 > /tmp/webserver_postgres_image2.tar
docker save webserver_web_image2 > /tmp/webserver_web_image2.tar
# load the docker images from tar files
cat /tmp/webserver_nginx_image2.tar | docker load
cat /tmp/webserver_postgres_image2.tar | docker load
cat /tmp/webserver_web_image2.tar | docker load
# Create containers from the new images
docker run -d --name webserver_web_2 webserver_web_image2
docker run -d --name webserver_postgres_2 webserver_postgres_image2
docker run -d --name webserver_nginx_2 webserver_nginx_image2
# stop the original containers and start the deployed containers
docker stop webserver_web_1 webserver_nginx_1 webserver_postgres_1
docker stop webserver_web_2 webserver_nginx_2 webserver_postgres_2
docker start webserver_web_2 webserver_nginx_2 webserver_postgres_2
snippet3 - deployed containers don't work
# the deployed containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15ef8bfc0ceb webserver_nginx_image2 "nginx -g 'daemon of…" 3 hours ago Up 4 seconds 80/tcp, 443/tcp webserver_nginx_2
d6d228599f81 webserver_postgres_image2 "docker-entrypoint.s…" 3 hours ago Up 3 seconds 5432/tcp webserver_postgres_2
a8aac280ea01 webserver_web_image2 "flask run --host=0.…" 3 hours ago Up 4 seconds 8000/tcp webserver_web_2
# can NOT access localhost
curl http://localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
# can NOT ping web container from the nginx container
docker exec -it webserver_nginx_2 bash
root@15ef8bfc0ceb:/# ping web
ping: unknown host
# List port mappings for the container
docker port webserver_nginx_2
# nothing is being shown
snippet4 - the .yml files
cat /home/user/webServer/docker-compose.yml
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /home/user/webServer/web:/home/flask/app/web
    command: /usr/local/bin/gunicorn -w 2 -t 3600 -b :8000 project:app
    depends_on:
      - postgres
    stdin_open: true
    tty: true
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /home/user/webServer/web:/home/flask/app/web
    depends_on:
      - web
  postgres:
    restart: always
    build: ./postgresql
    volumes:
      - data1:/var/lib/postgresql
    expose:
      - "5432"
volumes:
  data1:
,
cat /home/user/webServer/docker-compose.override.yml
version: '3'
services:
  web:
    build: ./web
    ports:
      - "8000:8000"
    environment:
      - PYTHONUNBUFFERED=1
      - FLASK_APP=run.py
      - FLASK_DEBUG=1
    volumes:
      - /home/user/webServer/web:/usr/src/app/web
      - /home/user/webClient/:/usr/src/app/web/V1
    command: flask run --host=0.0.0.0 --port 8000
  nginx:
    volumes:
      - /home/user/webServer/web:/usr/src/app/web
      - /home/user/webClient/:/usr/src/app/web/V1
    depends_on:
      - web
  postgres:
    ports:
      - "5432:5432"
snippet5 - the nginx .conf files
cat /home/user/webServer/nginx/nginx.conf
# Define the user that will own and run the Nginx server
user nginx;
# Define the number of worker processes; recommended value is the number of
# cores that are being used by your server
worker_processes 1;
# Define the location on the file system of the error log, plus the minimum
# severity to log messages for
error_log /var/log/nginx/error.log warn;
# Define the file that will store the process ID of the main NGINX process
pid /var/run/nginx.pid;
# events block defines the parameters that affect connection processing.
events {
    # Define the maximum number of simultaneous connections that can be opened by a worker process
    worker_connections 1024;
}
# http block defines the parameters for how NGINX should handle HTTP web traffic
http {
    # Include the file defining the list of file types that are supported by NGINX
    include /etc/nginx/mime.types;
    # Define the default file type that is returned to the user
    default_type text/html;
    # Define the format of log messages.
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    # Define the location of the log of access attempts to NGINX
    access_log /var/log/nginx/access.log main;
    # Define the parameters to optimize the delivery of static content
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    # Define the timeout value for keep-alive connections with the client
    keepalive_timeout 65;
    # Define the usage of the gzip compression algorithm to reduce the amount of data to transmit
    #gzip on;
    # Include additional parameters for virtual host(s)/server(s)
    include /etc/nginx/conf.d/*.conf;
}
,
cat /home/user/webServer/nginx/myapp.conf
# Define the parameters for a specific virtual host/server
server {
    # Define the server name, IP address, and/or port of the server
    listen 80;
    # Define the specified charset to the “Content-Type” response header field
    charset utf-8;
    # Configure NGINX to deliver static content from the specified folder
    location /static {
        alias /home/flask/app/web/instance;
    }
    location /foo {
        root /usr/src/app/web;
        index index5.html;
    }
    location /V1 {
        root /usr/src/app/web;
        index index.html;
    }
    # Configure NGINX to reverse proxy HTTP requests to the upstream server (Gunicorn (WSGI server))
    location / {
        root /;
        index index1.html;
        resolver 127.0.0.11;
        set $example "web:8000";
        proxy_pass http://$example;
        # Redefine the header fields that NGINX sends to the upstream server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Define the maximum file size on file uploads
        client_max_body_size 10M;
        client_body_buffer_size 10M;
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            #
            # Custom headers and headers various browsers *should* be OK with but aren't
            #
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            #
            # Tell client that this pre-flight info is valid for 20 days
            #
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        if ($request_method = 'POST') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        }
        if ($request_method = 'GET') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        }
    }
}
You need to copy the docker-compose.yml file to the target machine (with some edits) and re-run it there. If you're not also rebuilding the images there, you need to modify the file to not include volumes: referencing a local source tree and change the build: block to reference some image:. Never run docker commit.
The docker-compose.yml file mostly consists of options equivalent to docker run options, but in a more convenient syntax. For example, when docker-compose.yml says
services:
  nginx:
    ports:
      - "80:80"
That's equivalent to a docker run -p 80:80 option, but when you later do
docker run -d --name webserver_nginx_2 webserver_nginx_image2
That option is missing. Docker Compose also implicitly creates a Docker network for you and without the equivalent docker run --net option, connectivity between containers doesn't work.
All of these options must be specified every time you docker run the container. docker commit does not persist them. In general you should never run docker commit, especially if you already have Dockerfiles for your images; in the scenario you're describing here the image that comes out of docker commit won't be any different from what you could docker build yourself, except that it will lose some details like the default CMD to run.
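To illustrate what Compose is doing for you, here is roughly the set of plain docker commands you would have needed to reproduce that wiring by hand (a sketch only, using the image and container names from your snippets; the network name is made up):

docker network create webserver_default
docker run -d --name webserver_web_2 --network webserver_default \
    --network-alias web -p 8000:8000 webserver_web_image2
docker run -d --name webserver_nginx_2 --network webserver_default \
    --network-alias nginx -p 80:80 webserver_nginx_image2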
As the commenters suggest, the best way to run the same setup on a different machine is to set up a Docker registry (or use a public service like Docker Hub), docker push your built images there, copy only the docker-compose.yml file to the new machine, and run docker-compose up there. (Think of it like the run.sh script that launches your containers.) You have to make a couple of changes: replace the build: block with the relevant image: and delete the volumes: that reference local source code. If you need the database data this needs to be copied separately.
services:
  web:
    restart: always
    image: myapp/web
    depends_on:
      - postgres
    ports:
      - "8000:8000"
    # expose:, command:, and core environment: should be in Dockerfile
    # stdin_open: and tty: shouldn't matter for a Flask app
    # volumes: replace code in the image and you're not copying that
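The workflow around that edited file would then look roughly like this (the image and registry names here are hypothetical; substitute your own):

# on the build machine, with the original compose file containing build:
docker-compose build
docker tag webserver_web myapp/web
docker push myapp/web

# on the target machine, with the edited compose file containing image: only
docker-compose pull
docker-compose up -d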
The way you've tried to deploy your app on another server is not how Docker recommends you do it. You need to transfer all of the files required to get the application running, along with a Dockerfile to build the image and a docker-compose.yml to define how each of your application's services gets deployed. Docker has a nice document to get started with.
If your intention was to deploy the application on another server for redundancy and not to create a replica, consider using docker with swarm mode.
I have a web application based on php and nginx images ... Everything works great until I set a command under the PHP configuration:
command: /usr/bin/supervisord -c /symfony/supervisord.conf
docker-compose.yml
version: '2'
services:
  php:
    build: docker/php
    tty: true
    volumes:
      - '.:/symfony'
    command: /usr/bin/supervisord -c /symfony/supervisord.conf
  nginx:
    image: nginx:1.11
    tty: true
    volumes:
      - './public/:/symfony'
      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf'
    ports:
      - '80:80'
    links:
      - php
This is my default.conf
server {
    server_name ~.*;
    location / {
        root /symfony;
        try_files $uri /index.php$is_args$args;
    }
    location ~ ^/index\.php(/|$) {
        client_max_body_size 50m;
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /symfony/public/index.php;
    }
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
}
This is my supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
[supervisord]
logfile=/tmp/supervisord.log
pidfile=/var/run/supervisord.pid
nodaemon=true
Nginx logs show me:
nginx_1 | 2018/10/02 00:42:36 [error] 11#11: *1 connect() failed
(111: Connection refused) while connecting to upstream, client:
172.23.0.1, server: ~.*, request: "GET / HTTP/1.1", upstream: "fastcgi://172.23.0.2:9000", host: "127.0.0.1"
As you can see, nginx reports a 502 Bad Gateway error. If I remove the last line, the command, everything works fine. If I remove the line and instead access the container via docker-compose exec php bash and launch the command manually, everything also works.
Any idea why adding that command leads to a 502 Bad Gateway?
OK, I found a solution. It was a problem with supervisor: each time the supervisord service is launched, the php-fpm service is stopped automatically. That's why you have to add a configuration block that relaunches php-fpm, this time from the supervisor configuration:
[program:php-fpm]
command = /usr/local/sbin/php-fpm
autostart=true
autorestart=true
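Putting it together with the supervisord.conf from the question, the whole file would look something like this (the php-fpm path is the one used by the official php fpm images):

[unix_http_server]
file=/tmp/supervisor.sock

[supervisord]
logfile=/tmp/supervisord.log
pidfile=/var/run/supervisord.pid
nodaemon=true

[program:php-fpm]
command = /usr/local/sbin/php-fpm
autostart=true
autorestart=true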
For anyone else with a similar problem:
Don't forget that the command key in the docker-compose.yml file overrides the default CMD in the Dockerfile, so that CMD won't be run.
For example, php:7.4-fpm's final instruction is CMD ["php-fpm"], and with a command override it won't be run.
Therefore, if you have some custom logic to run after the container starts, don't forget to also include the original command, e.g.:
command: bash -c "php-fpm & npm run dev"
I'm working on an nginx service using docker-compose. I created this docker-compose.yml file:
version: '2'
services:
  nginx:
    image: nginx:1.11.8-alpine
    ports:
      - "8858:80"
    volumes:
      - ./site.conf:/etc/nginx/conf.d/default.conf
      - ./code:/usr/share/nginx/html
      - ./html:/myapp
      - ./site.conf:/etc/nginx/conf.d/site.conf
      - ./error.log:/var/log/nginx/error.log
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
This is the site.conf file
server {
    listen 80;
    index index.html index.php;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    location / {
        root /usr/share/nginx/html;
    }
    location /html {
        alias /myapp;
    }
}
This is the result of docker-compose up
ERROR: for ef5152b88a7c_ef5152b88a7c_nginxdocker_nginx_1 Cannot start
service nginx: oci runtime error: container_linux.go:247: starting
container process caused "process_linux.go:359: container init caused
\"rootfs_linux.go:54: mounting \\"/root/nginx-docker/nginx.conf\\"
to rootfs
\\"/var/lib/docker/aufs/mnt/b463f0e0ca95db8cd570dfb68fcf206df31e86998e725465a7673ca192af8342\\"
at
\\"/var/lib/docker/aufs/mnt/b463f0e0ca95db8cd570dfb68fcf206df31e86998e725465a7673ca192af8342/etc/nginx/nginx.conf\\"
caused \\"not a directory\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)?
Check if the specified host path exists and is the expected type
This has nothing to do with Nginx.
If you use volumes in docker-compose.yml, always make sure that the files on the left side of the colon : exist! If you miss that, Docker sometimes creates folders instead of files "on the left side" (= on your host) IIRC, which leads to subsequent errors on the next run.
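In this case that means checking, in the directory you run docker-compose from, that every host-side path in volumes: already exists and has the expected type before the first docker-compose up, for example (paths taken from your error message):

cd /root/nginx-docker
ls -l nginx.conf site.conf error.log   # each of these must be a regular file, not a directory
# if an earlier run auto-created a directory in place of a file, remove it first:
# rm -rf nginx.conf
docker-compose up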