I have a working yii2 project that I can't deploy.
Docker builds the project and everything is fine, but if I copy the nginx config from Docker into nginx/sites-available/default, this error appears:
nginx: [emerg] host not found in upstream "app:9000" in /etc/nginx/sites-enabled/default:111
I read on the forums that I need to add the depends_on directive:
depends_on:
  - app
but in this case, errors start to appear in:
docker-compose -f docker-compose.yml up -d
I tried different Compose file versions, from "1" to "3.4", but the errors persist. Here's the last one:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.environment: 'XDEBUG_CONFIG'
Invalid top-level property "environment". Valid top-level sections for this Compose file are: services, version, networks, volumes, and extensions starting with "x-".
You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g. "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
The project only works on php-7.0
Here is the original (from project) nginx config:
## FRONTEND ##
server {
listen 80 default;
root /app/frontend/web;
index index.php index.html;
server_name yii2-starter-kit.dev;
charset utf-8;
# location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|js)$ {
# access_log off;
# expires max;
# }
location / {
try_files $uri $uri/ /index.php?$args;
}
client_max_body_size 32m;
# There is a VirtualBox bug related to sendfile that can lead to
# corrupted files, if not turned-off
# sendfile off;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass php-fpm;
fastcgi_index index.php;
include fastcgi_params;
## Cache
# fastcgi_pass_header Cookie; # fill cookie variables, $cookie_phpsessid for example
# fastcgi_ignore_headers Cache-Control Expires Set-Cookie; # use with caution because it can cause SEO problems
# fastcgi_cache_key "$request_method|$server_addr:$server_port$request_uri|$cookie_phpsessid"; # generating unique key
# fastcgi_cache fastcgi_cache; # use fastcgi_cache keys_zone
# fastcgi_cache_path /tmp/nginx/ levels=1:2 keys_zone=fastcgi_cache:16m max_size=256m inactive=1d;
# fastcgi_temp_path /tmp/nginx/temp 1 2; # temp files folder
# fastcgi_cache_use_stale updating error timeout invalid_header http_500; # show cached page if error (even if it is outdated)
# fastcgi_cache_valid 200 404 10s; # cache lifetime for 200 404;
# or fastcgi_cache_valid any 10s; # use it if you want to cache any responses
}
}
## BACKEND ##
server {
listen 80;
root /app/backend/web;
index index.php index.html;
server_name backend.yii2-starter-kit.dev;
charset utf-8;
client_max_body_size 16m;
# There is a VirtualBox bug related to sendfile that can lead to
# corrupted files, if not turned-off on Vagrant based setup
# sendfile off;
location / {
try_files $uri $uri/ /index.php?$args;
}
# location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|js)$ {
# access_log off;
# expires max;
# }
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass php-fpm;
fastcgi_index index.php;
include fastcgi_params;
}
}
## STORAGE ##
server {
listen 80;
server_name storage.yii2-starter-kit.dev;
root /app/storage/web;
index index.html;
# expires max;
# There is a VirtualBox bug related to sendfile that can lead to
# corrupted files, if not turned-off
# sendfile off;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass php-fpm;
fastcgi_index index.php;
include fastcgi_params;
}
}
## PHP-FPM Servers ##
upstream php-fpm {
server app:9000;
}
## MISC ##
### WWW Redirect ###
#server {
# listen 80;
# server_name www.yii2-starter-kit.dev;
# return 301 http://yii2-starter-kit.dev$request_uri;
#}
Here is the original (from project) docker-compose.yml:
data:
image: busybox:latest
volumes:
- ./:/app
entrypoint: tail -f /dev/null
app:
build: docker/php
working_dir: /app
volumes_from:
- data
expose:
- 9000
links:
- db
- mailcatcher
environment:
XDEBUG_CONFIG: "idekey=PHPSTORM remote_enable=On remote_connect_back=On"
nginx:
image: nginx:latest
ports:
- "80:80"
volumes:
- ./:/app
- ./docker/nginx/vhost.conf:/etc/nginx/conf.d/vhost.conf
links:
- app
mailcatcher:
image: schickling/mailcatcher:latest
ports:
- "1080:1080"
db:
image: mysql:5.7
volumes:
- /var/lib/mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: yii2-starter-kit
MYSQL_USER: ysk_dbu
MYSQL_PASSWORD: ysk_pass
Please help me put together correct configs for nginx and docker-compose.
I'm a yii2 newbie.
I would be very grateful for any help and advice.
Thank you in advance!
A Compose file without a version: key at the top level is a version 1 Compose file. This is a very old version of the Compose YAML file format that doesn't support networks, volumes, or other modern Compose features. You should select either version 2 or 3. (Version 3 is a little more oriented towards Docker's Swarm orchestrator, so for some specific options in a single-host setup you may need to specify version 2.) You need to specify a top-level version: key, and then the services you have need to go under a top-level services: key.
version: '3.8'
services:
app: { ... }
nginx: { ... }
mailcatcher: { ... }
db: { ... }
This will actually directly address your immediate issue. As discussed in Networking in Compose, Compose (with a version 2 or 3 config file) will create a default network for you and register containers so that their service names (like app) are usable as host names. You do not need links: or other configuration.
There are also a number of other unnecessary options in the Compose file you show. You don't need to repeat an image's WORKDIR as a container's working_dir:; you don't need to expose: ports (as distinct from publishing ports: out to the host); it's not really great practice to overwrite the code that gets COPYed into an image with volumes: from the host.
In modern Docker you also tend to not use data-volume containers. Instead, newer versions of Compose have a top-level volumes: key that can declare named volumes. You'd use this, for example, for your backing database storage.
The net result of all of this would be a Compose file like:
# Specify a current Compose version.
version: '3.8'
# Declare that we'll need a named volume for the database storage.
volumes:
mysql_data:
# The actual Compose-managed services.
services:
app:
build:
# If the application is in ./app, then the build context
# directory must be the current directory (to be able to
# COPY app ./
# ).
context: .
dockerfile: docker/php/Dockerfile
environment:
XDEBUG_CONFIG: "idekey=PHPSTORM remote_enable=On remote_connect_back=On"
nginx:
# Build a separate image that will also
# FROM nginx
# COPY app /usr/share/nginx/html
# COPY docker/nginx/vhost.conf /etc/nginx/conf.d
build:
context: .
dockerfile: docker/nginx/Dockerfile
ports:
- "80:80"
mailcatcher:
image: schickling/mailcatcher:latest
ports:
- "1080:1080"
db:
image: mysql:5.7
volumes:
# Use the named volume we declared above
- mysql_data:/var/lib/mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: yii2-starter-kit
MYSQL_USER: ysk_dbu
MYSQL_PASSWORD: ysk_pass
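The docker/nginx/Dockerfile referenced in the comments above doesn't exist in the original project, so here is a minimal sketch of what it could contain (the COPY paths are assumptions based on the /app roots used in the vhost config):

```dockerfile
# docker/nginx/Dockerfile -- minimal sketch; COPY paths are assumptions
FROM nginx:latest
# Bake the application's static files into the image instead of bind-mounting them
COPY . /app
# Install the project's vhost config
COPY docker/nginx/vhost.conf /etc/nginx/conf.d/vhost.conf
```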
I am setting up my Rundeck application within a Docker container, using nginx as a reverse proxy. I presume my problem originates in how the proxied responses come back from the server.
When I access the desired URL (https://vmName.Domain.corp/rundeck) I am able to see the login page, even though it doesn't have any UI. Once I enter the default admin:admin information I am directed to a 404 page. I pasted below one of the error logs from the docker-compose logs. You'll notice it's going to /etc/nginx to find rundeck's logo.
I can't determine if the problem is in my docker-compose file or nginx' config file.
Example of error log:
production_nginx | 2021-02-04T08:17:50.770544192Z 2021/02/04 08:17:50 [error] 29#29: *8 open() "/etc/nginx/html/assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js" failed (2: No such file or directory), client: 10.243.5.116, server: vmName.Domain.corp, request: "GET /assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js HTTP/1.1", host: "vmName.Domain.corp", referrer: "https://vmName.Domain.corp/rundeck/user/login"
If curious, I can access Rundeck's logo if I go to: https://vmName.Domain.corp/rundeck/assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js
Here's more information on my set-up
/nginx/sites-enabled/docker-compose.yml (main machine)
rundeck:
image: ${RUNDECK_IMAGE:-jordan/rundeck:latest}
container_name: production_rundeck
ports:
- 4440:4440
environment:
RUNDECK_GRAILS_SERVER_URL: "https://vmName.Domain.corp/rundeck"
RUNDECK_GRAILS_URL: "https://vmName.Domain.corp/rundeck"
RUNDECK_SERVER_FORWARDED: "true"
RDECK_JVM_SETTINGS: "-Xmx1024m -Xms256m -XX:MaxMetaspaceSize=256m -server -Dfile.encoding=UTF-8 -Drundeck.jetty.connector.forwarded=true -Dserver.contextPath=/rundeck -Dserver.https.port:4440"
#RUNDECK_SERVER_CONTEXTPATH: "https://vmName.Domain.corp/rundeck"
RUNDECK_MAIL_FROM: "rundeck@vmName.Domain.corp"
EXTERNAL_SERVER_URL: "https://vmName.Domain.corp/rundeck"
SERVER_URL: "https://vmName.Domain.corp/rundeck"
volumes:
- /etc/rundeck:/etc/rundeck
- /var/rundeck
- /var/lib/mysql
- /var/log/rundeck
- /opt/rundeck-plugins
nginx:
image: nginx:latest
container_name: production_nginx
links:
- rundeck
volumes:
- /etc/nginx/sites-enabled:/etc/nginx/conf.d
depends_on:
- rundeck
ports:
- 80:80
- 443:443
restart: always
networks:
default:
external:
name: vmName
nginx/sites-enabled/default.conf (main machine)
# Route all HTTP traffic through HTTPS
# ====================================
server {
listen 80;
server_name vmName;
return 301 https://vmName$request_uri;
}
server {
listen 443 ssl;
server_name vmName;
ssl_certificate /etc/nginx/conf.d/vmName.Domain.corp.cert;
ssl_certificate_key /etc/nginx/conf.d/vmName.Domain.corp.key;
return 301 https://vmName.Domain.corp$request_uri;
}
# ====================================
# Main webserver route configuration
# ====================================
server {
listen 443 ssl;
server_name vmName.Domain.corp;
ssl_certificate /etc/nginx/conf.d/vmName.Domain.corp.cert;
ssl_certificate_key /etc/nginx/conf.d/vmName.Domain.corp.key;
#===========================================================================#
## MAIN PAGE
location /example-app {
rewrite ^/example-app(.*) /$1 break;
proxy_pass http://example-app:5000/;
proxy_set_header Host $host/example-app;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# #Rundeck
location /rundeck/ {
# rewrite ^/rundeck(.*) /$1 break;
proxy_pass http://rundeck:4440/;
proxy_set_header Host $host/rundeck;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
[image container]/etc/rundeck/ rundeck-config.properties:
# change hostname here
grails.serverURL=https://vmName.Domain.corp/rundeck
grails.mail.default.from = rundeck@vmName.Domain.corp
server.useForwardHeaders = true
[image container]/etc/rundeck/ framework.properties:
framework.server.name = vmName.Domain.corp
framework.server.hostname = vmName.Domain.corp
framework.server.port = 443
framework.server.url = https://vmName.Domain.corp/rundeck
It seems related to a Rundeck image/network problem. I made a working example with the official image; take a look:
nginx.conf (located at config folder, check the docker-compose file volumes section):
server {
listen 80 default_server;
server_name rundeck-cl;
location / {
proxy_pass http://rundeck:4440;
}
}
docker-compose:
version: "3.7"
services:
rundeck:
build:
context: .
args:
IMAGE: ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.9}
container_name: rundeck-nginx
ports:
- 4440:4440
environment:
RUNDECK_GRAILS_URL: http://localhost
RUNDECK_SERVER_FORWARDED: "true"
nginx:
image: nginx:alpine
volumes:
- ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- 80:80
Dockerfile:
ARG IMAGE
FROM ${IMAGE}
Build with docker-compose build and run with docker-compose up.
rundeck-config.properties content:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/home/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
# Bind address and server URL
server.address=0.0.0.0
server.servlet.context-path=/
grails.serverURL=http://localhost
server.servlet.session.timeout=3600
dataSource.dbCreate = update
dataSource.url = jdbc:h2:file:/home/rundeck/server/data/grailsdb;MVCC=true
dataSource.username =
dataSource.password =
#Pre Auth mode settings
rundeck.security.authorization.preauthenticated.enabled=false
rundeck.security.authorization.preauthenticated.attributeName=REMOTE_USER_GROUPS
rundeck.security.authorization.preauthenticated.delimiter=,
# Header from which to obtain user name
rundeck.security.authorization.preauthenticated.userNameHeader=X-Forwarded-Uuid
# Header from which to obtain list of roles
rundeck.security.authorization.preauthenticated.userRolesHeader=X-Forwarded-Roles
# Redirect to upstream logout url
rundeck.security.authorization.preauthenticated.redirectLogout=false
rundeck.security.authorization.preauthenticated.redirectUrl=/oauth2/sign_in
rundeck.api.tokens.duration.max=30d
rundeck.log4j.config.file=/home/rundeck/server/config/log4j.properties
rundeck.gui.startpage=projectHome
rundeck.clusterMode.enabled=true
rundeck.security.httpHeaders.enabled=true
rundeck.security.httpHeaders.provider.xcto.enabled=true
rundeck.security.httpHeaders.provider.xxssp.enabled=true
rundeck.security.httpHeaders.provider.xfo.enabled=true
rundeck.security.httpHeaders.provider.csp.enabled=true
rundeck.security.httpHeaders.provider.csp.config.include-xcsp-header=false
rundeck.security.httpHeaders.provider.csp.config.include-xwkcsp-header=false
rundeck.storage.provider.1.type=db
rundeck.storage.provider.1.path=keys
rundeck.projectsStorageType=db
framework.properties file content:
# framework.properties -
#
# ----------------------------------------------------------------
# Server connection information
# ----------------------------------------------------------------
framework.server.name = 85845cd30fe9
framework.server.hostname = 85845cd30fe9
framework.server.port = 4440
framework.server.url = http://localhost
# ----------------------------------------------------------------
# Installation locations
# ----------------------------------------------------------------
rdeck.base=/home/rundeck
framework.projects.dir=/home/rundeck/projects
framework.etc.dir=/home/rundeck/etc
framework.var.dir=/home/rundeck/var
framework.tmp.dir=/home/rundeck/var/tmp
framework.logs.dir=/home/rundeck/var/logs
framework.libext.dir=/home/rundeck/libext
# ----------------------------------------------------------------
# SSH defaults for node executor and file copier
# ----------------------------------------------------------------
framework.ssh.keypath = /home/rundeck/.ssh/id_rsa
framework.ssh.user = rundeck
# ssh connection timeout after a specified number of milliseconds.
# "0" value means wait forever.
framework.ssh.timeout = 0
# ----------------------------------------------------------------
# System-wide global variables.
# ----------------------------------------------------------------
# Expands to ${globals.var1}
#framework.globals.var1 = value1
# Expands to ${globals.var2}
#framework.globals.var2 = value2
rundeck.server.uuid = a14bc3e6-75e8-4fe4-a90d-a16dcc976bf6
I am stuck deploying docker image gitea/gitea:1 behind a reverse proxy jwilder/nginx-proxy with jrcs/letsencrypt-nginx-proxy-companion for automatic certificate updates.
Gitea is running and I can connect to the HTTP address on port 3000.
The proxy is running also, as I have multiple apps and services e.g. sonarqube working well.
This is my docker-compose.yml:
version: "2"
services:
server:
image: gitea/gitea:1
environment:
- USER_UID=998
- USER_GID=997
- DB_TYPE=mysql
- DB_HOST=172.17.0.1:3306
- DB_NAME=gitea
- DB_USER=gitea
- DB_PASSWD=mysqlpassword
- ROOT_URL=https://gitea.myhost.de
- DOMAIN=gitea.myhost.de
- VIRTUAL_HOST=gitea.myhost.de
- LETSENCRYPT_HOST=gitea.myhost.de
- LETSENCRYPT_EMAIL=me@web.de
restart: always
ports:
- "3000:3000"
- "222:22"
expose:
- "3000"
- "22"
networks:
- frontproxy_default
volumes:
- /mnt/storagespace/gitea_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
networks:
frontproxy_default:
external: true
default:
When I call https://gitea.myhost.de the result is
502 Bad Gateway (nginx/1.17.6)
This is the log entry:
2020/09/13 09:57:30 [error] 14323#14323: *15465 no live upstreams while connecting to upstream, client: 77.20.122.169, server: gitea.myhost.de, request: "GET / HTTP/2.0", upstream: "http://gitea.myhost.de/", host: "gitea.myhost.de"
and this is the relevant entry in nginx/conf/default.conf:
# gitea.myhost.de
upstream gitea.myhost.de {
## Can be connected with "frontproxy_default" network
# gitea_server_1
server 172.23.0.10 down;
}
server {
server_name gitea.myhost.de;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
# Do not HTTPS redirect Let'sEncrypt ACME challenge
location /.well-known/acme-challenge/ {
auth_basic off;
allow all;
root /usr/share/nginx/html;
try_files $uri =404;
break;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
server_name gitea.myhost.de;
listen 443 ssl http2 ;
access_log /var/log/nginx/access.log vhost;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_certificate /etc/nginx/certs/gitea.myhost.de.crt;
ssl_certificate_key /etc/nginx/certs/gitea.myhost.de.key;
ssl_dhparam /etc/nginx/certs/gitea.myhost.de.dhparam.pem;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/certs/gitea.myhost.de.chain.pem;
add_header Strict-Transport-Security "max-age=31536000" always;
include /etc/nginx/vhost.d/default;
location / {
proxy_pass http://gitea.myhost.de;
}
}
Maybe the problem is that I used a gitea backup for this container, as suggested in https://docs.gitea.io/en-us/backup-and-restore/
What can I do to get this running? I have read https://docs.gitea.io/en-us/reverse-proxies/ but maybe I missed something. The main point is to have letsencrypt-nginx-proxy-companion manage the certificates automatically.
Any help and tip is highly appreciated.
I believe all you are missing is the VIRTUAL_PORT setting in your gitea container's environment. This tells the reverse proxy container which port to connect to when routing incoming requests for your VIRTUAL_HOST domain, effectively appending ":3000" to the upstream server in the generated nginx conf. This applies even when all your containers are on the same host. By default, the reverse proxy container only connects to port 80 on a service, but since the gitea container uses 3000 as its default port, you essentially need to tell the reverse proxy about it. See below, using the snippet from your compose file.
services:
server:
image: gitea/gitea:1
environment:
- USER_UID=998
- USER_GID=997
- DB_TYPE=mysql
- DB_HOST=172.17.0.1:3306
- DB_NAME=gitea
- DB_USER=gitea
- DB_PASSWD=mysqlpassword
- ROOT_URL=https://gitea.myhost.de
- DOMAIN=gitea.myhost.de
- VIRTUAL_HOST=gitea.myhost.de
- VIRTUAL_PORT=3000  # <-- add this line
- LETSENCRYPT_HOST=gitea.myhost.de
- LETSENCRYPT_EMAIL=me@web.de
restart: always
ports:
- "3000:3000"
- "222:22"
expose:
- "3000"
- "22"
networks:
- frontproxy_default
volumes:
- /mnt/storagespace/gitea_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
networks:
frontproxy_default:
external: true
default:
P.S.: It is not required to expose the ports if all the containers are on the same host and there is no other reason for it than trying to get this to work.
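For illustration only (the container IP is taken from your pasted conf), once VIRTUAL_PORT=3000 is picked up, the upstream block that jwilder/nginx-proxy regenerates should come out roughly like:

```nginx
# Sketch of the regenerated upstream once VIRTUAL_PORT=3000 is set
upstream gitea.myhost.de {
    ## Can be connected with "frontproxy_default" network
    server 172.23.0.10:3000;
}
```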
I have a set of docker containers that are generated from yaml files. These containers work ok - I can access localhost, ping http://web from the nginx container, and list the port mapping (see snippet1)
I now want to deploy them to another machine, so I used docker commit, save, load, and run to create images, copy them over, and deploy new containers (see snippet2).
But after I deploy the containers, they don't run properly (I cannot access localhost, cannot ping http://web from the nginx container, and the port mapping is empty - see snippet3)
The .yml file is in snippet4
and the nginx .conf files are in snippet5
What can be the problem?
Thanks,
Avner
EDIT:
From the responses below, I understand that instead of using "docker commit", I should build the container on the remote host using one of 2 options:
option1 - copy the code to the remote host and apply a modified docker-compose, and build from source using a modified docker-compose
option2 - create an image on local machine, push it to a docker repository, pull it from there, using a modified docker-compose
I'm trying to follow option1 (as a start), but still have problems.
I filed a new post here that describes the problem
END EDIT:
snippet1 - original containers work ok
# the original containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26ba325e737d webserver_nginx "nginx -g 'daemon of…" 3 hours ago Up 43 minutes 0.0.0.0:80->80/tcp, 443/tcp webserver_nginx_1
08ef8a443658 webserver_web "flask run --host=0.…" 3 hours ago Up 43 minutes 0.0.0.0:8000->8000/tcp webserver_web_1
33c13a308139 webserver_postgres "docker-entrypoint.s…" 3 hours ago Up 43 minutes 0.0.0.0:5432->5432/tcp webserver_postgres_1
# can access localhost
curl http://localhost:80
<!DOCTYPE html>
...
# can ping web container from the nginx container
docker exec -it webserver_nginx_1 bash
root@26ba325e737d:/# ping web
PING web (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=0.138 ms
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.123 ms
...
# List port mappings for the container
docker port webserver_nginx_1
80/tcp -> 0.0.0.0:80
snippet2 - deploy the containers (currently still using the deployed containers on localhost)
# create new docker images from the containers
docker commit webserver_nginx_1 webserver_nginx_image2
docker commit webserver_postgres_1 webserver_postgres_image2
docker commit webserver_web_1 webserver_web_image2
# save the docker images into .tar files
docker save webserver_nginx_image2 > /tmp/webserver_nginx_image2.tar
docker save webserver_postgres_image2 > /tmp/webserver_postgres_image2.tar
docker save webserver_web_image2 > /tmp/webserver_web_image2.tar
# load the docker images from tar files
cat /tmp/webserver_nginx_image2.tar | docker load
cat /tmp/webserver_postgres_image2.tar | docker load
cat /tmp/webserver_web_image2.tar | docker load
# Create containers from the new images
docker run -d --name webserver_web_2 webserver_web_image2
docker run -d --name webserver_postgres_2 webserver_postgres_image2
docker run -d --name webserver_nginx_2 webserver_nginx_image2
# stop the original containers and start the deployed containers
docker stop webserver_web_1 webserver_nginx_1 webserver_postgres_1
docker stop webserver_web_2 webserver_nginx_2 webserver_postgres_2
docker start webserver_web_2 webserver_nginx_2 webserver_postgres_2
snippet3 - deployed containers don't work
# the deployed containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15ef8bfc0ceb webserver_nginx_image2 "nginx -g 'daemon of…" 3 hours ago Up 4 seconds 80/tcp, 443/tcp webserver_nginx_2
d6d228599f81 webserver_postgres_image2 "docker-entrypoint.s…" 3 hours ago Up 3 seconds 5432/tcp webserver_postgres_2
a8aac280ea01 webserver_web_image2 "flask run --host=0.…" 3 hours ago Up 4 seconds 8000/tcp webserver_web_2
# can NOT access localhost
curl http://localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
# can NOT ping web container from the nginx container
docker exec -it webserver_nginx_2 bash
root@15ef8bfc0ceb:/# ping web
ping: unknown host
# List port mappings for the container
docker port webserver_nginx_2
# nothing is being shown
snippet4 - the .yml files
cat /home/user/webServer/docker-compose.yml
version: '3'
services:
web:
restart: always
build: ./web
expose:
- "8000"
volumes:
- /home/user/webServer/web:/home/flask/app/web
command: /usr/local/bin/gunicorn -w 2 -t 3600 -b :8000 project:app
depends_on:
- postgres
stdin_open: true
tty: true
nginx:
restart: always
build: ./nginx
ports:
- "80:80"
volumes:
- /home/user/webServer/web:/home/flask/app/web
depends_on:
- web
postgres:
restart: always
build: ./postgresql
volumes:
- data1:/var/lib/postgresql
expose:
- "5432"
volumes:
data1:
,
cat /home/user/webServer/docker-compose.override.yml
version: '3'
services:
web:
build: ./web
ports:
- "8000:8000"
environment:
- PYTHONUNBUFFERED=1
- FLASK_APP=run.py
- FLASK_DEBUG=1
volumes:
- /home/user/webServer/web:/usr/src/app/web
- /home/user/webClient/:/usr/src/app/web/V1
command: flask run --host=0.0.0.0 --port 8000
nginx:
volumes:
- /home/user/webServer/web:/usr/src/app/web
- /home/user/webClient/:/usr/src/app/web/V1
depends_on:
- web
postgres:
ports:
- "5432:5432"
snippet5 - the nginx .conf files
cat /home/user/webServer/nginx/nginx.conf
# Define the user that will own and run the Nginx server
user nginx;
# Define the number of worker processes; recommended value is the number of
# cores that are being used by your server
worker_processes 1;
# Define the location on the file system of the error log, plus the minimum
# severity to log messages for
error_log /var/log/nginx/error.log warn;
# Define the file that will store the process ID of the main NGINX process
pid /var/run/nginx.pid;
# events block defines the parameters that affect connection processing.
events {
# Define the maximum number of simultaneous connections that can be opened by a worker process
worker_connections 1024;
}
# http block defines the parameters for how NGINX should handle HTTP web traffic
http {
# Include the file defining the list of file types that are supported by NGINX
include /etc/nginx/mime.types;
# Define the default file type that is returned to the user
default_type text/html;
# Define the format of log messages.
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Define the location of the log of access attempts to NGINX
access_log /var/log/nginx/access.log main;
# Define the parameters to optimize the delivery of static content
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# Define the timeout value for keep-alive connections with the client
keepalive_timeout 65;
# Define the usage of the gzip compression algorithm to reduce the amount of data to transmit
#gzip on;
# Include additional parameters for virtual host(s)/server(s)
include /etc/nginx/conf.d/*.conf;
}
,
cat /home/user/webServer/nginx/myapp.conf
# Define the parameters for a specific virtual host/server
server {
# Define the server name, IP address, and/or port of the server
listen 80;
# Define the specified charset to the “Content-Type” response header field
charset utf-8;
# Configure NGINX to deliver static content from the specified folder
location /static {
alias /home/flask/app/web/instance;
}
location /foo {
root /usr/src/app/web;
index index5.html;
}
location /V1 {
root /usr/src/app/web;
index index.html;
}
# Configure NGINX to reverse proxy HTTP requests to the upstream server (Gunicorn (WSGI server))
location / {
root /;
index index1.html;
resolver 127.0.0.11;
set $example "web:8000";
proxy_pass http://$example;
# Redefine the header fields that NGINX sends to the upstream server
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Define the maximum file size on file uploads
client_max_body_size 10M;
client_body_buffer_size 10M;
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
#
# Custom headers and headers various browsers *should* be OK with but aren't
#
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
#
# Tell client that this pre-flight info is valid for 20 days
#
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
}
}
You need to copy the docker-compose.yml file to the target machine (with some edits) and re-run it there. If you're not also rebuilding the images there, you need to modify the file to not include volumes: referencing a local source tree and change the build: block to reference some image:. Never run docker commit.
The docker-compose.yml file mostly consists of options equivalent to docker run options, but in a more convenient syntax. For example, when docker-compose.yml says
services:
nginx:
ports:
- "80:80"
That's equivalent to a docker run -p 80:80 option, but when you later do
docker run -d --name webserver_nginx_2 webserver_nginx_image2
That option is missing. Docker Compose also implicitly creates a Docker network for you and without the equivalent docker run --net option, connectivity between containers doesn't work.
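As an illustrative sketch (the network name is made up; Compose derives it from the project directory), the manual docker run equivalent of what Compose sets up would be something like:

```shell
# Illustrative sketch: roughly what Compose does for you automatically.
# Create a shared network so containers can resolve each other by name.
docker network create webserver_default

# Attach each container to that network and publish its ports explicitly.
docker run -d --name web --net webserver_default webserver_web_image2
docker run -d --name nginx --net webserver_default -p 80:80 webserver_nginx_image2
```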
All of these options must be specified every time you docker run the container. docker commit does not persist them. In general you should never run docker commit, especially if you already have Dockerfiles for your images; in the scenario you're describing here the image that comes out of docker commit won't be any different from what you could docker build yourself, except that it will lose some details like the default CMD to run.
As the commenters suggest, the best way to run the same setup on a different machine is to set up a Docker registry (or use a public service like Docker Hub), docker push your built images there, copy only the docker-compose.yml file to the new machine, and run docker-compose up there. (Think of it like the run.sh script that launches your containers.) You have to make a couple of changes: replace the build: block with the relevant image: and delete the volumes: that reference local source code. If you need the database data this needs to be copied separately.
services:
web:
restart: always
image: myapp/web
depends_on:
- postgres
ports:
- "8000:8000"
# expose:, command:, and core environment: should be in Dockerfile
# stdin_open: and tty: shouldn't matter for a Flask app
# volumes: replace code in the image and you're not copying that
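As a sketch of that push/pull workflow (the registry host and tags are placeholders):

```shell
# On the build machine: build the images, tag them for your registry, push
docker-compose build
docker tag webserver_web registry.example.com/myapp/web:1.0
docker push registry.example.com/myapp/web:1.0

# On the target machine (with only docker-compose.yml copied over):
docker-compose pull
docker-compose up -d
```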
The way you've tried to deploy your app on another server is not how Docker recommends doing it. You need to transfer all the files required to run the application, along with a Dockerfile to build the image and a docker-compose.yml that defines how each service of your application gets deployed. Docker has a nice document to get started with.
If your intention was to deploy the application on another server for redundancy rather than to create a replica, consider using Docker in swarm mode.
I am trying to set up a new docker-compose file.
version: '3'
services:
webserver:
image: nginx:latest
container_name: redux-webserver
# working_dir: /application
volumes:
- ./www:/var/www/
- ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
ports:
- "7007:80"
Currently it is very simple. But I copied in the following config:
# Default server configuration
#
server {
listen 7007 default_server;
listen [::]:7007 default_server;
root /var/www;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name example;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
location /redux {
alias /var/www/Redux/src;
try_files $uri @redux;
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
}
}
location @redux {
rewrite /redux/(.*)$ /redux/index.php?/$1 last;
}
# pass PHP scripts to FastCGI server
#
location ~ \.php$ {
include snippets/fastcgi-php.conf;
#fastcgi_split_path_info ^(.+\.php)(/.+)$;
# With php-fpm (or other unix sockets):
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
#fastcgi_index index.php;
# With php-cgi (or other tcp sockets):
# fastcgi_pass 127.0.0.1:9000;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
}
But now when I try to start it with docker-compose run webserver I get the following error:
2019/07/20 08:55:09 [emerg] 1#1: open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
nginx: [emerg] open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
I understand it does not find the file fastcgi-php.conf. But why is this? Shouldn't that file be included in the standard nginx installation?
/etc/nginx/snippets/fastcgi-php.conf ships with the nginx-full Debian package, but the nginx:latest image you used does not install that package.
To get it, write your own Dockerfile based on nginx:latest and install nginx-full in it:
Dockerfile:
FROM nginx:latest
RUN apt-get update && apt-get install -y nginx-full
docker-compose.yaml:
version: '3'
services:
webserver:
build: .
image: mynginx:latest
Put the Dockerfile and docker-compose.yaml in the same folder, then run docker-compose up.
Alternatively, if you do not mind using an unofficial image, you can search Docker Hub for one, e.g. one I found there (schleyk/nginx-full):
docker run -it --rm schleyk/nginx-full ls -alh /etc/nginx/snippets/fastcgi-php.conf
-rw-r--r-- 1 root root 422 Apr 6 2018 /etc/nginx/snippets/fastcgi-php.conf
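For reference, the snippet that nginx-full ships on Debian/Ubuntu looks roughly like this (paraphrased from memory, so check the copy your distro installs):

```nginx
# regex to split $uri into $fastcgi_script_name and $fastcgi_path_info
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# check that the PHP script exists before passing it on
try_files $fastcgi_script_name =404;
# work around try_files resetting $fastcgi_path_info
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_index index.php;
include fastcgi.conf;
```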
You are trying to use a docker-compose config that does not account for the FastCGI/PHP-specific options your nginx config loads.
You can add a PHP-FPM container and link it to your web server, like:
volumes:
- ./code:/code
- ./site.conf:/etc/nginx/conf.d/site.conf
links:
- php
php:
image: php:7-fpm
volumes:
- ./code:/code
Source, with a more thorough explanation: http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/
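With PHP-FPM in a separate container, the unix-socket fastcgi_pass lines from the question's config won't work; the nginx side would instead point at the php service over TCP, along these lines (the /code path is an assumption taken from the volumes above):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # /code is where the php container mounts the code in the compose file
    fastcgi_param SCRIPT_FILENAME /code$fastcgi_script_name;
    # "php" is the compose service name; php-fpm listens on 9000 by default
    fastcgi_pass php:9000;
}
```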
I have a web app (django served by uwsgi) and I am using nginx for proxying requests to specific containers.
Here is a relevant snippet from my default.conf.
upstream web.ubuntu.com {
server 172.18.0.9:8080;
}
server {
server_name web.ubuntu.com;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
include uwsgi_params;
uwsgi_pass uwsgi://web.ubuntu.com;
}
}
Now I want the static files to be served from nginx rather than uwsgi workers.
So basically I want to add something like:
location /static/ {
autoindex on;
alias /staticfiles/;
}
to the automatically generated server block for the container.
I believe this should make nginx serve all requests to web.ubuntu.com/static/* from /staticfiles folder.
But since the configuration (default.conf) is generated automatically, I don't know how to add the above location to the server block dynamically :(
I think a location block can't live outside a server block, and there can be only one server block per virtual host, so I don't know how to add the location block there, unless I add it to default.conf dynamically after nginx comes up and then reload it.
I did go through https://github.com/jwilder/nginx-proxy and I only see an example to actually change location settings per-host and default. But nothing about adding a new location altogether.
I already posted this in Q&A for jwilder/nginx-proxy and didn't get a response.
Please help me if there is a way to achieve this.
This answer is based on this comment from the #553 issue discussion on the official nginx-proxy repo. First, you have to create the default_location file with the static location:
location /static/ {
alias /var/www/html/static/;
}
and save it, for example, into an nginx-proxy folder in your project's root directory. Then, add this file to the /etc/nginx/vhost.d folder of the jwilder/nginx-proxy container: either build a new image based on jwilder/nginx-proxy with the file copied in, or mount it via the volumes section. You also have to share the static files between your webapp and nginx-proxy containers using a shared volume. As a result, your docker-compose.yml file will look something like this:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx-proxy/default_location:/etc/nginx/vhost.d/default_location
- static:/var/www/html/static
webapp:
build: ./webapp
expose:
- 8080
volumes:
- static:/path/to/webapp/static
environment:
- VIRTUAL_HOST=webapp.docker.localhost
- VIRTUAL_PORT=8080
- VIRTUAL_PROTO=uwsgi
volumes:
static:
Now, the server block in /etc/nginx/conf.d/default.conf will always include the static location:
server {
server_name webapp.docker.localhost;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
include uwsgi_params;
uwsgi_pass uwsgi://webapp.docker.localhost;
include /etc/nginx/vhost.d/default_location;
}
}
which will make Nginx serve static files for you.