I have a set of Docker containers that are generated from YAML files. These containers work OK: I can access localhost, ping web from the nginx container, and list the port mappings (see snippet1).
I now want to deploy them to another machine, so I used docker commit, save, load, and run to create images, copy them over, and create new containers (see snippet2).
But after I deploy the containers, they don't run properly: I cannot access localhost, cannot ping web from the nginx container, and the port mapping is empty (see snippet3).
The .yml files are in snippet4
and the nginx .conf files are in snippet5.
What could the problem be?
Thanks,
Avner
EDIT:
From the responses below, I understand that instead of using docker commit, I should build the containers on the remote host, using one of two options:
option 1 - copy the code to the remote host and build from source there, using a modified docker-compose file
option 2 - create an image on the local machine, push it to a Docker registry, pull it from there on the remote host, using a modified docker-compose file
I'm trying to follow option 1 (as a start), but still have problems.
I filed a new post here that describes the problem.
END EDIT:
snippet1 - original containers work ok
# the original containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26ba325e737d webserver_nginx "nginx -g 'daemon of…" 3 hours ago Up 43 minutes 0.0.0.0:80->80/tcp, 443/tcp webserver_nginx_1
08ef8a443658 webserver_web "flask run --host=0.…" 3 hours ago Up 43 minutes 0.0.0.0:8000->8000/tcp webserver_web_1
33c13a308139 webserver_postgres "docker-entrypoint.s…" 3 hours ago Up 43 minutes 0.0.0.0:5432->5432/tcp webserver_postgres_1
# can access localhost
curl http://localhost:80
<!DOCTYPE html>
...
# can ping web container from the nginx container
docker exec -it webserver_nginx_1 bash
root@26ba325e737d:/# ping web
PING web (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=0.138 ms
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.123 ms
...
# List port mappings for the container
docker port webserver_nginx_1
80/tcp -> 0.0.0.0:80
snippet2 - deploy the containers (for now, still deploying to the same localhost machine)
# create new docker images from the containers
docker commit webserver_nginx_1 webserver_nginx_image2
docker commit webserver_postgres_1 webserver_postgres_image2
docker commit webserver_web_1 webserver_web_image2
# save the docker images into .tar files
docker save webserver_nginx_image2 > /tmp/webserver_nginx_image2.tar
docker save webserver_postgres_image2 > /tmp/webserver_postgres_image2.tar
docker save webserver_web_image2 > /tmp/webserver_web_image2.tar
# load the docker images from tar files
cat /tmp/webserver_nginx_image2.tar | docker load
cat /tmp/webserver_postgres_image2.tar | docker load
cat /tmp/webserver_web_image2.tar | docker load
# Create containers from the new images
docker run -d --name webserver_web_2 webserver_web_image2
docker run -d --name webserver_postgres_2 webserver_postgres_image2
docker run -d --name webserver_nginx_2 webserver_nginx_image2
# stop the original containers and start the deployed containers
docker stop webserver_web_1 webserver_nginx_1 webserver_postgres_1
docker stop webserver_web_2 webserver_nginx_2 webserver_postgres_2
docker start webserver_web_2 webserver_nginx_2 webserver_postgres_2
snippet3 - deployed containers don't work
# the deployed containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15ef8bfc0ceb webserver_nginx_image2 "nginx -g 'daemon of…" 3 hours ago Up 4 seconds 80/tcp, 443/tcp webserver_nginx_2
d6d228599f81 webserver_postgres_image2 "docker-entrypoint.s…" 3 hours ago Up 3 seconds 5432/tcp webserver_postgres_2
a8aac280ea01 webserver_web_image2 "flask run --host=0.…" 3 hours ago Up 4 seconds 8000/tcp webserver_web_2
# can NOT access localhost
curl http://localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
# can NOT ping web container from the nginx container
docker exec -it webserver_nginx_2 bash
root@15ef8bfc0ceb:/# ping web
ping: unknown host
# List port mappings for the container
docker port webserver_nginx_2
# nothing is being shown
snippet4 - the .yml files
cat /home/user/webServer/docker-compose.yml
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /home/user/webServer/web:/home/flask/app/web
    command: /usr/local/bin/gunicorn -w 2 -t 3600 -b :8000 project:app
    depends_on:
      - postgres
    stdin_open: true
    tty: true
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /home/user/webServer/web:/home/flask/app/web
    depends_on:
      - web
  postgres:
    restart: always
    build: ./postgresql
    volumes:
      - data1:/var/lib/postgresql
    expose:
      - "5432"
volumes:
  data1:
,
cat /home/user/webServer/docker-compose.override.yml
version: '3'
services:
  web:
    build: ./web
    ports:
      - "8000:8000"
    environment:
      - PYTHONUNBUFFERED=1
      - FLASK_APP=run.py
      - FLASK_DEBUG=1
    volumes:
      - /home/user/webServer/web:/usr/src/app/web
      - /home/user/webClient/:/usr/src/app/web/V1
    command: flask run --host=0.0.0.0 --port 8000
  nginx:
    volumes:
      - /home/user/webServer/web:/usr/src/app/web
      - /home/user/webClient/:/usr/src/app/web/V1
    depends_on:
      - web
  postgres:
    ports:
      - "5432:5432"
snippet5 - the nginx .conf files
cat /home/user/webServer/nginx/nginx.conf
# Define the user that will own and run the Nginx server
user nginx;
# Define the number of worker processes; recommended value is the number of
# cores that are being used by your server
worker_processes 1;
# Define the location on the file system of the error log, plus the minimum
# severity to log messages for
error_log /var/log/nginx/error.log warn;
# Define the file that will store the process ID of the main NGINX process
pid /var/run/nginx.pid;
# events block defines the parameters that affect connection processing.
events {
    # Define the maximum number of simultaneous connections that can be opened by a worker process
    worker_connections 1024;
}
# http block defines the parameters for how NGINX should handle HTTP web traffic
http {
    # Include the file defining the list of file types that are supported by NGINX
    include /etc/nginx/mime.types;
    # Define the default file type that is returned to the user
    default_type text/html;
    # Define the format of log messages.
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    # Define the location of the log of access attempts to NGINX
    access_log /var/log/nginx/access.log main;
    # Define the parameters to optimize the delivery of static content
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    # Define the timeout value for keep-alive connections with the client
    keepalive_timeout 65;
    # Define the usage of the gzip compression algorithm to reduce the amount of data to transmit
    #gzip on;
    # Include additional parameters for virtual host(s)/server(s)
    include /etc/nginx/conf.d/*.conf;
}
,
cat /home/user/webServer/nginx/myapp.conf
# Define the parameters for a specific virtual host/server
server {
    # Define the server name, IP address, and/or port of the server
    listen 80;
    # Define the specified charset to the “Content-Type” response header field
    charset utf-8;
    # Configure NGINX to deliver static content from the specified folder
    location /static {
        alias /home/flask/app/web/instance;
    }
    location /foo {
        root /usr/src/app/web;
        index index5.html;
    }
    location /V1 {
        root /usr/src/app/web;
        index index.html;
    }
    # Configure NGINX to reverse proxy HTTP requests to the upstream server (Gunicorn (WSGI server))
    location / {
        root /;
        index index1.html;
        resolver 127.0.0.11;
        set $example "web:8000";
        proxy_pass http://$example;
        # Redefine the header fields that NGINX sends to the upstream server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Define the maximum file size on file uploads
        client_max_body_size 10M;
        client_body_buffer_size 10M;
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            #
            # Custom headers and headers various browsers *should* be OK with but aren't
            #
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            #
            # Tell client that this pre-flight info is valid for 20 days
            #
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        if ($request_method = 'POST') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        }
        if ($request_method = 'GET') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        }
    }
}
You need to copy the docker-compose.yml file to the target machine (with some edits) and re-run it there. If you're not also rebuilding the images there, you need to modify the file to not include volumes: referencing a local source tree and change the build: block to reference some image:. Never run docker commit.
The docker-compose.yml file mostly consists of options equivalent to docker run options, but in a more convenient syntax. For example, when docker-compose.yml says
services:
  nginx:
    ports:
      - "80:80"
That's equivalent to a docker run -p 80:80 option, but when you later do
docker run -d --name webserver_nginx_2 webserver_nginx_image2
That option is missing. Docker Compose also implicitly creates a Docker network for you, and without the equivalent docker run --net option, connectivity between containers doesn't work.
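For illustration, here is roughly what you would have to run by hand to get the same result. This is only a sketch: the network name and the container names are made up for the example rather than taken from your project, and the image names are the Compose-built ones from your docker ps output.
# a user-defined network so containers can resolve each other by name
docker network create webserver_default
# the --net and -p options that Compose would otherwise add for you
docker run -d --name postgres --net webserver_default webserver_postgres
docker run -d --name web --net webserver_default webserver_web
docker run -d --name nginx --net webserver_default -p 80:80 webserver_nginx
Without the shared network the name web doesn't resolve, and without -p 80:80 nothing is published on the host, which is exactly what snippet3 shows.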
All of these options must be specified every time you docker run the container. docker commit does not persist them. In general you should never run docker commit, especially if you already have Dockerfiles for your images; in the scenario you're describing here the image that comes out of docker commit won't be any different from what you could docker build yourself, except that it will lose some details like the default CMD to run.
As the commenters suggest, the best way to run the same setup on a different machine is to set up a Docker registry (or use a public service like Docker Hub), docker push your built images there, copy only the docker-compose.yml file to the new machine, and run docker-compose up there. (Think of it like the run.sh script that launches your containers.) You have to make a couple of changes: replace the build: block with the relevant image: and delete the volumes: that reference local source code. If you need the database data this needs to be copied separately.
services:
  web:
    restart: always
    image: myapp/web
    depends_on:
      - postgres
    ports:
      - "8000:8000"
    # expose:, command:, and core environment: should be in Dockerfile
    # stdin_open: and tty: shouldn't matter for a Flask app
    # volumes: replace code in the image and you're not copying that
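For completeness, the registry round trip described above might look roughly like this; the image name myapp/web and the registry account are placeholders, not something from your project:
# on the build machine
docker-compose build
docker tag webserver_web myapp/web:latest
docker push myapp/web:latest
# on the target machine, with the edited docker-compose.yml copied over
docker pull myapp/web:latest
docker-compose up -d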
The way you've tried to deploy your app on another server is not how Docker recommends you do it. You need to transfer all of the files required to get the application running, along with a Dockerfile to build the image and a docker-compose.yml that defines how each of the services of your application gets deployed. Docker has a nice document to get started with.
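As a hedged illustration only, a Dockerfile for the web service might look something like the following; the base image, the requirements file, and the paths are assumptions, since the actual Dockerfile under ./web isn't shown:
# hypothetical Dockerfile for the web service (base image and paths are guesses)
FROM python:3.7
WORKDIR /home/flask/app/web
# requirements.txt is assumed to list flask, gunicorn, etc.
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy the application code into the image instead of bind-mounting it
COPY . .
EXPOSE 8000
CMD ["gunicorn", "-w", "2", "-t", "3600", "-b", ":8000", "project:app"]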
If your intention was to deploy the application on another server for redundancy, and not just to create a replica, consider using Docker in swarm mode.
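If you do go the Swarm route, the basic flow is short. This is a sketch and assumes the compose file already uses image: references that the other node can pull; "webserver" is an arbitrary stack name:
# on the manager node
docker swarm init
# deploy the stack from the (image-based) compose file
docker stack deploy -c docker-compose.yml webserver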
I have docker running on a machine with the IP address fd42:1337::31. One container is a nginx reverse proxy with the port mapping 443:443, in its configuration file it proxy_pass-es depending on the server name to other ports on the same machine, e.g.
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name plex.mydomain.tld;
    location / {
        proxy_pass http://[fd42:1337::31]:32400;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name file.mydomain.tld;
    location / {
        proxy_pass http://[fd42:1337::31]:2020;
    }
}
These other ports belong to Bottle (Python) servers or to other containers with mapped ports.
I've started this container with the command
docker run -d -p 443:443 (volume mappings) --name reverseproxy nginx
and it has served me well for a year.
I've now decided to work with docker-compose and have the following configuration file:
version: '3'
services:
  reverseproxy:
    image: "nginx"
    ports:
      - "443:443"
    volumes:
      (volume mappings)
When I shut down the original container and start my new one with docker-compose up, it starts, but every request gives me something like this:
2019/02/13 17:04:43 [crit] 6#6: *1 connect() to [fd42:1337::31]:32400 failed (99: Cannot assign requested address) while connecting to upstream, client: 192.168.178.126, server: plex.mydomain.tld, request: "GET / HTTP/1.1", upstream: "http://[fd42:1337::31]:32400/", host: "plex.mydomain.tld"
Why is the new container behaving differently? What do I have to change?
(I know I can have a virtual network mode to connect to other containers directly, but my proxy is supposed to connect to some services that are not inside containers (but on the same metal).)
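One difference worth noting: plain docker run without --net attaches the container to Docker's default bridge network, while docker-compose creates its own project network, which may be configured differently (for example with respect to IPv6). As a sketch only, you could force the old behaviour like this; whether that actually resolves the fd42:1337::31 upstream error depends on how networking and IPv6 are configured on your daemon:
version: '3'
services:
  reverseproxy:
    image: "nginx"
    network_mode: bridge   # attach to the default bridge, as plain docker run did
    ports:
      - "443:443"
    volumes:
      (volume mappings)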
I have a docker-compose.yaml similar to this (shortened for simplicity):
# ...
services:
  my-ui:
    # ...
    ports:
      - 5402:8080
    networks:
      - my-net
networks:
  my-net:
    external:
      name: my-net
and I'm trying to set up nginx as a reverse proxy with this configuration:
upstream client {
    server my-ui:5402;
}

server {
    listen 80;
    location / {
        proxy_pass http://client;
    }
}
and this is the docker-compose.yaml I have for nginx:
# ...
services:
  reverse-proxy:
    # ...
    ports:
      - 5500:80
    networks:
      - my-net
networks:
  my-net:
    external:
      name: my-net
What happens now is that when I run my-ui and reverse-proxy (each with its own docker-compose up) and go to http://localhost:5500, I get a Bad Gateway message, and my nginx log says this:
connect() failed (111: Connection refused) while connecting to
upstream, client: 172.19.0.1, server: , request: "GET / HTTP/1.1",
upstream: "http://172.19.0.5:5402/", host: "localhost:5500"
If I exec into my nginx container and use ping:
ping my-ui
ping 172.19.0.5
Both are successful, but if I want to, for example, curl:
curl -L http://my-ui
curl -L http://my-ui:5402
curl -L http://172.19.0.1
All of them fail with a connection refused message. What am I missing here?
PS: I'm not sure, but it might be useful to add that my-ui is a basic vuejs application, running on Webpack dev server.
PS2: I also tried passing host headers etc., but got the same result.
The name of the container (my-ui) resolves to the container's IP address. Therefore, in the upstream you have to specify the container's port, not the port you have mapped to the host.
upstream client {
    server my-ui:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://client;
    }
}
You could also configure your upstream with the name of your host machine and use the mapped port (server <name of host>:5402), but this could get quite messy and you would lose the advantage of isolating services with Docker networks.
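A sketch of that second variant, for illustration only: host.docker.internal works out of the box on Docker Desktop, while on Linux you would substitute the host's own hostname or IP (or add it as an extra host pointing at the host gateway):
upstream client {
    # talk to the port published on the host instead of the container port
    server host.docker.internal:5402;
}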
Furthermore, you could also remove the port mapping, unless you need to access the web service without the reverse proxy:
# ...
services:
  my-ui:
    # ...
    # ports:
    #   - 5402:8080
To get the hang of nginx with Docker, I have a very simple nginx.conf file plus docker-compose, running 2 containers for 1 service (the service itself + its db).
What I want:
localhost --> show static page
localhost/pics --> show another static page
localhost/wekan --> redirect to my container, which is running on port 3001.
The last part (the redirect to the Docker container) does not work. The app can be reached at localhost:3001, though.
My nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    server {
        listen 80;

        location / {
            root /home/user/serverTest/up1; # index.html is here
        }

        location /wekan {
            proxy_pass http://localhost:3001;
            rewrite ^/wekan(.*)$ $1 break; # this didn't help either
        }

        location /pics {
            proxy_pass http://localhost/example.jpg;
        }

        location ~ \.(gif|jpg|png)$ {
            root /home/user/serverTest/data/images;
        }
    }
}
docker-compose.yml:
version: '2'
services:
  wekandb:
    image: mongo:3.2.21
    container_name: wekan-db
    restart: always
    command: mongod --smallfiles --oplogSize 128
    networks:
      - wekan-tier
    expose:
      - 27017
    volumes:
      - /home/user/wekan/wekan-db:/data/db
      - /home/user/wekan/wekan-db-dump:/dump

  wekan:
    image: quay.io/wekan/wekan
    container_name: wekan-app
    restart: always
    networks:
      - wekan-tier
    ports:
      # Docker outsideport:insideport
      - 127.0.0.1:3001:8080
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=http://localhost
Looking at the nginx-error logs, I get this:
2018/12/17 11:57:16 [error] 9060#9060: *124 open() "/home/user/serverTest/up1/31fb090e9e6464a4d62d3588afc742d2e11dc1f6.js" failed (2: No such file or directory),
client: 127.0.0.1, server: ,
request: "GET /31fb090e9e6464a4d62d3588afc742d2e11dc1f6.js?meteor_js_resource=true HTTP/1.1", host: "localhost",
referrer: "http://localhost/wekan"
So I guess this makes sense because, in my understanding, nginx is now resolving the request against the root given in location /, but clearly this is not where the container is serving from.
How do I prevent that?
Your nginx cannot access the local network interface of your docker composition.
Try to bind wekan's port like this:
wekan:
  ports:
    - 127.0.0.1:3001:8080
Mind the 127.0.0.1
See https://docs.docker.com/compose/compose-file/#ports
The problem was within the docker-compose configuration.
For anyone wondering: all you need is a proxy_pass to addr:port or addr:port/, where the second form (with the trailing slash) does the same as the rewrite part, so the rewrite can be skipped.
Apart from that, I had to add /wekan to the ROOT_URL inside my docker-compose.
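Putting that together, the relevant pieces would look roughly like this; it is a sketch reconstructed from the description above, not the poster's exact final files:
# nginx: the trailing slash on proxy_pass strips the /wekan prefix, replacing the rewrite
location /wekan/ {
    proxy_pass http://localhost:3001/;
}

# docker-compose: tell wekan it is served under /wekan
environment:
  - MONGO_URL=mongodb://wekandb:27017/wekan
  - ROOT_URL=http://localhost/wekan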
I am using nginx as a simple proxy service for my multiple dockerized containers (including an image with an nginx layer as well). I am trying to create a vhost for each branch, and this is causing a lot of trouble. What I want to achieve is:
An nginx proxy service that proxies to containers on these paths:
[branch_name].a.xyz.com (frontend container)
some-jenkins.xyz.com (another container)
some other containers not existing yet
nginx.conf inside proxy container:
upstream frontend-branch {
    server frontend:80;
}

server {
    listen 80;
    server_name ~^(?<branch>.*)\.a\.xyz\.com;

    location / {
        proxy_pass http://frontend-branch;
    }
}
nginx.conf inside frontend container:
server {
    listen 80;

    location / {
        root /www/html/branches/some_default_branch;
    }
}

server {
    listen 80;

    location ~^/(?<branch>.*)$ {
        root /www/html/branches/$branch;
    }
}
docker-compose for proxy:
version: "2.0"
services:
proxy:
build: .
ports:
- "80:80"
restart: always
networks:
default:
external:
name: nginx-proxy
Inside the frontend project it looks pretty much the same, except for the service name and of course the ports (81:80).
Is there any way to "pass" the branch as a path to the frontend container (e.g. something like frontend:80/$branch)?
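As a sketch of what that could look like (not a tested solution): when the proxy_pass target contains variables, nginx can append the captured branch to the upstream URI, for example:
server {
    listen 80;
    server_name ~^(?<branch>.*)\.a\.xyz\.com;

    location / {
        # prepend the branch captured from the host name to the proxied path
        proxy_pass http://frontend-branch/$branch$request_uri;
    }
}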
Is it even possible to create that kind of proxy? I don't want to use the same nginx-based image as both the proxy and the 'frontend keeper', because in the future I will want to use the proxy for more than just one container, so having the configuration for the whole site proxy inside the frontend project would be weird.
Cheers