I have been attempting to configure an nginx reverse proxy in Docker Compose for an app service that runs on port 3838. I want to reach the app through nginx-proxy on port 80. I have combed through several tutorials online, but none of them has helped me resolve the problem. I also tried to follow this https://github.com/dmitrym0/simple-lets-encrypt-docker-compose-sample/blob/master/docker-compose.yml but it didn't work. Here is my current docker-compose file.
docker-compose.yml
version: '3'
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "82:80"
- "444:443"
volumes:
- "/etc/nginx/vhost.d"
- "/usr/share/nginx/html"
- "/var/run/docker.sock:/tmp/docker.sock:ro"
- "/etc/nginx/certs"
app:
build:
context: .
dockerfile: ./app/Dockerfile
image: rocker/shiny
container_name: docker-app
restart: always
ports:
- 3838:3838
Am I missing something? Sometimes I see VIRTUAL_HOST environment variables included in the docker-compose file. Is that needed? Also, do I have to manually configure nginx config files and attach them to the jwilder/nginx-proxy Dockerfile? I am a newbie at Docker and I really need some help.
Please refer to the Multiple Ports section of the nginx-proxy official docs. In your case, besides setting the mandatory VIRTUAL_HOST env variable (without it a container won't be reverse proxied by the nginx-proxy service), you have to set the VIRTUAL_PORT variable, because nginx-proxy defaults to port 80, while your app service is bound to port 3838.
Try this docker-compose.yml file to see if it works:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
app:
build:
context: .
dockerfile: ./app/Dockerfile
image: rocker/shiny
container_name: docker-app
restart: always
expose:
- 3838
environment:
- VIRTUAL_HOST=app.localhost
- VIRTUAL_PORT=3838
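Once both containers are up, you can smoke-test the proxy from the host. nginx-proxy routes on the request's Host header, so the following should reach the app (a quick check, assuming the Compose file above):
docker-compose up -d
curl -H "Host: app.localhost" http://localhost/
# or, if your resolver maps *.localhost to 127.0.0.1:
curl http://app.localhost/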
Hello, I have multiple projects that have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how to set up the networking between these projects so that they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from all the other projects in this one file. I do not think this is ideal, and there is a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an External Docker Network.
Set up the network, and the containers can talk to each other by their service names.
Set up the network on the host:
docker network create my-net
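You can confirm that the network exists, and later see which containers have attached to it:
docker network ls
docker network inspect my-net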
First compose file
version: '3.9'
services:
mymongo:
image: mongo:latest
restart: unless-stopped
container_name: mongo
environment:
MONGO_INITDB_DATABASE: mymongo
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
volumes:
- ./database:/data/db
ports:
- "27017:27017"
networks:
default:
external: true
name: my-net
Second compose file
version: '3.9'
services:
ui:
build:
context: ./build
dockerfile: Dockerfile_ui
image: ui
restart: "no"
container_name: ui
ports:
- "8005:3000"
command: ["npm", "start"]
networks:
default:
external: true
name: my-net
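With the shared external network in place, the two stacks can be started independently and reach each other by service name. A quick sketch, assuming the two files live in mongo/ and ui/:
docker network create my-net
docker-compose -f mongo/docker-compose.yml up -d
docker-compose -f ui/docker-compose.yml up -d
# inside the ui container, the database is reachable by its service name:
#   mongodb://root:password@mymongo:27017/mymongo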
You can do this without any special Compose setup, if:
each project is self-contained (they do not share databases)
the service locations are configurable via environment variables
you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
db:
image: mysql/mysql-server
app:
build: .
ports: ['4000:4000']
depends_on: [db]
# go/docker-compose.yml
version: '3.8'
services:
mongo:
image: mongo
service:
build: .
ports: ['8080:8080']
depends_on: [mongo]
environment:
- RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname where the container can call back to the host. On macOS and Windows hosts, Docker provides the special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
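On Linux hosts, host.docker.internal is not defined by default; with Docker 20.10+ you can map it to the host gateway yourself in the client's Compose file (a sketch, added to the go service):
services:
  service:
    extra_hosts:
      # resolves host.docker.internal to the host's gateway IP (Docker 20.10+)
      - "host.docker.internal:host-gateway"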
If you're doing development, you can run the service you're working on locally, run its dependencies in containers, and point the environment variable at the container's published port:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
I'm a little bit confused about Docker and network communication. I tried many things, but nothing worked :-(
I have the following docker compose:
version: '3'
services:
nginx:
container_name: nginx
image: nginx:stable-alpine
restart: unless-stopped
tty: true
ports:
- 80:80
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d:ro
depends_on:
- app
networks:
- frontend
- backend
app:
restart: unless-stopped
tty: true
build:
context: .
dockerfile: Dockerfile
container_name: app
expose:
- "9090"
ports:
- 9090:9090
networks:
- backend
networks:
frontend:
backend:
And I would like to communicate:
From nginx to app (this probably works)
From app to PostgreSQL, which is installed on the server (not in a Docker container)
I cannot get this to work; I tried many things, but something is wrong :-(
You can choose either of these two options:
Make your PostgreSQL server listen on all network interfaces (or only on the Docker bridge, for a more secure but more complex setup). To achieve that, make sure your config looks like this:
# grep listen /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'
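Note that listen_addresses alone is usually not enough: pg_hba.conf must also allow clients from the containers' subnet. A sketch, assuming the default docker0 bridge (172.17.0.0/16):
# /var/lib/pgsql/data/pg_hba.conf: allow clients from the default Docker bridge subnet
host    all    all    172.17.0.0/16    md5
The app container can then reach the host at the bridge gateway IP, typically 172.17.0.1 on Linux.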
Use host network mode in your Docker Compose file, which runs the container in your host's network namespace instead of creating a new network:
network_mode: "host"
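With host networking the container shares the host's network stack, so the host's PostgreSQL is simply localhost:5432, and any ports:/expose: settings on that service are ignored. A minimal sketch for the app service (the DATABASE_URL variable and credentials are assumptions):
app:
  build:
    context: .
    dockerfile: Dockerfile
  network_mode: "host"
  environment:
    # hypothetical connection string; host networking makes the host's postgres local
    - DATABASE_URL=postgres://user:password@localhost:5432/mydb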
I tried this in the docker-compose.yml file, but I can't get PHP working in the nginx server. What I'm trying to do is simply get nginx with PHP working:
web:
image: nginx:latest
ports:
- "8080:80"
volumes:
- ./docker-nginx-php/html:/usr/share/nginx/html
links:
- php
php:
image: php:7-fpm
volumes:
- ./docker-nginx-php/html:/usr/share/nginx/html
Hope someone knows how to get it working!
I have apache2 installed on my host system, which serves some of my apps, but I want nginx with PHP to serve another domain. Port 80 is currently in use by the apache2 listener; that's why I use 8080:80 in the example above.
You also need to specify the environment variable VIRTUAL_HOST on the php container, as well as open the port within Docker for connections with other containers, like:
php:
image: php:7-fpm
environment:
- VIRTUAL_HOST=domain.example.com
ports:
- 80
volumes:
- ./docker-nginx-php/html:/usr/share/nginx/html
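Independently of VIRTUAL_HOST, plain nginx will not execute PHP without a server block that forwards .php requests to the php-fpm container. A minimal sketch (the file name and mount path are assumptions) that you could mount into the web service as ./docker-nginx-php/default.conf:/etc/nginx/conf.d/default.conf:ro:
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.php index.html;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        # "php" resolves to the php-fpm container; php:7-fpm listens on 9000
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Both containers mount the same html directory at the same path, so SCRIPT_FILENAME resolves identically inside the php container.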
I am trying to get this tutorial to work. At the end of the tutorial it says to open http://192.168.99.100:8080 and see your website, but nothing shows up for me at all. After running docker-machine ls there are no entries, so my question is: how can I get a docker-machine instance running for an nginx container? I assume that is the one that serves the wwwroot folder, since that is what points to it and has all of the port and root settings for the server.
My code in the docker-compose.yml is the same as the tutorial but here it is:
version: '2'
services:
nginx:
build:
context: .
dockerfile: docker.nginx
image: my-nginx
container_name: my-nginx-container
ports:
- "8080:8080"
volumes:
- wwwroot:/wwwroot
webpack:
build:
context: .
dockerfile: docker.webpack
image: my-webpack
container_name: my-webpack-container
ports:
- "35729:35729"
volumes:
- ./app:/app
- /app/node_modules
- wwwroot:/wwwroot
volumes:
wwwroot:
driver: local
From the comments, it has been established that your docker machine is running and the server you are trying to run is running on localhost:8080.
Before you run docker-compose up, make sure that the directory contains the other required files and folders, for example nginx.conf and index.html.
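A quick way to verify, from the directory holding the Compose file (the exact file list depends on the tutorial):
ls
# docker-compose.yml  docker.nginx  docker.webpack  nginx.conf  app/
docker-compose up -d --build
curl http://localhost:8080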
I'm using Docker on OS X via boot2docker.
I have 2 hosts, site1.loc.test.com and site2.loc.test.com, pointed at the IP address of the Docker host.
Both should be available via ports 80 and 443.
So I'm using jwilder/nginx-proxy for reverse proxy purposes.
But in fact, when I'm running all of them via docker-compose, every time I try to open a site via port 80, I get a redirect to 443 (301 Moved Permanently).
Maybe I've missed something in the jwilder/nginx-proxy configuration?
docker-compose.yml
proxy:
image: jwilder/nginx-proxy
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- certs:/etc/nginx/certs
ports:
- "80:80"
- "443:443"
site1:
image: httpd:2.4
volumes:
- site1:/usr/local/apache2/htdocs
environment:
VIRTUAL_HOST: site1.loc.test.com
expose:
- "80"
site2:
image: httpd:2.4
volumes:
- site2:/usr/local/apache2/htdocs
environment:
VIRTUAL_HOST: site2.loc.test.com
expose:
- "80"
Just to keep this topic up to date: jwilder/nginx-proxy has meanwhile introduced a flag for that, HTTPS_METHOD=noredirect, to be set as an environment variable.
Further reading on GitHub.
I think your configuration should be correct, but it seems that this is the intended behaviour of jwilder/nginx-proxy. See these lines in the file nginx.tmpl: https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl#L89-L94
It seems that if a certificate is found, you will always be redirected to https.
EDIT: I found the confirmation in the documentation
The behavior for the proxy when port 80 and 443 are exposed is as follows:
If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS is always preferred when available.
You can still use a custom configuration. You could also try to override the file nginx.tmpl in a new Dockerfile.
To serve traffic in both SSL and non-SSL modes without redirecting to SSL, you can include the environment variable HTTPS_METHOD=noredirect (the default is HTTPS_METHOD=redirect).
HTTPS_METHOD must be specified on each container for which you want to override the default behavior.
Here is an example Docker Compose file:
version: '3'
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- '80:80'
- '443:443'
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./config/certs:/etc/nginx/certs
environment:
DEFAULT_HOST: my.example.com
app:
build:
context: .
dockerfile: ./Dockerfile
environment:
HTTPS_METHOD: noredirect
VIRTUAL_HOST: my.example.com
Note: As in this example, environment variable HTTPS_METHOD must be set on the app container, not the nginx-proxy container.
Ref: How SSL Support Works section for the jwilder/nginx-proxy Docker image.