I have a problem and I can't find any solution.
I have a docker-compose file that looks like this:
version: '3.8'
services:
  php:
    container_name: php
    image: php:8.0-apache
    depends_on:
      - mariadb
    volumes:
      - ./php/src/www:/var/www/html/
      - ./php/src/api:/var/www/api/
    ports:
      - 8000:80
How can I configure the Docker container with a main domain and a subdomain on localhost, like this:
localhost:8000 -> ./php/src/www
api.localhost:8000 -> ./php/src/api
I searched the web for a solution, but I only found examples for real domains like www.example.com, not for localhost in a Docker setup.
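One way to get there (a sketch, not from the original setup: the split into separate www and api services and the jwilder/nginx-proxy container are assumptions) is to run one container per document root and let a reverse proxy route requests by hostname:

version: '3.8'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - 8000:80
    volumes:
      # nginx-proxy watches the Docker socket to generate its routing config
      - /var/run/docker.sock:/tmp/docker.sock:ro
  www:
    image: php:8.0-apache
    volumes:
      - ./php/src/www:/var/www/html/
    environment:
      - VIRTUAL_HOST=localhost
  api:
    image: php:8.0-apache
    volumes:
      - ./php/src/api:/var/www/html/
    environment:
      - VIRTUAL_HOST=api.localhost
  # (mariadb and any other services from the original compose file stay unchanged)

With this, localhost:8000 serves ./php/src/www and api.localhost:8000 serves ./php/src/api. Chrome and Firefox resolve *.localhost names to 127.0.0.1 automatically; for other clients, add 127.0.0.1 api.localhost to /etc/hosts.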
Related
I have a Docker Compose web app:
version: '3.3'
services:
  app:
    image: xxxxxxxxxxxxx
    restart: always
    network_mode: 'host'
The image name is hidden because the code is private.
After startup I can run wget http://localhost:4004 on the server, but when I call PUBLICIP:4004 it doesn't work; it looks like the port is not accessible. The firewall is disabled. I am using Ubuntu.
Is there anything wrong with the Docker Compose file?
I tried Google and SO.
If you only want to publish the port, add the ports key:
version: '3.3'
services:
  app:
    image: xxxxxxxxxxxxx
    ports:
      - "4004:4004"
You can read more here:
https://docs.docker.com/compose/networking/
You will probably also be interested in connecting it to a domain and securing it with SSL. I recommend checking out nginx-proxy-automation.
https://github.com/evertramos/nginx-proxy-automation
Below I'm appending an example from my production setup that works with this library:
version: '3'
services:
  gql:
    image: registry.digitalocean.com/main/xue-gql
    ports:
      - ${DOCKER_PORT}:4000
    env_file:
      - .env
    environment:
      - VIRTUAL_HOST=${HOST}
      - LETSENCRYPT_HOST=${HOST}
      - VIRTUAL_PORT=${DOCKER_PORT}
    command: node ./src/index.js
  redis:
    image: 'redis:alpine'
networks:
  default:
    external:
      name: ${NETWORK:-proxy}
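The compose file above pulls HOST, DOCKER_PORT and the optional NETWORK from an .env file next to it. A minimal sketch of that file (the variable names come from the compose file above; the values are placeholders, not from the original):

# .env (placeholder values)
HOST=gql.example.com
DOCKER_PORT=4000
NETWORK=proxy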
I have been attempting to configure an nginx reverse proxy with PHP support in Docker Compose, where the app service runs on port 3838. I want the app to be served through nginx-proxy on port 80. I have combed through several tutorials online, but none of them has helped me resolve the problem. I also tried to follow https://github.com/dmitrym0/simple-lets-encrypt-docker-compose-sample/blob/master/docker-compose.yml, but it didn't work. Here is my current Docker Compose file.
docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "82:80"
      - "444:443"
    volumes:
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "/etc/nginx/certs"
  app:
    build:
      context: .
      dockerfile: ./app/Dockerfile
    image: rocker/shiny
    container_name: docker-app
    restart: always
    ports:
      - 3838:3838
Am I missing something? Sometimes I see VIRTUAL_HOST environment variables included in the docker-compose file. Is that needed? Also, do I have to manually configure nginx config files and attach them to the jwilder/nginx-proxy Dockerfile? I am a newbie at Docker and I really need some help.
Please refer to the Multiple Ports section of the official nginx-proxy docs. In your case, besides setting the mandatory VIRTUAL_HOST env variable (without it a container won't be reverse proxied by the nginx-proxy service), you have to set the VIRTUAL_PORT variable, because nginx-proxy defaults to the service port 80, but your app service is bound to port 3838.
Try this docker-compose.yml file to see if it works:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
app:
build:
context: .
dockerfile: ./app/Dockerfile
image: rocker/shiny
container_name: docker-app
restart: always
expose:
- 3838
environment:
- VIRTUAL_HOST=app.localhost
- VIRTUAL_PORT=3838
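Once the stack is up, you can verify the routing by sending a request with the expected Host header (a quick check, not from the original answer; app.localhost matches the VIRTUAL_HOST set above):

# ask nginx-proxy (listening on host port 80) for the app's virtual host
curl -H "Host: app.localhost" http://localhost/

Chrome and Firefox also resolve *.localhost names to 127.0.0.1, so opening http://app.localhost/ in the browser should work directly.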
I am trying to access a Docker container from another container using the localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
On my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside the django_container, the same command gives a Connection refused error.
I tried adding them to the same network, but the result didn't change.
However, if I use the internal IP of that container, like curl 'http://172.27.0.2:8123/', I get the response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With this line here, - "8124:8123", you're mapping the clickhouse container's port 8123 to port 8124 on localhost, which allows you to access clickhouse from localhost at port 8124.
If you want to hit the clickhouse container from within the Docker network (i.e. from another container), you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
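For a quick check from the host, you can run that same request inside the django container (assuming curl is installed in that image; the container name matches the compose file above):

docker exec -it django curl http://clickhouse:8123/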
As in Billy Ferguson's answer, you can reach it via localhost on the host machine only because you define a port mapping that routes localhost:8124 to clickhouse:8123.
But from another container (django) you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, but note that the django container will then share the host's entire network.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    # ports and links are incompatible with host networking and are dropped here;
    # the container uses the host's network directly instead
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    network_mode: "host"
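With host networking in place, the django container sees the host's interfaces, so the published ClickHouse port is reachable the same way as from the host (an illustration, assuming the clickhouse service from the question still publishes port 8124):

# run inside the django container
curl http://localhost:8124/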
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.
I currently have the following setup:
# https://github.com/SeleniumHQ/docker-selenium
version: "3"
services:
  selenium-hub:
    image: ${DOCKER_REGISTRY}selenium/hub:2.53.1-americium
    container_name: selenium-hub
    ports:
      - 4444:4444
    environment:
      - NODE_MAX_SESSION=5
      - GRID_DEBUG=false
  selenium-chrome:
    image: ${DOCKER_REGISTRY}selenium/node-chrome-debug:2.53.1-americium
    container_name: chrome
    ports:
      - 5900:5900
    depends_on:
      - selenium-hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-hub
      - HUB_PORT_4444_TCP_PORT=4444
      - SHM-SIZE=2g
      - SCREEN_WIDTH=2560
      - SCREEN_HEIGHT=1440
      - GRID_DEBUG=false
    volumes:
      - /tmp/
      - /dev/shm/:/dev/shm/
  tomcat:
    build:
      context: .
      args:
        ARTIFACTORY: ${DOCKER_REGISTRY}
    container_name: tomcat
    restart: on-failure
    ports:
      - 8080:8080
    depends_on:
      - db
    volumes:
      - ./src/test/resources/tomcat/context.xml:/opt/tomcat/conf/context.xml
      - ./src/test/resources/tomcat/tomcat-users.xml:/opt/tomcat/conf/tomcat-users.xml
The above config sets up a Selenium hub and deploys a webapp to a Tomcat container. The resources that are served have hrefs like http://tomcat:8080/...
If I want to access these resources via their hrefs from the outside, the tomcat name will not resolve, because that DNS name only exists inside the container network. One solution would be to expose that internal DNS to the host machine, but I have no idea how.
Another would be to do a string replace on the href values and change tomcat to localhost, but that looks kind of dirty.
Does anyone know how I can expose the internal DNS to the host machine?
The answer can be found at https://docs.docker.com/config/containers/container-networking/:
Exposing /etc/hosts and /etc/resolv.conf
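A minimal illustration of that approach (an assumption based on the linked docs, not spelled out above): map the container hostname to the loopback address in the host machine's /etc/hosts, so the generated hrefs resolve from the outside:

# on the host machine, append to /etc/hosts
127.0.0.1   tomcat

http://tomcat:8080/ on the host then reaches the Tomcat container through its published port 8080.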
I'm trying to setup my environment to develop Phoenix apps using Docker.
Up to this point everything is great, except for the VIRTUAL_HOST part: I'd like to access my app by visiting app.dev instead of localhost:4000.
I'm using this docker-compose.yml file :
version: '2'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
  postgres:
    image: postgres:latest
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
  web:
    build: .
    command: mix phx.server
    volumes:
      - .:/app
    ports:
      - 4000:4000
    depends_on:
      - postgres
    environment:
      - MIX_ENV=dev
      - VIRTUAL_HOST=app.dev
      - VIRTUAL_PORT=4000
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
    links:
      - postgres
When I try to access app.dev I get "site can't be reached".
edit #1
To use VIRTUAL_HOST, do I really need the reverse proxy for this? Or is a simple DNS entry or something similar enough?
edit #2
OK, that's strange: when I curl app.dev I get the HTML content, but I can't access it (app.dev) from the browser.
You don't need nginx, you just need to add app.dev to your /etc/hosts file.
127.0.0.1 app.dev
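A likely explanation for curl working while the browser fails (edit #2): .dev is a real gTLD on the HSTS preload list, so Chrome and Firefox force HTTPS for it; using a reserved development name such as app.test or app.localhost avoids that.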