What's wrong with my code? Thanks in advance!
I'm trying to set up a virtual host for my Docker container.
localhost:8000 works perfectly, but when I try to access it through http://borgesmelo.local/ I get ERR_NAME_NOT_RESOLVED. What could be missing?
This is my docker-compose.yml:
version: '3.3'
services:
  borgesmelo_db:
    image: mariadb:latest
    container_name: borgesmelo_db
    restart: always
    volumes:
      - ./mariadb/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: My#159#Sql
      MYSQL_PASSWORD: My#159#Sql
  borgesmelo_ws:
    image: richarvey/nginx-php-fpm:latest
    container_name: borgesmelo_ws
    restart: always
    volumes:
      - ./public/:/var/www/html
    ports:
      - "8000:80"
  borgesmelo_wp:
    image: wordpress:latest
    container_name: borgesmelo_wp
    volumes:
      - ./public/:/var/www/html
    restart: always
    environment:
      VIRTUAL_HOST: borgesmelo.local
      WORDPRESS_DB_HOST: borgesmelo_db:3306
      WORDPRESS_DB_PASSWORD: My#159#Sql
    depends_on:
      - borgesmelo_db
      - borgesmelo_ws
  borgesmelo_phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: borgesmelo_phpmyadmin
    links:
      - borgesmelo_db
    ports:
      - "8001:80"
    environment:
      - PMA_ARBITRARY=1
  borgesmelo_vh:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "8002:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  default:
    external:
      name: nginx-proxy
This is my hosts file (/etc/hosts) [macOS]
#DOCKER
127.0.0.1:8000 borgesmelo.local
The hosts file doesn't support ports; it's for name lookup only. So you would have to set your hosts file to:
127.0.0.1 borgesmelo.local
Then access your application at http://borgesmelo.local:8000.
If you are listening on port 8000 because something else already occupies port 80, consider using nginx as a reverse proxy: it can route to different applications based on the server_name, so you can reach multiple applications through port 80. If you're dealing with Docker containers, also consider looking into Traefik as a reverse proxy.
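For the Compose file above, a minimal sketch of that idea (assuming nothing else on the host is bound to port 80): publish the jwilder/nginx-proxy container on port 80 instead of 8002, so that VIRTUAL_HOST routing works without a port suffix.
borgesmelo_vh:
  image: jwilder/nginx-proxy
  container_name: nginx-proxy
  ports:
    - "80:80"   # answer on the default HTTP port instead of 8002
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
With 127.0.0.1 borgesmelo.local in /etc/hosts, http://borgesmelo.local/ then reaches the proxy, which forwards to the container whose VIRTUAL_HOST matches.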
Related
I am trying to run a WordPress site inside a Docker container on an Ubuntu VPS using nginx-proxy.
I created the following docker-compose.yml file:
version: '3.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    restart: always
    networks:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /etc/nginx/vhost.d:/etc/nginx/vhost.d:ro
      - /etc/certificates:/etc/nginx/certs
  wordpress:
    image: wordpress
    container_name: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      - VIRTUAL_HOST=wordpress.domain.com
      - VIRTUAL_PORT=5500
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=db_username
      - WORDPRESS_DB_PASSWORD=db_password
      - WORDPRESS_DB_NAME=db_name
    depends_on:
      - nginx-proxy
      - db
    networks:
      - nginx-proxy
    volumes:
      - wordpress:/var/www/html
    ports:
      - 5500:5500
    expose:
      - 5500
  db:
    image: mysql:latest
    container_name: db
    restart: always
    environment:
      MYSQL_DATABASE: db_name
      MYSQL_USER: db_username
      MYSQL_PASSWORD: db_password
      MySQL_RANDOM_ROOT_PASSWORD: '1'
    depends_on:
      - nginx-proxy
    networks:
      - nginx-proxy
    volumes:
      - db:/var/lib/mysql
    ports:
      - 5600:5600
    expose:
      - 5600
volumes:
  wordpress:
  db:
Every time I run docker-compose up I get the following error
Service "nginx-proxy" uses an undefined network "nginx-proxy"
I created a network using the following command
docker network create nginx-proxy
Here is the output of docker network ls
Why do I get that error? How can I fix it?
Anything you name in a per-service networks: block needs to be declared in a top-level networks: block.
version: '3.4'
services:
  nginx-proxy:
    networks:
      - nginx-proxy    # <-- matches below
    volumes: { ... }
networks:
  nginx-proxy:         # <-- matches above
    # may be empty, but this block is required
If you don't declare any networks: at all, Compose creates a network named default and attaches containers to it. For almost all uses this is what you need. So it may be simpler to just delete the networks: blocks entirely.
version: '3.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    # No networks:; just use the automatic [default]
(Similarly, you do not need to manually provide a container_name:, or expose: ports at the Compose level.)
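Since the network was already created manually with docker network create nginx-proxy, a third option (a sketch using standard Compose syntax) is to mark it external, so Compose attaches to the existing network instead of trying to create its own:
networks:
  nginx-proxy:
    external: true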
I am trying to dockerize my web application. I am running an Apache web server plus MariaDB and a Redis server, as you can see in my docker-compose file, combined with an nginx proxy to use local domains and SSL.
Everything works fine as long as I use the container names to connect to MySQL/Redis. I don't want to change every "localhost" in my code to the MySQL/Redis container names.
Is there a way to keep "localhost" as the host instead of the container name?
version: "3.5"
services:
nginx-proxy:
image: jwilder/nginx-proxy
container_name: portal-proxy
networks:
- portal
ports:
- "80:80"
- "443:443"
volumes:
- ./certs:/etc/nginx/certs
- /var/run/docker.sock:/tmp/docker.sock:ro
portal:
image: portal:latest
container_name: portal-webserver
networks:
- portal
volumes:
- ./portal:/var/www/html/portal
links:
- db
restart: always
environment:
VIRTUAL_HOST: portal.dev
db:
image: mariadb:latest
container_name: portal-db
networks:
- portal
ports:
- "3306:3306"
restart: always
environment:
MYSQL_DATABASE: portal
MYSQL_USER: www-data
MYSQL_PASSWORD: www-data
MYSQL_ROOT_PASSWORD: asdf1234
volumes:
- ./db:/docker-entrypoint-initdb.d
- ./db:/var/lib/mysql
redis:
image: redis:latest
container_name: portal-redis
environment:
- ALLOW_EMPTY_PASSWORD=yes
networks:
- portal
ports:
- "6379:6379"
networks:
portal:
name: portal
Use a common hostname (staging.docker.host) on all containers that resolves to the Docker host's IP (1.2.3.4 here).
Add this to your containers:
extra_hosts:
  - "staging.docker.host:1.2.3.4"
and use that name (staging.docker.host) in all your connection endpoints.
On your local machine, also add staging.docker.host to your /etc/hosts (or C:\Windows\System32\drivers\etc\hosts), pointing at localhost: 127.0.0.1 staging.docker.host.
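Applied to the Compose file above, a minimal sketch (1.2.3.4 is a placeholder; substitute your Docker host's actual IP):
portal:
  image: portal:latest
  extra_hosts:
    - "staging.docker.host:1.2.3.4"   # placeholder IP for the Docker host
On Docker 20.10 and later you can avoid hard-coding the IP by using the special value host-gateway instead, i.e. "staging.docker.host:host-gateway".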
I'm using docker-compose and I have a dev server with a lot of virtual hosts on Nginx+PHP-FPM. At the moment the nginx container handles multiple virtual hosts:
version: '3'
services:
  nginx-proxy:
    image: nginx:1.17.4-alpine
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
    labels:
      - 'com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true'
    restart: always
  dockergen:
    [...]
  letsencrypt:
    [...]
  nginx:
    image: nginx:1.17.4-alpine
    restart: always
    container_name: nginx
    environment:
      VIRTUAL_HOST: proj1.site.com, proj2.example.com
      LETSENCRYPT_HOST: proj1.site.com, proj2.example.com
      LETSENCRYPT_EMAIL: tech@example.com
    volumes:
      - './proj1:/proj1'
      - './proj2:/proj2'
      - './site.conf:/etc/nginx/conf.d/site.conf'
  php:
    build:
      context: ./php
    container_name: php
    volumes:
      - './proj1:/proj1'
      - './proj2:/proj2'
    restart: always
volumes:
  conf:
  vhost:
  html:
  certs:
networks:
  default:
    external:
      name: nginx-proxy
Now I'd like to separate the virtual host containers, because I need to inject different env files. Should I replicate the nginx container (with a different name, of course) and the site.conf for each project? Am I doing this the right way? Could you please point me in the right direction? A sketch of the split follows below. P.S. I've read that the extends keyword is deprecated in docker-compose v3, so I'd like to avoid it if possible.
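For reference, a sketch of what that per-project split could look like; the service names, env files, and per-project conf files here are hypothetical:
nginx-proj1:
  image: nginx:1.17.4-alpine
  restart: always
  env_file: ./proj1.env                # hypothetical per-project env file
  environment:
    VIRTUAL_HOST: proj1.site.com
    LETSENCRYPT_HOST: proj1.site.com
  volumes:
    - './proj1:/proj1'
    - './proj1.conf:/etc/nginx/conf.d/site.conf'
nginx-proj2:
  image: nginx:1.17.4-alpine
  restart: always
  env_file: ./proj2.env
  environment:
    VIRTUAL_HOST: proj2.example.com
    LETSENCRYPT_HOST: proj2.example.com
  volumes:
    - './proj2:/proj2'
    - './proj2.conf:/etc/nginx/conf.d/site.conf'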
I have a web app running outside of a container (localhost:8090).
How can I access it from within a container in a docker-compose network?
I tried to follow this answer, which works for plain docker.
version: '3.6'
services:
  postgres:
    image: postgres
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - host
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.6
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_AUTH_HOOK: "http://localhost:8090/verify"
volumes:
  db_data:
Add network_mode: "host" to your graphql-engine service and remove the port mapping:
graphql-engine:
  image: hasura/graphql-engine:v1.0.0-beta.6
  depends_on:
    - "postgres"
  restart: always
  network_mode: "host"
  environment:
    HASURA_GRAPHQL_AUTH_HOOK: "http://localhost:8090/verify"
graphql-engine will then listen on host port 8080 and will be able to connect to localhost:8090.
To make sure it worked, verify that the /etc/hosts file from the Docker host is visible inside the graphql-engine container.
Docs
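As an alternative to host networking, the container can keep its port mapping and reach the host through host.docker.internal: that name is built in on Docker Desktop (macOS/Windows), and on Linux with Docker 20.10+ it can be mapped via host-gateway. A sketch:
graphql-engine:
  image: hasura/graphql-engine:v1.0.0-beta.6
  ports:
    - "8080:8080"
  extra_hosts:
    - "host.docker.internal:host-gateway"   # needed on Linux only
  environment:
    HASURA_GRAPHQL_AUTH_HOOK: "http://host.docker.internal:8090/verify"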
I am trying to use Docker to set up Apache, PHP, MySQL, and Adminer using this docker-compose.yml.
Apache, PHP, and MySQL run fine; I have tested them with PHP code. But Adminer can't log in.
version: "3.2"
services:
php:
image: php:latest
build: './php/'
networks:
- backend
volumes:
- ./public_html/:/var/www/html/
apache:
image: httpd:latest
build: './apache/'
depends_on:
- php
- mysql
networks:
- frontend
- backend
ports:
- "8000:80"
volumes:
- ./public_html/:/var/www/html/
mysql:
image: mysql:latest
networks:
- backend
environment:
- MYSQL_ROOT_PASSWORD=admin
adminer:
image: adminer
restart: always
links:
- mysql
ports:
- "8080:8080"
networks:
frontend:
backend:
You are already using port 8080 on the host, so you need to either proxy pass through Apache and not publish Adminer's port, or map Adminer to a different host port:
adminer:
  image: adminer
  ports:
    - 8081:8080
Your database container is named mysql, which differs from the default server (db) that Adminer points at. So you need to add an environment variable to your adminer container, like below:
adminer:
  image: adminer
  restart: always
  ports:
    - "8080:8080"
  environment:
    - ADMINER_DEFAULT_SERVER=mysql
Also, links are deprecated, so remove that section. For any other issue, please read the image's Docker Hub description.
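Putting the two answers together, a sketch of the adminer service with a free host port, the default server set, and the backend network in place of links:
adminer:
  image: adminer
  restart: always
  networks:
    - backend                  # same network as the mysql service
  ports:
    - "8081:8080"              # avoids the host-port clash on 8080
  environment:
    - ADMINER_DEFAULT_SERVER=mysql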