I'm trying to understand how to connect a new WordPress container to an existing MariaDB container, but I'm missing something. I can add a WordPress instance when I create the MariaDB container at the same time. See below.
```
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8282:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - ./:/var/www/html
    links:
      - db:db
  db:
    image: mariadb:latest
    restart: always
    container_name: mariadb
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - db:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    container_name: phpmyadmin
    ports:
      - "8081:80"
    environment:
      PMA_HOST: mariadb
volumes:
  wordpress:
  db:
  phpmyadmin:
```
After that stack has spun up and is working, I attempt another docker-compose.yml (see below), but I cannot get the new WordPress instance to connect to the existing SQL instance.
```
version: '3.7'
services:
  wordpress:
    image: wordpress
    restart: always
    container_name: wordup
    ports:
      - 8283:80
    environment:
      WORDPRESS_DB_HOST: 172.20.0.3
      WORDPRESS_DB_USER: username
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: wp2
    volumes:
      - ./:/var/www/html
volumes:
  wp2:
```
How would I point the new WP instance at the database I already created on the MariaDB container? Is it possible to point new Docker Compose stacks at an already created DB without creating a new one? I know it's not a good idea to share DBs across different applications, but I need to pull data from one WordPress site into another.
Thanks!
You can use Docker networks. Connect the two docker-compose files to the same network, and within that network containers can reference each other by container name.
There is more about docker-compose networking in the documentation: https://docs.docker.com/compose/networking/#specify-custom-networks
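Applied to your WordPress case, a minimal sketch — the network name `firststack_default` is an assumption: Compose names a stack's default network after its project directory, so check `docker network ls` for the actual name:
```
# Second stack, joined to the first stack's network.
# ASSUMPTION: the first stack's project directory is "firststack", so
# Compose created a network named "firststack_default".
version: '3.7'
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8283:80
    environment:
      WORDPRESS_DB_HOST: mariadb    # container_name of the existing db service
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: wp2
    networks:
      - firststack_default
networks:
  firststack_default:
    external: true    # join the existing network instead of creating one
```
Note that the wp2 database itself still has to exist on the MariaDB side; the image only creates MYSQL_DATABASE on first start, so create wp2 (and grant the user access) via phpMyAdmin or the mysql CLI.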
Take a look at an example with two Nginx proxies started from different docker-compose files. The first proxy forwards to the second one, which then redirects us to google.com.
First proxy docker-compose.yml
version: "3.9"
services:
first:
build: .
ports:
- "8081:80"
networks:
- "test-network"
networks:
test-network:
name: "test-network"
driver: "bridge"
First proxy nginx.conf
```
events {}
http {
    server {
        location / {
            # the second docker-compose file sets container_name to "second",
            # so we can reference that container by this name
            proxy_pass http://second:80;
        }
    }
}
```
Second proxy docker-compose.yml:
version: "3.9"
services:
second:
container_name: "second" # note, that we don't need to expose ports because we don't need to make this service visible to a host. But it's not restricted to expose ports. You can do so if you need.
build: .
networks:
- "test-network"
networks:
test-network:
name: "test-network"
driver: "bridge"
Second proxy nginx.conf:
```
events {}
http {
    server {
        location / {
            proxy_pass https://google.com;
        }
    }
}
```
Related
I have a docker-compose.yml at the root of my VPS server:
```
version: '3'
services:
  mysql:
    image: mariadb:10.3.17
    command: --max_allowed_packet=256M --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - "./data/db:/var/lib/mysql:delegated"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    restart: always
  litespeed:
    image: litespeedtech/litespeed:${LSWS_VERSION}-${PHP_VERSION}
    env_file:
      - .env
    volumes:
      - ./lsws/conf:/usr/local/lsws/conf
      - ./lsws/admin/conf:/usr/local/lsws/admin/conf
      - ./bin/container:/usr/local/bin
      - ./sites:/var/www/vhosts/
      - ./acme:/root/.acme.sh/
      - ./logs:/usr/local/lsws/logs/
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
      - 7080:7080
    restart: always
    environment:
      TZ: ${TimeZone}
  phpmyadmin:
    image: bitnami/phpmyadmin:5.0.2-debian-10-r72
    ports:
      - 8080:80
      - 8443:443
    environment:
      DATABASE_HOST: mysql
    restart: always
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    volumes:
      - esdata:/usr/share/elasticsearch/data
    restart: always
volumes:
  esdata:
```
It has the server configuration in the code above. Should I write my Magento 2 configuration in the same file, as shown below?
```
version: '3'
services:
  web:
    image: webdevops/php-apache-dev:ubuntu-16.04
    container_name: web
    restart: always
    user: application
    environment:
      - WEB_ALIAS_DOMAIN=local.domain.com
      - WEB_DOCUMENT_ROOT=/app/pub
      - PHP_DATE_TIMEZONE=EST
      - PHP_DISPLAY_ERRORS=1
      - PHP_MEMORY_LIMIT=2048M
      - PHP_MAX_EXECUTION_TIME=300
      - PHP_POST_MAX_SIZE=500M
      - PHP_UPLOAD_MAX_FILESIZE=1024M
    volumes:
      - /path/to/magento:/app:cached
    ports:
      - "80:80"
      - "443:443"
      - "32823:22"
    links:
      - mysql
  mysql:
    image: mariadb:10
    container_name: mysql
    restart: always
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=magento
    volumes:
      - db-data:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    restart: always
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - PMA_USER=root
      - PMA_PASSWORD=root
    ports:
      - "8080:80"
    links:
      - mysql:db
    depends_on:
      - mysql
volumes:
  db-data:
    external: false
```
If not, what should the setup look like?
1. Should I create a new docker-compose-magento.yml at the root or inside the magento folder?
2. If I write a docker-compose.yml inside the magento folder, how can I connect it to my server's root docker folder so that I can use Elasticsearch as well?
First, you need to know which application is running via the existing docker-compose file. You can check that in the existing virtual host configuration files, which live in the "sites" directory mapped to the LiteSpeed web server's virtual host path (/var/www/vhosts) in the volume mapping.
If an application is definitely running from that docker-compose file, then you have to create a separate docker-compose file for Magento. In that case a separate Docker network is created for all the Magento 2 docker-compose services, and by default you cannot access a service (Elasticsearch) on another network (in a separate docker-compose), so you would have to add ES to the Magento 2 docker-compose as well.
If nothing is running on the existing docker-compose, then you can either merge both docker-compose files as your requirements dictate, or just deploy your new Magento 2 docker-compose file on its own.
So the main thing here is the use of two different networks: Docker containers can only talk to containers on the same network.
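That said, Compose can attach a service to a network created by another stack, as in the first answer above. A minimal sketch, assuming the root stack's default network is named root_default (verify with `docker network ls`):
```
# Magento 2 stack, additionally joined to the root stack's network so
# that "elasticsearch:9200" becomes reachable.
# ASSUMPTION: the root stack's network is really named "root_default".
version: '3'
services:
  web:
    image: webdevops/php-apache-dev:ubuntu-16.04
    networks:
      - default        # this stack's own network
      - root_default   # the existing stack's network
networks:
  root_default:
    external: true
```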
Also, LiteSpeed is a web server and publishes the same port numbers as the Apache image (webdevops/php-apache-dev:ubuntu-16.04), so there will be a port conflict if you create a new docker-compose file and try to run both simultaneously. You need to handle that by using different host ports. On a production server that is hardly an option, since people are not going to access web URLs on non-default port numbers.
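For a development box, remapping the host ports is enough. A sketch, where 8081 and 8444 are arbitrary free host ports:
```
services:
  web:
    image: webdevops/php-apache-dev:ubuntu-16.04
    ports:
      - "8081:80"     # host 8081 instead of 80
      - "8444:443"    # host 8444 instead of 443
      - "32823:22"
```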
One solution for that is Kubernetes, where you can run multiple applications all sharing the same public ports without conflict, because Kubernetes carves a single physical server into multiple isolated virtual units.
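In Kubernetes the usual mechanism for sharing port 80 across applications is an Ingress. A minimal sketch; every name below is a hypothetical placeholder:
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps                        # hypothetical
spec:
  rules:
    - host: shop.example.com        # routed to the Magento Service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: magento       # hypothetical Service name
                port:
                  number: 80
    - host: blog.example.com        # another app, same public port 80
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress     # hypothetical Service name
                port:
                  number: 80
```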
See this article for a Kubernetes setup: https://technicallysound.in/how-to-setup-a-static-site-on-kubernetes/
See this article for a Magento setup on Docker: https://technicallysound.in/how-to-setup-magento-2-on-docker-for-development/
I want to know how to configure the backend endpoint correctly.
I have a Docker setup that runs several containers:
Backend
Frontend
Nginx for backend
DB
From my understanding, since all the containers are running on the same machine, I should be able to reach the backend with "host.docker.internal".
Indeed, I can successfully do so on the local machine where Docker is running.
However, the frontend is not able to resolve the endpoint "host.docker.internal" when I make a request from another machine. Please note that I am able to reach the frontend from another machine; it's just a matter of endpoint configuration.
Note that "192.168.1.11" is the IP of the machine where Docker is running, and "8888" is the port where the frontend is served.
Obviously I can successfully make requests from other machines too if I put the static IP address instead of "host.docker.internal". But the question is: since the React frontend application is served from Docker itself, shouldn't it be able to resolve the "host.docker.internal" endpoint?
Just for reference, here is my docker compose:
version: "3.8"
services:
db: #mysqldb
image: mysql:5.7
container_name: ${DB_SERVICE_NAME}
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
ports:
- $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
volumes:
- ./docker-compose/mysql:/docker-entrypoint-initdb.d
networks:
- backend
mrmfrontend:
build:
context: ./mrmfrontend
args:
- REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
- REACT_APP_BACKEND_ENDPOINT=$REACT_APP_BACKEND_ENDPOINT
- REACT_APP_FRONTEND_ENDPOINT=$REACT_APP_FRONTEND_ENDPOINT
- REACT_APP_FRONTEND_ENDPOINT_ERROR=$REACT_APP_FRONTEND_ENDPOINT_ERROR
- REACT_APP_CUSTOMER=$REACT_APP_CUSTOMER
- REACT_APP_NAME=$REACT_APP_NAME
- REACT_APP_OWNER=""
ports:
- $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
networks:
- frontend
volumes:
- ./docker-compose/nginx/frontend:/etc/nginx/conf.d/
app:
build:
args:
user: admin
uid: 1000
context: ./MRMBackend
dockerfile: Dockerfile
image: backend
container_name: backend-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./MRMBackend:/var/www
networks:
- backend
nginx:
image: nginx:alpine
container_name: backend-nginx
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./MRMBackend:/var/www
- ./docker-compose/nginx/backend:/etc/nginx/conf.d/
networks:
- backend
- frontend
volumes:
db:
networks:
frontend:
driver: bridge
backend:
driver: bridge
The endpoint is configured this way in the .env:
```
REACT_APP_BACKEND_ENDPOINT="http://host.docker.internal:8000"
```
I am trying to dockerize my web application. I am running an Apache web server + MariaDB and a Redis server, as you can see in my docker-compose file, combined with an nginx proxy to get local domains and SSL.
Everything works fine as long as I use the container names to connect to MySQL / Redis. I don't want to change every "localhost" in my code to the MySQL / Redis container names.
Is there a way to keep "localhost" as the host instead of the container name?
version: "3.5"
services:
nginx-proxy:
image: jwilder/nginx-proxy
container_name: portal-proxy
networks:
- portal
ports:
- "80:80"
- "443:443"
volumes:
- ./certs:/etc/nginx/certs
- /var/run/docker.sock:/tmp/docker.sock:ro
portal:
image: portal:latest
container_name: portal-webserver
networks:
- portal
volumes:
- ./portal:/var/www/html/portal
links:
- db
restart: always
environment:
VIRTUAL_HOST: portal.dev
db:
image: mariadb:latest
container_name: portal-db
networks:
- portal
ports:
- "3306:3306"
restart: always
environment:
MYSQL_DATABASE: portal
MYSQL_USER: www-data
MYSQL_PASSWORD: www-data
MYSQL_ROOT_PASSWORD: asdf1234
volumes:
- ./db:/docker-entrypoint-initdb.d
- ./db:/var/lib/mysql
redis:
image: redis:latest
container_name: portal-redis
environment:
- ALLOW_EMPTY_PASSWORD=yes
networks:
- portal
ports:
- "6379:6379"
networks:
portal:
name: portal
Use a common hostname (staging.docker.host) on all containers that resolves to the docker host's IP, e.g. 1.2.3.4.
So add this to the containers:
```
extra_hosts:
  - "staging.docker.host:1.2.3.4"
```
and use that name (staging.docker.host) in all your connection endpoints.
On your local machine, also map staging.docker.host to 127.0.0.1 in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts):
```
127.0.0.1 staging.docker.host
```
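In compose terms, a minimal sketch, where 1.2.3.4 stands in for your host's actual LAN IP:
```
services:
  portal:
    image: portal:latest
    extra_hosts:
      - "staging.docker.host:1.2.3.4"   # same entry on every service
  db:
    image: mariadb:latest
    extra_hosts:
      - "staging.docker.host:1.2.3.4"
```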
I have a WordPress app in a container that needs to access a localhost web app set up with a self-signed cert. I've tried using extra_hosts (both with 127.0.0.1 and 10.0.2.2) in my docker compose file without much success. Any help with this is greatly appreciated.
```
version: '3.3'
services:
  wordpress:
    image: mywordpress:latest
    container_name: mywordpress
    networks:
      - db-net
    links:
      - mysql
    ports:
      - "8088:80"
    restart: always
    volumes:
      - wp_data:/var/www/html
    depends_on:
      - mysql
    extra_hosts:
      localhost: 10.0.2.2
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
  mysql:
    image: mysql:latest
    container_name: mysql
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    networks:
      - db-net
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
networks:
  db-net:
    driver: bridge
volumes:
  db_data:
  wp_data:
```
EDIT
The WordPress application in the container needs to access an API hosted locally on the actual host machine (this is the app with the self-signed cert).
"localhost" and 127.0.0.1 used inside the container both refer to the container itself. To refer to the host machine, use another of its IP addresses (or names).
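One common way to do that, sketched under the assumption that you run Docker 20.10 or newer (which understands the special value host-gateway):
```
services:
  wordpress:
    image: mywordpress:latest
    extra_hosts:
      # Docker replaces "host-gateway" with the host's gateway IP on
      # the container's network (Docker 20.10+)
      - "host.docker.internal:host-gateway"
```
The container can then call the host API at https://host.docker.internal:<port>; the self-signed certificate still has to be trusted (or verification disabled) inside the container.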
What's wrong in my code? Thanks in advance!
I'm trying to set up a virtual host for my docker container.
On localhost:8000 it works perfectly, but when I try to access it through http://borgesmelo.local/ I get ERR_NAME_NOT_RESOLVED. What could be missing?
This is my docker-compose.yml:
```
version: '3.3'
services:
  borgesmelo_db:
    image: mariadb:latest
    container_name: borgesmelo_db
    restart: always
    volumes:
      - ./mariadb/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: My#159#Sql
      MYSQL_PASSWORD: My#159#Sql
  borgesmelo_ws:
    image: richarvey/nginx-php-fpm:latest
    container_name: borgesmelo_ws
    restart: always
    volumes:
      - ./public/:/var/www/html
    ports:
      - "8000:80"
  borgesmelo_wp:
    image: wordpress:latest
    container_name: borgesmelo_wp
    volumes:
      - ./public/:/var/www/html
    restart: always
    environment:
      VIRTUAL_HOST: borgesmelo.local
      WORDPRESS_DB_HOST: borgesmelo_db:3306
      WORDPRESS_DB_PASSWORD: My#159#Sql
    depends_on:
      - borgesmelo_db
      - borgesmelo_ws
  borgesmelo_phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: borgesmelo_phpmyadmin
    links:
      - borgesmelo_db
    ports:
      - "8001:80"
    environment:
      - PMA_ARBITRARY=1
  borgesmelo_vh:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "8002:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  default:
    external:
      name: nginx-proxy
```
This is my hosts file (/etc/hosts) [macOS]:
```
#DOCKER
127.0.0.1:8000 borgesmelo.local
```
The hosts file doesn't support ports, as it is for name lookup only. So you would have to set your hosts file to:
```
127.0.0.1 borgesmelo.local
```
Then access your application at http://borgesmelo.local:8000.
If you are listening on port 8000 because something else is already on port 80, consider using nginx as a reverse proxy: you can route to different applications based on server_name, so multiple applications are reachable through port 80. If you're dealing with docker containers, also look into Traefik as a reverse proxy.
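Since your compose file already runs jwilder/nginx-proxy, a minimal sketch of that approach (assuming host port 80 is free) is to publish the proxy on the default HTTP port and let VIRTUAL_HOST do the routing:
```
services:
  borgesmelo_vh:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"   # default HTTP port instead of 8002
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
```
With the corrected /etc/hosts entry, http://borgesmelo.local/ then resolves to 127.0.0.1 and the proxy forwards the request to the container whose VIRTUAL_HOST matches.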