How to link MySQL RDS in docker-compose.yml file? - docker

Instead of the MySQL container DB link below, I want to link to an AWS MySQL RDS instance in my docker-compose.yml file. Is it possible?
mysql_db:
image: mysql:5.6
container_name: shishir_db
environment:
MYSQL_ROOT_PASSWORD: "xxxxxxxx"
#MYSQL_USER: "shishir"
#MYSQL_DATABASE: "shishir1"
MYSQL_PASSWORD: "xxxxxxxx"
ports:
- "3306:3306"

There are a couple of ways that you can link to an AWS RDS MySQL instance from your docker-compose.yml.
The first, and perhaps simplest, way is to set environment variables on the containers that need access to the RDS MySQL instance. For example, you could update your OperationEngine service definition to look something like:
OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST_IP: ${DOCKER_HOST_IP}
    JAVA_OPTS: ${JAVA_OPTS}
    MYSQL_HOST: "your-mysql-cname.rds.amazonaws.com"
    MYSQL_USER: "username"
    MYSQL_PASSWORD: "password"
  volumes:
    - ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/
You can then update the configuration in that service to read the database connection details from the environment, e.g. ${MYSQL_HOST}.
The obvious downside to this approach is that you have connection details stored as plain text in your docker-compose.yml, which is not great, but may be acceptable depending on your requirements.
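One way to soften this, if it helps, is to lean on Compose's variable interpolation: values written as ${...} in docker-compose.yml are read from a .env file sitting next to it, so the credentials can live in a file you keep out of version control. A minimal sketch (the variable names are just illustrative):
# .env (not committed)
MYSQL_HOST=your-mysql-cname.rds.amazonaws.com
MYSQL_USER=username
MYSQL_PASSWORD=password
and in docker-compose.yml:
  environment:
    MYSQL_HOST: ${MYSQL_HOST}
    MYSQL_USER: ${MYSQL_USER}
    MYSQL_PASSWORD: ${MYSQL_PASSWORD}
The values still end up in the container's environment, but at least they are no longer hard-coded in the compose file itself.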
The second approach (and the one I tend to favour) is to bind mount the database configuration into the running container.
Most applications support reading database connection details from a properties file. As an example, let's say that on start-up your application reads from /config/database.properties and requires the following properties to connect to the database:
config.db.host=your-mysql-cname.rds.amazonaws.com
config.db.user=foo
config.db.password=bar
I would set up my environment so that, at runtime, I bind mount a properties file that provides all of the required values into the container:
OperationEngine:
volumes:
- /secure/config/database.properties:/config/database.properties
The /secure/config directory is part of the filesystem on your Docker host. How that directory gets created and populated is up to you. Typically I approach this by having my environment setup scripts create the directory and then clone a private Git repository into it which contains the correct configuration for that environment. Naturally, only those with the required permission levels can view the Git repositories that contain sensitive configuration details, e.g. for the production systems.
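Putting the two snippets together, the service definition might end up looking something like this (just a sketch; it assumes the application reads /config/database.properties as described above):
OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST_IP: ${DOCKER_HOST_IP}
    JAVA_OPTS: ${JAVA_OPTS}
  volumes:
    - ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/
    # database connection details stay on the Docker host, outside the compose file
    - /secure/config/database.properties:/config/database.properties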
Hope that helps.

No, I have not created networking in my yml file. Below is the entire content of my yml file, so what changes would be required?
mysql_db:
image: mysql:5.6
container_name: shishir_db
environment:
MYSQL_ROOT_PASSWORD: "xxxxxxxx"
#MYSQL_USER: "shishir"
#MYSQL_DATABASE: "shishir1"
MYSQL_PASSWORD: "xxxxxxxx"
ports:
- "3306:3306"
zookeeper:
image: wurstmeister/zookeeper
container_name: zookeeper
ports:
- "2181:2181"
kafka:
image: shishir/kafkaengine:0.10
container_name: kafka
links:
- zookeeper
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP}
KAFKA_ZOOKEEPER_CONNECT: ${DOCKER_HOST_IP}:2181
KAFKA_ADVERTISED_PORT: 9092
KAFKA_CREATE_TOPICS: ${KAFKA_TOPICS}
KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS: 12000
KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 12000
flywaydb:
image: shishir/flywaydb
container_name: flywaydb
links:
- mysql_db
OperationEngine:
image: shishir/operationengine:${RELEASE_OTA_VERSION}
container_name: operation_engine
links:
- mysql_db
ports:
- "8080:8080"
environment:
DOCKER_HOST_IP: ${DOCKER_HOST_IP}
JAVA_OPTS: ${JAVA_OPTS}
volumes:
- ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/
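Applying the first approach above to this file would roughly mean removing the mysql_db service (and the links to it) and passing the RDS connection details to the application as environment variables instead, something like the sketch below (untested; the endpoint and credentials are placeholders):
OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST_IP: ${DOCKER_HOST_IP}
    JAVA_OPTS: ${JAVA_OPTS}
    MYSQL_HOST: "your-mysql-cname.rds.amazonaws.com"
    MYSQL_USER: "username"
    MYSQL_PASSWORD: "password"
  volumes:
    - ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/
The flywaydb service would similarly need its JDBC URL pointed at the RDS endpoint rather than at the mysql_db link; how exactly depends on how that image is configured. No extra container networking is needed for RDS itself, since it is reached by hostname over the network rather than through a container link.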

Related

Improving the adaptability of my docker-compose.yaml

I am using docker-compose for my open-source web application. In the process of publishing my project on GitHub, I wanted to make my docker-compose.yaml file easier to understand and adapt. I'm still a beginner with Docker, but the file works as intended. I just want to improve the readability and changeability of the volumes used by the containers. The values /a/large/directory/or/disk:/var/lib/postgresql/data and /another/large/disk/:/something will most likely need to be adapted to the system the user is running my application on. Can I introduce variables for these? How can I make it more obvious that these values are to be changed by the person running my application?
My current docker-compose.yaml file
version: '3'
services:
postgres:
image: postgres:latest
restart: always
expose:
- 5432
ports:
- 5432:5432
environment:
POSTGRES_USER: 'postgres'
POSTGRES_PASSWORD: 'password'
POSTGRES_DB: 'sample'
volumes:
- /a/large/directory/or/disk:/var/lib/postgresql/data
networks:
- mynetwork
mysql:
image: mysql:5.7
restart: always
environment:
MYSQL_DATABASE: 'db'
MYSQL_USER: 'user'
MYSQL_PASSWORD: 'password'
MYSQL_ROOT_PASSWORD: 'password'
expose:
- 3306
ports:
- 3306:3306
volumes:
- ~/data/mysql:/var/lib/mysql
networks:
- mynetwork
depends_on:
- postgres
core:
restart: always
build: core/
environment:
SPRING_APPLICATION_JSON: '{
"database.postgres.url": "postgres:5432/sample",
"database.postgres.user": "postgres",
"database.postgres.password": "password",
"database.mysql.host": "mysql",
"database.mysql.user": "root",
"database.mysql.password": "password"
}'
volumes:
- ~/data/core:/var
- /another/large/disk/:/something
networks:
- mynetwork
depends_on:
- mysql
ports:
- 8080:8080
web:
restart: always
build: web/
networks:
- mynetwork
depends_on:
- core
ports:
- 3000:3000
networks:
mynetwork:
driver: bridge
volumes:
myvolume:
(I'd also appreciate any other suggestions for improvements to my file!)
Docker Compose supports variable interpolation. But then you need to document those values, and people might just run docker compose up and expect it to work without any extra setup.
Compose typically isn't used for production deployment, so you wouldn't use a real volume. That being said, you can simply use directories relative to the compose file itself (e.g. ./data/app:/mount) rather than a home folder or an absolute path, or use a Docker-managed volume.
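For example, Compose's ${VARIABLE:-default} substitution syntax lets you combine the two ideas: document an override variable, but fall back to a directory relative to the compose file so that docker compose up still works with no extra setup. A rough sketch (the POSTGRES_DATA_DIR name is just illustrative):
  postgres:
    image: postgres:latest
    volumes:
      # override by setting POSTGRES_DATA_DIR in a .env file or the shell;
      # defaults to a directory next to docker-compose.yaml
      - ${POSTGRES_DATA_DIR:-./data/postgres}:/var/lib/postgresql/data
A commented .env.example checked into the repository is a common way to make it obvious which values users are expected to change.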

Installing Magento 2 using Docker: the server already has a docker-compose.yml. Should I write a separate one for Magento 2 in the Magento folder?

I have a docker-compose.yml in the VPS server root:
version: '3'
services:
mysql:
image: mariadb:10.3.17
command: --max_allowed_packet=256M --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
volumes:
- "./data/db:/var/lib/mysql:delegated"
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
restart: always
litespeed:
image: litespeedtech/litespeed:${LSWS_VERSION}-${PHP_VERSION}
env_file:
- .env
volumes:
- ./lsws/conf:/usr/local/lsws/conf
- ./lsws/admin/conf:/usr/local/lsws/admin/conf
- ./bin/container:/usr/local/bin
- ./sites:/var/www/vhosts/
- ./acme:/root/.acme.sh/
- ./logs:/usr/local/lsws/logs/
ports:
- 80:80
- 443:443
- 443:443/udp
- 7080:7080
restart: always
environment:
TZ: ${TimeZone}
phpmyadmin:
image: bitnami/phpmyadmin:5.0.2-debian-10-r72
ports:
- 8080:80
- 8443:443
environment:
DATABASE_HOST: mysql
restart: always
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
environment:
- discovery.type=single-node
ports:
- 9200:9200
volumes:
- esdata:/usr/share/elasticsearch/data
restart: always
volumes:
esdata:
It has the server configuration in the above code. Should I write my configuration related to Magento 2 in the same file? It is shown below:
version: '3'
services:
web:
image: webdevops/php-apache-dev:ubuntu-16.04
container_name: web
restart: always
user: application
environment:
- WEB_ALIAS_DOMAIN=local.domain.com
- WEB_DOCUMENT_ROOT=/app/pub
- PHP_DATE_TIMEZONE=EST
- PHP_DISPLAY_ERRORS=1
- PHP_MEMORY_LIMIT=2048M
- PHP_MAX_EXECUTION_TIME=300
- PHP_POST_MAX_SIZE=500M
- PHP_UPLOAD_MAX_FILESIZE=1024M
volumes:
- /path/to/magento:/app:cached
ports:
- "80:80"
- "443:443"
- "32823:22"
links:
- mysql
mysql:
image: mariadb:10
container_name: mysql
restart: always
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=magento
volumes:
- db-data:/var/lib/mysql
phpmyadmin:
container_name: phpmyadmin
restart: always
image: phpmyadmin/phpmyadmin:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- PMA_USER=root
- PMA_PASSWORD=root
ports:
- "8080:80"
links:
- mysql:db
depends_on:
- mysql
volumes:
db-data:
external: false
If not, then what should the scenario be?
1- Should I create a new docker-compose-magento.yml at the root or inside the magento folder?
2- If I write a docker-compose.yml inside the magento folder, then how can I connect it with my server root docker folder so that I can use Elasticsearch as well?
First, you need to know what application is running from the existing docker-compose file. You can check that in the existing virtual host configuration files, which live in the "sites" directory that is mapped to the LiteSpeed web server's virtual host path "/var/www/vhosts" in the volume mapping.
If an application is definitely running from that docker-compose file, then you have to create a separate docker-compose file for running Magento. In that case a separate Docker network will be created for all the Magento 2 docker-compose services, and you will not be able to reach a service (Elasticsearch) on another network (in the separate docker-compose). You would have to add ES to the Magento 2 docker-compose as well.
If nothing is running from the existing docker-compose, then you could merge both docker-compose files as per your requirements and understanding, or simply use only your new Magento 2 docker-compose file.
So the main point here is the use of two different networks: Docker containers can only talk to other containers on the same network.
Also, LiteSpeed is a web server that uses the same port numbers as Apache (webdevops/php-apache-dev:ubuntu-16.04), so there will be a port conflict if you create a new docker-compose and try to run both simultaneously. You need to manage that as well, by using different host ports (for example as sketched below). If this is a production server, though, that is not really workable, because people are not going to access web URLs using non-default port numbers.
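For development or testing, remapping the host ports in the Magento compose file might look something like this (a sketch only; pick any ports that are free on the host, here avoiding 8080/8443, which the root compose already uses for phpmyadmin):
  web:
    image: webdevops/php-apache-dev:ubuntu-16.04
    ports:
      - "8082:80"    # host port 8082 instead of 80
      - "8444:443"   # host port 8444 instead of 443
      - "32823:22"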
For production, the solution to this is Kubernetes, where you can run multiple applications behind the same public ports with no conflict, since Kubernetes routes incoming traffic to the right application (for example through an Ingress) rather than binding each application directly to the host's ports.
See this article for Kubernetes setup https://technicallysound.in/how-to-setup-a-static-site-on-kubernetes/
See this article for Magento setup on Docker https://technicallysound.in/how-to-setup-magento-2-on-docker-for-development/

Docker - Adding new Wordpress container to existing SQL database container

I'm trying to understand or find information on how I would connect a new Wordpress container to an existing MariaDB container. I'm missing something. I can add a Wordpress instance while also creating the MariaDB container. See below.
services:
wordpress:
image: wordpress
restart: always
ports:
- 8282:80
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: exampleuser
WORDPRESS_DB_PASSWORD: examplepass
WORDPRESS_DB_NAME: exampledb
volumes:
- ./:/var/www/html
links:
- db:db
db:
image: mariadb:latest
restart: always
container_name: mariadb
environment:
MYSQL_DATABASE: exampledb
MYSQL_USER: exampleuser
MYSQL_PASSWORD: examplepass
MYSQL_ROOT_PASSWORD: password
volumes:
- db:/var/lib/mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin
restart: always
container_name: phpmyadmin
ports:
- "8081:80"
environment:
PMA_HOST: mariadb
volumes:
wordpress:
db:
  phpmyadmin:
After that has spun up and is good to go, I then attempt another docker-compose.yml (see below) and I cannot get the Wordpress instance to connect to the SQL instance.
version: '3.7'
services:
wordpress:
image: wordpress
restart: always
container_name: wordup
ports:
- 8283:80
environment:
WORDPRESS_DB_HOST: 172.20.0.3
WORDPRESS_DB_USER: username
WORDPRESS_DB_PASSWORD: password
WORDPRESS_DB_NAME: wp2
volumes:
- ./:/var/www/html
volumes:
wp2:
How would I point the new WP instance to the database that I created on the MariaDB container? Is it possible to point new Docker Compose stacks to an already-created DB without creating a new one? I know that it's not a good idea to share DBs across different applications, but I have a need to pull data from one Wordpress site into another.
Thanks!
You can use Docker networks. You need to connect the two docker-compose files to the same network, and inside this network containers can reference each other by container name.
There is more information about docker-compose networking in the documentation: https://docs.docker.com/compose/networking/#specify-custom-networks
Take a look at an example with two Nginx proxies that are started from different docker-compose files. The first proxy redirects to the second one, which then redirects us to google.com.
First proxy docker-compose.yml
version: "3.9"
services:
first:
build: .
ports:
- "8081:80"
networks:
- "test-network"
networks:
test-network:
name: "test-network"
driver: "bridge"
First proxy nginx.conf
events {}
http {
server {
location / {
proxy_pass http://second:80; # in second docker-compose file we're setting container_name to "second", thus we can reference second container by this name
}
}
}
Second proxy docker-compose.yml:
version: "3.9"
services:
second:
container_name: "second" # note, that we don't need to expose ports because we don't need to make this service visible to a host. But it's not restricted to expose ports. You can do so if you need.
build: .
networks:
- "test-network"
networks:
test-network:
name: "test-network"
driver: "bridge"
Second proxy nginx.conf:
events {}
http {
server {
location / {
proxy_pass https://google.com;
}
}
}
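Applied to the Wordpress question above, the same pattern would mean attaching both stacks to one shared network and pointing the new Wordpress at the existing MariaDB container by name instead of by IP address. A rough sketch, assuming you create a network (e.g. docker network create wpnet), attach the first stack's db service to it, and keep container_name: mariadb:
version: '3.7'
services:
  wordpress:
    image: wordpress
    restart: always
    container_name: wordup
    ports:
      - 8283:80
    environment:
      WORDPRESS_DB_HOST: mariadb   # the container name resolves on the shared network
      WORDPRESS_DB_USER: username
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: wp2
    volumes:
      - ./:/var/www/html
    networks:
      - wpnet
networks:
  wpnet:
    external: true   # pre-existing network shared with the MariaDB stack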

Can't get access to DB via phpmyadmin - docker

I'm pretty new to docker and I guess I have made a proper beginner mistake here, but I really can't get my head around what's wrong...
I have successfully created a docker container with a running Wordpress installation. The link to the DB does work there. I can also access phpmyadmin, but I can't get in. The following errors appear:
Invalid hostname for server 1. Please review your configuration.
Connection for controluser as defined in your configuration failed.
This is my docker.yml
version: "2"
services:
my-wpdb:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: letmein
my-wp:
image: wordpress
volumes:
- ./:/var/www/html
ports:
- "8080:80"
links:
- my-wpdb:mysql
environment:
WORDPRESS_DB_PASSWORD: letmein
phpmyadmin:
image: corbinu/docker-phpmyadmin
links:
- my-wpdb:mysql
ports:
- 8181:80
environment:
MYSQL_USERNAME: letmein
MYSQL_ROOT_PASSWORD: letmein
I'm trying to log in with: root, letmein
Thanks! Any help appreciated!
Your phpmyadmin is probably trying to connect to mysql using a different hostname from what you expect (localhost, probably?).
In your specific case you need to set it to use my-wpdb; more specifically, you want to set $MYSQL_PORT_3306_TCP_ADDR to point to your database.
From the source code of that (deprecated) docker image it's not quite clear, but I'm guessing you need to specify that with:
phpmyadmin:
image: corbinu/docker-phpmyadmin
ports:
- 8181:80
environment:
MYSQL_USERNAME: letmein
MYSQL_ROOT_PASSWORD: letmein
MYSQL_PORT_3306_TCP_ADDR: my-wpdb
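If that doesn't work, another option (not from the original answer) is to switch to the phpmyadmin/phpmyadmin image, which takes the database host through the PMA_HOST environment variable; a minimal sketch:
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  ports:
    - 8181:80
  links:
    - my-wpdb
  environment:
    PMA_HOST: my-wpdb   # service name of the MariaDB container
You would then still log in with root / letmein.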

docker-compose drupal not accepting my database name and throwing error below

I have the following docker-compose.yml file:
version: '3'
services:
maria_service:
build: ./db_maria
restart: always
environment:
MYSQL_DATABASE: mariadb
MYSQL_USER: joel
MYSQL_PASSWORD: password
MYSQL_ROOT_PASSWORD: password
volumes:
- ./db:/var/lib/mysql
drupal_service:
build: ./website
restart: always
ports:
- 8080:80
volumes:
- /var/www/html/modules
- /var/www/html/profiles
- /var/www/html/themes
# this takes advantage of the feature in Docker that a new anonymous
# volume (which is what we're creating here) will be initialized with the
# existing content of the image at the same location
- /var/www/html/sites
depends_on:
- maria_service
Here's my working directory:
Here's the drupal dockerfile, where all I'm doing is pulling the drupal image:
Here's the mariadb dockerfile:
It automatically generates this "db" subfolder, seen in the pic below:
My issue is that every time I enter mariadb in the Drupal UI at localhost:8080, it throws the error below:
UPDATES:
Based on Tarun Lalwani's answer, my issue was that in the Drupal UI I would enter my username, password and DB name, but if you expand the Advanced Options in that Drupal screenshot, you'll see that the HOSTNAME was pointing to "localhost" when it should be pointing to the actual hostname of the MariaDB database server, which in DOCKER WORLD is the SERVICE NAME of the running container, i.e. "maria_service" as seen in the docker-compose.yml file (see screenshot). Hope I wasn't the only newbie that bumped into that and that this helps others. Thanks Tarun Lalwani!!
You need to set the hostname for the DB in Drupal as well. This DB host will be maria_service, as per the service name in your docker-compose.yml file. This needs to be done by expanding the Advanced options.
Using Environment Variables
You could also try setting the environment variables for these settings
version: '3'
services:
maria_service:
build: ./db_maria
restart: always
environment:
MYSQL_DATABASE: mariadb
MYSQL_USER: joel
MYSQL_PASSWORD: password
MYSQL_ROOT_PASSWORD: password
volumes:
- ./db:/var/lib/mysql
drupal_service:
build: ./website
restart: always
ports:
- 8080:80
volumes:
- /var/www/html/modules
- /var/www/html/profiles
- /var/www/html/themes
# this takes advantage of the feature in Docker that a new anonymous
# volume (which is what we're creating here) will be initialized with the
# existing content of the image at the same location
- /var/www/html/sites
depends_on:
- maria_service
environment:
DB_HOST: maria_service
DB_USER: joel
DB_PASSWORD: password
DB_NAME: mariadb
DB_DRIVER: mysql
