Understanding Docker Compose, MongoDB and a Rails application

There is a Ruby on Rails application which uses MongoDB and PostgreSQL databases. When I run it locally everything works fine, but when I try to open it in a remote container, it throws the following error message:
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services:
redis, mongodb, db and rails
I start the remote containers with the following commands:
docker-compose build - build successful
docker-compose up -d - containers are up and running
When I connect to the rails container and try to run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here. The mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.

Your configuration should point to the services by name and not to a port on localhost. For example, if you were connecting to Redis as localhost:6380 or 127.0.0.1:6380, you now need to use redis:6380.
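In a Rails app, that typically means the Mongo host in the database configuration must be the service name mongodb rather than localhost. A minimal sketch of config/mongoid.yml, assuming the app uses Mongoid (the database name here is a placeholder):
development:
  clients:
    default:
      # "mongodb" is the compose service name, resolved by Docker's internal DNS
      database: proj_development
      hosts:
        - mongodb:27017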
If this still does not help, you can try to add links between the containers so that the names given to them as services can be resolved. The file will then look something like this:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
    links:
      - "mongodb:mongodb"
      - "db:db"
      - "rails:rails"
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
    links:
      - "redis:redis"
      - "db:db"
      - "rails:rails"
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "rails:rails"
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"
volumes:
  db-data:
  mongo-data:
networks:
  front-end:
The links will allow hostnames to be defined in the containers.
The links flag is legacy, and in new versions of docker-engine it is not required for user-defined networks. Links are also ignored in case of a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose, this is one thing to try when troubleshooting.
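You can also verify that the service names actually resolve from inside the rails container before running the rake task. A quick check, assuming a Debian-based image where getent is available:
$ docker-compose exec rails getent hosts mongodb
If the name resolves, the problem is more likely a host setting left at localhost in the application configuration than the network itself.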

Related

Trying to connect all my docker containers to a separate MariaDB container

So, I've set up several container apps that use MariaDB as their db backend, using docker-compose.
Containers are set up as needed, and therefore MariaDB gets installed each time on every container that uses the db.
For example, I have some containers (PHPMyAdmin, NGiNX-PM, etc.) that use MariaDB, and each of them has its own copy installed within the container. I also have a separate container (MariaDB) that I would rather share amongst the other containered apps, so I'd only have to maintain one version of the db.
I've searched for a solution, but no luck. Needless to say, I'm a noob at docker.
The only thing I can come up with is that all the apps would need to be installed through the same docker-compose.yaml file to use the same db? That would make for a very long file if I had many containers running, and I'd prefer to have a directory per app with all the app's contents available in that one location.
I'm sure there is a way, I just haven't been able to figure it out.
This is the setup I've tried, but I am unable to get it to work:
(/docker/apps/mariadb/mariadb.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # MariaDB (docker-compose -f mariadb.yml up -d)                                             #
  #############################################################################################
  mariadb:
    image: jsurf/rpi-mariadb:latest
    restart: unless-stopped
    environment:
      - TZ=${TIMEZONE}
      - MYSQL_DATABASE=dockerApps
      - MYSQL_USER=root
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    volumes:
      - $HOME/docker/apps/mariadb/db:/var/lib/mysql
    expose:
      - '3306'
    networks:
      - NET
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # NGiNX Proxy Manager (docker-compose -f nginxpm.yml up -d)                                  #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    depends_on:
      - mariadb
(/docker/apps/phpmyadmin/phpmyadmin.yml)
version: "3.9"
networks:
NET:
external: true
services:
#############################################################################################
# phpMyAdmin (docker-compose up -d -OR- docker-compose -f phpmyadmin.yml up -d) #
#############################################################################################
phpmyadmin:
image: phpmyadmin:latest
container_name: phpMyAdmin
restart: unless-stopped
environment:
PMA_HOST: mariadb
PMA_USER: root
PMA_PASSWORD: ${MYSQL_PASSWORD}
volumes:
# Must add ServerName directive to end of file "ServerName 127.0.0.1"
- $HOME/docker/apps/phpmyadmin/apache2.conf:/etc/apache2/apache2.conf
ports:
- '8004:80'
networks:
- NET
Any help in this matter is greatly appreciated.
Ok, so after some more reading and testing, I've found the answer to my issue. I was assuming that "depends_on" was supposed to connect the containers, somehow. Not true!
I found that "external_links" is the correct way of connecting them.
So, my final docker-compose file looks like this:
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # NGiNX Proxy Manager (docker-compose -f nginxpm.yml up -d)                                  #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    external_links:
      - mariadb
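One thing to keep in mind: because NET is declared with external: true in every file, Compose will not create it for you; it must exist before any of the stacks are brought up:
$ docker network create NET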

New to Docker: not seeing local changes made on site

I've been working on a site using Laravel 5.8 which runs in a Docker container. Usually I've been able to save my local changes and the site on my localhost reflects them, but now my changes aren't seen on the site.
I'm running docker-compose up -d and it starts with the Laravel driver, creating php and creating nginx, but my local changes just won't show.
Should I be running a different command?
docker-compose file:
version: '3'
networks:
  laravel:
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www
      - ./resources/docker/nginx:/etc/nginx/conf.d
    depends_on:
      - php
    networks:
      - laravel
  php:
    image: quay.io/testRepo/docker-php-iaccess-odbc:7.3-devel
    container_name: php
    volumes:
      - ./:/var/www
    environment:
      - PHP_OPACHE_ENABLE=0
    ports:
      - "9000:9000"
    networks:
      - laravel
docker-compose volume mounting requires either a full path or the version 3 bind configuration.
https://docs.docker.com/compose/compose-file/#volumes
In Linux/Unix OSes, $(pwd) can be used as a shortcut for the full path.
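For reference, a sketch of what the long-form bind mount (compose file format 3.2+) would look like for the php service above, using the same paths:
php:
  volumes:
    # long syntax: explicit bind of the project directory into the container
    - type: bind
      source: ./
      target: /var/www
In the short syntax, the same idea is - $PWD:/var/www, which expands to the absolute path of the project directory.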

Deploy existing Prestashop to server using Docker

I've created a PrestaShop store on a server. Is there any possible way to use Docker for my store and migrate it to another server using Docker? I know that I'll need docker-compose, but to be honest I don't know what to do with the files on the current server.
OK, so I dug into the problem, and the solution to my question is below. What I did was pull the original PrestaShop image and copy my files into it.
The next step was to use the mariadb image. I had a backup.sql file exported from the previous store's phpMyAdmin:
version: '2'
services:
  prestashop:
    image: prestashop
    ports:
      - 80:80
    links:
      - mariadb:mariadb
    depends_on:
      - mariadb
    volumes:
      - ./src:/var/www/html
      - ./src/modules:/var/www/html/modules
      - ./src/themes:/var/www/html/themes
      - ./src/override:/var/www/html/override
    environment:
      - PS_DEV_MODE=1
      - DB_SERVER=mariadb
      - DB_USER=root
      - DB_PASSWD=root
      - DB_NAME=prestashop
      - PS_INSTALL_AUTO=0
  mariadb:
    image: mariadb
    volumes:
      # the dump must be mounted as a file inside the init directory,
      # not over the directory itself
      - ./backup.sql:/docker-entrypoint-initdb.d/backup.sql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=prestashop
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mariadb
    ports:
      - 81:80
    environment:
      - PMA_HOST=mariadb
      - PMA_USER=root
      - PMA_PASSWORD=root
The biggest issue is the IP in docker-machine. Keep in mind that if you are using Docker Toolbox the IP is 192.168.99.100, but in Docker for Windows the containers are published on localhost (so just type localhost).
You can use this docker-compose.yml :
version: "3"
services:
prestashop:
image: prestashop/prestashop
networks:
mycustomnetwork:
ports:
- 82:80
links:
- mariadb:mariadb
depends_on:
- mariadb
volumes:
- ./src:/var/www/html
- ./src/modules:/var/www/html/modules
- ./src/themes:/var/www/html/themes
- ./src/override:/var/www/html/override
environment:
- PS_DEV_MODE=1
- DB_SERVER=mariadb
- DB_USER=root
- DB_PASSWD=mycustompassword
- DB_NAME=prestashop
- PS_INSTALL_AUTO=0
mariadb:
image: mariadb
networks:
mycustomnetwork:
volumes:
- presta_db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=mycustompassword
- MYSQL_DATABASE=prestashop
phpmyadmin:
image: phpmyadmin/phpmyadmin
networks:
mycustomnetwork:
links:
- mariadb:mariadb
ports:
- 1235:80
depends_on:
- mariadb
environment:
- PMA_HOST=mariadb
- PMA_USER=root
- PMA_PASSWORD=mycustompassword
volumes:
presta_db:
networks:
mycustomnetwork:
external: true
Replace mycustomnetwork and mycustompassword
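Since mycustomnetwork is declared as external: true, create it before starting the stack:
$ docker network create mycustomnetwork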
Then run docker-compose up
Web URL: localhost:82
phpMyAdmin URL: localhost:1235
You can follow this tutorial to set up PrestaShop in a Docker environment:
https://hub.docker.com/r/prestashop/prestashop/
You will need to add your current files to the PrestaShop container and most likely import your database into a MySQL container. docker-compose will be used to launch those containers together. Once this is done, you will be able to deploy the whole thing anywhere.
You should also include a bridge network in your compose file; some examples that might work are here: https://runnable.com/docker/docker-compose-networking
This way the db can be configured to be accessed only by PrestaShop on the local Docker network, without being exposed outside. The PrestaShop db server setting can also point to the service name of the running container, in case your IP changes. All you would leave exposed is port 80 on the app, as sketched below.
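A minimal sketch of that idea (service and network names are illustrative): the database publishes no ports, so it is reachable only by service name on the internal bridge network, while the shop exposes only port 80.
version: "3"
services:
  prestashop:
    image: prestashop/prestashop
    ports:
      - 80:80
    environment:
      - DB_SERVER=mariadb   # service name, not an IP
    networks:
      - internal
  mariadb:
    image: mariadb
    # no ports/expose: not reachable from outside the bridge network
    networks:
      - internal
networks:
  internal:
    driver: bridge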

Docker service restarts ignore startup order when using docker-compose depends_on

I'm trying to set up a SonarQube service in Docker for Windows and use MySQL as the database.
I am using the compose file below, with depends_on to control the startup order:
db -> sonarqube
But when the Docker service / Windows restarts, the containers start up in no particular sequence,
which causes an error when SonarQube tries to connect to MySQL before the MySQL service has started.
version: '3'
services:
  sonarqube:
    image: sonarqube:6.5
    container_name: sonarqube
    restart: always
    environment:
      - SONARQUBE_JDBC_URL=jdbc:mysql://db:3306/sonar?useSSL=true&useUnicode=true&characterEncoding=utf8
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    ports:
      - 9000:9000
      - 9002:9002
      - 9092:9092
    volumes:
      - ../../volumes/data/sonarqube/conf:/opt/sonarqube/conf
      - ../../volumes/data/sonarqube/data:/opt/sonarqube/data
      - ../../volumes/data/sonarqube/extensions:/opt/sonarqube/extensions
      - ../../volumes/data/sonarqube/lib/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    depends_on:
      - db
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - 3000:3000
    volumes:
      - ../../volumes/data/grafana-storage:/var/lib/grafana
    restart: always
    depends_on:
      - db
  # mysql service for sonarqube & grafana
  db:
    image: mysql:5.7
    container_name: sonar-mysql
    restart: always
    environment:
      - MYSQL_DATABASE=sonar
      - MYSQL_USER=sonar
      - MYSQL_PASSWORD=sonar
      - MYSQL_ROOT_PASSWORD=Password1
      - MAX_ALLOWED_PACKET=13421772800
    volumes:
      - ../../volumes/data/mysql:/var/lib/mysql
    ports:
      - 3306:3306
    command: mysqld --max_allowed_packet=80M --federated --event_scheduler=1
After reading some of the official docker-compose documentation,
the suggested best practice is to handle this in the application code, e.g. updating entrypoint scripts to wait for the other service to be online and only then start the application.
That's the right way, I think, but I still want to know if there is any way to control the container startup sequence when the Docker service restarts.
Thanks.
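For completeness, one way to approximate the ordering at the Compose level is a database healthcheck combined with the long form of depends_on. A sketch reusing the credentials from the file above; note that the condition form works in compose file format 2.1 but was dropped in 3.x (it was later reintroduced in the Compose specification):
# fragment: merge into the services: section of the file above
db:
  image: mysql:5.7
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-pPassword1"]
    interval: 10s
    timeout: 5s
    retries: 10
sonarqube:
  image: sonarqube:6.5
  depends_on:
    db:
      condition: service_healthy   # wait until the healthcheck passes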

How to run a Docker container in its own network

Today I switched from "Docker Toolbox" to "Docker for Mac", because Docker now finally has write access to my User directory (which didn't work with "Docker Toolbox") - yay!
But this change also means that all containers now run under my localhost and not under Docker's IP as before (e.g. 192.168.99.100).
Since my localhost listens on various ports by default (80, 443, ...) and I don't want to keep adding newly created ports that don't conflict with the standard ones to my local dev domains (e.g. example.dev:8443), I wonder how to run my containers as before.
I read about network configs and tried a lot of things (creating a new host network, exposing ports with an IP in front of them, ...), but didn't get it working.
What kind of config do I need to run my app container with the IP 192.168.99.100? This is my docker-compose.yml so far:
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - mysql
      - redis
      - memcached
    ports:
      - 80:80
      - 443:443
      - 22:22
      - 3000:3000
      - 3001:3001
    volumes:
      - ./app/:/app/
      - /tmp/debug/:/tmp/debug/
      - ./:/docker/
    volumes_from:
      - storage
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10
    ports:
      - 3306:3306
    volumes_from:
      - storage
    volumes:
      - ./data/mysql:/var/lib/mysql
      - /tmp/debug/:/tmp/debug/
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  redis:
    build: docker/redis/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  memcached:
    build: docker/memcached/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  storage:
    build: docker/storage/
    volumes:
      - /storage
You need to declare "networks:" for each of your services, e.g.:
version: '2'
services:
  app:
    image: xxxx:xxx
    ports:
      - "80:80"
    networks:
      - my-network
  mysql:
    image: xxxx:xxx
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Then, from your app's configuration, you can use "mysql" as the hostname of the database server.
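For example, a hypothetical app service could receive the database host through environment variables, pointing at the service name rather than an IP:
# fragment: illustrative variable names, adjust to your app
app:
  environment:
    - DB_HOST=mysql   # resolved to the mysql container on my-network
    - DB_PORT=3306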
You can define a network in your compose file, then add all the services to that network.
https://docs.docker.com/compose/networking/
But I would suggest you just use different ports now that you are running natively, i.e. 8080:80.
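If some containers must keep the standard ports without clashing with services already on localhost, the ports mapping also accepts an explicit host IP (HOST_IP:HOST_PORT:CONTAINER_PORT). A sketch, assuming that address is already configured as an alias on a host interface:
# fragment: bind published ports to a specific host address
app:
  ports:
    - "192.168.99.100:80:80"
    - "192.168.99.100:443:443"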
